
Volcano


Volcano is a simple ops management toolkit for managing server clusters: upload local files or directories to remote servers with one command, execute commands remotely, and more.

======INSTALL========
./install.sh

======Configure======
conf/volcano.conf
# volcano configure file
ip_address1 password
ip_address2 password

========Run=========
python volcano.py
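For illustration, the `conf/volcano.conf` host-list format shown above can be parsed with a few lines of Python. This is a hypothetical sketch (the function name and sample values are ours, not part of Volcano):

```python
# Hypothetical sketch: parse volcano.conf lines into (ip, password) pairs.
def parse_volcano_conf(text):
    hosts = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip comments and blank lines
            continue
        ip, password = line.split(None, 1)    # first field: IP, rest: password
        hosts.append((ip, password))
    return hosts

conf = """# volcano configure file
192.0.2.10 secret1
192.0.2.11 secret2
"""
print(parse_volcano_conf(conf))
```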
 
from https://github.com/firefoxbug/Volcano

OpenCDN_Node

### Node Installation Manual
### Setup
This document targets CentOS 6.x x86_64. Convention: all commands in this document are prefixed with #.
#wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

#wget http://58.215.133.101:801/rpm/inotify-tools-3.14-1.el6.x86_64.rpm

#wget http://58.215.133.101:801/rpm/nginx-1.2.0-5.el6.x86_64.rpm

#wget http://58.215.133.101:801/rpm/opencdn-node-1.1-2.el6-noarch.rpm

#rpm -ivh epel-release-6-8.noarch.rpm

#rpm -ivh inotify-tools-3.14-1.el6.x86_64.rpm

#rpm -ihv nginx-1.2.0-5.el6.x86_64.rpm

**yum -y localinstall opencdn-node-1.1-2.el6-noarch.rpm** — installing this way is recommended; it automatically resolves dependencies.

Check the SELinux status:

#sestatus

If the output is not `SELinux status: disabled`, you can temporarily disable it as follows:

#setenforce 0

To disable it permanently:

#vim /etc/sysconfig/selinux   (set SELINUX=disabled, then reboot the system)
#### Modify the configuration
#sed -i 's#localhost#8.8.8.8#g' /usr/local/opencdn/conf/opencdn.conf   (set this to your console host's IP; 8.8.8.8 is used here as an example)

#sed -i 's#0.0.0.0#119.147.0.239#g' /etc/syslog-ng/syslog-ng.conf   (set the syslog-ng log center, usually the console host)
#### Restart the web server (httpd)
/etc/init.d/httpd restart
#### Start OpenCDN
#/etc/init.d/opencdn restart
Check that OpenCDN is running and inspect the logs for anomalies.
#cd /var/log/opencdn/   (view the related logs)
 
from https://github.com/firefoxbug/OpenCDN_Node
------------------------------------------------
 
CDN software 
 
INTRODUCTION:
1.A fully free CDN deployment toolkit, including a CDN node management platform and an acceleration deployment package. OpenCDN gives operators convenient tools to build their own CDN acceleration services in real time.
2.OpenCDN is based on the nginx proxy_cache module; no operator profiles are required — a few mouse clicks set up a highly available CDN acceleration system.
3.The OpenCDN management center monitors each node's operating status, system load and network traffic in real time, and centrally manages each node's cache strategy, synchronizing it to all nodes.

BEFORE INSTALL:
1.OpenCDN 2.0 provides a centralized control center, so you no longer need to deploy a separate one. After you install the OpenCDN 2.0 Node software on your CDN nodes, you can visit our official website to manage them.
2.The OpenCDN 2.0 console communicates with CDN nodes over TCP port 80 by default; if that does not work, the communication port automatically falls back to 9242.
3.The OpenCDN 2.0 Node installation will uninstall your nginx and stop any running httpd. Be careful before you install.

INSTALL on Linux:
Platform : CentOS 5.X CentOS 6.x 32bits 64bits
wget https://github.com/firefoxbug/OpenCDN2.0/archive/master.zip
unzip master.zip
cd OpenCDN2.0-master/
./install.sh
 
USAGE:
After installation you will get a token that identifies your host.
service opencdn start
service nginx start
Visit http://opencdn.secon.me/login

UNINSTALL:
./uninstall.sh
 
from https://github.com/firefoxbug/OpenCDN_Node2.0
----------------------------------------------------
 
### Console Installation Manual
### LAMP Environment Setup
This document targets CentOS 6.x x86_64. Convention: all commands in this document are prefixed with #.
#wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
#wget http://58.215.133.101:801/rpm/inotify-tools-3.14-1.el6.x86_64.rpm
#wget http://58.215.133.101:801/rpm/opencdn-console-1.1-2.el6-noarch.rpm
#rpm -ivh epel-release-6-8.noarch.rpm
#rpm -ivh inotify-tools-3.14-1.el6.x86_64.rpm
**yum -y localinstall opencdn-console-1.1-2.el6-noarch.rpm** — installing this way is recommended; it automatically resolves dependencies.

Check the SELinux status:
#sestatus
If the output is not `SELinux status: disabled`, you can temporarily disable it:
#setenforce 0
To disable it permanently:
#vim /etc/sysconfig/selinux   (set SELINUX=disabled, then reboot the system)

Start MySQL and set a password:
#service mysqld start
#/usr/bin/mysqladmin -u root password '123'   (sets the MySQL password)
#### Import the data
#cd /usr/local/opencdn/ocdn
#mysql -uroot -p123 -e 'create database cdn_info'   (creates the cdn_info database)
#mysql -uroot -p123 cdn_info
#### Restart the web server (httpd)
/etc/init.d/httpd restart
#### Start OpenCDN
#/etc/init.d/opencdn restart
Check that OpenCDN is running and inspect the logs for anomalies.
#cd /var/log/opencdn/   (view the related logs)
#### Start the service
#service httpd restart, then visit http://x.x.x.x/ocdn/index.php
Default username: admin@ocdn.me  Password: ocdn.me

from https://github.com/firefoxbug/OpenCDN_Console
 
 
 

太平洋上神秘巨大的太平洋王國!祖先都來自中國,如今生活讓人嫉妒不已


波利尼西亚人确实好幸福

57爆新聞 '香港法'侵門踏戶,北京暴怒

香港区议会选举成为变相公投 亲北京阵营兵败如山倒



香港区议会选举建制派覆没,民主派获压倒性胜利
2019-11-24
(自由亚洲电台 记者 吕熙、文海欣、李智智 )反送中运动爆发5个多月来,由于北京控制了主流媒体,不论特区政府还是亲北京阵营,都认为以武力镇压示威者,是有一定的民意基础。直至11月24日的区议会选举,真实的民意反映在选票上。
周日举行的地方区议会选举,没有出现暴力事件,投票时间由早上7时半至晚上10时半,总计294万人投票,投票率高达71.2%,为香港选举史上新高。有不少人形容,这也是「反送中」运动展开至今一次变相公投,虽然区议会是地方行政机关,没有实际法定权力,但对特首选举及立法会却有一定影响力,成为各党派未来大选争取优势的前哨战。
区议会选举共有1090名候选人,将争夺452个席次,工作人员彻夜开票,截至清晨,民主派已取得343席,已笃定取得全港整体过半数议席;亲共阵营在区议会直选中只取得41席。由于特首选委会的1200人中,有117人是由区议员互选组成,此次泛民派取得全港整体过半议席,代表肯定全取117个选委席位,其中60席为新界各区议员互选,另外57席为港九各区议员互选,目前新界及港九的议席已肯定过半。
支持北京强硬立场候选人全军覆没
其中最触目的屯门翠乐选区,亲北京议员何君尧连任失败,不敌民主党卢俊宇,并在脸书发文承认落败,表示自己只得2613票,比卢俊宇少近1200票,但却称「结果异常」。他确定落败后,在场市民立刻大声欢呼,还有人高叫「开香槟」庆祝。
香港的721事件,黑道人物被指在警察的纵容下大规模袭击市民,何君尧亦被指与施袭关系密切。加上他经常发表极左言论,他在北京全力支持下仍然在选举中败北,显示了民意的转向。
另外,亲北京政党民建联和工联会,亦在今次选举中兵败如山倒。代表民建联新一代的周浩鼎,以大比数落败,工联会的多位核心干部亦几近全军覆没。就连在乡郊地区,部分传统乡事势力亦被民主派候选人击败。
选举结果公布后,各方开始注视香港特区政府和北京的反应。周日的选举过程中,大陆派出了大批官媒记者采访,但未见他们发放消息。
各界关注北京如何作出反应
代表传统左派的工联会,其中一位落败的候选人麦美娟在落败后表示,这场选举对左派阵营不公平,她更质疑选民的选择 : “他们(选民)是立场先行,而不是以候选人的实际工作作为投票准则。”
环球时报总编辑胡锡进深夜在网上发言,说这次选民的投票率非常高,这是好事。他还说西方国家为香港反对派助选 : “澳大利亚情报机构突然放出“中国情报人员”王立强“投诚”澳洲并自爆其在香港搞“渗透”的消息,还有快被忘了的英国驻港总领馆前雇员在深圳嫖娼被抓事件的主角突然前两天冒出来接受BBC采访,再加上更重磅的,美国参众两院赶时间通过“香港人权与民主法案”,我认为这些都是在为香港的反对派助选。”
胡锡进强调,无论结果如何,香港的天永远都是中国的天, “香港的地永远都是中国的地。有选举,就会有来回摆动,但香港处在中国的治下,这个事实永远变不了。”

區選變公投 建制覆沒 民主派贏近九成民選議席 控制十七區 總選票維持六四比

2019/11/25


區議會選舉在歷來最高的 71.2% 投票率、超過 294 萬選民投票下,造就民主派大勝,建制派慘敗,民主派在全部 452 個民選議席當中,贏得 86% 議席共 388 席(未計算未完成點票的觀塘藍田)。至於近年壟斷區議會的建制派,整體區議會議席由 325 席,暴跌到只餘 58 席;十八區區議會當中,除了有大量當然議員的離島區外,其餘 17 個區選後都由民主派控制。
得票維持六四比
雖然議席比例懸殊,但值得留意是兩大陣營整體得票比例未有顯著改變,仍維持約「六四」比例。
今屆區議會選舉中,合共 452 個民選議席,民主派一舉贏得 388 個議席(未計算未完成點票的觀塘藍田),建制派則只剩 58 個議席,是自區議會選舉舉行以來最大敗仗,其中建制派第一大黨民建聯近乎全軍覆沒,6 名尋求連任的立法會議員,只有兩人當選,詳見相關報道
18區僅離島一區因當然議席未奪下
民主派大勝,將重奪區議會的控制權,除了本身有 8 席當然議員的離島區,另外 17 區都由民主派議員均佔多數,其中大埔、黃大仙民主派更全勝(詳見相關報道),將可完全控制這些區議會正副主席之位。
但在大勝背後,其實得利於區議會單議席單票制,若整合全港民主派和建制派總得票,民主派總得票為 167 萬票、建制派總得票則是 120 萬票,總體比例約 57%:41%,在超高投票率下,兩大陣營的得票「六四」比例,其實未有明顯變化。
篤定全取 117 特首選委、下屆立會功能組別議席
然而隨著區議會控制權易手,下屆立法會功能界別的區議會(第一)議席,由於亦是由全體區議員互選,民主派將可取得這一席;特首選委會的 1200 人中,有 117 人由區議員互選組成,民主派經此一役亦肯定全取 2021 年特首選舉 117 個選委席位。

香港民主派正在获得2019区议会选举的压倒性胜利

最后更新: 2019年11月25日

香港民主派支持者庆祝亲北京的立法会议员何君尧败选

 美国之音 萧雨 华盛顿 — 一夜之间,香港政治版图剧变。截至当地时间星期一(11月25日)上午9点,民主派阵营在星期日(11月24日)的区议会选举中以压倒性胜利击败亲北京阵营,从原来的不到三成的席位增长到近九成。​
本届选举共有1090名候选人角逐全港18区总计452个民选议席。经过周日晚间的彻夜开票,到目前为止逾九成席位已经揭晓。民主派目前获得387个席位,建制派仅获59席。
以往建制派在这些选举中一直拥有优势。上届区议会选举中,亲北京阵营占据431个席位中的327席,民主派拥有124个。
卷入元朗7.21袭击事件中的立法会议员何君尧在屯门连任区议员失败。他在脸书上承认败选,称输给对手近1200张选票。他用“翻天覆地”来形容今年的选情。
“今年非常,选举非常,结果也异常,”他写道。
“这是历史性的时刻,” 香港知名活动人士黄之锋早些时候在推特上说, “香港人已经发声,洪亮且清晰。国际社会必须认识到,近六个月来,公众舆论没有站在这场运动的对立面,”
黄之锋是唯一被禁止参选今年区议会选举的候选人。一名选举官员裁定他的参选不符合香港法律。他在早些时候的一次采访中对美国之音说,自己被取消参选资格完全是北京的政治压迫,显示香港的“一国两制”名存实亡。
区议会是香港级别最低的民选政府机构之一,不具立法资格。往年的选举通常波澜不惊,但经历了5个多月的政治动荡,本届香港选民的投票热情空前高涨。选举并未出现此前外界担心的暴力,或提前终止投票的情况。
官方当地时间星期一凌晨公布的初步投票率为71.2%,全港410多万登记选民中约有294万人投了票,而2015年为47%。
“我们可以把这次区议员的选举看作是民众的一次公决,”华盛顿非政府组织公民力量发起人杨建利对美国之音说。
“这次选举所表达出的民意和街头上表达出的民意是相吻合的,”他说,“普遍地表达了对于现在的香港政府以及北京的不信任。”
在中国大陆的社交媒体平台新浪微博上,民众的意见却是一面倒地支持北京。
“HK选举实实在在的给内地老百姓上了一课,沉默的大多数都是支持乱的,HK是反中乱华的桥头堡。HK必然衰落了。HK基本结束历史使命了。别管他的生死了,自找的,”一位微博用户写道。

香港区议会选举 泛民主派大胜原因众说纷纭


泛民主派支持者得知属意的候选人胜选后欢呼庆祝。

香港泛民主派在区议会选举取得压倒性大胜,截至今天早上8:45,被视为泛民主派的候选人在452个民选议席中取得超过350席,是香港主权移交以来最多。许多泛民主派当选人认为,结果反映香港市民强烈要求政府答应近月示威浪潮的“五大诉求”,而建制派认为区议会主要处理地区民生事务,但选举结果显示这方面的工作已经不能保证选情。
这次选举中许多知名的建制派人物都落选,包括被视为“激进建制派”的立法会议员何君尧、香港最大亲北京政党民建联两名副主席张国钧和周浩鼎。另外,中国全国人民代表大会香港区代表田北辰也竞逐连任失败。
泛民主派在这次选举中,在全香港18个区议会大多取得过半数议席,泛民议员预料可以在这些议会成为主席。另外,分析也预期他们会全数控制负责选举香港特首的选举委员会预留给区议员的117席,将成为这个有1200名成员的委员会其中一个具影响力的组群。


“民意如山”

这次区议会创下多个香港选举纪录,包括有约290万名选民参加,投票率达70%,是主权移交以来最高。泛民主派也首次在区议会取得超过一半议席。
多次举办反对《逃犯条例》修例的“民阵”召集人岑子杰胜选。他接受访问时形容结果是“民意如山”,希望林郑月娥“把握这个机会,顺应民意”,落实示威者的“五大诉求”。
亲北京建制派的工联会梁美芬认为,政府的施政令选民不满是建制派落败的其中一个原因,但建制派候选人在选举期间遇到的不公平对待也是原因。她又认为结果显示香港的投票已经变成“立场先行”的选举,而不是以每名议员的工作表现为标准,因此令建制派选情受挫。
亲北京政党民建联副主席陈克勤说,他留意到有人在投票后重复排队,制造轮候投票时间长的假象,令一些不愿意久候的选民却步。陈克勤没有给出具体个案,但选举管理委员会主席冯骅说,故意阻碍其他人投票是违规行为,如果有切实证据会采取行动执法。
何君尧在连任区议员失败后在社交网站发表文章,形容选举结果“异常”,但没有详细解释。


Netch:一款开源的网络游戏加速工具(目前仅支持 Windows系统)

Netch 是一款近日开源的网络游戏加速器,支持 Socks5、SS/SSR、V2Ray 等协议,支持 UDP NAT FullCone,指定进程加速也不需要难以维护的 IP 规则。功能上和 SSTAP 差不多,不过据说加速体验比后者更好,甚至堪比一些付费加速器;当然前提是你的线路给力,否则加速就没有意义了。这里就分享一下,具体效果需要自行体验。

Github地址:https://github.com/netchx/Netch

下载地址:https://github.com/netchx/Netch/releases
https://github.com/NetchX/NetchBinaries

这里就说下大概使用演示.
首先双击打开程序,添加好 SSR 等服务器或者添加订阅,然后点击快速创建模式。

点击扫描,选择游戏安装包,会自动扫描所有的exe程序,然后保存即可。

最后选择节点启动加速即可。

最后要补充的是UDP NAT支持FullCone的,能解决游戏NAT类型严格的问题,更详细的使用文档查看→传送门,强烈建议看一下。至于更多的功能的话,只能等待作者慢慢添加了,效果自行测试。
-----------------

Netch


游戏加速工具

目录

  1. 下载与安装
  2. 简介
  3. 使用方法
  4. 常见问题 (Frequently Asked Questions)
  5. 截图
  6. 依赖
  7. 证书
  8. 注释

下载与安装

当前发布版本为免安装版本,解压后点击 Netch.exe 即可使用,目前仅支持 Windows。
注意
  • Windows 64 位系统使用 x64 版本
  • Windows 32 位系统使用 x86 版本
  • 否则你会遇到驱动问题
最新版下载地址

简介

Netch 是一款 Windows 平台的开源游戏加速工具。它可以实现类似 SocksCap64 的进程代理,也可以实现 SSTap 那样的全局 TUN/TAP 代理,以及 shadowsocks-windows 那样的本地 Socks5、HTTP 和系统代理。至于连接远程服务器的代理协议,目前 Netch 支持:Shadowsocks、Vmess、Socks5、ShadowsocksR。
与此同时 Netch 避免了 SSTap 的 NAT 问题 [1],检查 NAT 类型 [2] 即可知道是否有 NAT 问题。使用 SSTap 加速部分 P2P 联机、对 NAT 类型有要求的游戏时,可能会因为 NAT 类型严格而无法加入联机,或遇到其他影响游戏体验的情况。
更新日志
进群提问前请务必先看下方使用方法和常见问题

使用方法

包括模式说明,驱动更新,进程模式创建的方法等
NetchMode/docs/README.zh-CN.md

常见问题 (Frequently Asked Questions)

编辑自 Netch 版本发布频道第 50 条消息

错误报告类问题

无法运行

  • Q:我的系统无法运行(秒出启动失败)
  • A:是不是 64 位系统下着 32 位的包?
  • Q:好像是的,眼瞎了 ……
  • A:……
  • Q:我的 win7 系统无法运行(秒出启动失败),已确认是系统和软件版本位数一致
  • A:如果是驱动问题,详见 issue #14,安装补丁 kb4503292 或者将系统更新至最新
  • Q:我的系统无法运行(打都打不开)
  • A:看下面,装一下运行库
  • Q:装了啊,提示已经安装,但是还是不行
  • A:建议您重装一下系统(已知有用户系统被玩坏了,安装其实根本没装上)
  • Q:我的疯狂报错
  • A:安装一下 .NET Framework 4.8 打上最新的 Visual C++ 合集先
  • Q:照做了,还是有问题
  • A:重装系统谢谢(已知有用户系统被玩坏了,安装其实根本没装上)
  • Q:有时候报错提示 ShadowsocksR 进程已停止运行
  • A:您好,这个问题我这里处理不了,我没法去修改 ssr-libev 的代码让其不异常退出,未来版本也会取消内置的 ShadowsocksR 支持,参考加入更多的 SSR 参数支持
如果重装系统不能解决问题。建议大哥考虑一下购买一台新电脑

订阅无法导入

  • Q:为什么订阅导入不完整?
  • A:导入后看看 logging 目录里的 application.log 吧(也许会暗示什么)
  • Q:啥也没有
  • A:私发订阅链接看看(加群后联系 @ConnectionRefused),一般来讲是订阅链接中有不被识别的 unicode 字符导致的,类似的问题参见 issue #7,这可能会是一个功能改进,但是目前没有时间表

进程模式下无法进入游戏 / 无法成功代理

  • Q: xxx 游戏扫描后仍然无法代理
  • A:除了自带的模式经测试后可用,其他游戏确实会出现代理后反而无法连接进入游戏的情况
进程模式规则问题
譬如守望先锋必须只代理 Overwatch Launcher.exe 而不是其他 exe 才能进游戏
进程名重复问题
你代理的进程名需要和 Netch 使用到的 exe 名称不一样,否则可能会发生代理回环。譬如 bin文件夹下的 Shadowsocks.exe,如果你使用 Shadowsocks代理,模式中就不应该出现 Shadowsocks.exe这样的进程名。你可以通过修改你要代理的 exe 的名称,或者替换为进程名的全路径名(譬如 C:\xxx\xxx.exe)来避免这个问题
shadowsocks 参数问题
譬如 shadowsocks 参数中就不建议 timeout 参数设置过短,否则会影响战网客户端的正常连接,建议删掉该参数保持默认值即可
bin/Redirector.exe 问题
关于 bin/Redirector.exe的新 issue 请统一到 issue #152按照格式来回复
该文件是闭源的,主要是负责和底层 Netfilter SDK 的控制,其各个版本之间还有细微差距,经反馈,较为稳定的为 1.0.9-STABLE - 1.2.4-STABLE(无流量统计)和 1.2.9 的版本,以下为推荐的旧版本下载链接,请大家自行尝试。下载后,只需将 bin/Redirector.exe覆盖即可
只有 1.3.0 及以后和 1.2.4-STABLE 及以前的 bin/Redirector.exe有处理进程所在中文路径的能力,如果需要使用不支持中文路径的 bin/Redirector.exe,请自行修改进程所在路径
请根据 Netch 的 x86/x64 版本来下载 bin/Redirector.exe
1.2.9 bin/Redirector.exe
1.0.9-STABLE - 1.2.4-STABLE bin/Redirector.exe
其他版本(1.3.0 及之前):
将最左列的 hash 值复制替换掉该链接指定位置即可用于下载 https://github.com/NetchX/Netch/raw/替换这里/binaries/x64/Redirector.exe,x86 版本换掉 x64 即可
$ git log --pretty=oneline --decorate --source --tags binaries/x64/Redirector.exe
6a6a1db17092c668546eb073ac5b79bb717b0b7a 190929 1.3.3 [Redirector] Bypass IPv6 loopback
fc94119e7a68e9da16d5ee857c798ce908e1e54f 190928 1.3.2 Update x64 Redirector
e3a9a75343bd808593a5e93781e42e414e9c8e1c 190927 1.3.1 Return short path when fetching long path fails
4860e038c7d667026b48e7ea7e42a777646c6782 190917 1.3.0 Fix path contains chinese
349c44f8947e5f6aae8677b2ea93ea7eb441a537 190906 1.2.9 Update redirector, now support custom tcp port with -t arg
ed60a46dee8179836773731c0970d2e004375024 190904 1.2.9 Fix and optimize redirector
fee275a25c86b2f7c18a9362ff12a0882ae90bc1 190902 1.2.8-BETA [Redirector] Optimize speed statistics, Optimize performance, Add logs for UDP
9b837629fda39c1f30a4579cbe343076c0e14380 190831 1.2.7-STABLE Recompile redirector with new driver
ac57ae0be6137fcd5abf9b0529d55206fd81359b 190830 1.2.6-STABLE (tag: 1.2.6-STABLE) Optimize
a0a5b64833b520a065084024a425fe8ada2967f3 190830 1.2.6-STABLE Speed and bandwidth optimized
7b30473f41e4468d6744456dd040f0d62a271e7a 190830 1.2.6-STABLE Speed and bandwidth working now (Need optimize)
b8164a02419d630753fdfa27981100289abd9b89 190830 1.2.5-STABLE Update prebuilt binares
45954d7f4ed9500014d4dfae48c23b0887db1b77 190830 1.2.5-STABLE Update prebuilt binaries with upx compress
acb4bc24651509c21558420d97865262e959bc0c 190629 1.0.9-STABLE Rollback
5012a4d3011eafa3368f6cc97901e21af2e2874d 190628 1.0.9-STABLE Merge redirector and update version code to 1.0.9-STABLE
666050c3071dba67e2f0c6aae5eb5381a5acb39d 190625 1.0.5-STABLE Updated
进程模式以外的方法
如果你遇到的问题仍无法解决,你还可以将模式切换为 TUN/TAP 模式来加速游戏,不同于 SSTap,Netch 底层使用的 tun2socks 不存在 NAT 类型严格的问题,只是这样就是全局代理了。如果有按规则代理的需要,可以参考 NetchMode/docs/README.zh-CN.md。如果 TUN/TAP 模式还是不行,建议使用 outline 或者 SSTap 来解决问题,其中 outline 也没有 NAT 问题,如果不在意规则,能接受全局,建议使用 outline

NAT 类型限制

  • Q: xxx 游戏对 NAT 类型有要求,你们这个加速器代理后 NAT 类型还是严格 xxx ,我甚至用 NATTypeTester 测过了,还是不行 xxx
  • A:经过测试这款软件是可以做到 Full Cone 的 NATType 。如果你自己测试不行,需要考虑三个方面的问题
    • 第一个是你的服务器是否支持 Full Cone 的 NATType ,这可以通过其他软件的测试来佐证,譬如使用 Sockscap64 之类
    • 第二个是你本地的网络环境问题,首先,关闭 windows 防火墙,经测试 windows 防火墙会将 Full Cone 限制到 Port Restricted Cone,无论你是使用 TUN/TAP 模式,还是进程模式,除非你的游戏对 NAT 不敏感,否则请务必先把 windows 防火墙关闭。其次,某些杀软的防火墙可能也会影响到 NAT 类型,根据情况你可以关闭杀软的防火墙,或者卸载杀软来避免问题发生
    • 第三个是运营商的网络问题,经测试联通数据和长宽等网络,即使在代理后也无法做到 Full cone ,就算服务器是支持 Full cone 的。这种情况下你可能需要切换全局的 VPN 代理工具(WireGuard , Badvpn , Openvpn , tinyfecVPN 等),也可以尝试 Netch 的 TUN/TAP 模式,或者更换本地网络环境
    • 第四个是某些游戏的代理模式有问题,可能遇到各种玄学问题,参见上方

Steam / 浏览器无法正常打开页面

  • Q:用来加速 Steam / 浏览器,结果无法正常打开页面
  • A:有人测试可行有人测试不可行。首先声明一下,本软件的功能主要不是用来代理 Steam / 浏览器打开页面的,建议使用专门的工具,如 SteamCommunity 302,浏览器则建议用 shadowsocks-windows, clash for windows 等等,你甚至可以尝试 shadowsocks-windows over Netch,这可能会是一个功能改进,但是目前没有时间表

UWP 应用无法代理

  • Q:UWP 应用 xxx 无法代理
  • A:请按照此方法设置即可

功能建议类问题

加入本地代理功能

  • Q:我想在电脑上代理斯维奇
  • A:腾讯加速器好像可以免费加速主机游戏
  • A:会考虑加入,但不会是高优先级,你可以考虑通过 Pull Request 的方式为本软件加入该支持

加入更多的 SSR 参数支持

  • Q:希望能加入更多的 SSR 参数支持,我那个机场的订阅好多节点无法导入 issue #11
  • A:根据最新的 项目计划表,shadowsocksr的支持将在未来的版本由于各种原因而被放弃。在未来的版本中,可以通过 Socks5 代理进行中转

截图

主界面

依赖

注释

[1]  ↑  NAT 原理
[2]  ↑  NAT 类型检测工具

from https://github.com/NetchX/Netch/blob/master/docs/README.zh-CN.md

为什么 DNS 使用 UDP 协议

今天要分析的具体问题是『为什么 DNS 使用 UDP 协议』,DNS 作为整个互联网的电话簿,它能够将可以被人理解的域名翻译成可以被机器理解的 IP 地址,使得互联网的使用者不再需要直接接触很难阅读和理解的 IP 地址。

相信 DNS 使用 UDP 协议已经成为了软件工程师的常识,对计算机网络稍有了解的人都知道 DNS 会使用 UDP 协议传输数据,但是这一观点其实不是完全正确的,我们在这里就会详细分析『为什么 DNS 会使用 UDP 传输数据』以及『为什么 DNS 不止会使用 UDP 传输数据』两个问题,希望能够帮助各位读者理解 DNS 协议的全貌。

我们将要讨论的两个问题其实并不冲突,在绝大多数情况下,DNS 都是使用 UDP 协议进行通信的,DNS 协议在设计之初也推荐我们在进行域名解析时首先使用 UDP,这确实能解决很多需求,但是不能解决全部的问题。
实际上,DNS 不仅使用了 UDP 协议,也使用了 TCP 协议,不过在具体介绍今天的问题之前,我们还是要对 DNS 协议进行简单的介绍:DNS 查询的类型不止包含 A 记录、CNAME 记录等常见查询,还包含 AXFR 类型的特殊查询,这种特殊查询主要用于 DNS 区域传输,它的作用就是在多个命名服务器之间快速迁移记录,由于查询返回的响应比较大,所以会使用 TCP 协议来传输数据包。
作为被广泛使用的协议,我们能够找到非常多 DNS 相关的 RFC 文档,DNS Camel Viewer中列出了将近 300 个与 DNS 协议相关的 RFC 文档,其中有 6 个是目前的互联网标准,有 102 个是 DNS 相关的提案,这些文档共同构成了我们目前对于 DNS 协议的设计理解,作者也没有办法去一一阅读其中的内容,只选择了其中一些重要的文档帮我们理解 DNS 的发展史以及它与 UDP/TCP 协议的关系,这里只会摘抄文档中与 UDP/TCP 协议相关的内容:
  1. RFC1034 · Domain Names - Concepts and Facilities Internet Standard, 1987-11
    1. DNS 查询可以通过 UDP 数据包或者 TCP 连接进行传输;
    2. 由于 DNS 区域传输的功能对于数据的准确有着较强的需求,所以我们必须使用 TCP 或者其他的可靠协议来处理 AXFR 类型的请求;
  2. RFC1035 · Domain Names - Implementation and Specification
    1. 互联网支持命名服务器通过 TCP 或者 UDP 协议进行访问;
    2. UDP 协议携带的消息不应该超过 512 字节,超过的消息会被截断并设置 DNS 协议的 TC位,UDP 协议对于区域传输功能是不可接受的,不过是互联网上标准查询的推荐协议。通过 UDP 协议发送的查询可能会丢失,所以需要重传策略解决这个问题;
  3. RFC1123 · Requirements for Internet Hosts – Application and Support Internet Standard, 1989-10
    1. 未来定义的新 DNS 记录类型可能会包含超过 512 字节的信息,所以我们应该使用 TCP 协议来传输 DNS 记录;因此解析器和命名服务需要使用 TCP 协议作为 UDP 无法满足需求时的备份;
    2. DNS 解析器和递归服务器必须支持 UDP 协议,并且应该支持使用 TCP 协议发送非区域传输的查询;也就是说,DNS 解析器或者服务器在发送非区域传输查询时,必须先发送一个 UDP 查询,如果该查询的响应被截断,它应该尝试使用 TCP 协议重新请求;
  4. RFC3596 · DNS Extensions to Support IP Version 6 Internet Standard, 2003-10
    1. 通过 DNS 扩展支持 IPv6 协议,每个 IPv6 地址占 16 个字节,是 IPv4 的四倍;
  5. RFC5011 · Automated Updates of DNS Security (DNSSEC) Trust Anchors Independent, 2007-10
    1. 新增多种资源记录为 DNS 客户端的 DNS 数据来源进行认证,记录包含的数据往往较大;
  6. RFC6376 · DomainKeys Identified Mail (DKIM) Signatures Internet Standard, 2011-09
    1. 选择合适的键大小进行加密是需要在成本、性能和风险之间的权衡,然而大的键(4096-bit)可能没有办法直接放到 DNS UDP 响应包中直接返回;
  7. RFC6891 · Extension Mechanisms for DNS (EDNS(0)) Internet Standard, 2013-04
    1. 使用 UDP 进行传输的 DNS 查询和响应最大不能超过 512 字节,不能支持大量 IPv6 地址或者 DNS 安全签名等记录的传输;
    2. EDNS 为 DNS 提供了扩展功能,让 DNS 通过 UDP 协议携带最多 4096 字节的数据;
  8. RFC7766 · DNS Transport over TCP - Implementation Requirements Proposed Standard, 2016-03
    1. 当客户端接收到一个被截断的 DNS 响应时,应该通过 TC 字段判断是否需要通过 TCP 协议重复发出 DNS 查询请求;
    2. DNSSEC 的引入使得截断的 UDP 数据包变得非常常见;
    3. 使用 UDP 传输 DNS 的数据包大小超过最大传输单元(MTU)时可能会导致 IP 数据包的分片,RFC1123 文档中预测的未来已经到来了,唯一一个用于增加 UDP 能够携带数据包大小的 EDNS 机制被认为不够可靠;
    4. 所有通用 DNS 实现必须要同时支持 UDP 和 TCP 传输协议,其中包括权威服务器、递归服务器以及桩解析器;
    5. 桩解析器和递归解析器可以根据情况选择使用 TCP 或者 UDP 查询直接请求目标服务器,以 UDP 协议来开始发起 DNS 请求不再是强制性的,TCP 协议与 UDP 协议在 DNS 查询中可以互相替代,而不是作为重试机制;
  9. Specification for DNS over Transport Layer Security (TLS) Proposed Standard, 2016-05
    1. 在 DNS 协议中引入 TLS 来为用户提供隐私,减少对 DNS 查询的窃听和篡改,但是 TLS 协议的引入会带来一些性能方面的额外开销;
  10. RFC8484 · DNS Queries over HTTPS (DoH) Proposed Standard, 2018-10
    1. 定义了一种通过 HTTPS 发送 DNS 查询和获取 DNS 响应的协议;
我们可以简单总结一下 DNS 的发展史,1987 年的 RFC1034 和 RFC1035 定义了最初版本的 DNS 协议,刚被设计出来的 DNS 就会同时使用 UDP 和 TCP 协议,对于绝大多数的 DNS 查询来说都会使用 UDP 数据报进行传输,TCP 协议只会在区域传输的场景中使用,其中 UDP 数据包只会传输最大 512 字节的数据,多余的会被截断;两年后发布的 RFC1123 预测了 DNS 记录中存储的数据会越来越多,同时也第一次显式地指出了发现 UDP 包被截断时应该通过 TCP 协议重试。
过了将近 20 年的时间,由于互联网的发展,人们发现 IPv4 已经不够分配了,所以引入了更长的 IPv6,DNS 也在 2003 年发布的 RFC3596 中进行了协议上的支持;随后发布的 RFC5011 和 RFC6376 增加了在鉴权和安全方面的支持,但是也带来了巨大的 DNS 记录,UDP 数据包被截断变得非常常见。
RFC6891提供的 DNS 扩展机制能够帮助我们在一定程度上解决大数据包被截断的问题,减少了使用 TCP 协议进行重试的需要,但是由于最大传输单元的限制,这并不能解决所有问题。
DNS 出现之后的 30 多年,RFC7766才终于提出了使用 TCP 协议作为主要协议来解决 UDP 无法解决的问题,TCP 协议也不再只是一种重试时使用的机制,随后出现的 DNS over TLS 和 DNS over HTTP 也都是对 DNS 协议的一种补充。
从这段发展史来看,DNS 并不只是使用 UDP 数据包进行通信,在 DNS 的标准中就一直能看到 TCP 协议的身影,我们在今天也是想要站在历史的角度上分析 ——『为什么 DNS 查询选择使用 UDP/TCP 协议』。
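The truncate-then-retry rule from RFC1123 hinges on the TC bit in the DNS header. As a minimal Python sketch (our own illustration, not code from any DNS library), a resolver can inspect bytes 2–3 of a response and decide whether to retry the same query over TCP:

```python
import struct

def is_truncated(dns_message: bytes) -> bool:
    """Check the TC (truncation) bit in a DNS header (RFC 1035).

    The header starts with a 16-bit ID followed by a 16-bit flags field;
    bit 0x0200 of the flags field is TC.
    """
    (flags,) = struct.unpack("!H", dns_message[2:4])
    return bool(flags & 0x0200)

# Two hand-built response headers: one with TC set (0x8300: QR, TC, RD)
# and one without (0x8100: QR, RD). A resolver seeing TC=1 should repeat
# the same query over TCP.
truncated = struct.pack("!6H", 0x1234, 0x8300, 1, 0, 0, 0)
complete = struct.pack("!6H", 0x1234, 0x8100, 1, 0, 0, 0)
print(is_truncated(truncated), is_truncated(complete))  # True False
```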

设计

在这一节中,我们将根据 DNS 使用协议的不同,分两个部分介绍 UDP 和 TCP 两种不同的协议在支持 DNS 查询和响应时有哪些优点和缺点,在分析的过程中我们也会结合历史上的上下文,还原做出设计决策时的场景。

UDP

UDP 协议在过去的几十年中其实都是 DNS 主要使用的协议,作为互联网的标准,目前的绝大多数 DNS 请求和响应都会使用 UDP 协议进行数据的传输,我们通过抓包工具就能轻松获得以 UDP 协议为载体的 DNS 请求和响应。
DNS 请求的数据都会以二进制的形式封装到如下所示的 UDP 数据包中。下面就是一个向 DNS 服务器请求 www.baidu.com 域名 IP 地址的请求,从第四行的 05 字节开始到最后就是 DNS 请求的内容,整个数据包中除了 DNS 协议相关的内容之外,还包含以太网、IP 和 UDP 的协议头:

0000   b0 6e bf 6a 4c 40 38 f9 d3 ce 10 a6 08 00 45 00   .n.jL@8.......E.
0010   00 3b 97 ae 00 00 40 11 0b 0a c0 a8 32 6d 72 72   .;....@.....2mrr
0020   72 72 f3 27 00 35 00 27 6b ee 0c 5a 01 00 00 01   rr.'.5.'k..Z....
0030   00 00 00 00 00 00 03 77 77 77 05 62 61 69 64 75   .......www.baidu
0040   03 63 6f 6d 00 00 01 00 01                        .com.....
虽然每一个 UDP 数据包中都包含了很多以太网、IP、UDP 以及 DNS 协议的相关内容,但是上面的 DNS 请求大小只有 73 个字节,上述 DNS 请求的响应也只有 132 个字节,这对于今天其他的常见请求来讲都是非常小的数据包:

0000   38 f9 d3 ce 10 a6 b0 6e bf 6a 4c 40 08 00 45 00   8......n.jL@..E.
0010   00 76 00 00 00 00 96 11 4c 7d 72 72 72 72 c0 a8   .v......L}rrrr..
0020   32 6d 00 35 f3 27 00 62 5b c2 0c 5a 81 80 00 01   2m.5.'.b[..Z....
0030   00 03 00 00 00 00 03 77 77 77 05 62 61 69 64 75   .......www.baidu
0040   03 63 6f 6d 00 00 01 00 01 c0 0c 00 05 00 01 00   .com............
0050   00 02 cb 00 0f 03 77 77 77 01 61 06 73 68 69 66   ......www.a.shif
0060   65 6e c0 16 c0 2b 00 01 00 01 00 00 01 18 00 04   en...+..........
0070   3d 87 a9 7d c0 2b 00 01 00 01 00 00 01 18 00 04   =..}.+..........
0080   3d 87 a9 79                                       =..y
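The 73-byte figure can be checked by constructing the same query payload by hand. The sketch below (our own illustration) packs a minimal RFC 1035 A-record query for www.baidu.com and adds the Ethernet/IP/UDP header sizes:

```python
import struct

def build_query(name: str, qtype: int = 1, qclass: int = 1, txid: int = 0x0C5A) -> bytes:
    """Build a minimal DNS standard query (RFC 1035) for an A record."""
    # Header: ID, flags (RD=1), 1 question, 0 answer/authority/additional records.
    header = struct.pack("!6H", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(struct.pack("B", len(p)) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!2H", qtype, qclass)

payload = build_query("www.baidu.com")
print(len(payload))                 # 31: the DNS payload itself
print(14 + 20 + 8 + len(payload))   # 73: plus Ethernet/IP/UDP headers, matching the capture
```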
UDP 和 TCP 的通信机制非常不同,作为可靠的传输协议,TCP 协议需要通信的双方通过三次握手建立 TCP 连接后才可以通信,但是在 30 年前的 DNS 查询的场景中我们其实并不需要稳定的连接(或者认为不需要),每一次 DNS 查询都会直接向命名服务器发送 UDP 数据报,与此同时常见 DNS 查询的数据包都非常小,TCP 建立连接会带来以下的额外开销:

  • TCP 建立连接需要进行三次网络通信;
  • TCP 建立连接需要传输 ~130 字节的数据;
  • TCP 销毁连接需要进行四次网络通信;
  • TCP 销毁连接需要传输 ~160 字节的数据;
假设网络通信所消耗的时间是可以忽略不计的,如果我们只考虑 TCP 建立连接时传输的数据的话,可以简单来算一笔账:


  • 使用 TCP 协议(共 330 字节)
    • 三次握手 — 14x3(Ethernet) + 20x3(IP) + 44 + 44 + 32 字节
    • 查询协议头 — 14(Ethernet) + 20(IP) + 20(TCP) 字节
    • 响应协议头 — 14(Ethernet) + 20(IP) + 20(TCP) 字节
  • 使用 UDP 协议(共 84 字节)
    • 查询协议头 — 14(Ethernet) + 20(IP) + 8(UDP) 字节
    • 响应协议头 — 14(Ethernet) + 20(IP) + 8(UDP) 字节
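The byte accounting above can be reproduced directly (this follows the article's own itemization, not a general formula):

```python
# Three-way handshake as itemized above: 3 Ethernet + 3 IP headers,
# plus TCP segments of 44, 44 and 32 bytes.
handshake = 14 * 3 + 20 * 3 + 44 + 44 + 32

# One query and one response, with full Ethernet/IP/TCP or Ethernet/IP/UDP headers.
tcp_total = handshake + 2 * (14 + 20 + 20)
udp_total = 2 * (14 + 20 + 8)

print(tcp_total, udp_total)  # 330 84
```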

需要注意的是,我们在这里计算结果的前提是 DNS 解析器只需要与一个命名服务器或者权威服务器进行通信就可以获得 DNS 响应,但是在实际场景中,DNS 解析器可能会递归地与多个命名服务器进行通信,这也加倍地放大了 TCP 协议在额外开销上的劣势。
如果 DNS 查询的请求体和响应分别是 15 和 70 字节,那么 TCP 相比于 UDP 协议会增加 ~250 字节和 ~145% 的额外开销,所以当请求体和响应的大小比较小时,通过 TCP 协议进行传输不仅需要传输更多的数据,还会消耗更多的资源,多次通信以及信息传输带来的时间成本在 DNS 查询较小时是无法被忽视的,TCP 连接带来的可靠性在 DNS 的场景中没能发挥太大的作用。

TCP

今天的网络状况其实没有几十年前设计的那么简单,我们不仅遇到了 IPv4 即将无法分配的状况,而且还需要引入 DNSSEC 等机制来保证 DNS 查询和请求的完整性以及传输安全。总而言之,DNS 协议需要处理的数据包越来越大、数据也越来越多,但是『为什么当需要传输的数据较多时我们就必须使用 TCP 协议呢』?如果继续使用 UDP 协议就不能完成 DNS 解析吗?
从理论上来说,一个 UDP 数据包的大小最多可以达到 64KB,这对于一个常见的 DNS 查询其实是一个非常大的数值;但是在实际生产中,一旦数据包中的数据超过了传送链路的最大传输单元(MTU,也就是单个数据包大小的上限,一般为 1500 字节),当前数据包就可能会被分片传输、丢弃,部分的网络设备甚至会直接拒绝处理包含 EDNS(0) 选项的请求,这就会导致使用 UDP 协议的 DNS 不稳定。
TCP 作为可靠的传输协议,可以非常好地解决这个问题,通过序列号、重传等机制能够保证消息的不重不漏,消息接收方的 TCP 栈会对分片的数据重新进行拼装,DNS 等应用层协议可以直接使用处理好的完整数据。同时,当数据包足够大的时候,TCP 三次握手带来的额外开销比例就会越来越小,与整个包的大小相比就会趋近于 0:

  • 当 DNS 数据包大小为 500 字节时,TCP 协议的额外开销为 ~41.2%;
  • 当 DNS 数据包大小为 1100 字节时,TCP 协议的额外开销为 ~20.7%;
  • 当 DNS 数据包大小为 2300 字节时,TCP 协议的额外开销为 ~10.3%;
  • 当 DNS 数据包大小为 4800 字节时,TCP 协议的额外开销为 ~5.0%;
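The shrinking overhead share can be approximated with a simplified model (our own simplification, so the exact numbers differ slightly from those above, but the trend is the same): the fixed extra cost of TCP is amortized over a growing DNS payload.

```python
# Simplified model: TCP's fixed extra cost is the three-way handshake
# plus the larger per-packet headers on the query and response.
HANDSHAKE = 14 * 3 + 20 * 3 + 44 + 44 + 32   # 222 bytes, as itemized earlier
HEADER_DELTA = 2 * (20 - 8)                  # TCP vs. UDP header, query + response

def tcp_overhead_ratio(dns_bytes: int) -> float:
    extra = HANDSHAKE + HEADER_DELTA
    return extra / (dns_bytes + extra)

ratios = [tcp_overhead_ratio(n) for n in (500, 1100, 2300, 4800)]
print([round(r, 3) for r in ratios])  # the overhead share shrinks as packets grow
```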

所以,我们在 DNS 中存储较多的内容时,TCP 三次握手以及协议头带来的额外开销就不是关键因素了,不过 TCP 三次握手带来的三次网络传输耗时还是没有办法避免的,这也是我们在目前的场景下不得不接受的问题。

总结

很多人认为 DNS 使用了 UDP 协议来获取域名对应的 IP 地址,这个观点虽然没错,但是还是有一些片面,更加准确的说法其实是 DNS 查询在刚设计时主要使用 UDP 协议进行通信,而 TCP 协议也是在 DNS 的演进和发展中被加入到规范的:

  1. DNS 在设计之初就在区域传输中引入了 TCP 协议,在查询中使用 UDP 协议;
  2. 当 DNS 响应超过了 512 字节的限制,我们第一次在 DNS 协议中明确了『当 DNS 查询被截断时,应该使用 TCP 协议进行重试』这一规范;
  3. 随后引入的 EDNS 机制允许我们使用 UDP 最多传输 4096 字节的数据,但是由于 MTU 的限制导致的数据分片以及丢失,使得这一特性不够可靠;
  4. 在最近的几年,我们重新规定了 DNS 应该同时支持 UDP 和 TCP 协议,TCP 协议也不再只是重试时的选择;
这篇文章已经详细介绍了 DNS 的历史以及选择不同协议时考虑的关键点,在这里我们重新回顾一下 DNS 查询选择 UDP 或者 TCP 两种不同协议时的主要原因:

  • UDP 协议
    • DNS 查询的数据包较小、机制简单;
    • UDP 协议的额外开销小、有着更好的性能表现;
  • TCP 协议
    • DNS 查询由于 DNSSEC 和 IPv6 的引入迅速膨胀,导致 DNS 响应经常超过 MTU 造成数据的分片和丢失,我们需要依靠更加可靠的 TCP 协议完成数据的传输;
    • 随着 DNS 查询中包含的数据不断增加,TCP 协议头以及三次握手带来的额外开销比例逐渐降低,不再是占据总传输数据大小的主要部分;
无论是选择 UDP 还是 TCP,最核心的矛盾就在于需要传输的数据包大小,如果数据包小到一定程度,UDP 协议绝对是最佳的选择,但是当数据包逐渐增大直到突破 512 字节以及 MTU 1500 字节的限制时,我们也只能选择使用更可靠的 TCP 协议来传输 DNS 查询和响应。到最后,我们还是来看一些比较开放的相关问题,有兴趣的读者可以仔细思考一下:

  • 如何对使用 TCP 协议的 DNS 进行一些优化,减少一些额外开销?
  • 我们现在已经可以使用 UDP/TCP/TLS/HTTPS 四种方式传输 DNS 数据,这些方式有什么异同?是否还可以通过其他的协议实现 DNS 查询?


Reference


详解 DNS的实现原理

域名系统(Domain Name System)是整个互联网的电话簿,它能够将可被人理解的域名翻译成可被机器理解的 IP 地址,使得互联网的使用者不再需要直接接触很难阅读和理解的 IP 地址。

我们在这篇文章中的第一部分会介绍 DNS 的工作原理以及一些常见的 DNS 问题,而第二部分会介绍 DNS 服务 CoreDNS的架构和实现原理。

DNS

域名系统在现在的互联网中非常重要,因为服务器的 IP 地址可能会经常变动,如果没有了 DNS,那么可能 IP 地址一旦发生了更改,当前服务器的客户端就没有办法连接到目标的服务器了,如果我们为 IP 地址提供一个『别名』并在其发生变动时修改别名和 IP 地址的关系,那么我们就可以保证集群对外提供的服务能够相对稳定地被其他客户端访问。

DNS 其实就是一个分布式的树状命名系统,它就像一个去中心化的分布式数据库,存储着从域名到 IP 地址的映射。

工作原理

在我们对 DNS 有了简单的了解之后,接下来我们就可以进入 DNS 工作原理的部分了,作为用户访问互联网的第一站,当一台主机想要通过域名访问某个服务的内容时,需要先通过当前域名获取对应的 IP 地址。这时就需要通过一个 DNS 解析器负责域名的解析,下面的图片展示了 DNS 查询的执行过程:

  1. 本地的 DNS 客户端向 DNS 解析器发出解析 xyz.me 域名的请求;
  2. DNS 解析器首先会向就近的根 DNS 服务器 .请求顶级域名 DNS 服务的地址;
  3. 拿到顶级域名 DNS 服务 me.的地址之后会向顶级域名服务请求负责 xyz.me.域名解析的命名服务;
  4. 得到授权的 DNS 命名服务时,就可以根据请求的具体的主机记录直接向该服务请求域名对应的 IP 地址;
DNS 客户端接受到 IP 地址之后,整个 DNS 解析的过程就结束了,客户端接下来就会通过当前的 IP 地址直接向服务器发送请求。
对于 DNS 解析器,这里使用的 DNS 查询方式是迭代查询,每个 DNS 服务并不会直接返回 DNS 信息,而是会返回另一台 DNS 服务器的位置,由客户端依次询问不同级别的 DNS 服务直到查询得到了预期的结果;另一种查询方式叫做递归查询,也就是 DNS 服务器收到客户端的请求之后会直接返回准确的结果,如果当前服务器没有存储 DNS 信息,就会访问其他的服务器并将结果返回给客户端。
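The iterative query described above can be sketched as a toy referral chain in Python (the server names and zone data here are made up for illustration; each "server" either refers the resolver onward or answers with an address):

```python
# Toy model of iterative resolution: the root knows the TLD server,
# the TLD server delegates to the authoritative server, which answers.
ROOT = {"me.": "tld-server"}
SERVERS = {
    "tld-server":  {"xyz.me.": "auth-server"},      # delegation (a referral)
    "auth-server": {"xyz.me.": "123.56.94.228"},    # authoritative A record
}

def resolve(name: str) -> str:
    """Follow referrals from the root until a server returns an address."""
    server = ROOT[name.split(".", 1)[1]]  # ask the root for the TLD server
    while True:
        answer = SERVERS[server][name]
        if answer in SERVERS:             # a referral: ask the next server
            server = answer
        else:                             # an address: resolution is done
            return answer

print(resolve("xyz.me."))
```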

域名层级

域名层级是一个层级的树形结构,树的最顶层是根域名,一般使用 .来表示,这篇文章所在的域名一般写作 xyz.me,但是这里的写法其实省略了最后的 .,也就是全称域名(FQDN)xyz.me.

根域名下面的就是 com、net、me 等顶级域名以及次级域名 xyz.me,我们一般在各个域名网站中购买和使用的都是次级域名、子域名和主机名。

域名服务器

既然域名的命名空间是树形的,那么用于处理域名解析的 DNS 服务器也是树形的,只是在树的组织和每一层的职责上有一些不同。DNS 解析器从根域名服务器查找到顶级域名服务器的 IP 地址,又从顶级域名服务器查找到权威域名服务器的 IP 地址,最终从权威域名服务器查出了对应服务的 IP 地址。
$ dig -t A xyz.me +trace
我们可以使用 dig 命令追踪 xyz.me域名对应 IP 地址是如何被解析出来的,首先会向预置的 13 组根域名服务器发出请求获取顶级域名的地址:
.   56335 IN NS m.root-servers.net.
. 56335 IN NS b.root-servers.net.
. 56335 IN NS c.root-servers.net.
. 56335 IN NS d.root-servers.net.
. 56335 IN NS e.root-servers.net.
. 56335 IN NS f.root-servers.net.
. 56335 IN NS g.root-servers.net.
. 56335 IN NS h.root-servers.net.
. 56335 IN NS i.root-servers.net.
. 56335 IN NS a.root-servers.net.
. 56335 IN NS j.root-servers.net.
. 56335 IN NS k.root-servers.net.
. 56335 IN NS l.root-servers.net.
. 56335 IN RRSIG NS 8 0 518400 20181111050000 20181029040000 2134 . G4NbgLqsAyin2zZFetV6YhBVVI29Xi3kwikHSSmrgkX+lq3sRgp3UuQ3 JQxpJ+bZY7mwzo3NxZWy4pqdJDJ55s92l+SKRt/ruBv2BCnk9CcnIzK+ OuGheC9/Coz/r/33rpV63CzssMTIAAMQBGHUyFvRSkiKJWFVOps7u3TM jcQR0Xp+rJSPxA7f4+tDPYohruYm0nVXGdWhO1CSadXPvmWs1xeeIKvb 9sXJ5hReLw6Vs6ZVomq4tbPrN1zycAbZ2tn/RxGSCHMNIeIROQ99kO5N QL9XgjIJGmNVDDYi4OF1+ki48UyYkFocEZnaUAor0pD3Dtpis37MASBQ fr6zqQ==
;; Received 525 bytes from 8.8.8.8#53(8.8.8.8) in 247 ms
根域名服务器是 DNS 中最高级别的域名服务器,这些服务器负责返回顶级域的权威域名服务器地址,这些域名服务器的数量总共有 13 组,从上面返回的结果可以看到域名的格式是 .root-servers.net,每个根域名服务器中只存储了顶级域服务器的 IP 地址,大小其实也只有 2MB 左右。虽然域名服务器总共只有 13 组,但是每一组服务器都提供了镜像服务,全球大概也有几百台的根域名服务器在运行。
在这里,我们获取到了以下的 5 条 NS 记录,也就是 5 台 me. 顶级域名 DNS 服务器:
me.   172800 IN NS b0.nic.me.
me. 172800 IN NS a2.nic.me.
me. 172800 IN NS b2.nic.me.
me. 172800 IN NS a0.nic.me.
me. 172800 IN NS c0.nic.me.
me. 86400 IN DS 2569 7 1 09BA1EB4D20402620881FD9848994417800DB26A
me. 86400 IN DS 2569 7 2 94E798106F033500E67567B197AE9132C0E916764DC743C55A9ECA3C 7BF559E2
me. 86400 IN RRSIG DS 8 1 86400 20181113050000 20181031040000 2134 . O81bud61Qh+kJJ26XHzUOtKWRPN0GHoVDacDZ+pIvvD6ef0+HQpyT5nV rhEZXaFwf0YFo08PUzX8g5Pad8bpFj0O//Q5H2awGbjeoJnlMqbwp6Kl 7O9zzp1YCKmB+ARQgEb7koSCogC9pU7E8Kw/o0NnTKzVFmLq0LLQJGGE Y43ay3Ew6hzpG69lP8dmBHot3TbF8oFrlUzrm5nojE8W5QVTk1QQfrZM 90WBjfe5nm9b4BHLT48unpK3BaqUFPjqYQV19C3xJ32at4OwUyxZuQsa GWl0w9R5TiCTS5Ieupu+Q9fLZbW5ZMEgVSt8tNKtjYafBKsFox3cSJRn irGOmg==
;; Received 721 bytes from 192.36.148.17#53(i.root-servers.net) in 59 ms
当 DNS 解析器从根域名服务器中查询到了顶级域名 .me 服务器的地址之后,就可以访问这些顶级域名服务器其中的一台 b2.nic.me 获取权威 DNS 服务器的地址了:
xyz.me.  86400 IN NS f1g1ns1.dnspod.net.
xyz.me. 86400 IN NS f1g1ns2.dnspod.net.
fsip6fkr2u8cf2kkg7scot4glihao6s1.me. 8400 IN NSEC3 1 1 1 D399EAAB FSJJ1I3A2LHPTHN80MA6Q7J64B15AO5K NS SOA RRSIG DNSKEY NSEC3PARAM
fsip6fkr2u8cf2kkg7scot4glihao6s1.me. 8400 IN RRSIG NSEC3 7 2 8400 20181121151954 20181031141954 2208 me. eac6+fEuQ6gK70KExV0EdUKnWeqPrzjqGiplqMDPNRpIRD1vkpX7Zd6C oN+c8b2yLoI3s3oLEoUd0bUi3dhyCrxF5n6Ap+sKtEv4zZ7o7CEz5Fw+ fpXHj7VeL+pI8KffXcgtYQGlPlCM/ylGUGYOcExrB/qPQ6f/62xrPWjb +r4=
qcolpi5mj0866sefv2jgp4jnbtfrehej.me. 8400 IN NSEC3 1 1 1 D399EAAB QD4QM6388QN4UMH78D429R72J1NR0U07 NS DS RRSIG
qcolpi5mj0866sefv2jgp4jnbtfrehej.me. 8400 IN RRSIG NSEC3 7 2 8400 20181115151844 20181025141844 2208 me. rPGaTz/LyNRVN3LQL3LO1udby0vy/MhuIvSjNfrNnLaKARsbQwpq2pA9 +jyt4ah8fvxRkGg9aciG1XSt/EVIgdLSKXqE82hB49ZgYDACX6onscgz naQGaCAbUTSGG385MuyxCGvqJdE9kEZBbCG8iZhcxSuvBksG4msWuo3k dTg=
;; Received 586 bytes from 199.249.127.1#53(b2.nic.me) in 267 ms
这里的权威 DNS 服务是作者在域名提供商进行配置的,当有客户端请求 xyz.me域名对应的 IP 地址时,其实会从作者使用的 DNS 服务商 DNSPod 处请求服务的 IP 地址:
xyz.me.  600 IN A 123.56.94.228
xyz.me. 86400 IN NS f1g1ns2.dnspod.net.
xyz.me. 86400 IN NS f1g1ns1.dnspod.net.
;; Received 123 bytes from 58.247.212.36#53(f1g1ns1.dnspod.net) in 28 ms
最终,DNS 解析器从 f1g1ns1.dnspod.net服务中获取了当前博客的 IP 地址 123.56.94.228,浏览器或者其他设备就能够通过 IP 向服务器获取请求的内容了。
从整个解析过程,我们可以看出 DNS 域名服务器大体分成三类,根域名服务、顶级域名服务以及权威域名服务三种,获取域名对应的 IP 地址时,也会像遍历一棵树一样按照从顶层到底层的顺序依次请求不同的服务器。

胶水记录

在通过服务器解析域名的过程中,我们看到当请求 me.顶级域名服务器的时候,其实返回了 b0.nic.me等域名:
me.   172800 IN NS b0.nic.me.
me. 172800 IN NS a2.nic.me.
me. 172800 IN NS b2.nic.me.
me. 172800 IN NS a0.nic.me.
me. 172800 IN NS c0.nic.me.
...
就像我们最开始说的,在互联网中想要请求服务,最终一定需要获取提供服务的服务器的 IP 地址;同理,b0.nic.me 作为一个 DNS 服务器,我们也必须获取它的 IP 地址才能获得次级域名的 DNS 信息,但是这里就陷入了一种循环:
  1. 如果想要获取 xyz.me的 IP 地址,就需要访问 me顶级域名服务器 b0.nic.me
  2. 如果想要获取 b0.nic.me的 IP 地址,就需要访问 me顶级域名服务器 b0.nic.me
  3. 如果想要获取 b0.nic.me的 IP 地址,就需要访问 me顶级域名服务器 b0.nic.me
为了解决这一个问题,我们引入了胶水记录(Glue Record)这一概念,也就是在出现循环依赖时,直接在上一级作用域返回 DNS 服务器的 IP 地址:
$ dig +trace +additional xyz.me

...

me. 172800 IN NS a2.nic.me.
me. 172800 IN NS b2.nic.me.
me. 172800 IN NS b0.nic.me.
me. 172800 IN NS a0.nic.me.
me. 172800 IN NS c0.nic.me.
me. 86400 IN DS 2569 7 1 09BA1EB4D20402620881FD9848994417800DB26A
me. 86400 IN DS 2569 7 2 94E798106F033500E67567B197AE9132C0E916764DC743C55A9ECA3C 7BF559E2
me. 86400 IN RRSIG DS 8 1 86400 20181116050000 20181103040000 2134 . cT+rcDNiYD9X02M/NoSBombU2ZqW/7WnEi+b/TOPcO7cDbjb923LltFb ugMIaoU0Yj6k0Ydg++DrQOy6E5eeshughcH/6rYEbVlFcsIkCdbd9gOk QkOMH+luvDjCRdZ4L3MrdXZe5PJ5Y45C54V/0XUEdfVKel+NnAdJ1gLE F+aW8LKnVZpEN/Zu88alOBt9+FPAFfCRV9uQ7UmGwGEMU/WXITheRi5L h8VtV9w82E6Jh9DenhVFe2g82BYu9MvEbLZr3MKII9pxgyUE3pt50wGY Mhs40REB0v4pMsEU/KHePsgAfeS/mFSXkiPYPqz2fgke6OHFuwq7MgJk l7RruQ==
a0.nic.me. 172800 IN A 199.253.59.1
a2.nic.me. 172800 IN A 199.249.119.1
b0.nic.me. 172800 IN A 199.253.60.1
b2.nic.me. 172800 IN A 199.249.127.1
c0.nic.me. 172800 IN A 199.253.61.1
a0.nic.me. 172800 IN AAAA 2001:500:53::1
a2.nic.me. 172800 IN AAAA 2001:500:47::1
b0.nic.me. 172800 IN AAAA 2001:500:54::1
b2.nic.me. 172800 IN AAAA 2001:500:4f::1
c0.nic.me. 172800 IN AAAA 2001:500:55::1
;; Received 721 bytes from 192.112.36.4#53(g.root-servers.net) in 110 ms

...
也就是同时返回 NS 记录和 A(或 AAAA) 记录,这样就能够解决域名解析出现的循环依赖问题。

服务发现

讲到现在,我们其实能够发现 DNS 就是一种最早的服务发现手段,虽然服务器的 IP 地址可能会经常变动,但是通过相对不会变动的域名,我们总是可以找到提供对应服务的服务器。
在微服务架构中,服务注册的方式其实大体上也只有两种,一种是使用 Zookeeper 和 etcd 等配置管理中心,另一种是使用 DNS 服务,比如说 Kubernetes 中的 CoreDNS 服务。
使用 DNS 在集群中做服务发现其实是一件比较容易的事情,这主要是因为绝大多数的计算机上都会安装 DNS 服务,所以这其实就是一种内置的、默认的服务发现方式,不过使用 DNS 做服务发现也会有一些问题,因为在默认情况下 DNS 记录的失效时间是 600s,这对于集群来讲其实并不是一个可以接受的时间,在实践中我们往往会启动单独的 DNS 服务满足服务发现的需求。

CoreDNS

CoreDNS 其实就是一个 DNS 服务,而 DNS 作为一种常见的服务发现手段,所以很多开源项目以及工程师都会使用 CoreDNS 为集群提供服务发现的功能,Kubernetes 就在集群中使用 CoreDNS 解决服务发现的问题。

作为一个加入 CNCF(Cloud Native Computing Foundation)的项目,CoreDNS 的实现可以说是非常简单的。

架构

整个 CoreDNS 服务都建立在一个使用 Go 编写的 HTTP/2 Web 服务器 Caddy 之上,CoreDNS 整个项目可以作为 Caddy 的教科书式用法。

CoreDNS 的大多数功能都是由插件来实现的,插件和服务本身都使用了 Caddy 提供的一些功能,所以项目本身也不是特别的复杂。

Go app template build environment



Build Status
This is a skeleton project for a Go application, which captures the best build techniques I have learned to date. It uses a Makefile to drive the build (the universal API to software projects) and a Dockerfile to build a docker image.
This has only been tested on Linux, and depends on Docker to build.

Customizing it

To use this, simply copy these files and make the following changes:
Makefile:
  • change BIN to your binary name
  • rename cmd/myapp to cmd/$BIN
  • change REGISTRY to the Docker registry you want to use
  • maybe change SRC_DIRS if you use some other layout
  • choose a strategy for VERSION values - git tags or manual
Dockerfile.in:
  • maybe change or remove the USER if you need

Go Modules

This assumes the use of go modules (which will be the default for all Go builds as of Go 1.13) and vendoring (which reasonable minds might disagree about). You will need to run go mod vendor to create a vendor directory when you have dependencies.

Building

Run make or make build to compile your app. This will use a Docker image to build your app, with the current directory volume-mounted into place. This will store incremental state for the fastest possible build. Run make all-build to build for all architectures.
Run make container to build the container image. It will calculate the image tag based on the most recent git tag, and whether the repo is "dirty" since that tag (see make version). Run make all-container to build containers for all architectures.
Run make push to push the container image to REGISTRY. Run make all-push to push the container images for all architectures.
Run make clean to clean up.

from https://github.com/thockin/go-build-template

年代向錢看 川普貿易戰嗆:習近平差點14分鐘滅香港? 民主派大勝! 中南海大敗!

57爆新聞 香港選舉變天,打北京巴掌

Apollo

An open autonomous driving platform.(一个开源的自动驾驶平台)

Welcome to Apollo's GitHub page!
Apollo is a high performance, flexible architecture which accelerates the development, testing, and deployment of Autonomous Vehicles.
For business and partnership, please visit our website.

Table of Contents

  1. Getting Started
  2. Prerequisites
  3. Architecture
  4. Installation
  5. Documents

Getting Started

Apollo 5.0 is loaded with new modules and features but needs to be calibrated and configured perfectly before you take it for a spin. Please review the prerequisites and installation steps in detail to ensure that you are well equipped to build and launch Apollo. You could also check out Apollo's architecture overview for a greater understanding of Apollo's core technology and platform.
[Attention] The Apollo team is proud to announce that the platform has been migrated to Ubuntu 18.04, one of the most requested upgrades from our developers. We do not expect a disruption to your current work with the Apollo platform, but for perception related code, you would need to:
  1. Upgrade the host to Ubuntu 16.04 or above (Ubuntu 18.04 is preferred)
  2. Update local host NVIDIA driver >=410.48. Website link. Or follow the guide to install Apollo-Kernel and NVIDIA driver, if you want to install Apollo-Kernel.
  3. Install NVIDIA-docker 2.0 - you can refer to this link for steps on installation, or use the install scripts we provide here
For those developers that would like to continue working with Ubuntu 14.04, please use the Ubuntu 14.04 branch instead of the master branch.
[Attention] The Apollo team has decided to retire Git LFS, which might impact your development. For details, please refer to: migration guide.
Want to contribute to our code? Follow this guide.

Prerequisites

Basic Requirements:

  • The vehicle equipped with the by-wire system, including but not limited to brake-by-wire, steering-by-wire, throttle-by-wire and shift-by-wire (Apollo is currently tested on Lincoln MKZ)
  • A machine with a 4-core processor and 8GB memory minimum (16GB for Apollo 3.5 and above)
  • Ubuntu 14.04
  • Working knowledge of Docker
  • Please note, it is recommended that you install the versions of Apollo in the following order: 1.0 > whichever version you would like to test out. The reason behind this recommendation is that you need to confirm whether individual hardware components and modules are functioning correctly and clear various version test cases, before progressing to a higher, more capable version for your safety and the safety of those around you.

Individual Version Requirements:

The following diagram highlights the scope and features of each Apollo release:

Apollo 1.0:
Apollo 1.0, also referred to as the Automatic GPS Waypoint Following, works in an enclosed venue such as a test track or parking lot. This installation is necessary to ensure that Apollo works perfectly with your vehicle. The diagram below lists the various modules in Apollo 1.0.
For Setup:
  • Hardware:
    • Industrial PC (IPC)
    • Global Positioning System (GPS)
    • Inertial Measurement Unit (IMU)
    • Controller Area Network (CAN) card
    • Hard drive
    • GPS Antenna
    • GPS Receiver
  • Software:
    • Apollo Linux Kernel (based on Linux Kernel 4.4.32)
Apollo 1.5:
Apollo 1.5 is meant for fixed lane cruising. With the addition of LiDAR, vehicles with this version now have better perception of its surroundings and can better map its current position and plan its trajectory for safer maneuvering on its lane. Please note, the modules highlighted in Yellow are additions or upgrades for version 1.5.
For Setup:
  • All the requirements mentioned in version 1.0
  • Hardware:
    • Light Detection and Ranging System (LiDAR)
    • ASUS GTX1080 GPU-A8G- Gaming GPU Card
  • Software:
    • Nvidia GPU Driver
Apollo 2.0:
Apollo 2.0 supports vehicles autonomously driving on simple urban roads. Vehicles are able to cruise on roads safely, avoid collisions with obstacles, stop at traffic lights, and change lanes if needed to reach their destination. Please note, the modules highlighted in Red are additions or upgrades for version 2.0.
For Setup:
  • All the requirements mentioned in versions 1.5 and 1.0
  • Hardware:
    • Traffic Light Detection using Camera
    • Ranging System (LiDAR)
    • Radar
  • Software:
    • Same as 1.5
Apollo 2.5:
Apollo 2.5 allows the vehicle to autonomously run on geo-fenced highways with a camera for obstacle detection. Vehicles are able to maintain lane control, cruise and avoid collisions with vehicles ahead of them.
Please note, if you need to test Apollo 2.5, for safety purposes please seek the help of the Apollo Engineering team. Your safety is our #1 priority, and we want to ensure Apollo 2.5 was integrated correctly with your vehicle before you hit the road.
For Setup:
  • All the requirements mentioned in 2.0
  • Hardware:
    • Additional Camera
  • Software:
    • Same as 2.0
Apollo 3.0:
Apollo 3.0's primary focus is to provide a platform for developers to build upon in a closed venue low-speed environment. Vehicles are able to maintain lane control, cruise and avoid collisions with vehicles ahead of them.
For Setup:
  • Hardware:
    • Ultrasonic sensors
    • Apollo Sensor Unit
    • Apollo Hardware Development Platform with additional sensor support and flexibility
  • Software:
    • Guardian
    • Monitor
    • Additional drivers to support Hardware
Apollo 3.5:
Apollo 3.5 is capable of navigating through complex driving scenarios such as residential and downtown areas. The car now has 360-degree visibility, along with upgraded perception algorithms to handle the changing conditions of urban roads, making the car more secure and aware. Scenario-based planning can navigate through complex scenarios, including unprotected turns and narrow streets often found in residential areas and roads with stop signs.
For Setup:
  • Hardware:
    • Velodyne VLS - 128
    • Apollo Extension Unit (AXU)
    • ARGUS FPD-Link Cameras (3)
    • NovAtel PwrPak7
    • Additional IPC
  • Software:
    • Perception
    • Planning
    • V2X
    • Additional drivers to support Hardware
  • Runtime Framework
    • Cyber RT
Apollo 5.0:
Apollo 5.0 is an effort to support volume production for Geo-Fenced Autonomous Driving. The car now has 360-degree visibility, along with upgraded perception deep learning model to handle the changing conditions of complex road scenarios, making the car more secure and aware. Scenario-based planning has been enhanced to support additional scenarios like pull over and crossing bare intersections.

For Setup:

Architecture

  • Hardware/ Vehicle Overview
  • Hardware Connection Overview
  • Software Overview - Navigation Mode

Installation

Congratulations! You have successfully built out Apollo without Hardware. If you do have a vehicle and hardware setup for a particular version, please pick the Quickstart guide most relevant to your setup:

With Hardware:

Documents

  • Technical Tutorial: Everything you need to know about Apollo. Written as individual versions with links to every document related to that version.
  • How To Guide: Brief technical solutions to common problems that developers face during the installation and use of the Apollo platform
  • Specs: A Deep dive into Apollo's Hardware and Software specifications (only recommended for expert level developers that have successfully installed and launched Apollo)
  • FAQs

Questions

You are welcome to submit questions and bug reports as GitHub Issues.

from  https://github.com/ApolloAuto/apollo

StatusOK

Monitor your Website and APIs from your Computer. Get Notified through Slack, E-mail when your server is down or response time is more than expected.

Simple Version

Simple setup to monitor your website and receive a notification to your Gmail when your website is down.

Step 1: Write a config.json with the url information
{
  "notifications": {
    "mail": {
      "smtpHost": "smtp.gmail.com",
      "port": 587,
      "username": "yourmailid@gmail.com",
      "password": "your gmail password",
      "from": "yourmailid@gmail.com",
      "to": "destemailid@gmail.com"
    }
  },
  "requests": [
    {
      "url": "http://mywebsite.com",
      "requestType": "GET",
      "checkEvery": 30,
      "responseTime": 800
    }
  ]
}
Turn on access for your gmail https://www.google.com/settings/security/lesssecureapps .

Step 2: Download bin file from here and run the below command from your terminal
$ ./statusok --config config.json
That's it! You will receive a mail when your website is down or its response time is more than expected.
To run as background process add & at the end
$ ./statusok --config config.json & 
to stop the process
$ jobs
$ kill %jobnumber 
 
You can save data to InfluxDB and view response times over a period of time using Grafana.
Guide to install influxdb and grafana
With StatusOk you can monitor all your REST APIs by adding API details to the config file as below. A notification will be triggered when your API is down or its response time is more than expected.
{
  "url": "http://mywebsite.com/v1/data",
  "requestType": "POST",
  "headers": {
    "Authorization": "Bearer ac2168444f4de69c27d6384ea2ccf61a49669be5a2fb037ccc1f",
    "Content-Type": "application/json"
  },
  "formParams": {
    "description": "sanath test",
    "url": "http://google.com"
  },
  "checkEvery": 30,
  "responseCode": 200,
  "responseTime": 800
},

{
  "url": "http://mywebsite.com/v1/data",
  "requestType": "GET",
  "headers": {
    "Authorization": "Bearer ac2168444f4de69c27d6384ea2ccf61a49669be5a2fb037ccc1f"
  },
  "urlParams": {
    "name": "statusok"
  },
  "checkEvery": 300,
  "responseCode": 200,
  "responseTime": 800
},

{
  "url": "http://something.com/v1/data",
  "requestType": "DELETE",
  "formParams": {
    "name": "statusok"
  },
  "checkEvery": 300,
  "responseCode": 200,
  "responseTime": 800
}
Guide to write config.json file
Sample config.json file
To run the app
$ ./statusok --config config.json &

Database

Save request response time information and error information to your database by adding database details to the config file. Currently only InfluxDB 0.9.3+ is supported.
You can also add data to your own database (view details).
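A database section in config.json might look like the sketch below (the exact key names, e.g. "influxDb", are an assumption; confirm them against the config guide):

```json
"database": {
  "influxDb": {
    "host": "localhost",
    "port": 8086,
    "databaseName": "statusok",
    "username": "",
    "password": ""
  }
}
```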

Notifications

Notifications will be triggered when the mean response time for a request exceeds the given response time or when an error occurs. Currently the clients below are supported to receive notifications. For more information on setup, click here.
  1. Slack
  2. Smtp Email
  3. Mailgun
  4. Http EndPoint
  5. Dingding
Adding support for other clients is simple (view details).

Running with plain Docker

docker run -d -v /path/to/config/folder:/config sanathp/statusok
Note: the config folder should contain a config file named config.json

Running with Docker Compose

Prepare docker-compose.yml config like this:
version: '2'
services:
  statusok:
    build: sanathp/statusok
    volumes:
      - /path/to/config/folder:/config
    depends_on:
      - influxdb
  influxdb:
    image: tutum/influxdb:0.9
    environment:
      - PRE_CREATE_DB="statusok"
    ports:
      - 8083:8083
      - 8086:8086
  grafana:
    image: grafana/grafana
    ports:
      - 3000:3000
Now run it:
docker-compose up

from https://github.com/sanathp/statusok
https://github.com/sanathp/statusok/releases/
 

抗议的面孔



最近似乎每个人都在抗议。从香港要求民主的游行队伍,到智利呼吁经济平等的示威者,再到在世界各地高呼的环境保护主义者,街头满是异见的声音。奇怪的是,无论发生在何处,无论他们有哪种不满,这些运动都将十七世纪一个失败的英国造反分子视为自己的象征性盟友。
1605年,一小撮人策划暗杀詹姆士一世国王,他们认为国王为该国的天主教徒做的不够多,不够迅速。这项称为“火药阴谋”的计划要在国会开幕典礼上杀死詹姆士、他的家人和支持者。作为天主教事业煽动者的盖伊·福克斯(Guy Fawkes),负责看管藏在上议院地下室的炸药桶。整个阴谋,以及福克斯,在炸药引爆前被发现了。他被囚禁在伦敦塔,酷刑之下,他说出了同谋者的名字。被捕者均以叛国罪被执行绞刑,唯一例外是福克斯——他在爬到绞刑架时不体面地摔倒了,摔断了脖子。11月5日是阴谋暴露的日期,没多久就成了法定假日,此后人们一直通过搭篝火和焚烧福克斯假人的方式来庆祝。

自维多利亚时期以来,人们常常打扮成福克斯的样子,通常是戴着统一设计的面具——上翘的唇须,细长的山羊胡,以及弯弓似的眉毛。但直到大约四百年后的1988年,福克斯才进入全球主流,当时艾伦·摩尔(Alan Moore)和戴维·劳埃德(David Lloyd)出版了系列漫画《V字仇杀队》(V for Vendetta),在一个法西斯主义的未来英格兰,福克斯成了一名终极反英雄。
摩尔在《V字仇杀队》初版刊行期间发表了一篇文章,称是劳埃德提出以福克斯为“V”的原型,劳埃德说:“为什么我们不把他刻划成一个复活的盖伊·福克斯,给他配上纸浆面具、披风和圆锥帽?他会看上去特别怪异,这正是福克斯一直以来应有的形象。我们不应该每逢11月5日烧这哥们,反倒应该庆贺他炸毁国会的尝试!”这本书,以及2006年的同名电影,单枪匹马地用传说的力量硬是造出了一个新的福克斯,他如今被称为V,就在新一代的抗议运动起步的时候,进入了大众和政治文化。

V的面具一眼就能认出来。它的眉毛、唇须、细长的山羊胡,仿佛是在一张雪花石膏脸上用拳头和黑色马克笔画出来的。它现在是抗议的面孔,主要是反政府的抗议,但不止于此。这个令人不安的脸庞,漂浮在在一片黄色背心、雨伞和黑帽衫的海洋中,让你无从忽视。

my heart will go on (harmonica version)


Related post: https://briteming.blogspot.com/2018/03/titanic-theme-song-my-heart-will-go-on.html


57爆新聞 王立強假間諜,計中計? 北京殺器破美澳島鏈封鎖

青年反抗者为何要“揽炒”香港?


这次香港勇武派的暴力反抗有句著名的话,叫做“揽炒香港”,反抗到底,哪怕结局是玉石俱焚。我曾私下与香港一位青年交谈,他甚至说出“财散人安”这种决绝的话。
青年一代要“揽炒香港”,当然是觉得香港没有他们的未来。“没有未来”源于两点:一是香港青年在本港缺乏上升机会;二是北京对香港实行“人滚地留”的方针。90后网络作家卢斯达对此写过多篇文章。在《香港现场:香港正经历一次有效率的世代清洗》(2019年6月13日)一文中,有一节标题是“针对香港年轻人的屠杀”。此处的“屠杀”不是指肉体消灭,而是让青年在香港政治中得不到任何机会。例如很多本土派乃至较中间的自决派参选人,在2016年的大选前被取消参选资格,理由是他们的“政见”不符合《基本法》。大选完结之后,一些得到选民授权的议员也被剥夺议席,例如梁颂恒、游蕙祯、罗冠聪等等。卢斯达认为,这些人的政见、立场、议政风格,南辕北辙,共通点就是年轻。年轻不是政见,但在中殖的香港,却是一个备受打压的政治属性。中共在2016至2017年,雷厉风行打击了一整个世代的政治权利,将他们进入体制改革香港的希望掐碎。也许是因为他们大多数都只认同自己是香港人,而不是中国人或“中国香港人”。这种身份认同令中国十分不安。尽管选举主任和“中国人大”列出的候选人“不符规定”之处各不相同,但说到底,就是中共可以容忍老一辈的政客,但对于新一代从政者,一个也不容许进入体制。
他的另一篇文章的标题非常直接:《中国对台湾也会和香港一样“人滚地留”》。
在此有必要解释一下“人滚地留”的来历。在中国国内的网络自由讨论区,凡涉及台湾问题,基本是一片喊打喊杀声。大概在六、七年前就出现了“人走可以,岛留下”的留言,这个说法后来演变成“留岛不留人”。今年5月我去台湾时,曾听一位在台湾很有影响力的青年意见领袖在演讲中提及,专门请教过他,所谓“留岛不留人”有无官方说法,他很诚实地回答,只见过网络留言,未听过官方有此说法,但如今基本成了台湾人对大陆政策的一种理解。通过卢斯达的文章,我才知道香港人也有这种担忧。
卢斯达对此的叙说十分直接:“对中国来说,香港是非常功能性的,例如金融、贸易、融资能力,其它的对中国来说根本眼都挤不进”,“台湾这船岛,有很多军事和战略好处,就像香港有金融好处。中国很想要,但不是想要那些不习惯受他统治的本地人”,“不论市民生死,北京只在乎香港的白手套功能,香港对中国来说只是一个工具”。
正因基于上述认识,香港反抗者要求美国迅速通过《香港人权与民主法案》,哪怕这个法案直接打击的是香港经济
这总让我想起麝鹿的传说:麝香是种珍贵的香料与药材,来源于雄性麝鹿的睾丸。据说麝鹿后来知道猎人捕猎它要的就是它身体上这一重要部分,在无法逃脱追猎时,会主动将自己的睾丸抓得稀烂,让猎人一无所得。这种“揽炒”颇有这种气势:你不在意我们香港人的尊严,我们就毁了你看重的功能,让你一无所得,“时日曷丧,予及汝皆亡”。
上述两位,可算是过去与今天的成功者与看不到未来的未来主人的各自陈述。这说明香港人对现状的看法,不仅存在阶层差异,存在既得利益者与利益受损的差异,还存在非常深的代际裂沟。更值得深思的是:陈、卢二位都提到香港的金融中心功能,但陈认为香港只有金融中心这一服务功能远远不够,需要优化经济结构,提出了要发展香港大学教育中非常优秀的理科、工科、医科,着眼于建设;卢斯达认为既然“人滚地留”,要这金融中心功能又有何用。
问题是:任何一个社会的未来命运,是由年轻人决定。从人心向背来说,中共已经失去了90后的千禧一代香港青年。

程晓农:中国模式步入困境

共产党资本主义能否自我支撑?
所谓的中国模式,4年前我就发表英文和中文文章指出,其实质是采用资本主义经济制度来巩固中共的集权统治,从制度层面看,这种模式就是共产党资本主义。那么,究竟共产党资本主义本身是否具有足够的生命力,这要用事实来检验,而2019年的中国形势提供了一系列证据,表明中国模式已全面陷入困境。
自上世纪90年代开始,中共的全盘公有制和计划经济体制在改革中处处碰壁,最后把经济拖入了潜在的金融危机,国有银行系统全面资不抵债。为了摆脱社会主义经济体制造成的这种必然后果,朱镕基推动了国企的全面私有化(即“改制”),同时抛弃了计划经体制,最后连计划经济的管控架构(从中央的指挥中枢国家计划委员会到省市、地区乃至县一级的计委)都取消了,于是共产党资本主义终于成型。而借着市场经济旗号,中国顺利加入了WTO,出现了将近20年的经济繁荣,也强化了中共的集权政治体制。
假如这段经济繁荣能够长期延续下去,或许它可以被归结为共产党资本主义制度的作用,而中共的集权体制似乎也可以借此稳固下去。那种鼓吹中国未来是带动全球经济的“火车头”之论,以及中共的“发展是硬道理”之宣传,就是这样推论的。但是,这种推论完全忽视了一个最根本的现实,那就是,共产党的集权体制本身确实可以通过行政的力量来集中资源(主要是财政、金融、土地资源)、加快经济增长,但是,这样的做法同时也会在经济繁荣的过程中自挖墙角,造成一系列严重的后果,最后动摇(undermine)经济发展的基础。2019年就是这样的负面后果全面展现的一年。国际货币基金组织的专家们之所以看不到这一层,是因为他们不懂集权体制的运作规律,于是犯了“中国繁荣幼稚病”。

三、二十年繁荣为何短暂?
中共的共产党资本主义体制和中国短暂的经济繁荣充分说明,集权体制可以推高经济繁荣,但同时也必然缩短繁荣的时限,使得紧日子早早来临。共产党资本主义体制并非真正的市场经济,因为各级政府直接介入经济活动成为推手,而让官员籍此升官的政治体制把这种政府干预的效果放大到了极限,结果乐极生悲,经济繁荣只延续了两个十年,就被紧日子所取代。
所谓经济繁荣的两个十年,我分别称为“出口景气”和“土木工程景气”,各为十年左右。我在今年10月10日刊登于《大纪元时报》的文章《增长困境——中国经济进入“跷跷板年代”》里谈过。从2002年到2011年这十年间,政府通过出口退税等政策,大力推动中国的出口以每年25%以上的高速增长,于是出口狂奔带动了经济成长。从国际经济平衡的角度来看,“出口景气”不可能成为经济增长的长期支柱,对中国这样的人口超级大国来说,更不能指望靠“出口景气”的延续把中国送上世界最强经济体的宝座。因为贸易必须互利,才能维持久远;若中国一国赚尽了全球的钱,长期多卖少买,积累起巨额外汇储备,以后谁还有能力持续从中国进口呢?所以,中国不可能依赖靠出口来不断推动经济成长的道路,“出口景气”总有结束的一天,而美中经贸谈判最后宣布了中国的“出口景气”一去不复返了。
当中国还陶醉在“出口景气”带来经济高成长的成就感当中时,2008年美国的次贷危机突然导致中国的出口订单大幅度减少,中共决定采取强有力的经济刺激措施,通过推动基础设施建设来拉到房地产开发,由此带动了一轮“土木工程景气”。但“土木工程景气”也有它自己的克星。如果说,“出口景气”的克星是国际市场无法向无穷大扩张,那么,“土木工程景气”的克星就是,在国内市场上,房地产开发也不能向无穷多迈进。最终,中国房地产业的供大于求成了不争的事实。前几天中国人民银行调查统计司调研员王立元的一篇文章《供需失衡加剧楼市下行压力》在网上刊出,该文指出了房地产泡沫的三大问题:一,从2011年到2018年住房的年均供给和需求分别为14亿和10亿平方米,供大于求;二,当前的待售住房要6年才能售完;三,城镇住房空置率已高达10%,房地产市场进入危险区,空置房约为3,400万套、22亿平方米。“土木工程景气”因此终结,它创造的繁荣也随之消失。
简单来说,中国经济失去繁荣的根本原因是,盲目扩大出口走到了头,房地产泡沫也走到了头,于是经济下行成了“新常态”。这些困境都不是改革可以改变的,再怎么改革,出口都无法像十几年前那样狂奔,房地产泡沫也始终是巨大的威胁。

四、从短暂繁荣到过紧日子
很多人出于对过去经济繁荣的怀念,盲目地相信,眼前遇到的只是暂时的困难,中国肯定会重建经济繁荣。但这样的判断并非建立在对真实情况的客观分析的基础之上,而是以一厢情愿为前提的主观想像。其实,过去20年中国经济繁荣的成因,正是目前繁荣消逝的缘由,有彼必有此。
中共也承认,“出口景气”和“土木工程景气”都指望不上了;它提出的新说法是,靠产业升级可以重造经济繁荣。但在这一点上中共的言行是相互矛盾的。虽然中共对国内强调,中国有足够的能力搞技术创新,实现产业升级,但在美中经贸谈判中,它却千方百计地绕着侵犯知识产权问题兜圈子,尽量回避这个美方认为最敏感、最重要的问题,坚决不肯作出停止侵犯西方国家知识产权的承诺。它之所以不肯停止侵犯知识产权,乃是因为,所谓的产业升级,很大程度上依赖于侵犯国外的知识产权,一旦真停止了知识产权侵犯活动,产业升级也就停顿了。美国参议院最近公布的《Threats to the U.S. Research Enterprise:China's Talent Recruitment Plans(对美国研究型企业的诸多威胁:中国的“千人计划”)》有详细介绍。
为了安慰国际商界和国内民众,中共当局还宣称,中国有十几亿人口,消费潜力巨大,靠国内消费足以带动经济继续保持增长。事实上,中共放纵房地产泡沫膨胀的痛苦后果现在已经显现出来了,房地产泡沫造成的过高房价一直在挤压国民的消费能力。不仅如此,“房地产景气”消失造成经济下行之后,首先表现为企业亏损,然后影响到职工收入下降,再进一步就影响到企业不但停止雇人,而且开始裁员,失业率将逐步上升。国家统计局最新的数据显示:2019年1至8月全国工业企业利润同比下降1.7%,其中8月下降2.0%;经济发达地区的工业企业利润出现2位数下降,北京下降14.4%、河北下降11.2%,山东下降13%,作为金融、贸易和航运中心的上海则下降19.6%。实体经济不振表明,中国经济已陷入困境。
当前的经济困境不单单是企业开始亏损和经营困难,而且已经延伸到了居民收入方面,城镇居民的收入开始下降。我分析了国家统计局公布的今年上半年和今年前三季度全国居民可支配收入(指居民税后可用于消费和储蓄的收入,包括工资性收入、经营性净收入、财产性净收入和转移性净收入),发现今年第三季度全国城镇居民的平均收入低于上半年的平均收入。需要先说明一下,去年10月1日起,当局为了刺激消费,把个人所得税的起征线从3,500元提高到5,000元,过去需缴纳个人所得税的纳税人数为1.9亿,修改个税起征点后纳税人数降到6,400万人左右。这意味着两点,第一,1.3亿职工的月均收入低于5千元;第二,降低个人所得税起征点后,大约有3,200亿元不再作为个人所得税被收走,而是留在了职工手里,因此居民的可支配收入本来会有一定幅度的上升。然而,在这一背景下,今年的实际情况是,上半年的人均每月可支配收入是3,557元,而三季度的人均每月可支配收入只有3,532元,比上半年平均每月少25元,这是上世纪末以来首次出现的情况,而且,工资减少数完全吞掉了个人所得税减少带来的收入增加。这只是经济下行对居民购买力负面影响的开始,今后的情况将因失业人数增加、就业人员工资进一步减少而恶化。
显然,中国已经在经济下行过程中走过了一个从经历繁荣到告别繁荣的转折点,现在开始进入过紧日子的历史阶段。

五、中国模式面临经济、政治、道德三大困境
经济困境并非当前中共面临的唯一难题,与此同时,中共的党政系统也陷入了“囚徒困境”,这个说法是中共外宣官媒《多维新闻网》提出来的。最近几年中共官场出现了一种新“气象”,那就是官员们普遍有了“二心”,最典型的表现是消极怠工,官场与高层的关系已从江胡时代的“上下同心、闷声发财”之“同伙”关系,重回类似于毛时代的那种“猫鼠”关系,“众鼠惧一猫,猫在鼠愁困”。中共官媒用“囚徒困境”这个博弈论概念来解释这样的官场矛盾。关于官场的这个困境,我在今年5月1日和11月5日于《大纪元时报》刊登的两篇文章,《中共官场新“气象”》和《中共四中全会:“囚徒困境”中的“国家治理”》,分别作了分析。
所谓官场的“囚徒困境”之实质是,高层反腐之后官员们失去了捞钱和腐败生活的机会,对现状普遍不满,因此消极怠工,因此中南海的政令推不动,而高层希望基层官员群策群力、设法摆脱经济困境的指望落空。集权体制之所以能短暂地推高经济繁荣,其制度内因是官员们有利可图;反过来,“囚徒困境”之下官员们怠政、惰政,这种体制就开始失灵了。官场政治上的困境表明,腐败可以“掏空”国家,而反腐败则“掏空”了官心和官员的“干劲”;集权体制令出难行,就只剩下政治高压这最后的“维稳”手段了。
与官场政治上的困境相关的是与官德败坏密切相关的全社会的道德困境。自从中共建立政权以来,中国社会就出现了价值观畸形、道德恶化等问题,从80年代后期开始这些问题变得越来越严重。中共在10月27日颁布了一个《新时代公民道德建设实施纲要》,再次把中共面临的道德困境摆到了桌面上。官媒对道德困境是这样描述的:“这些问题包括拜金主义、享乐主义、极端个人主义突出;是非、善恶、美丑不分,见利忘义、唯利是图,损人利己、损公肥私;造假欺诈、不讲信用”。当下中共高层所谈的道德建设,主要是针对官场,因为官德恶劣是整个社会道德恶化的根源之一,此刻官场上的“囚徒困境”就是官德败坏的最好注脚。过去中共面对官德败坏,只是用正面宣传走走过场;现在官媒承认,“在江泽民时代……腐败和拜金现象泛滥……中国道德领域的诸多弊端一直未变”。最近习近平改了一下方法,不再强调正面宣传,而是把纠正“官德”侧重放在整顿官场这方面;也就是说,以后不按GDP升官,而要看“官德”来决定是否提拔,同时用纪检监察机关施加压力。靠政治压力能纠正“官德”吗?官场上的“囚徒困境”证明,官员们一直在消极抵抗。
从经济困境到官场的“囚徒困境”,再到道德困境,说明中共确实在经济、政治、社会三个主要方面都陷入了困境,而这种结局具有必然性,是共产党资本主义体制的产物。经济困境是中共多年来为了短期经济目标而盲目发展所造成的必然后果;眼下中共希望官员们为摆脱经济困境而努力,但政治困境之下高层的这个愿望落空了;而道德困境其实是产生政治困境的原因之一,也与造成经济困境的盲目发展有关,因为官员们推动盲目发展的动力之一就是有腐败的机会。这三方面的困境环环相扣,彼此“锁定”,难以解脱,这就是当前中国局势的现状。

What does Google know about me?

Did you know that unlike searching on DuckDuckGo, when you search on Google, they keep your search history forever? That means they know every search you’ve ever done on Google. That alone is pretty scary, but it’s just the shallow end of the very deep pool of data that they try to collect on people.
What most people don’t realize is that even if you don’t use any Google products directly, they’re still trying to track as much as they can about you. Google trackers have been found on 75% of the top million websites. This means they're also trying to track most everywhere you go on the internet, trying to slurp up your browsing history!
Most people also don’t know that Google runs most of the ads you see across the internet and in apps – you know those ones that follow you around everywhere? Yup, that’s Google, too. They aren’t really a search company anymore – they’re a tracking company. They are tracking as much as they can for these annoying and intrusive ads, including recording every time you see them, where you saw them, if you clicked on them, etc.
But even that’s not all…
If You Use Google Products
If you do use Google products, they try to track even more. In addition to tracking everything you’ve ever searched for on Google (e.g. “weird rash”), Google also tracks every video you’ve ever watched on YouTube. Many people actually don’t know that Google owns YouTube; now you know.
And if you use Android (yeah, Google owns that too), then Google is also usually tracking:
If you use Gmail, they of course also have all your emails. If you use Google Calendar, they know all your schedule. There’s a pattern here: For all Google products (Hangouts, Music, Drive, etc.), you can expect the same level of tracking; that is, pretty much anything they can track, they will.
Oh, and if you use Google Home, they also store a live recording of every command you (or anyone else) have ever said to your device! Yes, you heard that right (err… they heard it) – you can check out all the recordings on your Google activity page.
Essentially, if you allow them to, they’ll track pretty close to, well, everything you do on the internet. In fact, even if you tell them to stop tracking you, Google has been known to not really listen, for example with location history.
You Become the Product
Why does Google want all of your information anyway? Simple: as stated, Google isn’t a search company anymore, they’re a tracking company. All of these data points allow Google to build a pretty robust profile about you. In some ways, by keeping such close tabs on everything you do, they, at least in some ways, may know you better than you know yourself.
And Google uses your personal profile to sell ads, not only on their search engine, but also on over three million other websites and apps. Every time you visit one of these sites or apps, Google is following you around with hyper-targeted ads.
It’s exploitative. By allowing Google to collect all this info, you are allowing hundreds of thousands of advertisers to bid on serving you ads based on your sensitive personal data. Everyone involved is profiting from your information, except you. You are the product.
It doesn’t have to be this way. It is entirely possible for a web-based business to be profitable without making you the product – since 2014, DuckDuckGo has been profitable without storing or sharing any personal information on people at all. You can read more about our business model here.
The Myth of “Nothing to Hide”
Some may argue that they have “nothing to hide,” so they are not concerned with the amount of information Google has collected and stored on them, but that argument is fundamentally flawed for many reasons.
Everyone has information they want to keep private: Do you close the door when you go to the bathroom? Privacy is about control over your personal information. You don’t want it in the hands of everyone, and certainly don’t want people profiting on it without your consent or participation.
In addition, privacy is essential to democratic institutions like voting and everyday situations such as getting medical care and performing financial transactions. Without it, there can be significant harms.
On an individual level, lack of privacy leads to putting you into a filter bubble, getting manipulated by ads, discrimination, fraud, and identity theft. On a societal level, it can lead to deepened polarization and societal manipulation like we’ve unfortunately been seeing multiply in recent years.
You Can Live Google Free
Basically, Google tries to track too much. It’s creepy and simply just more information than one company should have on anyone.
Thankfully, there are many good ways to reduce your Google footprint, even close to zero! If you are ready to live without Google, we have recommendations for services to replace their suite of products, as well as instructions for clearing your Google search history. It might feel like you are trapped in the Google-verse, but it is possible to break free.
For starters, just switching the search engine for all your searches goes a long way. After all, you share your most intimate questions with your search engine; at the very least, shouldn’t those be kept private? If you switch to the DuckDuckGo app and extension you will not only make your searches anonymous, but also block Google’s most widespread and invasive trackers as you navigate the web.
If you’re unfamiliar with DuckDuckGo, we are an Internet privacy company that empowers you to seamlessly take control of your personal information online, without any tradeoffs. We operate a search engine alternative to Google at http://duckduckgo.com (DuckDuckGo Private Search), and offer a mobile app and desktop browser extension to protect you from Google, Facebook and other trackers, no matter where you go on the Internet.
We’re also trying to educate users through our blog, social media, and a privacy “crash course” newsletter.

from https://www.quora.com/What-does-Google-know-about-me/answer/Gabriel-Weinberg

zapret: a censorship-circumvention tool for the Linux desktop


What is it for?
Bypass the blocking of web sites http.
The project is mainly aimed at the Russian audience to fight russian regulator named "Roskomnadzor".
Some features of the project are russian reality specific (such as getting list of sites
blocked by Roskomnadzor), but most others are common.
How it works
------------
DPI providers have gaps. They happen because DPI rules are written for
ordinary user programs, omitting all possible cases that are permissible by standards.
This is done for simplicity and speed. It makes no sense to catch the 0.01% of hackers,
because these blockings are quite simple and easily bypassed even by ordinary users.
Some DPIs cannot recognize the http request if it is divided into TCP segments.
For example, a request of the form "GET / HTTP/1.1\r\nHost: kinozal.tv ......"
is sent in 2 parts: first goes "GET ", then "/ HTTP/1.1\r\nHost: kinozal.tv .....".
Other DPIs stumble when the "Host:" header is written in another case: for example, "host:".
Sometimes adding an extra space after the method works: "GET /" => "GET  /",
or adding a dot at the end of the host name: "Host: kinozal.tv."
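An illustrative sketch (not part of zapret itself) of the three request-mangling tricks described above, applied to a raw HTTP request string; zapret performs the same manipulations at the packet level:

```python
# Demonstration of the DPI-evasion tricks on a raw HTTP request string.
REQ = "GET / HTTP/1.1\r\nHost: kinozal.tv\r\n\r\n"

def split_request(req, first_len=4):
    """Emulate sending the request in two TCP segments: 'GET ' + the rest."""
    return req[:first_len], req[first_len:]

def lowercase_host(req):
    """Change 'Host:' to 'host:' -- some DPIs match the header case-sensitively."""
    return req.replace("Host:", "host:", 1)

def dot_after_host(req):
    """Append a dot to the host name: 'Host: kinozal.tv' -> 'Host: kinozal.tv.'"""
    lines = req.split("\r\n")
    lines = [l + "." if l.startswith("Host:") else l for l in lines]
    return "\r\n".join(lines)

if __name__ == "__main__":
    print(split_request(REQ))
    print(lowercase_host(REQ))
    print(dot_after_host(REQ))
```

A standards-compliant server treats all three variants the same, which is exactly why they slip past simplistic DPI rules.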
How to put this into practice in the linux system
-------------------------------------------------
How to make the system break the request into parts? You can pipe the entire TCP session
through a transparent proxy, or you can rewrite the tcp window size field in the first incoming TCP packet that has SYN,ACK set.
Then the client will think that the server has set a small window size for it, and the first data segment
it sends will be no longer than the specified length. We will not change anything in subsequent packets.
The further behavior of the system depends on the implemented algorithm in the OS.
Experience shows that Linux always sends the first packet no larger than the specified
window size, and for some time sends the remaining packets no larger than max(36, specified_size).
After a number of packets, the window scaling mechanism kicks in and starts taking
the scaling factor into account. The packet size then becomes no more than max(36, specified_size << scale_factor).
The behavior is not very elegant, but since we do not affect the size of the incoming packets,
and the amount of data received in http is usually much higher than the amount sent, then visually
there will be only small delays.
Windows behaves in a similar case much more predictably. First segment
the specified length goes away, then the window size changes depending on the value,
sent in new tcp packets. That is, the speed is almost immediately restored to the possible maximum.
It's easy to intercept a packet with SYN,ACK using iptables.
However, the options for editing packets in iptables are severely limited.
It’s not possible to change window size with standard modules.
For this, we will use NFQUEUE. This tool allows transferring packets to processes running in user mode.
The process, after accepting a packet, can modify it, which is what we need.
iptables -t mangle -I PREROUTING -p tcp --sport 80 --tcp-flags SYN,ACK SYN,ACK -j NFQUEUE --queue-num 200 --queue-bypass
It will queue the packets we need to the process that listens on queue number 200.
That process will replace the window size. PREROUTING catches both packets addressed to the host itself and routed packets.
That is, the solution works the same way on a client as on a router, whether a PC-based router or OpenWRT.
In principle, this is enough.
However, with such an impact on TCP there will be a slight delay.
In order not to touch the hosts that are not blocked by the provider, you can make such a move.
Create a list of blocked domains, resolve them to IP addresses and save to ipset named "zapret".
Add to rule:
iptables -t mangle -I PREROUTING -p tcp --sport 80 --tcp-flags SYN,ACK SYN,ACK -m set --match-set zapret src -j NFQUEUE --queue-num 200 --queue-bypass
Thus, the impact will be made only on ip addresses related to blocked sites.
The list can be updated in scheduled task every few days.
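A hypothetical helper for that scheduled task might resolve the domain list and emit "add" lines for the "zapret" set (a sketch, assuming a plain-text file with one domain per line; it is not part of zapret's own scripts):

```python
# Resolve a list of blocked domains and emit ipset "add" commands
# suitable for piping into `ipset restore` or a shell loop.
import socket

def ipset_lines(domains, setname="zapret"):
    lines = []
    for host in domains:
        try:
            # gethostbyname_ex returns (name, aliases, ip_list)
            _, _, ips = socket.gethostbyname_ex(host)
        except socket.gaierror:
            continue  # skip names the resolver cannot answer
        for ip in sorted(set(ips)):
            lines.append("add %s %s -exist" % (setname, ip))
    return lines

if __name__ == "__main__":
    print("\n".join(ipset_lines(["localhost"])))
```

The "-exist" flag keeps repeated runs idempotent, so the job can simply re-add every address on each update.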
If the DPI can't be bypassed by splitting a request into segments, then changing the case
of the "Host:" http header sometimes helps. We may not need the window size replacement, so we do not need the PREROUTING chain.
Instead, we hook outgoing packets in the POSTROUTING chain:
iptables -t mangle -I POSTROUTING -p tcp --dport 80 -m set --match-set zapret dst -j NFQUEUE --queue-num 200 --queue-bypass
In this case, additional points are also possible. DPI can catch only the first http request, ignoring
subsequent requests in the keep-alive session. Then we can reduce the CPU load by abandoning the processing of unnecessary packets.
iptables -t mangle -I POSTROUTING -p tcp --dport 80 -m connbytes --connbytes-dir=original --connbytes-mode=packets --connbytes 1:5 -m set --match-set zapret dst -j NFQUEUE --queue-num 200 --queue-bypass
It happens that the provider monitors the entire HTTP session with keep-alive requests. In this case
it is not enough to restrict the TCP window when establishing the connection. Each http request must be split
to multiple TCP segments. This task is solved through the full proxying of traffic using
transparent proxy (TPROXY or DNAT). TPROXY does not work with connections originating from the local system
so this solution is applicable only on the router. DNAT works with local connections,
but there is a danger of entering into endless recursion, so the daemon is launched as a separate user,
and for this user, DNAT is disabled via "-m owner". Full proxying requires more resources than outbound packet
manipulation without reconstructing a TCP connection.
iptables -t nat -I PREROUTING -p tcp --dport 80 -j DNAT --to 127.0.0.1:1188
iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner ! --uid-owner tpws -j DNAT --to 127.0.0.1:1188
NOTE: DNAT on localhost works in the OUTPUT chain, but does not work in the PREROUTING chain without enabling the route_localnet parameter:
sysctl -w net.ipv4.conf.<interface>.route_localnet=1
You can use "-j REDIRECT --to-port 1188" instead of DNAT, but in this case the transparent proxy process
should listen on the ip address of the incoming interface or on all addresses. Listening on all addresses is not good
in terms of security. Listening on one (local) address is possible, but an automated
script would have to discover it and insert it into the command dynamically. In any case, additional effort is required.
ip6tables
---------
ip6tables work almost exactly the same way as ipv4, but there are a number of important nuances.
In DNAT, you should enclose the --to address in square brackets. For example:
iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner ! --uid-owner tpws -j DNAT --to [::1]:1188
The route_localnet parameter does not exist for ipv6.
DNAT to localhost (::1) is possible only in the OUTPUT chain.
In the PREROUTING DNAT chain, it is possible to any global address or to the link local address of the same interface
the packet came from.
NFQUEUE works without changes.
When it will not work
----------------------
* If the DNS server returns false responses. The ISP can return false IP addresses or nothing at all
when blocked domains are queried. If this is the case, change DNS to a public one, such as 8.8.8.8 or 1.1.1.1.
Sometimes the ISP hijacks queries to any DNS server. dnscrypt or DNS-over-TLS help.
* If blocking is done by IP.
* If a connection passes through a filter capable of reconstructing the TCP connection and that
follows all standards. For example, we are routed through squid. The connection goes through the full OS TCP/IP stack, and
fragmentation immediately disappears as a means of circumvention. Squid is standards-compliant; it will reassemble everything
as it should, so it is useless to try to deceive it.
BUT. Only small providers can afford to use squid, since it is very resource-intensive.
Large companies usually use DPI, which is designed for much greater bandwidth.
nfqws
-----
This program is a packet modifier and a NFQUEUE queue handler.
It takes the following parameters:
--debug=0|1 ; 1=print debug info
--qnum=
--wsize= ; set window size. 0 = do not modify
--hostcase ; change Host: => host:
--hostspell=HoSt ; exact spelling of the "Host" header. must be 4 chars. default is "host"
--hostnospace ; remove space after Host: and add it to User-Agent: to preserve packet size
--daemon ; daemonize
--pidfile= ; write pid to file
--user= ; drop root privs
--uid=uid[:gid] ; drop root privs
--dpi-desync ; try to desync dpi state
--dpi-desync-fwmark= ; override fwmark for desync packet. default = 0x40000000
--dpi-desync-ttl= ; set ttl for desync packet
--dpi-desync-fooling=none|md5sig|badsum
--dpi-desync-retrans=0|1 ; 1(default)=drop original data packet to force its retransmission. this adds delay to make sure desync packet goes first
--dpi-desync-skip-nosni=0|1 ; 1(default)=do not apply desync to requests without hostname in the SNI
--hostlist= ; apply dpi desync only to the listed hosts (one host per line, subdomains auto apply)
The manipulation parameters can be combined in any way.
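For instance, a hypothetical invocation serving queue 200 as a daemon with the desync attack enabled might look like this (queue number, user, ttl and hostlist path are example values, not defaults):

```shell
# Sketch only - all values here are illustrative, adjust for your setup.
nfqws --daemon --qnum=200 --user=nobody \
      --dpi-desync --dpi-desync-ttl=5 --dpi-desync-fooling=md5sig \
      --hostlist=/opt/zapret/ipset/zapret-hosts-user.txt
```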
COMMENT. As described earlier, Linux behaves strangely when the window size is changed, unlike Windows.
Subsequent segments do not restore their full length; the connection can continue for a long time in batches of small packets.
The packet modification parameters (--hostcase, ...) may not work, because nfqws does not operate on the connection,
only on separate packets, in which the searched-for pattern may not be found because it is scattered across multiple packets.
If the source of the packets is Windows, there is no such problem.
DPI DESYNC ATTACK
After the tcp 3-way handshake completes, the client sends its first data packet.
It usually contains "GET / ..." or a TLS ClientHello. We drop this packet, replacing it with a fake version
carrying another harmless but valid http or https request. This packet must reach the DPI and be validated as a good request,
but must not reach the destination server. The following means are available: set a low TTL, send a packet with a bad checksum,
or add the tcp option "MD5 signature". All of them have their own disadvantages:
* md5sig does not work with all servers
* badsum doesn't work if your device is behind a NAT which does not pass invalid packets.
Linux NAT by default does not pass them without the special setting "sysctl -w net.netfilter.nf_conntrack_checksum=0"
OpenWrt sets it out of the box; most other routers don't, and it's not always possible to change it.
If nfqws runs on the router itself, it's not necessary to switch off "net.netfilter.nf_conntrack_checksum":
the fake packet doesn't go through the FORWARD chain, only through OUTPUT. But if your router is behind another NAT, for example an ISP NAT,
and that NAT does not pass invalid packets, you can't do anything about it.
* TTL looks like the best option, but it requires special tuning for each ISP. If the DPI is located further away than the ISP's local websites,
you can cut off access to them. A manual IP exclude list is required. It's possible to combine md5sig with ttl.
This way you can't break anything, and there is a good chance it will help to open local ISP websites.
If no automatic solution can be found, use zapret-hosts-user-exclude.txt.
The original packet is dropped, so there is no response from the server. What will the OS do? It retransmits.
The first retransmission occurs after 0.2 seconds, then the delay grows exponentially.
So there will be some delay at the beginning of each connection, and sites will load slower.
Unfortunately, if you send the fake packet right away, before the NFQUEUE verdict is issued on the original packet, there is no guarantee
which packet will go first. Therefore a delay is required, and it is implemented through the retransmission mechanism.
You can disable the drop of the original packet. Sometimes it works, but it's not very reliable.
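The retransmission delay described above grows roughly like this. A toy sketch with a fixed 200 ms initial timer; the real kernel RTO is derived from the measured RTT, so treat the numbers as illustrative only:

```shell
# Toy model of TCP retransmission timer doubling (assumed 200 ms initial RTO).
# Prints the cumulative wait before each retransmission attempt.
rto=200   # milliseconds, illustrative starting value
total=0
for i in 1 2 3 4 5; do
    total=$((total + rto))         # wait out the current timer
    echo "retransmission $i after ${total} ms total"
    rto=$((rto * 2))               # exponential backoff: timer doubles
done
```

So the first retransmission (the one nfqws relies on) costs about 0.2 s per connection, which is why the hostlist/ipset filters below matter for everyday browsing speed.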
It's possible to avoid delays for most sites by using a hostlist or ipset filter.
The hostlist is applicable only to the desync attack; it does not work for the other options.
Hosts are extracted from the Host: header of plain http requests and from the SNI of the TLS ClientHello.
Subdomains apply automatically. gzip'ed lists are supported.
iptables for performing the attack :
iptables -t mangle -I POSTROUTING -p tcp -m multiport --dports 80,443 -m connbytes --connbytes-dir=original --connbytes-mode=packets --connbytes 2:4 -m mark ! --mark 0x40000000/0x40000000 -j NFQUEUE --queue-num 200 --queue-bypass
connbytes will only queue the first data packet. mark is needed to keep away generated packets from NFQUEUE.
nfqws sets fwmark when it sends generated packets.
tpws
-----
tpws is a transparent proxy.
--debug=0|1|2 ; 0(default)=silent 1=verbose 2=debug
--bind-addr=|
--bind-iface4= ; bind to the first ipv4 addr of interface
--bind-iface6= ; bind to the first ipv6 addr of interface
--bind-linklocal=prefer|force ; prefer or force ipv6 link local
--bind-wait-ifup= ; wait for interface to appear and up
--bind-wait-ip= ; after ifup wait for ip address to appear up to N seconds
--bind-wait-ip-linklocal= ; accept only link locals first N seconds then any
--port= ; port number to listen on
--socks ; implement socks4/5 proxy instead of transparent proxy
--local-rcvbuf= ; SO_RCVBUF for local legs
--local-sndbuf= ; SO_SNDBUF for local legs
--remote-rcvbuf= ; SO_RCVBUF for remote legs
--remote-sndbuf= ; SO_SNDBUF for remote legs
--skip-nodelay ; do not set TCP_NODELAY for outgoing connections. incompatible with split.
--no-resolve ; disable socks5 remote dns
--maxconn= ; max number of local legs
--hostlist= ; only act on host in the list (one host per line, subdomains auto apply)
--split-http-req=method|host ; split http request at specified logical position
--split-pos= ; split at specified pos. invalidates split-http-req.
--hostcase ; change Host: => host:
--hostspell ; exact spelling of "Host" header. must be 4 chars. default is "host"
--hostdot ; add "." after Host: name
--hosttab ; add tab after Host: name
--hostnospace ; remove space after Host:
--hostpad= ; add dummy padding headers before Host:
--methodspace ; add extra space after method
--methodeol ; add end-of-line before method
--unixeol ; replace 0D0A to 0A
--daemon ; daemonize
--pidfile= ; write pid to file
--user= ; drop root privs
--uid=uid[:gid] ; drop root privs
The manipulation parameters can be combined in any way.
There are exceptions: split-pos replaces split-http-req. hostdot and hosttab are mutually exclusive.
Only split-pos option works for non-HTTP traffic.
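As an illustration of combining these options (all values here are placeholders, not recommended defaults):

```shell
# Sketch: tpws as a SOCKS proxy for local applications - no special
# privileges needed beyond dropping root to an unprivileged uid.
tpws --daemon --socks --bind-addr=127.0.0.1 --port=987 \
     --uid=1000:1000 --pidfile=/var/run/tpws.pid

# Sketch: transparent-proxy variant splitting http requests at the Host:
# header and lowercasing it, for use with the iptables DNAT rules shown earlier.
tpws --daemon --bind-addr=127.0.0.1 --port=1188 --user=tpws \
     --split-http-req=host --hostcase
```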
tpws can bind only to one ip or to all at once.
To bind to all ipv4, specify "0.0.0.0", to all ipv6 - "::". Without parameters, tpws bind to all ipv4 and ipv6.
The --bind-wait* parameters can help in situations where you need to get an IP from the interface, but it is not there yet: the interface is not up
or not configured.
Different systems catch ifup events in different ways, and none of them guarantee that the interface has already received an IP address of a certain type.
In the general case, there is no universal mechanism to hook an event like "a link-local address appeared on interface X".
In socks proxy mode no additional system privileges are required.
Connections to local IPs of the system where tpws runs are prohibited.
tpws supports remote dns resolving (curl : --socks5-hostname ; firefox : socks_remote_dns=true), but does it in blocking mode.
tpws uses async sockets for all activity, but resolving can break this model.
If tpws serves many clients this can cause trouble; a DoS attack against tpws is also possible.
If remote resolving causes trouble, configure clients to use local name resolution and use the
--no-resolve option on the tpws side.
Ways to get a list of blocked IP
--------------------------------
1) Enter the blocked domains to ipset/zapret-hosts-user.txt and run ipset/get_user.sh
At the output, you get ipset/zapret-ip-user.txt with IP addresses.
2) ipset/get_reestr_*.sh. Russian specific
3) ipset/get_antifilter_*.sh. Russian specific
4) ipset/get_config.sh. This script calls what is written into the GETLIST variable from the config file.
If the variable is not defined, then only lists for ipsets nozapret/nozapret6 are resolved.
So, if you're not Russian, the only way for you is to add blocked domains manually.
Or write your own ipset/get_iran_blocklist.sh, if you know where to download such a list.
On routers, it is not recommended to call these scripts more often than once every 2 days to minimize flash memory writes.
ipset/create_ipset.sh executes forced ipset update.
The regulator list has already reached an impressive size of hundreds of thousands of IP addresses. Therefore, to optimize ipset
the ip2net utility is used. It takes a list of individual IP addresses and tries to find subnets of the maximum size (from /22 to /30)
in which more than 3/4 of the addresses are blocked. ip2net is written in C because the operation is resource-intensive.
If ip2net is compiled or a binary is copied to the ip2net directory, the create_ipset.sh script uses an ipset of the hash:net type,
piping the list through ip2net. Otherwise, ipset of hash:ip type is used, the list is loaded as is.
Accordingly, if you don’t like ip2net, just remove the binary from the ip2net directory.
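To make the ip2net idea concrete, here is a toy sketch (the real tool is C and handles all subnet sizes): a /30 holds 4 addresses, so under the "more than 3/4" rule it is collapsed only when all 4 appear in the list. Filenames and addresses are invented for the example.

```shell
# Toy illustration of the ip2net collapsing rule - NOT the real algorithm.
cat > iplist.txt <<'EOF'
10.0.0.0
10.0.0.1
10.0.0.2
10.0.0.3
10.0.0.5
EOF
# Count how many of the 4 addresses of 10.0.0.0/30 (10.0.0.0 - 10.0.0.3) are listed:
n=$(grep -cE '^10\.0\.0\.[0-3]$' iplist.txt)
# "more than 3/4 blocked" for a /30 means all 4 of 4:
[ "$n" -gt 3 ] && echo "collapse to 10.0.0.0/30"
```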
create_ipset.sh supports loading ip lists from gzip files. First it looks for the filename with the ".gz" extension,
such as "zapret-ip.txt.gz", if not found it falls back to the original name "zapret-ip.txt".
So your own get_iran_blocklist.sh can use the "zz" function to produce gz. Study how the other Russian get_XXX.sh scripts work.
Gzipping helps saving a lot of precious flash space on embedded systems.
User lists are not gzipped because they are not expected to be very large.
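The .gz lookup can be exercised by hand; the sketch below pre-compresses a generated list so that create_ipset.sh would pick up the .gz variant (addresses are sample data):

```shell
# Sketch: produce the gzipped form that create_ipset.sh looks for first.
printf '10.0.0.1\n10.0.0.2\n' > zapret-ip.txt
gzip -9 zapret-ip.txt            # yields zapret-ip.txt.gz, removes the original
zcat zapret-ip.txt.gz            # the list reads back identically
```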
You can add a list of domains to ipset/zapret-hosts-user-ipban.txt. Their ip addresses will be placed
in a separate ipset "ipban". It can be used to route connections to transparent proxy "redsocks" or VPN.
IPV6: if ipv6 is enabled, additional txt files are created with the same name, but with a "6" at the end before the extension.
zapret-ip.txt => zapret-ip6.txt
The ipsets zapret6 and ipban6 are created.
IP EXCLUSION SYSTEM. All scripts resolve zapret-hosts-user-exclude.txt file, creating zapret-ip-exclude.txt and zapret-ip-exclude6.txt.
They are the source for ipsets nozapret/nozapret6. All rules created by init scripts are created with these ipsets in mind.
The IPs placed in them are not involved in the process.
zapret-hosts-user-exclude.txt can contain domains, ipv4 and ipv6 addresses or subnets.
Domain name filtering
---------------------
An alternative to ipset is to use tpws with a list of domains.
tpws can only read one hostlist.
Enter the blocked domains into ipset/zapret-hosts-user.txt. Remove ipset/zapret-hosts.txt.gz.
Then the init script will run tpws with the zapret-hosts-user.txt list.
Other option ( Roskomnadzor list - get_hostlist.sh ) is russian specific.
You can write your own replacement for get_hostlist.sh.
When filtering by domain name, tpws should run without filtering by ipset.
All http traffic goes through tpws, and it decides whether to use manipulation depending on the Host: field in the http request.
This increases the load on the system.
The domain search itself is very fast; the load comes from pumping all the traffic through a userspace process.
When using large regulator lists, estimate the amount of RAM on the router!
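The subdomain matching can be sketched as follows. This is an illustrative model of the behavior described above (an entry matches itself and any subdomain), not tpws's actual code; the function name and file are invented for the example.

```shell
# Illustrative sketch of hostlist subdomain matching - not tpws internals.
# Returns 0 if $1 equals a list entry or is a subdomain of one.
host_matches() {
    h="$1"; list="$2"
    while [ -n "$h" ]; do
        grep -qxF "$h" "$list" && return 0   # exact match of current suffix
        case "$h" in
            *.*) h="${h#*.}" ;;              # strip leftmost label, try parent
            *)   return 1 ;;                 # reached TLD, no match
        esac
    done
    return 1
}
```

With "example.com" in the list, both example.com and www.example.com match, while otherexample.com does not, because matching is done on whole dot-separated labels.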
Choosing parameters
-------------------
The file /opt/zapret/config is used by various components of the system and contains basic settings.
It needs to be viewed and edited if necessary.
Select MODE:
nfqws_ipset - use nfqws for http. targets are filtered by ipset "zapret"
nfqws_ipset_https - use nfqws for http and https. targets are filtered by ipset "zapret"
nfqws_all - use nfqws for all http
nfqws_all_https - use nfqws for all http and https
nfqws_all_desync - use nfqws DPI desync attack on http and https for all traffic
nfqws_ipset_desync - use nfqws DPI desync attack on http and https. targets are filtered by ipset "zapret"
nfqws_hostlist_desync - use nfqws DPI desync attack on http and https, only for hosts from the hostlist
tpws_ipset - use tpws for http. targets are filtered by ipset "zapret"
tpws_ipset_https - use tpws for http and https. targets are filtered by ipset "zapret"
tpws_all - use tpws for all http
tpws_all_https - use tpws for all http and https
tpws_hostlist - same as tpws_all but touch only domains from the hostlist
ipset - only fill the ipset. further actions depend on your own code
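As an example, a fragment of /opt/zapret/config selecting hostlist-filtered tpws might read like this (the specific values are illustrative, not recommendations):

```shell
# Illustrative fragment - not a complete /opt/zapret/config.
MODE=tpws_hostlist
TPWS_OPT_HTTP="--hostspell=HOST --split-http-req=method"
TPWS_OPT_HTTPS="--split-pos=3"
#GETLIST=get_user.sh      # uncomment to schedule list updates
DISABLE_IPV6=1
```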
Its possible to change manipulation options used by the daemons :
NFQWS_OPT="--wsize=3 --hostspell=HOST"
TPWS_OPT_HTTP="--hostspell=HOST --split-http-req=method"
TPWS_OPT_HTTPS="--split-pos=3"
Options for DPI desync attack are configured separately:
DESYNC_MARK=0x40000000
NFQWS_OPT_DESYNC="--dpi-desync --dpi-desync-ttl=0 --dpi-desync-fooling=badsum --dpi-desync-fwmark=$DESYNC_MARK"
The GETLIST parameter tells the install_easy.sh installer which script to call
to update the list of blocked ip or hosts.
Its called via get_config.sh from scheduled tasks (crontab or systemd timer).
Put here the name of the script that you will use to update the lists.
If you don't need automatic updates, the parameter should be commented out.
You can individually disable ipv4 or ipv6. If the parameter is commented out or not equal to "1",
use of the protocol is permitted.
#DISABLE_IPV4=1
DISABLE_IPV6=1
The number of threads for mdig multithreaded DNS resolver (1..100).
The more of them, the faster, but will your DNS server be offended by hammering ?
MDIG_THREADS=30
The following settings are not relevant for openwrt :
If your system works as a router, then you need to enter the names of the internal and external interfaces:
IFACE_LAN=eth0
IFACE_WAN=eth1
IMPORTANT: configuring routing, masquerade, etc. is not a zapret task.
Only modes that intercept transit traffic are enabled.
The INIT_APPLY_FW=1 parameter enables the init script to independently apply iptables rules.
With other values or if the parameter is commented out, the rules will not be applied.
This is useful if you have a firewall management system, in the settings of which you should tie the rules.
Integrating with a firewall management system or your own startup system
------------------------------------------------------------------------
If you use some kind of firewall management system, it may conflict with the existing startup script.
When it re-applies its rules, it could break the iptables settings added by zapret.
In this case, the iptables rules should be attached to your firewall separately from starting tpws or nfqws.
The following calls allow you to apply or remove iptables rules separately:
/opt/zapret/init.d/sysv/zapret start-fw
/opt/zapret/init.d/sysv/zapret stop-fw
And you can start or stop the daemons separately from the firewall:
/opt/zapret/init.d/sysv/zapret start-daemons
/opt/zapret/init.d/sysv/zapret stop-daemons
Simple install to desktop linux system
--------------------------------------
Simple install works on most modern linux distributions with systemd.
Run install_easy.sh and answer its questions.
Simple install to openwrt
-------------------------
install_easy.sh also works on openwrt, but there are additional challenges.
They are mainly about possibly low free flash space.
The simple install will not work if there is no space to install itself and the required packages from the repo.
Another challenge is getting zapret onto the router. You can download a zip from github and use that.
Do not repack the zip contents on Windows, because that breaks chmod bits and symlinks.
Install openssh-sftp-server and unzip on openwrt and use sftp to transfer the file.
The best way to start is to put zapret dir to /tmp and run /tmp/zapret/install_easy.sh from there.
After installation remove /tmp/zapret to free RAM.
The absolute minimum for openwrt is 64/8 system, 64/16 is comfortable, 128/extroot is recommended.
Android
-------
It's not possible to use nfqws and tpws in transparent proxy mode without root privileges.
Without root, tpws can run in --socks mode.
I have no statistics on NFQUEUE presence in stock android kernels, but it's present on my MTK device.
If NFQUEUE is present, nfqws works.
There's no ipset support unless you run a custom kernel. In the common case, the task of bringing up ipset
on android ranges from "not easy" to "almost impossible", unless you find a working kernel
image for your device.
Android does not use /etc/passwd, so tpws --user won't work. There's a replacement:
use numeric uids in the --uid option.
It's recommended to use gid 3003 (AID_INET), otherwise tpws will not have inet access.
Example : --uid 1:3003
In iptables use : "! --uid-owner 1" instead of "! --uid-owner tpws".
Write your own shell script with iptables and tpws, run it using your root manager.
Autorun scripts are here :
magisk : /data/adb/service.d
supersu : /system/su.d
I haven't checked whether android can kill the iptables rules at will during wifi connect/disconnect,
mobile data on/off, etc.
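A minimal hypothetical magisk boot script combining the pieces above might look like this; the tpws binary location, port, and filename are all assumptions to adapt to your device.

```shell
#!/system/bin/sh
# Hypothetical /data/adb/service.d/99-zapret.sh - paths and port are assumptions.
# Start tpws under uid 1 / gid 3003 (AID_INET), then redirect outgoing http to it.
/data/local/tmp/tpws --daemon --bind-addr=127.0.0.1 --port=1188 --uid=1:3003
iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner ! --uid-owner 1 \
         -j DNAT --to 127.0.0.1:1188
```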
Https blocking bypass
----------------------
As a rule, DPI tricks do not help to bypass https blocking.
You have to redirect traffic through a third-party host.
It is proposed to use a transparent redirect through socks5 using iptables + redsocks, or iptables + iproute + vpn.
The redsocks variant is described in https.txt,
iproute + wireguard in wireguard_iproute_openwrt.txt.
(both are in Russian)
SOMETIMES (but not often) a tls handshake split trick works.
Try MODE=..._https
Maybe you're lucky.
MORE OFTEN the DPI desync attack works, but it may require some manual tuning.

from https://github.com/bol-van/zapret/blob/master/docs/readme.eng.txt
