
Dowse

Updates: http://dowse.eu
Dowse is a transparent proxy facilitating awareness of incoming and outgoing connections from, to, and within a local area network.
Dowse provides a central point of soft control for all local traffic: from ARP traffic (layer 2) to TCP/IP (layers 3 and 4) as well as application space, by chaining a firewall setup to a transparent proxy setup. A core feature of Dowse is hiding all the complexity of such a setup.
Dowse is also a highly extensible platform: interoperability between modules is available using Socks4/5, UNIX pipes, local TCP/IP sockets and port redirection, conforming to specific daemon implementations. At the core of Dowse is a very portable shell script codebase implementing a modular plugin architecture that isolates processes and supports any executable written in any language: Shell, C, Perl, Python etc.
Dowse is an ongoing development effort rapidly gaining momentum for its simplicity and usefulness. Here is a recent behind-the-scenes video:
The making of Dowse

Features

Dowse takes control of a LAN by becoming its DHCP server, thereby assigning itself as the main gateway and DNS server for all clients. It keeps track of assigned leases by MAC address. ISC DHCP and DNSCRYPT-PROXY are used as daemons.
All network traffic is passed through NAT rules for masquerading. HTTP traffic (TCP port 80) can be filtered through a transparent proxy using an application layer chain of Squid2 and Privoxy.
All IP traffic is filtered using configurable blocklists to keep out malware, spyware and known bad peers, using Peerguardian2 and Iptables.
All DNS traffic (UDP port 53) is filtered through a DNSCRYPT-PROXY plugin encrypting all traffic (AES/SHA256) and analysed using domain-list to render a graphical representation of traffic.
Privilege escalation is managed using https://sup.dyne.org

Installation

Installation and activation takes a few steps, only make install needs root:
  1. Download dowse on a GNU/Linux box (we use Devuan Ascii)
git clone https://github.com/dyne/dowse dowse-src
cd dowse-src && git submodule update --init --recursive
  2. Install all requirements; the list of packages is below. To avoid installing more than needed, consider using the --no-install-recommends flag with APT, or the equivalent in other package managers.
zsh iptables build-essential autoconf automake libhiredis-dev libkmod-dev libjemalloc-dev pkg-config libtool libltdl-dev libsodium-dev libldns-dev libnetfilter-queue-dev uuid-dev zlib1g-dev cmake liblo-dev nmap python3-flask python3-redis xmlstarlet wget libcap2-bin
  3. Choose which user should run dowse: your own is fine, or create a dedicated one to keep filesystem permissions separate.
  4. As the user of choice, run make inside the dowse source.
  5. As root, run make install.
  6. If necessary, edit the files in the /etc/dowse folder, especially settings, where the address of the local network you want to create should be indicated.
  7. As the chosen dowse user, inside the source directory, fire up the startup script ./start.sh
Dowse is now running with a web interface on port 80.
To interact with dowse there is also a console with commands prefixed with dowse- (tab completion available). To enter it, run zsh without extensions and source the main script: first type zsh -f and press enter, then type source /usr/local/dowse/zshrc and press enter.
If you would like the dowse user to have an interactive console every time it logs in, run ln -s /usr/local/dowse/zshrc $HOME/.zshrc.
If all went well, you should now be able to connect any device to the Internet as before, this time via Dowse.

Embedded ARM devices

Using https://www.devuan.org just compile and install Dowse following the procedure above. Images are available for several popular ARM devices including RaspberryPI 2 and 3, BananaPI, Cubieboard etc.

Starting Dowse

Here below is an example start script launching all the services in Dowse. Some can be commented out or removed depending on the use case; the only vital services are redis-server, dhcpd and dnscrypt-proxy.
#!/usr/bin/env zsh

source /etc/dowse/settings
source /usr/local/dowse/zshrc

notice "Starting Dowse"

# start the redis daemon (core k/v service)
start redis-server

notice "Starting all daemons in Dowse"

# launch the dhcp daemon
start dhcpd

# start the dns encrypted tunneling
start dnscrypt-proxy

# start the mqtt/websocket hub
start mosquitto

# netdata dashboard for the technical status
start netdata

# nodejs/node-red
start node-red

# start the cronjob handler (with resolution to seconds)
start seccrond

notice "Dowse successfully started"

By adding the following line one can set up an open network, which we call "party mode":
echo "set party-mode ON" | redis-cli
As good practice, such a script can be launched from /etc/rc.local for the dowse user, using setuidgid from the daemontools package.
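For instance, a minimal /etc/rc.local entry might look like the sketch below (the path to the dowse sources is an assumption; adjust it to wherever you cloned them):

```
# /etc/rc.local (sketch): launch Dowse at boot as the unprivileged
# "dowse" user via daemontools' setuidgid. Source path is an assumption.
setuidgid dowse /home/dowse/dowse-src/start.sh &
exit 0
```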
Next is an example of how to stop Dowse, for instance from a stop.sh script:
#!/usr/bin/env zsh

source /usr/local/dowse/zshrc

notice "Stopping all daemons in Dowse"

stop seccrond

stop mosquitto

# stop nodejs/node-red
stop node-red

# stop the dashboard
stop netdata

# stop the dns crypto tunnel
stop dnscrypt-proxy

# stop the dhcp server
stop dhcpd

# remove the layer 3 firewall rules
iptables-snat-off
iptables-stop

# restore backup if present
# [[ -r /etc/resolv.conf.dowse-backup ]] && {
# mv /etc/resolv.conf.dowse-backup /etc/resolv.conf
# }

stop redis-server

notice "Dowse has stopped running."
The scripts above are found in the dowse source as start.sh and stop.sh and can be customised and called from the system at boot. It is also possible to run an interactive console with completion, where dowse commands are available, using the console.sh script. Once in the console, all the start/stop commands above, and more internals, can be launched interactively.

Visualization

The DNS visualization is produced in a custom format which can be easily processed by gource. This is the best way to "see dowse running": if you are running it locally, then install gource and do:
dowse-to-gource | gource --log-format custom -
or from remote:
ssh dowse@dowse.it -- dowse-to-gource | gource --log-format custom -
Sidenote: dowse-to-gource must be in the user's $PATH. To achieve this, as mentioned above, you can change the user's shell to zsh and do: ln -sf /usr/local/dowse/zshrc $HOME/.zshrc.
This will live render all the DNS activity occurring on your computer or local network, with the sort of animation that is also showcased on our website.
One can also experiment with gource arguments and render all the output of dowse-to-gource into a video file.
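As a sketch of such a rendering pipeline (assuming gource and ffmpeg are installed: gource can emit a PPM frame stream with -o -, which ffmpeg encodes to H.264):

```
# Record the DNS activity log, then render it to a video file.
dowse-to-gource > dns-activity.log
gource --log-format custom -o - dns-activity.log | \
  ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
         -vcodec libx264 -pix_fmt yuv420p dns-activity.mp4
```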

Experimentation

Open Sound Control (OSC) messaging is implemented to interface low-latency devices that are running on the same network. To start it one must know the IP address of the device, then do:
dowse-to-osc osc.udp://10.0.0.2:999
This will start sending OSC messages over UDP to IP 10.0.0.2, port 999.

Development

The main development repository is on https://github.com/dyne/dowse
Inside the ops directory an Ansible recipe is found, along with a ready-to-use Vagrant configuration to build two virtual machines (leader and client) that simulate a LAN for further testing of Dowse.
cd ops
vagrant up
Plus the usual vagrant commands. The devops setup in Dowse is based on http://Devuan.org and runs two virtual machines connected to each other: a "leader" running Dowse and serving DHCP, and a "client" connected to it and, via the leader, to the Internet.
Help with development is welcome. Manuals on how to write new modules and daemons are in the making, and there is a sister project to categorize all the domains used by the Internet's conglomerates, which also welcomes contributions: https://github.com/dyne/domain-list

Xi Jinping Came Home from Osaka with a Ticking Time Bomb

The Osaka G20 summit between Xi Jinping and Trump closely resembled the Buenos Aires G20 summit: Trump postponed new tariffs and a major rate increase. The outcome was the same both times: the Chinese side got a breathing spell and saved face, but the substantive issues were left essentially unresolved, and the punitive tariffs already imposed remained in place.
At the Buenos Aires summit, after Xi Jinping spoke for 45 minutes and promised that China would undertake structural reforms, Trump announced a three-month delay of the tariff increase to 25% that had been planned for January 1, 2019, while the tariffs already levied continued; China reciprocated by promising to buy more soybeans. This time Trump again only agreed to hold off on new tariffs: the 25% already added to 200 billion dollars of Chinese exports to the US stays in place, and China agreed to quickly buy large quantities of American farm products.
Last time, after the three-month delay, negotiations went well enough that Trump agreed to extend it again, without a deadline. But after the draft agreement the two sides had reached was rejected by Xi Jinping, Trump, three days after receiving the heavily revised draft sent back by the Chinese side, announced a 25% tariff on 200 billion dollars of Chinese exports to the US and declared that the next step would be tariffs on the remaining 300-plus billion. Shortly afterwards he ordered the blockade of Huawei.
Another point Chinese state media treat as a victory is Trump's announcement that supplies of components to Huawei would resume, but it is too early to celebrate. At the subsequent press conference Trump made clear that only components not affecting US security would be supplied, and "not affecting US security" is a very elastic standard. On the key question of whether Huawei would be removed from the US Commerce Department's Entity List, Trump gave no clear answer, which means the ban on Huawei has not been lifted. He hinted that all of this would depend on the progress of the US-China trade talks.
Under these conditions Huawei may survive in a limited way, but the US can choke it at any moment. Moreover, the fact that the Chinese side placed Huawei at the core of the Trump-Xi meeting proves once again that Huawei, despite its lead in 5G, does not control the core technologies; those remain in the hands of American or Western companies.
Both summits left an impression of general satisfaction afterwards, especially on Trump's side, with his repeated praise of Xi Jinping as a great leader and his friend. But to read that praise and enthusiasm as meaning Trump is easy to deal with would be completely wrong, as his merciless tariff hike three days after learning that China had reneged made perfectly clear.
After the previous summit, negotiations lasted a full five months, long enough because Trump saw progress, culminating in a draft agreement of more than a hundred pages. Xi Jinping cannot have been ignorant of those five months of talks: his confidant Liu He, who led the Chinese delegation, reported to him constantly. Why then did Xi repudiate the deal at the last moment? One possible reading is that Xi never wanted to accept it and kept stalling, backing out only when signing was imminent; when Liu He and the others worried about the consequences, he declared "I will bear all the consequences" and gambled on wrecking the deal.
This time Xi Jinping gave the impression of being cornered. He hesitated over whether to attend the Osaka G20 and whether to meet Trump at all, deciding what to discuss only at the last minute. Why? Because Trump was threatening to raise tariffs immediately if Xi refused to talk, and that was no joke. Xi went in the end, leaving the impression of having yielded.
Before Osaka, state media said China demanded "the removal of all added tariffs". In the end, not a cent of the tariffs already imposed on China was removed, while the threatened ones simply have not yet materialized; this can hardly be counted a Chinese victory. Meanwhile Trump's voters got something tangible: China agreed to quickly buy large amounts of American food and farm products.
The likely winner is still Trump. He has advanced step by step, raising the stakes each time: from the trade-deficit talks last March, to the tariff list in May, to step-by-step demands that China undertake structural reform. Trump has lost little. His farmers have taken some losses, but both American parties have reached a rare consensus on China, which one might well credit to Xi Jinping. Trump has forced the Chinese side back step by step, making it accept the punitive tariffs already levied as a fait accompli. This time Trump seems in no hurry; is it election-season calculation, or a deeper plan? In any case, a truce, or a state in which no agreement can be reached quickly, is actually very unfavorable for China.
The New York Times argued that this summit does not represent a major breakthrough in resolving the fundamental US-China conflict, at most a temporary ceasefire. The preliminary agreement the two reached, the paper noted, may further entrench a broad reorganization of the global economic order and erode China's decades-old position as the world's factory. The US will continue imposing broad tariffs on Chinese goods for months, perhaps years, and global companies will almost certainly continue moving at least the final stages of their supply chains out of China. Since Trump raised tariffs on Chinese goods to 25% in May, companies from shoemakers to electronics manufacturers have been relocating their supply chains. Many have moved final assembly to Vietnam, causing US imports from Vietnam to surge this year while US imports from China have begun to weaken.
Looking back at the Osaka summit, one can say China has returned to the starting point of the May breakdown with the US. The only difference is that the tariff on 200 billion dollars of Chinese goods, previously 10%, has been raised to 25%. (Because of Xi's reneging in early May, the losses now are actually greater, ha.)
Talks reopen on Tuesday; the basic American demand is still a return to the point where things broke down in May.

Two Mysteries of the Trump-Xi Meeting Solved; the Biggest Losers Face an Uncertain Future



At the G20 summit just concluded in Osaka, after the meeting between Trump and Xi Jinping the two sides agreed to restart trade negotiations. Party media are painting this as yet another great victory, but the people in charge of party media and CCP ideology are in a very different mood, for they are the biggest losers of the Trump-Xi meeting, and the political careers of some of them are now in doubt.
  Beijing forced back to the negotiating table
  First of all, this was no victory for Beijing. Trump said at the press conference afterwards: "We're going to work with China on where we left off, to see if we can make a deal." He also said: "This doesn't mean there's going to be a deal, but they would like to make a deal. I can tell you that. And if we could make a deal, it would be a very historic event."
  Over the course of ten months, Chinese Vice Premier Liu He, US Trade Representative Lighthizer and Treasury Secretary Mnuchin exchanged visits for ten rounds of talks.
  But in early May Beijing retreated wholesale from the nearly completed draft agreement. Reuters reported exclusively on May 8 that diplomatic cables from Beijing on Friday evening (May 3) made systematic changes to the nearly 150-page draft US-China trade agreement: in all seven chapters of the draft, China deleted its commitments to change its laws to resolve the core American complaints about Chinese trade practices, overturning the results of ten months of negotiation. The seven areas included theft of US intellectual property and trade secrets, forced technology transfer, competition policy, access to financial services, and currency manipulation. According to the report, Lighthizer and Mnuchin were stunned by the extent of China's deletions.
  This led Trump to announce on Sunday (May 5) that the US would raise the tariff on 200 billion dollars of Chinese goods from 10% to 25%, without ruling out tariffs on other goods as well. He tweeted at the time: they tried to renegotiate, no!
  Nearly two months later, the tariff on those 200 billion dollars of Chinese goods has been raised to 25%, and Huawei has been put on the blacklist, a heavy blow. The 25% tariff continues, and Huawei is still on the list, merely allowed to buy American products that do not involve a national security emergency. Trump said at the press conference that he told Xi the Huawei question would be saved for the end: "We mentioned Huawei. I said, 'We'll have to save that until the very end. We'll have to see.'"
  And Beijing has now been forced back to the negotiating table, still working forward from the previous draft. Only party media could call that a victory.
  Who stands behind Liu He?
  The Trump-Xi meeting also indirectly solved two mysteries. By normal logic, Liu He could not, during the earlier talks, have agreed on his own, without asking for instructions and reporting back, to conditions such as ending forced technology transfer and opening market access to foreign firms. Parts of the agreement required China to amend its laws and accept an American enforcement mechanism, matters of such weight that Liu He could hardly have decided them himself. But Beijing's last-minute rejection of the draft made people wonder whether Xi Jinping had really stood behind Liu He's negotiating strategy during those ten months of talks with the US.
  At the G20 meeting Liu He again sat right next to Xi Jinping, on his left, showing that he remains Xi's confidant.
  Trump also revealed after the meeting: "And China is going to be buying a tremendous amount of food and agricultural product, and they're going to start that very soon, almost immediately. We're going to give them lists of things that we'd like them to buy."
  Announcing large, immediate purchases of American farm products in the middle of negotiations, as a gesture of goodwill toward Trump, is a tactic Liu He has used before. On January 31 this year, meeting Trump at the White House, Liu He told him in English that China would buy 5 million tons of American soybeans that day. Trump heard "today" as "per day", and the White House later issued a clarification, which became a sideshow.
  That the same tactic was used again, this time with Xi Jinping personally leading the negotiating team, is further indirect proof that Xi was closely involved in Liu He's earlier negotiations.
  Does Xi Jinping dare to return to the Mao era?
  The other mystery: is Xi Jinping really so "confident" that he would confront the US to the end, even at the cost of returning to the Mao era and closing the country off?
  This G20 gave the answer: Xi is not yet that "confident". But people have also seen that the Party's ideological departments have indeed been trying to sell a different future.
  After the US-China talks broke down in early May, party media such as Guangming Daily and People's Daily immediately began loud anti-American agitation, and also turned their fire on China's "internal forces": those who "worship, fawn on or fear America", the "capitulationists", and so on.
  On June 6, Guangming Daily, run by the Central Propaganda Department, published the article "Seeing Through the Strange Arguments of Worshipping, Fawning on and Fearing America", attacking China's pro-American camp for "pushing the responsibility for the US-China trade war onto China alone, with an attitude even more active than the US government's".
  On June 8, Xinhua published the commentary "Let Capitulationism Become a Rat Scurrying Across the Street", charging that those in China who advocated compromise and concessions toward the US were preaching "capitulationism" and seeking to "dissolve the Chinese people's spirit of resistance".
  The Beijing-headquartered overseas site Duowei wrote: "Recently China's official mouthpieces, from Xinhua to the People's Daily, have been in full cry, determined to crush the voices calling for political and economic compromise with the Trump administration, no doubt including certain representative figures."
  On some Maoist websites Liu He has even been dubbed "Liu Hongzhang". One netizen, commenting on the photo of Liu He meeting Trump at the White House, asked: "Why can't our negotiators today talk face to face the way Li Hongzhang did, instead of Trump sitting up high while we sit below, why?" Peking University professor Kong Qingdong replied: "Because Li Hongzhang at least counted as a man of culture; even when he signed treaties of national humiliation, the foreign devils granted him a minimum of personal dignity and did not treat him like a dog."
  Steven Mosher, the China expert and author of "Bully of Asia: Why China's Dream is the New Threat to World Order", said in a June 24 interview with "American Thought Leaders":
  "All the demands we make of the CCP in the name of fair competition, stop the cyberattacks, stop the intellectual property theft, stop the subsidies, stop the 2025 program (i.e. 'Made in China 2025') that aims to dominate the high-tech sectors of the future, everything we ask the CCP to do, they see as a direct challenge and threat to their regime. We are very clear about that." "So will they accept America's demands and go down the road of structural reform? That would mean the CCP's power being steadily constrained and moving step by step toward dissolution."
  In all likelihood it was the ideological departments and their allies within the Party who wrecked the last round of trade talks. They saw the threat a US-China agreement would pose to Party rule. Their calculation was to use nationalism and ideology to fan the flames and make the US-China negotiations collapse. The mainland economy already faces many intractable problems, and America could be made the scapegoat. And if China returned to the Mao era, the most powerful people would be those in the ideological departments; what kind of life ordinary people would lead was not part of the calculation.
  It now appears that, under enormous pressure, Xi Jinping in the end did not buy it. And that creates a big problem. In show business, when a star no longer agrees with the agent's packaging plan, the only outcome is to part ways. In CCP elite politics, such a split will not be a simple parting of ways.

Some Excellent Online Tools



ASCII Diagram Drawing




I practice minimalism: my documents are in Markdown, so flowcharts are best kept in plain text too. This is a powerful online ASCII diagram drawing tool.


Here is a demo:

        +-----------+
  +---> | BaseClass | <---+
  |     +-----------+     |
+-----------+       +-----------+
| SubClassA |       | SubClassB |
+-----------+       +-----------+

Regular Expression Testing

Two sites, no explanation needed; the first is especially convenient for writing Ruby regexes.

Test Your Server's Network Speed

Enter your server's address and the site will ping your host from mirrors around the world, so you can see how fast your site responds from different parts of the globe.

A Pomodoro Timer

A Pomodoro-technique timer provided by Tencent; if you don't know the Pomodoro technique, Google it.

Online Mobile App Prototyping

Quickly build mobile app prototypes from images; designers and product managers should not miss this. Everything is done online: upload a few design mock-ups, drag things around a bit, and you can deploy to a phone to see realistic interactions (unfortunately the site is not entirely free; registration gives a 30-day free trial).

Managing Collections

Programmers should not miss the next two sites, which help manage your GitHub stars.

Online SVG Editor

Many graphics tools have no built-in support for .svg files; when I wanted to export vector art as PNG files of a specific size, this online tool was a great help.

tinypng Image Compression

An online tool for compressing PNG images, ideal for mobile and front-end developers optimizing image assets. It can reduce image size by about 70% with little visible loss of quality.

年代向錢看 (Era Money Talks): Trump Grants Xi Jinping a Reprieve on New Tariffs! Huawei's Death Sentence Becomes a Stay of Execution, and Tariff Plan B Becomes a Bargaining Chip!

Bypass firewalls by abusing DNS history


Firewall bypass script based on DNS history records. This script will search for DNS A history records and check if the server replies for that domain. 

Tool overview
This script will try to find:
  • the direct IP address of a server behind a firewall like Cloudflare, Incapsula, SUCURI ...
  • an old server which is still running the same (inactive and unmaintained) website, not receiving active traffic because the DNS A record no longer points to it. Because it is an outdated and unmaintained version of the currently active website, it is likely vulnerable to various exploits. It might be easier to find SQL injections there, access the old website's database, and abuse that information against the current, active website.
This script (ab)uses DNS history records, searching for old DNS A records and checking whether the server still replies for the domain. It also outputs a confidence level, based on the similarity between the HTML responses of the possible origin server and the firewall.
The script also fetches the IPs of subdomains, because experience has taught me that subdomain IPs sometimes point to the origin of the main domain.

Usage

Use the script like this:
bash bypass-firewalls-by-DNS-history.sh -d example.com
  • -d --domain: domain to bypass
  • -o --outputfile: output file with IP's
  • -l --listsubdomains: list with subdomains for extra coverage
  • -a --checkall: Check all subdomains for a WAF bypass

Requirements (optional)

jq is needed to parse output and automatically gather subdomains. Install it with apt install jq.

Background information

WAF Bypass explanation

To illustrate what we define as WAF bypass, look at the scheme below.
Scheme WAF Bypass
A normal visitor connects to a website. The initial request is a DNS query asking for the website's IP, so that the client's browser knows where to send the HTTP request. For sites behind Cloudflare or another public WAF, the reply contains an IP address of the WAF itself; your HTTP traffic basically flows through the WAF to the origin web server. The WAF blocks malicious requests and protects against (D)DoS attacks. However, if an attacker knows the IP of the origin webserver and the origin webserver accepts HTTP traffic from the entire internet, the attacker can perform a WAF bypass: send the HTTP traffic directly to the origin webserver instead of passing through the WAF.
This script tries to find that origin IP, so you can connect directly to the origin webserver. Attacks like SQL injection or SSRF are then no longer filtered and can succeed, in contrast to when a WAF in between stops these kinds of attacks.
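A minimal sketch of such a direct connection (1.2.3.4 stands in for a candidate origin IP found by the script): curl's --resolve option pins the hostname to that address without touching DNS, preserving the Host header and SNI, so the response can be compared with the one served through the WAF.

```
# Request the site directly from the candidate origin IP...
curl -s --resolve example.com:443:1.2.3.4 https://example.com/ -o direct.html
# ...and through the normal WAF-fronted path, then compare.
curl -s https://example.com/ -o via-waf.html
diff direct.html via-waf.html >/dev/null && echo "responses match: likely the origin"
```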

Further exploitation

When you find a bypass, you have two options:
  • Edit your hosts file, which is a system-wide solution. You can find the hosts file at /etc/hosts (Linux/Mac) or c:\Windows\System32\Drivers\etc\hosts (Windows). Add an entry like this: 80.40.10.22 vincentcox.com.
  • Burp Suite: Burp Suite Settings
From this moment, your HTTP traffic goes directly to the origin webserver. You can perform a penetration test as usual, without your requests being blocked by the WAF.

How to protect against this script?

  • If you use a firewall, make sure to accept only traffic coming through the firewall, and deny all traffic coming directly from the internet. For example, Cloudflare publishes a list of IPs which you can whitelist with iptables or UFW; deny all other traffic.
  • Make sure that no old servers are still accepting connections, or better, that they are not accessible in the first place.
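As a sketch of the first point, the snippet below generates (without executing) iptables rules that accept web traffic only from a WAF's published ranges and drop everything else. The CIDRs are illustrative placeholders; fetch the current list from your WAF provider.

```shell
#!/bin/sh
# Generate iptables rules restricting ports 80/443 to the WAF's
# published ranges. The CIDRs below are placeholders, not a real list.
WAF_RANGES="103.21.244.0/22 103.22.200.0/22 198.41.128.0/17"
: > waf-rules.sh
for net in $WAF_RANGES; do
  echo "iptables -A INPUT -p tcp -m multiport --dports 80,443 -s $net -j ACCEPT" >> waf-rules.sh
done
echo "iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP" >> waf-rules.sh
cat waf-rules.sh
```

Review the generated waf-rules.sh before running it as root.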

For who is this script?

This script is handy for:
  • Security auditors
  • Web administrators
  • Bug bounty hunters
  • Blackhatters I guess ¯\_(ツ)_/¯

Web services used in this script

The following services are used:

FAQ

Why in Bash and not in Python?
It started out as a few curl one-liners, became a bash script, the code was extended more and more, and the regret of not using Python grew accordingly.
What if I find more subdomains with my own tools?
I know. I cannot expect everyone to install all these DNS brute-force and enumeration tools, and I don't know beforehand in which folder these tools are placed or under which alias they are called. You can still provide your own list with -l, so you can feed the output of those subdomain tools into this tool. The expected input is one full subdomain per line.
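For example (the hostnames below are placeholders; the required format is simply one subdomain per line, from whatever enumeration tool you prefer):

```shell
# Build a subdomain list, one hostname per line, then pass it with -l:
printf '%s\n' www.example.com mail.example.com dev.example.com > subs.txt
# bash bypass-firewalls-by-DNS-history.sh -d example.com -l subs.txt
```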

基于 perl的zonemaster项目


Introduction

Zonemaster is a software package that validates the quality of a DNS delegation. The ambition of the Zonemaster project is to develop and maintain an open source DNS validation tool, offering improved performance over existing tools and providing extensive documentation which could be re-used by similar projects in the future.
Zonemaster consists of several modules or components. The components will help different types of users to check domain servers for configuration errors and generate a report that will assist in fixing the errors.

Background

DNSCheck from IIS and Zonecheck from AFNIC are two old software packages that validate the quality of a DNS delegation. AFNIC and IIS came together to develop a new DNS validation tool from scratch under the name Zonemaster. Zonemaster intends to be a major rewrite of Zonecheck and DNSCheck, and aims to implement the best parts of both.

Purpose

The components developed as part of the Zonemaster project will help different types of users to check domain servers for configuration errors and generate a report that will assist in fixing the errors.
The ambition of the Zonemaster project is to develop and maintain an open source DNS validation tool, offering improved performance over existing tools and providing extensive documentation which could be re-used by similar projects in the future.

Documentation

This is the main project repository. In this repository, documentation regarding the design, requirements and specifications for the Zonemaster implementation is available. We also have a brief user guide.

Prerequisites

Zonemaster comes with documentation for and has been tested on the operating systems and processor architecture listed below.

Supported processor architectures

  • x86_64 / amd64

Supported operating system versions

  • CentOS 7
  • Debian 8
  • Debian 9
  • FreeBSD 11.2
  • FreeBSD 12.0
  • Ubuntu 16.04
  • Ubuntu 18.04

Supported database engine versions

Operating System   MySQL      PostgreSQL
CentOS 7           5.6        9.3
Debian 8           5.5        9.4
Debian 9           10.1 (*)   9.6
FreeBSD 11.2       5.6        9.5
FreeBSD 12.0       5.6        9.5
Ubuntu 16.04       5.7        9.5
Ubuntu 18.04       5.7        10
*) For Debian 9 MariaDB is supported, not MySQL.
Zonemaster Backend has been tested with the combination of OS and database engine version listed in the table above. Zonemaster uses functionality introduced in PostgreSQL version 9.3, and earlier versions are as such not supported.

Supported Perl versions

Operating System   Perl
CentOS 7           5.16
Debian 8           5.20
Debian 9           5.24
FreeBSD 11.2       5.28
FreeBSD 12.0       5.28
Ubuntu 16.04       5.22
Ubuntu 18.04       5.26
Zonemaster requires Perl version 5.14.2 or higher. Zonemaster has been tested with the default version of Perl in the operating systems listed in the table above.

Supported Client Browser versions

Zonemaster GUI is tested against the browsers, versions and operating systems listed below, and should work with similar configurations.
Operating System   Browser   Version
Ubuntu 18.04       Firefox   64, 65, 66
Ubuntu 18.04       Chrome    66
Windows 10         Firefox   64, 65, 66
Windows 10         Chrome    73
MacOS              Firefox   65
MacOS              Chrome    73
Zonemaster GUI was tested manually or with testing tools. See the Zonemaster-gui repository for more details.

Localization

Zonemaster comes with localization for these locales:
  • en.UTF-8
  • fr.UTF-8
  • sv.UTF-8
  • da.UTF-8 (*)
*) Some strings have not yet been translated to Danish.

Zonemaster and its components

The Zonemaster product consists of the main part and five components. The main part consists of specifications and documentation for the Zonemaster product, and is stored in main Zonemaster Github repository (Zonemaster).
All the software for the Zonemaster project belong to the five components, each component being stored in its own Github repository (listed below).
The software has not yet been packaged for any operating systems, and you have to install most of it from the source code. The recommended method is to install from CPAN (except for Zonemaster-GUI), but it is possible to install directly from clones of the Github repositories. Zonemaster-GUI has no Perl code, and is installed directly from its repository at Github.
The Zonemaster Product includes the following components:

Installation

To install Zonemaster, start with installation of Zonemaster-Engine (which will draw in Zonemaster-LDNS) and then continue with the other parts. You will find installation instructions from the links above.

Versions

Go to the release list of this repository to find the latest version of Zonemaster and the versions of the specific components. Be sure to read the release note of each component before installing or upgrading.

Participation

You can submit code by forking this repository and creating pull requests. When you create a pull request, please select the "develop" branch in the relevant Zonemaster repository.
You can follow the project in these two mailing lists:
----

Installation

This document describes prerequisites, installation and post-install sanity checking for Zonemaster::Engine, and rounds off with a few pointers to interfaces for Zonemaster::Engine. For an overview of the Zonemaster product, please see the main Zonemaster repository.

Prerequisites

For details on supported operating system versions and Perl versions for Zonemaster::Engine, see the declaration of prerequisites.

Installation

This instruction covers the following operating systems:

Installation on CentOS

  1. Install the EPEL 7 repository:
    sudo yum --enablerepo=extras install epel-release
  2. Make sure the development environment is installed:
    sudo yum groupinstall "Development Tools"
  3. Install binary packages:
    sudo yum install cpanminus libidn-devel openssl-devel perl-Clone perl-core perl-Devel-CheckLib perl-File-ShareDir perl-File-Slurp perl-IO-Socket-INET6 perl-JSON-PP perl-List-MoreUtils perl-Module-Find perl-Moose perl-Net-IP perl-Pod-Coverage perl-Readonly-XS perl-Test-Differences perl-Test-Exception perl-Test-Fatal perl-Test-Pod perl-YAML
  4. Install packages from CPAN:
    sudo cpanm Locale::Msgfmt Locale::TextDomain Mail::RFC822::Address Module::Install Module::Install::XSUtil Test::More Text::CSV
  5. Install Zonemaster::LDNS and Zonemaster::Engine:
    sudo cpanm Zonemaster::LDNS Zonemaster::Engine

Installation on Debian

  1. Refresh the package information
    sudo apt update
  2. Install dependencies from binary packages:
    sudo apt install autoconf automake build-essential cpanminus libclone-perl libdevel-checklib-perl libfile-sharedir-perl libfile-slurp-perl libidn11-dev libintl-perl libio-socket-inet6-perl libjson-pp-perl liblist-moreutils-perl liblocale-msgfmt-perl libmail-rfc822-address-perl libmodule-find-perl libmodule-install-xsutil-perl libmoose-perl libnet-ip-perl libpod-coverage-perl libreadonly-xs-perl libssl-dev libtest-differences-perl libtest-exception-perl libtest-fatal-perl libtest-pod-perl libtext-csv-perl libtool m4
  3. Install dependencies from CPAN:
    sudo cpanm Module::Install Test::More
  4. Install Zonemaster::LDNS and Zonemaster::Engine:
    sudo cpanm Zonemaster::LDNS Zonemaster::Engine

Installation on FreeBSD

  1. Become root:
    su -l
  2. Update list of package repositories:
    Create the file /usr/local/etc/pkg/repos/FreeBSD.conf with the following content, unless it is already updated:
    FreeBSD: {
    url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
    }
  3. Check or activate the package system:
    Run the following command, and accept the installation of the pkg package if suggested.
    pkg info -E pkg
  4. Update local package repository:
    pkg update -f
  5. Install dependencies from binary packages:
    pkg install libidn p5-App-cpanminus p5-Clone p5-Devel-CheckLib p5-File-ShareDir p5-File-Slurp p5-IO-Socket-INET6 p5-JSON-PP p5-List-MoreUtils p5-Locale-libintl p5-Locale-Msgfmt p5-Mail-RFC822-Address p5-Module-Find p5-Module-Install p5-Module-Install-XSUtil p5-Moose p5-Net-IP p5-Pod-Coverage p5-Readonly-XS p5-Test-Differences p5-Test-Exception p5-Test-Fatal p5-Test-Pod p5-Text-CSV
  6. Install Zonemaster::LDNS and Zonemaster::Engine:
    cpanm Zonemaster::LDNS Zonemaster::Engine

Installation on Ubuntu

Use the procedure for installation on Debian.

Post-installation sanity check

Make sure Zonemaster::Engine is properly installed.
time perl -MZonemaster::Engine -E 'say join "\n", Zonemaster::Engine->test_module("BASIC", "zonemaster.net")'
The command is expected to take a few seconds and print some results about the delegation of zonemaster.net.

What to do next



"I Don't Want to Say" - Theme Song of the TV Series 外来妹


A 1991 TV series, broadcast on CCTV.


SDNS

A lightweight, fast recursive DNS server with DNSSEC support.


Installation

cd $GOPATH
go get github.com/semihalev/sdns

or run with the Docker image:
docker run -d --name sdns -p 53:53 -p 53:53/udp -p 853:853 -p 8053:8053 -p 8080:8080 sdns
  • Port 53 DNS server
  • Port 853 DNS-over-TLS server
  • Port 8053 DNS-over-HTTPS server
  • Port 8080 HTTP API

Building

$ go build

Flags

Flag     Description
config   Location of the config file; if not found, it will be generated

Configs

Key               Description
version           Config version
blocklists        List of remote blocklists
blocklistdir      List of locations to recursively read blocklists from (warning: every file found is assumed to be a hosts file or domain list)
loglevel          Log verbosity level: crit, error, warn, info, debug
bind              Address to bind to for the DNS server. Default: :53
bindtls           Address to bind to for the DNS-over-TLS server. Default: :853
binddoh           Address to bind to for the DNS-over-HTTPS server. Default: :8053
tlscertificate    TLS certificate file path
tlsprivatekey     TLS private key file path
outboundips       Outbound IP addresses; if you set multiple, sdns can use a random outbound IP address
rootservers       DNS root servers
root6servers      DNS root IPv6 servers
rootkeys          DNS root keys for DNSSEC
fallbackservers   Fallback server IP addresses
api               Address to bind to for the HTTP API server; leave blank to disable
nullroute         IPv4 address to forward blocked queries to
nullroutev6       IPv6 address to forward blocked queries to
accesslist        Which clients are allowed to make queries
timeout           Query timeout for DNS lookups (a duration). Default: 5s
connecttimeout    Connect timeout for DNS lookups (a duration). Default: 2s
hostsfile         Enables serving zone data from a hosts file; leave blank to disable
expire            Default cache TTL in seconds. Default: 600
cachesize         Cache size (total records in cache). Default: 256000
maxdepth          Maximum recursion depth for nameservers. Default: 30
ratelimit         Query-based rate limit per second; 0 to disable. Default: 0
clientratelimit   Client-IP-based rate limit per minute; 0 to disable. No limit if the client supports EDNS cookies. Default: 0
blocklist         Manual blocklist entries
whitelist         Manual whitelist entries

Server Configuration Checklist

  • Increase file descriptor on your server

Features

  • Linux/BSD/Darwin/Windows supported
  • DNS RFC compatibility
  • DNS lookups within listed servers
  • DNS caching
  • DNSSEC validation
  • DNS over TLS support
  • DNS over HTTPS support
  • Middleware Support
  • RTT priority within listed servers
  • EDNS Cookie Support (client<->server)
  • Basic IPv6 support (client<->server)
  • Query based ratelimit
  • IP based ratelimit
  • Access list
  • Prometheus basic query metrics
  • Black-hole internet advertisements and malware servers
  • HTTP API support
  • Outbound IP selection

Usage:
sudo sdns -config=sdns.toml
(The config file sdns.toml is generated automatically in the current directory the first time sdns runs.)
It can be used as a DNS proxy server, for example in combination with a VPN.

Example:
sudo wg-quick up wg0
networksetup -setdnsservers "Wi-Fi" 127.0.0.1
sudo sdns -config=sdns.toml

RouteDNS - DNS stub resolver


RouteDNS acts as a stub resolver that offers flexible configuration options with a focus on providing privacy as well as resiliency. It supports several DNS protocols such as plain UDP and TCP, DNS-over-TLS and DNS-over-HTTPS as input and output. In addition it's possible to build complex configurations allowing routing of queries based on query name, type or source address. Upstream resolvers can be grouped in various ways to provide failover, load-balancing, or performance.
Features:
  • Support for DNS-over-TLS (DoT)
  • Support for DNS-over-HTTPS (DoH)
  • Support for plain DNS, UDP and TCP for incoming and outgoing requests
  • Connection reuse and pipelining queries for efficiency
  • Multiple failover and load-balancing algorithms
  • Custom blocklists
  • Routing of queries based on query type, query name, or client IP
  • Written in Go - Platform independent
TODO:
  • DNS-over-TLS listeners
  • DNS-over-HTTP listeners
  • Configurable TLS options, like keys and certs
  • DoT and DoH listeners should support padding as per RFC7830 and RFC8467
  • Introduce logging levels
Note: RouteDNS is under active development and interfaces as well as configuration options are likely going to change

Installation

Get the binary
go get -u github.com/folbricht/routedns/cmd/routedns
An example systemd service file is provided here
Example configuration files for a number of use-cases can be found here

Configuration

RouteDNS uses a config file in TOML format which is passed to the tool as argument in the command line. The configuration is broken up into sections, not all of which are necessary for simple uses.

Resolvers

The [resolvers] section is used to define the upstream resolvers and the protocol to use when talking to them. Each resolver requires a unique identifier which may be referenced in the following sections. Merely defining a resolver does not mean it is used; this section can contain unused upstream resolvers.
The following protocols are supported:
  • udp - Plain (unencrypted) DNS over UDP
  • tcp - Plain (unencrypted) DNS over TCP
  • dot - DNS-over-TLS
  • doh - DNS-over-HTTP
The following example defines several well-known resolvers: one using DNS-over-TLS, one using DNS-over-HTTP, while the other two use plain DNS.
[resolvers]

[resolvers.cloudflare-dot]
address = "1.1.1.1:853"
protocol = "dot"

[resolvers.cloudflare-doh]
address = "https://1.1.1.1/dns-query{?dns}"
protocol = "doh"

[resolvers.google-udp-8-8-8-8]
address = "8.8.8.8:53"
protocol = "udp"

[resolvers.google-udp-8-8-4-4]
address = "8.8.4.4:53"
protocol = "udp"
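The {?dns} in the DoH address above is an RFC 6570 URI template: with the GET method the client fills it in with the base64url-encoded, unpadded DNS query, as specified by RFC 8484. A sketch of that expansion (the helper function is illustrative, not RouteDNS code):

```python
import base64

def expand_doh_url(template, query_bytes):
    # RFC 8484 GET: dns=<base64url(query)> with "=" padding stripped
    dns = base64.urlsafe_b64encode(query_bytes).rstrip(b"=").decode()
    return template.replace("{?dns}", "?dns=" + dns)

print(expand_doh_url("https://1.1.1.1/dns-query{?dns}", b"abc"))
# https://1.1.1.1/dns-query?dns=YWJj
```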

Groups

Multiple resolvers can be combined into a group to implement different failover or load-balancing algorithms within that group. Again, each group requires a unique identifier.
Each group has a resolvers field, which is an array of one or more resolver identifiers. These can either be resolvers defined above, or other groups defined earlier.
The type determines which algorithm is being used. Available types:
  • round-robin - Each resolver in the group receives an equal number of queries. There is no failover.
  • fail-rotate - One resolver is active. If it fails, the next becomes active and the request is retried. If the last one fails, the first becomes active again. There's no time-based automatic fail-back.
  • fail-back - Similar to fail-rotate but will attempt to fall back to the original order (prioritizing the first) if there are no failures for a minute.
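The fail-rotate behavior described above can be sketched as follows (an illustration of the algorithm as described, not RouteDNS's actual code):

```python
class FailRotate:
    """One resolver is active; on failure, rotate to the next and retry."""

    def __init__(self, resolvers):
        self.resolvers = resolvers
        self.active = 0

    def resolve(self, query):
        for _ in range(len(self.resolvers)):
            try:
                return self.resolvers[self.active](query)
            except Exception:
                # advance to the next resolver; wraps around after the last one
                self.active = (self.active + 1) % len(self.resolvers)
        raise RuntimeError("all resolvers failed")

def always_fails(query):  # stands in for an unreachable upstream
    raise IOError("timeout")

group = FailRotate([always_fails, lambda q: "answer-from-b"])
print(group.resolve("example.com."))  # answer-from-b
print(group.active)                   # 1 (the second resolver stays active)
```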
In this example, two upstream resolvers are grouped together and will be used in alternation:
[groups]

[groups.google-udp]
resolvers = ["google-udp-8-8-8-8", "google-udp-8-8-4-4"]
type = "round-robin"

Routers

Routers are used to send queries to specific upstream resolvers, groups, or other routers based on the query type or name. Routers, too, require a unique identifier. Each router contains at least one route. Routes are evaluated in the order they are defined, and the first match wins. Typically the last route should not have a type or name, making it the default route.
A route has the following fields:
  • type - If defined, only matches queries of this type
  • name - A regular expression that is applied to the query name. Note that dots in domain names need to be escaped
  • source - Network in CIDR notation. Used to route based on client IP.
  • resolver - The identifier of a resolver, group, or another router that was defined earlier.
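The first-match evaluation described above can be sketched like this (illustrative only, not RouteDNS's implementation; the pick_resolver helper is hypothetical):

```python
import ipaddress
import re

def pick_resolver(routes, qname, qtype, client_ip):
    # Routes are evaluated in order; the first route whose defined
    # criteria all match wins. A route with no criteria matches anything.
    for route in routes:
        if "type" in route and route["type"] != qtype:
            continue
        if "name" in route and not re.search(route["name"], qname):
            continue
        if "source" in route and \
                ipaddress.ip_address(client_ip) not in ipaddress.ip_network(route["source"]):
            continue
        return route["resolver"]
    return None

routes = [
    {"type": "MX", "name": r"(^|\.)google\.com\.$", "resolver": "google-udp"},
    {"resolver": "cloudflare-dot"},  # no criteria: the default route
]
print(pick_resolver(routes, "mail.google.com.", "MX", "192.168.1.5"))  # google-udp
print(pick_resolver(routes, "example.com.", "A", "192.168.1.5"))       # cloudflare-dot
```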
Below, router1 sends all queries for the MX record of google.com and all its sub-domains to a group consisting of Google's DNS servers. Anything else is sent to a DNS-over-TLS resolver.
[routers]

[routers.router1]

[[routers.router1.routes]]
type = "MX"
name = '(^|\.)google\.com\.$'
resolver="google-udp"

[[routers.router1.routes]] # A route without type and name becomes the default route for all other queries
resolver="cloudflare-dot"

Listeners

Listeners specify how queries are received and how they should be handled. Listeners can send queries to routers, groups, or to resolvers directly. Listeners have a listen address, a protocol (udp, tcp, dot, or doh), and specify the handler identifier in resolver.
[listeners]

[listeners.local-udp]
address = "127.0.0.1:53"
protocol = "udp"
resolver = "router1"

[listeners.local-tcp]
address = "127.0.0.1:53"
protocol = "tcp"
resolver = "router1"

Use-cases / Examples

Use case 1: Use DNS-over-TLS for all queries locally

In this example, the goal is to send all DNS queries on the local machine encrypted via DNS-over-TLS to Cloudflare's DNS server 1.1.1.1. For this, the nameserver IP in /etc/resolv.conf is changed to 127.0.0.1. Since there is only one upstream resolver, and everything should be sent there, no group or router is needed. Both listeners are using the loopback device as only the local machine should be able to use RouteDNS. The config file would look like this:
[resolvers]

[resolvers.cloudflare-dot]
address = "1.1.1.1:853"
protocol = "dot"

[listeners]

[listeners.local-udp]
address = "127.0.0.1:53"
protocol = "udp"
resolver = "cloudflare-dot"

[listeners.local-tcp]
address = "127.0.0.1:53"
protocol = "tcp"
resolver = "cloudflare-dot"

Use case 2: Prefer secure DNS in a corporate environment

In a corporate environment it's often necessary to use the company's potentially slow and insecure DNS servers, since only these servers are able to resolve some resources hosted in the corporate network. A router can be used to prefer secure DNS whenever possible while still being able to resolve internal hosts.
[resolvers]

# Define the two company DNS servers. Both use plain (insecure) DNS over UDP
[resolvers.mycompany-dns-a]
address = "10.0.0.1:53"
protocol = "udp"

[resolvers.mycompany-dns-b]
address = "10.0.0.2:53"
protocol = "udp"

# Define the Cloudflare DNS-over-HTTPS resolver (GET methods) since that is most likely allowed outbound
[resolvers.cloudflare-doh-1-1-1-1-get]
address = "https://1.1.1.1/dns-query{?dns}"
protocol = "doh"
doh = { method = "GET" }

[groups]

# Since the company DNS servers have a habit of failing, group them into a group that switches on failure
[groups.mycompany-dns]
resolvers = ["mycompany-dns-a", "mycompany-dns-b"]
type = "fail-rotate"

[routers]

[routers.router1]

# Send all queries for '*.mycompany.com.' to the company's DNS, possibly through a VPN tunnel
[[routers.router1.routes]]
name = '(^|\.)mycompany\.com\.$'
resolver="mycompany-dns"

# Everything else can go securely to Cloudflare
[[routers.router1.routes]]
resolver="cloudflare-doh-1-1-1-1-get"

[listeners]

[listeners.local-udp]
address = "127.0.0.1:53"
protocol = "udp"
resolver = "router1"

[listeners.local-tcp]
address = "127.0.0.1:53"
protocol = "tcp"
resolver = "router1"

Use case 3: Restrict access to potentially harmful content

The goal here is to single out children's devices on the network and apply a custom blocklist to their DNS resolution. Anything on the blocklist will fail to resolve with an NXDOMAIN response. Names that aren't on the blocklist are then sent on to CleanBrowsing for any further filtering. All other devices on the network will have unfiltered access via Cloudflare's DNS server, and all queries are done using DNS-over-TLS. The config file can also be found here
[resolvers]

[resolvers.cleanbrowsing-dot]
address = "family-filter-dns.cleanbrowsing.org:853"
protocol = "dot"

[resolvers.cloudflare-dot]
address = "1.1.1.1:853"
protocol = "dot"

[groups]

[groups.cleanbrowsing-filtered]
type = "blocklist"
resolvers = ["cleanbrowsing-dot"] # Anything that passes the filter is sent on to this resolver
blocklist = [ # Define the names to be blocked
'(^|\.)facebook.com.$',
'(^|\.)twitter.com.$',
]

[routers]

[routers.router1]

[[routers.router1.routes]]
source = "192.168.1.123/32" # The IP or network that will use the blocklist, in CIDR notation
resolver="cleanbrowsing-filtered"

[[routers.router1.routes]] # Default for everyone else
resolver="cloudflare-dot"

[listeners]

[listeners.local-udp]
address = ":53"
protocol = "udp"
resolver = "router1"

[listeners.local-tcp]
address = ":53"
protocol = "tcp"
resolver = "router1"

Links


dns解析服务器程序-discodns


Build Status
An authoritative DNS nameserver that queries an etcd database of domains and records.

Key Features

  • Full support for a variety of resource records
    • Both IPv4 (A) and IPv6 (AAAA) addresses
    • CNAME alias records
    • Delegation via NS and SOA records
    • SRV and PTR for service discovery and reverse domain lookups
  • Multiple resource records of different types per domain (where valid)
  • Support for wildcard domains
  • Support for TTLs
    • Global default on all records
    • Individual TTL values for individual records
  • Runtime and application metrics are captured regularly for monitoring (stdout or Graphite)
  • Incoming query filters

Production Readiness

We've been running discodns in production for several months now without issue, though that is not to say it's bug-free! If you find any issues, please submit a bug report or pull request.

Why did we build discodns?

When building infrastructure of sufficient complexity -- especially elastic infrastructure -- we've found it's really valuable to have a fast and flexible system for service identity and discovery. Crucially, it has to support different naming conventions and work with a wide variety of platform tooling and service software. DNS has proved itself to be capable in that role for over 25 years.
Since discodns is not a recursive resolver, nor does it implement its own cache, you should front queries with a forwarder (BIND, for example) as seen in the diagram below.
         +-----------+   +---------+
         |  Servers  |   |  Users  |
         +-----+-----+   +----+----+
               |              |
         +-----v--------------v----+
         |                         |
         |    Forwarders (BIND)    |
         |                         |
         +----+-------+--------+---+
              |       |        |
    +---------v--+    |    +---v------------+
    |  discodns  |    |    |   Intertubes   |
    +------------+    |    +----------------+
              +-------v-------+
              |   Something   |
              |     Else      |
              +---------------+
This "pluggable" DNS architecture allows us to mix a variety of tools to achieve a very flexible global discovery system.

Why etcd?

We chose etcd as it's a simple and distributed k/v store. It's also commonly used (and designed) for cluster management, so can behave as the canonical point for discovering services throughout a network. Services can utilize the same etcd cluster to both publish and subscribe to changes in the domain name system, as well as other orchestration needs they may have.
Why not ZooKeeper? The etcd API is much simpler to use, and etcd uses Raft instead of Paxos, which makes it simpler to understand and easier to manage.
Another attractive quality about etcd is the ability to continue serving (albeit stale) read queries even when a consensus cannot be reached, allowing the cluster to enter a semi-failed state where it cannot accept writes, but it will serve reads. This kind of graceful service degradation is very useful for a read-heavy system, such as DNS.
Currently, discodns has been tested against etcd 3.1.2.

Getting Started

The discodns project is written in Go, and uses an extensive library (miekg/dns) to provide the actual implementation of the DNS protocol.
You'll need to compile from source, though a Makefile is provided to make this easier. Before starting, you'll need to ensure you have Go (1.8+) installed.

Building

It's simple enough to compile from source...
cd discodns
make
discodns uses godep to manage dependency versions. make will run godep restore, which modifies your go workspace to pin dependency versions.
If you change dependency versions (using go get -u ..pkg.., or manually bumping the git rev of the package in your go workspace), then you must run godep save and commit the changes to the Godeps directory.

Running

It's as simple as launching the binary to start a DNS server listening on port 53 (tcp+udp) and accepting requests. You need to ensure you also have an etcd cluster up and running, which you can read about here.
Note: You can enable verbose logging using the -v argument
Note: Since port 53 is a privileged port, you'll need to run discodns as root. You should not do this in production.
cd discodns/build/
sudo ./bin/discodns --etcd=127.0.0.1:4001

Try it out

It's incredibly easy to see your own domains come to life: simply insert a key for your record into etcd and you're ready to go! Here we'll insert a custom A record for discodns.net pointing to 10.1.1.1.
curl -L http://127.0.0.1:4001/v2/keys/net/discodns/.A -XPUT -d value="10.1.1.1"
{"action":"set","node":{"key":"/net/discodns/.A","value":"10.1.1.1","modifiedIndex":11,"createdIndex":11}}
$ dig @localhost discodns.net.
;<<>> DiG 9.8.3-P1 <<>> @localhost discodns.net.
; .. truncated ..

;; QUESTION SECTION:
;discodns.net. IN A

;; ANSWER SECTION:
discodns.net. 0 IN A 10.1.1.1

Authority

If you're not familiar with the DNS specification: to behave correctly as an authoritative nameserver, each domain needs its own SOA (Start Of Authority) and NS records to assert its authority. Since discodns can support multiple authoritative domains, it's up to you to enter an SOA record for each domain you use. Here's an example of creating this record for discodns.net.

SOA

curl -L http://127.0.0.1:4001/v2/keys/net/discodns/.SOA -XPUT -d value=$'ns1.discodns.net.\tadmin.discodns.net.\t3600\t600\t86400\t10'
{"action":"set","node":{"key":"/net/discodns/.SOA","value":"...","modifiedIndex":11,"createdIndex":11}}
Let's break out the value and see what we've got.
ns1.discodns.net     << - This is the root, master nameserver for this delegated domain
admin.discodns.net << - This is the "admin" email address, note the first segment is actually the user (`admin@discodns.net`)
3600 << - Time in seconds for any secondary DNS servers to cache the zone (used with `AXFR`)
600 << - Interval in seconds for any secondary DNS servers to re-try in the event of a failed zone update
86400 << - Expiry time in seconds for any secondary DNS server to drop the zone data (too old)
10 << - Minimum TTL that applies to all DNS entries in the zone
These are all tab-separated in the PUT request body. (The $'' is just a convenience to neatly escape tabs in bash; you could use regular bash strings, with \u0009 or %09 for the tab chars, too)
Note: If you're familiar with SOA records, you'll probably notice a value missing from above. The "Serial Number" (which would be in the 3rd position) is filled in automatically by discodns, which uses the current index of the etcd cluster to describe the current version of the zone. (TODO)

NS

Let's add the two NS records we need for our DNS cluster.
curl -L http://127.0.0.1:4001/v2/keys/net/discodns/.NS/ns1 -XPUT -d value=ns1.discodns.net.
{"action":"set","node":{"key":"/net/discodns/.NS/ns1","value":"...","modifiedIndex":12,"createdIndex":12}}
curl -L http://127.0.0.1:4001/v2/keys/net/discodns/.NS/ns2 -XPUT -d value=ns2.discodns.net.
{"action":"set","node":{"key":"/net/discodns/.NS/ns2","value":"...","modifiedIndex":13,"createdIndex":13}}
Don't forget to ensure you also add A records for the ns{1,2}.discodns.net domains to ensure they can resolve to IPs.

Storage

The record names are used as etcd key prefixes. They are in reverse-domain format, e.g. discodns.net equates to the key net/discodns. See the examples below:
  • discodns.net. -> A record -> [10.1.1.1, 10.1.1.2]
    • /net/discodns/.A/foo -> 10.1.1.1
    • /net/discodns/.A/bar -> 10.1.1.2
You'll notice the .A folder on the end of the reverse domain; this signifies to the DNS resolver that the values beneath are A records. You can have any number of nested keys within this folder, allowing for some very interesting automation patterns. Multiple keys within this folder represent multiple records for the same DNS entry. If you want to enforce that only one value exists for a record type (CNAME, for example) you can use a single key instead of a directory (/net/discodns/.CNAME -> foo.net).
$ dig @localhost discodns.net.
;<<>> DiG 9.8.3-P1 <<>> @localhost discodns.net.
; .. truncated ..

;; QUESTION SECTION:
;discodns.net. IN A

;; ANSWER SECTION:
discodns.net. 0 IN A 10.1.1.1
discodns.net. 0 IN A 10.1.1.2

Record Types

Only a select few record types are supported right now. These are listed here:
  • A (ipv4)
  • AAAA (ipv6)
  • TXT
  • CNAME
  • NS
  • PTR
  • SRV

TTLs (Time To Live)

You can configure discodns with a default TTL (the default default is 300 seconds) using the --default-ttl command line option. This means every single DNS resource record returned will have a TTL of the default value, unless otherwise specified on a per-record basis.
To change the TTL of any given record, you can use the .ttl suffix. For example, to give the discodns.net. A record a TTL of 18000 seconds (5 hours), the database might look like this...
  • /net/discodns/.A -> 10.1.1.1
  • /net/discodns/.A.ttl -> 18000
If you have multiple nested records, you can still use the suffix. In this example, the record identified by foo will use the default TTL, and bar will use the specified TTL of 18000 seconds (5 hours).
  • /net/discodns/.TXT/foo -> foo
  • /net/discodns/.TXT/bar -> bar
  • /net/discodns/.TXT/bar.ttl -> 18000
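The key scheme above (reverse domain, a .TYPE folder, and an optional .ttl suffix) can be sketched as follows; the etcd_key helper is hypothetical, not part of discodns:

```python
def etcd_key(domain, rtype, ttl=False):
    # "discodns.net" -> "/net/discodns", then append the record-type folder
    labels = domain.rstrip(".").split(".")
    prefix = "/" + "/".join(reversed(labels))
    return prefix + "/." + rtype + (".ttl" if ttl else "")

print(etcd_key("discodns.net", "A"))            # /net/discodns/.A
print(etcd_key("discodns.net", "A", ttl=True))  # /net/discodns/.A.ttl
```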

Value storage formats

All records in etcd are, of course, just strings. Most record types only require simple string values with no special considerations beyond their natural constraints and types within DNS (valid IP addresses, for example).
In cases where multiple pieces of information are needed for a record, they are separated with single tab characters.
These more complex cases are:

SOA

Consists of the following tab-delimited fields in order:
  • Primary nameserver
  • 'Responsible Person' (admin email)
  • Refresh
  • Retry
  • Expire
  • Minimum TTL
See the SOA example above for more details

SRV

Consists of the following tab-delimited fields in order:
  • Priority
    • For clients wishing to choose between multiple service instances
    • 16bit unsigned int
  • Weight
    • For clients wishing to choose between multiple service instances
    • 16bit unsigned int
  • Port
    • The standard port number where the service can be found on the host
    • 16bit unsigned int
  • Target
    • A regular domain name for the host where the service can be found
    • Must be resolvable to an A/AAAA record
For more about the Priority and Weight fields, including the algorithm to use when choosing, see RFC2782.
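Following the field order above, an SRV value is just four tab-separated fields. A hypothetical helper to compose one (not part of discodns):

```python
def srv_value(priority, weight, port, target):
    # priority, weight, and port are 16-bit unsigned ints; target is a domain name
    for v in (priority, weight, port):
        if not 0 <= v <= 0xFFFF:
            raise ValueError("field must fit in 16 bits")
    return "\t".join((str(priority), str(weight), str(port), target))

# The value that would be PUT into etcd for an SRV record:
print(srv_value(10, 50, 8080, "web1.discodns.net."))
```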

Metrics

The discodns server monitors a wide range of runtime and application metrics. By default these metrics are dumped to stderr every 30 seconds; this can be configured using the -metrics argument (set it to 0 to disable metrics completely).
You can also use the -graphite arguments to ship metrics to your own Graphite server instead.

Query Filters

In some situations, it can be useful to restrict a discodns nameserver so it avoids querying etcd for certain domains or record types. For example, your network may not support IPv6 and will therefore never store any internal AAAA records, so querying etcd for them is wasted effort, as those queries will never return values.
This can be achieved with the --accept and --reject options to discodns. With these options, queries will be tested against the acceptance criteria before hitting etcd, or the internal resolver. This is a very cheap operation, and can drastically improve performance in some cases.
For example, if I only want to allow PTR lookups in the in-addr.arpa. domain space (for reverse domain queries) I can use the --accept="in-addr.arpa:PTR" argument. The nameserver is now going to reject any queries that aren't reverse lookups.
--accept="discodns.net:" # Accept any queries within the discodns.net domain
--accept="discodns.net:SRV,PTR" # Accept only PTR and SRV queries within the discodns domain
--reject="discodns.net:AAAA" # Reject any queries within the discodns.net domain that are for IPv6 lookups.
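One way to read the domain:types filter syntax above, sketched in code (an interpretation of the README, not discodns's actual parser; filter_matches is hypothetical):

```python
def filter_matches(spec, qname, qtype):
    # spec is "domain:TYPE1,TYPE2"; an empty type list matches any type
    domain, _, types = spec.partition(":")
    qname = qname.rstrip(".")
    in_domain = qname == domain or qname.endswith("." + domain)
    type_ok = types == "" or qtype in types.split(",")
    return in_domain and type_ok

print(filter_matches("in-addr.arpa:PTR", "1.0.168.192.in-addr.arpa.", "PTR"))  # True
print(filter_matches("discodns.net:AAAA", "www.discodns.net.", "A"))           # False
```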
from https://github.com/duedil-ltd/discodns

Dockerflix

Docker-based SNI proxy for watching Netflix, Hulu, MTV, Vevo, Crackle, ABC, NBC, PBS...
Want to watch U.S. Netflix, Hulu, MTV, Vevo, Crackle, ABC, NBC, PBS, HBO...?
Got a Dnsmasq capable router at home, a Raspberry Pi or similar Linux computer?
Got a virtual private server with a U.S. IP address?
Then you've come to the right place!
Simply said, Dockerflix emulates what companies like Unblock-Us and the like have been doing for years. Dockerflix uses a man-in-the-middle approach to reroute certain requests through a (your) server in the U.S. and thus tricks geo-fenced on-demand streaming media providers into thinking the request originated from within the U.S. This so-called DNS unblocking approach differs vastly from a VPN.
Since my other DNS unblocking project wasn't easy to install and was hard to maintain, I came up with a new variant using dlundquist's sniproxy instead of HAProxy. To make the installation a breeze, I boxed the proxy into a Docker container and wrote a small, Python-based Dnsmasq/BIND configuration generator. And voilà: DNS-unblocking as a service (DaaS) ;-)
Thanks to sniproxy's ability to proxy requests based on a wildcard/regex match it's now so much easier to add support for a service. Now it's usually enough to just add the main domain name to the proxy and DNS configuration and Dockerflix will be able to hop the geo-fence in most cases. Since most on-demand streaming media providers are using an off-domain CDN for the video stream, only web site traffic gets sent through Dockerflix. A few exceptions may apply though, notably if the stream itself is geo-fenced.
Dockerflix provides scripts to create zone files for Dnsmasq and BIND. Please be aware that Dockerflix doesn't come with a recursive DNS resolver. I'm assuming you're setting up a private DNS resolver at home, either using your router or some Linux mini computer like the Raspberry Pi. Open resolvers pose a significant threat to the global network infrastructure. Please see here for more information why it's a big no-no.

Docker installation

This will install the latest Docker version on Ubuntu 12.04 LTS and 14.04 LTS:
wget -qO- https://get.docker.io/gpg | sudo apt-key add -
echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install lxc-docker python-pip
pip install docker-compose

Usage

Clone this Github repository and build/run the Dockerflix container using docker-compose. It may take a while to build the image, so please be patient:
docker-compose up -d us
Make sure TCP ports 80 and 443 on your VPS are not in use by some other software like a pre-installed web server. Check with netstat -tulpn when in doubt. Make sure both ports are accessible from the outside if using an inbound firewall on the VPS server.
From now on, the Dockerflix container can be resumed or suspended using docker-compose start and docker-compose stop
To see if the Dockerflix container is up and running use docker-compose ps. Want to get rid of Dockerflix? Just type docker-compose stop ; docker-compose rm and it's gone.

Post installation

Now that we have set up the proxy, we need to make sure only the relevant DNS queries get answered with the VPS' public IP address. Generate a Dnsmasq configuration using:
python ./gendns-conf.py --remoteip <your-vps-ip>
This configuration has to be used in your home router (if it runs Dnsmasq for DNS resolution) or a Linux-based computer like the Raspberry Pi. Obviously, all DNS requests originating at home have to be resolved/forwarded through Dnsmasq from now on.

Test

Everything has been set up properly once your VPS' IP address shows up in the web browser when navigating to http://ipinfo.io/
If the web browser shows your home IP there's something wrong with DNS resolution. Tip: Make sure not to fall into the OS or browser DNS cache trap, always restart after changing DNS addresses.

Demo proxy server

If you don't have your own U.S.-located virtual private server yet feel free to use my Dockerflix demo server. Just omit the --remoteip  parameter when calling the gendns-conf.py script and the Dockerflix demo server's IP address will be used.

Updating

Unless you've made local changes to Dockerflix, this one-liner executed in the cloned repository directory fetches the latest Dockerflix version from Github and creates a new Docker container with the updated version:
git pull && docker-compose stop ; docker-compose rm -f ; docker-compose build us && docker-compose up -d us
Don't forget to update your local DNS configuration as well.

Limitations

Dockerflix only handles requests using plain HTTP or TLS using the SNI extension. Some media players don't support SNI and thus won't work with Dockerflix. If you need to proxy plain old SSLv1/v2 for a device, have a look at the non-SNI approach in tunlr-style-dns-unblocking. A few media players (i.e. Chromecast) ignore your DNS settings and always resort to a pre-configured DNS resolver which can't be changed (it still can be done though by rerouting these requests using iptables).

Supported on-demand Internet streaming services

United States

Service | Web browsers | iOS | Android
Netflix | Yes | Yes |
Hulu¹ | Yes | Yes |
HBO Now | Yes | Yes |
HBO GO | Yes | |
MTV | Yes | |
Vevo | Yes | Yes |
Crackle | Yes | Yes |
ABC | Yes | Yes |
NBC | Yes | Yes |
PBS | Yes | Yes |
LogoTV | Yes | |
Comedy Channel | Yes | |
CW TV | Yes | Yes |
Disney Channel | Yes | |
Disney Junior | Yes | |
Disney XD | Yes | |
Dramafever | Yes | Yes |
Showtime | Yes | |
Southpark | Yes | |
Smithsonian | Yes | Yes |
Star Trek | Yes | |
Spike | Yes | Yes |
ulive | Yes | |
Cooking Channel TV | Yes | |
Pandora | Yes | Yes |
iHeart Radio | Yes | |
¹ Hulu has blacklisted many VPS providers in the U.S. You have to be lucky to find one that still works.

United Kingdom

Service | Web browsers | iOS | Android
BBC UK | Yes | |
iTV Player | Yes | |
Channel4 | Yes | |
Use docker-compose up -d uk on a server with a UK IP address to generate a UK Dockerflix proxy. For the DNS settings, you have to call gendns.py with the --region uk parameter and provide the IP address of your UK Dockerflix proxy using the --remoteip parameter.
To update a UK dockerflix please use something like this:
git pull && docker-compose stop ; docker-compose rm -f ; docker-compose build uk && docker-compose up -d uk

Contributing

Like Dockerflix? Please star it on Github!
Please contribute by submitting pull requests instead of opening issues to complain that this or that doesn't work. No one gets paid here, so don't expect any real support.

Advanced configuration

Using a wildcard-domain approach may also send traffic to the proxy server even when that's not desired for a certain zone/sub-domain. For instance, if a content provider uses its own sub-domain as an alias for a CDN, you may want to exclude the zone for that particular sub-domain from your DNS configuration. This is where config/dockerflix-dnsmasq-exclude.conf comes into play: use this file to forward zones to a different DNS resolver (e.g. Google DNS). Since many CDNs optimize their network routes around the world, this usually leads to better stream quality and less buffering compared to sending the stream across the globe through the proxy server. Obviously, this only helps as long as the stream itself is not geo-fenced.
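For example (the sub-domain below is hypothetical), an entry in config/dockerflix-dnsmasq-exclude.conf that forwards a CDN zone to Google DNS instead of answering with the proxy's IP could look like this:

```
# forward this zone to Google DNS (8.8.8.8) instead of the Dockerflix proxy
server=/cdn.example-provider.com/8.8.8.8
```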

消失的國界完整版 中國崛起"一帶一路"的稱霸野心! 走訪五國,帶您聚焦"巨龍"擴張的硝煙與後遺症

何以为家?绝不是侵占别人的家

西方的媒体也绝对不会告诉你,这些“难民”并不全是真正的难民。
  当然,它们更不会告诉你的是,全球难民泛滥的背后,其实还隐藏着极端宗教、野蛮文明对于现代文明的吞噬。
  这些西方的主流媒体,绝对不会给你说,难民进入别的国家应该入乡随俗,自动放弃自己的文明。
  在我们这些正常人起来:你丫的去逃命求食了,不是该放弃自己那些乱七八糟的落后原始习俗和宗教么?
  凭什么别人要打开国门让你来享受自己创造的成果呢?你家让我去吃去住吗?
  我们来看看联合国难民署这个机构的报告和它们对于难民的定义吧!
  如果你在注意看看难民署的人员构成,你会发现,这个机构已经验堕落成了某些势力的推手。
  联合国难民署6月19日,发布了最新报告,说是截至2018年年底,全球流离失所者人数达到7080万。
  也就是说,难民是20年前的20倍,创下了近70年来最高纪录。
  在这份名为《全球趋势》的年度报告中,联合国难民署称,2018年,全球流离失所者人数增加了230万。
  那么,什么人才能认定为难民呢?
  在我们想来,肯定必须没吃没穿、战火连天的地方出来的,才能算作难民!
  可是,大家看看,这些涌向欧洲的身强力壮的中东伊斯兰难民,哪一点像逃难的人呢?
  我看着,倒像是去旅游,去入侵!
  这些人贪图其他国家的发达和进步,就想不劳而获地去强行占有。
  在松亭先生看来,它们不是难民,倒像是侵略者!
  出现这些团结危机,难道不是因为国际社会错误的难民政策引起的吗?
      (联合国难民署在开会,看看是些什么人)
  在某些国际机构的纵容和庇护下,中东的伊斯兰难民现在成了各国的“大爷”!
  有了这种金钟罩的难民身份,它们在别人的国家强奸、杀人、放火,仿佛都不应该受到半点指责。
  这样的犯罪事实,在英国、德国、瑞典等众多国家,早就层出不穷了。
  请问,这些推动所谓的“难民”进入别国的机构和组织,为何从来不发声指责?
  默克尔已经为她的错误装逼政策负出了代价。
  在等待会见来访的乌克兰总统时,她全身颤抖不已,或许正是神对她背叛自己国家和民族的惩罚。
  她的错误政策确实会被载入史册,作为德意志的罪人,德国的后代子孙不会忘记。
       (德国民众悼念遇害者,凶手就是来自中东的伊斯兰难民)
  每年6月20日世界难民日到来之际,联合国难民署都会公布难民的最新数字。
  多年来,每次公布的难民数字都刷新记录。
  在十年前的2009年,难民人数还只有4330万人。 可是,到了2018年,竟然增长到了7080万。
  这正常吗?显然不正常。
  因为在这十年前,整个世界的社会经济都得以大发展了,全球包括非洲的饥饿现象,都在大幅减少。
  结合这个事实分析,现在的难民,其实根本就不是真正的难民。
  它们,其实就是想不劳而获地占有他国财富的人。
  甚至,这些人还包藏祸心地想将自己的极端宗教带到别人的家园,最终实现文明替代、鸠占鹊巢。 
 
     (你以为是青年旅行团?错,是所谓的难民)
  相反,无数中东伊斯兰难民,包括夹杂其中的大量恐怖分子、宗教极端分子却趁势涌入了离得较近且经济发达的欧洲
  这些难民由于素质低下,还坚持着自己的极端落后宗教不放,无疑已经成为了埋在欧洲社会中的一颗定时炸弹!
  一旦它们的人口增长到一定的程度,随时可能成为压垮所在国的最后一根稻草!  
  被联合国官员盛赞的德国,目前已经社会陷入了分裂。
  先是国家财政因难民支出不堪重负,随后便是难民不满救济现状,不断制造社会暴乱。
  德国人民张开怀抱、满怀善心地接收了这些中东伊斯兰难民,可是得到的回报却是强奸、杀人、放火、偷盗等各种暴力犯罪
  而这些可怕的场景,在西方白左势力的压制下,甚至都无法见诸报刊和网络。
  不过,一系列难民的暴乱事件,已经让一些欧洲国家和民众转变了对待难民的态度。
  他们,已经从之前的开放接纳,变成了如今的极度排斥。
  但是难民带给各国民众的危机,却并没有因此得到缓解,每年仍有无数想不劳而获的所谓“难民”四处在寻找落脚点。
  前几个月,就有一批中东伊斯兰难民,趁机涌入了韩国济州岛。
  这些难民的各种违法行为,已经给当地的民众带去了极大困扰。
  如今在白左和伊斯兰极端势力推动下的“难民永动机”已经陷入癫狂。
  不改变国际社会对难民的认定和政策,将会成为各国善良民众的灾难。
  据外媒6月17日报道,目前已有近3万难民涌向了中方边境,他们试图围攻印巴边境,呼吁中国强制收留。
  美国方面也趁机提出了无理要求:
  希望中方能开放边境收留这批难民,但中方对于这一事件却持有完全相反的意见。
  其次德国全面收留难民的前车之鉴,已经给世人提了个醒。
  中国已经秉持人道主义原则,世界各地的难民提供了无数的物资,中国已经尽到了自己对世界的责任。
  而且,中国不是移民国家,我们有十几亿人口,本来就是地少人多、资源贫乏的国家。
  中国曾经推行了几十年的计划生育,让数亿汉人婴儿无法见到自己的母亲。
  中国还有无数失独的老人,他们每天在伤心地思念着孩子。
  任何接收难民和移民的言论,无疑是对这些民众的心口在捅刀子。
  归根到底,难民还是应该留在自己的祖国,去建设自己家园!
  中国,是中国人的中国!

Gmail's new Confidential feature: self-destructing mail and restricted reading

Q: Gmail has long had a problem: if a hacker secretly adds a mail-forwarding rule to a recipient's account, their mail keeps being stolen without their knowledge. Any sensitive email you send them then lands in the hacker's hands, which is extremely dangerous. Gmail now has a Confidential feature that seems to solve this long-standing problem. Could you tell us about it?

Li Jianjun: Gmail's Confidential feature does not encrypt your mail, which is why the English term is Confidential rather than Encryption. But messages sent with this feature can be set to self-destruct after a period, whether or not the recipient uses Gmail, and you can require the recipient to obtain a one-time passcode by SMS on their phone before the message will open. You can also prevent the recipient from forwarding, printing, or downloading the message, so even if a forwarding rule has been secretly added to their account, a hacker cannot get at the message. The self-destruct setting can make a message disappear before security agents get to the account, so even if the recipient forgets to delete it, the content is not leaked.

Q: Can recipients who don't use Gmail still benefit from the Confidential feature?

Li Jianjun: They can. A non-Gmail recipient simply receives an email containing a special link, and must open that link within the specified time to read the content. Combined with the SMS-passcode option, you can essentially prevent anyone other than the intended recipient from reading the mail. The feature is therefore quite practical, and well suited to China's special environment. Of course, it also means your friends will need to get over the firewall, or use a Hong Kong prepaid SIM, every time they want to read such a message, because China still tightly blocks Gmail. If your recipient is inside China rather than overseas, remind them to get past the firewall first.

Q: Can G Suite users enjoy this feature too?

Li Jianjun: Yes. Google has brought the feature to G Suite as well; you can even configure your G Suite account to send all mail in Confidential mode by default. Since G Suite has more enterprise-grade capabilities, it offers more options.

Q: But this feature cannot prevent mail leaks one hundred percent. What weaknesses should listeners pay particular attention to when using it?

Li Jianjun: The one thing it cannot solve is screen capture; Gmail cannot block it, and Android phones, iPhones, and Macs all have it built in. The safest and most reliable approach is to send with the Confidential feature while also encrypting the content with PGP, and to read the mail on a computer without screen-capture capability; that rules out most avenues of leakage. Nothing can prevent mail content from leaking one hundred percent, but current technology offers various ways to greatly reduce the chances.

If you use G Suite, you can allow only people on specific IP addresses to open G Suite, on specially designated login machines that provide no screen-capture capability; this keeps the data out of other people's hands.

Q: To send an email in Confidential mode, must you use the Gmail web interface?

Li Jianjun: Only the Gmail web interface, or a recent version of the Gmail app, can send Confidential messages; other companies' software cannot send them, only read them.

Dictionary on DNS



A dictionary-lookup tool for the shell. It queries over the network, making it handy for English-loving Linux/Mac hackers.
It's fast and needs no client, so you can look words up anytime.

Features

  1. Fast, about as fast as a local dictionary
  2. Supports phrases
    $ j a little
    少量, 少许
  3. Supports any language -> any language (in theory; some dictionaries are still missing)
    $ j 西藏
    Tibet
  4. Case-sensitive
    $ j frank
    [fræŋk]
    adj.
    坦白的, 率直的, 老实的
    vt.
    免费邮寄
    n.
    免费邮寄特权
    Uppercase:
    $ j Frank
    [fræŋk]
    n.
    弗兰克(男子名)
  5. Fuzzy matching
    $ j appe
    No word 'appe' found, did you mean:
    1. nappe [næp] n. 越过水坝落下的水, 叠层结构, 等分半圆锥
    2. apple ['æpl] n. 苹果, 似苹果的果实
    3. appel [ә'pel] n. 灵快的踏足, 垫步

Usage

  1. Add the following lines to the end of ~/.bashrc
    # jianbing.org on DNS
    function j {
    dig "$*.jianbing.org" +short txt | perl -pe's/\\(\d{1,3})/chr $1/eg; s/(^"|"$)//g'
    }
  2. Reopen your shell, or run $ . ~/.bashrc
  3. Enjoy jianbing on DNS
    $ j cat
    [kæt]
    n.
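The perl one-liner in the shell function above turns the \DDD decimal escapes that dig prints for non-ASCII bytes in TXT records back into raw bytes. The same decoding in Python (illustrative):

```python
import re

def decode_txt_escapes(s):
    # dig prints non-ASCII bytes in TXT records as \DDD decimal escapes
    raw = re.sub(rb"\\(\d{1,3})", lambda m: bytes([int(m.group(1))]), s.encode())
    return raw.decode("utf-8")

print(decode_txt_escapes(r"\228\184\173"))  # 中
```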

Installation

  1. Install virtualenv and the dependencies
    $ virtualenv env
    $ . ./env/bin/activate
    $ pip install dnslib # easier & faster than dnspython
    $ pip install gevent # fast network library
    $ apt-get install aspell # spell checking. or $ yum install aspell
  2. Download the StarDict dictionary
    $ #(it's hard to find online, so a tarball is included in the repository;
    $ #if you use a different one, adjust the settings in stardict.py)
    $ tar xvf stardict-lazyworm-ec-2.4.2.tar.bz2
    $ cd stardict-lazyworm-ec-2.4.2
    $ gunzip -S '.dz' lazyworm-ec.dict.dz
  3. Run it
    $ sudo ./jianbing-dns.py
  4. If you need process management, use supervisor
  5. To test locally, add the following lines to the end of ~/.bashrc
    # jianbing.org on DNS
    function j {
    dig "$*.jianbing.org" +short txt @localhost | perl -pe's/\\(\d{1,3})/chr $1/eg; s/(^"|"$)//g'
    }
  6. To deploy publicly, update your DNS records (optional)
    1. Add an A record for ns1.yourdomain.com pointing to your server's address
    2. Add wildcard DNS resolution: an NS record for *.yourdomain.com pointing to ns1.yourdomain.com
    3. In the lines from the previous step, remove @localhost and change jianbing.org to yourdomain.com
  7. Verify the previous step
    $ dig +trace apple.yourdomain.com

The Go Language Version - 2012-09-27

  1. Added a Go version; not fully tested, but it should be somewhat faster than the Python version
    $ mkdir your-local-go-location
    $ cd your-local-go-location
    $ export GOPATH=/path/to/your-local-go-location
    $ go get github.com/chuangbo/jianbing-dictionary-dns/golang/jianbing-dns
    $ sudo ./bin/jianbing-dns

Added fuzzy matching - 2012-11-22

Switched the fuzzy-matching implementation - 2013-01-15

Switched to aspell, which is tens of times faster. The old implementation walked the entire dictionary and used a shortest-edit-distance match to find the most similar words; now aspell does the spell checking, presumably with a statistical-model-based algorithm. It may only work on Linux; I got a segmentation fault testing on Mac. Next steps: add nose / mock tests, plus travis-ci.
Here is an article translated by Xu You, well worth a read: http://blog.youxu.info/spell-correct.html

from  https://github.com/chuangbo/jianbing-dictionary-dns

Installing WireGuard-Go on an OpenVZ VPS, so that even an OpenVZ VPS can run a WireGuard server


0. 引言

In an era when the Wall keeps getting higher, a VPN that reliably gets through it is harder and harder to find.
First OpenVPN fell, then AnyConnect was gradually detected and blocked as well.
Pure VPN tools are growing scarce, yet much of the time Socks-style proxies simply cannot cover our needs.
Some will say: just use WireGuard.
True, WireGuard is a perfect fit, but it can only be installed on KVM/Xen/bare-metal architectures; it cannot be installed on an OpenVZ-based VPS.
Is there really no way around this?
In fact the WireGuard project provides an alternative solution: WireGuard-Go.

1. What is WireGuard-Go

WireGuard-Go is the Go implementation of WireGuard.
The original WireGuard is written in C and must be built into the system kernel as a module (wireguard.ko), so on OpenVZ, a shared-kernel style of virtualization, WireGuard simply cannot be installed.
With WireGuard-Go, that module's job is done in userspace Go instead, so nothing needs to be compiled into the kernel. Compared with the original WireGuard, execution may be somewhat slower, but in an extreme environment like OpenVZ it is the only option. (A VPN leaves us no other choice here.)

2. Installing WireGuard-Go

2.1 Prerequisites

First, before installing WireGuard-Go, let's go over the requirements.
Build environment:
Any virtualization or bare-metal architecture
RAM > 512 MB (> 1 GB recommended; if RAM is tight, add swap as a temporary extension)
Free disk space > 5 GB
A Go toolchain installed (installation and build steps are covered below)
Runtime environment:
OpenVZ virtualization (Docker/LXC untested)
RAM > 128 MB (> 256 MB recommended)
Free disk space > 500 MB
TUN/TAP enabled (check and enable this in your VPS control panel)
The build and runtime environments can be the same server or different ones; exporting the build result is covered below.
Keep in mind that WireGuard is, after all, a VPN, so it relies on TUN/TAP. Be sure to enable TUN/TAP, or WireGuard will be unable to forward traffic!
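Since a missing TUN device is the most common failure on OpenVZ, it is worth checking before going any further. A small sketch (the /dev/net/tun path is the conventional location for the device node):

```shell
# Report whether the TUN character device is present.
# On OpenVZ this must first be enabled from the host/control panel.
if [ -c /dev/net/tun ]; then
    echo "tun: available"
else
    echo "tun: missing"
fi
```

If this prints "tun: missing" on the runtime server, fix that before continuing.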

2.2 Preparing the build environment

First, log in to the build server and install the Go toolchain:
wget -O /tmp/golang.tar.gz https://dl.google.com/go/go1.12.4.linux-amd64.tar.gz
tar -C /usr/local -xvzf /tmp/golang.tar.gz
Then configure the Go-related environment variables:
export PATH=$PATH:/usr/local/go/bin

2.3 Building WireGuard-Go

With Go installed, download the WireGuard-Go source:
mkdir -p /tmp/gobuild/ && cd /tmp/gobuild/
git clone https://git.zx2c4.com/wireguard-go
cd wireguard-go
Set the environment variable and start the build (make sure your network connection is good; building on an overseas server is recommended):
export GOPATH="/tmp/gobuild/"
go build -v -o "wireguard-go"
If no errors occur, an executable named wireguard-go appears in the same directory.
Copy this file into a system directory (building and installing on the same machine):
cp wireguard-go /usr/sbin/wireguard-go
Or transfer it to the target server via SSH, FTP, etc., place it in the location above, and make it executable (building on one machine, installing on another).
If you cannot build it yourself, you can use the precompiled binary provided by the author:
https://download.ilemonrain.com/WireGuard-Go/precompile/wireguard-go.gz

2.4 Installing and configuring WireGuard

At this point many readers will ask: didn't we just install WireGuard-Go? Why install WireGuard now?
Hold on, let me explain.
The WireGuard-Go built in the previous step only replaces the kernel part of WireGuard (the role of wireguard.ko); we still need to build the WireGuard userspace tools (wg and wg-quick) for WireGuard to be usable, and so that WireGuard-Go can be configured just like a normal WireGuard setup.
First install the required build packages:
For Debian/Ubuntu:
apt-get install libmnl-dev libelf-dev build-essential pkg-config
For CentOS:
yum install libmnl-devel elfutils-libelf-devel pkg-config @development-tools
Then download the source:
mkdir -p /tmp/build/ && cd /tmp/build/
git clone https://git.zx2c4.com/WireGuard
cd WireGuard/src/tools
Build and install the WireGuard tools:
make && make install
At this point the two WireGuard commands, wg and wg-quick, should be available.
Together with the wireguard-go binary built earlier, we can now run WireGuard on OpenVZ.

3. Configuring WireGuard-Go

First run WireGuard-Go to start the userspace engine and create a virtual interface (it may not show up right away, but it appears automatically once brought up with wg or wg-quick):
Because this is a beta build, a warning like the following pops up:
WARNING WARNING WARNING WARNING WARNING WARNING WARNING
W G
W You are running this software on a Linux kernel, G
W which is probably unnecessary and foolish. This G
W is because the Linux kernel has built-in first G
W class support for WireGuard, and this support is G
W much more refined than this slower userspace G
W implementation. For more information on G
W installing the kernel module, please visit: G
https://www.wireguard.com/install G
W G
W If you still want to use this program, against G
W the advice here, please first export this G
W environment variable: G
W WG_I_PREFER_BUGGY_USERSPACE_TO_POLISHED_KMOD=1 G
W G
WARNING WARNING WARNING WARNING WARNING WARNING WARNING
Run the following command to continue:
export WG_I_PREFER_BUGGY_USERSPACE_TO_POLISHED_KMOD=1
Then run the command that creates the virtual interface:
wireguard-go wg
The rest of the steps roughly follow doubi's WireGuard tutorial.
See: https://doubibackup.com/qbc20cn3.html
Next, create the WireGuard configuration directory:
mkdir -p /etc/wireguard/ && cd /etc/wireguard/
Generate the key pairs:
wg genkey | tee sprivatekey | wg pubkey > spublickey
wg genkey | tee cprivatekey | wg pubkey > cpublickey
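The two pipelines above leave four key files behind. A WireGuard key is 32 bytes rendered as 44 base64 characters, which makes an easy sanity check on those files; the sketch below only illustrates the format (real keys must come from wg genkey, which also performs private-key clamping):

```shell
# A WireGuard key is 32 random bytes, base64-encoded to 44 characters.
# Illustration of the format only; do NOT use this as an actual key.
key=$(head -c 32 /dev/urandom | base64)
echo "${#key}"   # prints 44
```

If `wc -c < sprivatekey` does not report 45 (44 characters plus a newline), the key file is damaged.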
Confirm your public-facing interface (on OpenVZ virtualization it is usually venet0):
root@ovzhost:~# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: venet0: mtu 1500 qdisc noqueue state UNKNOWN
link/void
inet 127.0.0.2/32 scope host venet0
inet X.X.X.X/32 brd X.X.X.X scope global venet0:0
3: wg: mtu 1420 qdisc noop state DOWN qlen 500
link/none
Generate the WireGuard server configuration file wg0.conf:
echo "[Interface]
PrivateKey = $(cat sprivatekey)
Address = 10.0.0.1/24
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o venet0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -D FORWARD -o wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o venet0 -j MASQUERADE
ListenPort = 6666
MTU = 1420

[Peer]
PublicKey = $(cat cpublickey)
AllowedIPs = 10.0.0.2/32" | sed '/^#/d;/^\s*$/d' > wg0.conf
Generate the WireGuard client configuration file client.conf:
echo "[Interface]
PrivateKey = $(cat cprivatekey)
Address = 10.0.0.2/24
DNS = 8.8.8.8
MTU = 1420

[Peer]
PublicKey = $(cat spublickey)
Endpoint = $(curl -s whatismyip.akamai.com):6666
AllowedIPs = 0.0.0.0/0, ::0/0
PersistentKeepalive = 30" | sed '/^#/d;/^\s*$/d' > client.conf
And of course, don't forget to enable forwarding:
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
Once the configuration checks out, start the WireGuard server:
wg-quick up wg0
You will get output along these lines:
root@ovzhost:/etc/wireguard# wg-quick up wg0
[#] ip link add wg0 type wireguard
RTNETLINK answers: Operation not supported
[!] Missing WireGuard kernel module. Falling back to slow userspace implementation.
[#] wireguard-go wg0
WARNING WARNING WARNING WARNING WARNING WARNING WARNING
W G
W You are running this software on a Linux kernel, G
W which is probably unnecessary and foolish. This G
W is because the Linux kernel has built-in first G
W class support for WireGuard, and this support is G
W much more refined than this slower userspace G
W implementation. For more information on G
W installing the kernel module, please visit: G
https://www.wireguard.com/install G
W G
WARNING WARNING WARNING WARNING WARNING WARNING WARNING
INFO: (wg0) 2019/04/19 09:45:50 Starting wireguard-go version 0.0.20190409-9-gd024393
[#] wg setconf wg0 /dev/fd/63
[#] ip address add 10.0.0.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o venet0 -j MASQUERADE
then WireGuard started successfully (the error is expected: the kernel-module WireGuard cannot load, so wg-quick falls back to WireGuard-Go).
Copy client.conf back to your local machine and import it into a client (personally I recommend TunSafe; find the download yourself) and the setup is complete.

4. A little housekeeping...

Configure WireGuard to start at boot:
systemctl enable wg-quick@wg0
Clean up the files left over from the build:
rm -rf /tmp/gobuild/
rm -rf /tmp/build/
rm -f /tmp/golang.tar.gz
Enjoy.
-------

Setting up vlmcsd: turn your Linux server into a KMS server in an instant


0. A quick word of caution

Note: avoid deploying a KMS server on machines in Europe or the US whenever possible (especially ones on US soil)! This constitutes piracy; beware a letter from Microsoft's lawyers or having your server suspended! This is not a joke! (Servers in Russia or Eastern Europe are suggested.)

1. Install the required environment

First, install the packages needed for the build:
For CentOS:
yum makecache fast
yum install git gcc make -y
For Fedora:
dnf makecache
dnf install git gcc make -y
For Ubuntu/Debian: (on Ubuntu 16+/Debian 8+ you may replace apt-get with apt)
apt-get update
apt-get install git gcc make -y

2. Install vlmcsd

Clone the vlmcsd source with Git:
git clone https://github.com/Wind4/vlmcsd.git
Start the build:
cd vlmcsd/
make
Afterwards, two files appear in the bin directory: vlmcs and vlmcsd.
vlmcs is the KMS client (really just a debugging tool; we will use it shortly).
vlmcsd is the KMS server.
For convenience later on, you can copy both files into /usr/sbin, or any directory you like, so the commands are easy to run:
cp bin/* /usr/sbin/

3. Start the KMS server and verify the setup

Start the KMS server:
vlmcsd
The program moves itself into the background. Now run the KMS client to verify it started correctly:
vlmcs
If it returns a result like this:
[root@localhost bin]# ./vlmcs
Connecting to 127.0.0.1:1688 ... 127.0.0.1:1688: Connection refused
Fatal: Could not connect to any KMS server
[root@localhost bin]#
then the KMS server (vlmcsd) did not start correctly and you need to investigate;
If it returns a result like this instead:
[root@localhost bin]# ./vlmcs
Connecting to 127.0.0.1:1688 ... successful
Sending activation request (KMS V6) 1 of 1 -> 05426-03858-004-728820-03-1051-9200.0000-3322017
(3A1C049600B60076)
[root@localhost bin]#
then the KMS server is running normally and you can move on to activation.
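vlmcsd backgrounds itself, but on its own it will not survive a reboot. On systemd distributions a small unit file is one way to handle that; a sketch, assuming the binary was copied to /usr/sbin as above (the unit name is my own choice, and -D is vlmcsd's run-in-foreground flag, which is what systemd expects; check vlmcsd -h if your build differs):

```ini
# /etc/systemd/system/vlmcsd.service  (hypothetical unit name)
[Unit]
Description=vlmcsd KMS emulator
After=network.target

[Service]
ExecStart=/usr/sbin/vlmcsd -D
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now vlmcsd.service, then re-run vlmcs to verify.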

4. Activating a system against the KMS server (Windows side)

Back on the Windows machine, open a cmd (Command Prompt) window (if UAC is enabled, be sure to run it as administrator).
Then set the KMS server to your own server:
slmgr /skms [your KMS server's IP address]
Wait for the prompt:
Key Management Service machine name set to x.x.x.x successfully.
Then run the activation command:
slmgr /ato
and KMS activation is done.
Friendly reminder: each KMS activation is valid for only 180 days, but re-running activation resets the validity to a fresh 180 days.

Booting arbitrary ISO images and installing systems with Grub + Memdisk

I had been using Vicer's one-click reinstall script (https://moeclub.org/2018/04/03/603/?spm=43.4), and reinstalling a server really is convenient: at least there is no need to buy an extra data disk and dd an ISO image onto it...
After using it for a while, though, I noticed a few things:
  • It cannot install CentOS 7
  • It can only install CentOS/Ubuntu/Debian; other systems cannot be reinstalled
  • It cannot boot an ISO image of your own choosing
    In short, nothing in this world is perfect, so I went looking for a complementary approach of my own.
    A few days ago, while studying netboot.xyz's ISO-based network install, I kept wondering: if a user's server offers no iPXE/gPXE prompt (it flashes past with no chance to press Ctrl+B), is that really a dead end?
    Then, after combing through the server again and again, I found Grub shivering in a corner. I choose you, Pikachu! (just kidding)
Before the tutorial proper begins, things you must take note of! ↓↓↓
  • Have VNC ready! The entire installation is done over VNC!
  • This tutorial does not apply to OpenVZ / LXC virtualization!
  • Make sure your server has enough RAM! The install image is loaded into memory to run, so reserve about double the image size (or (RAM needed by the running system + image size) × 1.2) to avoid an Out Of Memory / Kernel Panic triggered by low memory during boot or install!
  • Download the image to disk in advance rather than at boot time! If the network is slow or unstable, boot time can stretch out dramatically!
  • This tutorial currently applies only to systems booted via Grub + BIOS + MBR! (That is: if you find an EFI-named folder under /boot, it is almost certainly Grub + UEFI + GPT.)

Step 1: Check the system environment

First, let's check the basic environment for an ISO install.
Start by confirming that the Grub bootloader is present:
ls /boot/grub/grub.cfg
If the result is:
/boot/grub/grub.cfg
then Grub is in use as the bootloader; if the result is:
ls: cannot access /boot/grub/grub.cfg: No such file or directory
then the system does not boot via Grub and you may as well close this tutorial (some systems use grub2 as the bootloader; adjust according to your situation).
Once Grub is confirmed, let's check the partition layout.
Run:
df -h
It returns something like this:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.8G 22G 8% /
udev 10M 0 10M 0% /dev
tmpfs 201M 4.4M 196M 3% /run
tmpfs 501M 0 501M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 501M 0 501M 0% /sys/fs/cgroup
If there is only a "/" entry, then /boot (the boot directory) and / (the root directory) share the same partition;
If the result looks more like this:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 799M 19M 781M 3% /run
/dev/md0 137G 18G 113G 14% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/vda1 232M 36M 180M 17% /boot
tmpfs 799M 0 799M 0% /run/user/0
then "/" and "/boot" both exist, and we must handle the result with more care (covered shortly).
Once this information is confirmed, we can prepare for the next step.
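The layout check above can also be scripted: df -P (POSIX format, one line per filesystem) prints the mount point in the last column, so asking df which filesystem holds /boot answers the question directly. A sketch:

```shell
# Print the mount point of the filesystem that holds /boot.
# "/"     -> /boot lives on the root partition
# "/boot" -> /boot is a separate partition (adjust Grub's root= later)
boot_mount=$(df -P /boot 2>/dev/null | awk 'NR==2 {print $6}')
echo "${boot_mount:-unknown}"
```

"unknown" means df could not resolve /boot at all, which itself is worth investigating before continuing.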

Step 2: Download the system image

Download the ISO image you need from one of the big mirror sites,
for example with curl:
curl -o /boot/isoboot.iso https://mirrors.aliyun.com/ubuntu-releases/releases/16.04.5/ubuntu-16.04.5-server-amd64.iso
or with wget (depending on your preference):
wget -O /boot/isoboot.iso https://mirrors.aliyun.com/ubuntu-releases/releases/16.04.5/ubuntu-16.04.5-server-amd64.iso
Downloading the image at this stage avoids fetching it during Grub boot, which would make the install much harder to handle. Then continue to the next step.

Step 3: Prepare Memdisk

Since this is a network-style install, the installer image clearly cannot just sit on the existing disk while it runs...
So we need a small tool to help us: Syslinux
Install Syslinux on Ubuntu/Debian:
apt-get install syslinux -y
Install Syslinux on CentOS:
yum install syslinux -y
After installing, copy the memdisk file into the boot directory (on Debian/Ubuntu the file may live at /usr/lib/syslinux/memdisk instead):
cp -f /usr/share/syslinux/memdisk /boot/memdisk
When this step is done, we move on to the Grub boot entry.

Step 4: Set up the Grub boot entry

Enter the directory holding the Grub configuration scripts:
cd /etc/grub.d/
ls
You will see some files related to Grub startup:
00_header 05_debian_theme 10_linux 20_linux_xen 30_os-prober 30_uefi-firmware 40_custom 41_custom README
Next, edit the file 41_custom and overwrite its contents with the following:
#!/bin/sh
cat <<EOF
menuentry 'OS Web Install' {
insmod part_msdos
insmod part_gpt
insmod ext2
set root=(hd0,msdos1)
echo 'Loading memdisk ...'
linux16 /boot/memdisk raw iso
echo 'Loading ISO ...'
initrd16 /boot/isoboot.iso
echo 'Booting ISO ...'
}
EOF
Here, if you remember the partition-layout analysis from earlier, it now comes into play:
  • If your server has a single disk with a single partition, the value of root is (hd0,msdos1)
  • If the single disk holds more than one partition, check which one carries /boot; for example, on /dev/vda5 it is (hd0,msdos5)
  • For more complicated cases, reboot the server, press C at the Grub menu to enter the Grub command line, and follow these steps:
grub> ls
(hd0) (hd0,msdos1) (hd0,msdos5)
grub> ls (hd0,msdos1)/
error: unknown filesystem. # not the right boot partition; keep trying
grub> ls (hd0,msdos5)/
lost+found/ etc/ (various directories) # this is the right boot partition
Then type reboot to return to the system, and continue editing the file with the correct partition information.
When that is done, go on to edit the /etc/default/grub configuration file:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
Raise the GRUB_TIMEOUT value to something larger (say 30), then save and exit.
Run the command that regenerates the Grub configuration:
update-grub
Then inspect the regenerated Grub configuration to confirm the entry was written correctly:
cat /boot/grub/grub.cfg
If you find your Grub entry near the end of the file, like this:

### BEGIN /etc/grub.d/41_custom ###

menuentry 'OS Web Install' {
insmod part_msdos
insmod part_gpt
insmod ext2
set root=(hd0,msdos1)
echo 'Loading memdisk ...'
linux16 /boot/memdisk raw iso
echo 'Loading ISO ...'
initrd16 /boot/isoboot.iso
echo 'Booting ISO ...'
}
### END /etc/grub.d/41_custom ###
then the boot entry is ready and you can happily start reinstalling~

Step 5: Reboot and start installing

Next, after rebooting, you arrive at the boot menu.
Select the OS Web Install entry at the bottom and the ISO install happily begins~
(ps: if it fails to boot the install, try the netboot.xyz boot image: https://boot.netboot.xyz/ipxe/netboot.xyz.iso )

inadyn: a dynamic DNS client

Dynamic DNS client with SSL/TLS support 

Automated Dynamic DNS Client

The latest release is always available from GitHub at


Introduction

Inadyn is a small and simple Dynamic DNS, DDNS, client with HTTPS support. Commonly available in many GNU/Linux distributions, it is used in off-the-shelf routers and Internet gateways to automate the task of keeping your Internet name in sync with your public¹ IP address. It can also be used in installations with redundant (backup) connections to the Internet.
Most people are unaware they share a pool of Internet addresses with other users of the same Internet Service Provider (ISP). Protocols like DHCP, PPPoE, or PPPoA are used to give you an address and a way to connect to the Internet, but usually not a way for others to connect to you. If you want to run an Internet server on such a connection you risk losing your IP address every time you reconnect, or as in the case of DHCP even when the lease is renegotiated.
By using a DDNS client like inadyn you can register an Internet name with a DDNS provider, like FreeDNS. The DDNS client updates your DNS record periodically and/or on demand when your IP address changes. Inadyn can maintain multiple host records with the same IP address, use a combination of a script, the address from an Internet-facing interface, or default to using the IP address change detector of the DDNS provider.
¹ Public IP address is the default, private addresses can also be used.

Supported Providers

Some of these services are free of charge for non-commercial use, some take a small fee, but also provide more domains to choose from.
DDNS providers that are not supported natively, like http://twoDNS.de, can be enabled using the generic DDNS plugin. See below for configuration examples.
In-A-Dyn defaults to HTTPS, but not all providers may support this, so try disabling SSL for the update (ssl = false) or the checkip phase (checkip-ssl = false) in the provider section, in case you run into problems.
HTTPS is enabled by default since it protects your credentials from being snooped and reduces the risk of someone hijacking your account.

Configuration

In-A-Dyn supports updating several DDNS servers, even several accounts on different DDNS providers. The following /etc/inadyn.conf example shows how this can be done. To verify your configuration without starting the daemon, use:
inadyn --check-config
This checks the default .conf file; to check any other file, use:
inadyn --check-config -f /path/to/file.conf

Example

# In-A-Dyn v2.0 configuration file format
period = 300
user-agent = Mozilla/5.0

# The FreeDNS username must be in lower case
# The password (max 16 chars) is case sensitive
provider freedns {
username = lower-case-username
password = case-sensitive-pwd
hostname = some.example.com
}

provider freemyip {
password = YOUR_TOKEN
hostname = YOUR_DOMAIN.freemyip.com
}

provider dyn {
ssl = false
username = charlie
password = snoopy
hostname = { peanuts, woodstock }
user-agent = Mozilla/4.0
}

# With multiple usernames at the same provider, index with :#
provider no-ip.com:1 {
username = ian
password = secret
hostname = flemming.no-ip.com
user-agent = inadyn/2.2
}

# With multiple usernames at the same provider, index with :#
provider no-ip.com:2 {
username = james
password = bond
hostname = spectre.no-ip.com
checkip-ssl = false
checkip-server = api.ipify.org
}

# With multiple usernames at the same provider, index with :#
provider no-ip.com:3 {
username = spaceman
password = bowie
hostname = spaceman.no-ip.com
checkip-command = "/sbin/ifconfig eth0 | grep 'inet6 addr'"
}

# Note: hostname == update-key from Advanced tab in the Web UI
provider tunnelbroker.net {
username = futurekid
password = dreoadsad/+dsad21321 # update-key-in-advanced-tab
hostname = 1234534245321 # tunnel-id
}

provider dynv6.com {
username = your_token
password = n/a
hostname = { host1.dynv6.net, host2.dynv6.net }
}

provider cloudxns.net {
username = your_api_key
password = your_secret_key
hostname = yourhost.example.com
}

provider dnspod.cn {
username = your_api_id
password = your_api_token
hostname = yourhost.example.com
}

provider cloudflare.com {
username = your_email
password = your_api_token
hostname = yourhost.example.com
}
Notice how this configuration file has two different users of the No-IP provider -- this is achieved by appending a :ID to the provider name.
We also define a custom cache directory; the default is /var/cache. In our case /mnt is a system-specific persistent store for caching your IP address as reported to each provider. Inadyn uses this to ensure you are not locked out of your account for excessive updates, which may happen if your device or Internet gateway running inadyn gets stuck in a reboot loop, or similar.
However, for the caching mechanism to be 100% foolproof the system clock must be set correctly -- if you have issues with the system clock not being set properly at boot, e.g. pending receipt of an NTP message, use the command line option --startup-delay=SEC. To tell inadyn it is OK to proceed before the SEC timeout, use SIGUSR2.
The last system defined is the IPv6 https://tunnelbroker.net service provided by Hurricane Electric. Here hostname is set to the tunnel ID and password must be the Update key found in the Advanced configuration tab.
Sometimes the default checkip-server for a DDNS provider can be very slow to respond; to this end, Inadyn now supports overriding this server with a custom one, as for a custom DDNS provider, or even with a custom command. See the man pages, or the section below, for more information.
Some providers require a specific browser to send updates; this can be worked around using the user-agent = STRING setting, as shown above. It is available both at the global level and per provider.
NOTE: In a multi-user server setup, make sure to chmod your .conf to 600 (read-write only by you/root) to protect against other users reading your DDNS server credentials.

Custom DDNS Providers

In addition to the default DDNS providers supported by Inadyn, custom DDNS providers can be defined in the config file. Use custom {} instead of the provider {} section used in the examples above.
In-A-Dyn uses HTTP basic authentication (base64 encoded) to communicate the username and password to the server. If you do not have a username and/or password, you can leave these fields out. Basic authentication will still be used in communication with the server, but with an empty username and password.
A DDNS provider like http://twoDNS.de can be setup like this:
custom twoDNS {
username = myuser
password = mypass
checkip-server = checkip.two-dns.de
checkip-path = /
ddns-server = update.twodns.de
ddns-path = "/update?hostname="
hostname = myhostname.dd-dns.de
}
For https://www.namecheap.com, DDNS can look as follows. Notice how the hostname syntax differs between these two DDNS providers. You need to investigate details like this yourself when using the generic/custom DDNS plugin:
custom namecheap {
username = myuser
password = mypass
ddns-server = dynamicdns.park-your-domain.com
ddns-path = "/update?domain=YOURDOMAIN.TLD&password=mypass&host="
hostname = { "alpha", "beta", "gamma" }
}
Here three hostnames are updated; one HTTP GET update request per DDNS provider is performed, for every listed hostname. Some providers, like FreeDNS, support setting up CNAME records (aliases) to reduce the number of records you need to update. FreeDNS even defaults to linking multiple records to the same update, which may be very confusing if you want each DNS record to be updated from a unique IP address -- make sure to check your settings at the DDNS provider!
Your hostname is automatically appended to the end of the ddns-path, as is customary, before it is communicated to the server. Username is your Namecheap username, and password would be the one given to you in the Dynamic DNS panel from Namecheap. Here is an alternative config to illustrate how the hostname setting works:
custom kruskakli {
username = myuser
password = mypass
ddns-server = dynamicdns.park-your-domain.com
ddns-path = "/update?password=mypass&domain="
hostname = YOURDOMAIN.TLD
}
The generic plugin can also be used with providers that require the client's new IP address in the update request. Here is an example of how this can be done if we pretend that http://dyn.com is not supported by inadyn. The ddns-path differs between providers and is something you must figure out. The support pages sometimes list this under an API section, or similar.
# This emulates dyndns.org
custom dyn {
username = DYNUSERNAME
password = DYNPASSWORD
ddns-server = members.dyndns.org
ddns-path = "/nic/update?hostname=%h.dyndns.org&myip=%i"
hostname = { YOURHOST, alias }
}
Here a fully custom ddns-path with format specifiers is used; see the inadyn.conf(5) man page for details on this.
When using the generic plugin you should first inspect the response from the DDNS provider. By default Inadyn looks for a 200 HTTP OK response code and the strings "good", "OK", "true", or "updated" in the HTTP response body. If the DDNS provider returns something else, you can add a list of possible responses, ddns-response = { Arrr, kilroy }, or just a single ddns-response = Cool -- if your provider does not give any response, use ddns-response = "".
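As an illustration, a hypothetical provider whose update endpoint answers with its own status strings could be declared like this (every provider-specific name below is made up; only the option keys come from inadyn):

```text
# Hypothetical provider whose update endpoint answers "Arrr" on success
custom pirate-dns {
    username       = myuser
    password       = mypass
    checkip-server = api.ipify.org
    ddns-server    = update.pirate-dns.example
    ddns-path      = "/update?hostname="
    hostname       = myhost.example.com
    ddns-response  = { Arrr, kilroy }
}
```

With this in place, any body containing one of the listed strings counts as a successful update.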
If your DDNS provider does not provide you with a checkip-server, you can use other free services, like http://ipify.org
checkip-server = api.ipify.org
or even use a script or command:
checkip-command = /sbin/ifconfig eth0 | grep 'inet addr'
These two settings can also be used in standard provider{} sections.
Note: hostname is required, even if everything is encoded in the ddns-path! The given hostname is appended to the ddns-path used for updates, unless you use append-myip in which case your IP address will be appended instead. When using append-myip you probably need to encode your DNS hostname in the ddns-path instead, as is done in the last example above.

Build & Install

Homebrew (macOS)

To run the latest stable version on macOS, type:
brew install inadyn
To run the latest version from the master branch, install the git tap instead:
brew install --HEAD troglobit/inadyn/inadyn
Either of these will install all dependencies.

Building from Source

First download the latest official In-A-Dyn release from GitHub:
In-A-Dyn requires a few libraries to build. The build system searches for them, in their required versions, using the pkg-config tool:
They are available from most UNIX distributions as pre-built packages. Make sure to install the -dev or -devel variant of the distribution packages when building Inadyn. On Debian/Ubuntu (derivatives):
$ sudo apt install gnutls-dev libconfuse-dev
To build you also need a C compiler, the pkg-config tool, and make:
$ sudo apt install build-essential pkg-config
When building with HTTPS (SSL/TLS) support, make sure to also install the ca-certificates package on your system, otherwise Inadyn will not be able to validate the DDNS provider's HTTPS certificates.

Configure & Build

The GNU Configure & Build system uses /usr/local as the default install prefix. In many cases this is useful, but it means the configuration files and cache files will also use that same prefix. Most users have come to expect those files in /etc/ and /var/run/, and configure has a few useful options that are recommended:
$ ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
$ make -j5
$ sudo make install-strip
You may want to remove the --prefix=/usr option.

SSL/TLS Support

By default inadyn tries to build with GnuTLS for HTTPS support. GnuTLS is the recommended SSL library to use on UNIX distributions which do not provide OpenSSL/LibreSSL as a system library. However, when OpenSSL or LibreSSL is available as a system library, for example in many embedded systems:
./configure --enable-openssl
To completely disable inadyn HTTPS support (not recommended!):
./configure --disable-ssl
For more details on the OpenSSL and GNU GPL license issue, see:

RedHat, Fedora, CentOS

On some systems the default configure installation path, /usr/local, is disabled and not searched by tools like ldconfig and pkg-config. So if configure fails to find the libConfuse libraries, or the .pc files, create the file /etc/ld.so.conf.d/local.conf with this content:
/usr/local/lib
update the linker cache:
sudo ldconfig -v | egrep libconfuse
and run the Inadyn configure script like this:
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure

Integration with systemd

For systemd integration you need to install pkg-config, which helps the Inadyn build system figure out the systemd paths. When installed, simply call systemctl to enable and start inadyn:
$ sudo systemctl enable inadyn.service
$ sudo systemctl start inadyn.service
Check that it started properly by inspecting the system log, or:
$ sudo systemctl status inadyn.service

Building from GIT

If you want to contribute, or simply just try out the latest but unreleased features, then you need to know a few things about the GNU build system:
  • configure.ac and a per-directory Makefile.am are key files
  • configure and Makefile.in are generated from autogen.sh, they are not stored in GIT but automatically generated for the release tarballs
  • Makefile is generated by configure script
To build from GIT, clone the repository and run the autogen.sh script. This requires the GNU tools automake, autoconf, and libtool to be installed on your system. Released tarballs do not require these tools.
$ sudo apt install git automake autoconf
Then you can clone the repository and create the configure script, which is not part of the GIT repo:
git clone https://github.com/troglobit/inadyn.git
cd inadyn/
./autogen.sh
./configure && make
Building from GIT requires, at least, the previously mentioned library dependencies. GIT sources are a moving target and are not recommended for production systems, unless you know what you are doing!

Building with Docker

A Dockerfile is provided to simplify building and running inadyn.
docker build -t inadyn:latest .
docker run --rm -v "$PWD/inadyn.conf:/etc/inadyn.conf" inadyn:latest

Origin & References

This is the continuation of Narcis Ilisei's original INADYN. Now maintained by Joachim Nilsson. Please file bug reports, or send pull requests for bug fixes and proposed extensions at GitHub.


