Sunday, February 3, 2013

The Design and Implementation of a Next Generation Name Service for the Internet


V. Ramasubramanian and E. G. Sirer, The Design and Implementation of a Next Generation Name Service for the Internet, in Proc. of SIGCOMM 2004.

This paper describes the design and implementation of the Cooperative Domain Name System (CoDoNS), which leverages a peer-to-peer (P2P) network to improve lookup performance.

The traditional DNS suffers from high latency, load imbalance, slow update propagation, and implementation errors, and it is vulnerable to attacks such as denial-of-service (DoS) attacks. The authors propose using distributed hash tables (DHTs) to build a distributed name service called CoDoNS. CoDoNS decouples namespace management from the physical location of name servers in the network. Built on a P2P overlay, CoDoNS provides lower lookup latency, as their experiments show. Eliminating the static query-processing hierarchy and shedding load dynamically onto peer nodes greatly reduces CoDoNS's vulnerability to denial-of-service attacks, and self-organization with continuous adaptation of replication avoids bottlenecks in the presence of flash crowds.
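To make the lookup path concrete, here is a minimal sketch (my own simplification, not the paper's code) of how a DHT-based name service can map a domain name onto a "home node" by hashing it into a flat identifier space. CoDoNS builds on Pastry, which routes to the node with the numerically closest identifier; the toy version below simply picks the ring successor.

```python
# Illustrative sketch only, not CoDoNS/Pastry code: a domain name is hashed
# into a flat identifier space and assigned to a "home node", independent of
# where the authoritative name servers physically sit. Pastry routes to the
# numerically closest node ID; this toy version uses the ring successor.
import hashlib
from bisect import bisect_left

def name_to_id(name: str, bits: int = 128) -> int:
    """Hash a domain name into the DHT identifier space."""
    digest = hashlib.sha1(name.lower().encode()).digest()   # 160-bit digest
    return int.from_bytes(digest, "big") >> (160 - bits)    # truncate to `bits`

def home_node(name: str, node_ids: list[int]) -> int:
    """Pick the node whose ID follows the name's ID on the ring (with wrap-around)."""
    ids = sorted(node_ids)
    i = bisect_left(ids, name_to_id(name))
    return ids[i % len(ids)]

# The same name always lands on the same home node, no matter which peer is asked first.
nodes = [name_to_id(f"peer-{i}.example") for i in range(50)]
print(hex(home_node("www.cornell.edu", nodes)))
```

Because the mapping depends only on the hash of the name and the set of live node identifiers, any peer can locate the home node without consulting a fixed hierarchy of servers.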

Many distributed DNS systems have been released since the paper was published, such as BitTorrent-powered DNS systems. Some organizations, such as The Pirate Bay, use decentralized DNS to avoid the increasing control that governments have over the DNS root servers and to continue exchanging copyrighted files.

7 comments:

  1. CoDoNS makes a good argument that a more distributed DNS infrastructure, better suited to current needs, is necessary.

    Short timeouts on the mappings used by CDNs like Akamai reduce DNS cache hits, which is a pertinent point. Trends in the nine years since the paper was published in 2004 show that CDNs have grown even more explosively.

    It is highly arguable whether CoDoNS will work in practice.

    Although the authors argue that it is easily deployable, they do not discuss its feasibility in depth, especially since adoption requires 1) updating all DHCP servers, and 2) getting each authoritative institution to agree to the CoDoNS infrastructure. Incremental updates to the existing system, as are done today, seem more practical.

    While there is cause for concern in the fact that more than 90% of domains on the Internet are served by three or fewer name servers each, their data shows this would affect well under 1% of the top 500 domains, so the threat does not seem significant.

    They do good work in showing that the domain name workload follows a power-law (Zipf-like) distribution, which is a strong argument for their Beehive infrastructure, which exploits that distribution to deliver near-constant lookup times.

    To address their point about the ease of DoS attacks on the current DNS, they suggest a more heavily replicated model. Arguably, though, stronger countermeasures against attacks have traditionally worked better than simply adding more redundancy to the system.

    Replies
    1. I believe that the heavy-tailed, essentially long-tailed distribution suggests that performance is significantly affected by unpopular lookups as well. So, in my opinion, the figure of only 1% of the top 500 domains may not fully capture whether there is indeed a problem.

  2. I believe this paper is worth considering as a DNS replacement. The authors address almost all of the issues related to the aging DNS. However, Arindam raised a couple of good points about how they measured the system.

    I think we should consider this system as if it were deployed worldwide, rather than as a home node hooking itself up to an old DNS server. If CoDoNS were implemented using many dedicated servers worldwide to replace the current architecture, it could succeed. The peers in the DHT should be dedicated servers of the system, though. This would shrink the system to a smaller scale, but it would better address the following security concerns:
    - The case where a node joins the network, behaves properly for a while, and then starts propagating false information about the values it is responsible for.
    - Naive peers joining the network and being manipulated.

    Lastly, this system shares the problems of almost all DHTs, one of which is trust: "too much data or too few friends."

  3. My biggest criticism of this paper is the way they claim that the current DNS system is vulnerable to DDoS attacks. This is contrary to everything I have heard and learned in my networking classes and to my general intuition about the system. Yet that doesn't mean I am right about the security of DNS. I first checked the reference on DNS and DDoS that the authors cite [5], and didn't really find anything that I thought was definitive on the subject. I then checked Wikipedia to see how many instances there have been of DDoS attacks on the DNS infrastructure of the Internet. It appears that there have been only two major incidents, and that these attacks were not noticed by most end users. This intuitively makes sense since most of the DNS system is cached at every step along the chain. In order for an attack to completely cripple the Internet, it would have to block the root servers for several days so that the caches along the routes to the main DNS servers would expire. (A toy sketch of this caching effect follows below.)

    Although there are several benefits to the newly proposed system, I really don't understand why the authors make claims that don't hold up in reality. Although it's pretty clear that there are theoretical problems with DNS, these concerns have not played out in practice. It leaves a bad taste in my mouth reading the rest of the paper when the authors take this tactic to make their work seem more relevant.
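    To make the caching argument above concrete, here is a toy sketch (purely illustrative, not real resolver code) of a TTL-based cache: as long as a record's TTL has not expired, lookups keep succeeding locally even if every upstream server is unreachable.

```python
# Toy TTL cache, purely illustrative: cached answers keep serving clients
# during an upstream outage until their TTL expires.
import time

class ToyDnsCache:
    def __init__(self):
        self._store = {}                                   # name -> (address, expires_at)

    def put(self, name, address, ttl_seconds):
        self._store[name] = (address, time.time() + ttl_seconds)

    def resolve(self, name, upstream_available=True):
        entry = self._store.get(name)
        if entry and entry[1] > time.time():
            return entry[0]                                # answered from cache
        if not upstream_available:
            raise LookupError(f"{name}: cache expired and upstream unreachable")
        raise NotImplementedError("upstream query omitted in this sketch")

cache = ToyDnsCache()
cache.put("www.example.com", "93.184.216.34", ttl_seconds=86400)   # 1-day TTL
print(cache.resolve("www.example.com", upstream_available=False))  # still answers during an outage
```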

    Replies
    1. I think it is true that DNS is relatively vulnerable to DoS attacks. Because most real DoS attacks in the past have been concentrated on web servers, people have built methods to cope with DoS against web servers: anti-DDoS appliances, L7 switches, caching servers, ACLs, and so on.

      However, DNS uses UDP, so as far as I know ACLs cannot be applied effectively, which makes it quite vulnerable to UDP flooding. Second, although DNS has caching, the number of queries per second a DNS server can handle is limited. Therefore, if a DDoS attack issues a large number of queries per second from many zombie computers, DNS may not be able to deal with it easily.

      Therefore, DNS needs to be improved to remove its vulnerability to DDoS attacks, and I think CoDoNS suggests one such solution, namely load balancing, even though it is not widely used today.

    2. Another wrinkle to keep in mind is how well either system deals with the underlying popularity distribution of name and web requests. CoDoNS (since it's based on Beehive, which in turn derives its replication levels from the power-law/Zipf parameter) seems pretty well suited to it. The results were pretty impressive, in my opinion; a toy calculation of this replication idea is sketched below.
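      As a rough illustration of that point (a toy calculation under assumed numbers, not Beehive's actual optimization), the snippet below shows how, under a Zipf-like popularity distribution, replicating only the most popular names close to the clients pulls the average lookup cost down toward a constant:

```python
# Toy calculation only (not Beehive's real optimization): under a Zipf-like
# popularity distribution, replicating just the top-ranked names at lower DHT
# levels pulls the *average* lookup down to roughly a constant number of hops,
# even though unpopular names still pay the full O(log N) cost.
N_NAMES  = 100_000
ALPHA    = 0.9      # assumed Zipf parameter
MAX_HOPS = 3        # assumed ~log_16(nodes) hops for an unreplicated name in a Pastry-like DHT

# Zipf popularity weights, normalized to query probabilities.
weights = [1.0 / (rank ** ALPHA) for rank in range(1, N_NAMES + 1)]
total = sum(weights)
probs = [w / total for w in weights]

def hops_for_rank(rank: int) -> int:
    """Crude replication policy: the more popular the name, the closer a replica sits."""
    if rank <= 100:    return 0   # replicated everywhere -> answered locally
    if rank <= 5_000:  return 1
    if rank <= 50_000: return 2
    return MAX_HOPS

avg_hops = sum(p * hops_for_rank(r + 1) for r, p in enumerate(probs))
print(f"average hops with popularity-based replication: {avg_hops:.2f} (vs {MAX_HOPS} without)")
```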

  4. This paper makes a lot of claims. The authors deal a lot with legacy DNS vs. CoDoNS, but I think it would make a stronger case to also spend time comparing CoDoNS against the performance of other systems of the time, beyond the brief Related Work section at the end. This is from 2004, and although I'm not up to date on all of the papers on this subject, I'd like to see how the performance of CoDoNS stacks up against more recent work.
    On a related note, looking at their data, there are many latency spikes in the CoDoNS results, while DNS, though lower in performance overall, seems much more consistent, without a large amount of variation. The large spikes in CoDoNS latency are followed by large drops, where it then achieves better performance, but the spikes suggest the system could be less consistent. I don't know enough about more recent ideas in this field, but I get the feeling that the proposed system is highlighting possible issues with DNS that may not be as common as the authors imply with their data.
