Monday, January 14, 2013

week 2

This week's two papers address two important topics in networking: whether the end-to-end argument remains appropriate after all these years of the network's development, and consensus routing as a solution to the lack of consistency in Internet routing.

On Monday, we read the paper End-to-End Arguments in System Design, which argues that functions should be placed at the end points of the network, or as "up" in the stack as possible, for the sake of the completeness and correctness of the function; it is remarkable that this argument has guided the Internet for more than 30 years. However, as the Internet has grown, its functionality and environment have changed a great deal, and the pure end-to-end argument seems to run into trouble in some cases. In the paper Rethinking the Design of the Internet: End-to-end arguments vs. the brave new world, the authors argue that in this new world, instead of strictly following the end-to-end argument, we should carefully weigh the pros and cons of the placement of each individual function. Reasons that force us to rethink function placement include the untrustworthiness of less sophisticated Internet users, the growing complexity of functions, and the interposition of third parties such as governments. As a result, we need to rethink our principles for function placement.
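
The original end-to-end paper motivates this placement with its classic "careful file transfer" example: even if every link performed its own reliability checks, only a check at the end points can confirm the file arrived intact. Below is a minimal sketch of that idea; the transfer function, fault model, and all names are hypothetical stand-ins of mine, not code from the paper.

```python
# Sketch of the "careful file transfer" argument: the integrity check must
# live at the ends, because no amount of hop-by-hop reliability can rule out
# corruption elsewhere (e.g., in a router's buffers or on disk).
import hashlib
import random

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def flaky_transfer(data: bytes) -> bytes:
    # Stand-in for the network path: each hop may run link-level checks,
    # yet a rare fault can still corrupt the data between those checks.
    if data and random.random() < 0.01:
        return data[:-1] + bytes([data[-1] ^ 0xFF])
    return data

def send_file(data: bytes) -> bytes:
    digest = sha256(data)                # computed by the sending application
    while True:
        received = flaky_transfer(data)
        if sha256(received) == digest:   # verified by the receiving application
            return received
        # On mismatch, retry: correctness comes from the end-to-end check,
        # not from any guarantee made by the links in the middle.

payload = b"example payload"
assert send_file(payload) == payload
```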

In the second paper, Consensus Routing: The Internet as a Distributed System, the authors introduce consensus routing as a solution to the lack of consistency that results from Internet routing protocols' preference for responsiveness over consistency. Consensus routing also takes the goal of liveness into account through two different modes of packet delivery: a stable mode, which ensures consistency because a route is adopted only after consensus among all routers, and a transient mode, in which packets are forwarded heuristically after link failures. The transient forwarding schemes include deflections, detours, and backup routes. The evaluation section at the end of the paper shows that consensus routing is surprisingly effective at maintaining connectivity.
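
To make the two delivery modes concrete, here is a minimal sketch of the forwarding decision as I understand it: prefer the consensus-adopted stable route, and fall back to transient schemes (a backup route, then deflection to a live neighbor) when the stable link has failed. All class and attribute names are hypothetical illustrations, not the authors' implementation.

```python
# Hedged sketch of consensus routing's two packet-delivery modes.
class Router:
    def __init__(self, stable_table, backup_routes, neighbors):
        self.stable_table = stable_table    # dest -> next hop adopted by consensus
        self.backup_routes = backup_routes  # dest -> precomputed alternate next hop
        self.neighbors = neighbors          # neighbor -> link-is-up flag

    def link_up(self, hop):
        return hop is not None and self.neighbors.get(hop, False)

    def next_hop(self, dest):
        # Stable mode: consistent by construction, since the table only
        # changes after all routers agree on it.
        hop = self.stable_table.get(dest)
        if self.link_up(hop):
            return ("stable", hop)
        # Transient mode, scheme 1: a precomputed backup route.
        hop = self.backup_routes.get(dest)
        if self.link_up(hop):
            return ("transient/backup", hop)
        # Transient mode, scheme 2: deflect to any live neighbor, which then
        # retries its own stable route (a detour works similarly, via a
        # designated detour node).
        for neighbor, up in self.neighbors.items():
            if up:
                return ("transient/deflection", neighbor)
        return ("drop", None)

r = Router(stable_table={"d": "a"},
           backup_routes={"d": "b"},
           neighbors={"a": False, "b": True})  # the link to "a" has failed
print(r.next_hop("d"))  # -> ('transient/backup', 'b')
```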

16 comments:

  1. I think it's really important to also consider the changing environment of the internet, and how society has reacted to these changes. The arguments made in Rethinking the Design of the Internet: End-to-end arguments vs. the brave new world are definitely worth considering, but it seems many of these concerns are already taken care of. For example, RSA (SSL) encryption and IPsec assuage many of the third-party access concerns (governments or ISPs looking at traffic), although the paper does mention that this requires third parties to verify certificates. Furthermore, the flexible protocols of the internet allow concerned parties to create their own lower layers (http://www.geekosystem.com/hacker-internet-in-space/) to avoid third parties altogether. The internet is a complicated project, and requires complicated solutions to satisfy everyone's needs. This flexibility is a strength in my opinion, not a weakness.

    Replies
    1. The point of the third-party argument wasn't necessarily strictly from the user's side. Governments and ISPs have very real motivations for monitoring and inspecting user traffic, and since these institutions drive a lot of the development of the network and the procedures supporting it, they have a lot of influence over future changes. The authors were making the point that third parties themselves want to change the nature of the Internet, moving it away from the end-to-end principles so that they can exercise more control.

  2. The Rethinking paper was really interesting from a historical perspective. Kinda going off of what Jon said, a lot of the issues they brought up have consequences today. SSL encryption obviously is huge, but I was really surprised when they pointed to applications that try to place content closer to their users as violations of the end-to-end principles. Things like CDNs seem like such a common part of the internet now that their widespread use must have influenced the core design of the Internet since this paper's publication.

    The most interesting thing was how many of the issues brought up are still unresolved today. Anonymity is huge: with ever more sophisticated tools created to track users' actions, it is difficult to remain anonymous while still experiencing all the features of an application, short of resorting to proxies or Tor. Government involvement in censorship and regulation of internet traffic is another huge issue, and the one with the most potential to change how we use the internet. It would be interesting to see an update to this paper discussing today's issues, from extremely bandwidth-heavy applications like Netflix to the role of government, touching on SOPA/CISPA and China's firewall.

    Replies
    1. Not even just that - there was the whole fiasco a couple of months ago when the UN wanted to take control of the internet for regulation, but ultimately (and thankfully) the US refused to agree. One of the biggest reasons the internet is so successful today is its decentralized and free/open/chaotic state. Once you start adding gatekeepers and enforcing various forms of censorship for political or profit reasons, the internet is no longer a symbol of innovation and open thought. While the "untrustworthiness of less sophisticated Internet users" may seem problematic at times, it is still important for the internet to stay unregulated.

      Kinda went off on a tangent, ^^;

      Also, as an aside, Tor and proxies are hardly anonymous - with Tor you are trusting the exit node, and with proxies you are trusting the proxy to keep your information confidential.

  3. For the second paper, Consensus Routing: The Internet as a Distributed System, I'd like to know if the authors have plans to implement their routing algorithms on a large scale.

    New routing protocols are proposed and researched with great regularity, but the stubborn nature of the internet (and especially ISPs) means that few of these ideas ever come to fruition. It would be nice to hear about modern changes in routing protocols and other modifications to the network infrastructure that ISPs have been advocating.

    Replies
    1. This comment has been removed by the author.

    2. There is a serious point here about the cost of implementation in terms of coordination. Not only would ISPs each have to implement this (with particular load and importance falling on the shoulders of the Tier-1s), they would also have to come together to do so. A significant challenge, I think.

      For each provider, there is also a question of value. It doesn't take much to sell me on the idea that this would be good for the end user (fewer interruptions when things change, etc.). But for a single AS, what is the impact of some of the classic BGP pitfalls (obviously this depends on what its cost function looks like, i.e., where it derives value)? How much does it really hurt a particular AS if there is a bad route for a few seconds? This could very well be ignorance on my part, but if anyone expects broad adoption (I'm not sure that the authors do), it's the providers that have to be sold on the idea.

  4. The paper brings up both sides of the issue of anonymity in browsing. There are third parties whose interests would be served by greater visibility into where data is going and who is requesting it. These could be ISPs tracking content, Google wanting to know what to advertise to you, or the government trying to find out your business. As the paper asks: does a third party, in any case, have the right to interpose itself into a communication between end nodes, and if so, is it the responsibility of the infrastructure to support this?

  5. The Rethinking the Design of the Internet paper addresses some important implications of the evolving internet, which may shape what form the future network takes. The internet itself is a paradoxical technology: ISPs, users, content providers, and governments all have different motivations. Control vs. freedom (government censorship), openness vs. privacy (anonymity in browsing), security vs. trust (cybercrime): all of these will make the internet an increasingly complicated system as it grows. The authors also think that the balance of power among the players is not a winner-take-all outcome, but an evolving balance.

    Replies
    1. The second paper questions traditional routing protocols, which favor responsiveness over consistency, and introduces consensus routing, which the authors think will make routing protocols more predictable and secure. Consensus routing is a decentralized mechanism that can address these consistency problems in policy routing.
      First, a distributed coordination algorithm determines which routes to adopt; a rough sketch of this step follows below. Second, packets are forwarded in one of two logically distinct modes, stable or transient. Simulations of this method turned out to be satisfying: all ASes maintain complete connectivity following 98.5% of the failures evaluated, and during subprefix-caused and intra-domain loops a much higher connectivity rate is retained than with BGP.
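
      As a very rough illustration of the coordination step (my own sketch, not the authors' code: the paper has a set of consolidators run a consensus protocol, for which a single coordinator stands in here), the key idea is that an update is adopted only once every AS has reported processing it, and all ASes then switch to the new epoch together:

      ```python
      # Hypothetical sketch of the epoch-based coordination behind "stable mode".
      class Coordinator:
          def __init__(self, ases):
              self.ases = set(ases)     # participating ASes
              self.processed = {}       # AS -> highest update id it has processed
              self.epoch = 0
              self.adopted_through = 0  # updates agreed to be stable so far

          def report_processed(self, asn, update_id):
              self.processed[asn] = max(self.processed.get(asn, 0), update_id)

          def close_epoch(self):
              # An update is stable only once *every* AS has processed it,
              # so the minimum over all reports marks the agreed cut.
              if self.processed.keys() != self.ases:
                  return None  # cannot close the epoch until everyone reports
              self.adopted_through = min(self.processed.values())
              self.epoch += 1
              return self.epoch, self.adopted_through

      c = Coordinator(ases={"AS1", "AS2", "AS3"})
      c.report_processed("AS1", 7)
      c.report_processed("AS2", 5)
      c.report_processed("AS3", 9)
      print(c.close_epoch())  # -> (1, 5): epoch 1 adopts updates 1..5 everywhere
      ```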

  6. For the first paper, I believe the authors make a clear case that the end-to-end design is a good one and is hard to break, because of its flexibility, compatibility, and reliability. Even though governments and ISPs have been trying to change the environment of the Internet in response to increasingly consumer-like end-user behavior, it is still preferable to add new functionality at the application layer at the ends. On the other hand, some well-funded companies are putting their efforts into new networks, built around short-term opportunities, that are not end-to-end and could fundamentally change the whole Internet architecture.

    The paper is more than 10 years old, but the authors' thinking has proved quite compelling. Over the last decade, new companies like Google, Twitter, Amazon, and Netflix didn't reinvent the end-to-end design; rather, they built distributed systems on top of end-to-end backbones and invested billions of dollars in CDNs and data centers, trends the paper identified as well.

    Replies
    1. The second paper, on consensus routing, is a good model of how protocol optimization should be employed in an end-to-end Internet design: it embraces compatibility by changing nothing in the existing BGP protocol, and welcomes flexibility by only adding a layer on top of it, so that arbitrary policies can still be used.

  7. This comment has been removed by the author.

  8. Both papers, Rethinking the Design of the Internet: The end-to-end arguments vs. the brave new world and Consensus Routing: The Internet as a Distributed System, bring out new ideas for the future internet.

    The first mentioned many interesting features of today's internet even though it was written 13 years ago. The hacking war between the group Anonymous and Sony, which happened a few years ago, shows us the vulnerability of the modern internet and of anonymity. Nowadays almost everybody's personal information is kept on the internet, and that will keep causing problems that will not be easy to handle.

    The second paper says that "By favoring responsiveness (a liveness property) over consistency (a safety property), Internet routing has lost both." That may be true, but I share Josiah's doubts about whether consensus routing can be implemented in a large-scale system. Today, services such as Facebook and Twitter deal with big data, and it is well known that service providers inevitably give up some consistency to improve availability and partition tolerance when dealing with big data. I would like to look for related research.

  9. The Rethinking the Design of the Internet paper provided a broad overview of the design of the internet and the issues we face as the primary role of the internet shifts. Although it's from 2000, many of its points still hold today. There was a lot of mention of the government involving itself in monitoring activities, and that is still a prevalent topic. While some government action is to be expected, the internet is unique in the level of freedom it gives an individual. Monitoring of a computer user's activities can reveal very personal information and should always be considered a violation of a person's rights.

  10. There was an article on HN today sort of relevant to my paper, on end-to-end arguments. Check it out: http://gigaom.com/2013/01/16/time-warner-cable-vs-netflix/
