Showing posts with label internet traffic. Show all posts

Wednesday 7 January 2015

Evolvable Internet Architecture



When it comes to future Internet research, improving the existing architecture so that it can meet many different types of requirements has become a key topic. Professor Xu Ke and his group from the Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, have set out to find a solution to this problem.

The team has developed a novel evolvable Internet architecture framework under three design constraints: the evolvability constraint, the manageability constraint, and the economic adaptability constraint. According to the team, the network layer can be used to develop the evolvable architecture under these constraints. Their work has been published in SCIENCE CHINA Information Sciences, 2014, Vol. 57(11), under the title "Towards evolvable internet architecture-design constraints and models analysis".

The overall concept:

With nearly a third of the world's population having access to the Internet, it has become one of the strongest and most global means of communication. In a deeper sense, however, the Internet is no longer merely a communication channel; it has been extended into a combined communication, data processing and storage platform. Given these changes, it is difficult for the existing Internet architecture to adapt.

Future Internet research has followed two mainstream architecture development ideas, the dirty-slate approach and the clean-slate approach, both pursued for years in many countries. Still, neither approach can fully and efficiently solve the problems of the current Internet architecture.

The dirty-slate approach can only solve a narrow range of local issues on the Internet, and it also makes the architecture evolve into a complex and cumbersome structure. The clean-slate approach, on the other hand, faces serious deployment and transition issues, which are essentially uncontrollable under the current architecture.

The evolvable Internet architecture combines the advantages of both the clean-slate and dirty-slate approaches, while ensuring that the core principles are not compromised.

The evolvable architecture is more flexible than the dirty-slate approach, and more stable than the clean-slate approach. It consists of three different layers.

Apart from ensuring that the construction of the evolvable architecture conforms to the design principles, three constraints must be satisfied during the development stages: the economic adaptability constraint, the evolvability constraint, and the manageability constraint. This is the basis for the next generation of the Internet.

The development and history: 

The discovery and development of this evolvable architecture are the result of collaborative efforts by researchers from different institutes and universities. The project was supported by grants from the 973 Project of China (Grant Nos. 2009CB320501 & 2012CB315803), the NSFC Project (Grant Nos. 61170292 & 61161140454), the New Generation Broadband Wireless Mobile Communication Network of the National Science and Technology Major Projects (Grant No. 2012ZX03005001), the 863 Project of China (Grant No. 2013AA013302), EU MARIE CURIE ACTIONS EVANS (Grant No. PIRSES-GA-2010-269323), and the National Science and Technology Support Program (Grant No. 2011BAK08B05-02).

Tuesday 19 August 2014

Internet Traffic Disruptions Imminent As Routing Tables Approach Limit


Short Disruption in Internet Traffic Routing

Short disruptions in Internet traffic routing occurred recently, attributed to a surge of registered IPv4 network routes that pushed the total number beyond the capacity of the older, though still widely used, routers that form the backbone of the Internet. Internet users everywhere experienced major problems with websites as a result of the flood of updates to databases within the Internet's routers.

This caused connectivity problems for companies, and many believe that modifying and rebooting the affected routers will reduce Internet outages in the near future. Experts agree the immediate problem can be solved, but speculate that it could be a precursor to more widespread disruption later on.

Some of the older models still in operation, including massive-scale data centre Internet routers, have a hard-coded limit on their Ternary Content Addressable Memory (TCAM). The TCAM stores records of the Internet routes that are broadcast by various servers from time to time; those records are in turn used to forward data, such as website requests, from one part of the world to another.
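To illustrate what those stored route records are used for, here is a minimal longest-prefix-match lookup sketched in Python. The prefixes and next-hop names are purely hypothetical; real routers do this lookup in TCAM hardware at line rate, not in software:

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop (illustrative values only)
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "upstream-A",          # default route
    ipaddress.ip_network("203.0.113.0/24"): "peer-B",
    ipaddress.ip_network("203.0.113.128/25"): "customer-C",   # more specific
}

def longest_prefix_match(dst):
    """Pick the most specific route (longest prefix) covering the address."""
    dst = ipaddress.ip_address(dst)
    best = max((net for net in routes if dst in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(longest_prefix_match("203.0.113.200"))  # customer-C (the /25 wins)
print(longest_prefix_match("198.51.100.1"))   # upstream-A (default route)
```

Each broadcast route update adds or changes one entry in such a table; the TCAM's hard-coded limit caps how many entries can exist at once.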

Gateways/Routers Mark Entry/Exit Points

Thousands of gateways, the routers that mark the entry and exit points between Internet service providers' networks and the large backbones that carry data packets across the world, are governed and synchronised by the Border Gateway Protocol (BGP). The practical limit on storable routes in the affected routers is 524,288, or 2 raised to the power of 19, informally known as the "512k" limit. That limit was crossed on Tuesday, resulting in congestion and blocked routes: BGP ended up unable to identify routes, which caused the packet loss.
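The arithmetic behind the "512k" limit, plus a toy model of a fixed-size route table, can be sketched as follows. This is a deliberate simplification: real routers that hit the limit failed in various ways, for instance by falling back to much slower software forwarding:

```python
# The default TCAM allocation on the affected routers held 2**19 IPv4 routes.
TCAM_LIMIT = 2 ** 19
assert TCAM_LIMIT == 524288   # the "512k" limit, in exact terms

def install(table, prefix, next_hop, limit=TCAM_LIMIT):
    """Install a route only if the fixed-size table still has room."""
    if len(table) >= limit:
        return False          # overflow: the route cannot be stored
    table[prefix] = next_hop
    return True

# With a toy limit of 3, the fourth route is rejected:
t = {}
results = [install(t, p, "hop", limit=3) for p in range(4)]
print(results)  # [True, True, True, False]
```

Traffic destined for routes that could not be installed falls through to whatever less specific route remains, or is dropped, which is consistent with the packet loss described above.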

Routers need frequent updates explaining how networks should be reached, since Internet traffic is designed to move along the most efficient paths. The frequency of updates matters because some routers can hold only around 512,000 route entries in memory without further tweaks. With some routers reaching that limit, the routing table effectively hit full capacity, and a few networks went offline.

Fix Router Memory Allocation Issue

The number of routes has been steadily increasing, and although stakeholders were gearing up to fix the router memory allocation issue, they were unprepared for the sudden surge in route records. According to the monitoring service BGPMon, the surge traced back to an error in Verizon's system; that error has since been fixed and traffic has returned to normal.

In May, however, Cisco had provided information on how to re-allocate TCAM on the Catalyst 6500 and 7600 to hold 1,000,000 routes, and this fix is comparatively simple to apply, though it requires each router to be taken offline momentarily. Increasing the capacity should alleviate the problem for a longer period, but it is not a permanent fix. As per packet fix.net, the milestone of 300,000 routes, indicating rapid growth, was passed in August 2009.
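A rough linear extrapolation from the two figures in the article (roughly 300,000 routes in August 2009 and the 524,288 limit hit in August 2014) shows why the 1,000,000-entry re-allocation is a reprieve rather than a cure. The growth rate is assumed constant here, which real route-table growth does not strictly obey:

```python
# Two approximate data points from the article:
routes_2009 = 300_000    # August 2009
routes_2014 = 524_288    # August 2014, when the default TCAM limit was hit

per_year = (routes_2014 - routes_2009) / 5           # linear growth estimate
years_left = (1_000_000 - routes_2014) / per_year    # until 1M entries fill

print(round(per_year))    # 44858 routes/year
print(round(years_left))  # 11 years, under this (optimistic) linear model
```

In practice route-table growth has tended to accelerate, so the actual breathing room is likely shorter than this simple estimate suggests.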