Current Network Problems
The number of hosts connected to the Internet is said to be on the order of several hundred million. Using the Internet, people can now obtain information from around the world while sitting in front of a personal computer. The Internet rests on two main communication schemes: Transmission Control Protocol (TCP), which transmits packets without error between end hosts, and Internet Protocol (IP), which delivers packets from a sending host to a receiving host via routers within the network. Because IP guarantees only connectivity between end hosts, the Internet is called a "best-effort network": the network delivers packets between end hosts as quickly as it can, but promises nothing more.
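The division of labor described above can be seen from an application's point of view: IP may lose or reorder packets, but TCP presents the application with a reliable, in-order byte stream. A minimal loopback sketch (the echo behavior, addresses, and port choice here are illustrative, not part of any particular system):

```python
import socket
import threading

def echo_once(server_sock):
    """Accept one connection and echo back whatever arrives."""
    conn, _ = server_sock.accept()
    data = conn.recv(1024)
    conn.sendall(data)
    conn.close()

# TCP server on the loopback interface; port 0 asks the OS for a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,)).start()

# TCP client: the bytes arrive intact and in order, even though the
# underlying IP layer only promises best-effort delivery.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
```

The retransmission, ordering, and checksumming that make `reply` equal the sent bytes all happen inside TCP; the application never sees the best-effort layer beneath it.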
At the same time, higher processing speeds at end hosts and the development of high-speed multimedia applications have prompted users to demand various levels of communication quality (referred to as Quality of Service, or QoS). The Integrated Services (IntServ) network was initially proposed as a mechanism for satisfying this demand, and it became the target of standardization activities at the Internet Engineering Task Force (IETF), an Internet standardization body. Though slated to be the platform for future multimedia networks, IntServ was soon found to suffer from problems of scalability and of deployment over the existing network. The Differentiated Services (DiffServ) network was then proposed to solve these two problems by giving up strict QoS guarantees and instead differentiating classes of traffic. Here too, however, several problems were pointed out in the DiffServ architecture, and as a result this approach has not found widespread use.
The following problems are common to the above two types of networks. First, for IntServ, a QoS guarantee means that network resources must be secured at all routers between the sending and receiving hosts. The transmission path must therefore be decided beforehand, which means the mechanism itself collapses if any router or circuit on that path fails during a communication session. This problem also affects DiffServ. Furthermore, as mobile communication technology continues to develop, the mobility of end hosts means that the very assumption of semi-fixed network resources between end hosts will no longer hold, even without equipment failures.
Against this background, recent years have seen the appearance of Peer-to-Peer (P2P) networks, which realize services through direct communication between end hosts. The P2P format aims to solve some key problems of server-based networks such as the World Wide Web, including system vulnerability, lack of scalability, and performance limits due to server bottlenecks. By breaking away from the server-based Web model, P2P networks are expected to provide fault tolerance and scalability and, by making intermediary servers unnecessary (disintermediation), to reduce the initial deployment and management costs of servers and the network, eliminating the need for information-system operators and managers. The elimination of intermediary servers also means that users can join a variety of communities, promoting independent activities based on autonomy, distribution, and cooperation, as befits the information age.
However, compare a pure P2P network, which floods the entire network with queries to find needed information, with a hybrid P2P network, which improves search efficiency by introducing a server that manages metadata (information indicating the whereabouts of other information). The comparison shows that scalability in a P2P network trades off against fault tolerance and performance, and that a comprehensive solution to all of these problems has yet to be obtained.
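The contrast between the two search styles can be made concrete with a toy model (the class, function names, and the four-peer topology below are purely illustrative, not taken from any real P2P implementation):

```python
class Peer:
    """A node that stores content keys and knows its neighbors."""
    def __init__(self, name):
        self.name = name
        self.content = set()
        self.neighbors = []

def flood_search(start, key, ttl):
    """Pure P2P: forward the query to every neighbor, hop by hop, up to a TTL.
    Returns (found, messages_sent); the message count grows with network size."""
    visited, frontier, messages = {start}, [start], 0
    for _ in range(ttl):
        next_frontier = []
        for peer in frontier:
            if key in peer.content:
                return True, messages
            for n in peer.neighbors:
                messages += 1          # every forwarded query is one message
                if n not in visited:
                    visited.add(n)
                    next_frontier.append(n)
        frontier = next_frontier
    return False, messages

def hybrid_search(index, key):
    """Hybrid P2P: a single lookup at the metadata server mapping keys to peers."""
    return index.get(key) is not None, 1

# Four peers in a line, p0 - p1 - p2 - p3, with the content at the far end.
peers = [Peer(f"p{i}") for i in range(4)]
for a, b in zip(peers, peers[1:]):
    a.neighbors.append(b)
    b.neighbors.append(a)
peers[3].content.add("song.mp3")
index = {"song.mp3": peers[3]}

found_flood, msgs_flood = flood_search(peers[0], "song.mp3", ttl=4)
found_hybrid, msgs_hybrid = hybrid_search(index, "song.mp3")
```

Even in this four-node line, flooding costs several messages where the index lookup costs one; in a large, densely connected network the gap widens, while the index server becomes the single point of failure and the performance bottleneck — the trade-off noted above.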
As the above examples show, solving an individual problem in a network is not actually that difficult; many similar problems from the past can be referenced when designing a solution. The really difficult problem is to:
- Determine the limits of current hardware and software technology and predict future technical limitations;
- Clarify the network service image that is needed now and in the future; and
- Come up with a network design in harmony with all of the above.
This problem is none other than one of constructing appropriate network architecture, which is the goal of our research group.
Future Network Direction
The following three keywords represent the properties deemed necessary for network architecture of the future.
- Scalability: It goes without saying that the Internet user population is on the increase, and this, along with an increase in sensor devices and the spread of information appliances, will result in an increasing number of information-device terminals connected to the Internet. It can be safely assumed that such devices will also be used in a mobile environment. As a result, the method for managing network resources will naturally have to change in order to deal with the increasing number of routers, end hosts, and users.
- Diversity: Network technology is becoming increasingly diverse. A wide variety of high-speed technologies are being developed, including wireless LANs and wireless circuits based on 4th-generation technology, access circuits based on DSL and FTTH technologies, and backbone circuits based on gigabit Ethernet and optical communications technology. As a consequence, an integrated network based on a single network architecture, as frequently advocated in the past, will not come into existence, and a communications format that can provide a stable circuit between end hosts will be all the more difficult to achieve. In addition, the diversification of information equipment and devices means that the characteristics of the traffic flowing through the network will also diversify.
- Mobility: In a mobile environment, the movement of the user himself must be taken into account, which calls for flexible network control. Furthermore, for the users at the other end of such mobile communications, it means that network resources themselves will move and will frequently be created and deleted. In situations where users rather than servers become the providers of information resources, as in P2P networks, the ease with which computers can be disconnected from the network must also be considered. Finally, in a mobile environment, the possibility arises that routers themselves will move.
The preconditions represented by the above three keywords not only rule out the possibility of a single network architecture "satisfying all user communication requests" but also call for greater end-host adaptability as a core policy. In principle, the network must provide a mechanism supporting such adaptability. End hosts will therefore need to determine the state of the network autonomously, making control based on network-measurement technology essential. For the network as well, autonomous distributed control premised on end-host adaptability is important. This research direction also holds for photonic networks slated for use as a backbone infrastructure.

It is said that the Internet was originally intended to have such a distributed format, but that has not been the case in actuality. For example, path control in IP requires cooperation between routers, which amounts to no more than centralized processing at each of those routers, and this can weaken the fault tolerance of the network. In other words, while efforts are intensifying in the direction of distributed processing to improve fault tolerance, the efficiency of resource usage can be expected to drop as a result, and this drop must be compensated for by making end hosts adaptable to the present state of the network. If this can be achieved, it should be possible to construct a network excelling in scalability and fault tolerance while supporting the diverse communication technologies of the future, and to provide services that meet diverse user needs. Of course, this will require that end hosts be all the more autonomous, which in turn requires a harmonious order across the entire network. Such harmony is in fact being discussed in relation to adaptive complex systems, and knowledge of such systems may be of great use in achieving it.
The "end-to-end principle" has been repeatedly stressed with regard to the Internet. This principle states that:
- The network must not be constructed based on specific applications or with the purpose of supporting specific applications; and
- Functions that can be performed by an end host are entrusted to that host, and related state information is to be maintained only at that host.
To put it bluntly, this means that "communication functions are implemented as much as possible at end nodes while the network devotes itself to transporting bits of information." This basic principle is also referred to as the KISS (Keep It Simple, Stupid) principle. The network described above returns to the origins of the Internet and can even be thought of as taking them to a higher level.
A famous law describing the value of a network is Metcalfe's law, which states that the value of a network increases in proportion to the square of the number of nodes (or users). Given that all users can communicate directly with each other, the network value V(N) ∝ N², where N is the number of users. This law, however, tended to break down with the expansion of the client/server model in the Web system, although the coming of P2P networks can be seen as an attempt to reverse that trend. Examining the number of peer connections in a P2P network reveals a power-law relationship, as well as other aspects of a complex system. If the reason for such phenomena can be explained, it might become possible to clarify the relationship between fault tolerance and the speed of convergence to an optimal configuration or solution. A particularly important point here is that the Internet differs from other complex systems in that it can be controlled; the Internet itself can thus be viewed as a huge experimental site for complex systems. The power law has also been "discovered" in Internet topology, for example, and if the reason for its appearance can be explained and appropriate network control found, it may become possible to feed the knowledge so obtained back into research on other types of complex systems.
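Metcalfe's quadratic growth follows directly from counting the distinct pairs of users who can communicate. A quick check (plain Python; the population sizes are chosen arbitrarily for illustration):

```python
def metcalfe_value(n):
    """Number of distinct pairwise links among n users: n(n-1)/2, i.e. V(N) ∝ N²."""
    return n * (n - 1) // 2

# Doubling the user population roughly quadruples the number of possible links.
v_1000 = metcalfe_value(1000)   # 499500 links
v_2000 = metcalfe_value(2000)   # 1999000 links
ratio = v_2000 / v_1000         # ≈ 4.0
```

In a client/server Web system, by contrast, each of the N users effectively links only to the server, so the comparable count grows linearly in N, which is one way to read the claim that the client/server model "broke" Metcalfe's law.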
Our Research Themes
Based on the above, our research group is promoting the research themes listed below.
1.1) Research on hardware implementation of CCN
1.2) Research on rapid deployment of CCN
1.3) Research on content caching system of CCN
2.1) Research on sensor network architecture
2.2) Research on mobile ad-hoc network architecture
3.1) Research on overlay network architecture
3.2) Research on web-based service architecture
4.1) Research on control of virtualization in data center
4.2) Research on configuration of data center network with photonic technology
4.3) Research on chipping data center network
5.1) Research on control of network by traffic engineering
5.2) Research on traffic measurement for traffic engineering
6.1) Research on performance evaluation method of network based on characteristics of TCP
7.1) Research on IPv6 routing protocols
8.1) Research on optical path network based on fluctuation control
8.2) Research on optical packet and circuit integrated network
9) Research on construction of information network architecture inspired by the robustness and adaptability of the brain and living organisms
9.1) Research on self-organizing network architecture
9.2) Establishment of control technology for self-organization
9.3) Establishment of control technology on network as complex adaptive system
The discussion and the individual research themes above are described in detail in Achievements 2013.