The Objectives of the Research Group
1. Current Issues
To satisfy the continually increasing user demand for various levels of quality of service (QoS) that accompanies faster end-hosts and multimedia applications, the IETF, the Internet standardization organization, has been standardizing the IntServ (Integrated Services) network, which is expected to serve as a platform for future multimedia networks. However, problems concerning scalability and deployment through migration from current networks were pointed out immediately. A different idea that grew out of reconsidering IntServ is DiffServ (Differentiated Services), in which the QoS guarantee is abandoned and there is only differentiation of QoS between connections. That approach was seen as a solution to the two problems mentioned above. However, a number of problems with the DiffServ architecture have also been pointed out, and it has not come into wide use.
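To make the differentiation that DiffServ relies on concrete, the following is a minimal sketch of how an end-host can mark a flow with a DiffServ codepoint (DSCP); the EF codepoint, address, and port below are illustrative assumptions, not values from the text, and the IP_TOS option is available on typical Unix-like platforms.

```python
# Minimal sketch: marking outgoing packets with a DiffServ codepoint (DSCP).
# DiffServ routers read this field to choose a per-hop forwarding behavior.
# The EF codepoint and the destination below are illustrative assumptions.
import socket

DSCP_EF = 46                  # Expedited Forwarding: low-loss, low-latency class
TOS = DSCP_EF << 2            # DSCP occupies the upper 6 bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)  # mark all packets from this socket
sock.sendto(b"probe", ("192.0.2.1", 9))                 # routers may now prioritize this flow
```

Note that the end-host only marks packets here; whether the network honors the marking depends entirely on router configuration, which is one reason deployment has lagged.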
The two networks described above share certain problems. To guarantee QoS in IntServ, network resources must be reserved at every router between the sending and receiving hosts. Doing so requires that the route be determined in advance, and if a router or line fails during communication, the connection itself fails. The same problem appears in the DiffServ architecture. Moreover, even if future advances in mobile communication technology were to eliminate such failures, the assumption that network resources between end-hosts can be treated as semi-fixed does not hold in a mobile environment, because the end-hosts themselves move.
On the other hand, for systems such as the Web, in which servers are the main component of the network, the peer-to-peer (P2P) network appeared as a means of solving such problems as system vulnerability, inadequate scalability, and performance limitations due to server bottlenecks, by implementing services through direct communication between end-hosts. Introducing the P2P network in place of the server-based Web system is expected to bring a number of advantages, including scalability and robustness against failure. Eliminating the server 'middleman' can also reduce the initial introduction and management costs of the servers and network, and it eliminates the need for an information system administrator or manager. Furthermore, the ability of users to belong to various communities without mediation by a server will promote user-driven activities based on autonomy, distribution, and collaboration in the information era.
However, a pure P2P network has a flat topology, so the entire network must be queried to find the required information. A hybrid P2P network introduces servers that manage metadata (data that describes data) to improve search efficiency, but such an architecture reintroduces the scalability problem. There is thus a trade-off between scalability and robustness on the one hand and performance on the other in P2P networks, and no final solution is yet available.
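The cost of searching a flat topology can be seen in a minimal sketch of a flooding query; the topology, TTL, and item placement below are illustrative assumptions rather than any actual P2P protocol.

```python
# Minimal sketch of flooding search in a pure (flat) P2P network.
# Every peer forwards the query to all neighbors until the TTL expires,
# so message cost grows with network size; a hybrid P2P avoids this with
# a metadata server, which reintroduces the scalability bottleneck.

def flood_search(peers, start, key, ttl=3):
    """peers: {peer_id: {"neighbors": [...], "items": {...}}} (assumed layout)."""
    visited, frontier, messages = {start}, [start], 0
    for _ in range(ttl):
        next_frontier = []
        for p in frontier:
            if key in peers[p]["items"]:
                return peers[p]["items"][key], messages
            for n in peers[p]["neighbors"]:
                messages += 1                 # one query message per edge crossed
                if n not in visited:
                    visited.add(n)
                    next_frontier.append(n)
        frontier = next_frontier
    return None, messages                     # not found within the TTL

peers = {
    "A": {"neighbors": ["B", "C"], "items": {}},
    "B": {"neighbors": ["A", "D"], "items": {}},
    "C": {"neighbors": ["A", "D"], "items": {}},
    "D": {"neighbors": ["B", "C"], "items": {"song.mp3": "held by D"}},
}
print(flood_search(peers, "A", "song.mp3"))   # found, but only after flooding
```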
The above examples illustrate that it is not difficult to solve network problems individually. The truly difficult problems are those listed below.
- Understanding the current limitations of hardware and software technology and predicting the limitations of future technology
- Clarifying the network service concept that is required both now and in the future
- Designing the network for overall harmony based on that service image
2. The Future Direction of Networks
The requirements of the future network architecture can be summarized in the three key terms described below.
- Scalability: The number of information terminals connected to the Internet, such as sensors and information appliances, is steadily increasing, to say nothing of the growth in the population of Internet users itself. It must also be assumed that those devices will be used in a mobile environment. As a result, the method of managing network resources must change so that increases in the numbers of routers, end-hosts, and users can be accommodated.
- Diversity: Network technology is increasingly diversifying. Various kinds of high-speed technology continue to be developed, including wireless lines such as wireless LAN and fourth-generation mobile systems, access lines such as DSL and FTTH, LAN technology such as Gigabit Ethernet, and backbone lines that employ optical communication technology. As a result, the integrated network based on a single architecture that has been proposed time and again has never materialized, and consequently a form of communication capable of providing stable communication lines between end points has not been attainable. In addition, the diversity of information appliances and devices is also increasing the diversity of network traffic characteristics.
- Mobility: In a mobile environment, the mobility of the users themselves must be considered, and that requires flexible network control. Furthermore, the mobility of the network resources themselves, and the frequency with which they are created or destroyed, are also significant factors for the other parties in a communication. When the providers of information resources are users rather than servers, as in a P2P network, the ease with which a computer can be disconnected from the network is a further factor to consider. A mobile environment also means that routers themselves may move.
Given the three keywords described above, there is still no single network architecture that can "satisfy the communication requirements of all users," which makes improving end-host adaptability all the more fundamental. The provision of a mechanism that supports such adaptability must therefore be a basic principle of the network. That purpose requires autonomous knowledge of network status, so end-host control based on network measurement technology is essential. For the network itself, on the other hand, autonomous and distributed control premised on end-host adaptability is important. This direction in research also applies to the photonic networks that serve as the backbone infrastructure.
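As a minimal sketch of what end-host control based on network measurement could look like, the following simulates a host that probes round-trip time and adapts its sending rate with an AIMD (additive-increase, multiplicative-decrease) rule; the probe, threshold, and constants are illustrative assumptions, not a specification from the text.

```python
# Minimal sketch of end-host adaptation driven by its own measurements:
# the host periodically measures RTT and adjusts its sending rate with an
# AIMD rule. The probe is simulated; all constants are assumptions.
import random

def measure_rtt():
    """Stand-in for a real probe (e.g., a timed ping or TCP handshake)."""
    return random.uniform(0.02, 0.2)        # seconds, simulated

def adapt(rate, rtt, rtt_threshold=0.1, increase=0.5, decrease=0.5):
    if rtt > rtt_threshold:                 # congestion inferred from measurement
        return max(rate * decrease, 0.1)    # multiplicative decrease
    return rate + increase                  # additive increase (Mbit/s)

rate = 1.0
for step in range(10):
    rtt = measure_rtt()
    rate = adapt(rate, rtt)
    print(f"step {step}: rtt={rtt*1000:.0f} ms -> rate={rate:.2f} Mbit/s")
```

The point of the sketch is that all control state lives in the end-host, in keeping with the End-to-End Principle discussed below; the network need only carry the probes.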
Although the Internet is said to be inherently oriented to distributed processing, that is not actually true. For example, cooperation between routers is necessary even in IP routing control, and what each router performs is nothing more than centralized processing. That fact is related to the network's vulnerability to failure. In other words, the efficiency of resource use that is lost in pursuing a distributed-processing orientation must be compensated for by the adaptability of end-hosts to the current network status. That would make it possible to construct a network that is both scalable and fully robust against failure, while coping with the diverse communication technology that will continue to be developed and providing services to match the diverse needs of users. Of course, achieving such end-host adaptability requires increasing end-host autonomy and a system of overall network control premised on that autonomy. This is in fact a topic of discussion in the field of complex adaptive systems, so let us consider the possibility of making use of that knowledge.
With respect to the Internet, the “End-to-End Principle” has been repeatedly stressed. That principle can be expressed as the following two points.
- A network should not be constructed on the basis of any particular application or to support any particular application.
- The functions that can be implemented by an end-host should be assigned to that end-host and any related state information should be maintained only at that host.
On the other hand, Metcalfe's law, a well-known rule for expressing the value of a network, states that the value of a network grows in proportion to the square of the number of nodes. That is to say, when all nodes (or users) can communicate directly, the value V(N) of a network with N users will be about N². Because the Web system was developed on the client/server model, this rule does not hold for it. The P2P network can be said to represent a change back in that direction. In a P2P network, too, we can observe a power law in the distribution of the number of peer connections and understand that such a network exhibits the aspects of a complex system. If the cause of this phenomenon could be identified, it might be possible to clarify the relation between robustness against failure and optimality, or the rate of convergence to an optimal solution. A particularly important point is that the Internet differs from other complex systems in that it can be controlled. That is to say, the Internet itself is an immense experimental laboratory for investigating complex systems. A power law can be 'discovered' in the topology and other aspects of the Internet, and if the mechanism by which it appears, the network controls suited to it, and other such problems could be solved, the knowledge obtained might be fed back into research on other complex systems.
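The power law mentioned above can be reproduced with a simple growth model. The sketch below uses preferential attachment (in the style of Barabási and Albert) as one assumed mechanism; the text only reports that a power law is observed, not how it arises, so the model choice here is an illustration, not the author's explanation.

```python
# Minimal sketch: preferential attachment produces the heavy-tailed,
# power-law degree distribution observed in P2P connection counts and
# Internet topology. The growth model is an illustrative assumption.
import random
from collections import Counter

def grow_network(n_nodes, m=2, seed=0):
    random.seed(seed)
    targets = [0, 1]                 # one entry per edge endpoint, so sampling
    edges = [(0, 1)]                 # from this list is degree-weighted
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < m:       # attach to m existing, degree-biased nodes
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges

edges = grow_network(10_000)
degree = Counter(u for e in edges for u in e)
hist = Counter(degree.values())
for k in sorted(hist)[:10]:          # P(k) falls off roughly as a power of k
    print(f"degree {k}: {hist[k]} nodes")
```

Because such a model can be grown, perturbed, and re-run at will, it hints at why the Internet serves as a controllable laboratory for complex systems in the sense described above.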
In what follows, we report on how future network research should be conducted and on the direction of our own research.
Masayuki Murata, "Network architecture and the direction of future research," IEICE Technical Group on PNI, March 2006 (invited speech).