Internetworking With TCP/IP, Volume 2: Design, Implementation, and Internals (2nd edition), by Douglas E. Comer and David L. Stevens.
The payload is the data that is transported. This method of nesting the data payload in a packet with a header is called encapsulation.
IP addressing entails the assignment of IP addresses and associated parameters to host interfaces. The address space is divided into subnetworks, involving the designation of network prefixes. IP routing is performed by all hosts, as well as routers, whose main function is to forward packets across network boundaries. Routers communicate with one another via specially designed routing protocols, either interior gateway protocols or exterior gateway protocols, as needed for the topology of the network.
A central control component of this model was the "Transmission Control Program" that incorporated both connection-oriented links and datagram services between hosts.
The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol and User Datagram Protocol at the transport layer and the Internet Protocol at the internet layer.
IP versions 0 through 3 were experimental versions used during the protocol's early development. The first proposal for a version field in the IP header used 0 for that field, and its header differs from the modern IPv4 header. The dominant internetworking protocol in the Internet layer in use today is IPv4; the number 4 is the protocol version number carried in every IP datagram. Version number 5 was used by the Internet Stream Protocol, an experimental streaming protocol.
The successor to IPv4 is IPv6. Its most prominent difference from version 4 is the size of its addresses: IPv4 uses 32 bits for addressing, yielding roughly 4.3 billion (2^32) possible addresses, while IPv6 uses 128-bit addresses. Although adoption of IPv6 has been slow, by mid-2008 all United States government systems had demonstrated basic infrastructure support for it. Under the end-to-end principle, the network infrastructure is considered inherently unreliable at any single network element or transmission medium, and dynamic in terms of the availability of links and nodes.
No central monitoring or performance-measurement facility exists that tracks or maintains the state of the network; to reduce network complexity, the intelligence in the network is purposely located in the end nodes.

This volume covers, among other topics: global software organization; IP software design; the routing table and routing algorithm; fragmentation and reassembly; user datagrams; TCP data structures and input processing; transmission control blocks; the finite state machine implementation; output processing; timer management; flow control and adaptive retransmission; and client and server table-access functions. Understanding even simple ideas, such as how TCP buffers data, can help programmers design better applications.
While it may seem that understanding the internal details of protocol software is unimportant, experience shows otherwise. Besides conventional use on private industrial networks, the protocols now run in a wide variety of environments, and minor details that may seem unimportant suddenly become essential. The most difficult aspects of the protocols to understand are exactly those that standards documents leave implicit. We can summarize: the real complexity emerges only when one considers the operation of all the protocols together.

As many programmers have found, studying an implementation makes that complexity concrete. This text shows data structures and source code, reviews concepts, and explains nuances in each protocol. For example, IP uses a routing table to make decisions about how to forward datagrams; such data structures must be chosen with all protocols in mind, and heuristics used to handle problems and nuances in protocol design can make important differences in overall operation or performance. To fully understand the details of a protocol, one must study its implementation.

A protocol standard document specifies behavior, but the example code allows the reader to understand ideas, like urgent data processing and network management, whose implementation spreads across many parts of the code.
To explain the details, this text provides a concrete example implementation. The software presented is part of the Xinu operating system, a small system that has been documented in two textbooks; several vendors have used versions of Xinu as an embedded system in commercial products. Source code for the example system allows the reader to understand how the protocols interact and how the software can be integrated into a simple and efficient system. It shows how the judicious choice of data representation can make some protocols easier to implement, and conversely how a poor choice of representation can make the implementation tedious and difficult. The code fills in details that standards omit and provides the reader with an understanding of the relative difficulty of implementing each part.

We have chosen to use Xinu for several reasons: it is small and easy to understand, and although the Xinu implementation was not designed solely for this purpose, it serves well as a teaching vehicle. The example programs should be considered part of the text. Each piece of code appears in the chapter to which it is most pertinent; some chapters describe entire protocols, while others, such as Chapter 15 on heuristics for round-trip estimation, focus on a single mechanism. The text begins with a review of the operating system functions that TCP and the other protocols use.

Appendix 2 provides a list of the functions and procedures used in the code that are not contained in the text; for each procedure or function, it lists the name and arguments. Most of the procedures listed come from the C run-time support libraries or the underlying operating system. A cross reference is especially helpful in finding the context in which a given procedure is called, and Comer's Xinu texts describe the structure of the Xinu operating system itself. Together, the appendices help the reader understand the code.
We assume the reader is already familiar with most of the material in Volume 1. To fully understand the protocols, however, one must also understand the environment in which they run. This chapter therefore provides a brief overview of operating system concepts that we will use throughout the text: it shows the general structure of protocol software and explains in general terms how the software fits into the operating system. The examples in this chapter come from Xinu, and although they refer to system calls and arguments that are only available in Xinu, the same concepts appear in most operating systems. Later chapters review individual pieces of protocol software and present extensive detail.

Operating systems provide support for concurrent processing. Even on machines with a single processor, they give the illusion that multiple programs can execute simultaneously by switching the CPU among them rapidly. Perhaps the most important abstraction is that of a process (sometimes called a task or thread of control). An operating system provides mechanisms to create new processes and to terminate existing ones. In the example system we will use, create starts a new process and returns an integer process identifier used to reference the process when performing operations on it; a corresponding call terminates an existing process. The operating system can also allow two or more processes to execute a single piece of code.

Processes are especially useful in handling the timeout and retransmission algorithms found in many protocols. Using a single program to implement timeout for multiple protocols makes the program complex; instead of trying to write a single program that handles all possible sequences of events, the designer can dedicate a process to each activity. Because processes execute independently, protocol processing proceeds concurrently: a high-level protocol like TCP may create a segment while IP routes a datagram and passes it to the network interface, and eventually a low-level protocol like ARP may be invoked to resolve an address. To avoid having a process use the CPU while it waits, the system provides synchronization primitives; a counting semaphore is a general-purpose process synchronization mechanism.
How can independent processes coordinate? The answer is that the operating system must provide mechanisms that permit processes to communicate and synchronize. We will use three such mechanisms: counting semaphores, ports, and message passing. A semaphore is created by calling screate; screate returns a semaphore identifier that must be used in subsequent operations on the semaphore. Once a semaphore has been created, processes operate on it with wait and signal: when a process calls wait, the semaphore count is decremented and the process blocks if the count becomes negative; when a process calls signal, the count is incremented and a blocked process, if any, is allowed to resume. Although the semantics of wait and signal may seem confusing at first, they provide a simple, uniform way to block a process until an event occurs.

The operating system honors priorities when granting processes the use of the CPU. The priority scheme we will use is simple and easy to understand, and it is valuable in protocol software because it allows a programmer to give one process precedence over another; for example, the designer can assign higher priority to the process that implements low-level protocol software than to application processes. Because the operating system handles all the details of process scheduling, the programmer need only choose relative priorities.
To understand why mutual exclusion is essential, consider what happens when two processes attempt to update a shared data structure at the same time: depending on how much CPU time each process receives, the updates can interleave and corrupt the structure. Mutual exclusion means allowing only one process to execute a given piece of code at a given time. To achieve it, a program creates a semaphore with an initial count of 1 and brackets the critical code with wait and signal. The first process that calls wait proceeds; if a second process calls wait(s) while the first is executing the critical code, it blocks, and a third process to arrive waits as well. When the first process finally finishes and executes signal(s), the second process begins executing the critical code while the third waits; when the second process finishes and executes signal(s), the third proceeds. The point is that at any time only one process can execute the critical code.

In addition to providing mutual exclusion, semaphores can coordinate producers and consumers. Synchronization is needed because queues have finite capacity.
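The mutual-exclusion idiom just described can be sketched in C. This is a minimal illustration, using POSIX semaphores as a stand-in for Xinu's screate/wait/signal calls; the function names and iteration counts are ours, not the book's. Four concurrent workers increment a shared counter inside a critical section guarded by a semaphore whose count starts at 1:

```c
#include <pthread.h>
#include <semaphore.h>

/* Shared state protected by a counting semaphore initialized to 1,
 * mirroring the screate(1)/wait(s)/signal(s) idiom described above,
 * but expressed with POSIX semaphores (an assumption of this sketch). */
static sem_t mutex;
static long shared_counter;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);      /* like wait(s): enter the critical code   */
        shared_counter++;      /* critical section: one process at a time */
        sem_post(&mutex);      /* like signal(s): leave and wake a waiter */
    }
    return 0;
}

/* Run four concurrent workers; with mutual exclusion the final count
 * is exact rather than corrupted by interleaved, lost updates. */
long demo_mutex(void)
{
    pthread_t t[4];
    shared_counter = 0;
    sem_init(&mutex, 0, 1);    /* initial count 1 => binary semaphore */
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], 0, worker, 0);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], 0);
    sem_destroy(&mutex);
    return shared_counter;
}
```

Without the semaphore, the four workers would race and the final count would usually fall short of 400000; with it, the result is deterministic.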
Consider a queue that contains space for N items. Assume that one set of processes, the producers, inserts items into the queue, and that some other set of processes extracts items and processes them (typically many processes insert items and one process extracts them). Any producer that attempts to insert an item when the queue is full must be blocked until a consumer removes an item and makes space available; likewise, a consumer must be blocked when the queue is empty. At all other times both producers and consumers can proceed. For example, if the application programs producing datagrams generate them faster than the IP process can consume and route them, the producing applications must eventually block.

Two semaphores are required for coordination of producers and consumers as they access a queue of N items: one is initialized to N and counts available spaces, the other is initialized to zero and counts items present. A producer executes wait on the space semaphore before inserting an item and signal on the item semaphore afterward; a consumer does the converse. It is the programmer's responsibility to construct the system in such a way that messages are not lost.

The example system encapsulates this pattern in a mechanism called a port. A program creates a port by calling function pcreate and specifying the size of the queue as an argument; pcreate returns an identifier used to reference the port. We think of a port as a finite queue of messages plus two semaphores that control access. Psend sends a message to a port, and preceive extracts one.
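The port idea, a finite queue of messages plus two semaphores, can be sketched as follows. This is an illustrative reimplementation using POSIX semaphores, not the Xinu source; names such as PORTSIZE are assumptions, and a third lock would be needed for multiple concurrent producers:

```c
#include <pthread.h>
#include <semaphore.h>

#define PORTSIZE 8   /* N: capacity of the message queue (illustrative) */

/* A minimal "port": a finite queue of integer messages guarded by two
 * counting semaphores.  "spaces" starts at N and counts free slots;
 * "items" starts at 0 and counts messages present. */
struct port {
    int   buf[PORTSIZE];
    int   head, tail;
    sem_t spaces, items;
};

void pcreate(struct port *p)
{
    p->head = p->tail = 0;
    sem_init(&p->spaces, 0, PORTSIZE); /* N empty slots initially */
    sem_init(&p->items,  0, 0);        /* no messages initially   */
}

void psend(struct port *p, int msg)    /* blocks while the queue is full */
{
    sem_wait(&p->spaces);
    p->buf[p->tail] = msg;
    p->tail = (p->tail + 1) % PORTSIZE;
    sem_post(&p->items);
}

int preceive(struct port *p)           /* blocks while the queue is empty */
{
    sem_wait(&p->items);
    int msg = p->buf[p->head];
    p->head = (p->head + 1) % PORTSIZE;
    sem_post(&p->spaces);
    return msg;
}

/* Single-threaded smoke test: FIFO order is preserved. */
int port_demo(void)
{
    struct port p;
    pcreate(&p);
    psend(&p, 10); psend(&p, 20); psend(&p, 30);
    int a = preceive(&p), b = preceive(&p), c = preceive(&p);
    return (a == 10 && b == 20 && c == 30);
}
```

A producer blocked in psend sleeps on the spaces semaphore rather than spinning, which is exactly why the mechanism suits protocol software: the blocked process consumes no CPU while it waits.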
Message passing allows one process to send a message directly to another. A process calls send, which takes a process identifier and a message as arguments; the receiver calls receive to obtain the message. If the receiving process does not execute receive between two successive calls of send, the second send fails rather than blocking the sender. To help synchronize message exchange, a timed variant of receive lets the caller specify a maximum time to wait: if a message arrives within the specified time, the call returns it, and otherwise the call returns an error code.

Blocking must be used with care in protocol software. The design rules specify that a select group of processes may not block under any circumstances, because the whole system can stall if they do. Once a process blocks in psend, for example, it remains blocked until another process calls preceive. To allow processes to determine whether psend will block, the system reports the number of messages currently in a port: if the count equals the size of the port, the port is full and psend will block; if the count is zero, no items remain in the port and preceive will block. From the process's point of view, these primitives make synchronization explicit and controllable, and in addition to prohibiting interactions that block processes indefinitely, the design rules keep the flow of packets through the system predictable.
Network interface hardware transfers incoming packets from the network to the computer's memory and informs the operating system that a packet has arrived. An interrupt causes the CPU to temporarily suspend normal processing and jump to code called a device driver. The device driver handles the interrupt and restarts the device so it can accept the next packet, takes care of minor hardware details, and informs protocol software that a packet has arrived and must be processed. The driver code is hidden in an abstraction called a device, which also provides a convenient interface for programs that send or receive packets.

Packets arrive at random, so the design must accommodate input from any interface at any time; it is possible to solve the problem of waiting for a random interface in several ways. Some operating systems use the computer's software interrupt mechanism: as soon as the hardware interrupt completes, the driver posts a "software interrupt" that suspends processing and causes the CPU to jump to code that will handle the packet. Software interrupts are efficient, but they are not available everywhere; to make the protocol software portable, our design does not depend on them. Even operating systems that use conventional hardware interrupts have a variety of ways to handle multiple interfaces: some have mechanisms that allow a single process to block on a set of input devices and be informed as soon as a packet arrives on any one of them, while others use a process per interface.

In our design, there is an input queue associated with each network device. When a datagram arrives, the driver deposits it on the appropriate queue before returning from the interrupt, and the IP process extracts and routes it; when the IP process has no packets to handle, it blocks. The Ethernet interrupt routine uses the packet type field of arriving packets to determine which protocol was used in the packet (on networks that do not have self-identifying frames, the driver must use other information to demultiplex arrivals). When a packet that carries an IP datagram arrives, the driver enqueues it for the IP process; if the datagram carries a TCP segment, IP will later pass it to TCP.
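The demultiplexing step the Ethernet interrupt routine performs can be sketched as a dispatch on the frame's type field. The ethertype constants below are the standard assigned values; the queue identifiers and the function itself are illustrative assumptions, not the book's code:

```c
#include <stdint.h>

/* Illustrative ethertype demultiplexing, in the spirit of the Ethernet
 * input routine described above: the driver examines the frame's
 * packet type field and hands the frame to the matching protocol. */
enum { DROPPED = 0, TO_IP = 1, TO_ARP = 2, TO_RARP = 3 };

int ni_demux(uint16_t ethertype)
{
    switch (ethertype) {
    case 0x0800: return TO_IP;    /* IP datagram  -> IP process queue */
    case 0x0806: return TO_ARP;   /* ARP message  -> ARP input        */
    case 0x8035: return TO_RARP;  /* RARP message -> RARP input       */
    default:     return DROPPED;  /* unknown type -> discard frame    */
    }
}
```

Keeping this dispatch in the driver means the IP process never sees ARP traffic, and vice versa; each protocol's queue receives only its own packets.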
Because IP and TCP execute as separate processes, they must use an interprocess communication mechanism to communicate; they use the port mechanism described earlier. IP calls psend to deposit incoming segments in TCP's port, and the TCP process extracts and processes them. Once TCP receives a segment, it locates the connection to which the segment belongs: if the segment contains data, TCP will add the data to a buffer associated with the connection and return an acknowledgement to the sender; if the incoming segment carries an acknowledgement for outbound data, TCP uses it to discard acknowledged data from its buffers.

UDP is simpler. The UDP procedures examine the destination UDP protocol port number and use it to select an operating system queue (port) for the user datagram. Thus, TCP separates incoming data streams and places the data in per-connection buffers, while UDP demultiplexes incoming user datagrams based on protocol port number and places them in operating system queues, from which each application program, executing as a separate process, extracts them. As we will see later, we will examine the internals of each module in detail.
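The UDP demultiplexing step just described can be sketched as a small table that maps destination port numbers to operating-system queues. The table layout, sizes, and function names here are illustrative assumptions, not the book's data structures:

```c
#include <stdint.h>
#include <string.h>

/* Sketch of UDP demultiplexing: the destination UDP port number
 * selects an operating-system queue for the user datagram. */
#define UPORTS 8

struct upqe { int valid; uint16_t port; int queue; };
static struct upqe uptable[UPORTS];

void udp_init(void) { memset(uptable, 0, sizeof uptable); }

int udp_bind(uint16_t port, int queue)   /* register a queue for a port */
{
    for (int i = 0; i < UPORTS; i++)
        if (!uptable[i].valid) {
            uptable[i].valid = 1;
            uptable[i].port  = port;
            uptable[i].queue = queue;
            return 0;
        }
    return -1;                           /* table full */
}

int udp_demux(uint16_t dstport)          /* -1 => no listener: discard */
{
    for (int i = 0; i < UPORTS; i++)
        if (uptable[i].valid && uptable[i].port == dstport)
            return uptable[i].queue;
    return -1;
}

/* Smoke test: bound ports resolve, unbound ports are rejected. */
int udp_demo(void)
{
    udp_init();
    udp_bind(53, 4);
    udp_bind(69, 5);
    return udp_demux(53) == 4 && udp_demux(69) == 5 && udp_demux(7) == -1;
}
```

A datagram whose destination port has no registered queue is discarded, which is the behavior that ultimately triggers an ICMP port-unreachable message in a full implementation.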
While the details are complex, output follows one of two paths: either (1) an application program passes data to one of the high-level protocols, which eventually produces a datagram and hands it to IP, or (2) IP forwards a datagram that arrived from another network. In either case, IP chooses an outgoing network interface and deposits the datagram on the queue associated with that interface's output device.

The queues associated with output devices provide an important piece of the design: they help isolate the transmission of packets from the execution of the processes that implement application programs and protocols. They allow a process to generate a packet, deposit it on an output queue, and continue executing without waiting for transmission. When an output operation completes, the device interrupts the CPU; the interrupt handler dequeues the next packet if any additional packets remain in the queue, starts the device on it, and then returns from the interrupt. As long as packets remain on a queue, the device keeps transmitting. The hardware only needs to be started explicitly when a packet is deposited on an empty queue (i.e., when the hardware is idle at the time the packet arrives).
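The output-queue discipline just described, start the device only when a packet lands on an empty queue, restart it from the transmit-done interrupt while packets remain, can be sketched as a small simulation. All structure and function names here are illustrative assumptions:

```c
/* Sketch of the output-queue discipline described above.  The device
 * is "started" only when a packet is deposited on an idle device; the
 * simulated transmit-done interrupt restarts it while packets remain. */
#define QSIZE 8

struct outdev {
    int q[QSIZE];
    int head, count;
    int busy;          /* 1 while the hardware is transmitting */
    int started;       /* number of explicit hardware starts   */
    int sent;          /* packets transmitted so far           */
};

static void devstart(struct outdev *d) { d->busy = 1; d->started++; }

void netoutput(struct outdev *d, int pkt)  /* called by the IP process */
{
    d->q[(d->head + d->count) % QSIZE] = pkt;
    d->count++;
    if (!d->busy)          /* only an idle device needs an explicit start */
        devstart(d);
}

void devinterrupt(struct outdev *d)        /* simulated transmit-done */
{
    d->head = (d->head + 1) % QSIZE;
    d->count--;
    d->sent++;
    d->busy = (d->count > 0);  /* keep going while packets remain queued */
}

/* Three packets enqueued back-to-back require exactly one start. */
int outq_demo(void)
{
    struct outdev d = {0};
    netoutput(&d, 1); netoutput(&d, 2); netoutput(&d, 3);
    while (d.busy)
        devinterrupt(&d);
    return d.started == 1 && d.sent == 3;
}
```

The demo shows the key property: bursts of output cost one device start, because subsequent packets are picked up at interrupt time.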
TCP output is complex: connections must be established and terminated, data must be acknowledged, and segments must be retransmitted on timeout. Because TCP is stream oriented, it must also decide how to divide the outgoing stream into segments. The software uses two TCP processes to handle the complexity: the first handles input, and the second, tcpout, handles output. The tcpout process uses a port to synchronize input from multiple processes: application processes deposit messages in the port when they produce data, and the timer process deposits a message in the port whenever a timer expires and TCP needs to retransmit a segment. Once TCP produces a segment and places it in a datagram, it passes the datagram to IP; IP chooses a network interface over which the datagram must be sent and passes the datagram to the corresponding network output queue. Although it is possible for two applications on a given machine to communicate through TCP, we assume that such cases are rare. A later chapter reviews the exact details of the events and TCP's responses.

In summary, the design uses the process abstraction to isolate pieces of protocol software, and using queues isolates processing from network transmission.
To summarize the example design: the system has a process for IP, processes for TCP input and output, and a process for each application program; each process executes independently. The operating system provides a semaphore mechanism that processes use for mutual exclusion (i.e., to guarantee that only one process manipulates a shared data structure at a time). The operating system also provides a port mechanism that allows processes to send messages to one another through a finite queue; the port mechanism uses semaphores internally to coordinate the processes that use the queue, so if a process attempts to send a message to a port that is full, it blocks until space becomes available.

Processes implementing protocols use both conventional queues and ports to pass packets among themselves. The network device drivers enqueue all incoming packets that carry IP datagrams on queues for the IP process; IP extracts packets from the queues and demultiplexes them. For TCP output, an application program deposits data in a buffer and then informs the TCP output process that new data is waiting to be sent. For UDP output, the datagram travels through the protocol software more directly, because UDP does not guarantee reliable delivery and keeps no per-connection state.

Note that the IP process is a central part of the design: the software for input and output both share a single IP process. Input to the device queues occurs asynchronously with processing, and output from the device queues is started at interrupt time.
Exercises:

1. Assume that the system assigns a process to each datagram that arrives or that local applications generate, and that the process follows the datagram through the protocol software until it can be sent or delivered. What is the chief advantage of such a design? The chief disadvantage?

2. Consider a protocol software design that uses a large number of processes to handle packets. Why do protocol implementors try to minimize the number of processes that protocols use?

3. Compare two designs: one that passes packets through ports and one that uses conventional queues. What are the main advantages and disadvantages of each?

4. If the system described in this chapter executes on a computer in which the CPU is slow compared to the speed at which the network hardware can deliver packets, what happens to arriving traffic?

5. Read more about software interrupts and sketch the design of a protocol implementation that uses software interrupts instead of processes.

For further study, Comer shows how processes and ports can be used for simple protocols like UDP, and provides a detailed description of the example system.

This chapter examines the lowest layer of protocol software: the network interface. Chapter 4 discusses address resolution and encapsulation.
This chapter shows how device driver and interface software can be organized to allow higher layers of protocol software to recognize and control multiple network hardware interfaces attached to a single machine. It also considers buffer management and packet demultiplexing. We have chosen to omit the network device driver code itself because it contains many low-level details that can only be understood completely by someone intimately familiar with the particular network hardware devices.

The idea is simple: the network interface abstraction defines the interface between protocol software in the operating system and the underlying hardware. It hides hardware details and allows protocol software to interact with a variety of network hardware using the same data structures. In our example code, the system keeps an array of interface structures, nif; items in the interface array are known throughout the system by their index in the array.
Structure netif defines the contents of each element in nif. Its fields define all the data items that protocol software needs (for example, an IP address of type IPaddr and a hardware address of type Eaddr) as well as variables used to collect statistics. Because the length of a physical address depends on the underlying hardware, each hardware address begins with an integer length field followed by the address octets themselves. Fields with names that end in valid contain Boolean variables that tell whether other fields are valid. The device driver software places arriving datagrams for the IP process in a queue associated with the interface; to extract the next datagram, IP removes it from that queue.
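A pared-down version of the netif idea can be sketched as follows. The field names loosely follow the text, but the selection of fields, the sizes, and the helper function are illustrative assumptions; the real structure contains many more fields:

```c
#include <stdint.h>

/* Simplified sketch of the interface array described above: one entry
 * per interface, holding the addresses and statistics protocol
 * software needs.  Not the actual Xinu declaration. */
#define NIF    3      /* number of interfaces, incl. the pseudo-interface */
#define MAXHWA 14     /* longest physical address supported               */

struct hwaddr {
    int     ha_len;              /* length of this hardware address,
                                  * because it depends on the hardware */
    uint8_t ha_addr[MAXHWA];     /* the address octets themselves      */
};

struct netif {
    int      ni_valid;           /* is this entry in use?              */
    int      ni_ivalid;          /* is ni_ip valid? ("valid" suffix)   */
    uint32_t ni_ip;              /* interface IP address               */
    struct hwaddr ni_hwa;        /* interface hardware address         */
    int      ni_mtu;             /* maximum transfer unit              */
    long     ni_ipackets;        /* statistics: packets received       */
    long     ni_opackets;        /* statistics: packets sent           */
};

struct netif nif[NIF];           /* entries are known by array index   */

/* Interfaces are referenced by index throughout the system. */
int nif_index(struct netif *p) { return (int)(p - nif); }
```

Passing the small integer index, rather than a pointer, is what lets other data structures (such as ARP cache entries) record which interface a binding came from.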
In addition to forwarding datagrams among networks, IP must also route datagrams to and from higher-level protocol software on the local computer. Our design uses a pseudo-network interface for the local machine: an entry in the interface array that has the same structure as the others but no associated device driver routines. Using a pseudo-network interface for the local machine has several advantages, chief among them that delivery to the local machine need not be a special case in the IP code. It is also important to separate the logical state of an interface from the status of the physical hardware, because doing so allows a manager freedom to control the interface's operation.

The ultimate efficiency of protocol software depends on how it manages the memory used to hold packets. A good design allocates space quickly and avoids copying data as packets move between layers of protocol software. In practice, IP may need to store datagrams larger than the underlying network packet sizes (e.g., datagrams awaiting reassembly).
Most systems allocate fixed-size buffers because doing so prevents memory fragmentation and guarantees high memory utilization. The choice of the buffer size, D, is a tradeoff between allowing large datagrams and having sufficient buffers for the expected traffic; D depends on the expected size of buffer memory as well as the expected use of the system.

An alternative is a linked-list design. Berkeley UNIX uses a linked structure known as the mbuf: a packet is stored as a chain of buffers, and individual mbufs need not be completely full. Permitting buffers on the linked list to contain partial data has another advantage: when a layer of software receives a message from a higher layer, it can prepend a header by linking a new buffer onto the front of the chain instead of copying the entire packet.

Hardware requirements often complicate buffer management. Some devices can transmit or receive packets in noncontiguous memory locations: they gather pieces of the packet from buffers on the list, a technique known as gather-write. These techniques can also be used with fixed-size buffers because they allow the driver to encapsulate a datagram without copying it. Other devices require packets to reside in a special DMA area, so to make buffer processing uniform, the system must copy outgoing data from conventional buffers to the DMA area before transmission.
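The chained-buffer idea, a packet as a list of partially-filled buffers that software "gathers" by walking the chain, can be sketched as follows. The structure here is a deliberately simplified illustration, not the real BSD mbuf layout:

```c
#include <stddef.h>

/* Sketch of a linked-buffer chain in the spirit of the mbuf mentioned
 * above: a packet is a chain of buffers, each only partially full. */
struct buf {
    struct buf *b_next;   /* next buffer holding part of this packet */
    int         b_len;    /* bytes of data actually present          */
    char        b_data[128];
};

/* Total packet length: sum the valid bytes in every buffer on the
 * chain.  Buffers need not be full, so a header can be prepended by
 * linking a new buffer on the front without copying the payload. */
int chain_len(const struct buf *bp)
{
    int len = 0;
    for (; bp != NULL; bp = bp->b_next)
        len += bp->b_len;
    return len;
}

/* A 20-byte header buffer linked in front of a 100-byte payload. */
int chain_demo(void)
{
    struct buf payload = {0};
    payload.b_len = 100;
    struct buf header = {0};
    header.b_next = &payload;
    header.b_len = 20;
    return chain_len(&header);
}
```

The demo mirrors encapsulation without copying: linking a header buffer onto the front grows the packet to 120 bytes while the payload bytes never move.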
Our implementation divides buffers into two groups: small buffers for ordinary packets and large buffers for large datagrams. This design was chosen because we expect most datagrams to be smaller than a conventional network MTU. To allocate a buffer, software requests one from the appropriate group; each buffer is self-identifying, recording the group from which it was allocated. The advantage of having the buffer be self-identifying is that protocol software can pass along a pointer to the buffer without having to remember whether it was allocated from the large or small group; to return the buffer to the free list, the software simply calls the deallocation routine, which consults the buffer itself. On output, once a device has transmitted a packet, the driver returns the buffer the same way.

In a computer system that supports paged virtual memory, buffer management can use page mapping to move data between buffers without physically copying it; the technique is especially useful on machines with small page sizes (e.g., the VAX architecture or Sun Microsystems' Sun 3 architecture). In general, a designer also chooses between table-driven and in-line implementations: using a table makes experimentation easier, while using code is often more efficient.

To summarize this chapter: the netif structure defines the information kept for each network interface and contains all information pertinent to the interface; the interface abstraction hides hardware details; and buffer management routines allocate and free the memory that holds packets, whether they carry IP datagrams or ARP and RARP messages.
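The self-identifying buffer scheme can be sketched in C: each buffer carries a hidden header naming its pool, so the free routine needs no hint from the caller. Pool sizes, names, and the counter stand-ins for real free lists are all illustrative assumptions:

```c
#include <stdlib.h>
#include <stddef.h>

/* Sketch of the self-identifying buffer idea described above: each
 * buffer records which pool (large or small) it came from, so the
 * free routine can return it without the caller remembering. */
enum pool { SMALL_POOL, LARGE_POOL };

struct sbuf {
    enum pool b_pool;     /* which free list owns this buffer */
    char      b_data[];   /* payload follows the header       */
};

static int freed_small, freed_large;   /* stand-ins for free lists */

void *getbuf(int want_large)
{
    size_t len = want_large ? 1500 : 128;   /* illustrative sizes */
    struct sbuf *b = malloc(sizeof *b + len);
    b->b_pool = want_large ? LARGE_POOL : SMALL_POOL;
    return b->b_data;                  /* caller sees only the data */
}

void freebuf(void *data)
{
    /* Step back from the data to the hidden header to learn the pool. */
    struct sbuf *b =
        (struct sbuf *)((char *)data - offsetof(struct sbuf, b_data));
    if (b->b_pool == LARGE_POOL) freed_large++; else freed_small++;
    free(b);
}

/* One buffer from each pool is returned to the right place. */
int buf_demo(void)
{
    freed_small = freed_large = 0;
    void *a = getbuf(0), *b = getbuf(1);
    freebuf(a);
    freebuf(b);
    return freed_small == 1 && freed_large == 1;
}
```

Because only a pointer to the data area circulates through the protocol layers, any layer can free a buffer correctly, which is what keeps buffer handling uniform across IP, TCP, and the drivers.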
Exercises:

1. Find a hardware architecture manual that describes DMA memory. How does a device driver use DMA memory for buffers?

2. Read about the BSD mbuf structure. Why does the header contain two pointers to other mbuf nodes?

3. What statistics are specified for keeping on each network interface? Does our interface structure contain a field for each of them?

4. Find a hardware description of the Lance Ethernet interface device. Is it possible to enqueue multiple packets for transmission? If so, how does the maximum enqueued amount compare to the network MTU?

This chapter examines ARP software, which also resides in the network interface layer. Later chapters that discuss higher-layer protocols illustrate clearly how ARP insulates those layers from hardware addresses.
We said that address binding is part of the network interface layer. ARP binds high-level (IP) addresses to low-level (physical) addresses. Address binding software thus forms a boundary between the higher layers of protocol software, which use only IP addresses, and the device drivers, which communicate directly with the network hardware and use only physical network addresses when transmitting and receiving packets. The ARP software has two main pieces: an output procedure that, when IP is sending a datagram, returns the binding for the destination address, and an input module that handles ARP packets arriving from the network. Although the ARP software maintains an address mapping that binds IP addresses to hardware addresses, entries cannot remain valid forever; discarding an entry forces the ARP software to use the network to obtain a new binding from the destination machine.
It is important to consider the design of ARP software carefully and to include all aspects of the protocol. Many implementations fail to interpret the protocol specification correctly, and other implementations supply incorrect bindings because they eliminate cache timeout in an attempt to improve efficiency. The choice between using a single cache and multiple caches only makes a difference for gateways or multi-homed hosts that have multiple network connections. The next sections discuss the design and the data structures used to implement it.

Cache timeout and removal work as follows: when an entry is added to the cache, or whenever an entry is revalidated, the ARP software initializes the time-to-live field on the entry. Our cache policy also specifies that if a new binding must be added to the cache after it is already full, an existing entry must be removed to make room.
It is important to revalidate entries after they remain in the ARP cache for a fixed time, because bindings can change as time proceeds; in our design, removal from the cache is independent of the frequency with which an entry is used. Before reviewing the procedures that implement ARP, we summarize the few simple design rules our example software follows. A single physical cache holds entries for all networks; the alternative is a multiple-cache scheme that keeps a separate ARP cache for each network interface. Because we use one cache, each entry contains a field that specifies the network from which the binding was obtained. The cache manager implements the cache replacement policy. In essence, our design uses a global replacement policy, in which a new binding may replace any binding in the cache; the alternative is a local replacement policy, in which a new binding can only replace a binding from the same network.
A shared, global cache raises two issues: contention and exclusive access. While our design can behave poorly in the worst case, if most of the traffic during a given time interval involves only a few networks, a global cache performs well; if the traffic later shifts to a different set of networks, the cache gradually fills with the new bindings. The symptom of trouble is the same as for any poorly-tuned cache: excessive misses. Our design assumes that the manager will monitor performance problems and allocate additional cache space when they occur.

Exclusive access is guaranteed simply: procedures that operate on the cache (e.g., those that insert or look up entries) disable interrupts and avoid context switching, so only one process accesses the ARP cache at any time. Our design also allows multiple outstanding packets to be enqueued waiting for an address to be resolved: each entry in the ARP cache has a queue of outgoing packets destined for the address in that entry, and when an ARP reply arrives that contains the needed hardware address, the software transmits the packets on the queue.

Array arptable forms the global ARP cache; for entries that have not yet been resolved, the entry holds the packet queue but no hardware address. Structure arp defines the format of an ARP packet.
Because the size of each address field can be determined from information in the fixed fields of the header, in-line functions such as SHA (sender hardware address) locate the variable-length address fields in an ARP packet; each function takes a single argument that gives the address of an ARP packet. For technologies like Ethernet, the address lengths are fixed and well known.

Recall that our design places all ARP bindings in a single table. Arpfind searches the ARP cache sequentially until it finds an entry that matches the specified address, and returns a pointer to the entry. Its argument pni insures that arpfind will select only bindings that correspond to the correct network interface.
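The lookup just described can be sketched in C. The table size and field names below are illustrative assumptions in the spirit of arpfind, not the book's declarations; the essential points are the sequential search and the interface-index check:

```c
#include <stdint.h>
#include <string.h>

/* Sketch of arpfind: search the single global cache sequentially for
 * an entry matching both the protocol (IP) address and the interface
 * index, so bindings from different networks never mix. */
#define ARPSIZE 16

struct arpentry {
    int      ae_state;    /* 0 = free, 1 = pending, 2 = resolved */
    int      ae_pni;      /* index of the interface it came from */
    uint32_t ae_pra;      /* protocol (IP) address               */
    uint8_t  ae_hwa[6];   /* hardware address, if resolved       */
};

struct arpentry arptable[ARPSIZE];

struct arpentry *arpfind(uint32_t pra, int pni)
{
    for (int i = 0; i < ARPSIZE; i++)
        if (arptable[i].ae_state != 0 &&
            arptable[i].ae_pra == pra &&
            arptable[i].ae_pni == pni)
            return &arptable[i];
    return NULL;          /* no binding cached for this address */
}

/* The same IP address on a different interface must not match. */
int arpfind_demo(void)
{
    memset(arptable, 0, sizeof arptable);
    arptable[3].ae_state = 2;
    arptable[3].ae_pra   = 0x0a000001;  /* 10.0.0.1 */
    arptable[3].ae_pni   = 1;
    return arpfind(0x0a000001, 1) == &arptable[3] &&
           arpfind(0x0a000001, 2) == NULL;
}
```

Carrying the interface index in each entry is what makes a single shared table safe on a gateway: the same IP address reachable through two interfaces yields two distinct entries.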
Procedure arpsend builds and broadcasts an ARP request. After allocating a buffer to hold the packet and assigning the hardware and protocol address length fields, arpsend fills in the remaining fields of the ARP request and transmits it.

Procedure netwrite handles outgoing packets, and its code is much simpler than it appears. Because netwrite must return to its caller without delay, it never waits for address resolution. It consults the ARP cache: if the entry has been resolved, it fills in the hardware address and transmits the packet. If the entry has not been resolved and is not pending, it creates a new entry, fills in the fields in the ARP entry, assigns the entry's time-to-live field and the maximum timeout value, and broadcasts a request. For an unresolved entry, netwrite first checks to see if a packet queue exists, creates one if one is needed, and enqueues the packet. Each output queue has a finite size; if the queue is full when netwrite needs to enqueue a packet, the packet is discarded.
Arpadd takes information from an ARP packet that has arrived over the network and records the binding in the cache, using the in-line functions (e.g., SHA(parp) and SPA(parp)) to extract the sender's hardware and protocol addresses; because it fills in both the hardware and protocol address fields, the entry becomes resolved. When an ARP packet arrives that contains information needed to resolve an entry, the software sends the packets waiting on that entry's queue; once it has removed all packets, the queue is empty. If resolution ultimately fails, ARP discards the queued packets.

The input procedure processes arriving ARP packets, using THA(parp) to examine the target hardware address. If packets arrive containing other address types than those the machine uses, ARP discards them. When processing a valid packet, the receiver follows the protocol specification: it should first use incoming requests to satisfy pending entries (i.e., extract the sender's binding), and the protocol also specifies that if the incoming packet contains a request directed at the local machine, the receiver forms and transmits a reply.
When a reply must be sent, the software transmits it directly; when a new binding arrives for an unknown address, the software must create a new cache entry, and after inserting the entry in the cache it can resume normal processing. Cache management also requires periodic computation independent of either input or output. Procedure arpalloc chooses an entry in the ARP cache that will be used for a new binding. The next sections explain the cache policy and show how the software enforces it. In considering an ARP cache policy, one must beware of pathological behavior: if a datagram transmission causes the system to insert a new binding in the cache, and that insertion deletes a binding still in use, ARP will soon delete yet another binding and broadcast yet another request.
We have chosen a simple replacement policy. When allocating space for a new addition to the ARP cache, arpalloc first looks for unused space; if no unused space remains in the cache, it preempts an existing entry in round-robin order. To implement the preemption policy, the for-loop in arpalloc searches the entire table, and the round-robin pointer cycles around the table completely before returning to an entry. When the old entry is preempted, any packets queued on it are discarded. In the worst case, when the cache is far too small for the traffic, ARP will broadcast a request each time it needs to deliver a datagram; we assume that a system manager will monitor for and detect such situations.

Timeout is handled by arptimer, which uses the elapsed time to "age" entries in the cache. It iterates through each entry and decrements the time-to-live field in the entry by gran, the number of seconds that have elapsed since the previous call. If the time-to-live becomes zero or negative, arptimer removes the entry.
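The aging pass arptimer performs can be sketched as follows. This is an illustration of the policy in the text, not the book's exact code; the table size and field names are assumptions:

```c
/* Sketch of arptimer's aging pass: every entry's time-to-live is
 * decremented by the elapsed time (gran); entries that reach zero
 * are freed, forcing a fresh binding to be obtained from the net. */
#define ARPTAB 16

struct aentry { int state; int ttl; };   /* state: 0 = free, 1 = in use */
struct aentry atab[ARPTAB];

void arptimer(int gran)                  /* gran: seconds since last call */
{
    for (int i = 0; i < ARPTAB; i++) {
        if (atab[i].state == 0)
            continue;                    /* nothing to age */
        atab[i].ttl -= gran;
        if (atab[i].ttl <= 0)
            atab[i].state = 0;           /* expired: force revalidation */
    }
}

/* After 30 elapsed seconds, a 10-second entry expires and a
 * 90-second entry survives with 60 seconds remaining. */
int arptimer_demo(void)
{
    for (int i = 0; i < ARPTAB; i++)
        atab[i].state = 0;
    atab[0].state = 1; atab[0].ttl = 10;
    atab[1].state = 1; atab[1].ttl = 90;
    arptimer(30);
    return atab[0].state == 0 && atab[1].state == 1 && atab[1].ttl == 60;
}
```

Decrementing by the elapsed interval, rather than by one per tick, is what lets a single, occasionally-run timer process serve many protocols at once.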
If the time-to-live expires on an entry that is pending resolution, the entry is removed and the packets queued on it are discarded, because the destination did not respond. Procedure arpinit initializes the ARP data structures and creates rarpsem, the semaphore used by RARP. Note that arpinit does not initialize the timer process or set up calls to arptimer; those details are handled separately because our design uses a single timer process for many protocols.

To summarize: our design keeps all ARP bindings in a single cache, uses a round-robin replacement policy if the cache is completely full when a new entry must be inserted, ages entries with a time-to-live, and queues outgoing packets for unresolved entries. For installations in which managers need more control, the parameters of the cache policy can be adjusted.
The example code shows the declarations of the data structures that comprise the cache and the procedures that operate on them.

Exercises:

1. What is the chief disadvantage of using a single table to hold the ARP cache in a gateway? What is the chief advantage?

2. Read about addressing for bridged token ring networks. Should ARP use the local ring broadcast address or the all-rings broadcast address? Would our implementation operate correctly on a gateway that connected multiple rings?

3. Would users perceive any difference in performance if the ARP software did not allow multiple packets to be enqueued for a pending ARP binding?

4. ARP is especially susceptible to "spoofing" because an arbitrary machine can answer an ARP broadcast. Revise the example software by adding checks that detect when two or more machines answer a request for a given IP address. As an alternative solution, Parr's RFC considers fault-tolerant address resolution; what are the advantages and disadvantages of each approach?

5. What network hardware uses ARP? Sketch the design of address binding software for a network interface that does not use ARP.

6. Examine the Ethernet device driver software in an operating system. The Ethernet hardware specification enforces a minimum packet size of 60 octets; how does the driver send an ARP packet, which is smaller?

This chapter presents IP data structures and describes the overall software organization. It discusses the conceptual operation of IP software and the flow of datagrams through the IP layer; later chapters fill in the details. IP is a central switching point in the protocol software: in a host, it routes datagrams between the network interfaces and higher-level protocols. To help explain IP without becoming overwhelmed with all the parts at once, we begin with its global organization.
While the functionality IP provides may seem simple, IP software must work in both hosts and gateways, and the details are subtle despite their intuitive appeal. The IP software executes as a single, self-contained process. It accepts incoming datagrams from the network interface software as well as outgoing datagrams that higher-level protocols generate. The IP algorithm routes each datagram, with the output part using the local routing table to choose a next hop; after routing a datagram, IP either sends it to one of the network interfaces or to a higher-level protocol on the local machine. Organizing IP as a single process allows us to create IP software that does not depend on hardware interrupts or cross-layer procedure calls.

Recall that the local interface has the same structure as other network interfaces. To avoid making delivery to the local machine a special case, IP views all routing as uniform and symmetric: when a conventional network interface receives a datagram, it enqueues it for IP exactly as the local interface does, and IP may generate output for any interface, including the local one, while handling an incoming datagram. In the discussion that follows, we examine the global software organization and then the code.