Comprehensive Guide: Computer Networking Foundations (Topologies and Reference Models)
I. Computer Networking
The overall objective of this course is to provide knowledge of computer network hardware and software using a layered architecture.
| Component | Content |
|---|---|
| Definition | A computer network is a system where multiple computing devices are interconnected (physically or wirelessly) via communication links to share resources, data, and services. |
| Explanation | A computer network is like a neighborhood telephone system. Everyone has a phone (device), and wires (links) connect them so they can talk (share data) and maybe share a single, big printer (resource). |
| Bookish Explanation | Computer Networks involve the interconnection of autonomous computing devices using various transmission media to enable resource sharing, distributed processing, and effective communication between users and applications. |
II. Network Topologies
Network Topologies define the geometric arrangement of the communication links and networking devices in a computer network.
Bookish Explanation (8–12 Marks Detail)
Network topology dictates how data flows, how robust the network is, and how difficult it is to manage. Common types of network topologies, listed in the syllabus, include:
| Topology | Description | Advantages (A) & Disadvantages (D) |
|---|---|---|
| 1. Bus Topology | All devices are connected to a single central cable (the backbone). Data travels across this cable, and nodes check if the data is addressed to them. | A: Simple, cheap, requires less cable. D: Single point of failure (if the backbone breaks, the entire network fails). Difficult to troubleshoot. |
| 2. Ring Topology | Devices are connected circularly, forming a closed loop. Data flows unidirectionally (in one direction) or bidirectionally. | A: Consistent data transfer rate. D: Failure of a single node can take down the whole network. Adding or removing devices disrupts the network. |
| 3. Star Topology | The most common modern topology. All nodes connect to a central hub, switch, or router. | A: Easy to install, manage, and isolate faults (if one link fails, only that node is affected). High performance. D: Requires more cable. The central device (hub/switch) is a single point of failure. |
| 4. Mesh Topology | Every device is connected directly to every other device (fully connected mesh) or partially connected. | A: Extremely reliable (many redundant paths), robust, high security. D: Very complex and expensive due to excessive cabling and installation difficulty. |
| 5. Tree Topology | A hierarchical structure that combines characteristics of Bus and Star topologies. Multiple star networks are connected to a main bus cable. | A: Centralized management, easy expansion. D: Dependent on the main cable (bus) like a Bus topology. |
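To make the cabling cost trade-offs in the table concrete, the following is a minimal Python sketch using the standard textbook link-count formulas (the device counts are chosen purely for illustration). Note how the full mesh grows quadratically, which is why it is described as expensive.

```python
# Minimal sketch: approximate link counts for n devices under common topologies.
def links_required(n: int) -> dict:
    return {
        "bus": 1,                       # one shared backbone cable (plus short drop lines)
        "ring": n,                      # each device connects to its two neighbours
        "star": n,                      # one link per device to the central switch
        "tree": n - 1,                  # hierarchical: one link per non-root device
        "full_mesh": n * (n - 1) // 2,  # every pair of devices directly connected
    }

for nodes in (5, 10, 50):
    print(nodes, "devices:", links_required(nodes))
# For 50 devices, a full mesh already needs 1225 links versus 50 for a star.
```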
Diagram/Mind Map Suggestion
Mind Map Focus: Use a central bubble for "Topologies" and branch out to the 5 main types (Bus, Ring, Star, Mesh, Tree). Under each branch, use color-coding: Green for "Pros" and Red for "Cons."
Diagram Focus (Crucial for 8–12 Marks): Sketch the simple layout for Star (A central switch connecting four PCs) and Bus (A long cable with four PCs dropping off).
III. Reference Models: OSI and TCP/IP
The OSI (Open Systems Interconnection) Reference Model and the TCP/IP (Transmission Control Protocol/Internet Protocol) Model are theoretical frameworks that define how communication should happen across networks using a layered architecture.
A. OSI Reference Model
| Component | Content |
|---|---|
| Definition | The OSI Model is a seven-layer conceptual framework used to standardize and describe the functions of a telecommunication or computing system regardless of the underlying technology. |
| Explanation | Imagine sending a letter. The OSI Model is like a postal sorting system with seven steps. Step 7 (Application) is writing the letter. Step 1 (Physical) is the delivery truck driving it down the road. Every step has a specific job to ensure the letter arrives correctly. |
| Bookish Explanation | The OSI Model organizes network communication into seven distinct layers, ensuring interoperability between diverse systems. Each layer performs a specialized function and interacts only with the layer directly above it and the layer directly below it. This modular design simplifies system development and troubleshooting. |
Functions of the 7 Layers (Bookish Detail)
| Layer | Name | Function (What it does) | Protocol Data Unit (PDU) |
|---|---|---|---|
| Layer 7 | Application | Provides interface between user applications and the network. (e.g., HTTP, SMTP, FTP, DNS) | Data |
| Layer 6 | Presentation | Handles data format conversion, encryption, decryption, and compression. (e.g., JPEG, MPEG, ASCII) | Data |
| Layer 5 | Session | Establishes, manages, and terminates connections (sessions) between applications. | Data |
| Layer 4 | Transport | Provides reliable (TCP) or unreliable (UDP) end-to-end data delivery; segmentation and reassembly. | Segments / Datagrams |
| Layer 3 | Network | Handles logical addressing (IP addressing) and routing of data packets across different networks. | Packets |
| Layer 2 | Data Link | Provides error detection and correction (CRC, Hamming code), framing, and physical addressing (MAC address). | Frames |
| Layer 1 | Physical | Defines the electrical and physical specifications for data transmission across the media (transmission media: twisted pair, fiber optics, etc.). | Bits |
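The layered hand-off described in the table can be pictured as encapsulation: each layer wraps the PDU it receives from the layer above. The following is a minimal Python sketch with placeholder header strings (not real protocol formats) purely to illustrate how a segment is nested inside a packet, which is nested inside a frame.

```python
# Minimal sketch of OSI-style encapsulation: each layer prepends its own
# header, and the Data Link layer also appends a trailer. The header strings
# are illustrative placeholders, not real protocol formats.
def encapsulate(payload: str) -> str:
    segment = "[TCP hdr]" + payload                  # Layer 4: segment
    packet  = "[IP hdr]"  + segment                  # Layer 3: packet
    frame   = "[MAC hdr]" + packet + "[CRC trailer]" # Layer 2: frame
    return frame                                     # Layer 1 transmits the bits

print(encapsulate("GET /index.html"))
# [MAC hdr][IP hdr][TCP hdr]GET /index.html[CRC trailer]
```

On the receiving side, each layer strips its own header in reverse order (decapsulation) before handing the payload upward.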
B. TCP/IP Reference Model
| Component | Content |
|---|---|
| Definition | The TCP/IP Model is a practical, four/five-layer protocol suite that forms the basis of the modern internet. It was developed to define and implement network standards. |
| Explanation | If the OSI Model is the detailed blueprint, the TCP/IP Model is the functional building—it's what we actually use. It combines some steps into bigger departments to make things work faster. |
| Bookish Explanation | The TCP/IP model is based on four layers (sometimes presented as five) which map generally to the OSI model. It is a protocol-centric model, meaning it was built around the core protocols (TCP and IP) that drive internet communication. It is highly robust and flexible, enabling communication across vastly different physical networks. |
Layers of the TCP/IP Model (Bookish Detail)
| TCP/IP Layer (4-Layer Model) | OSI Model Equivalent | Function |
|---|---|---|
| 4. Application Layer | Application, Presentation, Session | Provides high-level protocols for user services (e.g., DNS, HTTP, E-mail/SMTP, FTP). |
| 3. Transport Layer | Transport | Manages reliable (TCP) and unreliable (UDP) communication between hosts. |
| 2. Internet Layer (Network) | Network | Defines the logical structure of the network and handles routing using IP addresses. |
| 1. Network Access Layer | Data Link, Physical | Handles all the physical details necessary to interface with the transmission medium (hardware/drivers). |
C. Comparison of OSI with TCP/IP Model
Comparing the models is critical for an 8–12 mark answer.
| Feature | OSI Model | TCP/IP Model |
|---|---|---|
| Layers | 7 layers | 4 (or 5) layers |
| Nature | Theoretical model, defining services, interfaces, and protocols distinctly. | Practical, functional model, primarily built on protocols. |
| Development | Developed by ISO before protocols were invented. | Developed by the U.S. DoD; protocols were developed first, then the model. |
| Layer Combination | Layers 5, 6, 7 are distinct (Session, Presentation, Application). | Layers 5, 6, 7 are combined into a single Application Layer. |
| Standard | Used as a detailed guide/reference point. | Used as the actual standard for the Internet. |
| Transport Layer | Connection-oriented service is mandatory. | Supports both connection-oriented (TCP) and connectionless (UDP) services. |
Diagram Suggestion (Crucial for 8–12 Marks)
Draw a two-column comparison diagram. List the 7 OSI layers vertically in the first column. In the second column, draw brackets showing how the 4 TCP/IP layers map directly onto the OSI layers:
- TCP/IP Application Layer encompasses OSI Layers 7, 6, and 5.
- TCP/IP Transport Layer aligns with OSI Layer 4.
- TCP/IP Internet Layer aligns with OSI Layer 3.
- TCP/IP Network Access Layer encompasses OSI Layers 2 and 1.
Summary Analogy for Reference Models
Think of the OSI Model as a highly detailed instruction manual for assembling a complex engine—it has steps for every single screw and component (7 layers).
The TCP/IP Model is the actual, working engine built from that manual, where several smaller steps have been merged for efficiency and practicality (4 layers).
Switching Techniques: Circuit, Message, and Packet Switching
Switching refers to the methodology used to move data across the various devices and links in a network from the source to the destination.
I. Circuit Switching
| Component | Content |
|---|---|
| Definition | A switching technique where a dedicated communication path (circuit) is established between the sender and receiver before any data transfer begins, and this path remains reserved exclusively for them throughout the duration of the communication. |
| Explanation | This is like reserving a direct, private railway track between two cities for the entire day. No one else can use that track, even if your train is stopped or running empty. You get guaranteed speed and quality, but it's expensive and wasteful if you only use it for a few minutes. |
| Bookish Explanation (8–12 Marks Detail) | Circuit switching is traditionally used in public switched telephone networks (PSTN). It involves three phases: 1. Circuit Establishment (Reservation), 2. Data Transfer, and 3. Circuit Disconnect (Release). Since the bandwidth is reserved and dedicated, it guarantees constant data rate and quality of service (QoS) once the connection is made. However, resources remain blocked and unusable by others during idle times, leading to poor network efficiency and high call setup latency. Circuit switching operates primarily at the Physical layer (Layer 1) of the OSI model. |
II. Message Switching
| Component | Content |
|---|---|
| Definition | A switching technique in which data is transmitted in the form of complete, independent messages. Each switch receives the entire message, stores it temporarily, and then forwards it to the next node. This uses a store-and-forward mechanism. |
| Explanation | This is like sending a heavy, complete letter through an old post office system. The post office holds the whole letter (the message) until the next stage of delivery is free. If the next office is busy, the letter sits and waits. The entire message must arrive before it can move on. |
| Bookish Explanation (8–12 Marks Detail) | In message switching, there is no direct connection between the source and destination. The intermediate switching nodes must have enough buffer space (memory) to hold the entire message while deciding the next hop. This technique uses channel bandwidth very efficiently but suffers from highly variable and potentially massive latency (delay), especially for large messages, as the receiving switch must wait for the entire message to arrive before forwarding. Due to the high storage requirements and unpredictable delays, message switching is now largely considered obsolete and has been replaced by packet switching. |
III. Packet Switching
| Component | Content |
|---|---|
| Definition | A switching technique where data is broken down into small units called packets, each limited to a maximum size. These packets travel independently through the network, possibly taking different routes, and are reassembled at the destination. |
| Explanation | This is like breaking up your big suitcase into many small, light, labeled boxes. You send the boxes individually using any available route (road, air, rail). They might arrive out of order, but they all eventually reach the destination, where they are put back together to re-form the suitcase. |
| Bookish Explanation (8–12 Marks Detail) | Packet switching is the foundation of the modern internet. It is highly efficient because multiple users can share the same transmission link (dynamic bandwidth allocation). It offers greater fault tolerance because if one path fails, packets can be rerouted through another path. There are two primary approaches: Datagram Packet Switching (each packet is treated independently, like the post office analogy) and Virtual Circuit Packet Switching (a logical path is established before data transfer, combining benefits of circuit switching and packet switching). Packet switching operates primarily at the Network layer (Layer 3) and Data Link Layer (Layer 2) of the OSI model. |
IV. Comparison of Switching Techniques
The syllabus specifically requires the comparison of these techniques.
| Feature | Circuit Switching | Message Switching | Packet Switching |
|---|---|---|---|
| Connection Setup | Required (Dedicated path) | Not Required | Not Required (Datagram) or Required (Virtual Circuit) |
| Resource Allocation | Resources (bandwidth) are reserved and dedicated. | Dynamic, shared resource (Store-and-forward) | Dynamic, shared resource (Bandwidth used only when transmitting) |
| Efficiency/Utilization | Low (Wasted bandwidth during idle time) | High (But high delay) | Very High (Efficient resource sharing) |
| Delay | Low transmission delay (once connection is set up) | High and variable delay | Low transmission delay (compared to Message Switching) |
| Data Unit | Continuous flow of data | Entire Message | Packets |
| Real-Time Use | Best suited (e.g., voice, video conferencing) | Poorly suited | Suitable (though requires QoS management) |
| Overhead | High setup overhead | Storage overhead at intermediate nodes | Routing and addressing overhead per packet |
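The delay differences in the table can be illustrated numerically. The sketch below uses a simplified textbook model (equal link rates, propagation delay and packet headers ignored, and an assumed fixed circuit setup time); the specific numbers are illustrative, not taken from the syllabus.

```python
# Minimal sketch comparing end-to-end transfer delay over k store-and-forward
# hops. Assumptions: every link runs at R bits/s, propagation delay and headers
# are ignored, and circuit setup takes a fixed 'setup' time.
def circuit_delay(msg_bits, rate, hops, setup=0.5):
    # hops add no store-and-forward delay on a dedicated circuit
    return setup + msg_bits / rate

def message_delay(msg_bits, rate, hops):
    return hops * (msg_bits / rate)           # whole message stored at every hop

def packet_delay(msg_bits, pkt_bits, rate, hops):
    n_packets = -(-msg_bits // pkt_bits)      # ceiling division
    # first packet crosses all hops; the rest follow, pipelined behind it
    return hops * (pkt_bits / rate) + (n_packets - 1) * (pkt_bits / rate)

R, M, P, K = 1_000_000, 8_000_000, 10_000, 4  # 1 Mbps links, 1 MB message, 4 hops
print("circuit :", circuit_delay(M, R, K), "s")   # ~8.5 s
print("message :", message_delay(M, R, K), "s")   # 32.0 s
print("packet  :", packet_delay(M, P, R, K), "s") # ~8.03 s
```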
Diagram/Mind Map Suggestion
Mind Map Focus: Create a central node "Switching Techniques." Branch out into three main bubbles: Circuit, Message, Packet. Under each branch, list the key differentiating features:
- Circuit: Reserved Path, High Overhead (Setup), Low Delay.
- Message: Store-and-Forward, Whole Message, High Delay.
- Packet: Shared Resources, Small Units, Dynamic Routing.
Diagram Focus (Crucial for 8–12 Marks):
- Circuit Switching: Draw two hosts connected by a series of switches/links, showing a thick, solid line reserved through the network (a permanent, fixed path).
- Packet Switching: Draw two hosts connected by a series of switches/links. Show the data broken into Packet 1 (Blue) and Packet 2 (Red). Show the Blue packet taking Route A and the Red packet taking a different Route B, meeting at the destination host, emphasizing dynamic, independent routing.
Unguided Transmission Media: Radio Waves and Microwave Transmission
Unguided media (or wireless transmission) refers to the transmission of electromagnetic waves without using a physical conductor (like copper wires or fiber optic cables). Instead, the atmosphere or outer space is the transmission medium.
I. Radio Wave Transmission
| Component | Content |
|---|---|
| Definition | Radio waves are electromagnetic waves operating in the frequency range of approximately 3 kHz to 1 GHz. They are characterized by their omni-directional property, meaning they propagate (spread out) in all directions from the transmitting antenna. |
| Explanation | Radio waves are like a loudspeaker in a park. When the speaker plays music, the sound spreads out everywhere, and people all around can hear it, even if they are behind a tree (an obstacle). This is why one radio tower can cover a huge area. |
| Bookish Explanation (8–12 Marks Detail) | Radio wave transmission uses relatively low-frequency electromagnetic waves. Their omni-directional nature makes them suitable for point-to-multipoint communication, such as broadcasting radio and early cellular networks. Because lower-frequency waves can penetrate walls and buildings and diffract around obstacles, a clear line-of-sight is not required between the transmitter and receiver. Advantages include easy installation, portability, and widespread coverage. Disadvantages include susceptibility to interference (noise) from other sources, dependence on regulatory spectrum allocation, and potentially high power requirements for long distances. |
Diagram/Mind Map Suggestion (Radio Waves)
- Visual Focus: Draw a central antenna with wide, spherical ripples radiating outward, symbolizing omni-directional propagation.
- Mind Map Focus: Node: Radio Waves
- Key Feature: Omni-directional
- Frequency: Low (3 kHz – 1 GHz)
- Use Case: AM/FM Radio, Early Cellular
- Pro: Wall Penetration
- Con: Interference
II. Microwave Transmission
| Component | Content |
|---|---|
| Definition | Microwave transmission utilizes high-frequency radio signals, typically ranging from 1 GHz to 300 GHz. Unlike radio waves, microwaves are unidirectional and require a clear, unobstructed line-of-sight (LOS) path between the transmitting and receiving antennas. |
| Explanation | Microwave transmission is like using a powerful, focused spotlight. You have to point the light directly at your friend’s receiver (the mirror) to send the signal. If a building gets in the way, the signal is blocked, but because the beam is focused, it’s much stronger and clearer than the sound from the loudspeaker. |
| Bookish Explanation (8–12 Marks Detail) | Microwave transmission is commonly categorized into two types: Terrestrial (Earth-based) and Satellite. Both require highly focused, parabolic dish antennas to concentrate the energy into a narrow beam, making them suitable for point-to-point communication. Due to the high frequency, the beams do not easily penetrate physical barriers like buildings or mountains, necessitating relays (repeaters) positioned at high elevation (e.g., towers or mountain tops) every few kilometers to maintain the line-of-sight connection and overcome Earth’s curvature. Advantages include high data rates (high bandwidth), low interference compared to lower frequency radio waves, and relatively low cost compared to laying fiber optics. Disadvantages include the strict requirement for line-of-sight, vulnerability to atmospheric conditions (rain fade), and the need for frequent repeater stations for long-distance terrestrial links. |
Diagram/Mind Map Suggestion (Microwave Transmission)
- Visual Focus: Draw two tall towers with opposing parabolic dishes (antennas) facing each other, connected by a thin, straight line labeled "Line of Sight." Draw a mountain or building blocking the path between them, showing the failure condition.
- Mind Map Focus: Node: Microwave Transmission
- Key Feature: Unidirectional
- Frequency: High (1 GHz – 300 GHz)
- Use Case: Terrestrial backbone, Satellite links
- Pro: High Bandwidth
- Con: Requires Line-of-Sight, Affected by weather
Comparative Summary
| Feature | Radio Wave Transmission | Microwave Transmission |
|---|---|---|
| Frequency Range | Lower (Below 1 GHz) | Higher (1 GHz to 300 GHz) |
| Propagation | Omni-directional (Spreads widely) | Unidirectional (Highly focused beam) |
| Line of Sight (LOS) | Not required (Can penetrate walls) | Strictly required |
| Bandwidth/Data Rate | Lower | Higher (Suitable for high-capacity links) |
| Antenna Type | Simple, standard antennas | Highly directional Parabolic Dish Antennas |
Analogy to Solidify Understanding:
If Computer Networking is like delivering goods, Radio Waves are like using a large delivery drone that casts a wide net, dropping packages over a general area, suitable for everyone nearby. Microwave Transmission is like using a powerful, precision laser guided missile to send a specific package directly from one highly dedicated sender to one highly dedicated receiver.
I. Framing: Definition and Techniques
Definition (in Data Link Layer)
Framing is the function of the Data Link Layer (Layer 2) that divides the stream of bits received from the Network Layer into manageable, distinct blocks of data called frames.
Explanation
Imagine you have a very long, continuous roll of paper (the bit stream). You need to cut it into separate, labeled pages (frames) so the person receiving it knows exactly where one page ends and the next begins. Framing is the process of cutting and labeling that paper.
Bookish Explanation (8–12 Marks Detail)
The Data Link Layer is responsible for encapsulating the data packets received from the Network Layer into frames. This process is crucial because it allows the receiver to recognize the start and end of a group of bits and perform error detection and flow control on a block-by-block basis.
The syllabus requires understanding the techniques used to accomplish framing:
- Character Count: The frame header includes a field that specifies the exact number of characters (bytes) in the frame. If the count is corrupted, synchronization is lost, and the receiver cannot correctly interpret subsequent frames.
- Start/End Flags with Character (Byte) Stuffing: Special flag characters mark the start and end of a frame. To prevent the flag character from appearing accidentally in the data payload, a process called character stuffing inserts an escape character (ESC) before any occurrence of the flag or escape character within the data; the receiver removes the escape characters to recover the original payload.
- Start/End Flags with Bit Stuffing: This is commonly used in synchronous protocols. To ensure that the unique flag pattern (e.g., 01111110) does not appear in the data, the sender scans the data for five consecutive ones (1s). If five ones are found, it inserts a zero (0) bit into the data stream. The receiver removes this inserted zero bit upon arrival.
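A minimal Python sketch of the bit-stuffing rule from the last point (insert a 0 after five consecutive 1s; the receiver removes it) is shown below; the sample bit string is illustrative.

```python
# Minimal sketch of bit stuffing as used with an HDLC-style flag (01111110).
def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # insert a stuffed 0 after five consecutive 1s
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0 inserted by the sender
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

data = "0111111011111010"
stuffed = bit_stuff(data)
print(stuffed)                          # 011111010111110010
assert bit_unstuff(stuffed) == data     # round trip recovers the original
```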
Diagram/Mind Map Suggestion (Framing)
Diagram: Draw a long strip labeled "Bit Stream." Show an arrow pointing down to a block diagram labeled "Framing." Inside the block diagram, show the bit stream broken into three blocks: Header | Data | Trailer, with a line separating one frame from the next.
II. Data Link Layer: Protocols and Functions
The Data Link Layer is one of the key layers in network architecture.
Definition
The Data Link Layer (Layer 2 of the OSI Model) is responsible for the reliable transfer of data across a single link or segment of the network, managing physical addressing, framing, and handling errors that occur on the physical transmission medium.
Explanation
If the Network Layer (Layer 3) decides the route across the entire country, the Data Link Layer (Layer 2) is the detailed manager responsible for ensuring the package makes it safely from one stop to the very next stop along that route, checking the package's integrity and confirming receipt at each specific link.
Bookish Explanation (8–12 Marks Detail)
The Data Link Layer addresses critical Design Issues, including:
- Providing Service to the Network Layer: The Data Link Layer provides defined services to the Network Layer, often categorized as unacknowledged connectionless, acknowledged connectionless, or acknowledged connection-oriented services.
- Framing: Defining the boundaries of the data units (as discussed above).
- Error Control: Implementing mechanisms to detect and potentially correct errors introduced during transmission.
- Flow Control: Preventing a fast sender from overwhelming a slow receiver by managing the rate of data transmission.
The syllabus mandates knowledge of various protocols:
- Data Link Protocols for Noisy and Noiseless Channels: Protocols must adapt based on channel quality. Protocols for noiseless channels (like Stop-and-Wait) are simple, while protocols for noisy channels (like the ARQ variants) incorporate complex error recovery mechanisms.
- Sliding Window Protocols: This is a key set of protocols used for efficient flow and error control, allowing multiple frames to be transmitted before acknowledgment is required. The key variants listed in your syllabus are:
- Stop and Wait ARQ (Automatic Repeat Request): The simplest protocol; the sender sends one frame and waits for an acknowledgment (ACK) before sending the next.
- Go-back-N ARQ: Allows the sender to transmit multiple frames (up to the window size) without waiting for an ACK. If an error is detected, the receiver discards the corrupted frame and all subsequent frames, forcing the sender to "go back N" and retransmit all frames starting from the damaged one.
- Selective Repeat ARQ: Similar to Go-back-N, but more efficient. Only the specific damaged or lost frame is retransmitted. The receiver buffers subsequent frames and inserts the retransmitted frame in the correct sequence.
III. Error Detection and Correction Codes
The Data Link Layer is also responsible for error detection and correction codes. These techniques add redundant information (parity bits or check bits) to the data stream so that the receiver can verify the integrity of the data.
1. Cyclic Redundancy Check (CRC)
| Component | Content |
|---|---|
| Definition | CRC is a powerful and widely used error detection code that computes a short, fixed-length binary sequence (the checksum) for a block of data, based on polynomial division (modulo-2 arithmetic). |
| Explanation | CRC is like a complex mathematical fingerprint of the data. The sender calculates this fingerprint and attaches it. If the receiver calculates the fingerprint on the received data and it doesn't match the attached one, they know the data was messed up during travel. |
| Bookish Explanation (8–12 Marks Detail) | CRC uses a mathematical approach based on modulo-2 (binary, no-carry) division. The sender and receiver agree upon a predetermined generator polynomial G(x) of degree r. The sender appends r zero bits to the data, divides the result by G(x) using modulo-2 arithmetic, and replaces the appended zeros with the r-bit remainder (the CRC). The receiver divides the received codeword by the same G(x); a non-zero remainder indicates a transmission error. CRC detects all single-bit errors, all burst errors of length up to r, and most longer bursts, but it only detects errors and does not correct them. |
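The division procedure described above can be sketched in a few lines of Python. The generator 1101 (x^3 + x^2 + 1) and the data bits are illustrative choices, not a standard CRC polynomial.

```python
# Minimal sketch of CRC generation and checking using modulo-2 (XOR) division
# on bit strings.
def xor_divide(dividend: str, divisor: str) -> str:
    """Return the modulo-2 remainder (length = len(divisor) - 1)."""
    work = list(dividend[:len(divisor)])
    for i in range(len(divisor), len(dividend) + 1):
        if work[0] == "1":
            work = [str(int(a) ^ int(b)) for a, b in zip(work, divisor)]
        work = work[1:] + ([dividend[i]] if i < len(dividend) else [])
    return "".join(work)

def crc_append(data: str, generator: str) -> str:
    padded = data + "0" * (len(generator) - 1)   # append r zero bits
    return data + xor_divide(padded, generator)  # codeword = data + CRC

def crc_check(codeword: str, generator: str) -> bool:
    return "1" not in xor_divide(codeword, generator)  # remainder must be zero

codeword = crc_append("1010101", "1101")
print(codeword, crc_check(codeword, "1101"))                       # 1010101011 True
corrupted = codeword[:-1] + ("1" if codeword[-1] == "0" else "0")  # flip last bit
print(crc_check(corrupted, "1101"))                                # False: error detected
```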
2. Hamming Code
| Component | Content |
|---|---|
| Definition | Hamming code is a specialized error detection and correction code that adds redundancy bits to the data such that a single bit error can be identified and located, allowing the receiver to automatically correct the error. |
| Explanation | Hamming code is like having a supervisor who not only tells you which package is broken but also tells you exactly which corner of the package needs fixing. It can find the single mistake and correct it automatically. |
| Bookish Explanation (8–12 Marks Detail) | Hamming codes calculate the number of redundant parity bits r needed for m data bits from the relation 2^r ≥ m + r + 1. The parity bits are placed at the bit positions that are powers of two (1, 2, 4, 8, ...), and each parity bit checks a specific set of positions. At the receiver, recomputing the parity checks produces a binary value (the syndrome) that gives the exact position of a single-bit error; flipping that bit corrects it. Hamming code can therefore correct single-bit errors and detect, but not correct, double-bit errors. |
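A minimal sketch of the sizing rule 2^r ≥ m + r + 1 completed above; the data-bit counts are illustrative.

```python
# Minimal sketch: smallest number of Hamming parity bits r for m data bits,
# using the rule 2^r >= m + r + 1.
def parity_bits_needed(m: int) -> int:
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in (4, 7, 8, 16):
    print(f"{m} data bits -> {parity_bits_needed(m)} parity bits")
# 4 -> 3 (the classic Hamming(7,4) code), 7 -> 4, 8 -> 4, 16 -> 5
```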
Diagram/Mind Map Suggestion (Error Control)
Mind Map Focus: Node: Error Control Codes
- CRC: Detection only, Generator polynomial, Modulo-2 division, Strong against burst errors.
- Hamming Code: Detection + Correction, Parity bits at power-of-two positions, Corrects single-bit errors.
I. Network Classification by Scale: LAN and MAN
Networks are classified based on the geographical area they cover. These topics relate to the overall understanding of network hardware and software within Unit I.
A. LAN (Local Area Network)
| Component | Content |
|---|---|
| Definition | A Local Area Network (LAN) is a computer network that spans a limited geographical area, such as a single office building, a university campus, or a private home. |
| Explanation | A LAN is like all the computers, phones, and printers connected inside your house or school building. They can talk to each other very quickly, but they can’t reach the network across the city directly. |
| Bookish Explanation (8–12 Marks Detail) | LANs are characterized by high data rates, low delay, and low error rates, often employing shared physical media (like copper cable or fiber) under a single administrative domain. Technologies like Ethernet (IEEE 802.3) and Wi-Fi (IEEE 802.11) are the most common implementations. LANs facilitate resource sharing (files, printers, internet access) and distributed application hosting within a restricted area. |
B. MAN (Metropolitan Area Network)
| Component | Content |
|---|---|
| Definition | A Metropolitan Area Network (MAN) spans a larger geographical area than a LAN, typically covering an entire city or a substantial urban region. |
| Explanation | A MAN is like connecting all the different school buildings across the entire city using very fast, dedicated lines. It's bigger than any single LAN but still smaller than the whole internet. |
| Bookish Explanation (8–12 Marks Detail) | MANs are primarily used to connect multiple geographically separate LANs within a city. They often utilize technologies such as Fiber Distributed Data Interface (FDDI) or high-speed carriers such as coaxial cable and fiber optics. MANs serve as a backbone for high-capacity interconnection services, enabling efficient communication for major institutions or city services. They usually fall under the ownership of a single entity (like a city council or large corporation) or a consortium but span public access areas. |
II. IEEE Standards and 802.3
The IEEE (Institute of Electrical and Electronics Engineers) 802 standards specify how LANs and MANs operate at the Physical (L1) and Data Link (L2) layers. IEEE 802.3 is specifically required in Unit II.
A. IEEE 802.3 Standard
| Component | Content |
|---|---|
| Definition | IEEE 802.3 is the internationally recognized standard that defines the protocol and physical requirements for Ethernet, the dominant technology used in wired Local Area Networks (LANs). |
| Explanation | This standard is the rulebook for Ethernet cables and connections. It tells the computer how to structure the data and the specific way to talk on the network cable so that two computers don't try to send data at the exact same moment. |
| Bookish Explanation (8–12 Marks Detail) | The IEEE 802.3 standard specifies both the Physical Layer (cabling, signaling rates, etc.) and the Media Access Control (MAC) Sub-layer protocol within the Data Link Layer. Historically, the core MAC protocol defined by 802.3 for shared media networks was CSMA/CD (Carrier Sense Multiple Access with Collision Detection). This mechanism ensures efficient shared access: a node listens (Carrier Sense) to see if the channel is busy before transmitting. If a transmission is already underway, it defers sending. If two nodes transmit simultaneously (a collision), they detect the conflict and stop transmitting immediately, wait a random time (backoff algorithm), and attempt retransmission. Modern switched Ethernet often operates without contention, but the 802.3 standard still governs the fundamental frame format and addressing (MAC addresses). |
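The collision-recovery behaviour described above can be sketched as truncated binary exponential backoff. The code below is a minimal illustration, assuming the classic 51.2 µs slot time of 10 Mbps Ethernet; it is not a full MAC implementation.

```python
# Minimal sketch of truncated binary exponential backoff (classic CSMA/CD):
# after the n-th consecutive collision, a station waits k slot times, with k
# drawn uniformly from 0 .. 2^min(n, 10) - 1.
import random

SLOT_TIME = 51.2e-6  # seconds (traditional 10 Mbps Ethernet slot time)

def backoff_delay(collision_count: int) -> float:
    exponent = min(collision_count, 10)        # cap the window growth
    k = random.randint(0, 2 ** exponent - 1)   # random number of slot times
    return k * SLOT_TIME

for attempt in range(1, 6):
    print(f"collision {attempt}: wait {backoff_delay(attempt) * 1e6:.1f} us")
```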
Textual Diagram: Data Link Layer Structure and IEEE 802.3
For an 8–12 mark question, illustrating where the IEEE 802.3 standard fits within the layered architecture is highly effective. The Data Link Layer (L2) is often logically split into two sub-layers.
The following textual diagram illustrates the hierarchy of the Data Link Layer and the scope of the key IEEE standards:
|-------------------------------------|
| NETWORK LAYER (L3) |
|-------------------------------------|
| |
| DATA LINK LAYER (L2) |
|-------------------------------------|
| 2b. LOGICAL LINK CONTROL (LLC) | <--- IEEE 802.2 (Provides interface to L3)
|-------------------------------------|
| 2a. MEDIA ACCESS CONTROL (MAC) | <--- **IEEE 802.3 (Ethernet, CSMA/CD)**
|-------------------------------------|
| PHYSICAL LAYER (L1) |
|-------------------------------------|
Analogy Reminder:
If the Internet is a global highway system, LANs are the small, fast streets inside a town, and MANs are the ring roads connecting those towns. IEEE 802.3 is the specific traffic law (like "drive on the left") that dictates how the cars move on the fast, local Ethernet streets.
Sliding Window Protocols: ARQ Variants
I. Overall Concept: Sliding Window
| Component | Content |
|---|---|
| Definition | A flow control technique that allows a sender to transmit multiple data frames (up to a window size, N) before it must pause for an acknowledgment; the window slides forward as acknowledgments arrive, keeping the link busy. |
| Explanation | Instead of sending one postcard and waiting for the reply before sending the next (Stop-and-Wait), the Sliding Window is like having a box of postcards (the window). You can send all the postcards in that box at once. Once the receiver confirms receiving the first postcard, you can slide the box forward and send a new one. |
| Bookish Explanation | The sliding window refers to a logical concept where the sender maintains a buffer (window) of unacknowledged frames, allowing a mechanism called pipelining. The efficiency gains come from keeping the transmission channel busy while the acknowledgments travel back across the link, overcoming the limitations of long transmission delay. The size of the window dictates the maximum number of frames that can be outstanding at any given time. |
II. Stop-and-Wait ARQ
This is the simplest ARQ protocol and forms the basis for comparison.
| Component | Content |
|---|---|
| Definition | A fundamental ARQ protocol where the sender transmits only one frame at a time (Window Size = 1) and immediately halts transmission, waiting for a positive acknowledgment (ACK) from the receiver before sending the next frame. |
| Bookish Explanation (8–12 Marks Detail) | Stop-and-Wait ARQ uses sequence numbers 0 and 1, cycling between them for frame identification. If the received frame is corrupted, the receiver discards it and waits (implicitly sending a NAK). The sender relies on a timeout timer. If the ACK is not received before the timer expires, the sender retransmits the frame. The primary drawback is very low channel utilization, especially over long-distance links with high propagation delay, as the channel remains idle while the sender waits. This protocol effectively handles flow control but is highly inefficient in terms of throughput. |
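The low channel utilization noted above follows from the standard simplification U = 1/(1 + 2a), where a is the ratio of propagation delay to frame transmission time. The sketch below uses illustrative values for frame size, data rate, distance, and signal speed.

```python
# Minimal sketch of Stop-and-Wait utilization. Simplified model: ACK
# transmission time and processing delays are ignored, so
# U = Tt / (Tt + 2*Tp) = 1 / (1 + 2a), with a = Tp / Tt.
def stop_and_wait_utilization(frame_bits, rate_bps, distance_m, speed=2e8):
    tt = frame_bits / rate_bps   # transmission time of one frame
    tp = distance_m / speed      # one-way propagation delay
    return tt / (tt + 2 * tp)

# 1000-byte frames at 1 Mbps: short LAN link versus long WAN link
print(stop_and_wait_utilization(8000, 1e6, 1_000))       # ~0.999 (1 km link)
print(stop_and_wait_utilization(8000, 1e6, 5_000_000))   # ~0.14  (5000 km link)
```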
Textual Diagram (Time-Space Diagram)
This diagram shows low utilization as the Sender (S) waits for the ACK (A) after every frame (F).
SENDER (S) CHANNEL RECEIVER (R)
|------ F0 ------->| |
| |<------- A0 ---------|
|------ F1 ------->| |
| |<------- A1 ---------|
|------ F0 ------->| (Repeat for every frame)
III. Go-back-N ARQ
This protocol introduces pipelining efficiency but still has retransmission inefficiency.
| Component | Content |
|---|---|
| Definition | A sliding window protocol that allows the sender to transmit up to N frames (the sender window size) without waiting for individual acknowledgments. If a frame is damaged or lost, the receiver discards it and all subsequent frames, and the sender retransmits every frame from the damaged one onward. |
| Bookish Explanation (8–12 Marks Detail) | In Go-back-N, the sender maintains a window of size N (at most 2^m − 1 for m-bit sequence numbers), while the receiver window size is 1, so frames must be accepted strictly in order. The receiver issues cumulative acknowledgments: an ACK for frame i confirms all frames up to and including i. When a frame is lost or corrupted, the receiver discards it and every later out-of-order frame, and on timeout the sender "goes back" and retransmits the entire outstanding window starting from the unacknowledged frame. This keeps the receiver simple (no out-of-order buffering) but wastes bandwidth on noisy links, because correctly received frames are retransmitted. |
Textual Diagram (Time-Space Diagram)
This diagram shows frames F3 and F4 being needlessly retransmitted because F2 was lost.
SENDER (S) CHANNEL RECEIVER (R)
|--- F0, F1, F2, F3, F4 --->| (Pipelined transmission)
|--------------------------->|
| F2 is lost during transmission |
|--------------------------->| (R discards F3, F4)
| |<------- ACK1 ---------| (R keeps sending ACK1)
| |<------- ACK1 ---------|
|------ F2, F3, F4 --------->| (Sender retransmits ALL frames from F2)
IV. Selective Repeat ARQ
This protocol achieves the highest efficiency by minimizing retransmission waste.
| Component | Content |
|---|---|
| Definition | A sliding window protocol that allows the sender to transmit multiple frames within its window, but retransmits only the specific frames that are lost or damaged; the receiver accepts and buffers out-of-order frames until the missing frames arrive and can be placed back in sequence. |
| Bookish Explanation (8–12 Marks Detail) | Selective Repeat requires complex logic and large buffers at the receiver, as it must store and manage out-of-order frames until the missing frame is successfully recovered. The sender and receiver both maintain a window of size at most 2^(m−1) (half the sequence-number space) so that old and new frames can never be confused. The receiver acknowledges frames individually and may send a negative acknowledgment (NAK) for a missing frame, prompting the sender to retransmit only that frame, either on receiving the NAK or when its timer expires. This gives the best throughput on noisy links at the cost of more complex sender and receiver logic. |
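The sequence-number constraints mentioned for Go-back-N and Selective Repeat can be checked with a small sketch (assuming m-bit sequence numbers, as in the explanations above).

```python
# Minimal sketch of the window-size limits for m-bit sequence numbers:
# Go-back-N allows a sender window of up to 2^m - 1, while Selective Repeat
# is limited to 2^(m-1) to avoid ambiguity between old and new frames.
def max_window(m_bits: int) -> dict:
    return {
        "sequence_numbers": 2 ** m_bits,
        "go_back_n_window": 2 ** m_bits - 1,
        "selective_repeat_window": 2 ** (m_bits - 1),
    }

for m in (3, 4):
    print(m, "bits ->", max_window(m))
# 3 bits -> GBN window 7, Selective Repeat window 4
```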
Textual Diagram (Time-Space Diagram)
This diagram shows that the Sender (S) only needs to retransmit F2. F3 and F4 are buffered.
SENDER (S) CHANNEL RECEIVER (R)
|--- F0, F1, F2, F3, F4 --->| (Pipelined transmission)
|--------------------------->|
| F2 is lost during transmission |
|--------------------------->| (R stores F3, F4; R sends NAK2)
| |<------- NAK2 ---------|
|------ F2 --------->| (Sender retransmits ONLY F2)
| |<------- ACK4 ---------| (R acknowledges F0-F4)
Mind Map Suggestion for Memory
Focus your mind map on the comparison of the Receiver's actions, as this is the primary difference:
- Stop-and-Wait: Wait/Idle (No buffer needed)
- Go-back-N: Discard out-of-order frames (Receiver window size = 1)
- Selective Repeat: Buffer out-of-order frames (Receiver window size > 1)
Congestion Control: Policies and Algorithms
I. Principles of Congestion Control and Prevention Policies
Congestion occurs when the load offered to the network (the number of packets being sent) is greater than the capacity of the network (the number of packets that can be processed and forwarded).
| Component | Content |
|---|---|
| Definition | Congestion Control involves mechanisms used to manage network traffic flow to prevent the network from reaching a state of collapse where throughput drops dramatically due to excessive queuing and packet loss. |
| Explanation | Congestion is like a major traffic jam on the highway. If too many cars (packets) try to enter the road at once, everything slows down, and eventually, no one gets anywhere. Congestion control policies are the rules that limit how fast cars can enter the road to keep traffic flowing smoothly. |
| Bookish Explanation (8–12 Marks Detail) | Congestion Prevention Policies are high-level strategies implemented by the Network Layer (Layer 3) to proactively avoid network overload. These policies generally fall into categories like Admission Control (refusing to establish new connections if the network is already congested) and Traffic Shaping (regulating the rate and pattern of data transmission from a source host before the packets enter the network). Traffic shaping ensures that hosts adhere to negotiated traffic contracts, and this is where algorithms like Leaky Bucket and Token Bucket are employed. |
II. Leaky Bucket Algorithm (Traffic Shaping)
| Component | Content |
|---|---|
| Definition | The Leaky Bucket Algorithm is a strict traffic shaping technique that enforces a constant output rate for data transmission, regardless of the input traffic burstiness. It controls the rate at which packets are sent into the network. |
| Explanation | Imagine a bucket with a small hole (the leak) at the bottom. Water (data) can be poured into the top very quickly (a burst), but the water can only leak out through the hole at a fixed, slow rate. If you pour water in faster than it can leak out, the bucket overflows, and the excess water (packets) is discarded. |
| Bookish Explanation (8–12 Marks Detail) | This algorithm can be implemented either using a fixed-size queue (the bucket) or as a counter. The queue holds incoming packets, and a timer ensures that packets are transmitted only at a uniform rate (one packet per clock tick, for instance). If the bucket (queue) overflows because the input rate temporarily exceeds the fixed output rate, the excess packets are discarded. The Leaky Bucket guarantees a predictable and smooth flow of data but is inefficient because it discards bursts of traffic even if the network is currently idle, and it cannot utilize spare network capacity. This makes it ideal for protocols requiring guaranteed constant bandwidth. |
Textual Diagram (Leaky Bucket)
[ BUCKET (Queue) ]
Input (Burst) ---> | Packet 1 | Packet 2 | Packet 3 | ---> Output (Fixed Rate)
(Variable Rate) | Packet 4 | | | (Constant Rate)
|----------/----------|
| (Overflow Discard) |
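A minimal Python sketch of the leaky-bucket behaviour described above; the bucket capacity, leak rate, and arrival pattern are illustrative assumptions.

```python
# Minimal sketch of a leaky bucket: bursts arrive, the queue (bucket) holds at
# most 'capacity' packets, and at most 'leak_rate' packets leave per clock tick.
from collections import deque

def leaky_bucket(arrivals, capacity=4, leak_rate=1):
    queue, sent, dropped = deque(), [], 0
    for tick, burst in enumerate(arrivals):
        for pkt in range(burst):                  # burst arrives this tick
            if len(queue) < capacity:
                queue.append(f"t{tick}p{pkt}")
            else:
                dropped += 1                      # bucket overflow: packet discarded
        out = [queue.popleft() for _ in range(min(leak_rate, len(queue)))]
        sent.append(out)                          # constant-rate output
    return sent, dropped

sent, dropped = leaky_bucket([5, 0, 0, 3, 0, 0])  # bursty input pattern
print(sent)                                       # at most one packet leaves per tick
print("dropped:", dropped)
```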
III. Token Bucket Algorithm (Traffic Shaping)
| Component | Content |
|---|---|
| Definition | The Token Bucket Algorithm is a flexible traffic shaping technique that allows for bursts of traffic, provided that the average long-term transmission rate remains compliant with a defined rate. |
| Explanation | This time, the bucket holds tokens (permission slips), not water. Tokens are dropped into the bucket at a constant rate. To send a packet, you must grab one token. If you have a full bucket of tokens, you can send a massive burst of packets immediately. If the bucket is empty, you must wait for a new token to appear. This lets you use saved capacity but prevents you from going too fast over the long term. |
| Bookish Explanation (8–12 Marks Detail) | Tokens are generated at a constant rate (say r tokens per second) and accumulate in a bucket of maximum capacity C; tokens that arrive when the bucket is full are discarded. To transmit a packet, the host must consume one token (or, in byte-counting variants, one token per byte). If tokens have accumulated during idle periods, a burst of up to C packets can be sent immediately, but the long-term average rate can never exceed the token generation rate r. Unlike the Leaky Bucket, the Token Bucket does not discard packets when tokens are unavailable; packets simply wait for new tokens, which makes the algorithm far better suited to bursty data traffic while still policing the average rate. |
Textual Diagram (Token Bucket)
[ TOKEN BUCKET ]
Token Generation ----------> [ Tokens (Rate R) ] <--- (Max Capacity C)
(Constant Rate) / | \
/ | \
Input (Burst) ------------> (Check Tokens) --> Output (Immediate Transmission)
(Variable Rate) |
V
(Wait/Discard if no tokens)
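A minimal Python sketch of the token-bucket behaviour; the token rate, bucket capacity, and traffic pattern are illustrative, and held-back packets are simply counted rather than queued.

```python
# Minimal sketch of a token bucket: tokens accumulate at 'rate' per tick up to
# 'capacity'; each transmitted packet consumes one token, so saved-up tokens
# permit short bursts while the long-term rate stays bounded by 'rate'.
def token_bucket(arrivals, rate=1, capacity=4):
    tokens, results = capacity, []
    for burst in arrivals:
        tokens = min(capacity, tokens + rate)        # add this tick's tokens
        sent = min(burst, tokens)                    # one token per packet
        tokens -= sent
        results.append((burst, sent, burst - sent))  # (offered, sent, held back)
    return results

for offered, sent, held in token_bucket([0, 0, 5, 1, 1, 1]):
    print(f"offered={offered} sent={sent} held_back={held}")
# After two idle ticks the bucket is full, so a burst of 4 packets goes out at once.
```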
Summary Comparison
The key difference for the exam:
- Leaky Bucket: Enforces a strict average rate and discards any traffic that exceeds that rate, eliminating all burstiness.
- Token Bucket: Enforces the average rate over time but allows for bursts (up to the size of the bucket capacity) by utilizing saved tokens, better matching modern network traffic patterns.
I. IP Addressing and Subnetting (Context from Unit III)
The Network Layer handles logical addressing.
| Component | Content |
|---|---|
| Syllabus Reference | Network Layer topics include network layer addressing, IP address classes, subnetting, subnetworks, and the subnet mask. |
| Definition (IP Addressing) | An IP address (Internet Protocol address) is a unique numerical label assigned to every device participating in a computer network that uses the Internet Protocol for communication. It serves the dual purpose of host interface identification and network location addressing. |
| Bookish Explanation | IP addresses are logically divided into classes (Class A, B, C, D, E) to determine the distribution between the network ID (which identifies the entire network) and the host ID (which identifies a specific device within that network). Subnetting is the process of borrowing host bits to create sub-networks (smaller, manageable segments) within a single larger network, defined by a Subnet Mask. The subnet mask identifies which portion of the IP address corresponds to the network/subnet ID and which portion corresponds to the host ID. |
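Subnetting calculations can be checked with Python's standard ipaddress module. The sketch below borrows two host bits from an illustrative private /24 block to create four /26 subnets.

```python
# Minimal sketch of subnetting with the standard ipaddress module.
# The block 192.168.10.0/24 is an illustrative private address range.
import ipaddress

network = ipaddress.ip_network("192.168.10.0/24")
print("default mask:", network.netmask)        # 255.255.255.0

# Borrow 2 host bits -> 4 subnets of /26, each with 62 usable host addresses
for subnet in network.subnets(prefixlen_diff=2):
    usable_hosts = subnet.num_addresses - 2    # exclude network and broadcast addresses
    print(subnet, "mask", subnet.netmask, "usable hosts:", usable_hosts)
```

The same module can also test whether a host belongs to a subnet, e.g. `ipaddress.ip_address("192.168.10.70") in subnet`.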
II. Routing Algorithms
The routing algorithm is the component of the Network Layer that determines the path packets follow across the network. Your syllabus requires knowledge of Shortest Path Routing, Flooding, Broadcast, and Multicast routing.
A. Shortest Path Routing
| Component | Content |
|---|---|
| Definition | Shortest Path Routing is a routing strategy that uses algorithms to calculate the path with the minimum cost (e.g., fewest hops, least delay, or lowest monetary cost) between the source and the destination network. |
| Explanation | The algorithm works like a GPS system (the router), calculating the most efficient path through the interconnected road network (the graph). |
| Bookish Explanation (8–12 Marks Detail) | This is a fundamental adaptive routing objective. Algorithms like Dijkstra's or Bellman-Ford's are used by routers to compute the shortest paths. The network is modeled as a weighted graph, where the nodes are routers and the edges are communication links assigned a weight representing cost. The algorithm iteratively discovers the path that minimizes the accumulation of these weights. This is the primary method used for unicast traffic (one-to-one communication), ensuring efficient use of network resources. |
Textual Diagram: Shortest Path Determination
The following diagram illustrates a simple weighted graph, where the numbers in parentheses are link costs (the shortest path from A to D is A-B-D with a total cost of 3, while the alternative A-C-D costs 15):
        (1)      (2)
   A ------ B ------ D
   |                 |
   | (10)            | (5)
   |                 |
   C ----------------+
(Shortest Path A->D: A-B-D, Cost = 3)
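A minimal sketch of Dijkstra's algorithm applied to the graph above (edge weights as drawn); the adjacency-dictionary representation is an illustrative choice.

```python
# Minimal sketch of Dijkstra's shortest-path algorithm on the weighted graph
# drawn above (edges: A-B=1, B-D=2, A-C=10, C-D=5).
import heapq

def dijkstra(graph, source):
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue                              # stale queue entry, skip it
        for neighbour, weight in graph[node].items():
            if d + weight < dist[neighbour]:
                dist[neighbour] = d + weight      # found a cheaper path
                heapq.heappush(heap, (dist[neighbour], neighbour))
    return dist

graph = {
    "A": {"B": 1, "C": 10},
    "B": {"A": 1, "D": 2},
    "C": {"A": 10, "D": 5},
    "D": {"B": 2, "C": 5},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 8, 'D': 3}
```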
B. Broadcast Routing
| Component | Content |
|---|---|
| Definition | Broadcast routing is the technique used to send a single packet from a source to all possible destinations connected to the network. |
| Explanation | It is essential for service discovery (e.g., finding the local address of a host) where a client needs to communicate with every device on its local segment. |
| Bookish Explanation (8–12 Marks Detail) | The primary goal of broadcast routing is to deliver the message to every host on a subnetwork efficiently, minimizing the generation of redundant packets. Simple methods involve sending a separate packet copy to every destination, which is highly inefficient. More sophisticated techniques, such as Reverse Path Forwarding (RPF), are used. RPF forwards a broadcast packet only if it arrives on the interface that represents the shortest path back to the source, preventing loops and redundant transmissions on multipath networks. |
Textual Diagram: Simple Broadcast Scope
[Router]
|
+----+----+----+
| | | |
[Host A] [Host B] [Host C] (All Hosts in the local network segment receive the packet)
C. Multicast Routing
| Component | Content |
|---|---|
| Definition | Multicast routing is the specialized technique used to send a single copy of a packet from a source to a specific subset of destinations (known as a multicast group) that have explicitly registered interest in receiving the data. |
| Explanation | Multicast is crucial for applications like streaming or video conferencing where only users who join the group receive the high-volume data stream. |
| Bookish Explanation (8–12 Marks Detail) | Unlike broadcasting, multicast conserves bandwidth by intelligently delivering data only to the interested group members. Multicast routers maintain a record of which interfaces lead toward a member of a specific multicast group. These routers then construct a spanning tree that efficiently connects the source to all members, ensuring that only one copy of the packet traverses any given network link. This provides targeted distribution, making it suitable for applications requiring point-to-multipoint communication. |
Textual Diagram: Multicast Spanning Tree
(S is Source, R are Routers, G are Group Members. Note that R3 does not forward the packet to Host X because X is not a group member, conserving bandwidth.)
(Source S)
|
R1
/ \
R2 -- R3
/ \ \
[G1] [G2] [Host X]
Domain Name System (DNS) and Application Layer Services
I. DNS Definition and Functionality
| Component | Content |
|---|---|
| Definition | The Domain Name System (DNS) is a hierarchical, distributed database system operating at the Application Layer, primarily responsible for translating human-readable domain names (e.g., www.example.com) into the numerical IP addresses needed by the network to locate and route traffic. |
| Explanation | DNS is the internet's phone book. You remember the name (domain name), but the network needs the phone number (IP address). DNS is the service that quickly looks up and translates the name into the correct number so communication can begin. |
| Bookish Explanation (8–12 Marks Detail) | The core functionality of DNS is name resolution, which is achieved through a decentralized and fault-tolerant architecture. It uses an inverted tree structure called the DNS Name Space to delegate management authority. The DNS protocol, often relying on UDP for speed, allows client programs (resolvers) to query the distributed database to map hostnames to IP addresses, ensuring that complex addressing is abstracted away from the end-user. |
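From the client side, name resolution is exposed through the operating system's resolver. The following is a minimal Python sketch using the standard library; the hostname is an illustrative example and the call requires network access.

```python
# Minimal sketch of name resolution from a client's point of view: getaddrinfo
# asks the configured DNS resolver to map the hostname to IP addresses.
import socket

hostname = "www.example.com"
try:
    results = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    for family, _, _, _, sockaddr in results:
        print(family.name, sockaddr[0])    # address family and resolved IP address
except socket.gaierror as exc:
    print("resolution failed:", exc)
```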
Textual Diagram: DNS Name Space Hierarchy
The DNS structure is hierarchical, starting from the unnamed root:
. (Root)
|
/------------/-----------|----------\------------\
.com .org .net .edu (Top-Level Domains)
| | | |
microsoft wikipedia example harvard (Second Level Domains)
II. DNS Nameservers and Caching
A. Nameservers
DNS relies on multiple types of DNS servers to manage the distributed database and resolve queries:
- Root Name Servers: Direct queries to the appropriate Top-Level Domain (TLD) servers.
- TLD Servers: Hold the addresses of the Authoritative Name Servers within their domain (e.g., the servers responsible for .com).
- Authoritative Name Servers: Hold the definitive IP records for a specific domain (e.g., the specific server for microsoft.com).
- DNS Resolvers (Local DNS Servers): Receive queries from client machines and manage the iterative or recursive process of finding the final IP address.
B. Caching and Caching DNS Resolver
| Component | Content |
|---|---|
| Caching | Caching is a mechanism employed by DNS servers, especially Resolvers, to store the results of recent queries locally for a specified period (Time-To-Live, TTL). |
| Caching DNS Resolver | The local DNS server maintains a cache. When a client requests a domain name, the Resolver checks its local cache first. If the mapping is found, it returns the IP address immediately, bypassing the need to query the Root and TLD servers. |
| Bookish Detail | Caching significantly improves performance and reduces network traffic by avoiding repetitive external lookups. This decentralized caching is crucial for the scalability of the Internet, preventing higher-level servers (like Root and TLD servers) from being overwhelmed. However, if a host's IP address changes, the cached record must expire (based on TTL) before clients receive the updated information. |
III. Remote Access and HTTP/HTTPS
A. Remote Access Protocols
The syllabus mentions Remote Login and File Transfer Protocol, which facilitate remote access.
| Component | Content |
|---|---|
| Definition (Remote Access) | Remote access protocols allow a user to connect to and interact with a network resource or server located in a different geographical location, giving the user the ability to log in or manage files as if they were physically present at the server location. |
| Bookish Detail | Protocols like File Transfer Protocol (FTP) are used for moving files between hosts. Remote Login (traditionally Telnet, now often replaced by more secure methods like SSH) allows a user to establish a terminal session on a remote computer. These are fundamental Application Layer services required for network maintenance and data sharing. |
B. HTTP/HTTPS
The syllabus specifically lists HTTP under the Application Layer.
| Protocol | Definition and Functionality | Security |
|---|---|---|
| HTTP (HyperText Transfer Protocol) | HTTP is the foundational Application Layer protocol used for transmitting hypermedia documents, such as HTML, over the World Wide Web. It is a request-response protocol, typically running on TCP port 80. | HTTP is unsecured; data is transferred in plain text, making it vulnerable to eavesdropping and interception. |
| HTTPS (HyperText Transfer Protocol Secure) | (Note: only HTTP is listed in the syllabus; HTTPS is its secure variant and standard curriculum.) HTTPS functions identically to HTTP but layers the communication over SSL/TLS (Secure Sockets Layer/Transport Layer Security). | HTTPS provides encryption of data in transit, data integrity (ensuring data was not modified), and authentication (verifying the server's identity). It typically uses TCP port 443. |
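The request-response behaviour of HTTP can be sketched with the standard library; the host is an illustrative example and the call requires network access. Swapping HTTPConnection for HTTPSConnection (port 443) runs the same exchange over TLS.

```python
# Minimal sketch of HTTP's request-response pattern. Plain HTTP on port 80
# sends everything in clear text; HTTPSConnection wraps the same exchange in TLS.
import http.client

conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
conn.request("GET", "/")                  # send the request line and headers
response = conn.getresponse()             # read status line, headers, body
print(response.status, response.reason)   # e.g. 200 OK
print(response.getheader("Content-Type"))
conn.close()

# Secure variant: conn = http.client.HTTPSConnection("www.example.com", 443)
```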
Analogy to Solidify Understanding:
If DNS is the Internet's phone book, then HTTP is the language you use to ask the other person (the server) for information once you connect, and HTTPS is that same conversation conducted inside a soundproof, locked vault (encryption), ensuring privacy and security.
Introduction to Network Security
I. Definition and Objective
| Component | Content |
|---|---|
| Definition (Network Security) | Network Security involves the policies, procedures, and practices implemented to prevent and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. |
| Explanation | Network security is like putting strong locks, alarms, and guards around the valuable digital information stored in your network. The goal is to make sure only authorized people can see, change, or use the data, and to stop bad actors (hackers) from messing things up. |
| Bookish Explanation (8–12 Marks Detail) | As highlighted by the course objective, understanding network security concepts is vital. Network security aims to protect the confidentiality, integrity, and availability (CIA Triad) of data traversing the network and residing on network endpoints. It involves defensive measures at several OSI layers, from Physical layer access control to Application layer encryption. The foundational goals are achieved through policies and implementation mechanisms related to authentication, authorization, and encryption. |
II. Core Security Services (The CIA Triad)
The primary services provided by network security mechanisms are organized around the CIA Triad:
- Confidentiality:
- Definition: Ensuring that information is accessible only to those authorized to have access.
- Mechanism: Primarily achieved through Encryption (e.g., the use of HTTPS instead of HTTP, discussed previously, provides confidentiality for web traffic).
- Integrity:
- Definition: Ensuring that the data has not been altered or destroyed in an unauthorized manner during storage or transmission.
- Mechanism: Achieved through methods like digital signatures and hashing algorithms (which play a role similar to the CRC checksums discussed in the Data Link Layer); a short hashing sketch is shown after this list.
- Availability:
- Definition: Ensuring that authorized users can access resources and data when needed.
- Mechanism: Protecting against attacks designed to take down resources, such as Denial of Service (DoS) attacks, often involving redundancy and proper Congestion Control policies (like the Leaky Bucket and Token Bucket algorithms from Unit III).
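A minimal sketch of the integrity idea referenced above, using SHA-256 from Python's standard hashlib module; the messages are illustrative.

```python
# Minimal sketch of integrity checking with a cryptographic hash: if even one
# bit of the message changes, the digest no longer matches.
import hashlib

def digest(message: bytes) -> str:
    return hashlib.sha256(message).hexdigest()

original = b"Transfer 100 to account 42"
fingerprint = digest(original)            # sent or stored alongside the message

tampered = b"Transfer 900 to account 42"
print(digest(original) == fingerprint)    # True  -> integrity preserved
print(digest(tampered) == fingerprint)    # False -> modification detected
```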
III. Key Mechanisms for Network Security
To achieve the CIA triad goals, the following mechanisms are typically introduced:
| Mechanism | Layer Focus | Function |
|---|---|---|
| Encryption/Decryption | Presentation/Application (L6/L7) | Scrambling data so that it cannot be read by unauthorized parties (e.g., using protocols like SSL/TLS under HTTPS). |
| Authentication | Application (L7) | Verifying the identity of the user or device attempting to access a resource (e.g., usernames and passwords). |
| Firewalls | Network (L3) and Transport (L4) | Network security systems that monitor and control incoming and outgoing network traffic based on predetermined security rules, often filtering packets based on IP addresses and port numbers. |
| Intrusion Detection Systems (IDS) | Multiple Layers | Tools that monitor network activity for malicious activities or policy violations and issue alerts. |
Diagram Suggestion
For a high-scoring answer, a simple block diagram illustrating the interaction between confidentiality, integrity, and availability is effective:
+-------------------+
| NETWORK SECURITY|
+-------------------+
/ | \
/ | \
[Confidentiality] [Integrity] [Availability]
(Encryption) (Hashing/Signatures) (DoS Protection/Redundancy)
Analogy for Network Security:
If the data is a secret message, Encryption (Confidentiality) is writing the message in a secret code. Integrity is using a tamper-proof wax seal to ensure no one changed the letter during delivery. Availability is ensuring the post office (the network) is always open and working so the letter can be delivered immediately.
Textual Diagrams for Computer Networks (BCA-16-501)
1. OSI vs TCP/IP Reference Model Mapping (Unit I)
This diagram shows how the seven layers of the OSI Model map to the four layers of the TCP/IP Model.
OSI vs TCP/IP Reference Model Mapping
|-------------------|-----------------------------|
| OSI Layers (7) | TCP/IP Layers (4) |
|-------------------|-----------------------------|
| 7. Application | |
| 6. Presentation | 4. Application Layer |
| 5. Session | (Protocols: HTTP, DNS, SMTP)|
|-------------------|-----------------------------|
| 4. Transport | 3. Transport Layer |
| (TCP, UDP) | |
|-------------------|-----------------------------|
| 3. Network | 2. Internet Layer |
| (IP Addressing) | (Routing) |
|-------------------|-----------------------------|
| 2. Data Link | 1. Network Access Layer |
| 1. Physical | (Hardware, Drivers) |
|-------------------|-----------------------------|
2. Packet Switching: Dynamic Routing (Unit I)
This diagram illustrates the core concept of packet switching, where packets take different paths (dynamic routing) and reassemble at the destination.
Packet Switching: Dynamic, Independent Routing Example
(Source) H1 ----> [Switch A] --(Route A)--> [Switch B] ----> (Destination) H2
\ /
\---(Route B)---> [Switch C] ----/
- Data broken into: Packet 1 (Blue) and Packet 2 (Red).
- P1 might take Route A; P2 might take Route B.
- Both reassemble at H2, prioritizing network efficiency.
3. Microwave Transmission: Line-of-Sight Requirement (Unit I)
This diagram emphasizes the strict requirement for line-of-sight in microwave communication.
Microwave Transmission (Requires Line-of-Sight)
[Antenna Dish 1]
\
\ (Focused Beam)
\
[Building/Obstacle] --- X --- [Line of Sight Blocked]
\
\
[Antenna Dish 2] (Signal Blocked)
4. Data Link Layer: Frame Structure (Unit II)
This shows how the Data Link Layer (L2) structures the bit stream into a frame.
Data Link Layer: Frame Structure (Framing)
|----- Header -----|----- Data (Payload) -----|----- Trailer -----|
<-- Addressing, Control --> <-- Network Layer PDU --> <-- CRC/Checksum -->
5. Data Link Layer (L2) Sub-layer Structure and IEEE 802 (Unit II)
This diagram shows where the critical IEEE 802.3 (Ethernet) standard fits within Layer 2.
Data Link Layer (L2) Structure and IEEE Standards
|-------------------------------------|
| NETWORK LAYER (L3) |
|-------------------------------------|
| DATA LINK LAYER (L2) |
|-------------------------------------|
| 2b. LOGICAL LINK CONTROL (LLC) | <--- IEEE 802.2
|-------------------------------------|
| 2a. MEDIA ACCESS CONTROL (MAC) | <--- **IEEE 802.3 (Ethernet/CSMA/CD)**
|-------------------------------------|
| PHYSICAL LAYER (L1) |
|-------------------------------------|
6. Go-back-N ARQ: Error Recovery Time-Space Diagram (Unit II)
This diagram illustrates the inefficiency of Go-back-N, where multiple good frames are retransmitted after one loss (F2).
Go-back-N ARQ: Error Recovery Example
SENDER (S) CHANNEL RECEIVER (R)
|--- F0, F1, F2, F3, F4 --->| (Pipelined transmission)
|--------------------------->|
| (F2 is lost) |
|--------------------------->| (R discards F3, F4 as they are out of order)
| |<------- ACK1 ---------| (R keeps sending ACK1)
| |<------- ACK1 ---------|
|------ F2, F3, F4 --------->| (Sender retransmits ALL frames from F2)
7. Routing Algorithm: Shortest Path Determination (Unit III)
This illustrates weighted links used in shortest path algorithms (e.g., Dijkstra's).
Routing Algorithm: Shortest Path Example
        (1)      (2)
   A ------ B ------ D
   |                 |
   | (10)            | (5)
   |                 |
   C ----------------+
(Shortest Path A->D: A-B-D, Cost = 3)
8. Routing Algorithm: Multicast Spanning Tree (Unit III)
This diagram shows how multicast conserves bandwidth by only forwarding traffic to paths leading to group members (G1, G2).
Multicast Routing: Spanning Tree
(Source S)
|
R1
/ \
R2 -- R3
/ \ \
[G1] [G2] [Host X]
(Traffic only reaches G1 and G2. R3 does not forward to Host X.)
9. Congestion Control: Token Bucket Algorithm (Unit III)
This illustrates the mechanism of the Token Bucket, which allows controlled bursts.
Token Bucket Algorithm (Traffic Shaping)
Token Generation ----------> [ Tokens (Rate R) ] <--- (Max Capacity C)
(Constant Rate) / | \
/ | \
Input (Burst) ------------> (Check Tokens) --> Output (Immediate Transmission)
(Variable Rate) |
V
(Wait/Discard if no tokens)
10. DNS Name Space Hierarchy (Unit IV)
This shows the inverted tree structure of the Domain Name System.
DNS Name Space Hierarchy
. (Root)
|
/------------/-----------|----------\
.com .org .net .edu (Top-Level Domains)
| | | |
google wikipedia example harvard (Second Level Domains)
| |
mail.google.com cs.harvard.edu (Hosts/Subdomains)
11. Network Security: The CIA Triad (Unit IV)
This shows the three foundational pillars of network security.
Network Security: The CIA Triad
+-------------------+
| NETWORK SECURITY|
+-------------------+
/ | \
/ | \
[Confidentiality] [Integrity] [Availability]
(Encryption) (Hashing/Signatures) (DoS Protection)