[Back to Lecture Notes page]
The Data Link Layer (DLL)
Sub-topic Outline:
- Error Detection and Error Correction
- Algorithms for Data Link Protocols
- Simplex Protocols
- Sliding Window Protocols
- Some Example Data Link Protocols
- HDLC - High Level Data Link Control
- SLIP - Serial Line Internet Protocol
- PPP - Point-to-point Protocol
- Data Link for ATM Networks
The Data Link Layer (DLL)
- From the Physical Layer, we have the following services:
- sending signals
- converting between digital and analog signals (modulation/demodulation) - from now on we will only talk about the digital signals
- putting multiple channels into one link (multiplexing)
- Directing an incoming signal to an outgoing line (switching)
- The Data Link Layer builds on these services to provide reliable and efficient communication between two adjacent machines - notice that we are still talking about adjacent machines (two machines "side-by-side" connected with one link). The machines may have gone through a few intermediate switches, but essentially we still only have communication on one link. We can't have communication going through multiple links until we have routing, and routing is a Network Layer service.
- Some reasons why we need Data Link layer services - what's the big deal in sending a stream of bits from one machine to an adjacent machine? I thought the Physical Layer already does that. Yes, but some factors which the Physical Layer doesn't worry about are:
- finite data rate
- propagation delays - bits don't arrive at their destinations immediately
- errors in transmission
We want some protocols to have the most reliable and efficient communication between two adjacent machines taking these factors into account.
Functions of the Data Link Layer
- Framing
- Error Control
- Flow Control
- Provide Services to The Network Layer
Framing
- The Physical Layer transmits bits, but there may be errors. Eg.
- added bits,
- lost bits
- bits changed.
- Solution: put the bits into frames, and define checksums for the frames – the sender calculates the checksum and puts it in the transmission, and the receiver calculates the checksum again to make sure it is the same.
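The sender/receiver checksum idea above can be sketched as follows. This is a minimal illustration using a simple additive 16-bit checksum; real data link protocols typically use a CRC, and the function names here are just for illustration.

```python
# Simple additive checksum sketch (real links usually use a CRC).

def checksum(data: bytes) -> int:
    """Sum all payload bytes, modulo 2**16."""
    return sum(data) % 65536

def make_frame(payload: bytes) -> bytes:
    """Sender: append the 2-byte checksum to the payload."""
    return payload + checksum(payload).to_bytes(2, "big")

def verify_frame(frame: bytes) -> bool:
    """Receiver: recompute the checksum and compare with the one sent."""
    payload, received = frame[:-2], int.from_bytes(frame[-2:], "big")
    return checksum(payload) == received
```

A flipped bit in transit changes the recomputed checksum, so `verify_frame` fails and the receiver knows the frame is damaged.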
- Four types of framing methods:
- Character count
See Fig 3-3 p180
- Use a count field at the start of each frame to indicate how many characters the frame contains.
- Can’t recover from errors, because when an error corrupts the count we don’t know where to start reading for the next frame
- Starting and ending characters
- Use special ASCII characters to indicate start and end of frame
- If errors occur, just look for the next start character to start reading again
- If a start or end character happens to be in the data itself, stuff another special escape character in front of it to indicate that it is not actually a start/end character.
Eg. See Fig 3-4 p181
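Character stuffing can be sketched like this. The byte values for START, END and ESC below are arbitrary stand-ins (real protocols use DLE STX / DLE ETX sequences):

```python
# Character (byte) stuffing sketch with made-up START/END/ESC values.
START, END, ESC = 0x02, 0x03, 0x10

def stuff(payload: bytes) -> bytes:
    """Frame a payload, escaping any byte that looks like a delimiter."""
    out = bytearray([START])
    for b in payload:
        if b in (START, END, ESC):
            out.append(ESC)             # mark: next byte is plain data
        out.append(b)
    out.append(END)
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    """Strip the delimiters and remove the escape characters."""
    assert frame[0] == START and frame[-1] == END
    out, escaped = bytearray(), False
    for b in frame[1:-1]:
        if escaped:
            out.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)
```

Because every delimiter-valued byte inside the data is preceded by ESC, the receiver never mistakes data for a frame boundary, and unstuffing restores the payload exactly.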
- Starting and ending bit flags
See Fig 3-5 p181
- Similar to (2), but uses a special sequence of bits as opposed to an ASCII character – then this doesn’t depend on having 8-bits for the ASCII representation.
- As in (2), also requires stuffing in case the start/end sequence appears in the data.
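Bit stuffing works the same way at the bit level. Assuming an HDLC-style flag of 01111110, the sender inserts a 0 after every run of five 1s so the flag pattern can never appear inside the data (bits are shown as strings here purely for readability):

```python
FLAG = "01111110"  # HDLC-style start/end flag

def bit_stuff(bits: str) -> str:
    """After any five consecutive 1s, insert a 0."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")             # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows any five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1                      # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)
```

After stuffing, no run of six 1s survives in the frame body, so a received flag sequence is unambiguously a frame boundary.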
- Physical layer coding violations
- Some bit encodings have redundancy. Eg. some LAN protocols encode 1 as 10, and 0 as 01 (always a change in the middle) – so if bit sequences like 11 and 00 are detected, we know that errors have occurred.
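The redundancy trick above can be sketched directly: encode 1 as "10" and 0 as "01", so a "11" or "00" bit-cell is an illegal code word that signals an error.

```python
# Manchester-style encoding sketch: 1 -> "10", 0 -> "01".

def encode(bits: str) -> str:
    return "".join("10" if b == "1" else "01" for b in bits)

def decode(signal: str) -> str:
    """Decode two-bit cells; any cell without a mid-cell change is an error."""
    out = []
    for i in range(0, len(signal), 2):
        cell = signal[i:i + 2]
        if cell == "10":
            out.append("1")
        elif cell == "01":
            out.append("0")
        else:
            raise ValueError(f"coding violation in cell {cell!r}")
    return "".join(out)
```

Since legal cells always change in the middle, a long run of identical bits on the wire can only mean a transmission error (or, in some protocols, a deliberate frame delimiter).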
Error Control
- What do sender and receiver do when frames are received with errors, or are lost? – we will talk about actually detecting errors later.
- Using acknowledgments - some possibilities:
- Receiver sends a positive acknowledgment if a frame is received properly
- Receiver sends a negative acknowledgment if not
- Sender waits for acknowledgments once frames are sent
- If the sender gets a negative acknowledgment, it resends the frame
- If the sender gets NO acknowledgment (frame lost?), it resends the frame
- Must be careful of synchronisation – eg. what if sender resends a frame before the acknowledgment arrives?
- The above are only some possibilities. Exactly what happens depends on how the protocols choose to handle errors. We will look at some algorithms for doing them, as well as some examples of how real protocols deal with them in this topic.
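One simple way to combine the possibilities above is stop-and-wait with retransmission. This is a toy sketch, not any particular real protocol: the "channel" is just a function returning "ACK", "NAK", or None (acknowledgment lost), and all names are illustrative.

```python
# Toy stop-and-wait retransmission sketch.

def send_reliably(frame, channel, max_tries=5):
    """Send one frame, resending on NAK or a missing ACK.

    Returns the number of transmissions used, or raises if we give up.
    """
    for attempt in range(max_tries):
        ack = channel(frame)            # "ACK", "NAK", or None (lost)
        if ack == "ACK":
            return attempt + 1
        # NAK or timeout (None): fall through and resend
    raise RuntimeError("gave up: no positive acknowledgement")
```

Note this sketch ignores the synchronisation problem mentioned above (a resend crossing a late acknowledgment); handling that needs sequence numbers, which the sliding window protocols later in this topic provide.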
Flow Control
- How to stop a sender from sending faster than a receiver can receive?
- Protocols usually (but not always) define how and when a receiver lets a sender know it can receive more frames.
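One common shape for this is credit-based flow control: the receiver grants "credits", and the sender may only transmit while it holds credit. A hypothetical sketch (all names are made up for illustration):

```python
# Credit-based flow control sketch: sender side.

class Sender:
    def __init__(self, initial_credit: int):
        self.credit = initial_credit    # frames the receiver can accept now

    def try_send(self, frame, line) -> bool:
        """Transmit only if we hold credit; otherwise wait."""
        if self.credit == 0:
            return False                # receiver's buffers may be full
        line(frame)
        self.credit -= 1
        return True

    def on_credit(self, n: int):
        """Receiver says it has freed n buffer slots."""
        self.credit += n
```

The sender can never outrun the receiver, because each transmission spends a credit that only the receiver can replenish.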
A Note about Error and Flow Control
Some protocols at higher-level layers (Network, Transport, Application) do error and flow control themselves as well. That being the case, the obvious question is: why then should services at the Data Link layer (which the other layers depend on) have to do it too?
The answer goes back to the concept of layered software again. The main reason is that we design services at the Data Link layer to give reliable and efficient adjacent machine communication. Different data link protocols will give different levels and types of reliability and efficiency. It is up to the higher layer services which data link protocol they want to depend on. They may choose to use a low-reliability but efficient data-link protocol and implement their own error-checking (for reliability), or they may use a high-reliability, high-efficiency data link protocol and not worry about reliability and efficiency themselves. It’s up to them.
If we start making too many assumptions about the higher layer protocols, we might as well not have the layers, and just design the whole architecture all at once. As I’ve mentioned in the past, that leads to a very complicated design and analysis process.
Another reason for the Data Link layer doing error control, even though some higher layer protocols might already be doing it, is that Data Link services add extra bits to the frames handed down from the higher layers. They need to ensure these extra bits are transmitted error-free.
Services to the Network Layer
- Services in the Network Layer depend on the Data Link layer to transmit bits from one machine to another
- Three types of services the Data Link Layer can offer the Network Layer
- Unacknowledged connectionless service
- Acknowledged connectionless service
- Acknowledged connection-oriented service
We don’t consider the case of unacknowledged connection-oriented service because it is not really realistic to go through the overheads of setting up a connection for reliable communication and then not have acknowledgments to assist in that reliability.
Unacknowledged Connectionless Service
- Send frames without needing acknowledgements
- Do not need to establish a connection and release it when finished
- Situation when this is useful:
- Error rate very low - so leave it to higher layers to do error recovery
- Transmission where there is no point in resending (eg. speech)
Acknowledged Connectionless Service
- Must get acknowledgements from receiver for all frames sent
- Do not need to establish a connection and release it when finished
- More reliable
- Situation when this is useful:
- Media with higher error rate (eg. wireless)
Acknowledged Connection-Oriented Service
- Must get acknowledgments from receiver for all frames sent
- Establish a connection and release it when finished
- Most reliable of the three types
- Establish connection between two machines – initialise variables and counters to track which frames have been received, which have not.
- Send frames
- Release connection – free up the buffers used for the variables
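The three phases above can be sketched as a tiny connection object: establish (initialise the counters and tracking state), send numbered frames, then release (free the tracking buffers). All names here are hypothetical.

```python
# Sketch of the connection-oriented service's three phases.

class Connection:
    def __init__(self):
        # Establish: initialise variables and counters.
        self.next_seq = 0
        self.unacked = {}               # frames sent but not yet ACKed

    def send(self, payload, line):
        """Send a numbered frame and remember it until it is ACKed."""
        frame = (self.next_seq, payload)
        self.unacked[self.next_seq] = frame
        self.next_seq += 1
        line(frame)

    def on_ack(self, seq):
        """The receiver confirmed frame `seq`; stop tracking it."""
        self.unacked.pop(seq, None)

    def release(self):
        """Release: free the buffers used for the tracking state."""
        self.unacked.clear()
```

The per-connection state (`next_seq`, `unacked`) is exactly what the unacknowledged connectionless service avoids paying for: it exists only between establish and release.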
[Back to Lecture Notes page]