Wednesday, August 29, 2012

Part One: Describe Network Communications Using Layered Models...

Preface: For this blog entry, I am referring to Scott Bennett's book "31 Days Before Your CCNA Exam". While the occasional passage may be copied from this book, it is not my intention to infringe upon the copyright of this work (which is ©2007, Cisco Systems Inc), nor is it my intention to make the content of the book available to read online. 

Note for any casual readers: I hope you find this entertaining and perhaps even informative. It might seem a bit circuitous and at times perhaps even silly, but when describing things quickly, I tend to describe them in very train-of-thought type ways. I'll repeat myself, refer back to stuff I might not have said, perhaps even (I hope not!) contradict myself from time to time, but hey, this is a learning experience for me too, and as long as I know what I mean, that's the important thing ;-).

The first and most important thing that newcomers to networking theory are taught is the ins and outs of layered models. Layered models provide a visual and conceptual representation of the most fundamental inner workings of the network. While they can seem confusing at first, it's essential to your understanding of networking theory that you know two particular layered models inside and out.



The thing to the left is what's called the OSI model - the Open Systems Interconnection model. I'm not going to go into the history of it (released in 1984 to simplify and standardise network communications), but I will say that the OSI model - for us - is pretty much a brute fact: it's just there, it just exists. For thousands of classes of network engineers the world over, it is the scaffolding that holds our understanding of network communications up. Why is this relevant to us? Well, it will eventually become apparent, as future modules reference the OSI model.

It's a multilayered model. Think of it as a cocktail where all the layers are poured carefully on top of each other. I would show you a picture of one, but you'd be amazed how hard it is to find a picture of a layered cocktail on the net, that isn't payware on some stock photography website.

The layers of the OSI model are numbered from the bottom up, that is, 1=physical, 2=data link, 3=network and so on.

An easy way to remember the layers of the OSI model is with the mnemonic Please Do Not Throw Sausage Pizza Away 
That is,  Physical, Data-link, Network, Transport, Session, Presentation, Application.
It works the other way too, with All People Seem To Need Data Processing.

As information travels across a network or across the internet (itself simply a network of networks), it is continually changed and updated. While your request for a certain webpage (for example) - the payload - remains the same from source to destination, the box - or multitude of boxes, as you will see - that it travels in is continually redirected, relabeled, sometimes even repackaged altogether, while on its travels. These different types of labelling or packaging are called PDUs.

PDU: Protocol Data Unit: Think of this as a specific type of parcel. Layer 3 PDUs are packets, for example, whereas Layer 2 PDUs are frames. Layer 2 can't read packets, and layer 3 can't read frames.
The layers of the OSI model communicate with each other, but cannot read each other's PDUs:

For a layer to be able to "read the label" and direct the data where it needs to go, the data (or PDU) needs to be placed in a box designed for that layer, that is, encapsulated in a layer [x] PDU.

Example (in VERY general terms - we haven't gone onto ARPs or reverse ARPs or anything like that yet, so I'm leaving a great deal out; this is just for the purposes of describing the OSI model, remember):

PC1, at the left, wants to send some data to PC2, at the right. Connected to either PC is a switch (square), and these are connected to each other via two routers (round). I've used both switches and routers in this example, because:
  • In most cases, switches forward data in frame format, based exclusively on the layer 2 address of the frame (Layer 3 and higher multilayer switches are beyond the scope of the CCNA)
  • Routers forward data in packet format, based exclusively on the layer 3 address of the packet.
Remember that I referred to a multitude of boxes? 

PDUs can be nested one inside the other. When you send information across an Ethernet network, what you're generally sending is data inside a segment, itself inside a packet, itself inside a frame. The frame is the lowest-level PDU we deal with in this example.
So again, the higher level puts stuff inside an addressed box to send it to a lower level, which puts that box in another, bigger box, with its own address label written in that layer's own unique language, and sends it to an even lower level, which puts it in an even bigger box, and so on and so forth.
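If you like, the box-in-a-box idea can be written out as a rough sketch in Python. The field names here are all invented for illustration - real headers obviously don't look like Python dictionaries:

```python
# A toy sketch of encapsulation: each layer wraps the PDU from the
# layer above in its own "box" with its own address label.
# Field names are made up for illustration, not real header formats.

def encapsulate(data, src_ip, dst_ip, src_mac, dst_mac):
    segment = {"payload": data}                         # layer 4 PDU
    packet = {"src_ip": src_ip, "dst_ip": dst_ip,
              "payload": segment}                       # layer 3 PDU
    frame = {"src_mac": src_mac, "dst_mac": dst_mac,
             "payload": packet}                         # layer 2 PDU
    return frame

def decapsulate(frame):
    packet = frame["payload"]     # open the frame, lift out the packet
    segment = packet["payload"]   # open the packet, lift out the segment
    return segment["payload"]     # the original data, untouched

frame = encapsulate("GET /index.html", "10.0.0.1", "10.0.0.2",
                    "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb")
print(decapsulate(frame))  # the payload survives all the wrapping
```

The payload never changes; only the wrapping around it does.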
Remember transporters from Star Trek? That's layer 1. When the layer 2 PDU is finally ready to be sent, it goes on the transporter pad and is dematerialised. All those little atoms and all that energy flowing about? That's layer 1, and those are the bits.
You get to the transporter room wherever you're going, and the box rematerialises again. That's a layer 2 PDU, having been transmitted over Layer 1.

  1. PC1 creates a frame destined for the router, addressed to the router with the router's layer 2 address. Inside that frame is a packet, with a layer 3 address. 
  2. Copper wiring can't carry frames though; it can only carry electrical current. So the PC, having created the frame, transmits the bits - just like the Star Trek analogy - to the layer 2 switch.
  3. The switch receives the bits, reconstitutes them into a frame, examines the address label, notices that it is for the router, and (again turning it into lots of bits) sends it on to the router. 
  4. The router reassembles the bits into a frame, and then (remember, routers don't care about layer 2 addresses) opens the frame (box) and lifts out the packet (the smaller box inside), discarding the remains of the data that made up the frame. 
  5. The router examines the address label on the packet and realises it now needs to be passed to the second router. It wraps the packet in a new frame addressed to the second router, turns the whole lot into bits, and sends them on their way. 
  6. The second router repeats the process, reconstituting the frame and lifting out the packet to read the address label on it. Realising that the destination is on its own directly attached network, the router places the packet inside a new frame, sticks a lovely new layer 2 address on it, and sends it towards the switch, turning it into bits in the process.
  7. The second switch receives the bits, reconstitutes them into a frame, reads the layer 2 address which it recognises as belonging to PC2, and forwards the frame onto PC2's network card, as a stream of bits.
  8. The stream of bits reaches PC2, where it is turned into a frame again, the PC realises "hey this is for me" and works its way through the packaging, opening as many boxes as it needs to, packet, segment and all, to find the data it's after. 
 Now, it's important to remember that this example isn't intended to go into the ins and outs of data transmission, nor does it describe in explicit detail the steps of routing a packet from a source host to a destination host. That is for later on in my studying.
What the example does do though, is illustrate the fact that during its travels from source to destination, data is moved from one layer to another and back again (often repeatedly), and more importantly, emphasises the role of the OSI model in our understanding of this process.
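The router's part of the story in steps 4 to 6 can be sketched the same way - again with invented field names. The thing to notice is that the packet (and its layer 3 addresses) comes out untouched; only the layer 2 wrapping is new:

```python
# A toy sketch of the router's job: strip the old frame, keep the
# packet untouched, wrap it in a brand-new frame for the next hop.
# Field names are invented for illustration.

def route(frame, router_mac, next_hop_mac):
    packet = frame["payload"]    # open the box, lift out the packet
    # the layer 3 addresses are NOT touched - only the layer 2 label is new
    return {"src_mac": router_mac, "dst_mac": next_hop_mac,
            "payload": packet}

old_frame = {"src_mac": "pc1-mac", "dst_mac": "router1-mac",
             "payload": {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
                         "payload": "hello PC2"}}

new_frame = route(old_frame, "router1-mac", "router2-mac")
print(new_frame["dst_mac"])            # router2-mac - a new frame label
print(new_frame["payload"]["dst_ip"])  # 10.0.0.2 - the same old packet
```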

In summary, the layers are as follows:
  • 7: Application Layer: This is home to e-mail, FTP and the other programs and protocols that let the user get data into the network.
  • 6: Presentation Layer: This is where encryption and compression can occur. Here, data is represented in a standard syntax and format, such as ASCII.
  • 5: Session Layer: This layer is responsible for setting up and closing down sessions between programs that exchange data.
  • 4: Transport Layer: Segments are transported, helped on their way by functions that ensure reliability of transmission, detect errors, and control the flow of data.
  • 3: Network Layer: Here, Packets are routed over the network. The path they are sent on is determined by their layer 3 address, the IP Address.
  • 2: Data Link Layer: On this layer, frames traverse the local area network. This travel is facilitated by their layer 2 address, the MAC Address.
  • 1: Physical Layer: This is where the magic happens. Data is transmitted as a series of light pulses (fibre optic), electrical pulses (copper cable), or radio waves (Wireless/Wi-Fi).


 In the CCNA, we learn about a second type of layered model, known as the TCP/IP model. As with a great many technological concepts, the TCP/IP model was created by the military (or rather, the United States Department of Defense, on behalf of the military). In this case, the intention was to define a resilient network structure that could ensure consistent communications during nuclear war. As far as my studies indicate, the TCP/IP model is essentially a different concept to describe identical operations. We may touch on this again later (a later passage in the book actually indicates that this is not true. How so remains to be seen).

Remembering which layer of each model corresponds to its counterpart is a small challenge. It's easiest to remember that the layers A,P,S (7,6,5) which are not really dealt with by networking students, at least not in the CCNA curriculum, are bunched together into one big layer. Transport is unchanged, Network simply gets a name change and becomes "Internet", and then you have the two (or one) layers/layer at the bottom. 3,1,1,2 helps me remember. I don't know why.
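If it helps, the 3,1,1,2 mapping can be written out literally:

```python
from collections import Counter

# The OSI-to-TCP/IP layer mapping, written out as a table.
osi_to_tcpip = {
    "Application":  "Application",
    "Presentation": "Application",
    "Session":      "Application",
    "Transport":    "Transport",
    "Network":      "Internet",
    "Data Link":    "Network Access",
    "Physical":     "Network Access",
}

# Count how many OSI layers fold into each TCP/IP layer: the 3,1,1,2.
counts = Counter(osi_to_tcpip.values())
print(counts["Application"], counts["Transport"],
      counts["Internet"], counts["Network Access"])  # 3 1 1 2
```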

Sublayers: In order to complicate things a little more, the data link layer of the OSI model actually consists of two sub-layers. OSI's layer 2 combines these two sub-layers into one layer, in much the same way that layer 1 of the TCP/IP model combines the OSI model's L1 and L2 into one layer. It seems to be a conceptual thing.
  • The upper sub-layer is called the Logical Link Control sublayer. This sublayer acts as an interface with the upper layers.
  • The lower sub-layer is called the Media Access Control sublayer, and, guess what, it controls access to the media. Ethernet operates in the physical layer and in the MAC sublayer. 
 Frames: Layer 2 frames are made of different types of information called fields. These fields allow the receiving host to identify where a frame starts, where it ends, where it needs to go, and whether it has been transmitted correctly. Without the structure of a frame, the layer 1 transmission would just be a long squiggly stream of binary data. The fields in a generic frame are as follows:
  • Start of Frame: This field identifies the beginning of the frame.
  • Address: This contains the source and destination layer 2 (MAC) address.
  • Length (or) Type: If this is a length field, it specifies the length of the frame. If it is a type field, it identifies the layer 3 protocol carried inside the frame.
  • Data: This is the important part. This is where the data intended for upper layers is kept. In TCP/IP terms, "upper layers" means layers 2 to 4 of the model; in OSI terms, layers 3 to 7, as you'd expect.
  • Frame Check Sequence: We'll come across this one a lot. This field provides a number that represents the data in the frame, as well as a way to check the frame and arrive at the same number. Something called a Cyclic Redundancy Check is a common way to calculate the number, and to check for errors in the frame.
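The Frame Check Sequence idea is easy to sketch in Python. Real Ethernet uses a CRC-32 too (the exact on-the-wire bit-ordering details differ), so zlib's crc32 is close enough to show the principle:

```python
import zlib

# The FCS idea: the sender computes a CRC over the frame's contents and
# appends it; the receiver recomputes the CRC and checks it matches.

def add_fcs(frame_bytes):
    fcs = zlib.crc32(frame_bytes)
    return frame_bytes + fcs.to_bytes(4, "big")

def check_fcs(wire_bytes):
    frame_bytes, fcs = wire_bytes[:-4], wire_bytes[-4:]
    return zlib.crc32(frame_bytes) == int.from_bytes(fcs, "big")

wire = add_fcs(b"dst-mac src-mac type payload")
print(check_fcs(wire))          # True - frame arrived intact
corrupted = b"X" + wire[1:]     # one mangled byte in transit
print(check_fcs(corrupted))     # False - receiver discards the frame
```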
 Three layer 2 technologies that control how the media is accessed are Ethernet, Token Ring, and FDDI (Fiber Distributed Data Interface). Within the MAC sublayer, these technologies fall into two groups: deterministic and nondeterministic. FDDI and Token Ring are deterministic, as they provide a way for hosts to take turns accessing the media. Ethernet is nondeterministic and uses something called Carrier Sense Multiple Access/Collision Detection (CSMA/CD) as a way to access the media. This means that hosts check whether the line is busy, and only transmit if it is not.
If two hosts transmit at the same time, a collision would occur, and both hosts would be required to wait a random amount of time before trying again.
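CSMA/CD boils down to a couple of rules you can sketch in a few lines: listen before you talk, and back off a random number of time slots after a collision. (Real Ethernet uses truncated binary exponential backoff; this only gestures at that.)

```python
import random

# A toy sketch of CSMA/CD: transmit only if the wire is idle, and on a
# collision back off for a random number of time slots before retrying.

def try_to_send(line_busy, collision_detected, attempt):
    if line_busy:
        return "wait: carrier sensed"        # someone else is talking
    if collision_detected:
        # back off 0..(2^attempt - 1) slots, then try again
        slots = random.randrange(2 ** attempt)
        return f"collision: back off {slots} slot(s), then retry"
    return "transmit"

print(try_to_send(line_busy=True,  collision_detected=False, attempt=1))
print(try_to_send(line_busy=False, collision_detected=False, attempt=1))
```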

It is important to pay attention to the name of the specific layered model being referenced, in any layered model question.

The TCP/IP Application Layer: This layer features protocols and programs that prepare data to be encapsulated in lower layers. These programs include TFTP, FTP, SNMP, SMTP, Telnet and DNS.
More on these later.

TCP and UDP (User Datagram Protocol) operate as protocols of the TCP/IP transport layer. Both protocols segment data from the application layer and send the segments to the destination. UDP differs from TCP in that it just blurts data at the destination host and doesn't check whether it has been received. TCP, on the other hand, ensures reliable transfer with receipt acknowledgements, sequencing, and mechanisms to control the flow of data, and is therefore described as a "connection-oriented" protocol.
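In Python's socket module, the two protocols map straight onto the two standard socket types: SOCK_STREAM for TCP and SOCK_DGRAM for UDP. Here's UDP doing its connectionless thing over loopback - no session setup, just send:

```python
import socket

# SOCK_DGRAM = UDP: no connection, no handshake, no delivery checks.
# (SOCK_STREAM would give you TCP instead.)

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # port 0 = let the OS pick one
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"blurt!", ("127.0.0.1", port))   # just blurt it out

data, addr = receiver.recvfrom(1024)
print(data.decode())                  # blurt!
sender.close()
receiver.close()
```

The sender gets no confirmation at all that the datagram arrived; it worked here only because loopback is reliable.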

The TCP/IP Internet Layer is responsible for finding the best path for packets over the network. It is aided in this role by the inclusion of the connectionless protocol, IP. Remember, TCP Is connection-oriented, IP is not.
Other protocols that work at the internet layer of the TCP/IP model include:
  • ARP (Address Resolution Protocol) to find a MAC address when only the IP address is known.
  • Inverse/Reverse ARP (You've guessed it), to find an IP address when only the MAC is known.
  • ICMP: Internet Control Message Protocol, which carries control and error messages - it's the protocol behind ping. 
 The TCP/IP Network Access Layer is also known as the "host to network" layer, and is responsible for providing the protocols that allow the data to access the physical media. Also found within this layer, are protocols that define the standards for the network media (copper, fiber, radio etc). Examples of these protocols are: Ethernet, Fast Ethernet, PPP, FDDI, ATM, and Frame Relay.


Remember, the application layer of the TCP/IP model includes the Application, Presentation and Session layers of the OSI model. Note that the network access layer includes the Data Link and Physical layers of the OSI model. 3,1,1,2.
The OSI model is more of an academic and theoretical construct, whereas the TCP/IP model is the basis for the development of the internet.

Flow Control and Reliability:

When you're thinking about the transport layer, remember that two major functions of the layer's role are Flow Control, and Reliability, with a capital R.
The transport layer achieves these goals through concepts such as sliding windows (not to be confused with the similarly named Gwyneth Paltrow movie), segment sequence numbers, and acknowledgements.

When two hosts get together and establish a TCP connection at the transport layer, they need to agree on what constitutes a "reasonable" flow of information. It is this flow control that allows the receiving host to process the received information in time to receive subsequent segments from the transmitting host.

In order to start passing segments at the transport layer, both hosts must set up and maintain a session. The software and operating system of the sender communicate with the receiver's OS and software to set up and synchronise a session. TCP avoids congestion at the transport layer, because the receiving host is able to send ready or not-ready messages to the sending host.

THREE-WAY HANDSHAKE: Applications that use TCP must first set up a session as described above. The sender sends a SYN (synchronise) message. The receiver receives this message and sends back a SYN-ACK (synchronise-acknowledgement) message. The original sender receives the SYN-ACK and transmits the third message, an ACK. During this process, the sequencing for communications via TCP is defined. Both hosts must send an initial sequence number, and both hosts must receive an acknowledgement, before the communication can proceed.
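Here's that exchange sketched with made-up initial sequence numbers. The middle message is usually written SYN-ACK since it carries both flags, and each acknowledgement number is the other side's sequence number plus one - that's how the sequencing gets agreed before any data flows:

```python
# A sketch of the three-way handshake. The ISNs here are invented;
# in reality each host picks a hard-to-guess 32-bit number.

client_isn = 100
server_isn = 300

# 1. client -> server: SYN, seq = client's ISN
syn = {"flags": "SYN", "seq": client_isn}

# 2. server -> client: SYN-ACK, seq = server's ISN, ack = client ISN + 1
syn_ack = {"flags": "SYN-ACK", "seq": server_isn, "ack": syn["seq"] + 1}

# 3. client -> server: ACK, ack = server ISN + 1
ack = {"flags": "ACK", "ack": syn_ack["seq"] + 1}

print(syn_ack["ack"], ack["ack"])  # 101 301
```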

SLIDING WINDOWS: This function depends on the concept of a "window size". If the sender transmits a segment, the receiver receives it, and says "I received your segment, send the next one", that is a window size of one. If the sender transmits seven segments, and the receiving host says "I received your segments, send the next seven", that is a window size of seven. The one, or the seven, are the window size.
Now, any connection where the transmitting host is only sending one segment at a time is going to be chock-full of acknowledgement traffic, but not particularly fast. This is where sliding windows comes in.
The sender might send three segments, and in return the receiving host says, "I received your three segments, try sending four". Four are sent, in return the reply comes back "I received four, try sending five". Before you know it (in an ideal world), the sender is sending dozens of segments between each acknowledgement, and the connection is nice and speedy.

If at some point though, there's a problem - if data falls into a hole somewhere and the receiving host just sits there twiddling its thumbs, still waiting for the data to turn up - the sending host thinks "Hang on, I haven't got an acknowledgement. I'll send the data again, but this time I'll use a smaller window size." So the window size can decrease, as well as increase.

Sliding windows is the reason why, when you're downloading something, the computer changes its mind every second about how long you'll need to wait before the download is complete. An annoyance this may be, but at least now you know why it does it.

ACKNOWLEDGEMENTS: How does the sending host know when to retransmit a segment though? Easy. Each time a "window" of segments is transmitted (and received at the other end), the receiving host acknowledges receipt of the window in its entirety (remember, with a window size greater than 1, the receiving host does not acknowledge each individual segment), and along with that acknowledgement, transmits the number of the next segment it expects to receive (the first one in the new window).

Like this:

SEND 1, 2, 3.
ACK 4.

The "ACK" and the "4" are two separate parts of the same message. Essentially, ACK 4 means "I've received this window, please send the next bunch of segments, starting with segment 4".

This is all well and good. We know what happens when the data is received okay, and we know what happens when none of it is received.
But what happens when some of it is received, but not the rest? Again, this is where ACK numbers come in. Picture the exchange above, with a window size of three, but imagine that this scenario occurs:

SEND 1, 2, 3.
ACK 4.
SEND 4, 5, 6.
ACK 6.

Wait, what happened there? ACK 6? It's supposed to be ACK 7, right? Not this time: the receiving host never received segment number 6. The trick is that the ACK number always means "the next segment I expect to receive". So ACK 6 is the receiver's way of saying "I've got everything up to 5 - send me number 6 again". The sender retransmits the missing segment, the receiver acknowledges it, and everything gets back on track:

SEND 1, 2, 3.
ACK 4.
SEND 4, 5, 6. (6 goes missing on the way)
ACK 6.
SEND 6.
ACK 7.
SEND 7, 8, 9.

This scheme is called expectational (or cumulative) acknowledgement: an ACK never lists individual segments, just the next one expected, which turns out to be all the sender needs in order to know what to resend.
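The scenario above can be sketched as a toy receiver whose ACK number is always "the next segment I expect":

```python
# A toy receiver using expectational acknowledgements: one lost segment
# makes the receiver keep asking for it until the sender fills the gap.

def ack_for(received, next_expected=1):
    # slide forward over every segment we now hold, in order
    while next_expected in received:
        next_expected += 1
    return next_expected          # "please send from here"

received = set()
received.update([1, 2, 3])
print("ACK", ack_for(received))   # ACK 4

received.update([4, 5])           # segment 6 went missing on the way
print("ACK", ack_for(received))   # ACK 6 - everything up to 5 is here

received.add(6)                   # sender retransmits segment 6
print("ACK", ack_for(received))   # ACK 7 - back on track
```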

TCP COMPARED TO UDP: FTP, HTTP, SMTP and Telnet all use the transport layer TCP protocol. All of these protocols benefit from the connection-oriented reliable data transfer that TCP provides.

Remember "fields", those different types of information that make up a PDU? TCP segments have them too.

Here they are:
  • Source Port
  • Destination Port
  • Sequence Number (remember, just as described above)
  • Acknowledgement Number 
  • Header Length
  • Reserved
  • Code Bits
  • Window (see?)
  • Checksum
  • Urgent Pointer
  • Option
  • Data
Many of these fields are filled with tiny bits of incredibly useful data that ensure timely and reliable delivery of the segment payload.
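Those fields map onto a real 20-byte header (options and data, the last two in the list, follow after it). Here's one built and pulled apart with Python's struct module - ! means network byte order, H a 16-bit field, I a 32-bit field, B an 8-bit field:

```python
import struct

# The fixed part of a TCP header, field by field.
TCP_HEADER = "!HHIIBBHHH"

header = struct.pack(TCP_HEADER,
                     49152,       # source port (an ephemeral port)
                     80,          # destination port (HTTP)
                     1001,        # sequence number
                     2002,        # acknowledgement number
                     5 << 4,      # header length: 5 x 32-bit words
                     0b00010010,  # code bits: SYN + ACK flags set
                     8192,        # window size
                     0,           # checksum (left blank here)
                     0)           # urgent pointer

src, dst, seq, ack, *_ = struct.unpack(TCP_HEADER, header)
print(len(header), src, dst, seq, ack)   # 20 49152 80 1001 2002
```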

UDP is used by other protocols, including TFTP, SNMP, DHCP and DNS.
UDP is a connectionless transport layer protocol. There's none of the lovely sequencing or error checking here. Think of TCP as the computer version of arranging to make a speech in front of invited guests, and UDP as the computer version of sticking your head out of the window and yelling BLARGH!!, while hoping that the people you want to reach are listening, and can hear you.

Error checking in UDP is left to the higher layer protocols, so while software applications might be able to tell whether they've received a message correctly, the UDP protocol itself doesn't care either way.

The fields inside a UDP segment are as follows:
  • Source Port
  • Destination Port
  • Length
  • Checksum
  • Data
So why would anyone use UDP when TCP does a better job? Easy. TCP has a lot of what we call overhead: to deliver traffic from A to B, TCP generates quite a lot of traffic of its own. By the time a destination host has received a couple of hundred TCP segments, all those sequence numbers, ACK numbers and window-size settings have been written and read hundreds of times, before the host has even had the opportunity to look at the data.
UDP might be rough and ready, but it takes up less bandwidth and device (switch, router etc) resources to process.

PORT NUMBERS: Where a source and destination MAC address can identify the specific machines that the data was sent from and intended for, another type of number, called a port number, identifies which individual software package the data is intended for.
You know what it's like: you're on Facebook, you're on Google, you're checking your email, there are half a dozen different websites open in your browser, and then Windows is updating itself - yet again.

But what is there to stop your computer just getting totally confused with a deluge of data that it doesn't know how to process? Imagine if your email data went to your Firefox window, or if, when loading a webpage, Firefox tried to open the Windows Update data, while Windows Update scratched its head trying to figure out what to do with the latest icanhascheezburger pictures.

Port numbers exist to stop this from happening. For example, all HTTP data (webpage data, broadly speaking) entering your machine destined for your web browser would be marked with a port number of 80. Your computer reads the data and says, "Ah, port 80, I'm going to send this to Firefox". If you download a file from an FTP (File Transfer Protocol) server, it comes marked with a port number of 21: "Ah, port 21, this needs to go to the FTP program".
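The computer's side of that conversation is basically a lookup table from port number to program. A toy de-multiplexer, with invented program names:

```python
# A toy port de-multiplexer: the OS-side logic that reads the port
# number and hands the data to the right program. Names are made up.

handlers = {
    80: "web browser",   # HTTP
    21: "FTP client",    # FTP control
    25: "mail client",   # SMTP
}

def deliver(port, data):
    app = handlers.get(port)
    if app is None:
        return f"port {port}: no one listening, drop it"
    return f"port {port}: hand '{data}' to the {app}"

print(deliver(80, "<html>...</html>"))
print(deliver(6000, "mystery bytes"))
```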
Port numbers are yet another clever gizmo that makes internet communications possible.

For a list of popular port numbers in use, have a look at the IANA port number registry. Doom fans will be amused to know that Doom uses UDP port number 666, which made me laugh when I found out.


When designing a network, network engineers use what is called a hierarchical model (the term is used so often in Cisco classes that I spelled hierarchical twice just now without even having to look how to spell it).

The hierarchical model essentially means that devices are grouped into one of three ranks, or layers (hierarchical model layers are completely different from network model layers). These layers are:
  • Core Layer: serves as the backbone of the network, where high speed transmission occurs. This layer generally consists of very powerful (and uber expensive) switches and routers.
  • Distribution Layer: provides "policy-based" connectivity, meaning that the majority of data goes where it needs to go without having to bother the core layer with it.
  • Access Layer: This connects directly to the end users, PCs, IP phones, etc.
That's all for today, folks. Coming soon? A lovely rant about the awesomeness of Spanning-Tree Protocol.

Ciao, I'm off to the pub.

Tuesday, August 28, 2012

Let's Get Started...

Okay, so after the non-negligible distractions of the past 6 months (which I'm not going to pay lip service to), it's time to finally get moving forwards again, and get working towards finally taking and passing  the Cisco CCNA exam.

I graduated in early spring this year from college, taking with me both a level 2 and a level 3 C&G IT certificate, which I'm pretty pleased with. Essentially, I passed both courses.
I still need, however, to take the final CCNA exam, which is vendor certified, which in a nutshell means that the certificate comes direct from Cisco, and says "We approve of this guy configuring our hardware, we trust him to do it properly etc etc". 

With that in mind, in April I bought my own lab. What this is, is essentially my own kit to practice on at home. Packet Tracer is all well and good, and it's pretty versatile, but since I'm a hands-on type person, I much prefer getting to grips with the hardware itself rather than just the concepts.

 What you're looking at is my own little internetwork, primarily comprised of 7 connected devices, which then have computers/laptops/printers etc connecting at either end.

From top to bottom, we have:

  Total cost about, hmm, about £450 including the little stuff like cables and whatnot. Sounds expensive, but when it's to help with qualifications, it's really not. Networking hardware depreciates faster than a brand new car. In a derby. With monster trucks.

Seriously, this expansion module (For something even bigger and expensive-erer) is nearly 50 grand! This is what networking hardware costs, and this is why people want engineers who are qualified and certified, by the vendor, by the book.

I'm doing my studying with the help of a rather cool book, which is one of the few paper books I've bought for a while.
I've actually had it for a while so I'm stretching the definition of 31 days somewhat, but hey, I've got to take the exam sooner or later. Would be a waste otherwise.

So, let's see how we go. Rather than take the exam 31 days from now (I don't have the money to book it at the minute) I'll run through the book, and make sure that I understand the concepts described therein. I've got some additional hardware that may be on the way (gratis) so it's my intention to practice the concepts I describe in each section.

So for the time being, this is going to turn into something of a network oriented blog. That's my intention anyway. Let's see how it goes...