Wednesday, September 26, 2012

Workbook Thingie: ACL and NAT...

I've been doing a few practice exams, and two areas that always make me think "Argh!" are Access Control Lists, and Network Address Translation.

It's not that I'm unfamiliar with the concepts or how they operate; rather, I have a hard time remembering the syntax of the commands.

A large part of studying for the CCNA is becoming familiar with the command syntax in Cisco's IOS operating system, which is completely CLI text-based. As a consequence, a lot of my studies revolve around remembering the correct commands, which variables to enter, and how to enter them, among other things.

I have found, however, an awesome website full of tutorials and scenarios, and it is with their help that I'm studying ACLs now.

Now, NAT depends on an understanding of ACLs, so I'm going to study both at the same time.

1: Create a Standard ACL

Now access lists come in two flavours: Standard, and Extended. Standard ACLs are nice and simple. They block traffic based on its source address, and so should be placed as close to the destination of the traffic as possible. Why? I don't know.

Well, I went and checked. The reason you place standard ACLs as close to the destination of the traffic as possible is that, because they block ALL traffic from an address, placing them nearer the source might block traffic from your network that you don't necessarily want blocked.

Here's the topology that I'm working with, and I'm going to configure a standard access control list on router 1, with the intention of blocking traffic from the right PC to the left PC. (I'm going to call the left PC PC1 and the right PC PC2, just to make it easy on myself.)
A quick test beforehand to show that I can successfully ping (tests for two-way data travel between hosts) from one PC to the other (meaning that the network is working properly), and we're ready to begin.

Router1(config)#access-list 10 deny host [PC2's address]
Router1(config)#access-list 10 permit any

So what we've done here is created an access control list, ACL number 10. 
The purpose of ACL 10 is to deny all traffic coming from that address, that is, the right-hand PC. 
However, a quick test of the network shows that I can still ping from PC1 to PC2, and back again.

Yep, we need to apply the access list to a specific interface.

Router1(config-if)#ip access-group 10 out

This applies the access list to the interface, in an outbound direction. Lo and behold, we can no longer ping from the right PC to the left, because router 1 now discards any packets from PC2 heading out of that interface towards PC1.
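For reference, the whole sequence looks something like this (the interface name and the address here are just placeholders; substitute whatever your topology actually uses):

Router1(config)#access-list 10 deny host 192.168.2.10
Router1(config)#access-list 10 permit any
Router1(config)#interface FastEthernet0/0
Router1(config-if)#ip access-group 10 out

The ip access-group command is what binds a numbered ACL to an interface; 192.168.2.10 is standing in for PC2's address.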

Now that's all done with, it's time to clear the standard ACL off the router, because we're going to create an Extended ACL.

2: Create an Extended ACL

Now it's time to get on with something a little more in depth. Extended ACLs are more versatile than standard ACLs, as they can block specific types of traffic. Want to prevent telnet traffic while allowing web and email traffic through? No problem, you can do that with an extended ACL.

As you can imagine, because extended ACLs are more in depth, the syntax for them is correspondingly more complex. 

The command we're going to use now is:

Router1(config)#access-list 150 deny tcp any host [PC1's address] eq telnet

To break it down:

  • Access list number 150 (therefore extended). 
  • Deny - do not allow this traffic. 
  • TCP - match this protocol. 
  • Any - from any host. 
  • Host [PC1's address] - to this specific host. 
  • Eq telnet - if it is telnet traffic (TCP port 23).
I think you can also block specific port numbers too. I'll check that out in a second.
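For what it's worth, IOS should accept a raw port number in place of the keyword, so these two lines (destination address invented for illustration) ought to be equivalent, telnet being TCP port 23:

Router1(config)#access-list 150 deny tcp any host 192.168.1.10 eq telnet
Router1(config)#access-list 150 deny tcp any host 192.168.1.10 eq 23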

Because we want to continue to allow IP traffic through, we need to add:

Router1(config)#access-list 150 permit ip any any

Access control lists have what is called an "implicit deny". That is, unless traffic is specifically allowed, the ACL blocks any and all traffic.

Apply the access list to the interface as before (ip access-group 150 out) and lo and behold, while we can still ping from the right PC to the left PC, we cannot telnet from the right to the left. Not only that, but we can't telnet to the left PC from router 2 either. 

A quick check of router 1 to see if the access control list is working, and...

We're looking at the bit that says "24 matches". This means that 24 telnet packets were blocked from passing to PC1. Way to go :-).
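The check in question is presumably something along the lines of:

Router1#show access-lists

which lists each ACL, entry by entry, with a match counter against every line that has seen traffic.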

Now, let's learn about named Access Control Lists.

3: Create a Named ACL

Numbered access control lists are cool, but they have a major drawback, which is that you cannot edit specific lines in the ACL. The only way to do this is to copy the entire ACL into notepad, edit it there, remove the original ACL from the router, and paste the edited version in as a brand new ACL.

With named ACLs, each entry has its own little reference number, indicating its place in the stack of ACL entries. By switching entries around, you can make the ACL behave in very different ways, making the whole thing much more versatile. And all without having to delete and re-create the ACL!
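A rough sketch of what a named extended ACL looks like, assuming a reasonably recent IOS (the name and addresses here are invented):

Router1(config)#ip access-list extended BLOCK-TELNET
Router1(config-ext-nacl)#10 deny tcp any host 192.168.1.10 eq telnet
Router1(config-ext-nacl)#20 permit ip any any

From the same prompt, "no 10" should delete just that one entry, and a new entry can be slotted in between the existing ones by giving it a sequence number like 15.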

Author's Note: I've come across a problem in Packet Tracer, and the simulated router will not accept the commands that the tutorial is asking me to make. I'm going to fire up my lab and see if my 2620 will let me create a named ACL.
Update: Just fired up my lab, and the router happily accepts the ACL as defined in the tutorial. Could be a problem with Packet Tracer, as even the simulated 2620xm won't accept the commands.

So there we go.

Next up, Wildcard Masks...

Friday, September 14, 2012

Part 3: Evaluate the Characteristics of Routing Protocols

Routers do their thing at layer 3 of the OSI model, so they are responsible for choosing the best path for a layer 3 PDU (packet, remember?) based on its layer 3, or IP, address.
But how do routers know the best path for a packet to be sent down? Well, you have two choices.
  • The router could learn the route itself by using a dynamic routing protocol, which allows the router to find out about the network topology and build itself a routing table, or...
  • You manually configure a static route, and tell the router where it needs to send traffic destined for particular destinations.
Once you configure a static route, the router adds it to the routing table, and gives it an administrative distance of 1. Let's look into this a little.

The Administrative Distance is basically a measure of trustworthiness. In its uh, "career", a router can receive routing information from a variety of sources, various routing protocols etc., and it needs to know which routing protocols to prioritise. For example, a router receives two separate routes to the same place; one route uses IGRP, which is old and outdated, and the second uses EIGRP, which is the new(ish) standard.
Administrative Distance is what allows the router to say "well actually I'll trust EIGRP on this one, if you don't mind".

 Here are the administrative distances that we, as CCNA students, will most frequently come across:

Directly Connected Route: 0
Manually Configured Static Route: 1
EIGRP Summary Route: 5
EIGRP (Internal): 90
OSPF: 110
RIP: 120

Note that the aforementioned IGRP, which has an admin distance of 100, is not listed above. This is because IGRP is now outdated and has been largely replaced by EIGRP. Note also that these administrative distances can be modified from their default values. This allows you to, for example, configure a static route as a backup route by giving it an administrative distance that is higher than that of a dynamic route in the routing table.
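That backup trick is done by tacking the administrative distance onto the end of the ip route command. A sketch, with invented addresses:

Router1(config)#ip route 10.2.0.0 255.255.0.0 192.168.1.2 150

Because 150 is higher than OSPF's 110 (or EIGRP's 90), this static route stays out of the routing table until the dynamic route disappears. This is sometimes called a floating static route.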

Routers use Routing protocols to pass information about networks and network locations to each other. Examples of these routing protocols include RIP, OSPF and EIGRP.
It's important not to confuse routing protocols (the protocols that facilitate routing) with the routed protocols, that is, the protocols that define the information contained in a packet.

Autonomous System Numbers are assigned to portions of a larger network, enabling the administrator/architect to break the network in its entirety down into smaller portions. A routing protocol such as BGP (Border Gateway Protocol) is required to route between autonomous systems, even if these disparate autonomous systems are part of a single physically contiguous network.
On the internet, that is, on public networks outside of enterprise/private networks, autonomous system numbers are assigned by regional registries such as ARIN, the American Registry for Internet Numbers.

Routers (and therefore the network) achieve convergence when all routers share a common view of the network. If the network changes, routers must recalculate their routing tables using a dynamic routing protocol. A major advantage of AS numbers is that they break the network into manageable groups, allowing the routers to converge more quickly.

Types of Routing Protocols:

 Routing protocols are divided into two types, depending on their method of operation. Link State, and Distance Vector.
  • Link-State protocols build a topology of the entire network, and send Link State Advertisements (LSAs) to update other routers. LSAs are used to build a full topology of the network (or AS?), and are flooded throughout the network only when there is a topology change. Routers use the SPF (Shortest Path First) algorithm and LSAs to build both a shortest path tree, as well as a routing table. Using LSAs requires a more powerful router, as the process of maintaining a full loop-free topological database requires more memory than Distance Vector protocols.
  • Distance-Vector protocols, on the other hand, send periodic updates containing the entire routing table, whether the topology changes or not. In addition, as there is no topology table in D-V routing protocols, each router is only aware of its immediate neighbours. Without a full topology table, routers running distance-vector protocols use metrics (such as hop count) to determine the best path via their neighbours. 
 When a router receives a packet on a port, it examines the destination address and compares it to the routing table. The routing table is used to determine the best path for the packet, which is then forwarded out of the appropriate port.

Each of the following protocols functions at the internet layer of the TCP/IP model, that is, layer 3 (the network layer) of the OSI model. 

RIP: Distance Vector. Broadcasts updates every 30 seconds and uses hop count as the metric. The maximum usable hop count is 15 (a hop count of 16 means "unreachable", so anything over 15 hops away is deemed to be unreachable). 
IGRP: Distance Vector protocol, now outdated. Broadcasts updates every 90 seconds, and uses a composite metric of bandwidth, delay, load, and reliability.
OSPF: Link-State protocol. Updates only when there is a change in topology.
EIGRP: Hybrid. Uses features of both link state and distance vector protocols, and multicasts updates on 224.0.0.10.

As mentioned previously, BGP can be used to route between autonomous systems. It can also be used to route between separate routing protocols.


Metrics are used to aid routers in discovering the best path to forward packets. The metrics used vary from routing protocol to routing protocol, and can be one or more of the following:
  • Internetwork Delay
  • Bandwidth
  • Hop Count
  • Reliability
  • Load
Distance Vector routing protocols exchange routing tables with their neighbours in order to ascertain the metric and the best path. If these routers don't exchange their routing tables quickly enough in a changing network, a loop can occur.

A router may not receive an update that a link is down, and proceed to advertise that it can, in fact, get to the network. If these updates are passed to other routers, packets destined for this network could continue to pass around the network continuously. 

Distance vector routing protocols monitor the distance that a packet has travelled as it passes over the network, to avoid this kind of loop. RIP tracks the packet with hop count as a metric, and as mentioned above, deems the network unreachable if it appears to be over 15 hops away. The maximum hop count of 16 ends the routing loop.

Split Horizon: 

If router A updates two connected routers that network 1 is down, but then accepts a later update from one of those two routers that network 1 is reachable, there may be a loop.
This scenario is possible because one of the connected routers may be getting old information from another part of the network that was originally sent out by router A itself. Split horizon prevents this type of loop: a router will not advertise a route back out of the interface on which it learned that route, so router A's own routes cannot be echoed back to it. 
A router can also prevent loops by poisoning a route for a network that has gone down. A router can accomplish this by sending out the maximum hop count for a route as soon as it sees the network is unreachable. As mentioned, this process is called route poisoning.

Distance Vector protocols typically update only on a set interval. This can cause routing issues if a network goes down, as the router that notices it would have to wait up to 30 seconds to send its next update. 
This problem is avoided with triggered updates. With route poisoning and triggered updates working together, a router overrides its regular schedule and as soon as it notices that the network is down, it sends out the poisoned information straight away.
This doesn't mean that the routers immediately remove the route from the routing table, instead it just means that routers know about the change. 
Routers implement a holddown timer that causes them to wait a set amount of time before actually removing the route from the table.

Routing updates occur every 30 seconds with RIP. If RIP does not receive an update about a particular route for 180 seconds, that route is marked as invalid. RIP waits another 60 seconds (for a total of 240) and if information is still not received about the route, the route is removed from the routing table. These two timers are the Invalid timer, and the flush timer, respectively.
The third type of timer that RIP uses is the holddown timer. Once RIP receives a warning that a route is invalid, it immediately assigns a holddown timer to the route.

If the route comes back up while the holddown timer is active, the route is still "on probation" and is not fully reinstated until the holddown timer expires. If the holddown timer expires and the route is still down, the flush timer kicks in and removes the route shortly afterwards.

IGRP Is also a distance vector routing protocol, but this one forwards routing updates every 90 seconds, rather than every 30 seconds. IGRP focuses on speed as the main reason to use a particular route. The default metrics used by IGRP are bandwidth and delay, but load and reliability can also be considered.
IGRP can advertise interior, system, and exterior routes.
  • Interior routes are between networks that are connected to a router and that have been divided into subnets.
  • System routes are between networks inside of an autonomous system.
  • Exterior routes define access to networks outside of an autonomous system.
IGRP makes use of hold-down timers, split horizon, and poison reverse.

RIP V2: RIP v2 adds authentication and the ability to send a subnet mask with routing updates. This means that RIP v2 supports VLSM and classless inter-domain routing (CIDR).

Another difference between RIP versions 1 and 2 is in how each protocol sends updates.
RIP v1 broadcasts its updates on 255.255.255.255, whereas RIP v2 more efficiently multicasts them on 224.0.0.9 (similar to 224.0.0.10 for EIGRP).
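Enabling version 2 is only a small tweak to a basic RIP configuration - something like this, with an invented network number:

Router1(config)#router rip
Router1(config-router)#version 2
Router1(config-router)#network 192.168.1.0
Router1(config-router)#no auto-summary

The no auto-summary line stops RIP v2 summarising routes at classful boundaries, which is what lets the VLSM support actually do its job.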

Summary Routes: All routers on the internet cannot possibly contain a route for every network that exists. Routers can learn about other networks through static and dynamic routes, but for traffic destined outside of the immediate network, an administrator can add a default route. A default route provides a destination for a router to forward all packets for which it does not have an entry in its routing table.
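A default route is just a static route with all zeroes for both the network and the mask - for example (next hop invented):

Router1(config)#ip route 0.0.0.0 0.0.0.0 192.168.1.1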

Link State Advertisements: Link state protocols actually send little hello messages periodically to obtain information about neighbouring routers. These are separate and distinct from LSAs, which remain the key way that link state protocols discover information across the entire network.
When a network changes, a router will flood LSAs on a specific multicast address across the specified network area. These LSAs allow the router to create a topological database of the network, to use the Dijkstra algorithm to determine the shortest path for each network, to build the shortest path tree, and to use the resulting tree to build the routing table. Flooding LSAs across a network can
affect overall bandwidth on the network and cause each router to recalculate the full topological database. For this reason, a network using link state protocols must be broken up into small enough areas to maintain network efficiency, and sufficiently powerful routers must be used.

OSPF: An open (non-proprietary) link state protocol that allows you to control the flow of updates with areas. OSPF is a good choice for a large network because, unlike RIP, it is not limited to 15 hops, and networks can be divided into areas.
These areas communicate with a backbone area to reduce routing protocol traffic and routing table size.

OSPF routers do indeed maintain a full loop-free topological database of the network. In addition to this topological database, each OSPF enabled router maintains a unique adjacency database that tracks only neighboring routers.
OSPF routers elect a designated router, and a backup designated router, as central points for routing updates.
VLSM support, a bandwidth-based metric, a loop-free SPF tree, and rapid convergence through LSAs are key features of OSPF.
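A minimal OSPF configuration, assuming a single backbone area (process ID and addresses invented - note that the network statement takes a wildcard mask, not a subnet mask):

Router1(config)#router ospf 1
Router1(config-router)#network 10.0.0.0 0.0.0.255 area 0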

OSPF adjacency databases are fine if you're - for example - on a lab network that has four routers, each connected with point-to-point connections. Each router will have two adjacencies: one for each directly connected neighbour. If you're using fiber though (FDDI, for example), all routers would technically be connected on the ring to each other, making every router the neighbour of every other router.
OSPF avoids this situation of never-ending neighbours with an election.
Routers that are connected on broadcast multiaccess networks (like FDDI or Ethernet), or nonbroadcast multiaccess networks (such as frame relay), all elect a single router called the DR - Designated Router - to handle updates.
To avoid a single point of failure, the routers also elect a backup designated router.

OSPF hello packets go out on the multicast address 224.0.0.5 (remember, 224.0.0.10 for EIGRP and 224.0.0.9 for RIPv2).
If the connection is broadcast or point to point, the hellos are sent every 10 seconds.
If the connection is NBMA (like frame relay), the packets are sent every 30 seconds.
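These intervals can be adjusted per interface if need be - though both neighbours have to agree, or the adjacency won't form. Something like:

Router1(config)#interface Serial0/0
Router1(config-if)#ip ospf hello-interval 30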

The packets contain the following:
  • Version
  • Type
  • Packet Length
  • Router ID
  • Area ID
  • Checksum
  • Authentication Type
  • Authentication Data
 The OSPF process starts with hello packets to find neighbouring routers, so that adjacencies can be developed.
First of all, routers need to establish if they are on a point to point or a multiaccess link. If on a multiaccess link, the DR and BDR election then occurs. Once adjacencies exist between neighbours, the routers then forward LSAs and add the resulting information to their topological databases. Once the topological databases are complete, the routers use the SPF (Shortest Path First) algorithm to create the SPF tree, and then a routing table.
Periodic hello packets can alert routers to a change in the topology that would restart the process.

EIGRP and IGRP routing protocols function together well, despite the fact that EIGRP offers multiprotocol support and functions as a hybrid routing protocol. EIGRP also supports VLSM, whereas IGRP does not. A router running only IGRP will see EIGRP routes as IGRP routes.

As a hybrid multiprotocol routing protocol, EIGRP uses functions from both link state and distance vector protocols. Like OSPF, EIGRP collects multiple databases of network information to build a routing table.
EIGRP uses a neighbour table in the same way that OSPF uses an adjacency database to maintain information about adjacent routers.
EIGRP however uses DUAL (Diffusing Update Algorithm) to recalculate a topology.
EIGRP also maintains a topology table that contains routes learned from all configured network protocols. In this table, the following fields are present:
  • Feasible Distance: The lowest cost to each destination.
  • Route Source: The router identification number for externally learned routes.
  • Reported Distance: A neighbouring router's reported distance to a destination. 
  • Interface Information: Which interface is used to reach a destination. 
  • Route Status: The status of a route, where ready to use routes are identified as passive, and routes that are being recalculated are identified as Active. REMEMBER: If it's passive, it's because it doesn't need recalculating and is ready to use/in use. 
The neighbour and topology tables allow EIGRP to use DUAL to identify the best route, or the successor (think "successful") route, and enter it into the routing table. Backup routes, or feasible successors, are kept only in the topology table.
If a network goes down and there is no feasible successor, the router sets the route to active, sends query packets out to neighbours, and begins to rebuild the topology.
In the topology table, EIGRP can also tag routes as external or internal.
Internal routes come from inside the EIGRP AS, and external routes come from other routing protocols, and outside the EIGRP AS.

Advanced features of EIGRP that set it apart from other distance vector routing protocols include:
  • Rapid Convergence: EIGRP uses the DUAL FSM (Flying Spaghetti Monster/Finite State Machine) to develop a full loop free topology of the network, allowing all routers to converge at the same time.
  • Efficient Use of Bandwidth: Like OSPF, EIGRP sends out partial updates and hello packets, but these packets only go to routers that need the information. EIGRP also develops neighbour relationships with other routers.
  • Support for VLSM and CIDR: EIGRP sends the subnet mask information,  allowing the network to be divided beyond default subnet masks.
  • Multiple Network Layer Support: Rather than relying on TCP/IP to send and receive updates, EIGRP uses the reliable transport protocol (RTP) as its own proprietary means of sending updates.
  • Independence from Routed Protocols: EIGRP supports IP, IPX, and AppleTalk. EIGRP has a modular design that uses Protocol Dependent Modules (PDMs) to support routed protocols, so changes to reflect revisions in those protocols have to be made only to the PDM and not to EIGRP itself.
EIGRP uses five different types of packets to communicate with other routers:
  • Hello: Sent on 224.0.0.10 to communicate with neighbours.
  • Acknowledgement: Hello packets without any data, sent to acknowledge receipt of a message.
  • Update: Used to update new neighbours so that they in turn can update their topology.
  • Query: Used to gather information from one or many neighbours.
  • Reply: Sent as a response to a query packet.
As described, EIGRP routers build a topology table that uses DUAL to select the successor routes that will populate the routing table. If a link goes down, DUAL selects a feasible successor from the topology table, and promotes it to the successor route.
If there is no feasible successor, EIGRP recalculates the topology table. This process and DUAL enable EIGRP to achieve rapid convergence.
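A basic EIGRP configuration looks much like RIP's, with the autonomous system number following the protocol keyword (AS number and network invented):

Router1(config)#router eigrp 100
Router1(config-router)#network 10.0.0.0
Router1(config-router)#no auto-summary

Routers in the same AS with matching metric settings should then form neighbour relationships automatically.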

Thursday, September 06, 2012

Broken Again: The life and times of a mechanical idiot...

I've been going to the local bike pub, on foot, while sitting about looking for the cheapest way to get the throttle cable replaced. Okay, so the idea of bike night is that you go on the bike, but for every 10 lads on their bikes, there were three or four who went to get pissed, so it's not too bad. And get pissed I did, week after week.

Now the RS has always had a very heavy throttle, with one cable going into a splitter box, pulling on four other cables. Over the years, at least some of them must have gunked up because to twist the throttle, well, there's probably at least a couple of kilos of pull on the throttle. Dangle two bags of sugar from the end of the throttle cable and twist the grip - that's the sort of stiffness you're looking at.

Eventually (after drunkenly promising to come on the bike: "I'll bring it up this Wednesday, you can have a look at it"), I wandered up to RTT for a new throttle cable (having taken the old one as a reference); 18 quid and 20 minutes later and I've got a shiny new one.
Walked back (55 minutes) after 25 minutes of waiting for the once-hourly bus, got home and fitted the cable. Closed everything up, and realised I had no electrics.

Problem One

Got my dad to help, because he's ever so good at this sort of thing - the little wire had snapped off the solder on the cutoff switch, so he stripped the end of each wire, "tinned" them, and resoldered them with shiny new joins. Got the throttle cable fitted - a tight fit since it was slightly shorter than normal, but it fits fine and works fine.
Had a bit of a ride round, and the bike is still overheating like a bastard, but that can be sorted...

Spent today doing all sorts of things, cleaned the bike up, replaced some old bolts, added some nyloc nuts, and then gave it a once over with the polish to get it looking shiny for tonight.

Rode up there, happily making a point of blasting past my mate on his ickle neep neep scooter, and sat at the traffic lights halfway up Cowley lane with the engine getting hotter and hotter and hotter. Lights went to green, off I pull up the hill with the bike hesitating somewhat, and get almost all the way to the top of the hill before the bike bogs down completely. Just as I pull onto the side road, it dies.

Problem Two

A couple of minutes spent kicking it (the kick starter, not the bike!), while other lads ride past giving me cheery waves, and I get it going again. Ride it to outside the front door of the pub and park up, as the lads who passed me engross themselves in the RS "That's bloody marvellous/haven't seen one of them for a while/they go like shit off a shovel them" etc etc etc.

Sit around, walk around (which is like sitting around but higher up), drink (Britvic, thanks!), have a fag, and Karl pulls up on his, well, not sure what it is. Looks like a V-max ("A fucking V-max?!?" says he), but made by Honda. VF1100 or something like that. "For you..." he says to his mate, pulling out a reg/rec and handing it over, "and for you" he says to me, pulling out a new throttle tube, after I drunkenly explained that I'd fucked the last one bodging it to get me back home.

We get talking onto how the throttle is incredibly heavy, and as I disconnect the fuel hose and take the tank off to show him the splitter box, and how much strain the (presumably incorrectly routed cable) is putting on it, he goes to have a "look".
Now he's used to riding bikes with two throttle cables, one push, one pull, which I suppose is the only explanation for what he does, which is to try to twist the grip forwards, towards the nose of the bike. Just as I'm beginning to wonder what he's doing, the entire right-hand switchgear - cutoff switch, indicator switch and all - breaks free from its mounting pin and rotates upwards until the indicator switch, which did face towards the rider, is now facing the sky.

Problem Three

I hastily dismantle the throttle assembly, where I find that the metal pin that stuck snugly into the hole on the bottom of the bar, has now pushed its way through the plastic, meaning that there's little to stop the switchgear rotating with the grip - leaving me with no throttle, unable to ride back. There are a few more blokes around at this point, including one guy who owns a local customs manufacturing/modifying/servicing/if it's to do with bikes, he can do it place. I'm flitting around, worrying about being able to get the bike back, looking for someone more experienced in mechanics to tell me that it's all okay, and that I'll be able to get home fine. Eventually, I do get talking to him (he's a well liked and respected dude this guy, and really knows his stuff. Frankly, I feel like I'm wasting his time every time I talk to him (even when I'm booking the bike in for a service), but he seems to like me well enough, so we get talking), and he makes the following announcements.

  • We're both looking forward to getting the bike in for a major overhaul
  • The RS is great fun to ride, but mechanically, is a royal pain in the arse, and most riders would decide that the bike is more trouble than it's worth
  • The riders that wouldn't consider it more trouble than it's worth are by now desperately looking around trying to find RSs, which is why they apparently command a premium as "there are so few about, because they've all just died"
  • He'd have to look to be sure, but the reason the throttle is heavy, is probably because at least some of the cables under the tank are fucked, and the lining of the cables is now serving to constrict the steel inner cable, and stop it from moving properly...
  • The reason the bike is overheating, is more than likely due to an air bubble. Apparently there's a certain location behind the top cylinder (maybe?), which is a bugger for getting bubbles in

Eventually I decide I can't wait, it's time to go, if I stop and wait for the chips the off licence will be closed by the time I get back, so I kick the bike up (starts first kick, every time, even after 16 years), and eventually it warms up. Nervously ("Will I get home? Or will I be left at the side of the road with a fucked throttle?") I get the earphones in, get the helmet and gloves on, and I get on the bike, and turn the lights on so people know to get out of my way.

Stand up, into first, revs up, clutch out, and off we go.

The bike pulls forwards a meter or so, and starts bogging down. I quickly apply more throttle, and it dies altogether.

Now, I want you to imagine the sound of a two-stroke engine cutting out. Now add the sound of 30 sympathetic but amused bikers all collectively going "ohhhhhhh".

I try to kick the engine over a few times and it's not having it. From somewhere behind me appears the head of Daz, one of my other mates, in my limited field of vision. Over Rammstein's "Der Meister", I hear him say "You've had the tank off haven't you?"

"Yeah". Says I.

"Did you turn the fuel back on?"

Theatrically, as if to say "I've made a stupid mistake and I'm not even going to try to hide it", I sweep my arm over the tank and make a show of turning the fuel tap back on.

I kick the bike a few times, and it won't start. Eventually, I end up having a shouted (yet muffled, thanks to the helmet and earphones) three-way conversation between myself, one of the other blokes there, and someone called "lost in translation", where he offers me a bump start. Eventually, I get on the bike and kick it into first, to a collective chorus of "Try Second!". Another couple of blokes appear in my field of view, offering helpful tips, all of which are drowned out by Till Lindemann rolling his Rs, and eventually we get the bike rolling.

I let the clutch out, and the revs increase, but the bike doesn't start. I pull in, accelerate again, let it out quicker, and the engine cuts into life. Not wanting to lose my momentum, I give a grateful beep of the horn to the lads behind me, and ride towards the big downhill.
I pull into the junction, being eternally mindful not to snap the retaining pin in the switchgear, and eventually make it down the hill, through the bends.
Just as I'm starting to enjoy myself, to remember what a beautifully awesome bike the RS is to ride, clearing a few sweeping curves, I pull up to the roundabout, where the bike dies AGAIN.
I get off the bike, pissed off, and sit around wondering what to do.


That is the sound of a thought occurring to me.

I crouch down beside the bike, grab the fuel tap, and twist it towards the rear "Reserve" position.

A quick kick, and the engine ZZzzZzzZzzZzzzzzzs back into life.

Bloody petrol.

I ride home, glad that I've now got another problem sorted, and with a sense of relief, and a still-working throttle, pull onto the drive.

Epiloguey bit...

A quick trip to the off licence later, and here I am, surrounded by leathers and boots, my helmet sat on my bed facing blankly towards the wardrobe, and I've got a lovely bottle of chilled mead and a big bottle of Strongbow, which I'm already digging into. And here I am telling my tale to you folks.
Make of it what you will, it's been a fun night. Not great, but fun nonetheless.

I left my bike with an affectionate pat on the fuel tank and a few choice words...

You're a pain in the fucking arse... But I love you anyway...

Wednesday, September 05, 2012

Part Three: Compare and Contrast Key Characteristics of LAN Environments

Network Topologies: Networks are organised in a physical topology, an arrangement of machines that are connected to each other. Different terms are used to describe different "shapes" of topologies.
  • Bus: All devices are connected along one single arterial cable. 
  • Ring: Each host is connected to one other host on either side, forming a closed loop.
  • Star: All hosts are connected to a hub or switch in the center.
  • Extended Star: Hosts are connected to a hub or switch that is in turn connected to another hub or switch at the center. 
  • Hierarchical: As described previously, this can be described as a pyramid, with different devices at different levels. Core routers and switches would be present at the top of the pyramid, with Distribution layer switches below, and access-layer switches at the bottom. The topology typically takes the form of a pyramid of extended-star networks.
  • Partial Mesh: Some hosts are directly connected to other hosts, with little or no centralisation of links.
  • Full Mesh: All hosts are directly connected to all other hosts. 
 Logical topologies determine how hosts communicate. They define the fundamental way that the network runs. The two most common types of logical topology are token-passing, and broadcast.
  •  Token-passing topologies like Token Ring function by having a "token" passed from one host to the next, to the next. When a host wants to send data, it must wait until it has possession of the token before it is able to transmit. Despite the name, Token Ring LANs do not need to be in a ring physical topology. FDDI (Fiber Distributed Data Interface) is also a token-passing technology.
  • Broadcast topologies like Ethernet do not require rings. Ethernet itself uses CSMA/CD to avoid collisions on the media.
Networks operate in accordance with a set of rules that determine how they communicate. These rules are called Protocols. Network protocols exist to control the type of connection, how data is transmitted, and how to handle errors, among other things.

MAC Addresses: MAC addresses (also known as BIAs - Burned-In Addresses) are 48 bits long, written in hexadecimal format, and divided into six groups of two hexadecimal digits.
The first three groups, that is, the first six hex digits, are called the OUI (Organisationally Unique Identifier), and are assigned to the manufacturer of a device by the IEEE. These first six digits are the same for all devices manufactured by that company.
The last six digits are assigned by the manufacturer themselves, and are unique to each device.
A frame that any host sends over the LAN includes a destination MAC address. Any host without the matching MAC address drops the frame.
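As a quick sketch of that structure, here's how the two halves of a MAC address could be pulled apart (the address itself is made up for illustration, and the broadcast check relates to the layer 2 broadcasts discussed further down):

```python
# Sketch: splitting a MAC address into its IEEE-assigned OUI (first three
# octets) and the vendor-assigned portion (last three octets).

def split_mac(mac: str) -> tuple:
    """Return (oui, vendor_part) for a MAC like '00-1A-2B-3C-4D-5E'."""
    octets = mac.upper().replace(":", "-").split("-")
    if len(octets) != 6:
        raise ValueError("expected six octets")
    return "-".join(octets[:3]), "-".join(octets[3:])

def is_broadcast(mac: str) -> bool:
    """A destination of FF-FF-FF-FF-FF-FF is a layer 2 broadcast."""
    return mac.upper().replace(":", "-") == "FF-FF-FF-FF-FF-FF"

oui, vendor = split_mac("00-1A-2B-3C-4D-5E")
print(oui)     # 00-1A-2B
print(vendor)  # 3C-4D-5E
print(is_broadcast("ff:ff:ff:ff:ff:ff"))  # True
```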

Ethernet: Ethernet frames didn't always have a length field. Before the Ethernet standard that we know now, DIX (Digital, Intel and Xerox - what an unfortunate acronym for anyone with a mental age of 14 (like me)) not only combined the preamble and start of frame delimiter, but also listed the length/type field as just "type".
Ethernet today uses the length/type field to identify the upper-layer protocol in use.

Ethernet Frames: From end to end.
  • Preamble made of alternating 1s and 0s "Hey, I'm about to transmit".
  • Start of Frame Delimiter "10101011" "Heh, tricked you".
  • Destination Address, Source Address.
  • Length/Type, this is an important one. If the field is less than 0x600 hex (a really crappy bike meet), it represents the length of the data in the data field. If it is 0x600 or greater, this field represents the type of protocol; 0x800 Hex is IP.
  • Data: This contains the payload of the frame, which is intended for the higher layers.
  • Frame Check Sequence: This allows for error checking of the frame.
 Helpful Reminder: Routers are useful for segmenting LANs. They only forward traffic outside a LAN if it is deliberately intended for another network. As a result, they block broadcasts.
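The Length/Type rule from the frame fields above (below 0x600 it's a length, 0x600 or greater it's a protocol type, 0x0800 being IP) can be sketched as:

```python
# Sketch: interpreting the Ethernet Length/Type field. Values below 0x600
# are a length; 0x600 and above identify the layer 3 protocol (0x0800 = IP).

def interpret_length_type(value: int) -> str:
    if value < 0x600:
        return "length: {} bytes of data".format(value)
    if value == 0x0800:
        return "type: IP"
    return "type: 0x{:04X}".format(value)

print(interpret_length_type(0x004A))  # length: 74 bytes of data
print(interpret_length_type(0x0800))  # type: IP
```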

The History of LANs: When Ethernet networks first appeared, they were simple networks of machines connected to a central hub, and over time they evolved into sophisticated topologies operating across many layers of the OSI model.
Originally, LANs operated on a bus topology, using thick and thin Ethernet. Hubs (also known as multiport repeaters) became common in networks as a way to retime and amplify signals, with the hosts now arranged in a star topology.
The problem with hubs is that all signals travel to all devices, so the potential for a collision is high.

Eventually though, Bridges were introduced, which segmented the network into two separate collision domains.

Nowadays, we use switches, which are superior to hubs and to bridges. Switches filter by MAC address, and essentially, every connection between the switch and a host becomes its own collision domain. Switches, bridges and hubs do not filter broadcasts. The process of dividing a network into multiple collision domains is called microsegmentation.
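The MAC-address filtering that makes switches superior can be sketched as a simple learning table: the switch notes which port each source address arrived on, forwards known destinations out a single port, and floods unknown ones (port numbers and addresses here are made up):

```python
# Sketch: a switch's MAC learning table. Known destinations go out one
# port (their own collision domain); unknown destinations are flooded
# out of every port except the one the frame arrived on.

class Switch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                      # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the source
        if dst_mac in self.mac_table:            # filter/forward
            return {self.mac_table[dst_mac]}
        return self.ports - {in_port}            # flood

sw = Switch(ports=[1, 2, 3])
print(sw.receive(1, "AA", "BB"))  # unknown destination -> flood: {2, 3}
print(sw.receive(2, "BB", "AA"))  # "AA" was learned on port 1 -> {1}
```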

Ethernet networks that operate only in half duplex allow a host to either transmit or receive at any one time, not both. Collisions occur when two devices attempt to transmit at the same time. When this happens, the device that first witnesses the collision transmits a jam signal. All devices then invoke a backoff algorithm, and wait a certain amount of time before attempting to use the network again.
The more devices connected to a hub, the higher the potential for a collision.
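The backoff step mentioned above can be sketched like this. Ethernet's CSMA/CD uses truncated binary exponential backoff: after the nth collision, a host picks a random number of slot times between 0 and 2^n - 1 (capped at n = 10) before retrying. The function name is mine, not a standard API:

```python
import random

# Sketch: truncated binary exponential backoff after a collision.
# After the nth collision, wait a random number of slot times in the
# range 0 .. 2^min(n, 10) - 1 before retrying.

def backoff_slots(collision_count: int) -> int:
    exponent = min(collision_count, 10)
    return random.randint(0, 2 ** exponent - 1)

# After the first collision: 0 or 1 slots; after the third: 0..7 slots.
print(backoff_slots(1) in (0, 1))  # True
```

The doubling range is why a busy, collision-heavy hub gets slower and slower: each repeated collision makes hosts back off for (potentially) twice as long.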

Network latency slows connectivity, and is an especially unpopular thing with network gamers. The time it takes a NIC to receive or place a signal on the medium, and the time it takes that signal to travel across the network contributes to latency.
Layer 3 devices can increase latency, because they take more time than a layer 2 device to process network data.

Switches: Switches use MAC addresses to create direct virtual connections between two hosts on a network. These connections are awesome, because they allow each host to transmit and receive at the same time. Full-duplex communication uses the bandwidth in both directions, allowing for a 20 Mbps connection on a 10 Mbps link.

Switches can operate in one of three forwarding modes:
  • Store and Forward: The switch receives and processes the entire frame before forwarding it.
  • Cut-Through: The switch starts forwarding the frame as soon as it has read the destination MAC address (the default), or after it has read the first 64 bytes. The second variant is called Fragment-Free, and as the name suggests, it exists to filter out collision fragments and so reinforce the integrity of transmitted data.
  • Adaptive Cut-Through: Initially, the switch operates in cut-through mode until a certain number of errors are detected. Once this threshold is reached, the switch moves to store-and-forward.
 [I am having trouble finding information on how to configure these different switching modes on my Catalyst 2950s. Either the 2950s do not support them (my 3550 definitely should) or operating in multiple forwarding modes is now a historical artefact. From what I can see on various Cisco sites, as switches have become faster, the advantage of cut-through switching has diminished. This may or may not be correct, and further research is required].
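Whatever the configuration story on real hardware, the adaptive behaviour itself is simple enough to sketch. This is purely conceptual (the class name and threshold are mine, not anything from IOS): count FCS failures while in cut-through, and fall back to store-and-forward once a threshold is crossed:

```python
# Conceptual sketch of adaptive cut-through: start fast (cut-through),
# and switch to the safer store-and-forward mode once too many bad
# frames have been seen. Threshold value is arbitrary.

class AdaptiveSwitchPort:
    def __init__(self, error_threshold=5):
        self.mode = "cut-through"
        self.errors = 0
        self.error_threshold = error_threshold

    def frame_checked(self, fcs_ok: bool):
        if not fcs_ok:
            self.errors += 1
            if self.errors >= self.error_threshold:
                self.mode = "store-and-forward"

port = AdaptiveSwitchPort(error_threshold=3)
for _ in range(3):
    port.frame_checked(fcs_ok=False)
print(port.mode)  # store-and-forward
```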

 Routers, Bridges and Switches improve network functionality because they protect hosts from unnecessary traffic. Routers filter broadcasts and only forward packets that are destined for other networks to other ports. Switches divide collision domains substantially, and only pass frames over the wire to hosts with the proper destination MAC address.

Broadcasts: Remember, devices can send out layer 2 broadcasts to all hosts, by sending out frames with a destination address of FF-FF-FF-FF-FF-FF. Switches do not divide broadcast domains (remember broadcast storms).

When designing a network, it is important to bear in mind the number of broadcast domains and the number of collision domains that your design will have.

Tuesday, September 04, 2012

Part Two: Describe the Spanning-Tree Process...

Preface: For this blog entry, I am referring to Scott Bennett's book "31 Days Before Your CCNA Exam". While the occasional passage may be copied from this book, it is not my intention to infringe upon the copyright of this work (which is ©2007, Cisco Systems Inc), nor is it my intention to make the content of the book available to read online. 

Note for any casual readers: I hope you find this entertaining and perhaps even informative. It might seem a bit circuitous and at times perhaps even silly, but when describing things quickly, I tend to describe them in very train-of-thought type ways. I'll repeat myself, refer back to stuff I might not have said, perhaps even (I hope not!) contradict myself from time to time, but hey, this is a learning experience for me too, and as long as I know what I mean, that's the important thing ;-).

Switches filter frames by their layer 2 MAC address, and can speed up a network. But if you want to start adding backup connections to a switched network, it's important that you run Spanning-Tree Protocol (STP).
Redundantly connected switches provide a valuable backup connection, but if something goes wrong and these backup connections end up causing loops, the formerly-useful backup links can bring parts of the network grinding to a halt. 

How It Works (Or "How Stuff Goes Wrong"): 

Remember. If a switch does not recognise the destination MAC address for a frame, it broadcasts/floods that frame out of all ports except the port that originally received the frame. Fair enough. Consider what happens though, if a frame comes in, destined for an address that none of the switches recognise. Note that the diagram is used just to describe the topology of the networks, and not the state of the links. These links are all up, ok? :-)

Switch 0 receives the frame from Router0, doesn't recognise the MAC address, so floods it out of all ports except the one it arrived on, so the frame arrives at Switch 1 and Switch 2. Neither of those recognise the address of the frame, so they flood it out in the same manner, essentially passing the frame across the link to each other (and their respective PCs, but they're just there to look pretty for the time being). What then happens, is that Switch0 receives two frames, one from S1 and one from S2, both of which are copies of each other, but neither of which have known MAC addresses. So what happens? S0 floods S1's frame to the router and to S2, and floods S2's frame to the router and to S1. When S1 and S2 receive these frames, again, they flood them to each other, and from there back up to S0, and round and round they go. Switches have no way of recognising frames that they have previously received.
A situation like this, where frames just go round and round and round, is called a loop. Imaginative, huh?

Okay, the situation above doesn't sound that serious, switches process loads and loads of data, and a few misdirected packets here and there aren't that big a problem surely?

Okay, what about when you're looking at a redundancy topology like this?:

Warning! Broadcast Storm! (Or "The Network Falls Over"): 

Loops in themselves are pretty annoying, but you'd think that on a small topology with a single redundancy link like the one in the first example, they wouldn't become that big a problem. But the size of the problem increases with the size of the network.
On our new "bigger" topology, it's a different picture. Let's say for example that switch 9 receives a broadcast frame from a PC attached to it. Switch 9 would forward the frame to the two multilayer switches that it's connected to, MS7 and MS5, which would duplicate the broadcast frame, sending it to another 6 and 4 switches, respectively. Before we realise what's happening, one broadcast frame has become 13, and so far, the frame has only gone through two cycles of duplication.

While the switches don't send the frames back out on the ports that they were received on, there are still 17 devices with a total of 30 connections, and each device duplicates every broadcast frame that it receives, so you can imagine the rapidly increasing amount of traffic, with broadcasts being received, duplicated, and broadcast, over and over again, without end. This is known as a Broadcast Storm. That people intentionally introduce broadcast storms into networks as a form of attack tells you just how badly these situations can affect network performance.

Before you know it, for every genuine data frame that needs to be forwarded, there is a huge (and always increasing) number of broadcast frames that need to be processed first. Within a short time, the bandwidth is completely eaten up by broadcast frames, the switching hardware is working at 100% processing load, real network traffic slows to a crawl, and the network becomes unusable.
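The growth rate is the scary part. As a toy model (the exact counts depend on the topology; this just assumes each frame spawns a fixed number of copies per cycle), the frame count grows geometrically and never stops, because no switch remembers frames it has already seen:

```python
# Toy model: why a loop turns one broadcast into a storm. Assume every
# frame is duplicated out of `fanout` extra ports per cycle; the count
# grows geometrically without end.

def frames_after(cycles: int, fanout: int = 2) -> int:
    frames = 1
    for _ in range(cycles):
        frames *= fanout
    return frames

for c in range(6):
    print(c, frames_after(c))  # 0 1 / 1 2 / 2 4 / 3 8 / 4 16 / 5 32
```

Even with a modest fanout of 2, ten cycles turn one frame into over a thousand, and on real hardware a "cycle" is a fraction of a millisecond.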

Enter Our Hero: To stop this situation, or to at least mitigate it, we have Spanning-Tree Protocol.

STP is defined by the IEEE 802.1d standard, and exists to identify the shortest paths in a switched network. It does this so that it can build a loop free topology.
"But wait", you ask. "How can a protocol change the way the network is wired?" Well, obviously it can't. However, what it can do, is change the way that each switch uses the ports, going so far as to block non-STP-management traffic from being received or sent on ports.

Early on, when the switch is powered on, STP kicks in, and starts going through the Election Process.
This process exists to define the Root Bridge, a kind of "Main Switch". Having a root bridge allows STP to create a logical "tree" over which to send frames. It allows the other switches to say "That's the 'main' switch, let's use that as the top of our tree".

The Bridge Protocol Data Unit exists to allow each switch to identify the root bridge. Once the root bridge is determined, the switches are then able to maintain a single link towards the root bridge, using designated ports.

The Election Process:

A root bridge election is triggered when either a switch has finished booting, or when a path failure has been detected. All switch ports are initially in the blocking state, and this lasts for 20 seconds. This prevents a loop from occurring before STP has had its chance to do its thing.
After switches have booted, they immediately start sending BPDU frames to advertise their BID - their Bridge ID.

Important notes to bear in mind at this point:
  •  The BID consists of both the priority value of the switch, and the MAC address of the sending switch.
  • If two or more switches have identical priority values, the one with the LOWEST MAC ADDRESS has the lower BID. 
  • The LOWER the BID, the better.
Initially, all switches assume that they themselves are the root bridge. The BPDU frames sent out initially have the root ID field matching the BID field, indicating that each switch considers itself the root bridge.
As each switch receives BPDU frames, it compares the root ID from the frames with the switch's locally configured root ID. If the root ID on the received frame is lower than the locally held one, the switch updates the locally configured root ID to match the new, lower one.
Once this update is complete, the switch incorporates this new root ID in all future BPDU frame transmissions.
The election process ends once the lowest bridge ID populates the root ID field of all switches in the broadcast domain. 
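The election above boils down to "lowest BID wins", where a BID compares first on priority and then on MAC address, which is exactly a tuple comparison. A sketch with made-up switch names and addresses:

```python
# Sketch: the STP root bridge election as "lowest Bridge ID wins".
# The BID compares on priority first, then MAC address as tie-breaker.
# Names, priorities and MACs below are invented for illustration.

switches = [
    {"name": "S1", "priority": 32769, "mac": "00-10-00-00-00-0A"},
    {"name": "S2", "priority": 32769, "mac": "00-10-00-00-00-05"},
    {"name": "S3", "priority": 4096,  "mac": "00-10-00-00-00-FF"},
]

# Lowest (priority, mac) tuple wins the election.
root = min(switches, key=lambda s: (s["priority"], s["mac"]))
print(root["name"])  # S3 - its lower priority wins outright
```

Note the tie-breaker: if S3 weren't there, S1 and S2 share a priority, so S2's lower MAC address would make it the root bridge.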

Port States:
  • Blocking: To begin with, when STP starts up, all ports on all switches enter the Blocking State. This means that the ports are essentially shut down, with the exception of one type of traffic, the BPDU.
  • Listening: The port [switch? Is this an error in the book?] checks for multiple paths to the root bridge, and blocks all ports except the port with the lowest path cost to the root bridge. 
  • Learning: The port learns MAC addresses and begins to populate the MAC Address Table.
  • Forwarding: The port is now part of the active topology, and forwards both data and BPDU frames.
Ports that are not the lowest cost path to the root bridge return to the blocking state and remain there until STP is recalculated.

Types of Ports:
  • Designated Port: Think of the designated port as the designated driver. It's the only type of non-root port that is still allowed to forward traffic on the network (Also known as the Downstream Port, as it forwards traffic away from the Root Bridge).
  • Root Port: The root port is the port (on any switch that isn't the root bridge) that is closest to the root bridge (Also known as the Upstream Port, as it forwards traffic towards the Root Bridge).
  • Non-Designated Port: These are ports that STP puts into the blocking state. They do not forward traffic.
What Happens Now?: Even though the topology has been created successfully, with an agreed-upon root bridge and the ports in their final states,  BPDUs continue to be sent advertising the root ID of the bridge, every two seconds.
Each switch is configured with a maximum age timer that determines how long a switch retains the current BPDU configuration, and this is usually set to 20 seconds.
This means that if a switch fails to receive 10 consecutive BPDU frames from one of its neighbours, the switch goes "hey, the path must have failed. If so, this BPDU info is no longer correct!" and the whole process of electing a new root bridge is triggered again.
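The "10 consecutive BPDUs" figure isn't arbitrary; it falls straight out of the two timers just mentioned (BPDUs every 2 seconds, stored BPDU aged out after 20):

```python
# Sketch: deriving "10 missed BPDUs" from the standard STP timers.

hello_time = 2   # seconds between BPDU transmissions
max_age = 20     # seconds before the stored BPDU configuration expires

missed_bpdus_before_expiry = max_age // hello_time
print(missed_bpdus_before_expiry)  # 10
```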

Verification: You can verify the spanning tree port assignments with the "show spanning-tree" command, from privileged exec mode.

Let's fire up the lab and see what this looks like:

Here we see that we have three interfaces up. One connects to a PC (edge P2p), and two connect to switches (S2 and S3). Because S2 and S3 each only have one connection to S1, both of S1's connections are Desg (Designated) ports.

Notice in this second example, in this case, the output from Switch 2, that the root bridge ID is 32769 (S1) and is reachable by port FA0/22 (The connection between S2 and S1).
Also note that, while FA0/22 has been configured (by spanning-tree) as a Root Port, FA0/23 is "Altn". What this means is that this port has been defined as an "Alternate" port, which is a port role used by RSTP (Rapid Spanning Tree Protocol), which is a more advanced version of the original STP (and is therefore in use by default on the switches in my lab). More on this later.

Changes to Topology: A switch decides that a topology change has occurred, either when a port that was forwarding has gone down, or when a port transitions to forwarding and the switch has a designated port.

The switch notifies the root bridge of the spanning-tree, which broadcasts the information to the entire network.

The switch that notices the change sends out TCNs (Topology Change Notification) BPDUs  on its root port. The receiving switch (unless it is the root bridge itself) is called the designated bridge. The designated bridge replies to the TCN with a TCA - a BPDU with the "Topology Change Acknowledgement" bit set. This process is repeated, with the designated bridge sending out its own TCN to the next switch along the route (which itself becomes the designated bridge), until the root bridge itself is contacted.

Once the root bridge becomes aware of a topology change, it sends out its config BPDUs with the Topology Change bit set.

These BPDUs are received on forwarding and blocking ports.

Manual Configuration: There are five commands that are really beneficial where STP is concerned. These are as follows:
  • (config)#spanning-tree vlan [id] root primary [secondary] : Sets the switch to be either the primary root bridge, or the secondary root bridge of the network. You can configure a switch to be a secondary root bridge, in case the primary fails.
  • (config)#spanning-tree vlan [id] priority [0-61440, in increments of 4096] : Sets the priority of the switch manually, allowing for some fine tuning.
  • (config-if)#spanning-tree portfast : Skips the majority of the STP process and forces the ports it is applied to straight to forwarding. Portfast should only ever be used on ports that are connected to a single host (as opposed to a hub or a switch).
  • (config)#spanning-tree mode pvst [rapid-pvst] : Changes the mode of STP operation.
  • (config-if)#spanning-tree cost [value] (Not modelled in Packet Tracer for some reason): Allows you to manually assign costs to each port, to create desired routes to the root bridge.
Modes of STP Operation: Since it was created, STP has gone through several incarnations, each with a variety of features. Some of the more widely used are:
  • Rapid Spanning Tree: An updated version of STP, RSTP allows substantially faster convergence. Not only this, but it permits the use of Alternate ports in STP topologies (as discussed earlier). RSTP allows each port to perform a role separate to its final state. For example, a designated port could temporarily be in the discarding state, even though its final state is to be forwarding. Port states and roles are able to change independently of each other. A useful example is that of the Alternate port, which is in a discarding (not transmitting data) state by default, but upon being needed, passes into the forwarding state.
  • Per Vlan Spanning Tree: A network can run an STP instance for each VLAN in a network, which means that each VLAN has its own defined primary and secondary root bridges. This allows for extra redundancy.
That's all for now.

Wednesday, August 29, 2012

Part One: Describe Network Communications Using Layered Models...

Preface: For this blog entry, I am referring to Scott Bennett's book "31 Days Before Your CCNA Exam". While the occasional passage may be copied from this book, it is not my intention to infringe upon the copyright of this work (which is ©2007, Cisco Systems Inc), nor is it my intention to make the content of the book available to read online. 

Note for any casual readers: I hope you find this entertaining and perhaps even informative. It might seem a bit circuitous and at times perhaps even silly, but when describing things quickly, I tend to describe them in very train-of-thought type ways. I'll repeat myself, refer back to stuff I might not have said, perhaps even (I hope not!) contradict myself from time to time, but hey, this is a learning experience for me too, and as long as I know what I mean, that's the important thing ;-).

The first and most important thing that newcomers to networking theory are taught is the ins and outs of layered models. Layered models provide a visual and conceptual representation of the most fundamental inner workings of the network. While initially appearing confusing, it's essential to your understanding of networking theory that you understand two particular layered models, inside and out.



The thing to the left is what's called the OSI model - The Open Systems Interconnect[ion] model. I'm not going to go into the history of it (released in 1984 to simplify and standardise network communications), but I will say that the OSI model - for us - is pretty much a brute fact - it's just there, it just exists. For thousands of classes of network engineers the world over, it is the scaffolding that holds our understanding of network communications up. Why is this relevant to us? Well, it will eventually become apparent, as future modules reference the OSI model.

It's a multilayered model. Think of it as a cocktail where all the layers are poured carefully on top of each other. I would show you a picture of one, but you'd be amazed how hard it is to find a picture of a layered cocktail on the net, that isn't payware on some stock photography website.

The layers of the OSI model are numbered from the bottom up, that is, 1=physical, 2=data link, 3=network and so on.

An easy way to remember the layers of the OSI model is with the mnemonic Please Do Not Throw Sausage Pizza Away 
That is,  Physical, Data-link, Network, Transport, Session, Presentation, Application.
It works the other way too, with All People Seem To Need Data Processing.

As information travels across a network or across the internet (itself simply a network of networks), it is continually changed and updated. While your request for a certain webpage (for example) - the payload - remains the same from source to destination, the box - or multitude of boxes, as you will see - that it travels in is continually redirected, relabeled, sometimes even repackaged altogether, while on its travels. These different types of labelling or packaging are called PDUs.

PDU: Protocol Data Unit: Think of this as a specific type of parcel. Layer 3 PDUs are packets, for example, whereas Layer 2 PDUs are frames. Layer 2 can't read packets, and layer 3 can't read frames.
The layers of the OSI model communicate with each other, but cannot read each other's PDUs:

For a layer to be able to "read the label" and direct the data where it needs to go, the data (or PDU) needs to be placed in a box designed for that layer, that is, encapsulated in a layer [x] PDU.

Example (in VERY general terms (we haven't gone onto ARPs or reverse ARPs or anything like that yet so I'm missing a great deal out), this is just for the purposes of describing the OSI model, remember)

PC1 at the left wants to send some data to PC2 at the right. Connected to each PC is a switch (square), and these are connected to each other via two routers (round). I've used both switches and routers in this example, because:
  • In most cases, switches forward data in frame format, based exclusively on the layer 2 address of the frame (Layer 3 and higher multilayer switches are beyond the scope of the CCNA)
  • Routers forward data in packet format, based exclusively on the layer 3 address of the packet.
Remember that I referred to a multitude of boxes? 

PDUs can be nested one inside the other. When you send information across an Ethernet network, what you're generally sending is data, inside a segment, itself inside a packet, itself inside a frame. The frame is the lowest-level PDU we deal with in this example.
So again, the higher level puts stuff inside an addressed box to send it to a lower level, which puts that box in another, bigger box, with its own address label written in that layer's own unique language, and sends it to an even lower level, which puts it in an even bigger box, and so on and so forth.
Remember transporters from Star Trek? That's layer 1. When the layer 2 PDU is finally ready to be sent, it goes on the transporter pad, and it's dematerialised. All those little atoms and all that energy flowing about? That's layer 1, and those are the bits.
You get to the transporter room wherever you're going, and the box rematerialises again. That's a layer 2 PDU, having been transmitted over Layer 1.

  1. PC1 creates a frame destined for the router, addressed to the router with the router's layer 2 address. Inside that frame is a packet, with a layer 3 address. 
  2. Copper wiring can't carry frames though, it can only carry electrical current, so the PC, having created the frame, transmits the bits, just like the Star Trek analogy, to the layer 2 switch. The switch receives the bits and reconstitutes them into a frame.
  3. The switch receives the frame, examines the address label, notices that it is for the router, and (again turning it into lots of bits) sends it on to the router. 
  4. The router reassembles the bits into a frame, and then (remember, routers don't care about layer 2 addresses) opens the frame (box) and lifts out the packet (the smaller box inside), discarding the remains of the data that made up the frame. 
  5. The router examines the address label on the packet, realises it now needs to be passed to the second router, and places the packet in a new box (a new frame) addressed to the second router. This frame is also turned into bits, which are sent to the second router. 
  6. The second router repeats the process, reconstituting the frame, lifting out the packet and reading the address label on it. Realising that the destination is on one of its directly connected networks, the router takes the packet, places it inside a new frame, sticks a lovely new layer 2 address on it, and sends it towards the switch, turning it into bits in the process.
  7. The second switch receives the bits, reconstitutes them into a frame, reads the layer 2 address which it recognises as belonging to PC2, and forwards the frame onto PC2's network card, as a stream of bits.
  8. The stream of bits reaches PC2, where it is turned into a frame again, the PC realises "hey this is for me" and works its way through the packaging, opening as many boxes as it needs to, packet, segment and all, to find the data it's after. 
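The boxes-within-boxes picture from the steps above can be sketched as nested structures. These dicts are illustrative labels only, not real header formats, and the addresses are made up:

```python
# Sketch: encapsulation as nested boxes. Each layer wraps the PDU from
# the layer above with its own addressing information.

data = "GET /index.html"                                          # the payload
segment = {"src_port": 49152, "dst_port": 80, "payload": data}    # layer 4
packet = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
          "payload": segment}                                     # layer 3
frame = {"src_mac": "AA-AA", "dst_mac": "BB-BB",
         "payload": packet}                                       # layer 2

# De-encapsulation at PC2: open each box in turn to reach the data.
print(frame["payload"]["payload"]["payload"])  # GET /index.html
```

A switch only ever looks at the outermost box (the frame's MAC addresses); a router opens that box and reads the packet's IP addresses inside, exactly as in steps 4 and 5.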
 Now, it's important to remember that this example isn't intended to go into the ins and outs of data transmission, nor does it describe in explicit detail, the steps of routing a packet from a source host to a destination host. That is for later on in my studying.
What the example does do though, is illustrate the fact that during its travels from source to destination, data is moved from one layer to another and back again (often repeatedly), and more importantly, emphasises the role of the OSI model in our understanding of this process.

In summary, the layers are as follows:
  • 7: Application Layer: This consists of E-mail, FTP and other programs that allow the user to enter data.
  • 6: Presentation Layer: This is where encryption and compression can occur. Here, data is represented in a standard syntax and format, such as ASCII.
  • 5: Session Layer: This layer is responsible for setting up and closing down sessions between programs that exchange data.
  • 4: Transport Layer: Segments are transported, helped on their way by functions that ensure reliability of transmission, detect errors, and control the flow of data.
  • 3: Network Layer: Here, Packets are routed over the network. The path they are sent on is determined by their layer 3 address, the IP Address.
  • 2: Data Link Layer: On this layer, frames traverse the local area network. This travel is facilitated by their layer 2 address, the MAC Address.
  • 1: Physical Layer: This is where the magic happens. Data is transmitted as a series of light pulses (fibre optic), electrical pulses (copper cable), or radio waves (Wireless/Wi-Fi).


 In the CCNA, we learn about a second type of layered model, known as the TCP/IP model. As with a great many technological concepts, the TCP/IP model was created by the military (or rather, the United States Department of Defense, on behalf of the military). In this case, the intention was to define a resilient network structure that could ensure consistent communications during nuclear war. As far as my studies indicate, the TCP/IP model is essentially a different concept to describe identical operations. We may touch on this again later (a later passage in the book actually indicates that this is not true. How so remains to be seen).

Remembering which layer of each model corresponds to its counterpart is a small challenge. It's easiest to remember that the layers A,P,S (7,6,5) which are not really dealt with by networking students, at least not in the CCNA curriculum, are bunched together into one big layer. Transport is unchanged, Network simply gets a name change and becomes "Internet", and then you have the two (or one) layers/layer at the bottom. 3,1,1,2 helps me remember. I don't know why.

Sublayers: In order to complicate things a little more, the data link layer of the OSI model actually consists of two sub-layers. OSI's layer 2 bundles these two sub-layers into one layer, in much the same way that layer 1 of the TCP/IP model bundles the OSI model's L1 and L2 into one layer. It seems to be a conceptual thing.
  • The upper sub-layer is called the Logical Link Control sublayer. This sublayer acts as an interface with the upper layers.
  • The lower sub-layer is called the Media Access Control sublayer, and, guess what, it controls access to the media. Ethernet operates in the physical layer, and in the MAC sublayer. 
 Frames: Layer 2 frames are made of different types of information called fields. These fields allow the receiving host to identify where a frame starts, where it ends, where it needs to go, and whether it has been transmitted correctly. Without the structure of a frame, the layer 1 transmission would just be a long squiggly stream of binary data. The fields in a generic frame are as follows:
  • Start of Frame: This field identifies the beginning of the frame.
  • Address: This contains the source and destination layer 2 (MAC) address.
  • Length (or) Type: If this is a length field, it determines the length of the frame. If it is the type, it identifies the layer 3 protocol for the frame.
  • Data: This is the important part. This is where the data intended for upper layers is kept. In a layer 2 frame, "upper-layer data" means data for layers 2 to 4 of the TCP/IP model (internet, transport and application). In the case of the OSI model, it refers to layers 3 to 7, as you'd expect.
  • Frame Check Sequence: We'll come across this one a lot. This field provides a number that represents the data in the frame, as well as a way to check the frame and arrive at the same number. Something called a Cyclic Redundancy Check is a common way to calculate the number, and to check for errors in the frame.
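If you fancy seeing the Frame Check Sequence idea in action, here's a little Python sketch. It leans on the standard library's zlib.crc32, which implements the same CRC-32 polynomial that Ethernet uses for its FCS; the payload and the 4-byte big-endian framing are just my own illustrative choices:

```python
import zlib

# Sender side: compute a CRC-32 over the payload and append it, just as
# Ethernet appends the Frame Check Sequence to the end of the frame.
frame_payload = b"Hello, layer 2!"
fcs = zlib.crc32(frame_payload)
frame = frame_payload + fcs.to_bytes(4, "big")

# Receiver side: split the frame, recompute the CRC, and compare.
received_payload = frame[:-4]
received_fcs = int.from_bytes(frame[-4:], "big")
frame_ok = zlib.crc32(received_payload) == received_fcs
print("frame intact:", frame_ok)

# Flip a single bit "in transit" and the check fails.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
corrupted_ok = zlib.crc32(corrupted[:-4]) == int.from_bytes(corrupted[-4:], "big")
print("corrupted frame intact:", corrupted_ok)
```

One flipped bit anywhere in the payload and the recomputed number no longer matches, which is exactly how the receiving host knows to discard the frame.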
 Three layer 2 technologies that control how the media is accessed are Ethernet, Token Ring, and FDDI (Fiber Distributed Data Interface). As part of the MAC sublayer, these technologies are divided into two groups: deterministic and nondeterministic. FDDI and Token Ring are deterministic, as they provide a way for hosts to take turns accessing the media. Ethernet is nondeterministic and uses something called Carrier Sense Multiple Access/Collision Detection (CSMA/CD) as a way to access the media. This means that hosts will check whether the line is busy, and will only transmit if it is not.
If two hosts transmit at the same time, a collision occurs, and both hosts are required to wait a random amount of time before trying again.
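Just to make the CSMA/CD dance concrete, here's a toy Python simulation (my own invention, nothing to do with real Ethernet slot timings): two hosts try to transmit in the same slot, collide, and each backs off for a random number of slots until one of them gets through alone.

```python
import random

random.seed(42)  # fixed seed so the demo is repeatable

def try_to_send(hosts):
    """One time slot: every host whose backoff has expired senses the shared
    line and transmits. One transmitter = success; more than one = collision."""
    transmitting = [h for h, backoff in hosts.items() if backoff == 0]
    for h in hosts:
        if hosts[h] > 0:
            hosts[h] -= 1                  # still counting down its backoff
    if len(transmitting) == 1:
        return transmitting[0]             # the line was clear: success
    for h in transmitting:                 # collision: wait a random time
        hosts[h] = random.randint(1, 4)
    return None

hosts = {"A": 0, "B": 0}                   # both hosts ready to talk at once
winner = None
for slot in range(1, 101):                 # safety cap; it ends far sooner
    winner = try_to_send(hosts)
    if winner:
        break

print(f"slot {slot}: host {winner} transmitted successfully")
```

The first slot is always a collision (both hosts see an idle line and blurt simultaneously); the random backoff is what eventually breaks the tie.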

It is important to pay attention to the name of the specific layered model being referenced, in any layered model question.

The TCP/IP Application Layer: This layer features protocols and programs that prepare data to be encapsulated in lower layers. These programs include TFTP, FTP, SNMP, SMTP, Telnet and DNS.
More on these later.

TCP and UDP (User Datagram Protocol) operate as protocols of the TCP/IP transport layer. Both protocols segment data from the application layer and send the segments to the destination. UDP differs from TCP in that it just blurts data at the destination host, and doesn't check whether it has been received. TCP on the other hand ensures reliable transfer with receipt acknowledgements, sequencing, and mechanisms to control the flow of data, and is therefore described as a "connection-oriented" protocol.
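You can actually watch this difference from Python's standard socket module. This sketch probes for a local port with nothing listening on it (and assumes nothing grabs that port in the split second before we use it): the UDP "blurt" goes out happily with nobody there to hear it, while TCP refuses to even start without a completed handshake.

```python
import socket

# Find a local TCP port with nothing listening on it: bind to port 0,
# note the number the OS hands us, then close the socket again.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
dead_port = probe.getsockname()[1]
probe.close()

# UDP just blurts: sending to a port nobody is listening on "succeeds",
# because UDP never waits to hear whether anyone actually received it.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"BLARGH!!", ("127.0.0.1", dead_port))
udp.close()
udp_sent = True

# TCP is connection-oriented: no data moves until the handshake completes,
# so connecting to a dead port fails straight away.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_refused = False
try:
    tcp.connect(("127.0.0.1", dead_port))
except ConnectionRefusedError:
    tcp_refused = True
finally:
    tcp.close()

print("UDP datagram sent:", udp_sent)
print("TCP connection refused:", tcp_refused)
```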

The TCP/IP Internet Layer is responsible for finding the best path for packets over the network. It is aided in this role by the inclusion of the connectionless protocol, IP. Remember, TCP is connection-oriented, IP is not.
Other protocols that work at the internet layer of the TCP/IP model include:
  • ARP (Address Resolution Protocol) to find a MAC address when only the IP address is known.
  • Inverse/Reverse ARP (You've guessed it), to find an IP address when only the MAC is known.
  • ICMP: Internet Control Message Protocol, which carries error and diagnostic messages (it's what ping uses). 
 The TCP/IP Network Access Layer is also known as the "host to network" layer, and is responsible for providing the protocols that allow the data to access the physical media. Also found within this layer, are protocols that define the standards for the network media (copper, fiber, radio etc). Examples of these protocols are: Ethernet, Fast Ethernet, PPP, FDDI, ATM, and Frame Relay.


Remember, the application layer of the TCP/IP model includes the Application, Presentation and Session layers of the OSI model. Note that the network access layer includes the Data Link and Physical layers of the OSI model. 3,1,1,2.
The OSI model is more of an academic and theoretical construct, whereas the TCP/IP model is the basis for the development of the internet.

Flow Control and Reliability:

When you're thinking about the transport layer, remember that two of the layer's major functions are Flow Control and Reliability, with a capital R.
The transport layer achieves these goals through concepts such as sliding windows (not to be confused with the similarly named Gwyneth Paltrow movie), segment sequence numbers, and acknowledgements.

When two hosts get together and establish a TCP connection at the transport layer, they need to agree on what constitutes a "reasonable" flow of information. It is this flow control that allows the receiving host to process the received information in time to receive subsequent segments from the transmitting host.

In order to start passing segments at the transport layer, both hosts must set up and maintain a session. The software and operating system of the sender communicate with the receiver's OS and software, to set up and synchronise a session. TCP avoids congestion at the transport layer, because the receiving host is able to send ready or not ready messages to the sending host.

THREE-WAY HANDSHAKE: Applications that use TCP must first set up a session as described above. The sender sends a SYN (synchronise) message. The receiver receives this message and sends back a SYN-ACK (its own synchronise, combined with an acknowledgement). The original sender receives the SYN-ACK, and transmits the third message, an ACK. During this process, the sequencing for communications via TCP is defined. Both hosts must send an initial sequence number, and both hosts must receive an acknowledgement, before the communication can proceed.
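As a memory aid, here's the handshake reduced to a toy Python transcript. The message names and the initial sequence numbers (100 and 300) are purely illustrative; real TCP picks its ISNs pseudo-randomly.

```python
def three_way_handshake(client_isn, server_isn):
    """A toy transcript of the TCP session setup described above."""
    return [
        ("SYN", client_isn),                      # client: "here's my initial sequence number"
        ("SYN-ACK", server_isn, client_isn + 1),  # server: "here's mine, and I expect yours + 1"
        ("ACK", server_isn + 1),                  # client: "got it, I expect yours + 1"
    ]

transcript = three_way_handshake(client_isn=100, server_isn=300)
for message in transcript:
    print(message)
```

Both sides have now announced a sequence number and had it acknowledged, which is exactly the condition the book gives for communication to proceed.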

SLIDING WINDOWS: This function depends on the concept of a "window size". If the sender transmits a segment, and the receiver receives it and says "I received your segment, send the next one", that is a window size of one. If the sender transmits seven segments, and the receiving host says "I received your segments, send the next seven", that is a window size of seven. The one, or the seven, is the window size.
Now, any connection where the transmitting host is only sending one segment at a time is going to be chock full of acknowledgement traffic, but not particularly fast. This is where sliding windows comes in.
The sender might send three segments, and in return the receiving host says, "I received your three segments, try sending four". Four are sent, in return the reply comes back "I received four, try sending five". Before you know it (in an ideal world), the sender is sending dozens of segments between each acknowledgement, and the connection is nice and speedy.

If at some point though there's a problem, if data falls into a hole somewhere and the receiving host just sits there twiddling its thumbs, still waiting for the data to turn up, then the sending host thinks "Hang on, I haven't got an acknowledgement. I'll send the data again, but this time I'll use a smaller window size." So the window size can decrease, as well as increase.
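Here's a toy sketch of that grow-and-shrink behaviour. The growth rule (add one per happy round, halve on a timeout) is my own simplification; real TCP counts bytes rather than segments and has cleverer congestion rules, but the shape is the same.

```python
def simulate_transfer(total_segments, lost_segment):
    """Toy sender: the window grows by one each acknowledged round,
    and halves when a segment goes missing and the round times out."""
    window, next_seg, log = 1, 1, []
    while next_seg <= total_segments:
        batch = list(range(next_seg, min(next_seg + window, total_segments + 1)))
        if lost_segment in batch:
            log.append((window, batch, "timeout"))
            window = max(1, window // 2)   # no ACK came back: shrink the window
            lost_segment = None            # the retransmission will succeed
        else:
            log.append((window, batch, "acked"))
            next_seg = batch[-1] + 1
            window += 1                    # receiver invites a bigger window
    return log

log = simulate_transfer(total_segments=12, lost_segment=7)
for window, batch, outcome in log:
    print(f"window {window}: sent {batch} -> {outcome}")
```

Run it and you can see the window ramp up, collapse when segment 7 goes missing, then ramp up again.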

Sliding windows is the reason why, when you're downloading something, the computer changes its mind every second about how long you'll need to wait before the download is complete. An annoyance this may be, but at least now you know why it does it.

ACKNOWLEDGEMENTS: How does the sending host know when to retransmit a segment, though? Easy. Each time a "window" of segments is transmitted and received at the other end, the receiving host acknowledges the receipt of the whole window in one go (remember, with a window size greater than 1, the receiving host does not acknowledge each individual segment). Along with that acknowledgement, it sends the number of the next segment it expects to receive, which is the first one in the new window.

Like this (window size of three):

SEND 1, 2, 3.
ACK 4.

The "ACK" and the "4" are two separate parts of the same message. Essentially, ACK 4 means "I've received this window, please send the next bunch of segments, starting with segment 4".

This is all well and good. We know what happens when the data is received okay, and we know what happens when none of it is received.
But what happens when some of it is received, but not the rest? Again, this is where ACK numbers come in. Picture the diagram above, with a window size of three, but imagine that this scenario occurs:

SEND 1, 2, 3.
ACK 4.
SEND 4, 5, 6.
ACK 6.

Wait, what happened there? ACK 6? It's supposed to be ACK 7, right? Right. But this time round, the receiving host didn't receive segment number 6. So what happens now? Does the ACK 6 mean that the sending host will transmit 6, 7, and 8? Well, no. What actually happens in this scenario is this:

SEND 1, 2, 3.
ACK 4.
SEND 4, 5, 6.
ACK 6.
SEND 6.
ACK 7.
SEND 7, 8, 9.

Weird huh? There's actually a good reason for it: acknowledgements are cumulative. The ACK number always names the next in-order segment the receiver expects, and implicitly confirms everything before it. Because segment 6 never arrived, the receiver can only keep asking for 6; once the retransmitted segment 6 turns up, it acknowledges with ACK 7 and transmission carries on as normal.
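One way to convince yourself is with a toy receiver whose ACK number always names the next in-order segment it's waiting for (real TCP acknowledges byte numbers rather than segment numbers, but the cumulative idea is identical):

```python
class Receiver:
    """A toy receiver: its ACK number is cumulative, always naming the
    next in-order segment it is waiting for."""
    def __init__(self):
        self.expected = 1
        self.buffer = set()

    def receive(self, segments):
        self.buffer.update(segments)
        while self.expected in self.buffer:  # slide past everything now in order
            self.expected += 1
        return f"ACK {self.expected}"

rx = Receiver()
first = rx.receive({1, 2, 3})   # all three arrive -> ACK 4
second = rx.receive({4, 5})     # segment 6 was lost in transit -> ACK 6
third = rx.receive({6})         # the retransmitted 6 arrives -> ACK 7
print(first, second, third)
```

That reproduces the SEND/ACK trace above exactly: the "stuck" ACK 6 is just the receiver repeating the next number it needs.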

TCP COMPARED TO UDP: FTP, HTTP, SMTP and Telnet all use the transport layer TCP protocol. All of these protocols benefit from the connection-oriented reliable data transfer that TCP provides.

Remember "fields", those little different types of data we met inside layer 2 frames? TCP segments are divided into fields too.

Here they are:
  • Source Port
  • Destination Port
  • Sequence Number (remember, just as described above)
  • Acknowledgement Number 
  • Header Length
  • Reserved
  • Code Bits
  • Window (see?)
  • Checksum
  • Urgent Pointer
  • Option
  • Data
Many of these fields are filled with tiny bits of incredibly useful data that ensure timely and reliable delivery of the segment payload.
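Those fields aren't abstract, either: they sit at fixed offsets in the first 20 bytes of every segment. Here's a Python sketch that packs a made-up header (the port, sequence and window values are invented for illustration) with the standard struct module, then unpacks it field by field:

```python
import struct

# Build a synthetic 20-byte TCP header (no options), then take it apart.
header = struct.pack(
    "!HHIIBBHHH",
    80,          # source port (HTTP)
    51234,       # destination port (a hypothetical client port)
    1000,        # sequence number
    2000,        # acknowledgement number
    5 << 4,      # header length: 5 x 32-bit words, stored in the top nibble
    0x18,        # code bits: PSH (0x08) + ACK (0x10)
    8192,        # window
    0,           # checksum (left blank in this sketch)
    0,           # urgent pointer
)

(src, dst, seq, ack, offset_byte, flags, window, checksum, urgent) = \
    struct.unpack("!HHIIBBHHH", header)
print(f"src={src} dst={dst} seq={seq} ack={ack} "
      f"header_len={(offset_byte >> 4) * 4} bytes window={window}")
```

The `!` in the format string means network byte order (big-endian), which is how every header on the wire is laid out.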

UDP is used by other protocols, including TFTP, SNMP, DHCP and DNS.
UDP is a connectionless transport layer protocol. There's none of the lovely sequencing or error checking here. Think of TCP as the computer version of arranging to make a speech in front of invited guests, and UDP as the computer version of sticking your head out of the window and yelling BLARGH!!, while hoping that the people you want to reach are listening, and can hear you.

Error checking in UDP is left to the higher layer protocols, so while software applications might be able to tell whether they've received a message correctly, the UDP protocol itself doesn't care either way.

The fields inside a UDP segment are as follows:
  • Source Port
  • Destination Port
  • Length
  • Checksum
  • Data
So why would anyone use UDP when TCP does a better job? Easy. TCP has a lot of what we call overhead, meaning that in order to deliver traffic from A to B, TCP generates quite a lot of traffic itself. By the time a computer has received a couple of hundred TCP packets, all those sequence numbers, ack numbers and window size settings have been read hundreds of times, before the destination host has even had the opportunity to read the data.
UDP might be rough and ready, but it takes up less bandwidth and device (switch, router etc) resources to process.
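Some quick arithmetic makes the overhead point. This only compares the minimum header sizes (20 bytes for TCP, 8 for UDP) and ignores TCP's extra ACK segments, so the real gap is even bigger:

```python
TCP_HEADER, UDP_HEADER = 20, 8   # minimum header sizes, in bytes

overhead = {}
for payload in (50, 500, 1460):
    tcp_pct = TCP_HEADER / (TCP_HEADER + payload) * 100
    udp_pct = UDP_HEADER / (UDP_HEADER + payload) * 100
    overhead[payload] = (round(tcp_pct, 1), round(udp_pct, 1))
    print(f"{payload:5d}-byte payload: TCP {tcp_pct:4.1f}% overhead, "
          f"UDP {udp_pct:4.1f}%")
```

For small payloads (think DNS queries) more than a quarter of every TCP segment would be header, which is a big part of why DNS normally rides on UDP.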

PORT NUMBERS: Where the source and destination addresses identify the specific machines that the data was sent from and intended for, another type of number, called a port number, identifies which individual software package the data is intended for.
You know what it's like: you're on Facebook, you're on Google, you're checking your email, there are half a dozen different websites open in your browser, and then Windows is updating itself - yet again.

But what is there to stop your computer just getting totally confused with a deluge of data that it doesn't know how to process? Imagine if your email data went to your Firefox window, or if, when loading a webpage, Firefox tried to open the Windows Update data, while Windows Update scratched its head trying to figure out what to do with the latest icanhascheezburger pictures.

Port numbers exist to stop this from happening. For example, all HTTP (webpage data, broadly speaking) data entering your machine destined for your web browser would be encoded with a port number of 80. Your computer would read the data and say, "Ah, port 80, I'm going to send this to Firefox". If you want to download a file from an FTP (File Transfer Protocol) server, it would come encoded with a port number of 21: "Ah, port 21, this needs to go to the FTP program".
Port numbers are yet another clever gizmo that makes internet communications possible.
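The demultiplexing itself is conceptually just a lookup table. This is a deliberately crude sketch (the port-to-program table is invented for illustration; the real thing lives in the operating system's socket layer):

```python
# A crude sketch of demultiplexing: the OS keeps a table of which program
# is listening on which port, and hands each arriving segment over accordingly.
listeners = {80: "firefox", 21: "ftp-client", 53: "dns-resolver"}

def deliver(dest_port, payload):
    """Route an arriving segment to the program listening on its port."""
    program = listeners.get(dest_port)
    if program is None:
        return f"port {dest_port}: no listener, segment dropped"
    return f"port {dest_port}: delivered to {program}"

print(deliver(80, b"<html>...</html>"))
print(deliver(21, b"RETR file.txt"))
print(deliver(666, b"doom frag data"))
```

Nothing listening on the port? The segment is simply dropped, which is also why a port scan of a quiet machine gets so little back.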

For a list of popular port numbers in use, look up the IANA registered port numbers list. Doom fans will be amused to know that Doom uses UDP port number 666, which made me laugh when I found out.


When designing a network, network engineers use what is called a hierarchical model (the term is used so often in Cisco classes that I spelled hierarchical twice just now without even having to look how to spell it).

The hierarchical model essentially means that devices are grouped into one of three ranks, or layers (hierarchical model layers are completely different from network model layers). These layers are:
  • Core Layer: serves as the backbone of the network, where high speed transmission occurs. This layer generally consists of very powerful (and uber expensive) switches and routers.
  • Distribution Layer: provides "policy based" connectivity, meaning that the majority of data goes where it needs to go, without having to bother the core layer with it.
  • Access Layer: This connects directly to the end users, PCs, IP phones, etc.
That's all for today, folks. Coming soon? A lovely rant about the awesomeness of Spanning-Tree Protocol.

Ciao, I'm off to the pub.

Tuesday, August 28, 2012

Let's Get Started...

Okay, so after the non-negligible distractions of the past 6 months (which I'm not going to dwell on), it's time to finally get moving forwards again, and get working towards taking and passing the Cisco CCNA exam.

I graduated in early spring this year from college, taking with me both a level 2 and a level 3 C&G IT certificate, which I'm pretty pleased with. Essentially, I passed both courses.
I still need, however, to take the final CCNA exam, which is vendor certified, which in a nutshell means that the certificate comes direct from Cisco, and says "We approve of this guy configuring our hardware, we trust him to do it properly etc etc". 

With that in mind, in April I bought my own lab. What this is, is essentially my own kit to practice on at home. Packet Tracer is all well and good, and it's pretty versatile, but since I'm a hands-on type person, I much prefer getting to grips with the hardware itself rather than just the concepts.

 What you're looking at is my own little internetwork, primarily comprised of 7 connected devices, which then have computers/laptops/printers etc connecting at either end.

From top to bottom, we have:

  Total cost? Hmm, about £450 including the little stuff like cables and whatnot. Sounds expensive, but when it's to help with qualifications, it's really not. Networking hardware depreciates faster than a brand new car. In a derby. With monster trucks.

Seriously, this expansion module (for something even bigger and expensive-erer) is nearly 50 grand! This is what networking hardware costs, and this is why people want engineers who are qualified and certified, by the vendor, by the book.

I'm doing my studying with the help of a rather cool book, which is one of the few paper books I've bought for a while.
I've actually had it for a while so I'm stretching the definition of 31 days somewhat, but hey, I've got to take the exam sooner or later. Would be a waste otherwise.

So, let's see how we go. Rather than take the exam 31 days from now (I don't have the money to book it at the minute) I'll run through the book, and make sure that I understand the concepts described therein. I've got some additional hardware that may be on the way (gratis) so it's my intention to practice the concepts I describe in each section.

So for the time being, this is going to turn into something of a network oriented blog. That's my intention anyway. Let's see how it goes...

Wednesday, March 28, 2012


I hope that I can come back and supersede this post sooner rather than later.

You folks might have noticed that I've stopped posting.

I'm still here, still alive and breathing.

There's so much stuff going on in my life that I can't bring myself to take the time out to post about what's been happening recently.

I have lots of things happening at the moment, my life has turned into a real day to day free for all.

There's nothing I really want to talk about. Sure I'm still waking up, going to work, coming back and sleeping, but am I enjoying anything enough to want to tell you guys about it? No.

I'm in a kinda crappy place right now and all hell is breaking loose. I'll be back to let you guys know about it one way or the other. How soon that happens, is another story.

I'll try and be more cheerful in future, but for now, I've got a lot going on.

All the best, stay safe.

[EDIT] There's a phrase: "One day, I'll look back on this and laugh".

When I do, that will be an awesome day indeed.