Sunday, December 17, 2017

TIWA: "Today I was Asked..."

"Today I was asked..." is a new blog series where I go over networking questions asked of me by co-workers or associates.

In my current role, I am a senior network engineer. I am also very public about my preparation for the CCIE exam, so sometimes my co-workers treat me as if I already have a CCIE and know "everything" about networking.

This blog series is about the times I was asked a networking question and didn't have the answer, or found myself second-guessing my answer. Those moments send me home to build labs around the topic and make sure I understand it.

Since I'm building a lab and doing the research anyway, I thought it would be great to document the effort, as it may be useful to someone else. Selfishly, writing topics out also helps me memorize them.

I hope you enjoy my first post in the series: "How can you tell if Policy Based Routing is routing packets? Can you see it in the routing table or somewhere?"

TIWA: How can you tell if PBR is routing packets?

Today I was asked: "How can you tell if Policy Based Routing is routing packets? Can you see it in the routing table or somewhere?"

I didn't have the correct answer for that at the time, so I looked it up, whipped up a lab and here we are :)



SPOILER ALERT: debug ip policy FTW!!!!

My co-worker was correct to assume there is nothing in the routing table to tell you a policy-based route is installed, or that packets are being forwarded contrary to the routing table. Checking the routing table alone is not enough to determine the flow of packets; you must also check the ingress interface for policies.


To build Policy Based Routing (PBR) you need 3 basic ingredients:
  1. access-list (standard or extended)
  2. route-map
  3. reachable next-hop

In the graphic above we are sourcing our traffic from the Loopback0 interface (1.1.1.1) on R1, on the right. In the middle is our PBR router. Following the default route, traffic would traverse the PBR router using the next hop 10.10.10.6. We want to use Policy Based Routing to send traffic sourced from 1.1.1.1 to the next hop 20.20.20.6 instead.


PBR: (configuration)


access-list 1 permit host 1.1.1.1

route-map PBR permit 10
 match ip address 1
 set ip next-hop 20.20.20.6

interface FastEthernet0/0
 ip policy route-map PBR


IOS: (verification)


To see if PBR is turned on or in use:


show ip interface fa0/0
show ip interface fa0/0 | i Policy

show ip policy

show cef interface fa0/0
show cef interface fa0/0 | i Policy|polic


To verify the configuration elements of PBR:


show route-map PBR

show access-list 1

NOTE: Match counters on the access-list and route-map are good indicators your policy is getting hit.
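For example, "show route-map PBR" reports policy-routing match counters at the bottom of its output. The counter values below are illustrative, but this is the general layout to expect:


PBR#show route-map PBR
route-map PBR, permit, sequence 10
  Match clauses:
    ip address (access-lists): 1
  Set clauses:
    ip next-hop 20.20.20.6
  Policy routing matches: 15 packets, 1710 bytes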


To verify the traffic is hitting the policy as it ingresses an interface:


debug ip policy

NOTE: Using the "debug" command on a policy which sees a lot of traffic can blow up your console. It is best practice to use an ACL to filter the "debug" output down to a particular host or hosts.
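As a sketch of that filtering (the ACL number 100 here is arbitrary), you can define an extended ACL matching only our 1.1.1.1 source and attach it to the debug:


access-list 100 permit ip host 1.1.1.1 any
debug ip policy 100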

Thursday, December 14, 2017

Pi-hole: Day 1 (First 5 minutes)

Wow! This is so simple why haven't I been using this for years?

Pi-hole is a DNS blackhole server. It is so lightweight it can run well on meager hardware such as a Raspberry Pi.

By default it pulls from various DNS blacklist sources to help block DNS requests for many common and popular malware domains as well as ad campaigns.

I had this up and running in about 5 minutes as a VM. I changed the DNS offering from my router's DHCP server to point to my new Pi-hole server. Using my phone, I disassociated and re-associated with my WiFi so it would receive the new DNS offering. I immediately went to 2 sites that are notorious for banner and popup ads.

The first site's banner ads were gone!!! And the site seemed to load a bit quicker.

The second site I tested is notorious for pop-ups as well as banner ads. The banner ads were gone and when I invoked a popup it quickly disappeared off screen. (This isn't a feature of Pi-hole but a result of the DNS being blackholed)

The Pi-hole dashboard immediately registered the requests. The below screenshot came directly after my two connectivity tests from above within about 5 minutes of being up and running.



Also, I went ahead and turned DNS forwarding on, choosing Quad9 (9.9.9.9) as the upstream DNS. I've heard good things about it. I've tested reachability to it, and it tends to be a bit quicker than Google's 8.8.8.8. Quad9 also has the added benefit of being security- and privacy-focused. I used OpenDNS in the past but was turned off when Cisco bought it (personal preference).

Pi-hole is Free and Open Source but please consider making a donation: https://pi-hole.net/donate/

There is a pretty large Pi-hole user base, as the project is already a few years old, and there are many things to tune if you choose to do so. You can also find curated DNS blacklists from the user groups that might be worth adding as a resource.

Personally, I'm interested in Pi-hole's log retention and whether there are ways to forward the logs to a log collector or database, to allow investigation of DNS queries outside of Pi-hole.

I'll do a follow-up post in a few weeks after I let this run on the network for a while. So check back.

Sunday, December 3, 2017

TIL: as-path prepending

Today I learned: You can prepend any AS numbers in the prepended string.


The typical method of AS-path prepending is to prepend, or add, your own autonomous system number to the AS_PATH attribute to influence inbound traffic patterns.

You can technically add any autonomous system to the AS_PATH, even AS's that don't belong to you.

NOTE: This is frowned upon in production. "Just because you can doesn't mean you should!"

See the example below:

Without context or a topology this seems a little bland, but the results are there. You can see from the BGP table below that we have prepended a bunch of AS's that do not belong to us.

Prepending configured outbound from R3 --> R1:


R3#sho run | s as-path|route-map|router bgp
router bgp 200

 neighbor 155.1.13.1 remote-as 100
 neighbor 155.1.13.1 route-map AS_254 out

ip as-path access-list 254 permit ^254$

route-map AS_254 permit 10
 match as-path 254
 set as-path prepend 254 250 123

route-map AS_254 permit 20


Showing the R1 partial BGP table:


R1#sho ip bgp neighbors 155.1.13.3 routes

[ ... OUTPUT OMITTED ... ]

     Network          Next Hop            Metric LocPrf Weight Path
 *>  28.119.16.0/24   155.1.13.3                             0 200 54 i
 *>  28.119.17.0/24   155.1.13.3                             0 200 54 i
 *   51.51.51.51/32   155.1.13.3                             0 200 254 250 123 254 ?
 *   205.90.31.0      155.1.13.3                             0 200 254 250 123 254 ?
 *   220.20.3.0       155.1.13.3                             0 200 254 250 123 254 ?
 *   222.22.2.0       155.1.13.3                             0 200 254 250 123 254 ?


Credit: This was influenced by a lab from the INE workbook.

Friday, December 1, 2017

Do you think LogZilla is better than Kiwi?

tl;dr
On LinkedIn, I was asked the question "Do you think LogZilla is better than Kiwi?" and my response (below) was a few thousand characters more than LinkedIn allows in a "comment". See comment here.

Before trying LogZilla, I did a quick comparison of a few centralized log management products (LogZilla included). This included research on compatibility, how-to videos, usability, ease of install, and also "What do I need it for?" and "How am I going to use it?"

I did like Kiwi for its simplicity. I was happy to see they have a web-interface. I liked their one-time purchase price model. This would be perfect for a small scale install on a budget.

Kiwi is a small product offering from SolarWinds. SolarWinds' product focus is not centralized logging; their focus is compliance/configuration management and performance analytics. Kiwi is not their most profitable business unit. (If I'm wrong... tell me.)

What turned me off about Kiwi is that it runs on a Windows platform. I don't have spare Windows VMs or licenses lying around, and without a platform to run it on I had to move on.

The second product I looked at was LogZilla. Right out of the box, it had additional features and integrations that Kiwi didn't offer. I watched a few of the videos from their YouTube channel and decided I should give this product a try. They do centralized log management and they do it well. This isn't part of a larger suite of products; this is their product. What that means to me is that I don't have to worry about getting an inferior product because it's not part of the most profitable business unit within the company; instead, it is the business unit of the company.

They offer a free trial download, and getting LogZilla installed can be completed with a single command. It can't get any easier, right? If you read my blog, then you know I decided to use the prebuilt VM, which got me up and running in less than 30 minutes. I personally really like the dashboards/widgets and the layout LogZilla has. One thing I really like is that you can use it right out of the box or customize it to whatever level suits you or your business's needs. Almost everything is customizable. I'm piloting this at my house, so I don't need much, but I am exploring building some automation scripts. This product fits my use case at home, and hopefully I can leverage it to fit business cases at work.

One of the last reasons I prefer LogZilla over Kiwi isn't necessarily a technical or business reason; it's more of a human reason. Shortly after getting LogZilla up and running, I reached out to their sales department to get my trial period extended. I had a few back-and-forths with members of their team, and even the CEO reached out to me after seeing my blog post. That was important to me. I got to know them a little bit and understand that they too are a small business. I currently work for a small business, and before this company I worked for an even smaller business. Supporting small business is something I like to do, because I had a small business once and I know what it's like. I enjoyed making every customer a personal experience, and that's what LogZilla has done for me so far.

Some of the other products that were up for consideration were ELK, Splunk and Nagios Log Server.

Although I don't work with Splunk directly, it's in most environments I work in. I know it as one of the super giants in the industry, like ArcSight. Splunk does have a "free" version (with a data cap) you can run, but I was a bit intimidated because I associate big names with big, complicated systems. So until someone gives me a reason I 'have to run Splunk', it can live at the bottom of my list.

One product that I haven't tried, and maybe will a bit down the road, is Nagios Log Server. I didn't even know they had a log management product. I know Nagios from a few years ago, when I set it up to monitor availability and performance for some forward-facing services and back-end services too. Looking into it, it appears to run ELK in the background. I'm pretty excited about this product. Nagios Log Server can be run with a data cap of 500MB/day.

ELK is the new hip thing in town. It's trendy. Everywhere I work, organizations are standing up ELK stacks. Some big installs, some small installs, some in production, some just for testing, it's everywhere.

To be clear, I'm not doing a bake-off here. I just want to work with great products and push the limits of what I know to learn new things everyday.

Here's a summary if you're considering a syslog server for your home or business. Give them all a try and find the right product for you.

Hopefully I can continue to offer some valuable feedback from my experiences with the tools I choose to use.

Kiwi offers a fully featured system free for 14 days.
LogZilla offers their system free for 7 days.
Splunk - 500MB/day
Nagios Log Server - 500MB/day
ELK - ???

Sunday, November 26, 2017

LogZilla: Day 1

TL;DR - It was China!!!

I wanted to increase visibility of my network at home through the use of a central syslog server. I decided on trying a COTS product instead of rolling my own. I chose LogZilla as my product to try. I had it downloaded, running and receiving its first logs in less than 30 minutes.

I downloaded the *.ova as I already have a small ESXi server running a few VMs with some spare resources.


I set up the VM's hard disk as thin-provisioned and gave the VM 2 GB of memory. The VM booted but gave an error that it needed a minimum of 4 GB to run LogZilla; I stepped it up to 4 GB and it booted fine. I know the website suggests 8 GB, but I'm cheap :) Upon initial boot, the console asks you to log in and begin the 'first boot sequence'. I assume this downloads the latest version and updates the VM before launching the LogZilla service.



Once LogZilla is up and running, you should go around to your devices and configure LogZilla's IP address as their remote syslog server, or configure your current syslog server to forward events to LogZilla.
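On a Cisco IOS device, for example, pointing syslog at the new server only takes a couple of lines (the 192.168.1.50 address below is a placeholder for your LogZilla server's IP; pick the trap level that fits your environment):


configure terminal
 logging host 192.168.1.50
 logging trap informational
 end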

Upon first log in to the Web GUI of LogZilla you'll be presented with a Generic Dashboard full of helpful widgets.

The first of these displays LogZilla's overall log ingestion statistics, showing the max and average events per day. This is helpful for understanding what scale of license you'll require for your environment.


Another widget shows a pie chart depicting all of the hosts that have sent logs to LogZilla and shows a comparison by volume.


Under the top widgets are 2 pre-configured "Live Stream" widgets in table format. These update in real-time and provide a live stream view. They are very helpful: one table contains all the logs that contain "Error" and the other contains all the logs that contain "Failed". This is great for a quick look. I have this widget set to show the logs for the whole day; because I have a small home network, a day's worth of events containing "Error" or "Failed" isn't very many.

All the widgets are customizable and you can build your own widgets. You can also utilize the provided or custom widgets to create or modify the dashboards. Being the first day I wasn't too interested in making custom dashboards or widgets, I really just wanted to get it working and pumping some logs through.

The bottom widget table is also a "Live Stream" widget. I have it set to show the logs from the 'last minute'.


After having LogZilla running for less than 12 hours, I had enough data to start looking into some of the investigative features. I noticed a couple of TLS errors coming from OpenVPN at a time of day when I wasn't connecting to my own VPN.


I selected an event and right-clicked; there are a bunch of helpful context options.


Using the "Display Geo IP Information" tool I was able to locate the source:


From the event I am able to right-click and create a Trigger based on that event.


A trigger allows me to specify criteria to match in a log message and take an action. This can be sending an email alert or assigning an actionable item to another LogZilla user. For my purposes, I simply want a notification and the item to be marked as "Actionable":


The "Name" is arbitrary but should be something meaningful to you. When using the "Create Trigger" feature from the main dashboard, LogZilla will pre-fill all this information based on the event you chose at the beginning. I edited out the specifics of the "Event match" because I didn't want it to trigger only on the single IP address; I want it to trigger anytime a log message contains the "Event match" phrase from the screenshot.

Now, if LogZilla sees a log containing the message we specified, it will create a notification and put the log entry in the "Actionable" widget, as configured by my Trigger.



LogZilla has also helped me notice some anomalies that would otherwise have gone unnoticed. After the first day of collecting logs, I noticed nearly 10K log events coming from my router. Watching the "Live Stream", I see most of them are duplicates; I was seeing this log almost every 5 seconds.

Note: "Host" column removed to better fit in screenshot.

I looked it up and found this reference. An update, and changing the log level on the router, would make this go away.

So, for my first day of watching logs I have found China trying to connect to my VPN and an anomalous log that shows up every 5 seconds. 

Awareness is King!

This concludes Day 1 of my Thanksgiving holiday. 


Sunday, November 19, 2017

Future = Application Layer Networking

I was having a conversation with a peer about the future of networking. The foundation of the conversation revolved around SDN and the changes that SDN brings to network operators and engineers. The point was raised to me that 'engineers and operators of future networks won't need to have the granular low level understanding of bits, bytes and protocols'; as control of the network becomes more and more software driven, the engineer/operator needs only a high level understanding. My response to that is: nothing could be further from the truth! My prediction for the future is true application layer networking: a predictable and deterministic path through the network based on the application alone.

I think that in this future of application layer routing, we will need to incorporate some level of routing intelligence on each host/end device. I'm not sure exactly what that will look like yet, but I know it is not along the lines of OSPF or EIGRP.

In our current model, for most networks (home networks and small business networks) there is a single egress point where all traffic leaves your LAN to destinations on the internet.

In a mid-sized business/enterprise you'll have redundant links as backups. You may have IPsec site-to-site tunnels to remote sites, but anything not on your LAN or part of your remote sites still egresses a single point to destinations on the internet.

Large businesses/enterprises may have multiple egress points to the internet, all managed by lead engineers and operators with oversight from senior engineers, involving multiple AS's and public IPv4 subnets that span the globe. This is expensive, and the bottom line is that even with all the sophistication, the workstations and end devices are still taking the shortest path out of the network based on destination IP address, not on application-specific characteristics.

Routers are doing destination-based forwarding all over the globe. They are not making routing decisions based on the type of traffic in the payload of the packet.

One thing I foresee SDN doing for us is bringing dynamic intelligence into routing: having your controller understand the link requirements of protocols, identify those protocols as they pass through the routers, and forward them based on the application traffic they are carrying, not just their destination IP address.

Another thing I believe the future holds for us is true multi-path routing, where end devices, even a common smart phone, can have multiple gateways: not just redundant default gateways, but application-specific gateways. For example, I could be connected to my cellular network, WiFi and maybe a bunch of ad-hoc networks all at the same time. Perhaps those ad-hoc networks have gateways of their own, and we could use them to egress to the internet, essentially giving a device like our phone multiple egress points. Let our devices participate in the decision-making process for routing and forwarding, and decide how to best utilize the links available to them on a per-application basis.

Sorry I went off on a minor futuristic sci-fi routing tangent for a moment.

To bring this full circle, I feel like the engineers and operators of the future will actually need to know more about the inner workings of each protocol, beyond just Layer 4. If the future is anything close to application layer networking, we will actually need to be closer to the bits and bytes, to understand the protocols of the applications themselves, in order to programmatically and deterministically route them to their destinations.

P.S. - I'm not talking about getting rid of IP addresses, but instead introducing more to forwarding than just the destination. I'm sure all the "every packet should be treated equal" people out there are going to have a fit with this.

Comments are welcomed.