"Today I was asked..." is a new blog series where I go over networking questions asked of me by co-workers or associates.
In my current role, I am a Senior Network Engineer. I am also very public about my preparation for the CCIE exam. Sometimes my co-workers treat me as if I already have a CCIE and know "everything" about networking.
This blog series is about the times I was asked a networking question and didn't have the answer, or found myself second-guessing my answer. That sends me home to build labs around these topics and make sure I understand them.
Having built the lab and done the research, I thought it would be great to document the effort, as it may be useful to someone else. Selfishly, writing out information about a topic also helps me memorize it.
I hope you enjoy my first post in the series: "How can you tell if Policy Based Routing is routing packets? Can you see it in the routing table or somewhere?"
Sunday, December 17, 2017
TIWA: How can you tell if PBR is routing packets?
Today I was asked: "How can you tell if Policy Based Routing is routing packets? Can you see it in the routing table or somewhere?"
I didn't have the correct answer at the time, so I looked it up, whipped up a lab, and here we are :)
SPOILER ALERT: debug ip policy FTW!!!!
My co-worker was correct to assume there is nothing in the routing table to tell you a policy-based route is installed, or that packets are being forwarded contrary to the routing table. Checking the routing table alone is not enough to determine the flow of packets. You must also check the ingress interface for policies.
To build Policy Based Routing (PBR) you need 3 basic ingredients:
- access-list (standard or extended)
- route-map
- reachable next-hop
In the graphic above we are sourcing our traffic from the Loopback0 interface: 1.1.1.1 on R1 (on the right). In the middle is our PBR router. Using the default route, traffic will traverse the PBR router using the next-hop 10.10.10.6. We want to use Policy Based Routing to route traffic sourced from 1.1.1.1 to the next-hop 20.20.20.6.
PBR: (configuration)
access-list 1 permit host 1.1.1.1
route-map PBR permit 10
 match ip address 1
 set ip next-hop 20.20.20.6
interface FastEthernet0/0
 ip policy route-map PBR
IOS: (verification)
To see if PBR is turned on or in use:
show ip interface fa0/0
show ip interface fa0/0 | i Policy
show ip policy
show cef interface fa0/0
show cef interface fa0/0 | i Policy|polic
To verify the configuration elements of PBR:
show route-map PBR
show access-list 1
NOTE: Hit counts are good indicators that your policy is matching traffic.
To verify traffic is hitting the policy as it ingresses an interface:
debug ip policy
NOTE: Using the "debug" command on a policy that sees a lot of traffic can blow up your console. It is best practice to use an ACL to filter the "debug" output to a particular host or hosts.
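The key takeaway above can be sketched in a few lines of Python. This is a conceptual model, not IOS code: the policy on the ingress interface is evaluated before the normal destination-based routing lookup, which is exactly why the routing table alone can't tell you where policy-routed packets go. The addresses mirror the lab topology; the data structures are illustrative only.

```python
# Conceptual sketch of PBR forwarding order (not real IOS internals).
# The ingress policy is consulted BEFORE the routing table (RIB).
ROUTING_TABLE = {"default": "10.10.10.6"}   # hypothetical RIB: default route only
POLICY = {"1.1.1.1": "20.20.20.6"}          # route-map PBR: source -> next-hop

def forward(src_ip: str, dst_ip: str) -> str:
    """Return the next-hop a PBR-enabled ingress interface would choose."""
    # 1. Policy match on the ingress interface wins first...
    if src_ip in POLICY:
        return POLICY[src_ip]
    # 2. ...only then does the ordinary destination-based lookup happen.
    return ROUTING_TABLE.get(dst_ip, ROUTING_TABLE["default"])

print(forward("1.1.1.1", "8.8.8.8"))  # -> 20.20.20.6 (policy-routed)
print(forward("2.2.2.2", "8.8.8.8"))  # -> 10.10.10.6 (normal routing)
```

Looking only at ROUTING_TABLE (the equivalent of "show ip route") would suggest everything goes to 10.10.10.6, which is the trap my co-worker was asking about.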
Labels: based, PBR, policy, policy based routing, routing
Thursday, December 14, 2017
Pi-hole: Day 1 (First 5 minutes)
Wow! This is so simple why haven't I been using this for years?
Pi-hole is a DNS blackhole server. It is so lightweight it can run well on meager hardware such as a Raspberry Pi.
By default it pulls from various DNS blacklist sources to help block DNS requests for many common and popular malware domains as well as ad campaigns.
I had this up and running in about 5 minutes as a VM. I changed the DNS offered by my router's DHCP server to point to my new Pi-hole server. Using my phone, I disassociated and re-associated with my WiFi so it would receive the new DNS offering. I immediately went to 2 sites that are notorious for banner and popup ads.
The first site's banner ads were gone!!! And the site seemed to load a bit quicker.
The second site I tested is notorious for pop-ups as well as banner ads. The banner ads were gone and when I invoked a popup it quickly disappeared off screen. (This isn't a feature of Pi-hole but a result of the DNS being blackholed)
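The blackhole behavior described above can be sketched simply. This is a conceptual model, not Pi-hole's actual implementation: a blocked domain gets answered with an unroutable address instead of the real record, so the browser's request to the ad host fails fast. The blocklist entries and addresses here are made up for illustration.

```python
# Conceptual sketch of a DNS blackhole (not Pi-hole's real code).
BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical entries

def resolve(domain: str, upstream) -> str:
    """Answer blocked domains with 0.0.0.0; forward everything else."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        return "0.0.0.0"        # blackholed: the ad request dies immediately
    return upstream(domain)     # forwarded to the real (upstream) resolver

fake_upstream = lambda d: "93.184.216.34"            # stand-in upstream answer
print(resolve("ads.example.com", fake_upstream))     # -> 0.0.0.0
print(resolve("example.com", fake_upstream))         # -> 93.184.216.34
```

That failed lookup is also why popups vanish: the page's script fires, but the domain it calls resolves to nowhere.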
The Pi-hole dashboard immediately registered the requests. The below screenshot came directly after my two connectivity tests from above within about 5 minutes of being up and running.
Also, I went ahead and turned on DNS forwarding and chose Quad9 DNS (9.9.9.9). I've heard good things about it. I've tested reachability to it, and it tends to be a bit quicker than Google's 8.8.8.8. Quad9 also has the added benefit of being security- and privacy-focused. I used OpenDNS in the past but was turned off when Cisco bought it (personal preference).
Pi-hole is Free and Open Source but please consider making a donation: https://pi-hole.net/donate/
There is a pretty large Pi-hole user base, as the project is already a few years old. There are many things to tune if you choose to do so. You can also find curated DNS blacklists from the user groups that might be worth adding as a resource.
Personally, I'm interested in Pi-hole's log retention and whether there are ways to forward the logs to a log collector or database to allow investigation of DNS queries outside of Pi-hole.
I'll do a follow-up post in a few weeks after I let this run on the network for a while. So check back.
Sunday, December 3, 2017
TIL: as-path prepending
Today I learned: You can prepend any AS numbers in the prepended string.
The typical method of as-path prepending is to prepend (add) your own autonomous system number to the AS_PATH attribute to influence inbound traffic patterns.
You can technically add any autonomous system number to the AS_PATH, even ASNs that don't belong to you.
NOTE: This is frowned upon in production. "Just because you can doesn't mean you should!"
See the example below:
Without context or a topology this seems a little bland, but the results are there. You can see from the BGP table below that we have prepended a bunch of ASNs that do not belong to us.
Prepending configured outbound from R3 --> R1:
R3#sho run | s as-path|route-map|router bgp
router bgp 200
 neighbor 155.1.13.1 remote-as 100
 neighbor 155.1.13.1 route-map AS_254 out
ip as-path access-list 254 permit ^254$
route-map AS_254 permit 10
 match as-path 254
 set as-path prepend 254 250 123
route-map AS_254 permit 20
Showing the R1 partial BGP table:
R1#sho ip bgp neighbors 155.1.13.3 routes
[ ... OUTPUT OMITTED ... ]
Network Next Hop Metric LocPrf Weight Path
*> 28.119.16.0/24 155.1.13.3 0 200 54 i
*> 28.119.17.0/24 155.1.13.3 0 200 54 i
* 51.51.51.51/32 155.1.13.3 0 200 254 250 123 254 ?
* 205.90.31.0 155.1.13.3 0 200 254 250 123 254 ?
* 220.20.3.0 155.1.13.3 0 200 254 250 123 254 ?
* 222.22.2.0 155.1.13.3 0 200 254 250 123 254 ?
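The effect on path selection can be shown with a toy comparison. This is a conceptual sketch, not a real BGP implementation: in best-path selection, a shorter AS_PATH is preferred (after weight and local preference), so every prepended ASN makes the path less attractive. The path lists below mirror the lab output; nothing in the protocol checks whether the prepended ASNs are yours.

```python
# Conceptual sketch of the AS_PATH-length step in BGP best-path selection.
# The AS_PATH is just a list of numbers; any ASN can be prepended.
def as_path_len(path: list[int]) -> int:
    """AS_PATH length as the best-path tiebreaker modeled here."""
    return len(path)

unprepended = [200]                        # path without the route-map applied
prepended = [200, 254, 250, 123, 254]      # after "set as-path prepend 254 250 123"

best = min([unprepended, prepended], key=as_path_len)
print(best)  # -> [200]: neighbors prefer the shorter, unprepended path
```

This is why prepending steers traffic away from a link: you are deliberately losing the length comparison on that path.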
Credit: This was influenced by a lab from the INE workbook.
Friday, December 1, 2017
Do you think LogZilla is better than Kiwi?
tl;dr
On LinkedIn, I was asked the question "Do you think LogZilla is better than Kiwi?" and my response (below) was a few thousand characters more than LinkedIn allows in a comment. See the comment here.
Before trying LogZilla, I did a quick comparison of a few centralized log management products (LogZilla included). This included research on compatibility, how-to videos, usability, ease of install, and also "What do I need it for?" and "How am I going to use it?".
I did like Kiwi for its simplicity. I was happy to see they have a web-interface. I liked their one-time purchase price model. This would be perfect for a small scale install on a budget.
Kiwi is a small product offering from SolarWinds. SolarWinds' product focus is not centralized logging; their focus is compliance/configuration management and performance analytics. Kiwi is not their most profitable business unit. (If I'm wrong... tell me.)
What turned me off about Kiwi was that it runs on a Windows platform. I don't have spare Windows VMs or licenses lying around, so I had to move on; I didn't have a platform to run it on.
The second product I looked at was LogZilla. Right out of the box, it had additional features and integrations that Kiwi didn't offer. I watched a few of the videos from their YouTube channel and decided I should give this product a try. They do centralized log management and they do it well. This isn't part of a larger suite of products; this is their product. What that means to me is I don't have to worry about getting an inferior product because it's not part of the most profitable business unit within the company; instead, it is the business of the company.
They offer a free trial download, and getting LogZilla installed can be completed with a single command. It can't get any easier, right? If you read my blog, then you know I decided to use the prebuilt VM, which got me up and running in less than 30 minutes. I personally really like LogZilla's dashboards/widgets and layout. One thing I really like about it is that you can use it right out of the box, or you can customize it to any level that suits you or your business's needs. Almost everything is customizable. I'm piloting this at my house, so I don't need much, but I am exploring building some automation scripts. This product fits my use case at home, and hopefully I can leverage it to fit business cases at work.
One of the last reasons I prefer LogZilla over Kiwi isn't necessarily a technical or business reason; it's more of a human reason. Shortly after getting LogZilla up and running, I reached out to their sales department to get my trial period extended. I had a few back-and-forths with members of their team, and even the CEO reached out to me after seeing my blog post. That was important to me. I got to know them a little bit and came to understand that they too are a small business. I currently work for a small business, and before this company I worked for an even smaller one. Supporting small businesses is something I like to do, because I had a small business once and I know what it's like. I enjoyed making every customer a personal experience, and that's what LogZilla has done for me so far.
Some of the other products that were up for consideration were ELK, Splunk, and Nagios Log Server.
Although I don't work with Splunk directly, it's in most environments I work in. I know it as one of the super giants in the industry like ArcSight. Splunk does have a "free" version (with data cap) you can run, but I was a bit intimidated because I associate big names with big complicated systems. So until someone gives me a reason I 'have to run Splunk', it can live at the bottom of my list.
One product that I haven't tried, and maybe I'll try a bit down the road, is Nagios Log Server. I didn't even know they had a log management product. I know Nagios from a few years ago, when I had to set up Nagios to monitor availability and performance for some forward-facing services and back-end services too. Looking into it, it appears to run ELK in the background. I'm pretty excited about this product. You can run Nagios Log Server free with a data cap of 500MB/day.
ELK is the new hip thing in town. It's trendy. Everywhere I work, organizations are standing up ELK stacks. Some big installs, some small installs, some in production, some just for testing, it's everywhere.
To be clear, I'm not doing a bake-off here. I just want to work with great products and push the limits of what I know to learn new things everyday.
Here's a summary if you're considering a syslog server for your home or business. Give them all a try and find the right product for you.
Hopefully I can continue to offer some valuable feedback from my experiences with the tools I choose to use.
Kiwi offered a fully featured system free for 14 days.
LogZilla offers their system free for 7 days.
Splunk - free up to 500MB/day
Nagios Log Server - free up to 500MB/day
ELK ???