ELK + Palo Alto Networks

Why ELK Stack

Chances are that if you’re here you already know what the ELK Stack is and what it is used for. For the uninitiated, ELK is an acronym (Elasticsearch / Logstash / Kibana). Here’s a brief overview:

Logstash – Data collection and transportation pipeline. We will use Logstash to read in our syslog files and store them in an Elasticsearch index.

Elasticsearch – A distributed search and analytics engine designed for scalability. This is what indexes our data and allows us to create useful visualizations with Kibana.

Kibana – A data visualization platform that is easy to use and easy on the eyes.

The data lifecycle for ELK goes a little something like this:

  1. Syslog Server feeds Logstash
  2. Logstash filters and parses logs and stores them within Elasticsearch
  3. Elasticsearch indexes and makes sense out of all the data
  4. Kibana makes millions of data points consumable by us mere mortals

For this demo we are going to ship logs out of a Palo Alto Networks Firewall into an ELK Stack setup and make a nice NOC-like dashboard.
[Screenshot: Kibana dashboard]
For this project you will need…

  • An Ubuntu Server 14.04 LTS: 1 core | 4 GB memory | 100 GB storage (you may need more CPU and memory depending on how much data you will be pumping in).
  • http://www.ubuntu.com/download/server
  • A Palo Alto Networks firewall with a Threat Prevention Subscription
  • Something on the firewall to generate traffic

Installing ELK Stack

Let’s begin by prepping our Ubuntu 14.04 LTS install. I like to make sure it has all of its updates applied before starting. The only feature I install during the server setup process is SSH, because nobody likes working from a console.
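
If you like to do the updates from the CLI, the usual apt update/upgrade pair will bring the box current:

sudo apt-get update
sudo apt-get -y upgrade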

Install & Configure syslog-ng

Install syslog-ng:

sudo apt-get install -y syslog-ng syslog-ng-core

(you may have to type in your admin password for this)

Now we shall edit the syslog-ng.conf file:

sudo vi /etc/syslog-ng/syslog-ng.conf

Replace options { … }; with the following:

options {
flush_lines (0);
time_reopen (10);
log_fifo_size (1000);
chain_hostnames (off);
use_dns (no);
use_fqdn (no);
create_dirs (no);
keep_hostname (yes);
ts_format(iso);
};

Add the following below source s_src {…};

source s_netsyslog {
        udp(ip(0.0.0.0) port(514) flags(no-hostname));
        tcp(ip(0.0.0.0) port(514) flags(no-hostname));
};

#source s_urlsyslog {
#       udp(ip(0.0.0.0) port(514) flags(no-hostname));
#       tcp(ip(0.0.0.0) port(514) flags(no-hostname));
#};
destination d_netsyslog { file("/var/log/network.log" owner("root") group("root") perm(0644)); };

destination d_urlsyslog { file("/var/log/urllogs.log" owner("root") group("root") perm(0644)); };

log { source(s_netsyslog); filter(f_traffic); destination(d_netsyslog); };

log { source(s_netsyslog); filter(f_threat); destination(d_urlsyslog); };

filter f_traffic { facility(local0); };
filter f_threat { facility(local1); };

Save the file by hitting ESC then : x. Press enter and you should be sent back to the CLI (command line interface).

Now type in the following:

sudo service syslog-ng restart

If everything restarts cleanly, then you’re good to go. If not, fix your errors and then let’s move on!
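
If the restart does fail, syslog-ng can check the configuration syntax for you, which usually points you at the offending line:

sudo syslog-ng --syntax-only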

Now it’s time to set up our logrotate files. We will do this so our logs don’t eat us alive.

sudo vi /etc/logrotate.d/network_syslog

Enter the following:

/var/log/network.log {
daily
rotate 30
create 644 root root
olddir /var/log/network
missingok
notifempty
sharedscripts
dateext
compress
delaycompress
maxage 90
postrotate
/etc/init.d/syslog-ng reload > /dev/null 2>&1
endscript
}

Save the file and exit

sudo vi /etc/logrotate.d/urllogs_syslog

Enter the following:

/var/log/urllogs.log {
daily
rotate 30
create 644 root root
olddir /var/log/network
missingok
notifempty
sharedscripts
dateext
compress
delaycompress
maxage 90
postrotate
/etc/init.d/syslog-ng reload > /dev/null 2>&1
endscript
}

Save and exit then restart the syslog-ng service

sudo service syslog-ng restart
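
One note on the logrotate files above: both of them point olddir at /var/log/network, and logrotate generally expects that directory to already exist. If it is not there yet, create it:

sudo mkdir -p /var/log/network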

Now let’s shoot some logs from our firewall to our syslog server. Go to the admin interface of your PAN NGFW and go to the Device Tab. Look under “Server Profiles” and find the Syslog option:
[Screenshot: Syslog Server Profile]

Add a syslog server by clicking the + button on the bottom left and use the following information:

  • Name: Some identifier that you will reference when adding the syslog servers to the log forwarding objects
  • Click the + to add the actual Syslog Server
  • Name: Identifier (object name)
  • Syslog Server: IP Address of the syslog server
  • Port: Whatever port your syslog server listens on (default is 514)
  • Format: BSD
  • Facility: We use the facility to help our syslog server send the right logs to the right locations. For the traffic profile use the log_local0 option; for the threat/URL profile use log_local1 (these match the f_traffic and f_threat filters in our syslog-ng config).

We will create two of these syslog profiles, one for traffic and one for URLs (threat logging… I’ll get into that in a minute).

Once we have the PAN NGFW syslog server profiles set up, let’s add them to a log forwarding profile. Click on the Objects tab and then on Log Forwarding. Add a new profile and give it a name, then select which settings you want to send syslogs for. For our demo we will turn on syslog forwarding for Traffic (any severity) and for Threats (informational severity). The reason we use threat logs is that we can add the URL field (oddly called $misc in the custom log settings) to a custom URL log. We will do this at the end; for right now it’s important to get data flowing via syslog to our syslog server. Here is what the Log Forwarding Profile should look like:

[Screenshot: Log Forwarding Profile]

The last thing to do now is add the log forwarding profile to the policies whose logs we want to forward. To do this we will select the Policies tab and add the log forwarding option on a per-policy basis. Only policies that you apply the log forwarding profile to will have their logs shipped to the syslog server. This is important to remember when troubleshooting.

[Screenshot: Log Forwarding profile applied to a security policy]

Be sure to commit your changes and then generate some traffic. After a few minutes run the following command:

sudo tail -f /var/log/urllogs.log

If everything worked right you should start seeing a flow of traffic with all sorts of cool data. This is a good sign and means we are ready to start installing the ELK stack. Before we do that we need to install Java.

Install Java

This tutorial assumes you are using Ubuntu 14.04 LTS. If you are using a different flavor you should be able to keep up, as the commands aren’t terribly different. First things first… Java.

For our demo we will install version 8 of Java. Let’s start by adding the PPA to apt:

sudo add-apt-repository -y ppa:webupd8team/java

Now to update our apt package database:

sudo apt-get update

And now to install the latest stable version of Java 8. Be sure to accept the license agreement when it prompts.

sudo apt-get -y install oracle-java8-installer
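
You can confirm the install took with:

java -version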

With that done it’s time to go install Elasticsearch.

Install Elasticsearch

We can install Elasticsearch with a package manager (lucky us) with just a little up-front work.

The following command imports the Elasticsearch public GPG key into apt:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Now we need to create the Elasticsearch source list:

echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list

Update our apt package database:

sudo apt-get update

Install Elasticsearch:

sudo apt-get -y install elasticsearch

With Elasticsearch installed we need to do some configuration magic. Everything done in Elasticsearch is via HTTP API calls, so basically if I have access to your API then I can do whatever I want via simple HTTP calls. For example, if you leave your HTTP API open, a command like this:

curl -X DELETE 'http://elk.youcompany.com:9200/important_index'

would cause you to have a very bad day… Let’s fix that!

sudo vi /etc/elasticsearch/elasticsearch.yml

Find the line that says “network.host”. Uncomment it (by deleting the pound sign: #) and replace network.host’s value with localhost, then save and exit the file (press ESC, type :x, then press enter).
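
If it helps, the edited line should end up looking like this (it is the only change we are making to elasticsearch.yml):

network.host: localhost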

With that done we need to restart the Elasticsearch service

sudo service elasticsearch restart

Lastly we need to make sure Elasticsearch starts up if the server is ever rebooted:

sudo update-rc.d elasticsearch defaults 95 10
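
If you want a quick sanity check that Elasticsearch is up and listening locally, a curl against port 9200 should return a small blob of JSON with the cluster name and version:

curl -X GET 'http://localhost:9200/'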

Congrats! Elasticsearch has been installed.

Install Kibana

Just like Elasticsearch we can install Kibana with a package manager. First create the source list:

echo "deb http://packages.elastic.co/kibana/4.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/kibana-4.4.x.list

Update our apt package database:

sudo apt-get update

Install Kibana:

sudo apt-get -y install kibana

Kibana is now installed. Now we need to do a bit of configuring. We again want to prevent direct access to Kibana. We will actually force traffic to go through a reverse proxy in order to access Kibana resources. The tool we will use is called Nginx.

Open the Kibana configuration file:

sudo vi /opt/kibana/config/kibana.yml

Find the line that specifies

server.host

and replace the IP address (“0.0.0.0” by default) with “localhost”:

server.host: "localhost"

Save and exit. Next we will ensure Kibana starts when the server reboots.

sudo update-rc.d kibana defaults 96 9
sudo service kibana start

Now to install and configure Nginx.

Since we told Kibana, the user interface for our new logging system, to listen only on localhost, we have to set up a reverse proxy in order to access Kibana from a different machine. The first step in installing Nginx is to grab the packages:

sudo apt-get install nginx apache2-utils

We need to set up an admin user that can access the Kibana interface. We will use htpasswd for this. Feel free to use a different username if you wish.

sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

When prompted enter a password for this new account. You will need this password to log into your Kibana interface.

Next we need to configure Nginx to properly proxy Kibana.

sudo vi /etc/nginx/sites-available/default

Delete the contents of the file and paste the following. Be sure to replace

server_name

with the actual name of your server.

server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;        
    }
}

Save then exit. Nginx is now set up to forward traffic to Kibana, which actually listens on port 5601. We have also instructed Nginx to use the htpasswd.users file to manage access. We need to restart the Nginx service

sudo service nginx restart

to make everything go live. If you want you can try to go see if Kibana is up and running by going to

http://[your IP]

Be sure to type in your kibanaadmin (or whatever you chose) username and password when prompted. With this completed we will move on to installing Logstash.

Install Logstash

Create the Logstash source list:

echo 'deb http://packages.elastic.co/logstash/2.2/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash-2.2.x.list

Refresh the packages:

sudo apt-get update

Install Logstash:

sudo apt-get install -y logstash

With Logstash installed we have to instruct it how to take the existing logs on our server and import them into Elasticsearch for indexing. Logstash can do all sorts of neat stuff during the import process, but for the purpose of this tutorial we will keep things fairly vanilla. Let’s create our Logstash configuration file step by step. First off, I want to encourage everyone to go read Clay Curtis’s post on this same subject, as well as the other write-ups linked at the end of this post. The majority of this post is built off Clay’s work and it’s really great. You can find his post here: ELK Stack for Network Operations Reloaded

We are going to create our PAN Traffic file:

sudo vi /etc/logstash/conf.d/pan-traffic.conf

Logstash configuration files are broken down into three sections:

  • Input – How Logstash reads in log files
  • Filter – Any parsing or manipulation applied to the logs before they are shipped off
  • Output – Where and how to store the logs (basically how to throw them to Elasticsearch)

Here is what our Input section will look like:

input {
  file {
        path => ["/var/log/network.log"]
        sincedb_path => "/var/log/logstash/sincedb"
        start_position => "beginning"
        type => "syslog"
        tags => ["netsyslog"]
        }
}

So here’s what’s going on in this section. First we define the path where this Logstash configuration can expect to find our log file; we established this earlier in our syslog-ng configuration. Then we define the sincedb_path, which keeps track of where Logstash left off in terms of importing logs. We set the start position to beginning to ensure we get all the logs already present in the .log file. Finally we declare it type “syslog” and tag it as “netsyslog”. With our input section done we will move on to the filter section. Here is where things get more involved.

filter {
  if [type] == "syslog" {
    grok {
      #strips timestamp and host off of the front of the syslog message leaving the raw message generated by the syslog client and saves it as "raw_message"
      #patterns_dir => "/opt/logstash/patterns"
      match => [ "message", "%{TIMESTAMP_ISO8601:@timestamp} %{HOSTNAME:syslog_host} %{GREEDYDATA:raw_message}" ]
    }
  }

    csv {
      source => "raw_message"
      columns => [ "PaloAltoDomain","ReceiveTime","SerialNum","Type","Threat-ContentType","ConfigVersion","GenerateTime","SourceAddress","DestinationAddress","NATSourceIP","NATDestinationIP","Rule","SourceUser","DestinationUser","Application","VirtualSystem","SourceZone","DestinationZone","InboundInterface","OutboundInterface","LogAction","TimeLogged","SessionID","RepeatCount","SourcePort","DestinationPort","NATSourcePort","NATDestinationPort","Flags","IPProtocol","Action","Bytes","BytesSent","BytesReceived","Packets","StartTime","ElapsedTimeInSec","Category","Padding","seqno","actionflags","SourceCountry","DestinationCountry","cpadding","pkts_sent","pkts_received" ]
    }
    date {
      timezone => "America/Chicago"
      match => [ "GenerateTime", "YYYY/MM/dd HH:mm:ss" ]
    }
    #convert fields to proper format
    mutate {
      convert => [ "Bytes", "integer" ]
      convert => [ "BytesReceived", "integer" ]
      convert => [ "BytesSent", "integer" ]
      convert => [ "ElapsedTimeInSec", "integer" ]
      convert => [ "geoip.area_code", "integer" ]
      convert => [ "geoip.dma_code", "integer" ]
      convert => [ "geoip.latitude", "float" ]
      convert => [ "geoip.longitude", "float" ]
      convert => [ "NATDestinationPort", "integer" ]
      convert => [ "NATSourcePort", "integer" ]
      convert => [ "Packets", "integer" ]
      convert => [ "pkts_received", "integer" ]
      convert => [ "pkts_sent", "integer" ]
      convert => [ "seqno", "integer" ]
      gsub => [ "Rule", " ", "_",
                "Application", "( |-)", "_" ]
      remove_field => [ "message", "raw_message" ]
    }

  #Geolocate logs that have SourceAddress and if that SourceAddress is a non-RFC1918 address
  if [SourceAddress] and [SourceAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
      geoip {
           database => "/opt/logstash/GeoLiteCity.dat"
           source => "SourceAddress"
           target => "SourceGeo"
      }
      #Delete 0,0 in SourceGeo.location if equal to 0,0
      if ([SourceGeo.location] and [SourceGeo.location] =~ "0,0") {
        mutate {
          replace => [ "SourceGeo.location", "" ]
        }
      }
    }

  #Geolocate logs that have DestinationAddress and if that DestinationAddress is a non-RFC1918 address
  if [DestinationAddress] and [DestinationAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
      geoip {
           database => "/opt/logstash/GeoLiteCity.dat"
           source => "DestinationAddress"
           target => "DestinationGeo"
      }
      #Delete 0,0 in DestinationGeo.location if equal to 0,0
      if ([DestinationGeo.location] and [DestinationGeo.location] =~ "0,0") {
        mutate {
          replace => [ "DestinationAddress.location", "" ]
        }
      }
    }

  #Takes the 5-tuple of source address, source port, destination address, destination port, and protocol and does a SHA1 hash to fingerprint the flow.  This is a useful
  #way to be able to do top N terms queries on flows, not just on one field.
  if [SourceAddress] and [DestinationAddress] {
    fingerprint {
      concatenate_sources => true
      method => "SHA1"
      key => "logstash"
      source => [ "SourceAddress", "SourcePort", "DestinationAddress", "DestinationPort", "IPProtocol" ]
    }
  }
} #end filter block

Wow… that was a lot; let’s break it down. The first thing we did was set up our grok filter, which in a nutshell helps us parse the data and strip out some of the unstructured stuff. Syslog is going to attach some junk to our logs that we don’t really care about, so we remove the timestamp and the hostname and keep the main log itself, which grok calls GREEDYDATA and we save as raw_message. The next thing we do is set up the columns by treating the .log file as a CSV. We list every column present (a good way to see what is reported is to do a log export from the Monitor tab on your PAN firewall). After the columns we set the timezone, which helps if you have multiple firewalls and need to align date/times across timezones.

The next major section is the mutate section which will tell Elasticsearch what the data type of these values will be. There will be some more field manipulation needed here in a bit and we will go over that soon. Moving on…

When looking at the first image did anyone notice the traffic map that showed geo locations? Pretty cool eh? That’s what the next section is for. Long story short: we point the geoip filter at a geo database, feed it IPs, and read back the results. We also make exceptions for IPs that are in the private (RFC1918) ranges. You will have to download the geodata file and place it in the right spot. You can download it here: http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz

You will need to place the GeoLiteCity.dat file here:

/opt/logstash/GeoLiteCity.dat
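
One way to get the file there from the CLI (assuming wget is available and the /opt/logstash directory was created by the Logstash install):

cd /tmp
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gunzip GeoLiteCity.dat.gz
sudo mv GeoLiteCity.dat /opt/logstash/GeoLiteCity.dat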

With the geo database in place, here is the Output section:

output {
  #stdout { codec => rubydebug }
  #stdout { debug => true }
  elasticsearch {
    index => "pan-traffic"
    hosts => ["localhost:9200"]
    template => "/opt/logstash/elasticsearch-template.json"
    template_overwrite => true
  }
} #end output block

Last but not least is the output section. In this section we give the index an actual name, point Logstash at our local Elasticsearch instance, and provide a JSON template which Clay has generously given the community. You can get that JSON template here: https://github.com/clay584/ELK_Stack_For_Network_Operations_RELOADED/blob/master/elasticsearch-template.json. Be sure to place the elasticsearch-template.json file in the right spot:

/opt/logstash/elasticsearch-template.json
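
One way to grab it from the CLI (this assumes GitHub’s usual raw.githubusercontent.com layout for the file linked above; if that URL doesn’t resolve, just download the file from the repo and copy it into place):

sudo wget -O /opt/logstash/elasticsearch-template.json https://raw.githubusercontent.com/clay584/ELK_Stack_For_Network_Operations_RELOADED/master/elasticsearch-template.json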

So… In theory if we have done this all correctly and restart the Logstash service

sudo service logstash restart

after a few minutes we should see the index appear in Kibana!
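
If you would rather check from the CLI first, Elasticsearch’s _cat API will list the indices it knows about; pan-traffic should show up once Logstash has shipped some events:

curl -X GET 'http://localhost:9200/_cat/indices?v'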

I’m guessing you went to Kibana and were saddened to see that there wasn’t any data automagically in there. Let’s add the index to Kibana so we can start doing some data visualizations. Follow these steps:

  • Click on settings in the top bar
  • In the index name or pattern field type: pan-traffic
  • In the time-field name dropdown select: @timestamp
  • Hit create

At this point you should see a bunch of columns, and if you hit the “Discover” link at the top you should see data flowing in. If not, go check whether data is coming in on the syslog side by using tail -f /var/log/network.log.

There is however one big problem…

There is a thing called “analyzed fields”, which is how Elasticsearch combs through a string and breaks it up to make it more easily searchable. This will throw off certain fields where the whole string is required, such as the Application field. The issue is that in an analyzed field Elasticsearch will break the string up on various delimiters such as the “-” character. This makes our Application field (and later the URL Category field) nearly unusable. There is a fix, and it requires that we get into the HTTP API we worked so hard to protect at the beginning of the post. Hopefully you will now see why it was so important to restrict.

Fixing the Traffic Index

We can query the Elasticsearch API directly to find out all sorts of neat information about our index. For this purpose, though, we are going to focus strictly on looking at and editing the mappings.

To view the mapping of any given index use the following:

curl -X GET 'http://localhost:9200/[index_name]/_mapping'

You will get back a long string with all the various columns and index properties. It’s tough to read, which is why you can tack ?pretty=true onto the end of the URL string to make it “pretty” (there is an example after the list below). The way I have been manipulating the index goes like this:

  1. Manipulate the index as needed in a text editor
  2. Stop the Logstash service (this is to prevent new logs from being imported while we work). You can close an individual index through the Elasticsearch API, but this is just as easy, especially in a tutorial environment.
  3. Delete the existing index
  4. Place the edited index in using the same name defined in the Logstash configuration file
  5. Restart the Logstash service and check in Kibana if the needed changes were successful
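
For reference, here is the readable (?pretty=true) version of the mapping call for our pan-traffic index; the dump below is the same data in its compact form:

curl -X GET 'http://localhost:9200/pan-traffic/_mapping?pretty=true'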

So here is the index we currently have:

 {"pan-traffic":{"mappings":{"syslog":{"properties":{"@timestamp":{"type":"date","format":"strict_date_optional_time||epoch_millis"},"@version":{"type":"string"},"Action":{"type":"string"},"Application":{"type":"string"},"Bytes":{"type":"long"},"BytesReceived":{"type":"long"},"BytesSent":{"type":"long"},"Category":{"type":"string"},"ConfigVersion":{"type":"string"},"DestinationAddress":{"type":"string"},"DestinationCountry":{"type":"string"},"DestinationGeo":{"properties":{"area_code":{"type":"long"},"city_name":{"type":"string"},"continent_code":{"type":"string"},"country_code2":{"type":"string"},"country_code3":{"type":"string"},"country_name":{"type":"string"},"dma_code":{"type":"long"},"ip":{"type":"string"},"latitude":{"type":"double"},"location":{"type":"double"},"longitude":{"type":"double"},"postal_code":{"type":"string"},"real_region_name":{"type":"string"},"region_name":{"type":"string"},"timezone":{"type":"string"}}},"DestinationPort":{"type":"string"},"DestinationZone":{"type":"string"},"ElapsedTimeInSec":{"type":"long"},"Flags":{"type":"string"},"GenerateTime":{"type":"date","format":"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd||epoch_millis"},"IPProtocol":{"type":"string"},"InboundInterface":{"type":"string"},"LogAction":{"type":"string"},"NATDestinationIP":{"type":"string"},"NATDestinationPort":{"type":"long"},"NATSourceIP":{"type":"string"},"NATSourcePort":{"type":"long"},"OutboundInterface":{"type":"string"},"Packets":{"type":"long"},"Padding":{"type":"string"},"PaloAltoDomain":{"type":"string"},"ReceiveTime":{"type":"date","format":"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd||epoch_millis"},"RepeatCount":{"type":"string"},"Rule":{"type":"string"},"SerialNum":{"type":"string"},"SessionID":{"type":"string"},"SourceAddress":{"type":"string"},"SourceCountry":{"type":"string"},"SourcePort":{"type":"string"},"SourceZone":{"type":"string"},"StartTime":{"type":"date","format":"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd||epoch_millis"},"Threat-ContentType":{"type":"string"},"TimeLogged":{"type":"date","format":"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd||epoch_millis"},"Type":{"type":"string"},"VirtualSystem":{"type":"string"},"actionflags":{"type":"string"},"column47":{"type":"string"},"column48":{"type":"string"},"column49":{"type":"string"},"column50":{"type":"string"},"column51":{"type":"string"},"column53":{"type":"string"},"column54":{"type":"string"},"cpadding":{"type":"string"},"fingerprint":{"type":"string"},"host":{"type":"string"},"path":{"type":"string"},"pkts_received":{"type":"long"},"pkts_sent":{"type":"long"},"seqno":{"type":"long"},"syslog_host":{"type":"string"},"tags":{"type":"string"},"type":{"type":"string"}}}}}}

The index field we are looking for is this:

"Application":{"type":"string"}

We need to add "index":"not_analyzed" to it to make the field look like this:

"Application":{"index":"not_analyzed","type":"string"}

Add this updated field back into the rest of the index mapping that we pulled out and we should be ready to replace the existing index. First we stop the Logstash service (Elasticsearch itself needs to stay running so we can talk to its API):

sudo service logstash stop

Then we delete the existing index:

curl -X DELETE 'http://localhost:9200/pan-traffic'

Next we place our new index:

curl -g -X PUT 'http://localhost:9200/pan-traffic' -d '{"mappings":{"syslog":{"dynamic_templates":[{"message_field":{"mapping":{_norms":true,"type":"string"},"match":"message","match_mapping_type":"string"}},{"string_fields":{"mapping":{"index":"analyzed","omit_norms":true,"type":"string","fields":{"raw":{"index":"not_analyzed","ignore_above":256,"type":"string"}}},"match":"*","match_mapping_type":"string"}}],"_all":{"enabled":true},"properties":{"@version":{"type":"string","index":"not_analyzed"},"DestinationGeo":{"dynamic":"true","properties":{"location":{"type":"geo_point","lat_lon":true,"geohash":true}}},"SourceGeo":{"dynamic":"true","properties":{"location":{"type":"geo_point","lat_lon":true,"geohash":true}}},"geoip":{"dynamic":"true","properties":{"location":{"type":"geo_point","lat_lon":true,"geohash":true}}}}},"syslog":{"dynamic_templates":[{"message_field":{"mapping":{"index":"analyzed","omit_norms":true,"type":"string"},"match":"message","match_mapping_type":"string"}},{"string_fields":{"mapping":{"index":"analyzed","omit_norms":true,"type":"string","fields":{"raw":{"index":"not_analyzed","ignore_above":256,"type":"string"}}},"match":"*","match_mapping_type":"string"}}],"_all":{"enabled":true},"properties":{"@timestamp":{"type":"date","format":"dateOptionalTime"},"@version":{"type":"string","index":"not_analyzed"},"Action":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"Application":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"Bytes":{"type":"long"},"BytesReceived":{"type":"long"},"BytesSent":{"type":"long"},"Category":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"ConfigVersion":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"DestinationAddress":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"DestinationCountry":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"DestinationGeo":{"dynamic":"true","properties":{"area_code":{"type":"long"},"city_name":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"continent_code":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"country_code2":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"country_code3":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"country_name":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"dma_code":{"type":"long"},"ip":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"latitude":{"type":"double"},"location":{"type":"geo_point","lat_lon":true,"geohash":true},"longitude":{"type":"double"},"postal_code":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"real_region_name":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},
"region_name":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"timezone":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}}}},"DestinationPort":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"DestinationUser":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"DestinationZone":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"ElapsedTimeInSec":{"type":"long"},"Flags":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"GenerateTime":{"type":"date","format":"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd"},"IPProtocol":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"InboundInterface":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"LogAction":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"NATDestinationIP":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"NATDestinationPort":{"type":"long"},"NATSourceIP":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"NATSourcePort":{"type":"long"},"OutboundInterface":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"Packets":{"type":"long"},"Padding":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"PaloAltoDomain":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"ReceiveTime":{"type":"date","format":"yyyy/MM/dd 
HH:mm:ss||yyyy/MM/dd"},"RepeatCount":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"Rule":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"SerialNum":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"SessionID":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"SourceAddress":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"SourceCountry":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"SourceGeo":{"dynamic":"true","properties":{"area_code":{"type":"long"},"city_name":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"continent_code":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"country_code2":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"country_code3":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"country_name":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"dma_code":{"type":"long"},"ip":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"latitude":{"type":"double"},"location":{"type":"geo_point","lat_lon":true,"geohash":true},"longitude":{"type":"double"},"postal_code":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"real_region_name":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"region_name":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"timezone":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}}}},"SourcePort":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"SourceUser":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"SourceZone":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"StartTime":{"type":"date","format":"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd"},"Threat-ContentType":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"TimeLogged":{"type":"date","format":"yyyy/MM/dd 
HH:mm:ss||yyyy/MM/dd"},"Type":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"VirtualSystem":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"actionflags":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"cpadding":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"fingerprint":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"geoip":{"dynamic":"true","properties":{"location":{"type":"geo_point","lat_lon":true,"geohash":true}}},"host":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"path":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"pkts_received":{"type":"long"},"pkts_sent":{"type":"long"},"seqno":{"type":"long"},"syslog_host":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"tags":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}},"type":{"type":"string","norms":{"enabled":false},"fields":{"raw":{"type":"string","index":"not_analyzed","ignore_above":256}}}}}}}'

Lastly we will start back up the Logstash service:

sudo service logstash start

You may need to delete the index pattern and re-add it in Kibana to get the fields to show up correctly. This is done in the Settings >> Indices menu within Kibana.

Next Up:

I’m working on a post to integrate the URL logs. This takes additional work on the firewall because sending the URL in the syslog requires creating a custom log. The follow up post to this should be out soon.

My Thanks

I was HEAVILY inspired by the amazing write-ups linked below. Please go check them out. The authors are way smarter than me and deserve all the praise.

http://operational.io/elk-stack-for-network-operations-reloaded/ (Palo Alto Network / ELK Integration)

https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04 (ELK Installation)

https://gist.github.com/nicolashery/6317643 (Index Update)

Also. Beware of bugs. This has been a fun learning experience for me and I’m fairly certain I screwed something up along the way.

If you have any questions shoot me an email at anderikistan@gmail.com.
You can find me on LinkedIn at: https://www.linkedin.com/in/iankanderson
or on Twitter @Anderikistan

Happy data mining!
