Sorting/filtering/counting text data

From Tech-Wiki
Revision as of 18:13, 28 April 2017 by Fabricio.Lima (Talk | contribs)


Let's assume we'd like to count the most-hit firewall rules by parsing the log:

$ cat firewall.log | grep -v -E '^$' | grep -v date | cut -d\; -f4 | cut -d\= -f2 | sort -n | uniq -c | sort -rn

remove blank lines: grep -v -E '^$'
remove lines with 'date' in them: grep -v date
cut out the field "rule=XX": cut -d\; -f4 (delimiter ";", print field 4)
cut out the rule numbers: cut -d\= -f2 (delimiter "=", print field 2)
sort the lines numerically: sort -n
count how many times each rule number appears: uniq -c
sort by hit count, most-hit first: sort -rn
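The same filter-and-count can also be done in a single pass with awk. This is a sketch of an alternative, not part of the original article; it assumes every data line starts with "source" and that the rule is always the last ";"-separated field:

```shell
# One-pass alternative: match data lines, split the last field
# ("rule=XX") on "=", and count occurrences per rule in an array.
awk -F';' '/^source/ { split($NF, kv, "="); count[kv[2]]++ }
           END { for (r in count) print count[r], r }' firewall.log | sort -rn
```

The trailing sort -rn orders the output by hit count, busiest rule first.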

For reference, this is the content of firewall.log:

$ cat firewall.log
date:25-mar
source=10.1.1.1;destination=192.168.1.1;port=80;rule=3
source=10.1.1.1;destination=192.168.1.1;port=80;rule=3
source=10.1.1.1;destination=192.168.1.1;port=21;rule=2
source=10.1.1.1;destination=192.168.1.1;port=22;rule=1
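Running those same filter/cut/count steps on this sample log, with the final sort made numeric and reversed so the busiest rule comes first, gives a quick sanity check (exact column spacing of uniq -c may vary between implementations):

```shell
# Count hits per rule in the sample log, most-hit first:
# rule 3 (2 hits) is listed first, then rules 2 and 1 (1 hit each).
grep -v -E '^$' firewall.log | grep -v date \
  | cut -d\; -f4 | cut -d\= -f2 \
  | sort -n | uniq -c | sort -rn
```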