Linux sed and awk Cheat Sheet

Edit text streams with sed and parse structured text with awk.

## sed Substitution
Replace first occurrence per line
sed 's/foo/bar/' file.txt

# Substitute one match in each line.

Replace all occurrences per line
sed 's/foo/bar/g' file.txt

# Substitute every match on each line.

Edit a file in place
sed -i 's/debug/info/g' app.conf

# Modify the original file directly (GNU sed; BSD/macOS sed needs `-i ''`).

Edit in place with backup
sed -i.bak 's/debug/info/g' app.conf

# Create a backup copy while editing.

Use an alternate delimiter
sed 's|/usr/local/bin|/opt/bin|g' paths.txt

# Make path replacements easier to read.
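The substitution forms above can be compared side by side. A minimal sketch using a hypothetical `demo.txt`:

```shell
# Sample data (hypothetical) to contrast first-match vs. global substitution.
printf 'foo foo foo\n' > demo.txt
sed 's/foo/bar/'  demo.txt   # bar foo foo  (first match per line)
sed 's/foo/bar/g' demo.txt   # bar bar bar  (every match per line)
rm demo.txt
```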

## sed Line Deletion and Printing
Delete blank lines
sed '/^$/d' file.txt

# Remove empty lines from output.

Delete comment lines
sed '/^[[:space:]]*#/d' config.ini

# Skip lines whose first non-blank character is `#`.

Print only matching lines
sed -n '/ERROR/p' app.log

# Use sed as a pattern filter.

Delete a line range
sed '5,10d' file.txt

# Remove lines 5 through 10.

Print a line range only
sed -n '5,10p' file.txt

# Show lines 5 through 10.
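Deletion and range printing can be combined in one pass. A quick sketch on a hypothetical four-line `notes.txt`:

```shell
# Sample file (hypothetical) with a blank line in the middle.
printf 'one\ntwo\n\nfour\n' > notes.txt
sed '/^$/d' notes.txt    # one, two, four (blank line removed)
sed -n '2,3p' notes.txt  # two, plus the blank line
rm notes.txt
```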

## awk Fields and Records
Print selected columns
awk '{print $1, $3}' access.log

# Show the first and third whitespace-delimited fields.

Use a custom field separator
awk -F, '{print $1, $2}' users.csv

# Split CSV-like data on commas.

Print the last field
awk '{print $NF}' file.txt

# Use NF to reference the final field on each line.

Print line numbers with lines
awk '{print NR ": " $0}' file.txt

# Prefix each line with NR.

Set output field separator
awk 'BEGIN{OFS=","} {print $1, $2, $3}' file.txt

# Join output fields with commas.
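These field variables compose naturally. A sketch combining `-F`, `NR`, and `NF` on a hypothetical `users.csv`:

```shell
# Sample CSV (hypothetical): name, role, status.
printf 'alice,admin,active\nbob,dev,inactive\n' > users.csv
awk -F, '{print NR ": " $1, $NF}' users.csv
# 1: alice active
# 2: bob inactive
rm users.csv
```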

## awk Filtering and Aggregation
Print rows matching a condition
awk '$3 > 100 {print $0}' metrics.txt

# Show lines where column 3 is greater than 100.

Sum a numeric column
awk '{sum += $2} END {print sum}' prices.txt

# Add all values in the second field.

Calculate an average
awk '{sum += $2; count += 1} END {if (count) print sum / count}' prices.txt

# Compute the average of a numeric field.

Count rows by key
awk '{count[$1]++} END {for (k in count) print k, count[k]}' file.txt

# Count occurrences of the first field.

Find the maximum value
awk 'NR==1 || $2 > max {max=$2} END {print max}' values.txt

# Track the highest number in a field.
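Summing and grouping often go together. A sketch on a hypothetical `prices.txt` (item name in `$1`, price in `$2`):

```shell
# Sample data (hypothetical) for a sum and a per-key count.
printf 'apple 3\nbanana 2\napple 5\n' > prices.txt
awk '{sum += $2} END {print "total", sum}' prices.txt       # total 10
awk '{n[$1]++} END {for (k in n) print k, n[k]}' prices.txt
# apple 2 and banana 1 (key order is unspecified in awk)
rm prices.txt
```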

## awk Scripting Patterns
Use BEGIN and END blocks
awk 'BEGIN{print "name,total"} {print $1 "," $2} END{print "done"}' report.txt

# Print headers and footers around processed output.

Process command output
df -h | awk 'NR>1 {print $1, $5, $6}'

# Filter output from another command using awk.

Match with regex in awk
awk '/ERROR|WARN/' app.log

# Print only lines that match a regex.

Replace text in awk
awk '{gsub(/foo/, "bar"); print}' file.txt

# `gsub` replaces every match; `sub` would replace only the first.

Skip a header row
awk 'NR>1 {print $1, $2}' users.csv

# Ignore the first line of a delimited file.
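The patterns above can be chained in a single program. A sketch that skips a header, matches a regex, and rewrites a field, on a hypothetical `app.log`:

```shell
# Sample log (hypothetical): skip the header, keep ERROR/WARN, rename a device.
printf 'level msg\nERROR disk\nINFO ok\nWARN cpu\n' > app.log
awk 'NR>1 && /ERROR|WARN/ {gsub(/disk/, "sda"); print $1, $2}' app.log
# ERROR sda
# WARN cpu
rm app.log
```

Note that `gsub` rewrites `$0`, so the fields are re-split before `print` runs.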
