Edit text streams with sed and parse structured text with awk.
Replace text patterns line by line.
sed 's/foo/bar/' file.txt
  By default, sed replaces only the first match on each line.
sed 's/foo/bar/g' file.txt
  The `g` flag applies the replacement globally across the line.
sed -i 's/debug/info/g' app.conf
  In-place editing is powerful; consider keeping backups before bulk changes. (This is GNU sed syntax; BSD/macOS sed requires a suffix argument after `-i`.)
sed -i.bak 's/debug/info/g' app.conf
  Creates `app.conf.bak` before replacing the original file.
sed 's|/usr/local/bin|/opt/bin|g' paths.txt
  Alternate delimiters reduce escaping when replacing file paths or URLs.
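The difference between first-match and global substitution is easy to verify on inline input (the sample text here is made up for illustration):

```shell
line='foo and foo again'

# Without the g flag, sed replaces only the first match on the line.
printf '%s\n' "$line" | sed 's/foo/bar/'
# bar and foo again

# With g, every match on the line is replaced.
printf '%s\n' "$line" | sed 's/foo/bar/g'
# bar and bar again
```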
Delete, print, and filter lines by patterns or ranges.
sed '/^$/d' file.txt
  Deletes empty lines; a common cleanup step before diffing or processing text.
sed '/^[[:space:]]*#/d' config.ini
  Deletes comment lines; especially useful for previewing meaningful config lines only.
sed -n '/ERROR/p' app.log
  Suppress default output with `-n`, then explicitly print matching lines.
sed '5,10d' file.txt
  Deletes lines 5 through 10; ranges are useful for removing generated headers or blocks quickly.
sed -n '5,10p' file.txt
  Prints only lines 5 through 10; a compact way to inspect a specific segment of a file.
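These filters compose well in a single invocation; a small sketch on inline input (the sample lines are made up):

```shell
# Sample input: a blank line and a comment mixed with real content.
printf 'one\n\n# note\ntwo\nthree\n' |
sed -e '/^$/d' -e '/^[[:space:]]*#/d'
# one
# two
# three
```

Multiple `-e` expressions run in order against each input line.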
Split lines into fields and work with structured text.
awk '{print $1, $3}' access.log
  Awk automatically splits each input line into fields on whitespace by default.
awk -F, '{print $1, $2}' users.csv
  `-F` sets the input field separator (here, a comma).
awk '{print $NF}' file.txt
  `NF` is the number of fields in the current record, so `$NF` is the last field.
awk '{print NR ": " $0}' file.txt
  `NR` is the current record number across all input.
awk 'BEGIN{OFS=","} {print $1, $2, $3}' file.txt
  `OFS` sets the separator inserted between comma-separated arguments to `print`.
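A small sketch combining `-F`, `NR`, `$NF`, and `OFS` on made-up CSV records:

```shell
# Two comma-separated records (made-up data for illustration).
printf 'alice,admin,42\nbob,user,7\n' |
awk -F, 'BEGIN{OFS=" | "} {print NR, $1, $NF}'
# 1 | alice | 42
# 2 | bob | 7
```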
Filter rows and calculate totals, counts, and summaries.
awk '$3 > 100 {print $0}' metrics.txt
  Awk's pattern-action style is ideal for tabular filtering.
awk '{sum += $2} END {print sum}' prices.txt
  Use the END block for final reporting after all input is processed.
awk '{sum += $2; count += 1} END {if (count) print sum / count}' prices.txt
  Track totals and counts explicitly for derived values like averages.
awk '{count[$1]++} END {for (k in count) print k, count[k]}' file.txt
  Associative arrays make awk a lightweight grouping and summarization tool.
awk 'NR==1 || $2 > max {max=$2} END {print max}' values.txt
  Awk is often enough for lightweight numeric analysis without needing a spreadsheet.
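Totals and per-key counts can be tracked in one pass; a sketch on made-up data (the output is piped through `sort`, since awk's `for (k in count)` iteration order is unspecified):

```shell
printf 'apple 3\nbanana 5\napple 4\n' |
awk '{sum += $2; count[$1]++}
     END {for (k in count) print k, count[k]; print "total", sum}' |
sort
# apple 2
# banana 1
# total 12
```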
Use BEGIN, END, and built-in variables effectively.
awk 'BEGIN{print "name,total"} {print $1 "," $2} END{print "done"}' report.txt
  BEGIN runs before any input is read; END runs after all input is consumed.
df -h | awk 'NR>1 {print $1, $5, $6}'
  Awk is especially useful for command output with stable columns.
awk '/ERROR|WARN/' app.log
  A pattern-only awk program prints matching lines by default.
awk '{gsub(/foo/, "bar"); print}' file.txt
  `sub` replaces one match, while `gsub` replaces all matches in the current record.
awk 'NR>1 {print $1, $2}' users.csv
  Skipping the header row is one of the most common awk patterns for CSV-like inputs.
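Putting the pieces together: skip a header row, rewrite a field with `gsub`, and frame the output with BEGIN and END blocks (the CSV content is made up for illustration):

```shell
printf 'name,role\nana,dev\nbo,ops\n' |
awk -F, 'BEGIN{print "-- start --"}
         NR>1 {gsub(/dev/, "engineer"); print $1, $2}
         END{print "-- end --"}'
# -- start --
# ana engineer
# bo ops
# -- end --
```

Note that `gsub` without a target modifies `$0`, which causes awk to re-split the record into fields using the current separator.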