COMMAND NAME: gplogfilter
Searches through HAWQ log files for specified entries.
*****************************************************
SYNOPSIS
*****************************************************
gplogfilter [<timestamp_options>] [<pattern_options>]
[<output_options>] [<input_options>] [<input_file>]
gplogfilter --help
gplogfilter --version
*****************************************************
DESCRIPTION
*****************************************************
The gplogfilter utility can be used to search through a
HAWQ log file for entries matching the specified search
criteria. If an input file is not supplied, gplogfilter uses the
master data directory to locate the HAWQ master log file.
To read from standard input, use a dash (-) as the input file name.
Input files may be compressed using gzip. In an input file, a log
entry is identified by its timestamp in YYYY-MM-DD [hh:mm[:ss]] format.
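For example, a compressed log file can be streamed to gplogfilter on
standard input (the master log path shown is a placeholder; substitute
the actual location of your log files):
gunzip -c /data/hawq/master/pg_log/hawq-*.gz | gplogfilter -t -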
You can also use gplogfilter to search through all segment log
files at once by running it through the hawq ssh utility. For example,
to display the last three lines of each segment log file:
hawq ssh -f seg_host_file
=> source /usr/local/hawq/greenplum_path.sh
=> gplogfilter -n 3 /data/hawq/segment/pg_log/hawq-*
By default, the output of gplogfilter is sent to standard output.
Use the -o option to send the output to a file or a directory.
If you supply an output file name ending in .gz, the output file will
be compressed by default using maximum compression. If the
output destination is a directory, the output file is given
the same name as the input file.
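For example, to capture error-level entries from the master log file
in a compressed output file (the output file name is arbitrary):
gplogfilter -t -o trouble_entries.gz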
*****************************************************
OPTIONS
*****************************************************
TIMESTAMP OPTIONS
******************
-b <datetime> | --begin=<datetime>
Specifies a starting date and time to begin searching for
log entries in the format of YYYY-MM-DD [hh:mm[:ss]].
-e <datetime> | --end=<datetime>
Specifies an ending date and time to stop searching for
log entries in the format of YYYY-MM-DD [hh:mm[:ss]].
-d <time> | --duration=<time>
Specifies a time duration to search for log entries in
the format of [hh][:mm[:ss]]. If used without either the
-b or -e option, the current time is used as the basis.
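For example, to list the log entries recorded during a specific time
window (the timestamps shown are placeholders):
gplogfilter -b '2016-02-13 14:00' -e '2016-02-13 15:00'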
PATTERN MATCHING OPTIONS
*************************
-c i[gnore]|r[espect] | --case=i[gnore]|r[espect]
Matching of alphabetic characters is case sensitive by default
unless the pattern option is preceded by --case=ignore.
-C '<string>' | --columns='<string>'
Selects specific columns from the log file. Specify the
desired columns as a comma-delimited string of column numbers
beginning with 1, where the second column from the left is 2,
the third is 3, and so on.
-f '<string>' | --find='<string>'
Finds the log entries containing the specified string.
-F '<string>' | --nofind='<string>'
Rejects the log entries containing the specified string.
-m <regex> | --match=<regex>
Finds log entries that match the specified Python regular
expression. See http://docs.python.org/library/re.html for
Python regular expression syntax.
-M <regex> | --nomatch=<regex>
Rejects log entries that match the specified Python regular
expression. See http://docs.python.org/library/re.html for
Python regular expression syntax.
-t | --trouble
Finds only the log entries that have ERROR:, FATAL:, or
PANIC: in the first line.
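For example, to find log entries that match a Python regular
expression (the pattern shown is a placeholder):
gplogfilter -m 'out of memory|too many connections'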
OUTPUT OPTIONS
***************
-n <integer> | --tail=<integer>
Limits the output to the last <integer> qualifying log entries
found.
-s <offset> [<limit>] | --slice=<offset> [<limit>]
From the list of qualifying log entries, returns the <limit>
number of entries starting at the <offset> entry number, where
an offset of zero (0) denotes the first entry in the result set
and an offset of any number greater than zero counts back
from the end of the result set.
-o <output_file> | --out=<output_file>
Writes the output to the specified file or directory
location instead of STDOUT.
-z 0-9 | --zip=0-9
Compresses the output file to the specified compression level
using gzip, where 0 is no compression and 9 is maximum
compression. If you supply an output file name ending in .gz,
the output file will be compressed by default using maximum
compression.
-a | --append
If the output file already exists, appends to the file
instead of overwriting it.
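For example, to append the last 100 qualifying error-level entries to
an existing compressed output file (the file name and the count are
arbitrary):
gplogfilter -t -n 100 -a -o daily_trouble.gz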
INPUT OPTIONS
**************
<input_file>
The name of the input log file(s) to search through. If
an input file is not supplied, gplogfilter uses the master
data directory to locate the HAWQ master log file. To read
from standard input, use a dash (-) as the input file name.
-u | --unzip
Uncompresses the input file using gunzip. If the input
file name ends in .gz, it will be uncompressed by default.
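For example, to search a gzip-compressed archive whose name does not
end in .gz (the path shown is a placeholder):
gplogfilter -u -f 'ERROR' /backups/pg_log/hawq_archive_20160101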
--help
Displays the online help.
--version
Displays the version of this utility.
*****************************************************
EXAMPLES
*****************************************************
Display the last three error messages in the master log file:
gplogfilter -t -n 3
Display all log messages in the master log file timestamped
in the last 10 minutes:
gplogfilter -d :10
Display log messages in the master log file containing the
string '|con6 cmd11|':
gplogfilter -f '|con6 cmd11|'
Using hawq ssh, run gplogfilter on the segment hosts and search
for log messages in the segment log files containing the string
'con6', and save the output to a file:
hawq ssh -f seg_hosts_file -e 'source
/usr/local/hawq/greenplum_path.sh ; gplogfilter -f
con6 /data/hawq/segment/pg_log/hawq-*' > seglog.out
*****************************************************
SEE ALSO
*****************************************************
hawq ssh, hawq scp