Log Feeder Configurations

| Name | Description | Default | Examples |
|------|-------------|---------|----------|
| cluster.name | The name of the cluster the Log Feeder program runs in. | EMPTY | cl1 |
| hadoop.security.credential.provider.path | The jceks file that provides passwords. | EMPTY | jceks://file/etc/ambari-logsearch-logfeeder/conf/logfeeder.jceks |
| logfeeder.cache.dedup.interval | Maximum number of milliseconds between two identical messages for them to be filtered out. | 1000 | 500 |
| logfeeder.cache.enabled | Enables the use of a cache to avoid duplications. | false | true |
| logfeeder.cache.key.field | The field whose value should be cached and checked for repetitions. | log_message | some_field_prone_to_repeating_value |
| logfeeder.cache.last.dedup.enabled | Enable filtering of directly repeating log entries regardless of the time elapsed between them. | false | true |
| logfeeder.cache.size | The number of log entries to cache in order to avoid duplications. | 100 | 50 |
| logfeeder.checkpoint.extension | The extension used for checkpoint files. | .cp | ckp |
| logfeeder.checkpoint.folder | The folder where checkpoint files are stored. | EMPTY | /usr/lib/ambari-logsearch-logfeeder/conf/checkpoints |
| logfeeder.cloud.rollover.archive.base.dir | Location where the active and archived logs are stored. Beware: it can require a large amount of space, so use mounted disks if possible. | /tmp | /var/lib/ambari-logsearch-logfeeder/data |
| logfeeder.cloud.rollover.immediate.flush | Immediately flush temporal cloud logs (to the active location). | true | false |
| logfeeder.cloud.rollover.max.files | The maximum number of backup files for rolled-over logs. | 10 | 50 |
| logfeeder.cloud.rollover.on.shutdown | Roll over temporal cloud log files on shutdown. | true | false |
| logfeeder.cloud.rollover.on.startup | Roll over temporal cloud log files on startup. | true | false |
| logfeeder.cloud.rollover.threshold.min | Roll over cloud log files after this interval (in minutes). | 60 | 1 |
| logfeeder.cloud.rollover.threshold.size | Roll over cloud log files once the log file size reaches this limit. | 80 | 1024 |
| logfeeder.cloud.rollover.threshold.size.unit | Unit for the cloud log file rollover size (e.g. KB, MB). | MB | KB |
| logfeeder.cloud.rollover.use.gzip | Use gzip on archived logs. | true | false |
| logfeeder.cloud.storage.base.path | Base path prefix for storing logs (cloud storage / HDFS); can be an absolute path or a URI. (If a URI is used, it overrides the defaultFS of the HDFS client.) | /apps/logsearch | /user/logsearch/mypath; s3a:///user/logsearch |
| logfeeder.cloud.storage.bucket | Amazon S3 bucket. | logfeeder | logs |
| logfeeder.cloud.storage.bucket.bootstrap | Create the bucket on startup. | true | false |
| logfeeder.cloud.storage.custom.fs | If not empty, overrides fs.defaultFS for the HDFS client. Useful for writing data to a different bucket (from other services) when the bucket address is read from core-site.xml. | EMPTY | s3a://anotherbucket |
| logfeeder.cloud.storage.destination | Type of storage that is the destination for cloud output logs. | none | hdfs, s3 |
| logfeeder.cloud.storage.mode | Controls where logs are sent: cloud storage only, non-cloud storage only, or both. | default | default, cloud, hybrid |
| logfeeder.cloud.storage.upload.on.shutdown | Try to upload archived files on shutdown. | false | true |
| logfeeder.cloud.storage.uploader.interval.seconds | Interval in seconds at which the uploader checks whether there are any files to upload to cloud storage. | 60 | 10 |
| logfeeder.cloud.storage.uploader.timeout.minutes | Timeout in minutes for the cloud storage upload task. | 60 | 10 |
| logfeeder.cloud.storage.use.filters | Use filters for inputs (with filters, the output format will be JSON). | false | true |
| logfeeder.cloud.storage.use.hdfs.client | Use the HDFS client with cloud connectors instead of the core clients for shipping data to cloud storage. | false | true |
| logfeeder.config.dir | The directory where shipper configuration files are looked for. | /usr/lib/ambari-logsearch-logfeeder/conf | /usr/lib/ambari-logsearch-logfeeder/conf |
| logfeeder.config.files | Comma-separated list of the config files containing global / output configurations. | EMPTY | global.json,output.json; /usr/lib/ambari-logsearch-logfeeder/conf/global.config.json |
| logfeeder.configs.filter.solr.enabled | Use Solr as the log level filter storage. | false | true |
| logfeeder.configs.filter.solr.monitor.enabled | Periodically monitor log level filters (in Solr) to check for updates. | true | false |
| logfeeder.configs.filter.solr.monitor.interval | Time interval (in seconds) between checks of the input config filter definitions in Solr. | 30 | 60 |
| logfeeder.configs.filter.zk.enabled | Use ZooKeeper as the log level filter storage (works only with local config). | false | true |
| logfeeder.configs.local.enabled | Monitor local input.config-*.json files (do not upload them to ZooKeeper or Solr). | false | true |
| logfeeder.docker.registry.enabled | Enable monitoring of Docker containers and store their metadata in an in-memory registry. | false | true |
| logfeeder.hdfs.file.permissions | Default permissions for files created on HDFS. | 640 | 600 |
| logfeeder.hdfs.host | HDFS NameNode host. | EMPTY | mynamenodehost |
| logfeeder.hdfs.kerberos | Enable Kerberos support for HDFS. | false | true |
| logfeeder.hdfs.keytab | Kerberos keytab location used by Log Feeder to communicate with secure HDFS. | /etc/security/keytabs/logfeeder.service.keytab | /etc/security/keytabs/mykeytab.keytab |
| logfeeder.hdfs.port | HDFS NameNode port. | EMPTY | 9000 |
| logfeeder.hdfs.principal | Kerberos principal used by Log Feeder to communicate with secure HDFS. | logfeeder/_HOST | mylogfeeder/myhost1@EXAMPLE.COM |
| logfeeder.hdfs.user | Overrides the HADOOP_USER_NAME variable at runtime. | EMPTY | hdfs |
| logfeeder.include.default.level | Comma-separated list of the default log levels enabled by the filtering. | EMPTY | FATAL,ERROR,WARN |
| logfeeder.log.filter.enable | Enables filtering of log entries by log level filters. | false | true |
| logfeeder.metrics.collector.hosts | Comma-separated list of metrics collector hosts. | EMPTY | c6401.ambari.apache.org,c6402.ambari.apache.org |
| logfeeder.metrics.collector.path | The path used by the metrics collectors. | EMPTY | /ws/v1/timeline/metrics |
| logfeeder.metrics.collector.port | The port used by the metrics collectors. | EMPTY | 6188 |
| logfeeder.metrics.collector.protocol | The protocol used by the metrics collectors. | EMPTY | http, https |
| logfeeder.s3.access.key | Amazon S3 access key. | EMPTY | MySecretAccessKey |
| logfeeder.s3.access.key.file | Amazon S3 access key file (containing only the key). | EMPTY | /my/path/access_key |
| logfeeder.s3.credentials.file.enabled | Enable reading the Amazon S3 secret/access keys from files. | EMPTY | true |
| logfeeder.s3.credentials.hadoop.access.ref | Amazon S3 access key reference in the Hadoop credential store. | logfeeder.s3.access.key | logfeeder.s3.access.key |
| logfeeder.s3.credentials.hadoop.enabled | Enable reading the Amazon S3 secret/access keys from the Hadoop credential store API. | EMPTY | true |
| logfeeder.s3.credentials.hadoop.secret.ref | Amazon S3 secret key reference in the Hadoop credential store. | logfeeder.s3.secret.key | logfeeder.s3.secret.key |
| logfeeder.s3.endpoint | Amazon S3 endpoint. | https://s3.amazonaws.com | https://s3.amazonaws.com |
| logfeeder.s3.multiobject.delete.enable | When enabled, multiple single-object delete requests are replaced by a single "delete multiple objects" request, reducing the number of requests. | true | false |
| logfeeder.s3.object.acl | Amazon S3 ACL for new objects. | private | logs |
| logfeeder.s3.path.style.access | Enable S3 path-style access; this disables the default virtual-hosting (DNS) behaviour. | false | true |
| logfeeder.s3.region | Amazon S3 region. | EMPTY | us-east-2 |
| logfeeder.s3.secret.key | Amazon S3 secret key. | EMPTY | MySecretKey |
| logfeeder.s3.secret.key.file | Amazon S3 secret key file (containing only the key). | EMPTY | /my/path/secret_key |
| logfeeder.simulate.input_number | The number of simulator instances to run. 0 means no simulation. | 0 | 10 |
| logfeeder.simulate.log_ids | Comma-separated list of log ids for which to create the simulated log entries. | The log ids of the installed services in the cluster | ambari_server,zookeeper,infra_solr,logsearch_app |
| logfeeder.simulate.log_level | The log level to create the simulated log entries with. | WARN | INFO |
| logfeeder.simulate.max_log_words | The maximum number of words in a simulated log entry. | 5 | 8 |
| logfeeder.simulate.min_log_words | The minimum number of words in a simulated log entry. | 5 | 3 |
| logfeeder.simulate.number_of_words | The size of the set of words from which the simulated log entries are created. | 1000 | 100 |
| logfeeder.simulate.sleep_milliseconds | The number of milliseconds to sleep between creating two simulated log entries. | 10000 | 5000 |
| logfeeder.solr.cloud.client.discover | On startup, with a Solr Cloud client, discover the Solr nodes and build an LBHttpClient from them. | false | true |
| logfeeder.solr.implicit.routing | Use implicit routing for Solr collections. | false | true |
| logfeeder.solr.jaas.file | The JAAS file used for Solr. | /etc/security/keytabs/logsearch_solr.service.keytab | /usr/lib/ambari-logsearch-logfeeder/conf/logfeeder_jaas.conf |
| logfeeder.solr.kerberos.enable | Enables the use of Kerberos for accessing Solr. | false | true |
| logfeeder.solr.metadata.collection | Metadata collection name that can contain log level filters or input configurations. | EMPTY | logsearch_metadata |
| logfeeder.solr.urls | Comma-separated Solr URLs (with protocol and port); overrides the logfeeder.solr.zk_connect_string setting. | EMPTY | https://localhost1:8983/solr,https://localhost2:8983 |
| logfeeder.solr.zk_connect_string | ZooKeeper connection string for Solr. | EMPTY | localhost1:2181,localhost2:2181/mysolr_znode |
| logfeeder.tmp.dir | The temporary directory used for creating temporary files. | java.io.tmpdir | /tmp/ |
| logsearch.config.zk_acls | ZooKeeper ACLs for handling configs (read & write). | world:anyone:cdrwa | world:anyone:r,sasl:solr:cdrwa,sasl:logsearch:cdrwa |
| logsearch.config.zk_connect_string | ZooKeeper connection string. | EMPTY | localhost1:2181,localhost2:2181/znode |
| logsearch.config.zk_connection_retry_time_out_ms | The maximum elapsed time for connecting to ZooKeeper, in milliseconds. 0 means retry forever. | EMPTY | 1200000 |
| logsearch.config.zk_connection_time_out_ms | ZooKeeper connection timeout in milliseconds. | EMPTY | 30000 |
| logsearch.config.zk_root | ZooKeeper root node where the shippers are stored (appended to the connection string). | EMPTY | /logsearch |
| logsearch.config.zk_session_time_out_ms | ZooKeeper session timeout in milliseconds. | EMPTY | 60000 |
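
Putting several of these properties together, a minimal logfeeder.properties sketch might enable the deduplication cache and ship logs to S3 in hybrid mode. The cluster name, bucket, region, and rollover values below are illustrative (drawn from the defaults and examples above), not recommendations:

```properties
# Identify the cluster and deduplicate directly repeating log lines
cluster.name=cl1
logfeeder.cache.enabled=true
logfeeder.cache.dedup.interval=1000
logfeeder.cache.key.field=log_message

# Ship logs to S3 in addition to the regular (non-cloud) outputs
logfeeder.cloud.storage.mode=hybrid
logfeeder.cloud.storage.destination=s3
logfeeder.cloud.storage.bucket=logfeeder
logfeeder.cloud.storage.base.path=/apps/logsearch
logfeeder.s3.endpoint=https://s3.amazonaws.com
logfeeder.s3.region=us-east-2

# Roll over and gzip archived files every 60 minutes or at 80 MB
logfeeder.cloud.rollover.threshold.min=60
logfeeder.cloud.rollover.threshold.size=80
logfeeder.cloud.rollover.threshold.size.unit=MB
logfeeder.cloud.rollover.use.gzip=true
```

Credentials are intentionally omitted here; per the table, they can come from logfeeder.s3.access.key/logfeeder.s3.secret.key, from key files, or from the Hadoop credential store references.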