Limit log output in case of a large group of log segments
Description of the changes in this PR:
Limit logged data
### Motivation
Overly long log line in ReadAheadEntryReader: a log with a single line of over 5 million characters was found, of the form
"org.apache.distributedlog.ReadAheadEntryReader - Starting the readahead entry reader for ..." followed by the details of ~16000 segments.
### Changes
Output the details of at most 10 segments plus the total segment count.
Master Issue: #2561
Reviewers: Enrico Olivelli <eolivelli@gmail.com>, Nicolo Boschi <boschi1997@gmail.com>
This closes #2562 from dlg99/master-log
diff --git a/stream/distributedlog/core/src/main/java/org/apache/distributedlog/ReadAheadEntryReader.java b/stream/distributedlog/core/src/main/java/org/apache/distributedlog/ReadAheadEntryReader.java
index 11a7e7e..915d81a 100644
--- a/stream/distributedlog/core/src/main/java/org/apache/distributedlog/ReadAheadEntryReader.java
+++ b/stream/distributedlog/core/src/main/java/org/apache/distributedlog/ReadAheadEntryReader.java
@@ -33,6 +33,8 @@
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;
+import java.util.stream.Collectors;
+
import org.apache.bookkeeper.common.concurrent.FutureEventListener;
import org.apache.bookkeeper.common.concurrent.FutureUtils;
import org.apache.bookkeeper.common.util.OrderedScheduler;
@@ -411,8 +413,12 @@
}
public void start(final List<LogSegmentMetadata> segmentList) {
- logger.info("Starting the readahead entry reader for {} : segments = {}",
- readHandler.getFullyQualifiedName(), segmentList);
+ // This log line once grew to over 5 million characters
+ // when the segment list was large, so limit the output.
+ logger.info("Starting the readahead entry reader for {} : number of segments: {}, top 10 segments = {}",
+ readHandler.getFullyQualifiedName(), segmentList.size(),
+ segmentList.size() > 10
+ ? segmentList.stream().limit(10).collect(Collectors.toList()) : segmentList);
started.set(true);
processLogSegments(segmentList);
}
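The truncation pattern in the patch can be sketched in isolation as follows. This is a minimal standalone illustration, not DistributedLog code; the class and method names (`LogTruncation`, `describe`, `MAX_LOGGED_SEGMENTS`) are hypothetical:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch of the fix: when logging a potentially huge list, emit its
// total size plus only the first N elements, so the log line length
// is bounded regardless of how many segments exist.
public class LogTruncation {
    static final int MAX_LOGGED_SEGMENTS = 10;

    static String describe(List<String> segments) {
        List<String> shown = segments.size() > MAX_LOGGED_SEGMENTS
                ? segments.stream().limit(MAX_LOGGED_SEGMENTS).collect(Collectors.toList())
                : segments;
        return "number of segments: " + segments.size()
                + ", top " + MAX_LOGGED_SEGMENTS + " segments = " + shown;
    }

    public static void main(String[] args) {
        // Simulate the ~16000-segment case from the motivation section.
        List<String> many = IntStream.range(0, 16000)
                .mapToObj(i -> "segment-" + i)
                .collect(Collectors.toList());
        System.out.println(describe(many));
    }
}
```

Note the short-list branch returns the list unchanged, so small reads are logged in full exactly as before; only oversized lists are truncated.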