The word “map” here is used in its functional sense: you take an input and produce an output. In this case, the mapping operation will take each element of the stream, apply a function to that element (“map the value”), and produce an output. This allows you to take a stream of one type and generate a stream of a new type: for instance, you could take a stream of ids and generate a stream of objects instantiated from a database call. Returning to our Library example, we could generate a stream of the library's books, and we could easily map that onto a stream of genres. This is what that would look like:
Stream<Book.Genre> genres =
library.getBooks().parallelStream().map(Book::getGenre);
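Because the Library class is defined elsewhere in the book, here is a self-contained sketch of the same idea; the minimal Book class below is an illustrative stand-in, not the book's actual definition:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapDemo {
    enum Genre { TECHNICAL, FICTION }

    // Simplified stand-in for the book's Book class
    static class Book {
        private final String title;
        private final Genre genre;
        Book(String title, Genre genre) { this.title = title; this.genre = genre; }
        Genre getGenre() { return genre; }
    }

    public static void main(String[] args) {
        List<Book> books = Arrays.asList(
            new Book("Effective Java", Genre.TECHNICAL),
            new Book("Dune", Genre.FICTION));

        // map transforms Stream<Book> into Stream<Genre>,
        // producing exactly one output element per input element
        List<Genre> genres = books.stream()
            .map(Book::getGenre)
            .collect(Collectors.toList());

        System.out.println(genres); // prints [TECHNICAL, FICTION]
    }
}
```

Each Book goes in and exactly one Genre comes out, so the output stream has the same number of elements as the input stream.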
The map function presumes that you will produce exactly one output element for each input element. If you may produce zero or many output elements for each input element, you will instead want to use the flatMap function. The flatMap function works the same way as map, except that instead of producing a single element, it produces a Stream of elements. For instance, assume that you had a stream of file names, and you wanted to concatenate together all of those files' contents. First, you could map the strings into Path instances, and then use the Files.lines method, which takes a Path instance and produces a stream of the lines in that file. Mapping alone would leave you with a stream of streams: this is where flatMap will flatten that down for you. The code to do this is given in Listing 3-11. This code is made slightly more complicated by the checked IOException thrown by the Files.lines method, but we will address how to make this code nicer in Chapter 7.
Listing 3-11. Creating a stream of all the lines from files given a list of file names via Stream.flatMap
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.function.Function;
import java.util.stream.Stream;

Function<Path, Stream<String>> readLines = path -> {
    try {
        return Files.lines(path);
    } catch (IOException ioe) {
        throw new RuntimeException("Error reading " + path, ioe);
    }
};

Stream<String> lines = Stream.of("foo.txt", "bar.txt", "baz.txt")
    .map(Paths::get)
    .flatMap(readLines);
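Listing 3-11 depends on real files on disk. As a self-contained sketch of the same zero-or-many behavior, the nested lists below stand in for the per-file contents:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapDemo {
    public static void main(String[] args) {
        // Each inner list plays the role of one file's lines
        List<List<Integer>> pages = Arrays.asList(
            Arrays.asList(1, 2),
            Arrays.asList(),          // zero output elements for this input
            Arrays.asList(3, 4, 5));  // many output elements for this input

        // flatMap merges the inner streams into one flat Stream<Integer>
        List<Integer> flat = pages.stream()
            .flatMap(List::stream)
            .collect(Collectors.toList());

        System.out.println(flat); // prints [1, 2, 3, 4, 5]
    }
}
```

Note that the empty inner list simply contributes nothing to the output: flatMap is how zero-or-many results per element are expressed.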
The other common intermediate step is a filtering step. Filtering ensures that everything in the stream satisfies some test, which allows you to sift out elements that you don't want. To filter, you pass a Predicate into the Stream.filter method. Each element in the stream is put through the test: if the test returns true, the stream retains the element; if the test returns false, the stream silently discards the element. For our Library class, we could get a stream of only technical books by passing Book.Genre.TECHNICAL::equals as our filtering Predicate. That would look like this:
Stream<Book> techBooks =
library.getBooks().parallelStream().filter(Book.Genre.TECHNICAL::equals);
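As a self-contained sketch of the same filtering step (the Genre enum here is a simplified stand-in for Book.Genre):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FilterDemo {
    enum Genre { TECHNICAL, FICTION }

    public static void main(String[] args) {
        List<Genre> shelf = Arrays.asList(
            Genre.TECHNICAL, Genre.FICTION, Genre.TECHNICAL);

        // filter keeps only the elements for which the Predicate
        // returns true; the element type of the stream is unchanged
        List<Genre> technical = shelf.stream()
            .filter(Genre.TECHNICAL::equals)
            .collect(Collectors.toList());

        System.out.println(technical); // prints [TECHNICAL, TECHNICAL]
    }
}
```

The method reference Genre.TECHNICAL::equals is simply a Predicate<Genre> that tests each element against the TECHNICAL constant.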
Note that filtering does not change the element type of the stream: we are not performing any kind of transformation on the elements themselves, but simply deciding whether to retain or reject each stream element. This is useful because you can add filters in the middle of processing, performing tests at the point where you have the appropriate types. For instance, we can extend the Listing 3-11 example to read content only from existing files by adding a filter: only paths for which Files.exists returns true will be retained. We implement this in Listing 3-12. There, we create the Path instances first, then pass those transient instances