Interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends BaseStream<T,A,P,C,PL,OT,IT,ITER,S>>

Type Parameters:
T - the type of the stream elements
A - the type of array
P - the type of predicate
C - the type of consumer
PL - the type of PrimitiveList/List
OT - the type of Optional
IT - the type of Indexed
ITER - the type of Iterator
S - the type of the stream implementing BaseStream
All Superinterfaces:
AutoCloseable, Closeable, Immutable
All Known Implementing Classes:
ByteStream, ByteStream.ByteStreamEx, CharStream, CharStream.CharStreamEx, DoubleStream, DoubleStream.DoubleStreamEx, EntryStream, FloatStream, FloatStream.FloatStreamEx, IntStream, IntStream.IntStreamEx, LongStream, LongStream.LongStreamEx, ShortStream, ShortStream.ShortStreamEx, Stream, Stream.StreamEx

@Immutable public interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends BaseStream<T,A,P,C,PL,OT,IT,ITER,S>> extends Closeable, Immutable
The Stream will be closed automatically after a terminal operation is executed/triggered.
See Also:
  • Method Details

    • filter

      Returns a stream consisting of the elements of this stream that match the given predicate.
      Parameters:
      predicate -
      Returns:
    • filter

      @Beta @ParallelSupported @IntermediateOp S filter(P predicate, C actionOnDroppedItem)
      Returns a stream consisting of the elements of this stream that match the given predicate.
      Parameters:
      predicate -
      actionOnDroppedItem -
      Returns:
    • takeWhile

      @ParallelSupported @IntermediateOp S takeWhile(P predicate)
      Keeps elements while the given predicate returns true and stops at the first element for which it returns false. The stream should be sorted on the tested value, which means: if x is the first element for which predicate.test(x) returns false, then predicate.test(y) should also return false for every element y after x. In parallel Streams, elements after the first element for which the predicate returns false may still be tested by the predicate.
      For example:
       // For sequential stream:
       Stream.of(1, 2, 3, 4, 5, 6).takeWhile(it -> it < 5).toList() ===> [1, 2, 3, 4]
       Stream.of(1, 2, 5, 6, 3, 4).takeWhile(it -> it < 5).toList() ===> [1, 2]
       Stream.of(5, 6, 1, 2, 3, 4).takeWhile(it -> it < 5).toList() ===> []
      
      
       // For parallel stream:
       Stream.of(1, 2, 3, 4, 5, 6).parallel().takeWhile(it -> it < 5).toList() ===> [1, 2, 3, 4] // Order could be different since it's a parallel stream.
       Stream.of(1, 2, 5, 6, 3, 4).parallel().takeWhile(it -> it < 5).toList() ===> [1, 2] // or [1, 2, 3] or [1, 2, 3, 4] // Order could be different since it's a parallel stream.
       Stream.of(5, 6, 1, 2, 3, 4).parallel().takeWhile(it -> it < 5).toList() ===> any subset of [1, 2, 3, 4], including [] // Order could be different since it's a parallel stream.
       
      Parameters:
      predicate -
      Returns:
    • dropWhile

      @ParallelSupported @IntermediateOp S dropWhile(P predicate)
      Removes elements while the given predicate returns true and keeps the rest, starting from the first element for which it returns false. The stream should be sorted on the tested value, which means: if x is the first element for which predicate.test(x) returns false, then predicate.test(y) should also return false for every element y after x. In parallel Streams, elements after the first element for which the predicate returns false may still be tested by the predicate.
      For example:
       // For sequential stream:
       Stream.of(1, 2, 3, 4, 5, 6).dropWhile(it -> it < 4).toList() ===> [4, 5, 6]
       Stream.of(1, 2, 5, 6, 3, 4).dropWhile(it -> it < 4).toList() ===> [5, 6, 3, 4]
       Stream.of(5, 6, 1, 2, 3, 4).dropWhile(it -> it < 4).toList() ===> [5, 6, 1, 2, 3, 4]
      
      
       // For parallel stream:
       Stream.of(1, 2, 3, 4, 5, 6).parallel().dropWhile(it -> it < 4).toList() ===> [4, 5, 6] // Order could be different since it's a parallel stream.
       Stream.of(1, 2, 5, 6, 3, 4).parallel().dropWhile(it -> it < 4).toList() ===> [5, 6, 4] // or [5, 6, 3, 4] // Order could be different since it's a parallel stream.
       Stream.of(5, 6, 1, 2, 3, 4).parallel().dropWhile(it -> it < 4).toList() ===> [5, 6] + any subset of [1, 2, 3, 4] // Order could be different since it's a parallel stream.
       
      Parameters:
      predicate -
      Returns:
    • dropWhile

      @Beta @ParallelSupported @IntermediateOp S dropWhile(P predicate, C actionOnDroppedItem)
      Removes elements while the given predicate returns true and keeps the rest, starting from the first element for which it returns false. The stream should be sorted on the tested value, which means: if x is the first element for which predicate.test(x) returns false, then predicate.test(y) should also return false for every element y after x. In parallel Streams, elements after the first element for which the predicate returns false may still be tested by the predicate.
      Parameters:
      predicate -
      actionOnDroppedItem -
      Returns:
      See Also:
      dropWhile(Object)
    • skipUntil

      @Beta @ParallelSupported @IntermediateOp S skipUntil(P predicate)
      Parameters:
      predicate -
      Returns:
      See Also:
    • removeIf

      Deprecated.
      Parameters:
      predicate -
      Returns:
    • removeIf

      @ParallelSupported @IntermediateOp @Deprecated S removeIf(P predicate, C actionOnDroppedItem)
      Deprecated.
      Parameters:
      predicate -
      actionOnDroppedItem -
      Returns:
    • split

      @SequentialOnly @IntermediateOp Stream<S> split(int chunkSize)
      Returns Stream of S with consecutive sub sequences of the elements, each of the same size (the final sequence may be smaller).
      Parameters:
      chunkSize - the desired size of each sub sequence (the last may be smaller).
      Returns:
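      For example, a sketch of the expected behavior based on the description above:
       Stream.of(1, 2, 3, 4, 5, 6, 7).split(3).forEach(s -> s.println()); // [1, 2, 3], [4, 5, 6], [7]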
    • splitToList

      @SequentialOnly @IntermediateOp Stream<PL> splitToList(int chunkSize)
      Returns Stream of PL with consecutive sub sequences of the elements, each of the same size (the final sequence may be smaller).
      This method only runs sequentially, even in parallel stream.
      Parameters:
      chunkSize - the desired size of each sub sequence (the last may be smaller).
      Returns:
    • split

      Splits the stream by the specified predicate. The stream should be sorted by the value used to determine the boundary.
      This method only runs sequentially, even in parallel stream.
      Parameters:
      predicate -
      Returns:
    • splitToList

      @SequentialOnly @IntermediateOp Stream<PL> splitToList(P predicate)
      Split the stream by the specified predicate. This method only runs sequentially, even in parallel stream.
      Parameters:
      predicate -
      Returns:
    • splitAt

      @SequentialOnly @IntermediateOp Stream<S> splitAt(int where)
      Split the stream into two pieces at where. The first piece will be loaded into memory.
      Parameters:
      where -
      Returns:
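      For example, a sketch of the expected behavior based on the description above (the first piece holds the elements before index where):
       Stream.of(1, 2, 3, 4, 5).splitAt(2).forEach(s -> s.println()); // [1, 2], [3, 4, 5]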
    • splitAt

      Splits the stream into two pieces at the position where the predicate first returns true. The first piece will be loaded into memory.
       
       Stream.of(1, 3, 2, 4, 2, 5).splitAt(it -> it >= 4).forEach(s -> s.println()); // [1, 3, 2], [4, 2, 5]
       
       
      Parameters:
      where -
      Returns:
    • sliding

      @SequentialOnly @IntermediateOp Stream<S> sliding(int windowSize)
      Parameters:
      windowSize -
      Returns:
      See Also:
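      For example, a hedged sketch assuming this single-parameter overload slides the window by one element at a time (see sliding(windowSize, increment) below):
       Stream.of(1, 2, 3, 4).sliding(2).forEach(Stream::println); // [1, 2], [2, 3], [3, 4]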
    • slidingToList

      @SequentialOnly @IntermediateOp Stream<PL> slidingToList(int windowSize)
      Parameters:
      windowSize -
      Returns:
      See Also:
    • sliding

      @SequentialOnly @IntermediateOp Stream<S> sliding(int windowSize, int increment)
      Stream.of(1, 2, 3, 4, 5, 6, 7, 8).sliding(3, 1).forEach(Stream::println)
      output:
      [1, 2, 3]
      [2, 3, 4]
      [3, 4, 5]
      [4, 5, 6]
      [5, 6, 7]
      [6, 7, 8]

      ============================================================================
      Stream.of(1, 2, 3, 4, 5, 6, 7, 8).sliding(3, 3).forEach(Stream::println)
      output:
      [1, 2, 3]
      [4, 5, 6]
      [7, 8]

      ============================================================================
      Stream.of(1, 2, 3, 4, 5, 6, 7, 8).sliding(3, 5).forEach(Stream::println)
      output:
      [1, 2, 3]
      [6, 7, 8]

      This method only runs sequentially, even in parallel stream.
      Parameters:
      windowSize -
      increment -
      Returns:
    • slidingToList

      @SequentialOnly @IntermediateOp Stream<PL> slidingToList(int windowSize, int increment)
      Parameters:
      windowSize -
      increment -
      Returns:
      See Also:
    • intersection


      This method only runs sequentially, even in parallel stream.
      Parameters:
      c -
      Returns:
      See Also:
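      For example, a hedged sketch assuming occurrence-based semantics (an element is kept at most as many times as it occurs in c), as in the related collection utilities:
       Stream.of(1, 1, 2, 3).intersection(N.asList(1, 2, 2)).toList(); // => [1, 2]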
    • difference


      This method only runs sequentially, even in parallel stream.
      Parameters:
      c -
      Returns:
      See Also:
    • symmetricDifference

      @SequentialOnly @IntermediateOp S symmetricDifference(Collection<T> c)

      This method only runs sequentially, even in parallel stream.
      Parameters:
      c -
      Returns:
      See Also:
    • reversed


      This method only runs sequentially, even in parallel stream and all elements will be loaded to memory.
      Returns:
    • rotated


      This method only runs sequentially, even in parallel stream and all elements will be loaded to memory.
      Returns:
    • shuffled


      This method only runs sequentially, even in parallel stream and all elements will be loaded to memory.
      Returns:
    • shuffled


      This method only runs sequentially, even in parallel stream and all elements will be loaded to memory.
      Returns:
    • distinct

      Returns a stream consisting of the distinct elements of this stream.
      Returns:
    • sorted

      Returns a stream consisting of the elements of this stream in sorted order.
      All elements will be loaded to memory.
      Returns:
    • reverseSorted

    • cycled


      This method only runs sequentially, even in parallel stream and retrieved elements will be saved in memory for next cycle.
      Returns:
    • cycled

      @SequentialOnly @IntermediateOp S cycled(long rounds)

      This method only runs sequentially, even in parallel stream and retrieved elements will be saved in memory for next cycle.
      Parameters:
      rounds -
      Returns:
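      For example, a hedged sketch assuming rounds specifies how many times the whole sequence is produced:
       Stream.of(1, 2, 3).cycled(2).toList(); // => [1, 2, 3, 1, 2, 3]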
    • indexed


      This method only runs sequentially, even in parallel stream.
      Returns:
    • skip

      Parameters:
      n -
      Returns:
    • skip

      @Beta @ParallelSupported @IntermediateOp S skip(long n, C consumer)
      Parameters:
      n -
      consumer -
      Returns:
    • limit

      @SequentialOnly @IntermediateOp S limit(long maxSize)
      Parameters:
      maxSize -
      Returns:
    • step

    • rateLimited

      @SequentialOnly @IntermediateOp S rateLimited(double permitsPerSecond)
      Parameters:
      permitsPerSecond -
      Returns:
      See Also:
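      A hedged usage sketch: throttle downstream processing to roughly two elements per second ("urls" and "send" are hypothetical):
       Stream.of(urls).rateLimited(2.0).forEach(url -> send(url));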
    • rateLimited

      @SequentialOnly @IntermediateOp S rateLimited(RateLimiter rateLimiter)
      Parameters:
      rateLimiter -
      Returns:
      See Also:
    • onEach

      Same as peek
      Parameters:
      action -
      Returns:
      See Also:
    • peek

    • prepend

    • prepend

    • append

    • append

    • appendIfEmpty

      @SequentialOnly @IntermediateOp S appendIfEmpty(Supplier<? extends S> supplier)
    • throwIfEmpty

      @SequentialOnly @IntermediateOp S throwIfEmpty(Supplier<? extends RuntimeException> exceptionSupplier)
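      A hedged usage sketch for the two operations above ("loadDefaults" is a hypothetical supplier of a fallback stream, "p" a hypothetical predicate):
       Stream.of(list).filter(p).appendIfEmpty(() -> loadDefaults()).toList(); // fall back to defaults if nothing matched
       Stream.of(list).filter(p).throwIfEmpty(() -> new IllegalStateException("no matching element")).toList(); // or fail fast instead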
    • applyIfNotEmpty

      @TerminalOp <R, E extends Exception> u.Optional<R> applyIfNotEmpty(Throwables.Function<? super S,R,E> func) throws E
      This is a terminal operation. That's to say this stream will be closed after this operation.
      Type Parameters:
      R -
      E -
      Parameters:
      func -
      Returns:
      Throws:
      E
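      A hedged usage sketch: the function is applied only if the stream is not empty; otherwise an empty Optional is returned ("p" is a hypothetical predicate):
       u.Optional<Long> matched = Stream.of(list).filter(p).applyIfNotEmpty(s -> s.count());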
    • acceptIfNotEmpty

      @TerminalOp <E extends Exception> If.OrElse acceptIfNotEmpty(Throwables.Consumer<? super S,E> action) throws E
      This is a terminal operation. That's to say this stream will be closed after this operation.
      Type Parameters:
      E -
      Parameters:
      action -
      Returns:
      Throws:
      E
    • join

    • join

    • percentiles


      All elements will be loaded to memory and sorted if not yet.
      Returns:
    • count

    • first

    • last

    • elementAt

      @Beta @SequentialOnly @TerminalOp OT elementAt(long position)
      Parameters:
      position - in the current stream (not the upstream or original source). It starts from 0.
      Returns:
    • onlyOne

      Returns:
      Throws:
      DuplicatedResultException - if there are more than one element in this stream.
    • toArray

    • toList

    • toSet

    • toImmutableList

    • toImmutableSet

    • toCollection

      @SequentialOnly @TerminalOp <CC extends Collection<T>> CC toCollection(Supplier<? extends CC> supplier)
    • toMultiset

    • toMultiset

      @SequentialOnly @TerminalOp Multiset<T> toMultiset(Supplier<? extends Multiset<T>> supplier)
    • toLongMultiset

    • toLongMultiset

      @SequentialOnly @TerminalOp LongMultiset<T> toLongMultiset(Supplier<? extends LongMultiset<T>> supplier)
    • println

    • iterator

      Deprecated.
      May cause a memory/resource leak if you forget to close this Stream.
      Returns an iterator for the elements of this stream.
      Remember to close this Stream after the iteration is done, if needed.
      Returns:
    • __

      @Beta @SequentialOnly @IntermediateOp <SS extends BaseStream> SS __(Function<? super S,? extends SS> transfer)
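      A hedged usage sketch: apply a reusable transformation function in the middle of a fluent chain ("commonFilters" is a hypothetical Function<Stream<T>, Stream<T>>, "f" a hypothetical mapper):
       Stream.of(list).__(commonFilters).map(f).toList();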
    • onClose

      @SequentialOnly @IntermediateOp S onClose(Runnable closeHandler)
      Parameters:
      closeHandler -
      Returns:
    • close

      void close()
      It will be called by terminal operations when they finish.
      Specified by:
      close in interface AutoCloseable
      Specified by:
      close in interface Closeable
    • isParallel

      boolean isParallel()
    • sequential

    • parallel

      Consider using sps(Function) if only the next operation needs to be parallelized. For example:
       stream.parallel().map(f).filter(p)...;

       // Replace the line above with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
       stream.sps(s -> s.map(f)).filter(p)...;
       // Or switch the stream back to a sequential stream if you don't use "sps".
       stream.parallel().map(f).sequential().filter(p)...;

      In most scenarios only one operation in the stream needs to be parallelized, so sps(Function) is recommended in most cases.
      Returns:
      See Also:
    • parallel

      @SequentialOnly @IntermediateOp S parallel(int maxThreadNum)
      Consider using sps(int, Function) if only the next operation needs to be parallelized. For example:
       stream.parallel(maxThreadNum).map(f).filter(p)...;

       // Replace the line above with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
       stream.sps(maxThreadNum, s -> s.map(f)).filter(p)...;
       // Or switch the stream back to a sequential stream if you don't use "sps".
       stream.parallel(maxThreadNum).map(f).sequential().filter(p)...;

      In most scenarios only one operation in the stream needs to be parallelized, so sps(int, Function) is recommended in most cases.
      Parameters:
      maxThreadNum -
      Returns:
      See Also:
    • parallel

      Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
       stream.parallel(splitor).map(f).filter(p)...;

       // Replace the line above with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
       stream.sps(SP.create(splitor), s -> s.map(f)).filter(p)...;
       // Or switch the stream back to a sequential stream if you don't use "sps".
       stream.parallel(splitor).map(f).sequential().filter(p)...;

      In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended in most cases.
      Parameters:
      splitor -
      Returns:
      See Also:
    • parallel

      @SequentialOnly @IntermediateOp S parallel(int maxThreadNum, BaseStream.Splitor splitor)
      Returns an equivalent stream that is parallel. May return itself if the stream was already parallel with the same maxThreadNum and splitor as the specified ones.

      When to use parallel Streams?
       
       Profiler.run(1, 1, 3, "sequential", () -> Stream.of(list).operation(F)...).printResult();
       Profiler.run(1, 1, 3, "parallel", () -> Stream.of(list).parallel().operation(F)...).printResult();
       
       
      Here is a sample performance test on a computer with an Intel i7-3520M 4-core 2.9 GHz CPU, JDK 1.8.0_101, Windows 7:
       
      
           public void test_perf() {
              final String[] strs = new String[10_000];
              N.fill(strs, N.uuid());
          
              final int m = 10;
              final Function<String, Long> mapper = str -> {
                  long result = 0;
                   for (int i = 0; i < m; i++) {
                      result += N.sum(str.toCharArray()) + 1;
                  }
                  return result;
              };
          
              final MutableLong sum = MutableLong.of(0);
          
               for (int i = 0, len = strs.length; i < len; i++) {
                  sum.add(mapper.apply(strs[i]));
              }
          
              final int threadNum = 1, loopNum = 100, roundNum = 3;
          
              Profiler.run(threadNum, loopNum, roundNum, "For Loop", () -> {
                  long result = 0;
                   for (int i = 0, len = strs.length; i < len; i++) {
                      result += mapper.apply(strs[i]);
                  }
                  assertEquals(sum.longValue(), result);
              }).printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "JDK Sequential",
                      () -> assertEquals(sum.longValue(), java.util.stream.Stream.of(strs).map(mapper).mapToLong(e -> e).sum())).printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "JDK Parallel",
                      () -> assertEquals(sum.longValue(), java.util.stream.Stream.of(strs).parallel().map(mapper).mapToLong(e -> e).sum())).printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "Abcus Sequential", () -> assertEquals(sum.longValue(), Stream.of(strs).map(mapper).mapToLong(e -> e).sum()))
                      .printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "Abcus Parallel",
                      () -> assertEquals(sum.longValue(), Stream.of(strs).parallel().map(mapper).mapToLong(e -> e).sum())).printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "Abcus Parallel by chunck", () -> assertEquals(sum.longValue(),
                      Stream.of(strs).splitToList(100).parallel().map(it -> N.sumLong(it, e -> mapper.apply(e))).mapToLong(e -> e).sum())).printResult();
           }
          
       
       
      And the test results (unit: milliseconds). N (the number of elements) is 10_000; Q (the cost per element of F, the per-element function, usually a lambda, here the mapper) is calculated as: value of 'For Loop' / N (10_000).
                            m = 1     m = 10    m = 50    m = 100   m = 500   m = 1000
      Q                     0.00002   0.0002    0.001     0.002     0.01      0.02
      For Loop              0.23      2.3       11        22        110       219
      JDK Sequential        0.28      2.3       11        22        114       212
      JDK Parallel          0.22      1.3       6         12        66        122
      Abcus Sequential      0.3       2         11        22        112       212
      Abcus Parallel        11        11        11        16        77        128
      Comparison:
      • Again, do NOT and should NOT use parallel Streams if you don't have any performance problem with sequential Streams, because using parallel Streams has extra cost.
      • Again, consider using parallel Streams only when N(the number of elements) * Q(cost per element of F, the per-element function (usually a lambda)) is big enough.
      • The implementation of parallel Streams in Abacus is more than 10 times slower than the parallel Streams in the JDK when Q is tiny (here: less than 0.0002 milliseconds in the test):
        • The implementation of parallel Streams in JDK 8 can still beat the sequential/for loop even when Q is tiny (here: 0.00002 milliseconds in the test). That's impressive, considering the extra cost brought by parallel computation. It's well done.
        • The implementation of parallel Streams in Abacus is fairly simple and straightforward. The extra cost (starting threads/synchronization/queueing...) brought by parallel Streams in Abacus is too big for tiny Q (here: less than 0.001 milliseconds in the test). But it starts to be faster than sequential Streams when Q is big enough (here: 0.001 milliseconds in the test) and starts to catch up with the parallel Streams in the JDK when Q is bigger (here: 0.01 milliseconds in the test).
        • Consider using the parallel Streams in Abacus when Q is big enough, especially when IO is involved in F, because a single IO operation (e.g. a DB/web service request, reading/writing a file...) usually takes 1 to 1000 milliseconds, or even longer. With the parallel Streams APIs in Abacus it's very simple to specify the maximum number of threads, and IO/network requests are often much faster with a few more threads. It's fair to say that the parallel Streams in Abacus are highly efficient, possibly as fast as or faster than the parallel Streams in the JDK when Q is big enough, except when F is a heavily CPU-bound operation. Most of the time, when Q is big enough to justify a parallel Stream, it's because IO/network is involved in F.
      • JDK 7 is supported by the Streams in Abacus. It works well with retrolambda on Android.
      • All primitive types are supported by the Stream APIs in Abacus except boolean.


      A bit more about Lambdas/Stream APIs: you may have heard that Lambdas/Stream APIs are 5 times slower than imperative programming. That's true when Q and F are VERY, VERY tiny, like f = (int a, int b) -> a + b;. But looking at the samples in the article: it takes less than 1 millisecond to get the max value in 100k numbers. There is a potential performance issue only if "get the max value in 100K numbers" is called many, many times in your API or in a single request; otherwise, the difference between 0.1 milliseconds and 0.5 milliseconds can be ignored entirely. Usually we meet performance issues only when Q and F are big enough, and the performance of Lambdas/Stream APIs is close to the for loop when Q and F are big enough. In either scenario, we don't need to, and should not, worry about the performance of Lambdas/Stream APIs.

      Although this is a parallel Stream, it doesn't mean all methods are executed in parallel: for some methods the sequential way is as fast as, or faster than, the parallel way, or the method is difficult, if not impossible, to implement with a parallel approach. Here are the methods which are executed sequentially even in parallel Streams:

      splitXXX/splitAt/splitBy/slidingXXX/collapse, distinct, reverse, rotate, shuffle, indexed, cached, top, kthLargest, count, toArray, toList, toSet, toMultiset, toLongMultiset, intersection(Collection c), difference(Collection c), symmetricDifference(Collection c), forEach(identity, accumulator, predicate), findFirstOrLast, findFirstAndLast
      Parameters:
      maxThreadNum - Default value is the number of cpu-cores. Steps/operations will be executed sequentially if maxThreadNum is 1.
      splitor - The target array is split into ranges for multiple threads if splitor is Splitor.ARRAY and the target stream is backed by an array. It looks like:
      
       for (int i = 0; i < maxThreadNum; i++) {
           final int sliceIndex = i;
      
           futureList.add(asyncExecutor.execute(new Runnable() {
               public void run() {
                   int cursor = fromIndex + sliceIndex * sliceSize;
                   final int to = toIndex - cursor > sliceSize ? cursor + sliceSize : toIndex;
                   while (cursor < to) {
                       action.accept(elements[cursor++]);
                   }
              }
          }));
       }
       
      Otherwise, each thread will get the elements from the target array/iterator in the stream one by one with the target array/iterator synchronized. It looks like:
      
       for (int i = 0; i < maxThreadNum; i++) {
           futureList.add(asyncExecutor.execute(new Runnable() {
               public void run() {
                   T next = null;
      
                   while (true) {
                       synchronized (elements) {
                           if (cursor.intValue() < toIndex) {
                               next = elements[cursor.getAndIncrement()];
                           } else {
                               break;
                           }
                       }
      
                       action.accept(next);
                   }
               }
           }));
       }
       
      Use Splitor.ARRAY only when F (the per-element function, usually a lambda) is very tiny and the cost of synchronizing on the target array/iterator is too big relative to it. For an F that involves IO or takes long to complete, choose Splitor.ITERATOR. The default value is Splitor.ITERATOR.

      Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
       stream.parallel(maxThreadNum, splitor).map(f).filter(p)...;

       // Replace the line above with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
       stream.sps(SP.create(maxThreadNum, splitor), s -> s.map(f)).filter(p)...;
       // Or switch the stream back to a sequential stream if you don't use "sps".
       stream.parallel(maxThreadNum, splitor).map(f).sequential().filter(p)...;

      In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended in most cases.
      Returns:
      See Also:
    • parallel

      @SequentialOnly @IntermediateOp S parallel(int maxThreadNum, BaseStream.Splitor splitor, Executor executor)
      Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
       stream.parallel(maxThreadNum, splitor, executor).map(f).filter(p)...;

       // Replace the line above with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
       stream.sps(SP.create(maxThreadNum, splitor, executor), s -> s.map(f)).filter(p)...;
       // Or switch the stream back to a sequential stream if you don't use "sps".
       stream.parallel(maxThreadNum, splitor, executor).map(f).sequential().filter(p)...;

      In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended in most cases.
      Parameters:
      maxThreadNum -
      splitor -
      executor - should be able to execute maxThreadNum * (the number of follow-up operations) tasks in parallel.
      Returns:
      See Also:
    • parallel

      @SequentialOnly @IntermediateOp S parallel(int maxThreadNum, Executor executor)
      Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
       stream.parallel(maxThreadNum, executor).map(f).filter(p)...;

       // Replace the line above with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
       stream.sps(SP.create(maxThreadNum, executor), s -> s.map(f)).filter(p)...;
       // Or switch the stream back to a sequential stream if you don't use "sps".
       stream.parallel(maxThreadNum, executor).map(f).sequential().filter(p)...;

      In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended in most cases.
      Parameters:
      maxThreadNum -
      executor - should be able to execute maxThreadNum * (the number of follow-up operations) tasks in parallel.
      Returns:
      See Also:
    • parallel

      Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
       stream.parallel(executor).map(f).filter(p)...;

       // Replace the line above with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
       stream.sps(SP.create(executor), s -> s.map(f)).filter(p)...;
       // Or switch the stream back to a sequential stream if you don't use "sps".
       stream.parallel(executor).map(f).sequential().filter(p)...;

      In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended in most cases.
      Parameters:
      executor - should be able to execute maxThreadNum * (the number of follow-up operations) tasks in parallel.
      Returns:
      See Also:
    • parallel

      Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
       stream.parallel(parallelSettings).map(f).filter(p)...;

       // Replace the line above with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
       stream.sps(SP.create(parallelSettings), s -> s.map(f)).filter(p)...;
       // Or switch the stream back to a sequential stream if you don't use "sps".
       stream.parallel(parallelSettings).map(f).sequential().filter(p)...;

      In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended in most cases.
      Parameters:
      ps -
      Returns:
      See Also:
    • sps

      @Beta @SequentialOnly @IntermediateOp <SS extends BaseStream> SS sps(Function<? super S,? extends SS> ops)
      Temporarily switches the stream to a parallel stream for the operations in ops and then switches back to a sequential stream.
      stream().parallel().ops(map/filter/...).sequential()
      Type Parameters:
      SS -
      Parameters:
      ops -
      Returns:
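      A hedged sketch mirroring the examples above: only the expensive mapping runs in parallel, while the rest of the chain stays sequential ("expensiveMapper" and "p" are hypothetical):
       Stream.of(list).sps(s -> s.map(expensiveMapper)).filter(p).toList();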
    • sps

      @Beta @SequentialOnly @IntermediateOp <SS extends BaseStream> SS sps(int maxThreadNum, Function<? super S,? extends SS> ops)
      Temporarily switches the stream to a parallel stream for the operations in ops and then switches back to a sequential stream.
      stream().parallel(maxThreadNum).ops(map/filter/...).sequential()
      Type Parameters:
      SS -
      Parameters:
      maxThreadNum -
      ops -
      Returns:
    • sps

      @Beta @SequentialOnly @IntermediateOp <SS extends BaseStream> SS sps(BaseStream.ParallelSettings ps, Function<? super S,? extends SS> ops)
      Temporarily switches the stream to a parallel stream for the operations in ops and then switches back to a sequential stream.
      stream().parallel(ps).ops(map/filter/...).sequential()
      Type Parameters:
      SS -
      Parameters:
      ps -
      ops -
      Returns:
    • psp

      @Beta @SequentialOnly @IntermediateOp <SS extends BaseStream> SS psp(Function<? super S,? extends SS> ops)
      Temporarily switches the stream to a sequential stream for the operations in ops and then switches back to a parallel stream with the same maxThreadNum/splitor/asyncExecutor.
      stream().sequential().ops(map/filter/...).parallel(sameMaxThreadNum, sameSplitor, sameAsyncExecutor)
      Type Parameters:
      SS -
      Parameters:
      ops -
      Returns:
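      A hedged sketch: run one step on a sequential stream in the middle of a parallel chain, then continue in parallel with the same settings ("f" and "g" are hypothetical mappers):
       Stream.of(list).parallel(8).map(f).psp(s -> s.limit(100)).map(g).toList();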