Class IntStream

java.lang.Object
com.landawn.abacus.util.stream.IntStream
All Implemented Interfaces:
Immutable, BaseStream<Integer,int[],IntPredicate,IntConsumer,IntList,u.OptionalInt,IndexedInt,IntIterator,IntStream>, Closeable, AutoCloseable
Direct Known Subclasses:
IntStream.IntStreamEx

@Immutable public abstract class IntStream extends Object
The Stream will be closed automatically after execution (i.e., once a terminal method is executed/triggered).
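For example (a minimal sketch using only the factory and terminal methods documented below; the stream is closed when the terminal sum() runs):

 // import com.landawn.abacus.util.stream.IntStream;
 long total = IntStream.of(1, 2, 3, 4, 5)
         .peek(System.out::println) // intermediate op: nothing executes yet, the stream stays open
         .sum();                    // terminal op: executes the pipeline and auto-closes the stream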
See Also:
  • Method Details

    • rateLimited

      public IntStream rateLimited(RateLimiter rateLimiter)
      Returns:
      See Also:
    • skipUntil

      public IntStream skipUntil(IntPredicate predicate)
      Returns:
      See Also:
    • map

    • mapToChar

      @ParallelSupported @IntermediateOp public abstract CharStream mapToChar(IntToCharFunction mapper)
    • mapToByte

      @ParallelSupported @IntermediateOp public abstract ByteStream mapToByte(IntToByteFunction mapper)
    • mapToShort

    • mapToLong

      @ParallelSupported @IntermediateOp public abstract LongStream mapToLong(IntToLongFunction mapper)
    • mapToFloat

    • mapToDouble

    • mapToObj

      @ParallelSupported @IntermediateOp public abstract <U> Stream<U> mapToObj(IntFunction<? extends U> mapper)
    • flatMap

      @ParallelSupported @IntermediateOp public abstract IntStream flatMap(IntFunction<? extends IntStream> mapper)
    • flattMap

      @ParallelSupported @IntermediateOp public abstract IntStream flattMap(IntFunction<int[]> mapper)
    • flatMapToChar

      @ParallelSupported @IntermediateOp public abstract CharStream flatMapToChar(IntFunction<? extends CharStream> mapper)
    • flatMapToByte

      @ParallelSupported @IntermediateOp public abstract ByteStream flatMapToByte(IntFunction<? extends ByteStream> mapper)
    • flatMapToShort

      @ParallelSupported @IntermediateOp public abstract ShortStream flatMapToShort(IntFunction<? extends ShortStream> mapper)
    • flatMapToLong

      @ParallelSupported @IntermediateOp public abstract LongStream flatMapToLong(IntFunction<? extends LongStream> mapper)
    • flatMapToFloat

      @ParallelSupported @IntermediateOp public abstract FloatStream flatMapToFloat(IntFunction<? extends FloatStream> mapper)
    • flatMapToDouble

      @ParallelSupported @IntermediateOp public abstract DoubleStream flatMapToDouble(IntFunction<? extends DoubleStream> mapper)
    • flatMapToObj

      @ParallelSupported @IntermediateOp public abstract <T> Stream<T> flatMapToObj(IntFunction<? extends Stream<T>> mapper)
    • flattMapToObj

      @ParallelSupported @IntermediateOp public abstract <T> Stream<T> flattMapToObj(IntFunction<? extends Collection<T>> mapper)
    • flatMappToObj

      @ParallelSupported @IntermediateOp public abstract <T> Stream<T> flatMappToObj(IntFunction<T[]> mapper)
    • mapMulti

    • mapPartial

      Note: copied from StreamEx: https://github.com/amaembo/streamex
      Type Parameters:
      R -
      Parameters:
      mapper -
      Returns:
    • mapPartialJdk

      @Beta @ParallelSupported @IntermediateOp public abstract IntStream mapPartialJdk(IntFunction<OptionalInt> mapper)
      Note: copied from StreamEx: https://github.com/amaembo/streamex
      Parameters:
      mapper -
      Returns:
    • rangeMap

      @SequentialOnly @IntermediateOp public abstract IntStream rangeMap(IntBiPredicate sameRange, IntBinaryOperator mapper)
      Note: copied from StreamEx: https://github.com/amaembo/streamex
      Returns a stream consisting of results of applying the given function to the ranges created from the source elements. This is a quasi-intermediate partial reduction operation.
      Parameters:
      sameRange - a non-interfering, stateless predicate to apply to the leftmost and next elements which returns true for elements which belong to the same range.
mapper - a non-interfering, stateless function to apply to the range borders and produce the resulting element. If the value was not merged into the interval, the mapper will receive the same value twice; otherwise it will receive the leftmost and the rightmost values which were merged into the range.
      Returns:
      See Also:
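For example, a sketch of the contract described above (the grouping and expected output follow from the predicate/mapper description, not a verified run):

 // Values within a distance of 2 from the leftmost element of the current range are merged;
 // the mapper then combines the range borders.
 IntStream.of(1, 2, 3, 10, 11, 30)
          .rangeMap((left, next) -> next - left <= 2, (left, right) -> left + right)
          .println();
 // expected: [4, 21, 60]  ->  (1 + 3), (10 + 11), (30 + 30)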
    • rangeMapToObj

      @SequentialOnly @IntermediateOp public abstract <T> Stream<T> rangeMapToObj(IntBiPredicate sameRange, IntBiFunction<T> mapper)
      Note: copied from StreamEx: https://github.com/amaembo/streamex
      Returns a stream consisting of results of applying the given function to the ranges created from the source elements. This is a quasi-intermediate partial reduction operation.
      Parameters:
      sameRange - a non-interfering, stateless predicate to apply to the leftmost and next elements which returns true for elements which belong to the same range.
mapper - a non-interfering, stateless function to apply to the range borders and produce the resulting element. If the value was not merged into the interval, the mapper will receive the same value twice; otherwise it will receive the leftmost and the rightmost values which were merged into the range.
      Returns:
      See Also:
    • collapse

      @SequentialOnly @IntermediateOp public abstract Stream<IntList> collapse(IntBiPredicate collapsible)
Merges each series of adjacent elements which satisfy the given predicate into an IntList and returns a new stream of those lists.
      This method only runs sequentially, even in parallel stream.
      Parameters:
      collapsible -
      Returns:
    • collapse

      @SequentialOnly @IntermediateOp public abstract IntStream collapse(IntBiPredicate collapsible, IntBinaryOperator mergeFunction)
Merges each series of adjacent elements which satisfy the given predicate using the merge function and returns a new stream.
      This method only runs sequentially, even in parallel stream.
      Parameters:
      collapsible -
      mergeFunction -
      Returns:
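For example, a sketch that sums each non-descending run of adjacent elements:

 IntStream.of(1, 2, 3, 2, 5, 1)
          .collapse((prev, next) -> next >= prev, Integer::sum)
          .println();
 // expected: [6, 7, 1]  ->  (1 + 2 + 3), (2 + 5), (1)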
    • collapse

      @SequentialOnly @IntermediateOp public abstract IntStream collapse(IntTriPredicate collapsible, IntBinaryOperator mergeFunction)
      Parameters:
      collapsible - test the current element with the first element and previous element in the series. The first parameter is the first element of this series, the second parameter is the previous element and the third parameter is the current element.
      mergeFunction -
      Returns:
    • scan

      @SequentialOnly @IntermediateOp public abstract IntStream scan(IntBinaryOperator accumulator)
Returns a Stream produced by iterative application of the accumulation function to the first element and subsequent elements of the current stream. Produces a Stream consisting of value1, acc(value1, value2), acc(acc(value1, value2), value3), etc.

      This is an intermediate operation.

      Example:

       accumulator: (a, b) -> a + b
       stream: [1, 2, 3, 4, 5]
       result: [1, 3, 6, 10, 15]
       

      This method only runs sequentially, even in parallel stream.
      Parameters:
      accumulator - the accumulation function
      Returns:
    • scan

      @SequentialOnly @IntermediateOp public abstract IntStream scan(int init, IntBinaryOperator accumulator)
Returns a Stream produced by iterative application of the accumulation function to the initial value init and subsequent elements of the current stream. Produces a Stream consisting of acc(init, value1), acc(acc(init, value1), value2), etc.

      This is an intermediate operation.

      Example:

       init:10
       accumulator: (a, b) -> a + b
       stream: [1, 2, 3, 4, 5]
       result: [11, 13, 16, 20, 25]
       

      This method only runs sequentially, even in parallel stream.
      Parameters:
init - the initial value. It's only used once by the accumulator to calculate the first element in the returned stream. It will be ignored if this stream is empty and won't be the first element of the returned stream.
      accumulator - the accumulation function
      Returns:
    • scan

      @SequentialOnly @IntermediateOp public abstract IntStream scan(int init, IntBinaryOperator accumulator, boolean initIncluded)
      Parameters:
      init -
      accumulator -
      initIncluded -
      Returns:
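Presumably initIncluded controls whether init itself is emitted as the first element of the returned stream; a sketch under that assumption:

 IntStream.of(1, 2, 3).scan(10, (a, b) -> a + b, true).println();  // assumed: [10, 11, 13, 16]
 IntStream.of(1, 2, 3).scan(10, (a, b) -> a + b, false).println(); // same as scan(10, acc): [11, 13, 16]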
    • prepend

      @SequentialOnly @IntermediateOp public abstract IntStream prepend(int... a)
    • append

      @SequentialOnly @IntermediateOp public abstract IntStream append(int... a)
    • appendIfEmpty

      @SequentialOnly @IntermediateOp public abstract IntStream appendIfEmpty(int... a)
    • top

      @SequentialOnly @IntermediateOp public abstract IntStream top(int n)

      This method only runs sequentially, even in parallel stream.
      Parameters:
      n -
      Returns:
    • top

      @SequentialOnly @IntermediateOp public abstract IntStream top(int n, Comparator<? super Integer> comparator)

      This method only runs sequentially, even in parallel stream.
      Parameters:
      n -
      comparator -
      Returns:
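For example, a sketch assuming top(n) selects the n largest values (sorted() is used here only to make the printed order deterministic):

 IntStream.of(5, 1, 9, 3, 7).top(3).sorted().println(); // [5, 7, 9] - the 3 largest values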
    • toIntList

      @SequentialOnly @TerminalOp public abstract IntList toIntList()
    • toMap

      @ParallelSupported @TerminalOp public abstract <K, V, E extends Exception, E2 extends Exception> Map<K,V> toMap(Throwables.IntFunction<? extends K,E> keyMapper, Throwables.IntFunction<? extends V,E2> valueMapper) throws E, E2
      Parameters:
      keyMapper -
      valueMapper -
      Returns:
      Throws:
      E extends Exception
      E2 extends Exception
      See Also:
      • Collectors#toMap(Throwables.Function, Throwables.Function)
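For example, a sketch mapping each value to its square (the mappers may throw the declared exceptions, but plain lambdas work as well):

 // import java.util.Map;
 Map<Integer, Integer> squares = IntStream.of(1, 2, 3).toMap(i -> i, i -> i * i);
 // squares = {1=1, 2=4, 3=9}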
    • toMap

      @ParallelSupported @TerminalOp public abstract <K, V, M extends Map<K, V>, E extends Exception, E2 extends Exception> M toMap(Throwables.IntFunction<? extends K,E> keyMapper, Throwables.IntFunction<? extends V,E2> valueMapper, Supplier<? extends M> mapFactory) throws E, E2
      Parameters:
      keyMapper -
      valueMapper -
      mapFactory -
      Returns:
      Throws:
      E extends Exception
      E2 extends Exception
      See Also:
      • Collectors#toMap(Throwables.Function, Throwables.Function, Supplier)
    • toMap

      @ParallelSupported @TerminalOp public abstract <K, V, E extends Exception, E2 extends Exception> Map<K,V> toMap(Throwables.IntFunction<? extends K,E> keyMapper, Throwables.IntFunction<? extends V,E2> valueMapper, BinaryOperator<V> mergeFunction) throws E, E2
      Parameters:
      keyMapper -
      valueMapper -
      mergeFunction -
      Returns:
      Throws:
      E extends Exception
      E2 extends Exception
      See Also:
      • Collectors#toMap(Throwables.Function, Throwables.Function, BinaryOperator)
    • toMap

      @ParallelSupported @TerminalOp public abstract <K, V, M extends Map<K, V>, E extends Exception, E2 extends Exception> M toMap(Throwables.IntFunction<? extends K,E> keyMapper, Throwables.IntFunction<? extends V,E2> valueMapper, BinaryOperator<V> mergeFunction, Supplier<? extends M> mapFactory) throws E, E2
      Parameters:
      keyMapper -
      valueMapper -
      mergeFunction -
      mapFactory -
      Returns:
      Throws:
      E extends Exception
      E2 extends Exception
      See Also:
      • Collectors#toMap(Throwables.Function, Throwables.Function, BinaryOperator, Supplier)
    • groupTo

      @ParallelSupported @TerminalOp public abstract <K, A, D, E extends Exception> Map<K,D> groupTo(Throwables.IntFunction<? extends K,E> keyMapper, Collector<Integer,A,D> downstream) throws E
      Parameters:
      keyMapper -
      downstream -
      Returns:
      Throws:
      E extends Exception
      See Also:
      • Collectors#groupingBy(Throwables.Function, Collector)
    • groupTo

      @ParallelSupported @TerminalOp public abstract <K, A, D, M extends Map<K, D>, E extends Exception> M groupTo(Throwables.IntFunction<? extends K,E> keyMapper, Collector<Integer,A,D> downstream, Supplier<? extends M> mapFactory) throws E
      Parameters:
      keyMapper -
      downstream -
      mapFactory -
      Returns:
      Throws:
      E extends Exception
      See Also:
      • Collectors#groupingBy(Throwables.Function, Collector, Supplier)
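For example, a sketch that groups values by parity; it assumes the downstream parameter is a standard java.util.stream.Collector, so the JDK's Collectors.counting() is used:

 // import java.util.Map; import java.util.stream.Collectors;
 Map<Boolean, Long> countsByParity = IntStream.of(1, 2, 3, 4, 5)
         .groupTo(i -> i % 2 == 0, Collectors.counting());
 // e.g. {false=3, true=2}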
    • reduce

      @ParallelSupported @TerminalOp public abstract int reduce(int identity, IntBinaryOperator op)
    • reduce

    • collect

      @ParallelSupported @TerminalOp public abstract <R> R collect(Supplier<R> supplier, ObjIntConsumer<? super R> accumulator, BiConsumer<R,R> combiner)
      Type Parameters:
      R -
      Parameters:
      supplier -
      accumulator -
      combiner -
      Returns:
      See Also:
    • collect

      @ParallelSupported @TerminalOp public abstract <R> R collect(Supplier<R> supplier, ObjIntConsumer<? super R> accumulator)
      Parameters:
      supplier -
      accumulator -
      Returns:
      See Also:
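For example, a sketch collecting the values into a java.util.List via a supplier and an accumulator:

 // import java.util.ArrayList; import java.util.List;
 List<Integer> list = IntStream.of(1, 2, 3).collect(ArrayList::new, List::add);
 // list = [1, 2, 3]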
    • forEach

      @ParallelSupported @TerminalOp public abstract <E extends Exception> void forEach(Throwables.IntConsumer<E> action) throws E
      Throws:
      E extends Exception
    • forEachIndexed

      @ParallelSupported @TerminalOp public abstract <E extends Exception> void forEachIndexed(Throwables.IndexedIntConsumer<E> action) throws E
      Throws:
      E extends Exception
    • anyMatch

      @ParallelSupported @TerminalOp public abstract <E extends Exception> boolean anyMatch(Throwables.IntPredicate<E> predicate) throws E
      Throws:
      E extends Exception
    • allMatch

      @ParallelSupported @TerminalOp public abstract <E extends Exception> boolean allMatch(Throwables.IntPredicate<E> predicate) throws E
      Throws:
      E extends Exception
    • noneMatch

      @ParallelSupported @TerminalOp public abstract <E extends Exception> boolean noneMatch(Throwables.IntPredicate<E> predicate) throws E
      Throws:
      E extends Exception
    • findFirst

      @ParallelSupported @TerminalOp public abstract <E extends Exception> u.OptionalInt findFirst(Throwables.IntPredicate<E> predicate) throws E
      Throws:
      E extends Exception
    • findFirstOrAny

      @ParallelSupported @TerminalOp public abstract <E extends Exception> u.OptionalInt findFirstOrAny(Throwables.IntPredicate<E> predicateForFirst) throws E
Returns the first element matched by predicateForFirst if found, or the first element if this stream is not empty; otherwise an empty OptionalInt will be returned.
      Type Parameters:
      E -
      Parameters:
      predicateForFirst -
      Returns:
      Throws:
      E
    • findFirstOrLast

      @ParallelSupported @TerminalOp public abstract <E extends Exception> u.OptionalInt findFirstOrLast(Throwables.IntPredicate<E> predicateForFirst) throws E
Returns the first element matched by predicateForFirst if found, or the last element if this stream is not empty; otherwise an empty OptionalInt will be returned.
      Type Parameters:
      E -
      Parameters:
      predicateForFirst -
      Returns:
      Throws:
      E
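For example:

 IntStream.of(3, 8, 2, 9).findFirstOrLast(i -> i > 5);  // first match: 8
 IntStream.of(3, 8, 2, 9).findFirstOrLast(i -> i > 10); // no match: falls back to the last element, 9
 IntStream.empty().findFirstOrLast(i -> i > 10);        // empty stream: empty OptionalInt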
    • findLast

      @ParallelSupported @TerminalOp public abstract <E extends Exception> u.OptionalInt findLast(Throwables.IntPredicate<E> predicate) throws E
      Consider using: stream.reversed().findFirst(predicate) for better performance if possible.
      Type Parameters:
      E -
      Parameters:
      predicate -
      Returns:
      Throws:
      E
    • findAny

      @ParallelSupported @TerminalOp public abstract <E extends Exception> u.OptionalInt findAny(Throwables.IntPredicate<E> predicate) throws E
      Throws:
      E extends Exception
    • min

      @SequentialOnly @TerminalOp public abstract u.OptionalInt min()
    • max

      @SequentialOnly @TerminalOp public abstract u.OptionalInt max()
    • kthLargest

      @SequentialOnly @TerminalOp public abstract u.OptionalInt kthLargest(int k)
      Parameters:
      k -
      Returns:
OptionalInt.empty() if there is no element or the count is less than k; otherwise the kth largest element.
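For example:

 IntStream.of(5, 1, 9, 3, 7).kthLargest(2); // the 2nd largest value: 7
 IntStream.of(5, 1).kthLargest(3);          // fewer than 3 elements: empty OptionalInt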
    • sum

      @SequentialOnly @TerminalOp public abstract long sum()
    • average

      @SequentialOnly @TerminalOp public abstract u.OptionalDouble average()
    • summarize

      @SequentialOnly @TerminalOp public abstract IntSummaryStatistics summarize()
    • summarizeAndPercentiles

      @SequentialOnly @TerminalOp public abstract Pair<IntSummaryStatistics,u.Optional<Map<Percentage,Integer>>> summarizeAndPercentiles()
    • merge

      Deprecated.
      replaced by mergeWith(IntStream, IntBiFunction)
      Parameters:
      b -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
      See Also:
    • mergeWith

      @SequentialOnly @IntermediateOp public abstract IntStream mergeWith(IntStream b, IntBiFunction<MergeResult> nextSelector)
      Parameters:
      b -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
    • zipWith

      @ParallelSupported @IntermediateOp public abstract IntStream zipWith(IntStream b, IntBinaryOperator zipFunction)
    • zipWith

      @ParallelSupported @IntermediateOp public abstract IntStream zipWith(IntStream b, IntStream c, IntTernaryOperator zipFunction)
    • zipWith

      @ParallelSupported @IntermediateOp public abstract IntStream zipWith(IntStream b, int valueForNoneA, int valueForNoneB, IntBinaryOperator zipFunction)
    • zipWith

      @ParallelSupported @IntermediateOp public abstract IntStream zipWith(IntStream b, IntStream c, int valueForNoneA, int valueForNoneB, int valueForNoneC, IntTernaryOperator zipFunction)
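For example, a sketch of the two-stream overloads, with and without default values for the shorter stream:

 IntStream.of(1, 2, 3).zipWith(IntStream.of(10, 20, 30), Integer::sum).println();
 // [11, 22, 33] - stops when either stream runs out

 IntStream.of(1, 2, 3).zipWith(IntStream.of(10, 20), 0, 100, Integer::sum).println();
 // [11, 22, 103] - 100 fills in for "b" after it runs out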
    • asLongStream

      @SequentialOnly @IntermediateOp public abstract LongStream asLongStream()
    • asFloatStream

      @SequentialOnly @IntermediateOp public abstract FloatStream asFloatStream()
    • asDoubleStream

      @SequentialOnly @IntermediateOp public abstract DoubleStream asDoubleStream()
    • toJdkStream

      @SequentialOnly @IntermediateOp public abstract IntStream toJdkStream()
    • boxed

      @SequentialOnly @IntermediateOp public abstract Stream<Integer> boxed()
    • iterator

      @SequentialOnly public IntIterator iterator()
      Remember to close this Stream after the iteration is done, if needed.
      Returns:
    • empty

      public static IntStream empty()
    • ofNullable

      public static IntStream ofNullable(Integer e)
    • of

      @SafeVarargs public static IntStream of(int... a)
    • of

      public static IntStream of(int[] a, int startIndex, int endIndex)
    • of

      public static IntStream of(Integer[] a)
    • of

      public static IntStream of(Integer[] a, int startIndex, int endIndex)
    • of

      public static IntStream of(Collection<Integer> c)
    • of

      public static IntStream of(IntIterator iterator)
    • of

      public static IntStream of(IntStream stream)
    • of

      public static IntStream of(u.OptionalInt op)
    • of

      public static IntStream of(OptionalInt op)
    • of

      public static IntStream of(IntBuffer buf)
    • ofCodePoints

      public static IntStream ofCodePoints(CharSequence str)
    • defer

      public static IntStream defer(Supplier<IntStream> supplier)
Lazy evaluation.
This is equivalent to: Stream.just(supplier).flatMapToInt(it -> it.get()).
      Parameters:
      supplier -
      Returns:
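For example, a sketch showing that the supplier is not invoked until a terminal operation runs:

 IntStream lazy = IntStream.defer(() -> IntStream.of(1, 2, 3)); // supplier not called yet
 lazy.println();                                                // supplier invoked here: [1, 2, 3]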
    • from

      @Beta public static IntStream from(Supplier<IntList> supplier)
      Lazy evaluation.
This is equivalent to: Stream.just(supplier).flatMapToInt(it -> it.get().stream()).
      Parameters:
      supplier -
      Returns:
    • flatten

      public static IntStream flatten(int[][] a)
    • flatten

      public static IntStream flatten(int[][] a, boolean vertically)
    • flatten

      public static IntStream flatten(int[][] a, int valueForNone, boolean vertically)
    • flatten

      public static IntStream flatten(int[][][] a)
    • from

      @Deprecated @Beta @SafeVarargs public static IntStream from(char... a)
      Deprecated.
      Parameters:
      a -
      Returns:
    • from

      @Deprecated @Beta public static IntStream from(char[] a, int fromIndex, int toIndex)
      Deprecated.
      Parameters:
      a -
      fromIndex -
      toIndex -
      Returns:
    • from

      @Deprecated @Beta @SafeVarargs public static IntStream from(byte... a)
      Deprecated.
      Parameters:
      a -
      Returns:
    • from

      @Deprecated @Beta public static IntStream from(byte[] a, int fromIndex, int toIndex)
      Deprecated.
      Parameters:
      a -
      fromIndex -
      toIndex -
      Returns:
    • from

      @Deprecated @Beta @SafeVarargs public static IntStream from(short... a)
      Deprecated.
      Parameters:
      a -
      Returns:
    • from

      @Deprecated @Beta public static IntStream from(short[] a, int fromIndex, int toIndex)
      Deprecated.
      Parameters:
      a -
      fromIndex -
      toIndex -
      Returns:
    • range

      public static IntStream range(int startInclusive, int endExclusive)
    • range

      public static IntStream range(int startInclusive, int endExclusive, int by)
    • rangeClosed

      public static IntStream rangeClosed(int startInclusive, int endInclusive)
    • rangeClosed

      public static IntStream rangeClosed(int startInclusive, int endInclusive, int by)
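For example:

 IntStream.range(0, 5).println();        // [0, 1, 2, 3, 4]
 IntStream.range(0, 10, 3).println();    // [0, 3, 6, 9]
 IntStream.rangeClosed(1, 5).println();  // [1, 2, 3, 4, 5]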
    • repeat

      public static IntStream repeat(int element, long n)
    • random

      public static IntStream random()
    • random

      public static IntStream random(int startInclusive, int endExclusive)
    • iterate

      public static IntStream iterate(BooleanSupplier hasNext, IntSupplier next)
    • iterate

      public static IntStream iterate(int init, BooleanSupplier hasNext, IntUnaryOperator f)
    • iterate

      public static IntStream iterate(int init, IntPredicate hasNext, IntUnaryOperator f)
      Parameters:
      init -
hasNext - tests whether there is a next element: hasNext.test(init) is used the first time, and hasNext.test(f.apply(previous)) for the remaining elements.
      f -
      Returns:
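For example, a sketch that keeps doubling while the current value stays below 100 (hasNext is tested against init first, then against each generated value):

 IntStream.iterate(1, i -> i < 100, i -> i * 2).println(); // [1, 2, 4, 8, 16, 32, 64]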
    • iterate

      public static IntStream iterate(int init, IntUnaryOperator f)
    • generate

      public static IntStream generate(IntSupplier s)
    • ofIndices

      public static <AC> IntStream ofIndices(AC source, BiFunction<? super AC,Integer,Integer> indexFunc)
      Parameters:
      source -
      indexFunc -
      Returns:
      See Also:
    • ofIndices

      public static <AC> IntStream ofIndices(AC source, int fromIndex, BiFunction<? super AC,Integer,Integer> indexFunc)
      Parameters:
      source -
      fromIndex -
      indexFunc -
      Returns:
      See Also:
    • ofIndices

      public static <AC> IntStream ofIndices(AC source, int fromIndex, int increment, BiFunction<? super AC,Integer,Integer> indexFunc)
       
       // Forwards:
       int[] a = {1, 2, 3, 2, 5, 1};
       IntStream.ofIndices(a, N::indexOf).println(); // [0, 5]
       IntStream.ofIndices(a, 1, N::indexOf).println(); // [5]
      
       // Backwards
       IntStream.ofIndices(a, 5, -1, N::lastIndexOf).println(); // [5, 0]
       IntStream.ofIndices(a, 4, -1, N::lastIndexOf).println(); // [0]
      
       // OR
       // Forwards:
       int[] source = { 1, 2, 3, 1, 2, 1 };
       int[] targetSubArray = { 1, 2 };
       IntStream.ofIndices(source, (a, fromIndex) -> Index.ofSubArray(a, fromIndex, targetSubArray, 0, targetSubArray.length).orElse(-1)).println(); // [0, 3]
      
       // Backwards
       IntStream.ofIndices(source, 5, -2, (a, fromIndex) -> Index.ofSubArray(a, fromIndex, targetSubArray, 0, targetSubArray.length).orElse(-1))
                          .println(); // [3, 0]
       
       
      Parameters:
      source -
      fromIndex -
      increment -
      indexFunc -
      Returns:
      See Also:
    • ofIndices

      public static <AC> IntStream ofIndices(AC source, int fromIndex, int increment, int sourceLen, BiFunction<? super AC,Integer,Integer> indexFunc)
    • concat

      @SafeVarargs public static IntStream concat(int[]... a)
    • concat

      @SafeVarargs public static IntStream concat(IntIterator... a)
    • concat

      @SafeVarargs public static IntStream concat(IntStream... a)
    • concat

      @Beta public static IntStream concat(List<int[]> c)
    • concat

      public static IntStream concat(Collection<? extends IntStream> c)
    • concatIterators

      @Beta public static IntStream concatIterators(Collection<? extends IntIterator> c)
    • zip

      public static IntStream zip(int[] a, int[] b, IntBinaryOperator zipFunction)
      Zip together the "a" and "b" arrays until one of them runs out of values. Each pair of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      Returns:
    • zip

      public static IntStream zip(int[] a, int[] b, int[] c, IntTernaryOperator zipFunction)
      Zip together the "a", "b" and "c" arrays until one of them runs out of values. Each triple of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      c -
      Returns:
    • zip

      public static IntStream zip(IntIterator a, IntIterator b, IntBinaryOperator zipFunction)
      Zip together the "a" and "b" iterators until one of them runs out of values. Each pair of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      Returns:
    • zip

      public static IntStream zip(IntIterator a, IntIterator b, IntIterator c, IntTernaryOperator zipFunction)
      Zip together the "a", "b" and "c" iterators until one of them runs out of values. Each triple of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      Returns:
    • zip

      public static IntStream zip(IntStream a, IntStream b, IntBinaryOperator zipFunction)
      Zip together the "a" and "b" streams until one of them runs out of values. Each pair of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      Returns:
    • zip

      public static IntStream zip(IntStream a, IntStream b, IntStream c, IntTernaryOperator zipFunction)
      Zip together the "a", "b" and "c" streams until one of them runs out of values. Each triple of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      Returns:
    • zip

      public static IntStream zip(Collection<? extends IntStream> c, IntNFunction<Integer> zipFunction)
Zip together the streams until one of them runs out of values. Each array of values is combined into a single value using the supplied zipFunction.
      Parameters:
      c -
      zipFunction -
      Returns:
    • zip

      public static IntStream zip(int[] a, int[] b, int valueForNoneA, int valueForNoneB, IntBinaryOperator zipFunction)
      Zip together the "a" and "b" iterators until all of them runs out of values. Each pair of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      valueForNoneA - value to fill if "a" runs out of values first.
      valueForNoneB - value to fill if "b" runs out of values first.
      zipFunction -
      Returns:
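For example:

 int[] a = { 1, 2, 3, 4 };
 int[] b = { 10, 20 };
 IntStream.zip(a, b, 0, 100, Integer::sum).println();
 // [11, 22, 103, 104] - 100 fills in for "b" after it runs out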
    • zip

      public static IntStream zip(int[] a, int[] b, int[] c, int valueForNoneA, int valueForNoneB, int valueForNoneC, IntTernaryOperator zipFunction)
      Zip together the "a", "b" and "c" iterators until all of them runs out of values. Each triple of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      c -
      valueForNoneA - value to fill if "a" runs out of values.
      valueForNoneB - value to fill if "b" runs out of values.
      valueForNoneC - value to fill if "c" runs out of values.
      zipFunction -
      Returns:
    • zip

      public static IntStream zip(IntIterator a, IntIterator b, int valueForNoneA, int valueForNoneB, IntBinaryOperator zipFunction)
      Zip together the "a" and "b" iterators until all of them runs out of values. Each pair of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      valueForNoneA - value to fill if "a" runs out of values first.
      valueForNoneB - value to fill if "b" runs out of values first.
      zipFunction -
      Returns:
    • zip

      public static IntStream zip(IntIterator a, IntIterator b, IntIterator c, int valueForNoneA, int valueForNoneB, int valueForNoneC, IntTernaryOperator zipFunction)
      Zip together the "a", "b" and "c" iterators until all of them runs out of values. Each triple of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      c -
      valueForNoneA - value to fill if "a" runs out of values.
      valueForNoneB - value to fill if "b" runs out of values.
      valueForNoneC - value to fill if "c" runs out of values.
      zipFunction -
      Returns:
    • zip

      public static IntStream zip(IntStream a, IntStream b, int valueForNoneA, int valueForNoneB, IntBinaryOperator zipFunction)
      Zip together the "a" and "b" iterators until all of them runs out of values. Each pair of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      valueForNoneA - value to fill if "a" runs out of values first.
      valueForNoneB - value to fill if "b" runs out of values first.
      zipFunction -
      Returns:
    • zip

      public static IntStream zip(IntStream a, IntStream b, IntStream c, int valueForNoneA, int valueForNoneB, int valueForNoneC, IntTernaryOperator zipFunction)
      Zip together the "a", "b" and "c" iterators until all of them runs out of values. Each triple of values is combined into a single value using the supplied zipFunction function.
      Parameters:
      a -
      b -
      c -
      valueForNoneA - value to fill if "a" runs out of values.
      valueForNoneB - value to fill if "b" runs out of values.
      valueForNoneC - value to fill if "c" runs out of values.
      zipFunction -
      Returns:
    • zip

      public static IntStream zip(Collection<? extends IntStream> c, int[] valuesForNone, IntNFunction<Integer> zipFunction)
Zip together the streams until all of them run out of values. Each array of values is combined into a single value using the supplied zipFunction.
      Parameters:
      c -
      valuesForNone - value to fill for any iterator runs out of values.
      zipFunction -
      Returns:
    • merge

      public static IntStream merge(int[] a, int[] b, IntBiFunction<MergeResult> nextSelector)
      Parameters:
      a -
      b -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
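For example, a sketch merging two sorted arrays into one sorted stream. It assumes MergeResult exposes TAKE_FIRST/TAKE_SECOND constants (the parameter description above still refers to the older Nth.FIRST):

 // import com.landawn.abacus.util.MergeResult; (assumed package)
 int[] a = { 1, 3, 5 };
 int[] b = { 2, 4, 6 };
 IntStream.merge(a, b, (x, y) -> x <= y ? MergeResult.TAKE_FIRST : MergeResult.TAKE_SECOND)
          .println(); // [1, 2, 3, 4, 5, 6]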
    • merge

      public static IntStream merge(int[] a, int[] b, int[] c, IntBiFunction<MergeResult> nextSelector)
      Parameters:
      a -
      b -
      c -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
    • merge

      public static IntStream merge(IntIterator a, IntIterator b, IntBiFunction<MergeResult> nextSelector)
      Parameters:
      a -
      b -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
    • merge

      public static IntStream merge(IntIterator a, IntIterator b, IntIterator c, IntBiFunction<MergeResult> nextSelector)
      Parameters:
      a -
      b -
      c -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
    • merge

      public static IntStream merge(IntStream a, IntStream b, IntBiFunction<MergeResult> nextSelector)
      Parameters:
      a -
      b -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
    • merge

      public static IntStream merge(IntStream a, IntStream b, IntStream c, IntBiFunction<MergeResult> nextSelector)
      Parameters:
      a -
      b -
      c -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
    • merge

      public static IntStream merge(Collection<? extends IntStream> c, IntBiFunction<MergeResult> nextSelector)
      Parameters:
      c -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
    • parallelMerge

      public static IntStream parallelMerge(Collection<? extends IntStream> c, IntBiFunction<MergeResult> nextSelector)
All the elements from each input Collection/Iterator/Stream will be merged into two queues by multiple threads, and then these two new queues will be merged into one Iterator/Stream by one thread. So it's not totally lazy evaluation and may cause an out-of-memory error if too many elements are merged into the new queues. Consider using merge, which is totally lazy.
      Parameters:
      c -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      Returns:
    • parallelMerge

      public static IntStream parallelMerge(Collection<? extends IntStream> c, IntBiFunction<MergeResult> nextSelector, int maxThreadNum)
All the elements from each input Collection/Iterator/Stream will be merged into two queues by multiple threads, and then these two new queues will be merged into one Iterator/Stream by one thread. So it's not totally lazy evaluation and may cause an out-of-memory error if too many elements are merged into the new queues. Consider using merge, which is totally lazy.
      Parameters:
      c -
      nextSelector - first parameter is selected if Nth.FIRST is returned, otherwise the second parameter is selected.
      maxThreadNum -
      Returns:
    • elementAt

      public u.OptionalInt elementAt(long position)
      Specified by:
      elementAt in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Parameters:
position - position in the current stream (not the upstream or origin source). It starts from 0.
      Returns:
    • rateLimited

      public IntStream rateLimited(double permitsPerSecond)
      Specified by:
      rateLimited in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
      See Also:
    • peek

      public IntStream peek(IntConsumer action)
      Specified by:
      peek in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • sliding

      public Stream<IntStream> sliding(int windowSize)
      Specified by:
      sliding in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
      See Also:
    • slidingToList

      public Stream<IntList> slidingToList(int windowSize)
      Specified by:
      slidingToList in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
      See Also:
    • shuffled

      public IntStream shuffled()
      Description copied from interface: BaseStream

This method only runs sequentially, even in a parallel stream, and all elements will be loaded into memory.
      Specified by:
      shuffled in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
    • throwIfEmpty

      @SequentialOnly @IntermediateOp public IntStream throwIfEmpty(Supplier<? extends RuntimeException> exceptionSupplier)
      Specified by:
      throwIfEmpty in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • toImmutableList

      public ImmutableList<Integer> toImmutableList()
      Specified by:
      toImmutableList in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • toImmutableSet

      public ImmutableSet<Integer> toImmutableSet()
      Specified by:
      toImmutableSet in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • join

      public String join(CharSequence delimiter)
      Specified by:
      join in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • println

      public void println()
      Specified by:
      println in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • __

      public <SS extends BaseStream> SS __(Function<? super IntStream,? extends SS> transfer)
      Specified by:
      __ in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • isParallel

      public boolean isParallel()
      Specified by:
      isParallel in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • sequential

      public IntStream sequential()
      Specified by:
      sequential in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • parallel

      public IntStream parallel()
      Description copied from interface: BaseStream
Consider using sps(Function) if only the next operation needs to be parallelized. For example:
 stream.parallel().map(f).filter(p)...;

 // Replace the above line with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
 stream.sps(s -> s.map(f)).filter(p)...;
 // Or switch the stream back to a sequential stream if "sps" is not used.
 stream.parallel().map(f).sequential().filter(p)...;

In most scenarios only one operation in the stream needs to be parallelized, so sps(Function) is recommended for most cases.
      Specified by:
      parallel in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
      See Also:
    • parallel

      public IntStream parallel(int maxThreadNum)
      Description copied from interface: BaseStream
Consider using sps(int, Function) if only the next operation needs to be parallelized. For example:
 stream.parallel(maxThreadNum).map(f).filter(p)...;

 // Replace the above line with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
 stream.sps(maxThreadNum, s -> s.map(f)).filter(p)...;
 // Or switch the stream back to a sequential stream if "sps" is not used.
 stream.parallel(maxThreadNum).map(f).sequential().filter(p)...;

In most scenarios only one operation in the stream needs to be parallelized, so sps(int, Function) is recommended for most cases.
      Specified by:
      parallel in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
      See Also:
    • parallel

      public IntStream parallel(BaseStream.Splitor splitor)
      Description copied from interface: BaseStream
Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
 stream.parallel(splitor).map(f).filter(p)...;

 // Replace the above line with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
 stream.sps(SP.create(splitor), s -> s.map(f)).filter(p)...;
 // Or switch the stream back to a sequential stream if "sps" is not used.
 stream.parallel(splitor).map(f).sequential().filter(p)...;

In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended for most cases.
      Specified by:
      parallel in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
      See Also:
    • parallel

      public IntStream parallel(int maxThreadNum, BaseStream.Splitor splitor)
      Description copied from interface: BaseStream
      Returns an equivalent stream that is parallel. May return itself if the stream was already parallel with the same maxThreadNum and splitor as the specified ones.

      When to use parallel Streams?
       
       Profiler.run(1, 1, 3, "sequential", () -> Stream.of(list).operation(F)...).printResult();
       Profiler.run(1, 1, 3, "parallel", () -> Stream.of(list).parallel().operation(F)...).printResult();
       
       
Here is a sample performance test on a computer with an Intel i7-3520M 4-core 2.9 GHz CPU, JDK 1.8.0_101, and Windows 7:
       
      
           public void test_perf() {
              final String[] strs = new String[10_000];
              N.fill(strs, N.uuid());
          
              final int m = 10;
              final Function<String, Long> mapper = str -> {
                  long result = 0;
                  for (int i = 0; i < m; i++) {
                      result += N.sum(str.toCharArray()) + 1;
                  }
                  return result;
              };
          
              final MutableLong sum = MutableLong.of(0);
          
              for (int i = 0, len = strs.length; i < len; i++) {
                  sum.add(mapper.apply(strs[i]));
              }
          
              final int threadNum = 1, loopNum = 100, roundNum = 3;
          
              Profiler.run(threadNum, loopNum, roundNum, "For Loop", () -> {
                  long result = 0;
                  for (int i = 0, len = strs.length; i < len; i++) {
                      result += mapper.apply(strs[i]);
                  }
                  assertEquals(sum.longValue(), result);
              }).printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "JDK Sequential",
                      () -> assertEquals(sum.longValue(), java.util.stream.Stream.of(strs).map(mapper).mapToLong(e -> e).sum())).printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "JDK Parallel",
                      () -> assertEquals(sum.longValue(), java.util.stream.Stream.of(strs).parallel().map(mapper).mapToLong(e -> e).sum())).printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "Abcus Sequential", () -> assertEquals(sum.longValue(), Stream.of(strs).map(mapper).mapToLong(e -> e).sum()))
                      .printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "Abcus Parallel",
                      () -> assertEquals(sum.longValue(), Stream.of(strs).parallel().map(mapper).mapToLong(e -> e).sum())).printResult();
          
              Profiler.run(threadNum, loopNum, roundNum, "Abcus Parallel by chunck", () -> assertEquals(sum.longValue(),
                      Stream.of(strs).splitToList(100).parallel().map(it -> N.sumLong(it, e -> mapper.apply(e))).mapToLong(e -> e).sum())).printResult();
           }
          
       
       
And the test result (unit: milliseconds). N (the number of elements) is 10_000; Q (the cost per element of F, the per-element function, usually a lambda; here it is mapper) is calculated as: value of 'For Loop' / N (10_000).
                    m = 1     m = 10    m = 50    m = 100   m = 500   m = 1000
 Q                  0.00002   0.0002    0.001     0.002     0.01      0.02
 For Loop           0.23      2.3       11        22        110       219
 JDK Sequential     0.28      2.3       11        22        114       212
 JDK Parallel       0.22      1.3       6         12        66        122
 Abcus Sequential   0.3       2         11        22        112       212
 Abcus Parallel     11        11        11        16        77        128
      Comparison:
• Again, you do NOT need and should NOT use parallel Streams if you don't have any performance problem with sequential Streams, because using parallel Streams has extra cost.
• Again, consider using parallel Streams only when N (the number of elements) * Q (the cost per element of F, the per-element function, usually a lambda) is big enough.
• The implementation of parallel Streams in Abacus is more than 10 times slower than the parallel Streams in the JDK when Q is tiny (here: less than 0.0002 milliseconds in the test):
  • The implementation of parallel Streams in JDK 8 can still beat the sequential/for loop when Q is tiny (here: 0.00002 milliseconds in the test). That's amazing, considering the extra cost brought by parallel computation. It's well done.
  • The implementation of parallel Streams in Abacus is pretty simple and straightforward. The extra cost (starting threads/synchronization/queues...) brought by parallel Streams in Abacus is too big for a tiny Q (here: less than 0.001 milliseconds in the test). But it starts to be faster than sequential Streams when Q is big enough (here: 0.001 milliseconds in the test) and starts to catch up with the parallel Streams in the JDK when Q is bigger (here: 0.01 milliseconds in the test).
  • Consider using the parallel Streams in Abacus when Q is big enough, especially when IO is involved in F, because one IO operation (e.g. a DB/web service request, reading/writing a file...) usually takes 1 to 1000 milliseconds, or even longer. With the parallel Stream APIs in Abacus it's very simple to specify the max thread number, and sometimes it's much faster to execute IO/network requests with a few more threads. It's fair to say that the parallel Streams in Abacus are highly efficient, and may be as fast as or faster than the parallel Streams in the JDK when Q is big enough, unless F is a heavily CPU-bound operation. Most of the time, the reason Q is big enough to consider a parallel Stream is that IO/network is involved in F.
• JDK 7 is supported by the Streams in Abacus. It's perfect to work with retrolambda on Android.
• All primitive types except boolean are supported by the Stream APIs in Abacus.


A bit more about Lambdas/Stream APIs: you may have heard that Lambdas/Stream APIs are 5 times slower than imperative programming. That's true when Q and F are VERY, VERY tiny, like f = (int a, int b) -> a + b;. But if we look into the samples in the article and think about it: it takes less than 1 millisecond to get the max value in 100K numbers. There is a potential performance issue only if the "get the max value in 100K numbers" call happens many, many times in your API or in a single request. Otherwise, the difference between 0.1 milliseconds and 0.5 milliseconds can be totally ignored. Usually we meet performance issues only when Q and F are big enough; however, the performance of Lambdas/Stream APIs is close to that of a for loop when Q and F are big enough. In either case, we don't need to and should not worry about the performance of Lambdas/Stream APIs.

Although it is a parallel Stream, that doesn't mean all the methods are executed in parallel. For some methods the sequential way is as fast as, or even faster than, the parallel way, or it is pretty difficult, if not impossible, to implement the method with a parallel approach. Here are the methods which are executed sequentially even in parallel Streams:

splitXXX/splitAt/splitBy/slidingXXX/collapse, distinct, reverse, rotate, shuffle, indexed, cached, top, kthLargest, count, toArray, toList, toSet, toMultiset, toLongMultiset, intersection(Collection c), difference(Collection c), symmetricDifference(Collection c), forEach(identity, accumulator, predicate), findFirstOrLast, findFirstAndLast
      Specified by:
      parallel in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Parameters:
      maxThreadNum - Default value is the number of cpu-cores. Steps/operations will be executed sequentially if maxThreadNum is 1.
splitor - The target array is split by ranges for multiple threads if splitor is Splitor.ARRAY and the target stream is composed of an array. It looks like:
      
       for (int i = 0; i < maxThreadNum; i++) {
           final int sliceIndex = i;
      
           futureList.add(asyncExecutor.execute(new Runnable() {
               public void run() {
                   int cursor = fromIndex + sliceIndex * sliceSize;
                   final int to = toIndex - cursor > sliceSize ? cursor + sliceSize : toIndex;
                   while (cursor < to) {
                       action.accept(elements[cursor++]);
                   }
             }
         }));
       }
       
      Otherwise, each thread will get the elements from the target array/iterator in the stream one by one with the target array/iterator synchronized. It looks like:
      
       for (int i = 0; i < maxThreadNum; i++) {
           futureList.add(asyncExecutor.execute(new Runnable() {
               public void run() {
                   T next = null;
      
                   while (true) {
                       synchronized (elements) {
                           if (cursor.intValue() < toIndex) {
                               next = elements[cursor.getAndIncrement()];
                           } else {
                               break;
                           }
                       }
      
                       action.accept(next);
                   }
               }
           }));
       }
       
Use Splitor.ARRAY only when F (the per-element function, usually a lambda) is very tiny and the cost of synchronization on the target array/iterator is too big for it. For an F involving IO or taking a long time to complete, choose Splitor.ITERATOR. The default value is Splitor.ITERATOR.

Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
 stream.parallel(maxThreadNum, splitor).map(f).filter(p)...;

 // Replace the above line with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
 stream.sps(SP.create(maxThreadNum, splitor), s -> s.map(f)).filter(p)...;
 // Or switch the stream back to a sequential stream if "sps" is not used.
 stream.parallel(maxThreadNum, splitor).map(f).sequential().filter(p)...;

In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended for most cases.
      Returns:
      See Also:
    • parallel

      public IntStream parallel(int maxThreadNum, Executor executor)
      Description copied from interface: BaseStream
Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
 stream.parallel(maxThreadNum, executor).map(f).filter(p)...;

 // Replace the above line with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
 stream.sps(SP.create(maxThreadNum, executor), s -> s.map(f)).filter(p)...;
 // Or switch the stream back to a sequential stream if "sps" is not used.
 stream.parallel(maxThreadNum, executor).map(f).sequential().filter(p)...;

In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended for most cases.
      Specified by:
      parallel in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
executor - should be able to execute maxThreadNum * (number of following-up operations) tasks in parallel.
      Returns:
      See Also:
    • parallel

      public IntStream parallel(Executor executor)
      Description copied from interface: BaseStream
Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
 stream.parallel(executor).map(f).filter(p)...;

 // Replace the above line with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
 stream.sps(SP.create(executor), s -> s.map(f)).filter(p)...;
 // Or switch the stream back to a sequential stream if "sps" is not used.
 stream.parallel(executor).map(f).sequential().filter(p)...;

In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended for most cases.
      Specified by:
      parallel in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Parameters:
executor - should be able to execute maxThreadNum * (number of following-up operations) tasks in parallel.
      Returns:
      See Also:
    • parallel

      public IntStream parallel(int maxThreadNum, BaseStream.Splitor splitor, Executor executor)
      Description copied from interface: BaseStream
Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
 stream.parallel(maxThreadNum, splitor, executor).map(f).filter(p)...;

 // Replace the above line with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
 stream.sps(SP.create(maxThreadNum, splitor, executor), s -> s.map(f)).filter(p)...;
 // Or switch the stream back to a sequential stream if "sps" is not used.
 stream.parallel(maxThreadNum, splitor, executor).map(f).sequential().filter(p)...;

In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended for most cases.
      Specified by:
      parallel in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
executor - should be able to execute maxThreadNum * (number of following-up operations) tasks in parallel.
      Returns:
      See Also:
    • parallel

      public IntStream parallel(BaseStream.ParallelSettings ps)
      Description copied from interface: BaseStream
Consider using sps(ParallelSettings, Function) if only the next operation needs to be parallelized. For example:
 stream.parallel(parallelSettings).map(f).filter(p)...;

 // Replace the above line with "sps" if only "f" needs to be parallelized and "p" is fast enough to run in a sequential Stream.
 stream.sps(SP.create(parallelSettings), s -> s.map(f)).filter(p)...;
 // Or switch the stream back to a sequential stream if "sps" is not used.
 stream.parallel(parallelSettings).map(f).sequential().filter(p)...;

In most scenarios only one operation in the stream needs to be parallelized, so sps(ParallelSettings, Function) is recommended for most cases.
      Specified by:
      parallel in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
      See Also:
    • sps

      public <SS extends BaseStream> SS sps(Function<? super IntStream,? extends SS> ops)
      Description copied from interface: BaseStream
Temporarily switches the stream to a parallel stream for the operation ops and then switches back to a sequential stream.
      stream().parallel().ops(map/filter/...).sequence()
      Specified by:
      sps in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
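For example, a sketch that parallelizes only the (presumably expensive) map step and keeps the rest of the pipeline sequential:

 int[] result = IntStream.of(1, 2, 3, 4)
         .sps(s -> s.map(i -> i * i)) // only this map runs on the parallel stream
         .toArray();                  // the rest of the pipeline stays sequential
 // result = [1, 4, 9, 16]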
    • sps

      public <SS extends BaseStream> SS sps(int maxThreadNum, Function<? super IntStream,? extends SS> ops)
      Description copied from interface: BaseStream
Temporarily switches the stream to a parallel stream for the operation ops and then switches back to a sequential stream.
      stream().parallel(maxThreadNum).ops(map/filter/...).sequence()
      Specified by:
      sps in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
    • sps

      public <SS extends BaseStream> SS sps(BaseStream.ParallelSettings ps, Function<? super IntStream,? extends SS> ops)
      Description copied from interface: BaseStream
Temporarily switches the stream to a parallel stream for the operation ops and then switches back to a sequential stream.
      stream().parallel(ps).ops(map/filter/...).sequence()
      Specified by:
      sps in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
    • psp

      public <SS extends BaseStream> SS psp(Function<? super IntStream,? extends SS> ops)
      Description copied from interface: BaseStream
Temporarily switches the stream to a sequential stream for the operation ops and then switches back to a parallel stream with the same maxThreadNum/splitor/asyncExecutor.
      stream().sequence().ops(map/filter/...).parallel(sameMaxThreadNum, sameSplitor, sameAsyncExecutor)
      Specified by:
      psp in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Returns:
    • toArray

      public int[] toArray()
      Specified by:
      toArray in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
    • close

      public void close()
      Description copied from interface: BaseStream
It will be called by terminal operations in their finally block.
      Specified by:
      close in interface AutoCloseable
      Specified by:
      close in interface BaseStream<T,A,P,C,PL,OT,IT,ITER extends Iterator<T>,S extends com.landawn.abacus.util.stream.StreamBase<T,A,P,C,PL,OT,IT,ITER,S>>
      Specified by:
      close in interface Closeable