
org.platanios.tensorflow.api.ops.io.data

DynamicTFRecordDataset

Related Docs: object DynamicTFRecordDataset | package data


class DynamicTFRecordDataset extends Dataset[tensors.Tensor[types.STRING], Output, types.STRING, core.Shape]

Dataset with elements read from TensorFlow record files.

Linear Supertypes

Dataset[tensors.Tensor[types.STRING], Output, types.STRING, core.Shape], AnyRef, Any

Instance Constructors

  1. new DynamicTFRecordDataset(filenames: Output, compressionType: CompressionType = NoCompression, bufferSize: Long = 256 * 1024, name: String = "TFRecordDataset")


    filenames

Scalar or vector tensor containing the name(s) of the file(s) to be read.

    compressionType

    Compression type for the file.

    bufferSize

    Number of bytes to buffer while reading from the file.

    name

    Name for this dataset.

    Attributes
    protected
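As a minimal sketch of typical construction (the constructor itself is protected, so this goes through the tf.data.TFRecordDataset factory that also appears in the shard example below; the file name and the tf.constant call are illustrative assumptions):

```scala
import org.platanios.tensorflow.api._

// Scalar file name; a rank-1 tensor of file names also works, since
// `filenames` may be a scalar or a vector.
val filenames = tf.constant("train.tfrecords")

// Defaults apply: NoCompression and a 256 * 1024 byte read buffer.
val dataset = tf.data.TFRecordDataset(filenames)
```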

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. def createHandle(): Output

Creates a VARIANT scalar tensor representing this dataset. This function adds ops to the current graph that create the dataset resource.

    Definition Classes
DynamicTFRecordDataset → Dataset
  7. def createInitializableIterator(sharedName: String = "", name: String = "InitializableIterator"): InitializableIterator[tensors.Tensor[types.STRING], Output, types.STRING, core.Shape]

Creates an Iterator for enumerating the elements of this dataset.

Note: The returned iterator will be in an uninitialized state. You must execute the InitializableIterator.initializer op before using it.

    sharedName

If non-empty, then the constructed reader will be shared under the provided name across multiple sessions that share the same devices (e.g., when using a remote server).

    name

    Name for the op created in relation to the iterator.

    returns

    Created iterator.

    Definition Classes
    Dataset
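A sketch of the initialize-then-fetch pattern described in the note above (the Session calls and the iterator.next() accessor are assumed from the broader tensorflow_scala API; the file name is illustrative):

```scala
import org.platanios.tensorflow.api._

val dataset = tf.data.TFRecordDataset(tf.constant("train.tfrecords"))
val iterator = dataset.createInitializableIterator()
val nextRecord = iterator.next()

val session = Session()
// The iterator starts uninitialized: run its initializer first.
session.run(targets = iterator.initializer)
// Each fetch then yields the next serialized record as a STRING tensor.
val record = session.run(fetches = nextRecord)
```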
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. implicit val evData: Aux[tensors.Tensor[types.STRING], Output, types.STRING, core.Shape]

    Definition Classes
    Dataset
  11. implicit val evFunctionInput: ArgType[Output]

    Definition Classes
    Dataset
  12. implicit val evStructure: Aux[tensors.Tensor[types.STRING], Output, types.STRING, core.Shape]

    Definition Classes
    Dataset
  13. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  14. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  15. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  16. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  17. val name: String

Name for this dataset.

    Definition Classes
DynamicTFRecordDataset → Dataset
  18. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  19. final def notify(): Unit

    Definition Classes
    AnyRef
  20. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  21. def outputDataTypes: types.STRING

Returns the data types corresponding to each element of this dataset, matching the structure of the elements.

    Definition Classes
DynamicTFRecordDataset → Dataset
  22. def outputShapes: core.Shape

Returns the shapes corresponding to each element of this dataset, matching the structure of the elements.

    Definition Classes
DynamicTFRecordDataset → Dataset
  23. def shard(numShards: Long, shardIndex: Long): Dataset[tensors.Tensor[types.STRING], Output, types.STRING, core.Shape]

Creates a dataset that includes only 1 / numShards of the elements of this dataset.

    This operator is very useful when running distributed training, as it allows each worker to read a unique subset of the dataset.

    When reading a single input file, you can skip elements as follows:

    tf.data.TFRecordDataset(inputFile)
      .shard(numWorkers, workerIndex)
      .repeat(numEpochs)
      .shuffle(shuffleBufferSize)
      .map(parserFn, numParallelCalls)

    Important caveats:

    • Be sure to shard before you use any randomizing operator (such as shuffle).
    • Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TensorFlow record files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
    tf.data.listFiles(pattern)
      .shard(numWorkers, workerIndex)
      .repeat(numEpochs)
      .shuffle(shuffleBufferSize)
      .interleave(tf.data.TFRecordDataset, cycleLength = numReaders, blockLength = 1)
      .map(parserFn, numParallelCalls)

    numShards

    Number of shards to use.

    shardIndex

    Index of the shard to obtain.

    returns

    Created (sharded) dataset.

    Definition Classes
    Dataset
  24. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  25. def toString(): String

    Definition Classes
    Dataset → AnyRef → Any
  26. def transform[TT, TO, TD, TS](transformFn: (Dataset[tensors.Tensor[types.STRING], Output, types.STRING, core.Shape]) ⇒ Dataset[TT, TO, TD, TS])(implicit evStructure: Aux[TT, TO, TD, TS], evT: Aux[TT, TO, TD, TS], evFunctionInputT: ArgType[TO]): Dataset[TT, TO, TD, TS]

Applies a transformation function to this dataset.

    transform() enables chaining of custom dataset transformations, which are represented as functions that take one dataset argument and return a transformed dataset.

    transformFn

    Dataset transformation function.

    returns

    Transformed dataset.

    Definition Classes
    Dataset
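For illustration, a custom transformation can be packaged as a function and applied with transform() (a sketch; repeat and shuffle are assumed members of the Dataset API, and the parameter values are arbitrary):

```scala
import org.platanios.tensorflow.api._
import org.platanios.tensorflow.api.ops.io.data.Dataset

// A reusable transformation: a function from one dataset to another,
// here repeating for `count` epochs and then shuffling.
def repeatAndShuffle[T, O, D, S](count: Long, bufferSize: Long)(
    dataset: Dataset[T, O, D, S]): Dataset[T, O, D, S] =
  dataset.repeat(count).shuffle(bufferSize)

val dataset = tf.data.TFRecordDataset(tf.constant("train.tfrecords"))
val transformed = dataset.transform(repeatAndShuffle(10, 1024))
```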
  27. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  28. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  29. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
