package compute
Type Members
- trait Expressions extends AnyRef
  Author: 杨博 (Yang Bo)
- trait Memory[Element] extends AnyRef
  Author: 杨博 (Yang Bo) <[email protected]>
- trait OpenCL extends MonadicCloseable[UnitContinuation] with DefaultCloseable
- trait OpenCLKernelBuilder extends AllExpressions
  Author: 杨博 (Yang Bo)
- trait Tensors extends OpenCL
- trait Trees extends Expressions
  Author: 杨博 (Yang Bo)
Value Members
- object Expressions
- object Memory extends LowPriorityMemory
- object NDimensionalAffineTransform
  Author: 杨博 (Yang Bo)
- object OpenCL
  Author: 杨博 (Yang Bo)
- object OpenCLKernelBuilder
- object Tensors
- object Trees
- object cpu extends StrictLogging with UnsafeMathOptimizations with LogContextNotification with GlobalExecutionContext with CommandQueuePool with UseAllCpuDevices with DontReleaseEventTooEarly with SynchronizedCreatingKernel with HandleEventInExecutionContextForIntelAndAMDPlatform with WangHashingRandomNumberGenerator
  Contains N-dimensional array types on CPU.

  You may want to import Tensor, which is the base type of N-dimensional arrays:

      import com.thoughtworks.compute.cpu.Tensor

  Examples:

  - Multiple Tensors of the same shape can be merged into a larger Tensor via the Tensor.join function. Given a Seq of three 2x2 Tensors,

        val mySubtensors: Seq[Tensor] = Seq(
          Tensor(Seq(Seq(1.0f, 2.0f), Seq(3.0f, 4.0f))),
          Tensor(Seq(Seq(5.0f, 6.0f), Seq(7.0f, 8.0f))),
          Tensor(Seq(Seq(9.0f, 10.0f), Seq(11.0f, 12.0f))),
        )

    when joining them,

        val merged: Tensor = Tensor.join(mySubtensors)

    then the result should be a 2x2x3 Tensor.

        merged.toString should be("[[[1.0,5.0,9.0],[2.0,6.0,10.0]],[[3.0,7.0,11.0],[4.0,8.0,12.0]]]")
        merged.shape should be(Array(2, 2, 3))

  - A Tensor can be split into smaller Tensors along a specific dimension. Given a 3D tensor whose shape is 2x3x4,

        val my3DTensor = Tensor((0.0f until 24.0f by 1.0f).grouped(4).toSeq.grouped(3).toSeq)
        my3DTensor.shape should be(Array(2, 3, 4))

    when splitting it at dimension #0,

        val subtensors0 = my3DTensor.split(dimension = 0)

    then the result should be a Seq of two 3x4 tensors.

        subtensors0.toString should be("TensorSeq([[0.0,1.0,2.0,3.0],[4.0,5.0,6.0,7.0],[8.0,9.0,10.0,11.0]], [[12.0,13.0,14.0,15.0],[16.0,17.0,18.0,19.0],[20.0,21.0,22.0,23.0]])")
        inside(subtensors0) {
          case Seq(subtensor0, subtensor1) =>
            subtensor0.shape should be(Array(3, 4))
            subtensor1.shape should be(Array(3, 4))
        }

    When splitting it at dimension #1,

        val subtensors1 = my3DTensor.split(dimension = 1)

    then the result should be a Seq of three 2x4 tensors.

        subtensors1.toString should be("TensorSeq([[0.0,1.0,2.0,3.0],[12.0,13.0,14.0,15.0]], [[4.0,5.0,6.0,7.0],[16.0,17.0,18.0,19.0]], [[8.0,9.0,10.0,11.0],[20.0,21.0,22.0,23.0]])")
        inside(subtensors1) {
          case Seq(subtensor0, subtensor1, subtensor2) =>
            subtensor0.shape should be(Array(2, 4))
            subtensor1.shape should be(Array(2, 4))
            subtensor2.shape should be(Array(2, 4))
        }

  - In Compute.scala, an N-dimensional array is typed as Tensor, which can be created from scala.collection.Seq or scala.Array.

        val my2DArray: Tensor = Tensor(Array(Seq(1.0f, 2.0f), Seq(3.0f, 4.0f)))
        my2DArray.toString should be("[[1.0,2.0],[3.0,4.0]]")
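  Beyond joining and splitting, Tensors can be combined element-wise. The following is a minimal sketch, assuming the Tensor type provides arithmetic operators such as `+` (an assumption; the operator set is not shown in this listing):

  ```scala
  import com.thoughtworks.compute.cpu.Tensor

  object ElementWiseSketch {
    def main(args: Array[String]): Unit = {
      val a = Tensor(Seq(Seq(1.0f, 2.0f), Seq(3.0f, 4.0f)))
      val b = Tensor(Seq(Seq(10.0f, 20.0f), Seq(30.0f, 40.0f)))

      // Assumed API: element-wise addition of two Tensors of the same shape.
      // The computation is only observed when the result is rendered,
      // e.g. via toString.
      val sum = a + b
      println(sum)
    }
  }
  ```

  As with the join and split examples above, no element is computed eagerly; forcing the result (here via `println`) is what triggers evaluation.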
- object gpu extends StrictLogging with UnsafeMathOptimizations with LogContextNotification with GlobalExecutionContext with CommandQueuePool with UseAllGpuDevices with DontReleaseEventTooEarly with SynchronizedCreatingKernel with HandleEventInExecutionContextForIntelAndAMDPlatform with WangHashingRandomNumberGenerator
Contains N-dimensional array types on GPU.
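  Since cpu and gpu mix in the same stack of traits, differing only in UseAllCpuDevices versus UseAllGpuDevices, code written against one back-end should port to the other by changing a single import. A hedged sketch (assumes an OpenCL GPU driver is available at runtime):

  ```scala
  // The only change from the CPU examples is the import path.
  import com.thoughtworks.compute.gpu.Tensor

  object GpuSketch {
    def main(args: Array[String]): Unit = {
      val my2DArray: Tensor = Tensor(Array(Seq(1.0f, 2.0f), Seq(3.0f, 4.0f)))
      println(my2DArray) // the CPU docs show this rendering as [[1.0,2.0],[3.0,4.0]]
    }
  }
  ```

  Keeping the two objects API-identical means back-end selection is a deployment decision rather than a code change.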