com.twitter.finatra.kafkastreams.transformer.stores.internal
Removes the database entries in the range ["from", "to"), i.e., including "from" and excluding "to". It is not an error if no keys exist in the range ["from", "to").
InvalidStateStoreException
if the store is not initialized
Removes the database entries in the range ["begin_key", "end_key"), i.e., including "begin_key" and excluding "end_key". Returns OK on success, and a non-OK status on error. It is not an error if no keys exist in the range ["begin_key", "end_key").
This feature is currently an experimental performance optimization for deleting very large ranges of contiguous keys. Invoking it many times or on small ranges may severely degrade read performance; in particular, the resulting performance can be worse than calling Delete() for each key in the range. Note also that the degraded read performance affects keys outside the deleted ranges, and affects database operations involving scans, such as flush and compaction.
Consider setting ReadOptions::ignore_range_deletions = true to speed up reads for key(s) that are known to be unaffected by range deletions.
Note: Changelog entries will not be deleted, so this method is best used when relying on retention.ms to delete entries from the changelog.
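As a rough sketch of the half-open ["from", "to") semantics described above, the following models range deletion over a sorted map. The `TreeMap` and the `deleteRangeModel` helper are illustrative stand-ins only, not the RocksDB-backed implementation:

```scala
import scala.collection.immutable.TreeMap

// Hypothetical model of delete-range semantics: remove every key k with
// from <= k < to (inclusive start, exclusive end)
def deleteRangeModel(store: TreeMap[String, Int], from: String, to: String): TreeMap[String, Int] =
  store.filterNot { case (k, _) => k >= from && k < to }

val store = TreeMap("a" -> 1, "b" -> 2, "c" -> 3, "d" -> 4)
val afterDelete = deleteRangeModel(store, "b", "d") // removes "b" and "c"; "d" is excluded
val emptyRange = deleteRangeModel(store, "x", "z")  // no keys in range: not an error, store unchanged
```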
Delete the value from the store (if there is one). Note: this version of delete avoids retrieving the prior value, which keyValueStore.delete does.
The key
The old value or null if there is no such key.
NullPointerException
If null is used for key.
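The difference between the two deletes can be sketched with a plain mutable map standing in for the store; the method names mirror the doc, but the implementation is a stand-in, not the Finatra API:

```scala
import scala.collection.mutable

val backing = mutable.Map("key1" -> "value1", "key2" -> "value2")

// Standard delete: returns the prior value, which costs a read
// (and, in the real store, a deserialization)
def delete(key: String): Option[String] = backing.remove(key)

// Faster variant: removes the entry without materializing the old value
def deleteWithoutGettingPriorValue(key: String): Unit = backing -= key

val removed = delete("key1")            // pays for the read to return the prior value
deleteWithoutGettingPriorValue("key2")  // nothing returned, no prior-value read
```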
Get the value corresponding to this key, or return the specified default value if no key is found.
The default value to return if key is not found in the store
The value associated with the key or the default value if the key is not found
InvalidStateStoreException
if the store is not initialized
NullPointerException
If null is used for key.
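A minimal model of the behavior described above, including the NullPointerException on a null key; the mutable map is a stand-in for the real store:

```scala
import scala.collection.mutable

val kvStore = mutable.Map("counter" -> 1)

def getOrDefault(key: String, default: Int): Int = {
  if (key == null) throw new NullPointerException("key cannot be null")
  kvStore.getOrElse(key, default)
}

val present = getOrDefault("counter", 0) // key exists: stored value returned
val missing = getOrDefault("absent", 42) // key missing: default returned
```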
A range scan starting from bytes.
Note 1: This is an API for Advanced users only
Note 2: If this RocksDB instance is configured in "prefix seek mode", then fromBytes will be used as a "prefix" and the iteration will end when the prefix is no longer part of the next element. Enabling "prefix seek mode" can be done by calling options.useFixedLengthPrefixExtractor. When enabled, prefix scans can take advantage of a prefix-based bloom filter for better seek performance. See: https://github.com/facebook/rocksdb/wiki/Prefix-Seek-API-Changes
InvalidStateStoreException
if the store is not initialized
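The prefix-seek iteration described in Note 2 can be sketched over a sorted map: seek to fromBytes and stop once keys no longer share the prefix. String keys stand in for byte arrays, and `prefixScan` is a hypothetical helper, not the store's API:

```scala
import scala.collection.immutable.TreeMap

// Model of prefix-seek: start iterating at the prefix and stop as soon as
// the next key no longer begins with it
def prefixScan(store: TreeMap[String, Int], prefix: String): List[(String, Int)] =
  store.iteratorFrom(prefix).takeWhile { case (k, _) => k.startsWith(prefix) }.toList

val keyed = TreeMap("user:1" -> 1, "user:2" -> 2, "zset:1" -> 3)
val users = prefixScan(keyed, "user:") // iteration ends before "zset:1"
```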
Get an iterator over a given range of keys. This iterator must be closed after use. The returned iterator must be safe from java.util.ConcurrentModificationExceptions and must not return null values. No ordering guarantees are provided.
Inclusive bytes to start the range scan
Exclusive bytes to end the range scan
The iterator for this range.
InvalidStateStoreException
if the store is not initialized
NullPointerException
If null is used for from or to.
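The [from, to) contract and the null checks above can be modeled with a sorted map, whose built-in range operation is already inclusive-from/exclusive-to. This is a sketch only; the real iterator must additionally be closed after use, which the model elides:

```scala
import scala.collection.immutable.TreeMap

val sorted = TreeMap("a" -> 1, "b" -> 2, "c" -> 3)

// Model of the range scan: inclusive "from", exclusive "to", NPE on nulls
def rangeScan(from: String, to: String): Iterator[(String, Int)] = {
  if (from == null || to == null) throw new NullPointerException("from and to cannot be null")
  sorted.range(from, to).iterator
}

val hits = rangeScan("a", "c").toList // "a" and "b"; "c" is excluded
```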
Get an iterator over a given range of keys. This iterator must be closed after use. The returned iterator must be safe from java.util.ConcurrentModificationExceptions and must not return null values. No ordering guarantees are provided.
Allow stale reads when querying (stale reads can occur when querying key value stores with caching enabled).
The iterator for this range.
InvalidStateStoreException
if the store is not initialized
NullPointerException
If null is used for from or to.
Register a flush listener callback that will be called every time a cached key value store entry is flushed into the underlying RocksDB store
Flush callback for cached entries
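A sketch of a write-behind cache invoking flush listeners as entries are written through to the backing store. The `registerFlushListener` name mirrors the doc; the class, fields, and flush logic here are stand-ins, not the Finatra implementation:

```scala
import scala.collection.mutable

class CachedStoreModel {
  private val cache = mutable.LinkedHashMap.empty[String, String]
  private val backing = mutable.Map.empty[String, String]
  private var listeners = List.empty[(String, String) => Unit]

  def registerFlushListener(listener: (String, String) => Unit): Unit =
    listeners ::= listener

  // Writes land in the cache only; nothing reaches the backing store yet
  def put(key: String, value: String): Unit = cache(key) = value

  // On flush, each cached entry is written through and every registered
  // listener is invoked with the flushed key and value
  def flush(): Unit = {
    cache.foreach { case (k, v) =>
      backing(k) = v
      listeners.foreach(l => l(k, v))
    }
    cache.clear()
  }

  def get(key: String): Option[String] = cache.get(key).orElse(backing.get(key))
}

val storeModel = new CachedStoreModel
val flushed = mutable.Buffer.empty[(String, String)]
storeModel.registerFlushListener((k, v) => flushed += ((k, v)))
storeModel.put("k1", "v1")
storeModel.flush() // listener fires once for the flushed entry
```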
Returns the task id of this store
the task id of this store
A write-behind caching layer around the FinatraKeyValueStore.
We cache Java objects here and then periodically flush entries into RocksDB, which involves serializing the objects into byte arrays. As such, this cache:
1) Reduces the number of reads/writes to RocksDB
2) Reduces the number of serialization/deserialization operations, which can be expensive for some classes
3) Reduces the number of publishes to the Kafka changelog topic backing this key value store
This caching does introduce a few odd corner cases :-(
1. Items in the cache have pass-by-reference semantics, but items in RocksDB have pass-by-value semantics. Modifying items after a put is a bad idea! Ideally, only immutable objects would be stored in a CachingFinatraKeyValueStore.
2. Range queries currently only work against the uncached RocksDB data. This is because sorted Java maps are much less performant than their unsorted counterparts. We typically only use range queries for queryable state, where it is OK to read stale data. If fresher data is required for range queries, decrease your commit interval or disable caching on your key value store.
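The pass-by-reference hazard in corner case 1 can be demonstrated with a plain mutable map standing in for the caching store: the cache holds the same object reference the caller holds, so mutating it after the put silently changes the "stored" value:

```scala
import scala.collection.mutable

// The map stands in for the cache; a mutable value exhibits the hazard
val cache = mutable.Map.empty[String, mutable.ListBuffer[Int]]

val value = mutable.ListBuffer(1, 2)
cache("events") = value
value += 3 // mutates the cached entry too: pass-by-reference semantics

// The cache no longer reflects the value as of the put
val seen = cache("events").toList
```

Once the entry is flushed into RocksDB it is serialized to bytes, which is why the byte-array copy has pass-by-value semantics and the cached object does not.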
This class is inspired by: https://github.com/apache/samza/blob/1.0.0/samza-kv/src/main/scala/org/apache/samza/storage/kv/CachedStore.scala