Modifier and Type | Field and Description |
---|---|
protected Fun.RecordCondition | cacheCondition |
protected ScheduledExecutorService | cacheExecutor |
protected ScheduledExecutorService | executor |
protected ScheduledExecutorService | metricsExecutor |
protected Properties | props |
protected ClassLoader | serializerClassLoader |
protected Map&lt;String,ClassLoader&gt; | serializerClassLoaderRegistry |
protected ScheduledExecutorService | storeExecutor |
Modifier | Constructor and Description |
---|---|
protected | DBMaker.Maker() Use static factory methods, or make a subclass. |
protected | DBMaker.Maker(File file) |
Modifier and Type | Method and Description |
---|---|
DBMaker.Maker | _newAppendFileDB(File file) |
DBMaker.Maker | _newArchiveFileDB(File file) |
DBMaker.Maker | _newFileDB(File file) |
DBMaker.Maker | _newHeapDB() |
DBMaker.Maker | _newMemoryDB() |
DBMaker.Maker | _newMemoryDirectDB() |
DBMaker.Maker | _newMemoryUnsafeDB() |
DBMaker.Maker | allocateIncrement(long sizeIncrement) Tells the allocator to grow the store in increments of this size. |
DBMaker.Maker | allocateRecidReuseDisable() Deprecated. This setting might be removed before the 2.0 stable release; it is very likely it will become enabled by default. |
DBMaker.Maker | allocateStartSize(long size) Tells the allocator to set the initial store size when a new store is created. |
DBMaker.Maker | asyncWriteEnable() Enables a mode where all modifications are queued and written to disk by a background writer thread. |
DBMaker.Maker | asyncWriteFlushDelay(int delay) Sets the flush interval for the write cache; the default is 0. |
DBMaker.Maker | asyncWriteQueueSize(int queueSize) Sets the size of the async write queue. |
DBMaker.Maker | cacheCondition(Fun.RecordCondition cacheCondition) Installs a callback condition which decides whether a record is to be included in the cache. |
DBMaker.Maker | cacheDisable() Deprecated. The cache is disabled by default. |
DBMaker.Maker | cacheExecutorEnable() Enables a separate executor for the cache. |
DBMaker.Maker | cacheExecutorEnable(ScheduledExecutorService metricsExecutor) Enables a separate executor for the cache. |
DBMaker.Maker | cacheExecutorPeriod(long period) Sets the interval at which the executor should check the cache. |
DBMaker.Maker | cacheHardRefEnable() Enables an unbounded hard-reference cache. |
DBMaker.Maker | cacheHashTableEnable() Fixed-size cache which uses a hash table. |
DBMaker.Maker | cacheHashTableEnable(int cacheSize) Fixed-size cache which uses a hash table. |
DBMaker.Maker | cacheLRUEnable() Enables a Least Recently Used cache. |
DBMaker.Maker | cacheSize(int cacheSize) Sets the cache size. |
DBMaker.Maker | cacheSoftRefEnable() Enables an unbounded cache which uses SoftReference. |
DBMaker.Maker | cacheWeakRefEnable() Enables an unbounded cache which uses WeakReference. |
DBMaker.Maker | checksumEnable() Adds a CRC32 checksum at the end of each record to check data integrity. |
DBMaker.Maker | closeOnJvmShutdown() Adds a JVM shutdown hook and closes the DB just before the JVM exits. |
DBMaker.Maker | commitFileSyncDisable() Deprecated. Ignored in MapDB 2 for now. |
DBMaker.Maker | compressionEnable() Enables record compression. |
protected Store.Cache | createCache(boolean disableLocks, int lockScale) |
DBMaker.Maker | deleteFilesAfterClose() Tries to delete files after the DB is closed. |
DBMaker.Maker | encryptionEnable(byte[] password) Encrypts storage using the XTEA algorithm. |
DBMaker.Maker | encryptionEnable(String password) Encrypts storage using the XTEA algorithm. |
DBMaker.Maker | executorEnable() Enables the background executor. |
protected Engine | extendSnapshotEngine(Engine engine, int lockScale) |
protected Volume.VolumeFactory | extendStoreVolumeFactory(boolean index) |
protected Engine | extendWrapSnapshotEngine(Engine engine) |
DBMaker.Maker | fileChannelEnable() Enables FileChannel access. |
DBMaker.Maker | fileLockDisable() MapDB needs an exclusive lock over the storage file it is using; this option disables it. |
DBMaker.Maker | fileLockHeartbeatEnable() MapDB needs an exclusive lock over the storage file it is using; this option replaces it with a heartbeat *.lock file. |
DBMaker.Maker | fileMmapCleanerHackEnable() Enables a cleaner hack to close mmaped files at DB.close(), rather than during garbage collection. |
DBMaker.Maker | fileMmapEnable() Enables memory-mapped files, a much faster storage option. |
DBMaker.Maker | fileMmapEnableIfSupported() Enables memory-mapped files only if the current JVM supports them (is 64-bit). |
DBMaker.Maker | fileMmapPreclearDisable() Disables the preclear workaround for JVM crashes. |
DBMaker.Maker | freeSpaceReclaimQ(int q) Deprecated. Ignored in MapDB 2 for now. |
protected static boolean | JVMSupportsLargeMappedFiles() Checks whether large files can be mapped into memory. |
DBMaker.Maker | lockDisable() Disables locks. |
DBMaker.Maker | lockScale(int scale) Sets the concurrency scale. |
DBMaker.Maker | lockSingleEnable() Disables double read-write locks and enables single read-write locks. |
DB | make() Constructs a DB using the current settings. |
protected Fun.Function1&lt;Class,String&gt; | makeClassLoader() |
Engine | makeEngine() Constructs an Engine using the current settings. |
TxMaker | makeTxMaker() |
DBMaker.Maker | metricsEnable() Enables metrics, logged at INFO level every 10 seconds. |
DBMaker.Maker | metricsEnable(long metricsLogPeriod) |
DBMaker.Maker | metricsExecutorEnable() Enables a separate executor for metrics. |
DBMaker.Maker | metricsExecutorEnable(ScheduledExecutorService metricsExecutor) Enables a separate executor for metrics. |
DBMaker.Maker | mmapFileEnable() Deprecated. Renamed to fileMmapEnable(). |
DBMaker.Maker | mmapFileEnableIfSupported() Deprecated. Renamed to fileMmapEnableIfSupported(). |
DBMaker.Maker | mmapFileEnablePartial() Deprecated. MapDB 2.0 uses a single file; no partial mapping is possible. |
protected boolean | propsGetBool(String key) |
protected int | propsGetInt(String key, int defValue) |
protected long | propsGetLong(String key, long defValue) |
protected int | propsGetRafMode() |
protected byte[] | propsGetXteaEncKey() |
DBMaker.Maker | readOnly() Opens the store in read-only mode. |
DBMaker.Maker | serializerClassLoader(ClassLoader classLoader) Sets the class loader used by the POJO serializer to load classes during deserialization. |
DBMaker.Maker | serializerRegisterClass(Class... classes) Registers classes with their class loaders. |
DBMaker.Maker | serializerRegisterClass(String className, ClassLoader classLoader) Registers a class with the given class loader. |
DBMaker.Maker | sizeLimit(double maxSize) Deprecated. Not implemented right now; will be renamed to allocate*(). |
DBMaker.Maker | snapshotEnable() MapDB supports snapshots; this option switches them on. |
DBMaker.Maker | storeExecutorEnable() Enables a separate executor for the store (async write, compaction). |
DBMaker.Maker | storeExecutorEnable(ScheduledExecutorService metricsExecutor) Enables a separate executor for the store (async write, compaction). |
DBMaker.Maker | storeExecutorPeriod(long period) Sets the interval at which the store executor runs. |
DBMaker.Maker | strictDBGet() DB get methods such as DB.treeMap(String) or DB.atomicLong(String) auto-create a new record with default values if a record with the given name does not exist; this option disables that. |
DBMaker.Maker | transactionDisable() The transaction journal is enabled by default, and you must call DB.commit() to save your changes; this option disables the journal. |
protected Fun.RecordCondition cacheCondition
protected ScheduledExecutorService executor
protected ScheduledExecutorService metricsExecutor
protected ScheduledExecutorService cacheExecutor
protected ScheduledExecutorService storeExecutor
protected ClassLoader serializerClassLoader
protected Map<String,ClassLoader> serializerClassLoaderRegistry
protected Properties props
protected DBMaker.Maker()
protected DBMaker.Maker(File file)
public DBMaker.Maker _newHeapDB()
public DBMaker.Maker _newMemoryDB()
public DBMaker.Maker _newMemoryDirectDB()
public DBMaker.Maker _newMemoryUnsafeDB()
public DBMaker.Maker _newAppendFileDB(File file)
public DBMaker.Maker _newArchiveFileDB(File file)
public DBMaker.Maker _newFileDB(File file)
public DBMaker.Maker executorEnable()
public DBMaker.Maker transactionDisable()
The transaction journal is enabled by default, and you must call DB.commit() to save your changes. It is possible to disable the transaction journal for better write performance; in this case all integrity checks are sacrificed for faster speed.
If the transaction journal is disabled, all changes are written DIRECTLY into the store. You must call DB.close() before exit, otherwise your store WILL BE CORRUPTED.
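Below is a minimal sketch of this trade-off. It assumes the MapDB 2.0 public factory DBMaker.memoryDB() (the counterpart of _newMemoryDB() above); the collection name is made up. With the journal disabled there is nothing to commit, but DB.close() must run before the JVM exits.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

import java.util.Map;

public class NoJournalExample {
    public static void main(String[] args) {
        // Journal disabled: writes go directly to the store and commit() is not needed.
        DB db = DBMaker.memoryDB()
                .transactionDisable()
                .make();

        Map map = db.treeMap("settings");   // hypothetical collection name
        map.put("greeting", "hello");

        // Without the journal, skipping close() before JVM exit corrupts a file-backed store.
        db.close();
    }
}
```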
public DBMaker.Maker metricsEnable()
public DBMaker.Maker metricsEnable(long metricsLogPeriod)
public DBMaker.Maker metricsExecutorEnable()
public DBMaker.Maker metricsExecutorEnable(ScheduledExecutorService metricsExecutor)
public DBMaker.Maker cacheExecutorEnable()
public DBMaker.Maker cacheExecutorEnable(ScheduledExecutorService metricsExecutor)
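The sketch below shows one way to wire the executor hooks together. It assumes the public DBMaker.memoryDB() factory and assumes a single ScheduledExecutorService may be shared between the metrics, cache and store executors; the pool is created and shut down by the caller.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class ExecutorWiringExample {
    public static void main(String[] args) {
        // Caller-owned pool; MapDB does not shut it down on DB.close().
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);

        DB db = DBMaker.memoryDB()
                .metricsEnable()                 // INFO-level metrics logging
                .metricsExecutorEnable(pool)     // metrics run on the shared pool
                .cacheHashTableEnable()
                .cacheExecutorEnable(pool)       // cache maintenance on the shared pool
                .storeExecutorEnable(pool)       // store work (async write, compaction)
                .make();

        db.close();
        pool.shutdown();
    }
}
```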
public DBMaker.Maker cacheExecutorPeriod(long period)
period - in ms
public DBMaker.Maker storeExecutorEnable()
public DBMaker.Maker storeExecutorEnable(ScheduledExecutorService metricsExecutor)
public DBMaker.Maker storeExecutorPeriod(long period)
period - in ms
public DBMaker.Maker cacheCondition(Fun.RecordCondition cacheCondition)
Installs a callback condition which decides whether a record is to be included in the cache. The condition should return true for every record which should be included.
This could, for example, be useful to include only BTree directory nodes in the cache and leave values and leaf nodes outside it.
!!! Warning !!!
The cache requires a **consistent** true or false for a given record. Failing to do so will result in an inconsistent cache and possible data corruption.
The condition is also executed several times, so it must be very fast. You should only use very simple logic such as value instanceof SomeClass.
public DBMaker.Maker cacheDisable()
public DBMaker.Maker cacheHardRefEnable()
Enables an unbounded hard-reference cache. This cache is good if you have a lot of available memory.
All fetched records are added to a HashMap and stored with a hard reference. To prevent OutOfMemoryError, MapDB monitors free memory; if it falls below 25%, the cache is cleared.
public DBMaker.Maker cacheSize(int cacheSize)
Sets the cache size. Interpretation depends on the cache type: for fixed-size caches (such as the FixedHashTable cache) it is the maximal number of items in the cache; for unbounded caches (such as the HardRef cache) it is the initial capacity of the underlying table (HashMap).
Default cache size is 2048.
cacheSize - new cache size
public DBMaker.Maker cacheHashTableEnable()
Fixed-size cache which uses a hash table. It is thread-safe and requires only minimal locking. Items are randomly removed and replaced on hash collisions.
This is a simple, concurrent, small-overhead, random cache.
public DBMaker.Maker cacheHashTableEnable(int cacheSize)
Fixed-size cache which uses a hash table. It is thread-safe and requires only minimal locking. Items are randomly removed and replaced on hash collisions.
This is a simple, concurrent, small-overhead, random cache.
cacheSize - new cache size
public DBMaker.Maker cacheWeakRefEnable()
Enables an unbounded cache which uses WeakReference. Items are removed from the cache by the Garbage Collector.
public DBMaker.Maker cacheSoftRefEnable()
Enables an unbounded cache which uses SoftReference. Items are removed from the cache by the Garbage Collector.
public DBMaker.Maker cacheLRUEnable()
Enables a Least Recently Used cache.
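A short sketch of cache configuration with the options above, assuming the public DBMaker.fileDB(File) factory; the file name and cache size are illustrative.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

import java.io.File;

public class CacheConfigExample {
    public static void main(String[] args) {
        // Hash-table cache with an explicit capacity instead of the default 2048.
        DB db = DBMaker.fileDB(new File("cache-demo.db"))   // hypothetical file name
                .cacheHashTableEnable(32 * 1024)            // at most 32768 cached records
                .make();
        db.close();
    }
}
```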
public DBMaker.Maker lockDisable()
Disables locks. This makes MapDB thread-unsafe. It also disables any background thread workers.
WARNING: this option is dangerous. With locks disabled, multi-threaded access could cause data corruption and crashes. MapDB does not have a fail-fast iterator or any other means of protection.
public DBMaker.Maker lockSingleEnable()
Disables double read-write locks and enables single read-write locks.
This type of locking has a smaller overhead and can be faster in mostly-write scenarios.
public DBMaker.Maker lockScale(int scale)
Sets the concurrency scale. More locks mean better scalability with multiple cores, but also higher memory overhead.
This value has to be a power of two, so it is rounded up automatically.
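For illustration, a sketch that raises the lock scale for a heavily multi-threaded workload, again assuming the public DBMaker.memoryDB() factory; 128 is an arbitrary power of two.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

public class LockScaleExample {
    public static void main(String[] args) {
        DB db = DBMaker.memoryDB()
                // 128 lock segments: better multi-core scalability, more memory overhead.
                // Non-power-of-two values would be rounded up automatically.
                .lockScale(128)
                .make();
        db.close();
    }
}
```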
public DBMaker.Maker mmapFileEnable()
Deprecated. Renamed to fileMmapEnable().
public DBMaker.Maker fileMmapEnable()
Enables memory-mapped files, a much faster storage option. However, on a 32-bit JVM this mode could corrupt your DB due to the 4 GB memory addressing limit.
You may experience a java.lang.OutOfMemoryError: Map failed exception on a 32-bit JVM if you enable this mode.
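A sketch of a portable configuration, assuming the public DBMaker.fileDB(File) factory: mmap is enabled only where the JVM supports it, and the cleaner hack (described next) releases the mapping on close().

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

import java.io.File;

public class MmapExample {
    public static void main(String[] args) {
        DB db = DBMaker.fileDB(new File("mmap-demo.db"))   // hypothetical file name
                .fileMmapEnableIfSupported()   // mmap only on 64-bit JVMs, plain file access otherwise
                .fileMmapCleanerHackEnable()   // unmap buffers at close() instead of waiting for GC
                .make();
        db.close();
    }
}
```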
public DBMaker.Maker fileMmapCleanerHackEnable()
Enables a cleaner hack to close mmaped files at DB.close(), rather than during garbage collection. See the relevant JVM bug. Please note that this option closes files, but could cause all sorts of problems, including a JVM crash.
Memory-mapped files in Java are not unmapped when the file closes. Unmapping happens when the DirectByteBuffer is garbage collected, and the delay between file close and GC could be very long, possibly even hours.
This causes the file descriptor to remain open, causing all sorts of problems:
On Windows the opened file cannot be deleted or accessed by a different process. It remains locked even after the JVM process exits, until Windows restarts. This causes problems during compaction etc.
On Linux (and other systems) opened files consume file descriptors. Eventually the JVM process could run out of available file descriptors (a couple of thousand) and would be unable to open new files or sockets.
On Oracle and OpenJDK JVMs there is an option to unmap files after closing. However, it is not officially supported and could result in all sorts of strange behaviour. In MapDB it was linked to JVM crashes, and was disabled by default in MapDB 2.0.
public DBMaker.Maker fileMmapPreclearDisable()
Disables the preclear workaround for JVM crashes. This will speed up inserts on mmap files when the store is expanded. As a side effect, the JVM might crash if there is not enough free space. TODO document more, links
public DBMaker.Maker fileLockDisable()
MapDB needs an exclusive lock over the storage file it is using.
When a single file is used by multiple DB instances at the same time, the storage file gets corrupted quickly.
To prevent multiple opening, MapDB uses FileChannel.lock(). If the file is already locked, opening it fails with DBException.FileLocked.
In some cases the file might remain locked, if the DB is not closed correctly or the JVM crashes. This option disables exclusive file locking. Use it if you have trouble reopening files.
public DBMaker.Maker fileLockHeartbeatEnable()
MapDB needs an exclusive lock over the storage file it is using.
When a single file is used by multiple DB instances at the same time, the storage file gets corrupted quickly.
To prevent multiple opening, MapDB uses FileChannel.lock(). If the file is already locked, opening it fails with DBException.FileLocked.
In some cases the file might remain locked, if the DB is not closed correctly or the JVM crashes.
This option replaces FileChannel.lock() exclusive file locking with a *.lock file. This file is periodically updated by a background thread. If the JVM dies, the lock file gets old and eventually expires. Use it if you have trouble reopening files.
This method was taken from the H2 database. It was originally written by Thomas Mueller and modified for MapDB purposes.
Original description from the H2 documentation:
If the lock file does not exist, it is created (using the atomic operation File.createNewFile). Then, the process waits a little bit (20 ms) and checks the file again. If the file was changed during this time, the operation is aborted. This protects against a race condition when one process deletes the lock file just after another one creates it, and a third process creates the file again. It does not occur if there are only two writers.
This algorithm is tested with over 100 concurrent threads. In some cases, when there are many concurrent threads trying to lock the database, they block each other (meaning the file cannot be locked by any of them) for some time. However, the file never gets locked by two threads at the same time. Using that many concurrent threads or processes is not the common use case, though. Generally, an application should throw an error to the user if it cannot open a database, and not try again in a (fast) loop.
public DBMaker.Maker mmapFileEnablePartial()
Deprecated. MapDB 2.0 uses a single file; no partial mapping is possible.
public DBMaker.Maker mmapFileEnableIfSupported()
Deprecated. Renamed to fileMmapEnableIfSupported().
public DBMaker.Maker fileMmapEnableIfSupported()
public DBMaker.Maker fileChannelEnable()
Enables FileChannel access. By default MapDB uses RandomAccessFile, which is slower and more robust but does not allow concurrent access (parallel reads and writes). RAF is still thread-safe, but has a global lock.
FileChannel does not have a global lock and is faster compared to RAF. However, memory-mapped files are probably the best choice.
public DBMaker.Maker snapshotEnable()
MapDB supports snapshots. The TxEngine requires additional locking, which has a small overhead when not used. Snapshots are disabled by default; this option switches them on.
public DBMaker.Maker asyncWriteEnable()
Enables a mode where all modifications are queued and written to disk by a background writer thread, so all modifications are performed asynchronously and do not block.
Enabling this mode might increase performance for single-threaded apps.
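A sketch of the async-write options, assuming the public DBMaker.fileDB(File) factory; the delay and queue size are illustrative values, not documented defaults.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

import java.io.File;

public class AsyncWriteExample {
    public static void main(String[] args) {
        DB db = DBMaker.fileDB(new File("async-demo.db"))   // hypothetical file name
                .asyncWriteEnable()            // modifications queued to a background writer thread
                .asyncWriteFlushDelay(100)     // keep dirty nodes in the write cache for 100 ms
                .asyncWriteQueueSize(32000)    // illustrative bound; too large risks out-of-memory
                .make();
        db.close();
    }
}
```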
public DBMaker.Maker asyncWriteFlushDelay(int delay)
Sets the flush interval for the write cache; the default is 0.
When a BTreeMap is constructed from an ordered set, tree node size increases linearly with each item added. Each time a new key is added to a tree node, its size changes and the storage needs to find a new place for it. So constructing a BTreeMap from an ordered set leads to large store fragmentation.
Setting a flush interval is a workaround, as the BTreeMap node is always updated in memory (write cache) and only the final version of the node is stored on disk.
delay - flush write cache every N milliseconds
public DBMaker.Maker asyncWriteQueueSize(int queueSize)
Sets the size of the async write queue; a default size is used if not set. Using too large a queue size can lead to an out-of-memory exception.
queueSize - size of the queue
public DBMaker.Maker deleteFilesAfterClose()
public DBMaker.Maker closeOnJvmShutdown()
public DBMaker.Maker compressionEnable()
Enables record compression.
Make sure you enable this every time you reopen the store, otherwise record de-serialization fails unpredictably.
public DBMaker.Maker encryptionEnable(String password)
Encrypts storage using the XTEA algorithm.
XTEA is a sound encryption algorithm; however, the implementation in MapDB was not peer-reviewed. MapDB only encrypts record data, so an attacker may see the number of records and their sizes.
Make sure you enable this every time you reopen the store, otherwise record de-serialization fails unpredictably.
password - for encryption
public DBMaker.Maker encryptionEnable(byte[] password)
Encrypts storage using the XTEA algorithm.
XTEA is a sound encryption algorithm; however, the implementation in MapDB was not peer-reviewed. MapDB only encrypts record data, so an attacker may see the number of records and their sizes.
Make sure you enable this every time you reopen the store, otherwise record de-serialization fails unpredictably.
password - for encryption
public DBMaker.Maker checksumEnable()
Adds a CRC32 checksum at the end of each record to check data integrity. It throws IOException("Checksum does not match, data broken") on de-serialization if data is corrupted.
Make sure you enable this every time you reopen the store, otherwise record de-serialization fails.
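The sketch below combines the record-format options; as the notes above say, the same combination has to be passed on every reopen. It assumes the public DBMaker.fileDB(File) factory; the file name and password are placeholders.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

import java.io.File;

public class RecordFormatExample {
    public static void main(String[] args) {
        // Compression, XTEA encryption and CRC32 checksums all change the record format,
        // so this exact chain must be repeated every time the file is reopened.
        DB db = DBMaker.fileDB(new File("secure-demo.db"))  // hypothetical file name
                .compressionEnable()
                .encryptionEnable("change-me")              // placeholder password
                .checksumEnable()
                .make();
        db.close();
    }
}
```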
public DBMaker.Maker strictDBGet()
DB get methods such as DB.treeMap(String) or DB.atomicLong(String) auto-create a new record with default values if a record with the given name does not exist. This could be a problem if you would like to enforce a stricter database schema, so this parameter disables record auto-creation.
If this is set, DB.getXX() will throw an exception if the given name does not exist, instead of creating a new record (or collection).
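A sketch of the stricter behaviour, assuming the public DBMaker.memoryDB() factory; the documentation above only promises "an exception", so the example catches RuntimeException broadly rather than naming a specific type.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

public class StrictGetExample {
    public static void main(String[] args) {
        DB db = DBMaker.memoryDB()
                .strictDBGet()
                .make();
        try {
            // Without strictDBGet() this call would silently create an empty map.
            db.treeMap("not-created-yet");      // hypothetical collection name
        } catch (RuntimeException e) {
            System.out.println("collection does not exist: " + e);
        }
        db.close();
    }
}
```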
public DBMaker.Maker readOnly()
Opens the store in read-only mode. Any modification attempt will throw UnsupportedOperationException("Read-only").
public DBMaker.Maker sizeLimit(double maxSize)
maxSize -
public DBMaker.Maker freeSpaceReclaimQ(int q)
public DBMaker.Maker commitFileSyncDisable()
public DBMaker.Maker allocateStartSize(long size)
public DBMaker.Maker allocateIncrement(long sizeIncrement)
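A sketch of pre-sizing a store, assuming the public DBMaker.fileDB(File) factory; both sizes are illustrative, and the byte unit is an assumption not stated on this page.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

import java.io.File;

public class AllocationExample {
    public static void main(String[] args) {
        DB db = DBMaker.fileDB(new File("prealloc-demo.db"))   // hypothetical file name
                .allocateStartSize(10 * 1024 * 1024)   // start with roughly 10 MB
                .allocateIncrement(4 * 1024 * 1024)    // grow in roughly 4 MB steps
                .make();
        db.close();
    }
}
```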
public DBMaker.Maker serializerClassLoader(ClassLoader classLoader)
public DBMaker.Maker serializerRegisterClass(String className, ClassLoader classLoader)
public DBMaker.Maker serializerRegisterClass(Class... classes)
public DBMaker.Maker allocateRecidReuseDisable()
public DB make()
protected Fun.Function1<Class,String> makeClassLoader()
public TxMaker makeTxMaker()
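A sketch of the TxMaker path, assuming the public DBMaker.memoryDB() factory and assuming TxMaker exposes makeTx() and close() (neither is shown on this page); each transaction is its own DB view that is either committed or rolled back.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.TxMaker;

import java.util.Map;

public class TxMakerExample {
    public static void main(String[] args) {
        TxMaker txMaker = DBMaker.memoryDB().makeTxMaker();

        DB tx = txMaker.makeTx();              // makeTx() is assumed from the TxMaker API
        Map accounts = tx.treeMap("accounts"); // hypothetical collection name
        accounts.put("alice", 100);
        tx.commit();                           // or tx.rollback() to discard the changes

        txMaker.close();                       // close() on TxMaker is also an assumption
    }
}
```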
public Engine makeEngine()
protected Store.Cache createCache(boolean disableLocks, int lockScale)
protected int propsGetInt(String key, int defValue)
protected long propsGetLong(String key, long defValue)
protected boolean propsGetBool(String key)
protected byte[] propsGetXteaEncKey()
protected static boolean JVMSupportsLargeMappedFiles()
protected int propsGetRafMode()
protected Volume.VolumeFactory extendStoreVolumeFactory(boolean index)
Copyright © 2015. All Rights Reserved.