This module handles serializers for specific types in a central place, so we don't need to pepper the model code with annotations.
All Spark jobs extend this trait. It builds the Spark session using the Spark variables from application.conf.
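A minimal sketch of what such a base trait can look like, with hypothetical names throughout. A plain Map stands in for application.conf here; a real implementation would parse the file (e.g. with Typesafe Config) and feed the filtered settings into SparkSession.builder.

```scala
// Sketch of a shared base trait (hypothetical names): every Spark job mixes
// this in and derives its session settings from the "spark."-prefixed
// entries of the application configuration.
trait SparkJobBase {
  // Full application configuration, e.g. loaded from application.conf.
  def config: Map[String, String]

  // Only the "spark."-prefixed keys are forwarded to the session builder.
  def sparkSettings: Map[String, String] =
    config.filter { case (key, _) => key.startsWith("spark.") }
}

// Example job: declares its config and inherits the filtering.
object WordCountJob extends SparkJobBase {
  val config = Map(
    "spark.master"   -> "local[*]",
    "spark.app.name" -> "word-count",
    "db.url"         -> "jdbc:postgresql://localhost/db" // not a Spark setting
  )
}
```

Keeping the session construction in the trait means each job only declares its configuration and business logic.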
This class provides a basic text substitution engine, suitable for performing basic variable
substitution in a way that is more reliable and faster than applying a sequence of String#replace
operations.
The default start and end variable separators are both "__", but they can be configured to be anything suitable.
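The single-pass idea described above can be sketched as follows (class and method names are illustrative, not the actual API): one compiled regex built from the two separators replaces every variable in a single scan, instead of one String#replace pass per variable.

```scala
import scala.util.matching.Regex

// Minimal sketch of a substitution engine: one regex pass over the template.
// Separators default to "__" on both sides, as described above.
class Substitutor(vars: Map[String, String],
                  prefix: String = "__",
                  suffix: String = "__") {
  private val pattern: Regex =
    (Regex.quote(prefix) + "(\\w+)" + Regex.quote(suffix)).r

  // Unknown variables are left untouched rather than erased.
  def substitute(template: String): String =
    pattern.replaceAllIn(template, m =>
      Regex.quoteReplacement(vars.getOrElse(m.group(1), m.matched)))
}
```

Usage: `new Substitutor(Map("NAME" -> "world")).substitute("hello __NAME__")` yields `"hello world"`, and a custom pair such as `("${", "}")` works the same way because both separators are regex-quoted.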
HDFS does not have a file locking mechanism. We implement one here in the following way:
- A file is considered locked while its modification time is less than 5000 ms older than the current time.
- If the modification time is more than 5000 ms older than the current time, the lock is considered released.
- The owner of a lock spawns a thread that updates the file's modification time every 5 s to keep the lock.
- A process that wants the lock checks the file's modification time every 5 s until it can acquire it.
- When a process acquires the lock, it first deletes the file and then tries to create a new one; this ensures that if two processes acquire the lock at the same time, only one will succeed in recreating the file after deletion.
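The staleness rule at the heart of the scheme can be sketched in isolation (names and constants here are illustrative): a lock file counts as held only while its modification time is within the timeout of the current time, because the owner's heartbeat thread stops refreshing it once the owner dies or releases.

```scala
// Sketch of the staleness check only; the real lock also deletes and
// recreates the file on HDFS when taking ownership.
object LockClock {
  val LockTimeoutMs = 5000L // lock is free once the file is older than this
  val HeartbeatMs   = 5000L // owner refreshes the modification time this often

  // True while the lock file still counts as held by its owner.
  def isHeld(fileModTimeMs: Long, nowMs: Long): Boolean =
    nowMs - fileModTimeMs < LockTimeoutMs
}
```

Note that with the heartbeat interval equal to the timeout, a delayed heartbeat can briefly let the lock appear free; the delete-then-recreate step above is what keeps that race safe, since only one contender can recreate the file.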
To acquire the lock, call tryLock(); to release it, call release(). If you acquire and release the lock within the same call stack frame, consider calling tryExclusively() (as in exclusive(delay) { action }) instead.
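The advantage of the exclusive form is that release happens in a finally block, so the lock cannot leak if the action throws. A hedged in-memory stand-in for the HDFS lock (the class and its internals are hypothetical, only the tryLock/release/tryExclusively shape follows the description above):

```scala
import java.util.concurrent.atomic.AtomicBoolean

// In-memory stand-in for the HDFS lock, showing the loan pattern:
// tryLock, run the action, and always release afterwards.
class SimpleLock {
  private val held = new AtomicBoolean(false)

  def tryLock(): Boolean = held.compareAndSet(false, true)
  def release(): Unit    = held.set(false)

  // Runs `action` only if the lock was acquired; release cannot be skipped.
  def tryExclusively[A](action: => A): Option[A] =
    if (tryLock()) try Some(action) finally release()
    else None
}
```

Returning Option lets the caller distinguish "action ran" from "lock was busy" without exceptions.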