Interface DataSegmentPusher

    • Field Detail

      • JOINER

        static final com.google.common.base.Joiner JOINER
    • Method Detail

      • getPathForHadoop

        String getPathForHadoop()
      • push

        DataSegment push​(File file,
                         DataSegment segment,
                         boolean useUniquePath)
                  throws IOException
        Pushes index files and segment descriptor to deep storage.
        Parameters:
        file - directory containing index files
        segment - segment descriptor
useUniquePath - if true, pushes to a unique file path. This prevents situations where task failures or replica tasks either overwrite or fail to overwrite existing segments, which could leave different versions of the same segment ID containing different data. As an example, a Kafka indexing task starting at offset A and ending at offset B may push a segment to deep storage and then fail before writing the loadSpec to the metadata table, causing a replacement task to be spawned. The replacement will also start at offset A but will read to offset C, then push a segment to deep storage and write the loadSpec metadata. Without unique file paths, this works correctly only if the new segments overwrite the existing ones. Suppose that at this point the task then fails, so the supervisor retries again from offset A. This third attempt will overwrite the segments in deep storage before failing to write the loadSpec metadata, leaving the segment data now in deep storage inconsistent with the copies of the segment already loaded by historicals. If unique paths are used, the caller is responsible for cleaning up segments that were pushed but never written to the metadata table (for example, when using replica tasks).
        Returns:
        segment descriptor
        Throws:
        IOException
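The useUniquePath behavior described above can be sketched with stand-in types. SimpleSegment and targetPath below are illustrative placeholders, not Druid's real DataSegment or push implementation; they only model the path-selection contract.

```java
import java.io.File;
import java.util.UUID;

public class PushSketch {
    // Minimal stand-in for a segment descriptor (not Druid's DataSegment).
    static class SimpleSegment {
        final String id;
        SimpleSegment(String id) { this.id = id; }
    }

    // Sketch of the path choice only; the real push also uploads the index
    // files found in the given directory and declares IOException.
    static String targetPath(File dir, SimpleSegment segment, boolean useUniquePath) {
        String base = "segments/" + segment.id;
        // With useUniquePath, a random suffix keeps failed or replica attempts
        // from overwriting each other's output in deep storage.
        return useUniquePath ? base + "/" + UUID.randomUUID() : base;
    }

    public static void main(String[] args) {
        SimpleSegment seg = new SimpleSegment("wiki_2024-01-01_v1_0");
        String stable = targetPath(new File("/tmp/index"), seg, false);
        String a = targetPath(new File("/tmp/index"), seg, true);
        String b = targetPath(new File("/tmp/index"), seg, true);
        System.out.println(stable);       // same path on every attempt
        System.out.println(!a.equals(b)); // unique paths differ per attempt
    }
}
```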
      • getStorageDir

        default String getStorageDir​(DataSegment dataSegment,
                                     boolean useUniquePath)
      • getAllowedPropertyPrefixesForHadoop

        default List<String> getAllowedPropertyPrefixesForHadoop()
Property prefixes that should be added to the "allowedHadoopPrefix" config for passing down to Hadoop jobs. Each prefix, such as "druid.xxx", matches both the property "druid.xxx" itself and any property under "druid.xxx.*".
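The prefix semantics above can be sketched as a small matcher; isAllowed below is an illustrative helper, not part of the Druid API.

```java
import java.util.List;

public class PrefixSketch {
    // Sketch of the documented semantics: a prefix "druid.xxx" matches the
    // property "druid.xxx" itself and anything under "druid.xxx.".
    static boolean isAllowed(String property, List<String> prefixes) {
        for (String p : prefixes) {
            if (property.equals(p) || property.startsWith(p + ".")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> prefixes = List.of("druid.s3");
        System.out.println(isAllowed("druid.s3", prefixes));           // true
        System.out.println(isAllowed("druid.s3.accessKey", prefixes)); // true
        System.out.println(isAllowed("druid.s3backup", prefixes));     // false: not a "." boundary
    }
}
```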
      • getDefaultStorageDir

        static String getDefaultStorageDir​(DataSegment segment,
                                           boolean useUniquePath)
      • getDefaultStorageDirWithExistingUniquePath

        static String getDefaultStorageDirWithExistingUniquePath​(DataSegment segment,
                                                                 String uniquePath)
      • generateUniquePath

        static String generateUniquePath()
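To show how the static helpers above could fit together, here is a hedged sketch of composing a storage directory from segment coordinates. The field layout (dataSource/interval/version/partition, plus an optional unique suffix) and the assumption that the JOINER field joins path parts and skips nulls are illustrative, not taken from the source.

```java
import java.util.UUID;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StorageDirSketch {
    // Hedged sketch: assumes a layout of dataSource/interval/version/partition
    // with an optional unique-path suffix. Parameter names are illustrative.
    static String defaultStorageDir(String dataSource, String intervalStart,
                                    String intervalEnd, String version,
                                    int partitionNum, String uniquePath) {
        // Skipping nulls mimics an assumed skip-nulls "/" joiner, so the
        // suffix simply disappears when uniquePath is null.
        return Stream.of(dataSource,
                         intervalStart + "_" + intervalEnd,
                         version,
                         String.valueOf(partitionNum),
                         uniquePath)
                     .filter(s -> s != null)
                     .collect(Collectors.joining("/"));
    }

    // Sketch of generateUniquePath(): any collision-resistant token works.
    static String generateUniquePath() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        System.out.println(defaultStorageDir("wiki", "2024-01-01", "2024-01-02",
                                             "v1", 0, null));
        // → wiki/2024-01-01_2024-01-02/v1/0
        System.out.println(defaultStorageDir("wiki", "2024-01-01", "2024-01-02",
                                             "v1", 0, generateUniquePath()));
    }
}
```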