Class OracleConnectorEmbeddedDebeziumConfiguration

java.lang.Object
org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
org.apache.camel.component.debezium.configuration.OracleConnectorEmbeddedDebeziumConfiguration
All Implemented Interfaces:
Cloneable

@UriParams public class OracleConnectorEmbeddedDebeziumConfiguration extends org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
  • Constructor Details

    • OracleConnectorEmbeddedDebeziumConfiguration

      public OracleConnectorEmbeddedDebeziumConfiguration()
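
      A minimal sketch of constructing and populating the configuration directly in Java. The host, credentials, and database names below are placeholders, but every setter shown is documented on this page; later sketches on this page reuse a config instance like this one.

      import org.apache.camel.component.debezium.configuration.OracleConnectorEmbeddedDebeziumConfiguration;

      public class OracleConfigExample {
          public static void main(String[] args) {
              OracleConnectorEmbeddedDebeziumConfiguration config =
                      new OracleConnectorEmbeddedDebeziumConfiguration();
              // Placeholder connection details for illustration only.
              config.setDatabaseHostname("oracle.example.com");
              config.setDatabasePort(1521);
              config.setDatabaseUser("c##dbzuser");
              config.setDatabasePassword("dbz");
              config.setDatabaseDbname("ORCLCDB");      // the CDB in a multi-tenant set-up
              config.setDatabasePdbName("ORCLPDB1");    // the pluggable database to capture
              config.setTopicPrefix("server1");         // unique namespace for emitted topics
          }
      }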
  • Method Details

    • setSnapshotLockingMode

      public void setSnapshotLockingMode(String snapshotLockingMode)
      Controls how the connector holds locks on tables while performing the schema snapshot. The default is 'shared', which means the connector will hold a table lock that prevents exclusive table access for just the initial portion of the snapshot while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table, and this is done using a flashback query that requires no locks. However, in some cases it may be desirable to avoid locks entirely which can be done by specifying 'none'. This mode is only safe to use if no schema changes are happening while the snapshot is taken.
    • getSnapshotLockingMode

      public String getSnapshotLockingMode()
    • setLogMiningBufferDropOnStop

      public void setLogMiningBufferDropOnStop(boolean logMiningBufferDropOnStop)
      When set to true the underlying buffer cache is not retained when the connector is stopped. When set to false (the default), the buffer cache is retained across restarts.
    • isLogMiningBufferDropOnStop

      public boolean isLogMiningBufferDropOnStop()
    • setMessageKeyColumns

      public void setMessageKeyColumns(String messageKeyColumns)
      A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as the message key. Each expression must match the pattern '<fully-qualified table name>:<key columns>', where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration, the table's primary key column(s) will be used as the message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id
    • getMessageKeyColumns

      public String getMessageKeyColumns()
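
      For instance, the example string above could be set as follows (a sketch, assuming a config instance and imports as in the constructor example):

      static void configureMessageKeys(OracleConnectorEmbeddedDebeziumConfiguration config) {
          // Two tables get custom keys; all other tables fall back to their primary keys.
          config.setMessageKeyColumns(
                  "dbserver1.inventory.orderlines:orderId,orderLineId;"
                + "dbserver1.inventory.orders:id");
      }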
    • setLogMiningArchiveDestinationName

      public void setLogMiningArchiveDestinationName(String logMiningArchiveDestinationName)
      Sets the specific archive log destination as the source for reading archive logs. When not set, the connector will automatically select the first LOCAL and VALID destination.
    • getLogMiningArchiveDestinationName

      public String getLogMiningArchiveDestinationName()
    • setSignalEnabledChannels

      public void setSignalEnabledChannels(String signalEnabledChannels)
      List of channel names that are enabled. The 'source' channel is enabled by default.
    • getSignalEnabledChannels

      public String getSignalEnabledChannels()
    • setIncludeSchemaChanges

      public void setIncludeSchemaChanges(boolean includeSchemaChanges)
      Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history.
    • isIncludeSchemaChanges

      public boolean isIncludeSchemaChanges()
    • setSignalDataCollection

      public void setSignalDataCollection(String signalDataCollection)
      The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set.
    • getSignalDataCollection

      public String getSignalDataCollection()
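
      A sketch of enabling signaling; DEBEZIUM.SIGNALS is a hypothetical signal table name, and the other values mirror the defaults described in the surrounding entries:

      static void configureSignaling(OracleConnectorEmbeddedDebeziumConfiguration config) {
          config.setSignalDataCollection("DEBEZIUM.SIGNALS"); // hypothetical signal table
          config.setSignalEnabledChannels("source");          // 'source' is enabled by default
          config.setSignalPollIntervalMs(5_000);              // matches the 5 s default
      }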
    • setConverters

      public void setConverters(String converters)
      Optional list of custom converters that would be used instead of the default ones. The converters are defined using the '<converter.prefix>.type' config option and configured using options '<converter.prefix>.<option>'.
    • getConverters

      public String getConverters()
    • setSnapshotFetchSize

      public void setSnapshotFetchSize(int snapshotFetchSize)
      The maximum number of records that should be loaded into memory while performing a snapshot.
    • getSnapshotFetchSize

      public int getSnapshotFetchSize()
    • setSnapshotLockTimeoutMs

      public void setSnapshotLockTimeoutMs(long snapshotLockTimeoutMs)
      The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds.
    • getSnapshotLockTimeoutMs

      public long getSnapshotLockTimeoutMs()
    • setLogMiningScnGapDetectionGapSizeMin

      public void setLogMiningScnGapDetectionGapSizeMin(long logMiningScnGapDetectionGapSizeMin)
      Used for SCN gap detection: if the difference between the current SCN and the previous end SCN is bigger than this value, and the time difference between them is smaller than log.mining.scn.gap.detection.time.interval.max.ms, it is considered an SCN gap.
    • getLogMiningScnGapDetectionGapSizeMin

      public long getLogMiningScnGapDetectionGapSizeMin()
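
      This property and log.mining.scn.gap.detection.time.interval.max.ms (documented below) combine into a single predicate. The following sketch restates that logic for readability; it is an illustration, not the connector's internal code:

      // A gap is detected when the SCN jump is large but happened within a short time.
      static boolean isScnGap(long currentScn, long previousEndScn,
                              long currentScnTimeMs, long previousEndScnTimeMs,
                              long gapSizeMin, long timeIntervalMaxMs) {
          long scnDiff = currentScn - previousEndScn;
          long timeDiffMs = currentScnTimeMs - previousEndScnTimeMs;
          return scnDiff > gapSizeMin && timeDiffMs < timeIntervalMaxMs;
      }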
    • setDatabaseDbname

      public void setDatabaseDbname(String databaseDbname)
      The name of the database from which the connector should capture changes
    • getDatabaseDbname

      public String getDatabaseDbname()
    • setSnapshotTablesOrderByRowCount

      public void setSnapshotTablesOrderByRowCount(String snapshotTablesOrderByRowCount)
      Controls the order in which tables are processed in the initial snapshot. A `descending` value will order the tables by row count descending. An `ascending` value will order the tables by row count ascending. A value of `disabled` (the default) will disable ordering by row count.
    • getSnapshotTablesOrderByRowCount

      public String getSnapshotTablesOrderByRowCount()
    • setLogMiningSleepTimeDefaultMs

      public void setLogMiningSleepTimeDefaultMs(long logMiningSleepTimeDefaultMs)
      The amount of time that the connector will sleep after reading data from redo/archive logs before starting to read again. Value is in milliseconds.
    • getLogMiningSleepTimeDefaultMs

      public long getLogMiningSleepTimeDefaultMs()
    • setSnapshotSelectStatementOverrides

      public void setSnapshotSelectStatementOverrides(String snapshotSelectStatementOverrides)
      This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connectors. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.[DB_NAME].[TABLE_NAME]' or 'snapshot.select.statement.overrides.[SCHEMA_NAME].[TABLE_NAME]', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted.
    • getSnapshotSelectStatementOverrides

      public String getSnapshotSelectStatementOverrides()
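
      A sketch under the assumption of a hypothetical INVENTORY.PRODUCTS table; note that the per-table SELECT itself lives in a separate underlying Debezium property, as described above:

      static void configureSnapshotOverrides(OracleConnectorEmbeddedDebeziumConfiguration config) {
          // Register the table whose snapshot SELECT is customized.
          config.setSnapshotSelectStatementOverrides("INVENTORY.PRODUCTS");
          // The statement itself is supplied via the underlying Debezium property
          // 'snapshot.select.statement.overrides.INVENTORY.PRODUCTS', for example:
          //   SELECT * FROM INVENTORY.PRODUCTS WHERE ID > 1000
      }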
    • setLogMiningArchiveLogOnlyScnPollIntervalMs

      public void setLogMiningArchiveLogOnlyScnPollIntervalMs(long logMiningArchiveLogOnlyScnPollIntervalMs)
      The interval in milliseconds to wait between polls checking to see if the SCN is in the archive logs.
    • getLogMiningArchiveLogOnlyScnPollIntervalMs

      public long getLogMiningArchiveLogOnlyScnPollIntervalMs()
    • setLogMiningRestartConnection

      public void setLogMiningRestartConnection(boolean logMiningRestartConnection)
      Debezium opens a database connection and keeps that connection open throughout the entire streaming phase. In some situations, this can lead to excessive SGA memory usage. By setting this option to 'true' (the default is 'false'), the connector will close and re-open a database connection after every detected log switch or if the log.mining.session.max.ms has been reached.
    • isLogMiningRestartConnection

      public boolean isLogMiningRestartConnection()
    • setTableExcludeList

      public void setTableExcludeList(String tableExcludeList)
      A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring
    • getTableExcludeList

      public String getTableExcludeList()
    • setMaxBatchSize

      public void setMaxBatchSize(int maxBatchSize)
      Maximum size of each batch of source records. Defaults to 2048.
    • getMaxBatchSize

      public int getMaxBatchSize()
    • setLogMiningBufferInfinispanCacheTransactions

      public void setLogMiningBufferInfinispanCacheTransactions(String logMiningBufferInfinispanCacheTransactions)
      Specifies the XML configuration for the Infinispan 'transactions' cache
    • getLogMiningBufferInfinispanCacheTransactions

      public String getLogMiningBufferInfinispanCacheTransactions()
    • setTopicNamingStrategy

      public void setTopicNamingStrategy(String topicNamingStrategy)
      The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, and heartbeat events, etc.
    • getTopicNamingStrategy

      public String getTopicNamingStrategy()
    • setSnapshotMode

      public void setSnapshotMode(String snapshotMode)
      The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'always': The connector runs a snapshot every time that it starts. After the snapshot completes, the connector begins to stream changes from the redo logs; 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the redo logs; 'initial_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the redo logs; 'schema_only': If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures only the schema (table structures), but not any table data. After the snapshot completes, the connector begins to stream changes from the redo logs; 'schema_only_recovery': The connector performs a snapshot that captures only the database schema history. The connector then transitions to streaming from the redo logs. Use this setting to restore a corrupted or lost database schema history topic. Do not use if the database schema was modified after the connector stopped.
    • getSnapshotMode

      public String getSnapshotMode()
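
      For example, a schema-only startup without snapshot locks might look like this (a sketch; the values are illustrative, and all setters are documented on this page):

      static void configureSnapshot(OracleConnectorEmbeddedDebeziumConfiguration config) {
          config.setSnapshotMode("schema_only");   // capture table structures, no existing rows
          config.setSnapshotLockingMode("none");   // only safe without concurrent schema changes
          config.setSnapshotDelayMs(0);
          config.setSnapshotFetchSize(2000);
      }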
    • setRetriableRestartConnectorWaitMs

      public void setRetriableRestartConnectorWaitMs(long retriableRestartConnectorWaitMs)
      Time to wait before restarting connector after retriable exception occurs. Defaults to 10000ms.
    • getRetriableRestartConnectorWaitMs

      public long getRetriableRestartConnectorWaitMs()
    • setSnapshotDelayMs

      public void setSnapshotDelayMs(long snapshotDelayMs)
      A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.
    • getSnapshotDelayMs

      public long getSnapshotDelayMs()
    • setLogMiningStrategy

      public void setLogMiningStrategy(String logMiningStrategy)
      Two mining strategies are available: using the online catalog, which is faster but captures no DDL changes, or loading the data dictionary into the REDO LOG files, which does capture DDL.
    • getLogMiningStrategy

      public String getLogMiningStrategy()
    • setSchemaHistoryInternalFileFilename

      public void setSchemaHistoryInternalFileFilename(String schemaHistoryInternalFileFilename)
      The path to the file that will be used to record the database schema history
    • getSchemaHistoryInternalFileFilename

      public String getSchemaHistoryInternalFileFilename()
    • setTombstonesOnDelete

      public void setTombstonesOnDelete(boolean tombstonesOnDelete)
      Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record has been deleted.
    • isTombstonesOnDelete

      public boolean isTombstonesOnDelete()
    • setDecimalHandlingMode

      public void setDecimalHandlingMode(String decimalHandlingMode)
      Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the same precision but will be far easier to use in consumers.
    • getDecimalHandlingMode

      public String getDecimalHandlingMode()
    • setBinaryHandlingMode

      public void setBinaryHandlingMode(String binaryHandlingMode)
      Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents binary data as byte array (default); 'base64' represents binary data as base64-encoded string; 'base64-url-safe' represents binary data as base64-url-safe-encoded string; 'hex' represents binary data as hex-encoded (base16) string
    • getBinaryHandlingMode

      public String getBinaryHandlingMode()
    • setDatabaseOutServerName

      public void setDatabaseOutServerName(String databaseOutServerName)
      Name of the XStream Out server to connect to.
    • getDatabaseOutServerName

      public String getDatabaseOutServerName()
    • setSnapshotIncludeCollectionList

      public void setSnapshotIncludeCollectionList(String snapshotIncludeCollectionList)
      This setting specifies a list of tables/collections whose snapshot must be taken when the connector is created or restarted.
    • getSnapshotIncludeCollectionList

      public String getSnapshotIncludeCollectionList()
    • setDatabasePdbName

      public void setDatabasePdbName(String databasePdbName)
      Name of the pluggable database when working with a multi-tenant set-up. The CDB name must be given via database.dbname in this case.
    • getDatabasePdbName

      public String getDatabasePdbName()
    • setDatabaseConnectionAdapter

      public void setDatabaseConnectionAdapter(String databaseConnectionAdapter)
      The adapter to use when capturing changes from the database. Options include: 'logminer': (the default) to capture changes using native Oracle LogMiner; 'xstream' to capture changes using Oracle XStreams
    • getDatabaseConnectionAdapter

      public String getDatabaseConnectionAdapter()
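
      A sketch of choosing the adapter; 'dbzxout' is a hypothetical XStream Out server name:

      static void chooseAdapter(OracleConnectorEmbeddedDebeziumConfiguration config) {
          config.setDatabaseConnectionAdapter("logminer");  // the default
          // Alternatively, XStream also requires the Out server name:
          // config.setDatabaseConnectionAdapter("xstream");
          // config.setDatabaseOutServerName("dbzxout");    // hypothetical server name
      }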
    • setLogMiningFlushTableName

      public void setLogMiningFlushTableName(String logMiningFlushTableName)
      The name of the flush table used by the connector, defaults to LOG_MINING_FLUSH.
    • getLogMiningFlushTableName

      public String getLogMiningFlushTableName()
    • setLogMiningBufferType

      public void setLogMiningBufferType(String logMiningBufferType)
      The buffer type controls how the connector manages buffering transaction data. memory - Uses the JVM process' heap to buffer all transaction data. infinispan_embedded - This option uses an embedded Infinispan cache to buffer transaction data and persist it to disk. infinispan_remote - This option uses a remote Infinispan cluster to buffer transaction data and persist it to disk.
    • getLogMiningBufferType

      public String getLogMiningBufferType()
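
      A sketch of switching to embedded Infinispan buffering. The cache XML is a hypothetical minimal definition (element names and the file-store path are assumptions); real deployments would tune it:

      static void configureBuffer(OracleConnectorEmbeddedDebeziumConfiguration config) {
          config.setLogMiningBufferType("infinispan_embedded");
          // Hypothetical minimal cache definition; tune for real deployments.
          config.setLogMiningBufferInfinispanCacheTransactions(
                  "<local-cache name=\"transactions\">"
                + "<persistence><file-store path=\"/var/dbz/cache\"/></persistence>"
                + "</local-cache>");
          config.setLogMiningBufferDropOnStop(false);  // retain the cache across restarts
      }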
    • setSignalPollIntervalMs

      public void setSignalPollIntervalMs(long signalPollIntervalMs)
      Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.
    • getSignalPollIntervalMs

      public long getSignalPollIntervalMs()
    • setNotificationEnabledChannels

      public void setNotificationEnabledChannels(String notificationEnabledChannels)
      List of notification channel names that are enabled.
    • getNotificationEnabledChannels

      public String getNotificationEnabledChannels()
    • setEventProcessingFailureHandlingMode

      public void setEventProcessingFailureHandlingMode(String eventProcessingFailureHandlingMode)
      Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.
    • getEventProcessingFailureHandlingMode

      public String getEventProcessingFailureHandlingMode()
    • setSnapshotMaxThreads

      public void setSnapshotMaxThreads(int snapshotMaxThreads)
      The maximum number of threads used to perform the snapshot. Defaults to 1.
    • getSnapshotMaxThreads

      public int getSnapshotMaxThreads()
    • setNotificationSinkTopicName

      public void setNotificationSinkTopicName(String notificationSinkTopicName)
      The name of the topic for the notifications. This is required when 'sink' is in the list of enabled channels.
    • getNotificationSinkTopicName

      public String getNotificationSinkTopicName()
    • setLogMiningQueryFilterMode

      public void setLogMiningQueryFilterMode(String logMiningQueryFilterMode)
      Specifies how the filter configuration is applied to the LogMiner database query. none - The query does not apply any schema or table filters, all filtering is at runtime by the connector. in - The query uses SQL in-clause expressions to specify the schema or table filters. regex - The query uses Oracle REGEXP_LIKE expressions to specify the schema or table filters.
    • getLogMiningQueryFilterMode

      public String getLogMiningQueryFilterMode()
    • setSchemaNameAdjustmentMode

      public void setSchemaNameAdjustmentMode(String schemaNameAdjustmentMode)
      Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx (note: _ is an escape sequence, like backslash in Java); 'none' does not apply any adjustment (the default).
    • getSchemaNameAdjustmentMode

      public String getSchemaNameAdjustmentMode()
    • setLogMiningBatchSizeDefault

      public void setLogMiningBatchSizeDefault(long logMiningBatchSizeDefault)
      The starting SCN interval size that the connector will use for reading data from redo/archive logs.
    • getLogMiningBatchSizeDefault

      public long getLogMiningBatchSizeDefault()
    • setTableIncludeList

      public void setTableIncludeList(String tableIncludeList)
      The tables for which changes are to be captured
    • getTableIncludeList

      public String getTableIncludeList()
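
      For example, capturing two hypothetical tables while excluding a sensitive column (the entries are regular expressions, so dots are escaped; see setColumnExcludeList below):

      static void configureCaptureLists(OracleConnectorEmbeddedDebeziumConfiguration config) {
          config.setTableIncludeList("INVENTORY\\.ORDERS,INVENTORY\\.CUSTOMERS");
          config.setColumnExcludeList("INVENTORY\\.CUSTOMERS\\.SSN");  // hypothetical column
      }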
    • setQueryFetchSize

      public void setQueryFetchSize(int queryFetchSize)
      The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size, defaults to '2000'.
    • getQueryFetchSize

      public int getQueryFetchSize()
    • setLogMiningSleepTimeMinMs

      public void setLogMiningSleepTimeMinMs(long logMiningSleepTimeMinMs)
      The minimum amount of time that the connector will sleep after reading data from redo/archive logs before starting to read again. Value is in milliseconds.
    • getLogMiningSleepTimeMinMs

      public long getLogMiningSleepTimeMinMs()
    • setUnavailableValuePlaceholder

      public void setUnavailableValuePlaceholder(String unavailableValuePlaceholder)
      Specify the constant that will be provided by Debezium to indicate that the original value is unavailable and not provided by the database.
    • getUnavailableValuePlaceholder

      public String getUnavailableValuePlaceholder()
    • setHeartbeatActionQuery

      public void setHeartbeatActionQuery(String heartbeatActionQuery)
      The query executed with every heartbeat.
    • getHeartbeatActionQuery

      public String getHeartbeatActionQuery()
    • setPollIntervalMs

      public void setPollIntervalMs(long pollIntervalMs)
      Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.
    • getPollIntervalMs

      public long getPollIntervalMs()
    • setLogMiningUsernameIncludeList

      public void setLogMiningUsernameIncludeList(String logMiningUsernameIncludeList)
      Comma-separated list of usernames to include in the LogMiner query.
    • getLogMiningUsernameIncludeList

      public String getLogMiningUsernameIncludeList()
    • setLobEnabled

      public void setLobEnabled(boolean lobEnabled)
      When set to 'false', the default, LOB fields will not be captured nor emitted. When set to 'true', the connector will capture LOB fields and emit changes for those fields like any other column type.
    • isLobEnabled

      public boolean isLobEnabled()
    • setIntervalHandlingMode

      public void setIntervalHandlingMode(String intervalHandlingMode)
      Specify how INTERVAL columns should be represented in change events, including: 'string' represents values as an exact ISO formatted string; 'numeric' (default) represents values using the inexact conversion into microseconds
    • getIntervalHandlingMode

      public String getIntervalHandlingMode()
    • setHeartbeatTopicsPrefix

      public void setHeartbeatTopicsPrefix(String heartbeatTopicsPrefix)
      The prefix that is used to name heartbeat topics. Defaults to __debezium-heartbeat.
    • getHeartbeatTopicsPrefix

      public String getHeartbeatTopicsPrefix()
    • setLogMiningArchiveLogOnlyMode

      public void setLogMiningArchiveLogOnlyMode(boolean logMiningArchiveLogOnlyMode)
      When set to 'false', the default, the connector will mine both archive logs and redo logs to emit change events. When set to 'true', the connector will only mine archive logs. There are circumstances where it is advantageous to only mine archive logs and accept the latency in event emission caused by frequently rotating redo logs.
    • isLogMiningArchiveLogOnlyMode

      public boolean isLogMiningArchiveLogOnlyMode()
    • setLogMiningBufferInfinispanCacheSchemaChanges

      public void setLogMiningBufferInfinispanCacheSchemaChanges(String logMiningBufferInfinispanCacheSchemaChanges)
      Specifies the XML configuration for the Infinispan 'schema-changes' cache
    • getLogMiningBufferInfinispanCacheSchemaChanges

      public String getLogMiningBufferInfinispanCacheSchemaChanges()
    • setLogMiningSleepTimeMaxMs

      public void setLogMiningSleepTimeMaxMs(long logMiningSleepTimeMaxMs)
      The maximum amount of time that the connector will sleep after reading data from redo/archive logs before starting to read again. Value is in milliseconds.
    • getLogMiningSleepTimeMaxMs

      public long getLogMiningSleepTimeMaxMs()
    • setDatabaseUser

      public void setDatabaseUser(String databaseUser)
      Name of the database user to be used when connecting to the database.
    • getDatabaseUser

      public String getDatabaseUser()
    • setDatatypePropagateSourceType

      public void setDatatypePropagateSourceType(String datatypePropagateSourceType)
      A comma-separated list of regular expressions matching the database-specific data type names; for matching data types, the original type and original length are added as parameters to the corresponding field schemas in the emitted change records.
    • getDatatypePropagateSourceType

      public String getDatatypePropagateSourceType()
    • setHeartbeatIntervalMs

      public void setHeartbeatIntervalMs(int heartbeatIntervalMs)
      Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.
    • getHeartbeatIntervalMs

      public int getHeartbeatIntervalMs()
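
      A sketch of enabling heartbeats; DEBEZIUM.HEARTBEAT is a hypothetical table used by the action query:

      static void configureHeartbeats(OracleConnectorEmbeddedDebeziumConfiguration config) {
          config.setHeartbeatIntervalMs(10_000);  // every 10 s; 0 (the default) disables
          config.setHeartbeatTopicsPrefix("__debezium-heartbeat");  // the documented default
          // Hypothetical table touched on each heartbeat so low-traffic databases
          // still produce redo entries:
          config.setHeartbeatActionQuery("UPDATE DEBEZIUM.HEARTBEAT SET ts = SYSTIMESTAMP");
      }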
    • setSchemaHistoryInternalSkipUnparseableDdl

      public void setSchemaHistoryInternalSkipUnparseableDdl(boolean schemaHistoryInternalSkipUnparseableDdl)
      Controls the action Debezium will take when it encounters a DDL statement in the binlog that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements that it cannot parse. If skipping is enabled, Debezium can miss metadata changes.
    • isSchemaHistoryInternalSkipUnparseableDdl

      public boolean isSchemaHistoryInternalSkipUnparseableDdl()
    • setColumnIncludeList

      public void setColumnIncludeList(String columnIncludeList)
      Regular expressions matching columns to include in change events
    • getColumnIncludeList

      public String getColumnIncludeList()
    • setLogMiningUsernameExcludeList

      public void setLogMiningUsernameExcludeList(String logMiningUsernameExcludeList)
      Comma-separated list of usernames to exclude from the LogMiner query.
    • getLogMiningUsernameExcludeList

      public String getLogMiningUsernameExcludeList()
    • setColumnPropagateSourceType

      public void setColumnPropagateSourceType(String columnPropagateSourceType)
      A comma-separated list of regular expressions matching fully-qualified names of columns; for matching columns, the original type and original length are added as parameters to the corresponding field schemas in the emitted change records.
    • getColumnPropagateSourceType

      public String getColumnPropagateSourceType()
    • setLogMiningBufferInfinispanCacheProcessedTransactions

      public void setLogMiningBufferInfinispanCacheProcessedTransactions(String logMiningBufferInfinispanCacheProcessedTransactions)
      Specifies the XML configuration for the Infinispan 'processed-transactions' cache
    • getLogMiningBufferInfinispanCacheProcessedTransactions

      public String getLogMiningBufferInfinispanCacheProcessedTransactions()
    • setErrorsMaxRetries

      public void setErrorsMaxRetries(int errorsMaxRetries)
      The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = num of retries).
    • getErrorsMaxRetries

      public int getErrorsMaxRetries()
    • setDatabasePassword

      public void setDatabasePassword(String databasePassword)
      Password of the database user to be used when connecting to the database.
    • getDatabasePassword

      public String getDatabasePassword()
    • setLogMiningBufferInfinispanCacheEvents

      public void setLogMiningBufferInfinispanCacheEvents(String logMiningBufferInfinispanCacheEvents)
      Specifies the XML configuration for the Infinispan 'events' cache
    • getLogMiningBufferInfinispanCacheEvents

      public String getLogMiningBufferInfinispanCacheEvents()
    • setSkippedOperations

      public void setSkippedOperations(String skippedOperations)
      The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes; 't' for truncates; and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.
    • getSkippedOperations

      public String getSkippedOperations()
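
      For instance, a sketch that skips updates and deletes:

      static void configureSkippedOperations(OracleConnectorEmbeddedDebeziumConfiguration config) {
          // Skip updates and deletes; note this replaces the default, so truncates
          // ('t') would now be captured unless listed as well.
          config.setSkippedOperations("u,d");
      }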
    • setLogMiningScnGapDetectionTimeIntervalMaxMs

      public void setLogMiningScnGapDetectionTimeIntervalMaxMs(long logMiningScnGapDetectionTimeIntervalMaxMs)
      Used for SCN gap detection: if the difference between the current SCN and the previous end SCN is bigger than log.mining.scn.gap.detection.gap.size.min, and the time difference between them is smaller than this value, it is considered an SCN gap.
    • getLogMiningScnGapDetectionTimeIntervalMaxMs

      public long getLogMiningScnGapDetectionTimeIntervalMaxMs()
    • setMaxQueueSize

      public void setMaxQueueSize(int maxQueueSize)
      Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.
    • getMaxQueueSize

      public int getMaxQueueSize()
    • setRacNodes

      public void setRacNodes(String racNodes)
      A comma-separated list of RAC node hostnames or IP addresses
    • getRacNodes

      public String getRacNodes()
    • setLogMiningBufferTransactionEventsThreshold

      public void setLogMiningBufferTransactionEventsThreshold(long logMiningBufferTransactionEventsThreshold)
      The number of events a transaction can include before the transaction is discarded. This is useful for managing buffer memory and/or space when dealing with very large transactions. Defaults to 0, meaning that no threshold is applied and transactions can have unlimited events.
    • getLogMiningBufferTransactionEventsThreshold

      public long getLogMiningBufferTransactionEventsThreshold()
    • setLogMiningTransactionRetentionMs

      public void setLogMiningTransactionRetentionMs(long logMiningTransactionRetentionMs)
      Duration in milliseconds to keep long-running transactions in the transaction buffer between log mining sessions. By default, all transactions are retained.
    • getLogMiningTransactionRetentionMs

      public long getLogMiningTransactionRetentionMs()
    • setProvideTransactionMetadata

      public void setProvideTransactionMetadata(boolean provideTransactionMetadata)
      Enables transaction metadata extraction together with event counting
    • isProvideTransactionMetadata

      public boolean isProvideTransactionMetadata()
    • setSchemaHistoryInternalStoreOnlyCapturedTablesDdl

      public void setSchemaHistoryInternalStoreOnlyCapturedTablesDdl(boolean schemaHistoryInternalStoreOnlyCapturedTablesDdl)
      Controls what DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored.
    • isSchemaHistoryInternalStoreOnlyCapturedTablesDdl

      public boolean isSchemaHistoryInternalStoreOnlyCapturedTablesDdl()
    • setSchemaHistoryInternalStoreOnlyCapturedDatabasesDdl

      public void setSchemaHistoryInternalStoreOnlyCapturedDatabasesDdl(boolean schemaHistoryInternalStoreOnlyCapturedDatabasesDdl)
      Controls what DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from a captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements.
    • isSchemaHistoryInternalStoreOnlyCapturedDatabasesDdl

      public boolean isSchemaHistoryInternalStoreOnlyCapturedDatabasesDdl()
    • setTopicPrefix

      public void setTopicPrefix(String topicPrefix)
      Topic prefix that identifies and provides a namespace for the particular database server/cluster from which the connector is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. Only alphanumeric characters, hyphens, dots and underscores are accepted.
    • getTopicPrefix

      public String getTopicPrefix()
    • setIncludeSchemaComments

      public void setIncludeSchemaComments(boolean includeSchemaComments)
      Whether the connector should parse table and column comments into the metadata objects. Note: enabling this option has implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. The default is 'false'.
    • isIncludeSchemaComments

      public boolean isIncludeSchemaComments()
    • setSourceinfoStructMaker

      public void setSourceinfoStructMaker(String sourceinfoStructMaker)
      The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct.
    • getSourceinfoStructMaker

      public String getSourceinfoStructMaker()
    • setLogMiningArchiveLogHours

      public void setLogMiningArchiveLogHours(long logMiningArchiveLogHours)
      The number of hours in the past from SYSDATE to mine archive logs. Using 0 mines all available archive logs
    • getLogMiningArchiveLogHours

      public long getLogMiningArchiveLogHours()
    • setLogMiningBatchSizeMax

      public void setLogMiningBatchSizeMax(long logMiningBatchSizeMax)
      The maximum SCN interval size that this connector will use when reading from redo/archive logs.
    • getLogMiningBatchSizeMax

      public long getLogMiningBatchSizeMax()
    • setMaxQueueSizeInBytes

      public void setMaxQueueSizeInBytes(long maxQueueSizeInBytes)
      Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, which means the feature is disabled.
    • getMaxQueueSizeInBytes

      public long getMaxQueueSizeInBytes()
    • setDatabaseUrl

      public void setDatabaseUrl(String databaseUrl)
      Complete JDBC URL as an alternative to specifying hostname, port and database name, provided as a way to support alternative connection scenarios.
    • getDatabaseUrl

      public String getDatabaseUrl()
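
      A sketch of connecting through a full JDBC URL instead of hostname/port/dbname; the connect descriptor below is a hypothetical example:

      static void connectViaUrl(OracleConnectorEmbeddedDebeziumConfiguration config) {
          config.setDatabaseUrl("jdbc:oracle:thin:@(DESCRIPTION="
                + "(ADDRESS=(PROTOCOL=TCP)(HOST=oracle.example.com)(PORT=1521))"
                + "(CONNECT_DATA=(SERVICE_NAME=ORCLCDB)))");
          config.setDatabaseUser("c##dbzuser");
          config.setDatabasePassword("dbz");
      }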
    • setTimePrecisionMode

      public void setTimePrecisionMode(String timePrecisionMode)
      Time, date, and timestamps can be represented with different kinds of precisions, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive_time_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision.
    • getTimePrecisionMode

      public String getTimePrecisionMode()
    • setDatabasePort

      public void setDatabasePort(int databasePort)
      Port of the database server.
    • getDatabasePort

      public int getDatabasePort()
    • setLogMiningSleepTimeIncrementMs

      public void setLogMiningSleepTimeIncrementMs(long logMiningSleepTimeIncrementMs)
      The maximum amount of time that the connector will use to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds.
    • getLogMiningSleepTimeIncrementMs

      public long getLogMiningSleepTimeIncrementMs()
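
      The four sleep-time options (default, min, max, increment) work together: the connector starts at the default and adjusts by the increment, staying within [min, max]. A sketch with illustrative values:

      static void tuneLogMinerPacing(OracleConnectorEmbeddedDebeziumConfiguration config) {
          config.setLogMiningSleepTimeMinMs(0);
          config.setLogMiningSleepTimeDefaultMs(1_000);
          config.setLogMiningSleepTimeMaxMs(3_000);
          config.setLogMiningSleepTimeIncrementMs(200);
      }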
    • setSchemaHistoryInternal

      public void setSchemaHistoryInternal(String schemaHistoryInternal)
      The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string.
    • getSchemaHistoryInternal

      public String getSchemaHistoryInternal()
    • setColumnExcludeList

      public void setColumnExcludeList(String columnExcludeList)
      Regular expressions matching columns to exclude from change events
    • getColumnExcludeList

      public String getColumnExcludeList()
    • setLogMiningSessionMaxMs

      public void setLogMiningSessionMaxMs(long logMiningSessionMaxMs)
      The maximum number of milliseconds that a LogMiner session lives for before being restarted. Defaults to 0 (indefinite until a log switch occurs)
    • getLogMiningSessionMaxMs

      public long getLogMiningSessionMaxMs()
    • setDatabaseHostname

      public void setDatabaseHostname(String databaseHostname)
      Resolvable hostname or IP address of the database server.
    • getDatabaseHostname

      public String getDatabaseHostname()
    • setLogMiningBatchSizeMin

      public void setLogMiningBatchSizeMin(long logMiningBatchSizeMin)
      The minimum SCN interval size that this connector will try to read from redo/archive logs. The active batch size will also be increased/decreased by this amount for tuning connector throughput when needed.
    • getLogMiningBatchSizeMin

      public long getLogMiningBatchSizeMin()
    • setSnapshotEnhancePredicateScn

      public void setSnapshotEnhancePredicateScn(String snapshotEnhancePredicateScn)
      A token to replace in the snapshot predicate template.
    • getSnapshotEnhancePredicateScn

      public String getSnapshotEnhancePredicateScn()
    • createConnectorConfiguration

      protected io.debezium.config.Configuration createConnectorConfiguration()
      Specified by:
      createConnectorConfiguration in class org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
    • configureConnectorClass

      protected Class configureConnectorClass()
      Specified by:
      configureConnectorClass in class org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
    • validateConnectorConfiguration

      protected org.apache.camel.component.debezium.configuration.ConfigurationValidation validateConnectorConfiguration()
      Specified by:
      validateConnectorConfiguration in class org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
    • getConnectorDatabaseType

      public String getConnectorDatabaseType()
      Specified by:
      getConnectorDatabaseType in class org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration