public class SnapshotReader extends AbstractReader
Modifier and Type | Class and Description
---|---
protected static interface | SnapshotReader.RecordRecorder

Nested classes/interfaces inherited from class AbstractReader: AbstractReader.AcceptAllPredicate
Nested classes/interfaces inherited from interface Reader: Reader.State
Modifier and Type | Field and Description
---|---
private ExecutorService | executorService
private boolean | includeData
private SnapshotReaderMetrics | metrics
private SnapshotReader.RecordRecorder | recorder
private MySqlConnectorConfig.SnapshotLockingMode | snapshotLockingMode
private boolean | useGlobalLock

Fields inherited from class AbstractReader: changeEventQueueMetrics, connectionContext, context, logger
Constructor and Description
---
SnapshotReader(String name, MySqlTaskContext context)
Create a snapshot reader.
SnapshotReader(String name, MySqlTaskContext context, boolean useGlobalLock)
Create a snapshot reader that can use global locking only optionally.
Modifier and Type | Method and Description
---|---
private Statement | createStatement(Connection connection)
private Statement | createStatementWithLargeResultSet(Connection connection): Create a JDBC statement that can be used for large result sets.
protected void | doCleanup(): The reader has completed all processing and all enqueued records have been consumed, so this reader should clean up any resources that might remain.
void | doDestroy(): The reader has been requested to de-initialize resources after stopping.
protected void | doInitialize(): The reader has been requested to initialize resources prior to starting.
protected void | doStart(): Start the snapshot and return immediately.
protected void | doStop(): The reader has been requested to stop, so perform any work required to stop the reader's resources that were previously started.
protected void | enqueueSchemaChanges(String dbName, Set<TableId> tables, String ddlStatement)
protected void | execute(): Perform the snapshot using the same logic as the "mysqldump" utility.
SnapshotReader | generateInsertEvents(): Set this reader's execution to produce an Envelope.Operation.CREATE event for each row.
SnapshotReader | generateReadEvents(): Set this reader's execution to produce an Envelope.Operation.READ event for each row.
private Filters | getCreateTableFilters(Filters filters): Get the filters for table creation.
private void | logRolesForCurrentUser(JdbcConnection mysql)
private void | logServerInformation(JdbcConnection mysql)
protected String | quote(String dbOrTableName)
protected String | quote(TableId id)
protected void | readBinlogPosition(int step, SourceInfo source, JdbcConnection mysql, AtomicReference<String> sql)
private Object | readDateField(ResultSet rs, int fieldNo, Column column, Table table): In non-string mode the date field can contain zero in any of the date parts, which we need to handle as all-zero.
protected Object | readField(ResultSet rs, int fieldNo, Column actualColumn, Table actualTable)
private void | readTableSchema(AtomicReference<String> sql, JdbcConnection mysql, MySqlSchema schema, SourceInfo source, String dbName, TableId tableId)
private Object | readTimeField(ResultSet rs, int fieldNo): As the MySQL Connector/J implementation is broken for the MySQL type "TIME", we have to use a binary-ish workaround.
private Object | readTimestampField(ResultSet rs, int fieldNo, Column column, Table table): In non-string mode the timestamp field can contain zero in any of the date parts, which we need to handle as all-zero.
protected void | recordRowAsInsert(RecordMakers.RecordsForTable recordMaker, Object[] row, Instant ts)
protected void | recordRowAsRead(RecordMakers.RecordsForTable recordMaker, Object[] row, Instant ts)
protected org.apache.kafka.connect.source.SourceRecord | replaceOffsetAndSource(org.apache.kafka.connect.source.SourceRecord record): Utility method to replace the offset and the source in the given record with the latest.
private boolean | shouldRecordTableSchema(MySqlSchema schema, Filters filters, TableId id)

Methods inherited from class AbstractReader: cleanupResources, completeSuccessfully, destroy, enqueueRecord, failed, failed, initialize, isRunning, name, poll, pollComplete, start, state, stop, toString, uponCompletion, wrap
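The two `quote` helpers above are listed without descriptions. As an illustrative sketch only (not Debezium's actual implementation), MySQL identifiers are conventionally quoted by wrapping them in backticks and doubling any backtick embedded in the name:

```java
public class MySqlQuoting {
    // Hypothetical stand-in for SnapshotReader.quote(String): wrap an
    // identifier in backticks, doubling any backtick inside the name.
    public static String quote(String dbOrTableName) {
        return "`" + dbOrTableName.replace("`", "``") + "`";
    }

    // Hypothetical stand-in for quote(TableId): quote each part separately
    // so the result is of the form `db`.`table`.
    public static String quote(String dbName, String tableName) {
        return quote(dbName) + "." + quote(tableName);
    }
}
```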
private final boolean includeData
private SnapshotReader.RecordRecorder recorder
private final SnapshotReaderMetrics metrics
private ExecutorService executorService
private final boolean useGlobalLock
private final MySqlConnectorConfig.SnapshotLockingMode snapshotLockingMode
public SnapshotReader(String name, MySqlTaskContext context)
Create a snapshot reader.
Parameters:
name - the name of this reader; may not be null
context - the task context in which this reader is running; may not be null

public SnapshotReader(String name, MySqlTaskContext context, boolean useGlobalLock)
Create a snapshot reader that can use global locking only optionally.
Parameters:
name - the name of this reader; may not be null
context - the task context in which this reader is running; may not be null
useGlobalLock - false to simulate cloud (Amazon RDS) restrictions

public SnapshotReader generateReadEvents()
Set this reader's execution to produce an Envelope.Operation.READ event for each row.

public SnapshotReader generateInsertEvents()
Set this reader's execution to produce an Envelope.Operation.CREATE event for each row.

protected void doInitialize()
The reader has been requested to initialize resources prior to starting. This is called only once before AbstractReader.doStart().
Overrides: doInitialize in class AbstractReader

public void doDestroy()
The reader has been requested to de-initialize resources after stopping. This is called only once after AbstractReader.doStop().
Specified by: doDestroy in class AbstractReader

protected void doStart()
Start the snapshot and return immediately. Once started, the records read from the snapshot can be retrieved by calling AbstractReader.poll() until that method returns null.
Specified by: doStart in class AbstractReader

protected void doStop()
The reader has been requested to stop, so perform any work required to stop the reader's resources that were previously started. This method is always called when AbstractReader.stop() is called, and AbstractReader.isRunning() will return true the first time and false for any subsequent calls.
Specified by: doStop in class AbstractReader

protected void doCleanup()
The reader has completed all processing and all enqueued records have been consumed, so this reader should clean up any resources that might remain.
Specified by: doCleanup in class AbstractReader
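The lifecycle contract above (doStart() returns immediately, poll() hands out records until it returns null, isRunning() then reports false) can be sketched with a toy reader. This is a pure-Java illustration of the contract, not Debezium's implementation; all names here are assumptions:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Toy model of the AbstractReader lifecycle described above.
public class ToySnapshotReader {
    private final Queue<String> records = new ArrayDeque<>();
    private boolean running;

    public void start() {
        // Like doStart(): begin producing records and return immediately.
        running = true;
        records.addAll(List.of("row-1", "row-2"));
    }

    public String poll() {
        // Like AbstractReader.poll(): return records until exhausted, then null.
        String next = records.poll();
        if (next == null) {
            running = false;
        }
        return next;
    }

    public void stop() {
        // Like doStop(): release resources that start() acquired.
        records.clear();
        running = false;
    }

    public boolean isRunning() {
        return running;
    }
}
```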
protected Object readField(ResultSet rs, int fieldNo, Column actualColumn, Table actualTable) throws SQLException
Throws:
SQLException

private Object readTimeField(ResultSet rs, int fieldNo) throws SQLException
As the MySQL Connector/J implementation is broken for the MySQL type "TIME", we have to use a binary-ish workaround.
Throws:
SQLException
See Also: https://issues.jboss.org/browse/DBZ-342

private Object readDateField(ResultSet rs, int fieldNo, Column column, Table table) throws SQLException
In non-string mode the date field can contain zero in any of the date parts, which we need to handle as all-zero.
Throws:
SQLException

private Object readTimestampField(ResultSet rs, int fieldNo, Column column, Table table) throws SQLException
In non-string mode the timestamp field can contain zero in any of the date parts, which we need to handle as all-zero.
Throws:
SQLException
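The zero-date handling described above stems from MySQL permitting "zero" values such as 0000-00-00, which java.time cannot represent. A minimal sketch of the idea, assuming the value arrives as a string (Debezium's actual handling differs in detail):

```java
import java.time.LocalDate;

public class ZeroDates {
    // MySQL permits an all-zero DATE ('0000-00-00') and zero parts (e.g.
    // '2020-00-15'); java.time rejects these, so map them to null here.
    public static LocalDate parseMySqlDate(String raw) {
        if (raw == null) {
            return null;
        }
        String[] parts = raw.split("-");
        int year = Integer.parseInt(parts[0]);
        int month = Integer.parseInt(parts[1]);
        int day = Integer.parseInt(parts[2]);
        if (year == 0 || month == 0 || day == 0) {
            return null; // a zero in any date part: no valid LocalDate exists
        }
        return LocalDate.of(year, month, day);
    }
}
```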
protected void execute()
Perform the snapshot using the same logic as the "mysqldump" utility.

private void readTableSchema(AtomicReference<String> sql, JdbcConnection mysql, MySqlSchema schema, SourceInfo source, String dbName, TableId tableId) throws SQLException
Throws:
SQLException

private boolean shouldRecordTableSchema(MySqlSchema schema, Filters filters, TableId id)

protected void readBinlogPosition(int step, SourceInfo source, JdbcConnection mysql, AtomicReference<String> sql) throws SQLException
Throws:
SQLException
private Filters getCreateTableFilters(Filters filters)
Get the filters for table creation.
Parameters:
filters - the default filters of this SnapshotReader
Returns:
the Filters that represent all the tables that this snapshot reader should CREATE

private Statement createStatementWithLargeResultSet(Connection connection) throws SQLException
Create a JDBC statement that can be used for large result sets.
By default, the MySQL Connector/J driver retrieves all rows for ResultSets and stores them in memory. In most cases this is the most efficient way to operate and, due to the design of the MySQL network protocol, is easier to implement. However, when ResultSets have a large number of rows or large values, the driver may not be able to allocate heap space in the JVM, which may result in an OutOfMemoryError. See DBZ-94 for details.
This method handles such cases using the recommended technique for MySQL: creating the JDBC Statement with the forward-only cursor and read-only concurrency flags, and with a minimum-value fetch size hint.
Parameters:
connection - the JDBC connection; may not be null
Throws:
SQLException - if there is a problem creating the statement

private Statement createStatement(Connection connection) throws SQLException
Throws:
SQLException
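The technique described above can be sketched with plain JDBC: Connector/J interprets a fetch size of Integer.MIN_VALUE on a forward-only, read-only statement as a hint to stream rows instead of buffering the whole result set. Since no database is available here, the demo method observes the flags through a dynamic proxy rather than a real connection:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.atomic.AtomicInteger;

public class StreamingStatements {

    // The recommended MySQL technique described above: forward-only cursor,
    // read-only concurrency, and Integer.MIN_VALUE as the fetch-size hint.
    public static Statement createStreamingStatement(Connection connection) throws SQLException {
        Statement stmt = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                    ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE);
        return stmt;
    }

    // Demo without a real database: proxy Connection/Statement objects record
    // the fetch-size hint that the helper applies, returned for inspection.
    public static int demoFetchSizeHint() throws SQLException {
        AtomicInteger fetchSize = new AtomicInteger();
        InvocationHandler stmtHandler = (proxy, method, args) -> {
            if (method.getName().equals("setFetchSize")) {
                fetchSize.set((Integer) args[0]);
            }
            return null;
        };
        Statement fakeStmt = (Statement) Proxy.newProxyInstance(
                Statement.class.getClassLoader(), new Class<?>[]{Statement.class}, stmtHandler);
        InvocationHandler connHandler = (proxy, method, args) ->
                method.getName().equals("createStatement") ? fakeStmt : null;
        Connection fakeConn = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(), new Class<?>[]{Connection.class}, connHandler);
        createStreamingStatement(fakeConn);
        return fetchSize.get();
    }
}
```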
private void logServerInformation(JdbcConnection mysql)
private void logRolesForCurrentUser(JdbcConnection mysql)
protected org.apache.kafka.connect.source.SourceRecord replaceOffsetAndSource(org.apache.kafka.connect.source.SourceRecord record)
Utility method to replace the offset and the source in the given record with the latest.
Parameters:
record - the record

protected void enqueueSchemaChanges(String dbName, Set<TableId> tables, String ddlStatement)

protected void recordRowAsRead(RecordMakers.RecordsForTable recordMaker, Object[] row, Instant ts) throws InterruptedException
Throws:
InterruptedException

protected void recordRowAsInsert(RecordMakers.RecordsForTable recordMaker, Object[] row, Instant ts) throws InterruptedException
Throws:
InterruptedException
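recordRowAsRead and recordRowAsInsert are the two behaviors that generateReadEvents() and generateInsertEvents() switch between via the RecordRecorder hook. A hypothetical mirror of that pattern in pure Java (names and types assumed for illustration, not Debezium's API):

```java
// Hypothetical mirror of the SnapshotReader.RecordRecorder pattern: the
// reader holds one functional hook and swaps it to choose READ vs CREATE
// envelopes for snapshot rows.
public class RecorderSwitch {

    enum Operation { READ, CREATE }

    @FunctionalInterface
    interface RecordRecorder {
        Operation record(Object[] row);
    }

    private RecordRecorder recorder = row -> Operation.CREATE;

    public RecorderSwitch generateReadEvents() {
        recorder = row -> Operation.READ;   // emit snapshot rows as READ events
        return this;
    }

    public RecorderSwitch generateInsertEvents() {
        recorder = row -> Operation.CREATE; // emit snapshot rows as CREATE events
        return this;
    }

    public Operation recordRow(Object[] row) {
        return recorder.record(row);
    }
}
```

Returning `this` from both setters mirrors the fluent style of the real generateReadEvents()/generateInsertEvents(), which also return the SnapshotReader.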
Copyright © 2020 JBoss by Red Hat. All rights reserved.