This will search the implicit scope for a defined NamingStrategy. If none is found, this will return the table name as is.
The name of the table as derived from the user input.
A new table name adjusted according to the NamingStrategy.
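As a rough illustration (not the library's actual API), the lookup-and-adjust behaviour could be sketched as follows; `NamingStrategy`, `SnakeCase` and `adjustName` are hypothetical stand-ins:

```scala
// Hypothetical sketch of the NamingStrategy lookup described above.
// All names here are illustrative only.
object NamingStrategySketch {
  trait NamingStrategy {
    def inferName(name: String): String
  }

  // Example strategy: convert CamelCase identifiers to snake_case.
  object SnakeCase extends NamingStrategy {
    def inferName(name: String): String =
      name.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase
  }

  // If a strategy was found, apply it; otherwise return the table name as is.
  def adjustName(tableName: String, strategy: Option[NamingStrategy]): String =
    strategy.fold(tableName)(_.inferName(tableName))
}
```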
Finds the first type in the type hierarchy for which columns exist as direct members.
The type of the table.
An optional symbol, if such a type was found in the type hierarchy.
A "generic" type extractor that's meant to produce a list of fields from a record type. We support a narrow domain of types for automated generation, currently including:
- Case classes
- Tuples
To achieve this, we simply have specific ways of extracting the types from the underlying records, producing a Record.Field for each member of the product type.
The underlying record type that was passed as the second argument to a Cassandra table.
An iterable of fields, each containing a TermName and a Type that describe a record member.
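The macro performs this extraction on types at compile time, but the shape of the result can be sketched at runtime using Scala's `Product` API (2.13+), which covers both supported domains, case classes and tuples:

```scala
// Runtime analogue of the field extraction: every member of a product type
// yields a (name, value) pair, much like the macro yields a Record.Field
// carrying a TermName and a Type for each member.
object FieldExtractionSketch {
  // Illustrative record type, not from the library.
  final case class SampleRecord(id: String, count: Int)

  def fields(p: Product): List[(String, Any)] =
    p.productElementNames.zip(p.productIterator).toList
}
```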
Extracts the name of the table that will be generated and used in Cassandra. This can be changed at runtime by the user by overriding CassandraTable.tableName. This mechanism is incompatible with the historical approach, which effectively used the type inferred by the final database object.
Instead, at present, the type hierarchy dictates what a table will be called: the first member of the table type's hierarchy with columns defined dictates the name.
The source table type to extract the name from. We rely on this being the first type in the hierarchy to contain column definitions, as determined by determineReferenceTable() above.
Materializes an extractor method for a table, the so-called "fromRow" method.
This will only work if the types of the record type match the types inferred by the return types of the columns inside the table.
If the implementation cannot be inferred, the output of this method will be an unimplemented method exception, and the user will have to manually override the fromRow definition and create one themselves.
def fromRow(row: Row): R = ???
The fields do not have to be in the same order in both the record and the table. The macro algorithm will go to some length to try and figure out a correct match even if the fields are in random order.
case class MyRecord(
  id: UUID,
  email: String,
  date: DateTime
)

class MyTable extends Table[MyTable, MyRecord] {
  object id extends UUIDColumn with PartitionKey
  object email extends StringColumn
  object date extends DateTimeColumn
}
For example, the following will also be a perfect match:
case class MyRecord(
  date: DateTime,
  id: UUID,
  email: String
)

class MyTable extends Table[MyTable, MyRecord] {
  object id extends UUIDColumn with PartitionKey
  object email extends StringColumn
  object date extends DateTimeColumn
}
For a more detailed description on how this method works, see extractorRec.
An interpolated tree that will contain the automatically generated implementation of the fromRow method in a Cassandra Table. Alternatively, this will return an unimplemented ??? method, provided a correct definition could not be inferred.
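To make the output concrete, here is a hand-written sketch of roughly what the generated extractor expands to for a two-field record; `Row` is simplified to a map and the column objects are illustrative stand-ins, not the library's types:

```scala
object FromRowSketch {
  // Illustrative record and simplified stand-in for the driver's Row type.
  final case class MyRecord(id: String, email: String)
  final case class Row(data: Map[String, Any])

  object MyTable {
    object id    { def apply(row: Row): String = row.data("id").toString    }
    object email { def apply(row: Row): String = row.data("email").toString }

    // The macro-generated fromRow applies each column extractor in turn:
    def fromRow(row: Row): MyRecord = MyRecord(id(row), email(row))
  }
}
```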
This works by recursively parsing a list of fields extracted here as record members. The algorithm takes every field from the record and:
- If there are record fields left to address, we search within the available columns for a type that either matches or can be implicitly converted to the record type.
- If a single match is found, we declare that a match, without comparing the field names.
- If more than one match is found, we look for a column with a name that matches the record field name.
  - If a matching name is found, it means we have both a matching type and name, and we consider that a correct match.
  - If no matching name is found, the field is appended to the unprocessed list of record fields. We do this because we need to resort to different techniques to deal with unmatched record fields or fields with multiple possible matches. Until 2.6.0, we resorted to using the first column field of the correct type, as per user input, in situations where a given record type could match more than one column. However, this introduces a subtle problem: we risk "using up" a column field with a potentially incorrect matching record field, because we do not exhaust all "direct" easy matches before attempting to handle the more complex situations.
- If a direct match is found, or no matching type is found, we recursively remove the field from both the list of record fields to look for and the dictionary of column members to look up from.
An ordered "as-written" map of column types with a list of terms associated with it. This is used to deal with the fact that multiple table columns can have the same Scala type.
An ordered "as-written" list of record fields.
A table descriptor, built recursively, which will hold all the information we need to generate the extractor at the end of this recursive cycle.
The list of unprocessed record fields, dealt with last to avoid the above described scenario. We attempt to make all "easy matches" before analysing situations where it's harder to derive a simple field match.
A TableDescriptor, which contains all the information needed to create a full Cassandra table.
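The matching rules above can be sketched on plain data, ignoring implicit conversions and the macro machinery; types are modelled as strings and columns as an ordered type-to-names map (all names here are illustrative):

```scala
object FieldMatchSketch {
  final case class Match(recordField: String, column: String)

  // records: ordered (fieldName, fieldType) pairs, as written in the record.
  // columns: ordered map from column type to the column names of that type.
  def matchFields(
    records: List[(String, String)],
    columns: Map[String, List[String]]
  ): (List[Match], List[(String, String)]) = {
    val (matched, deferred, _) =
      records.foldLeft((List.empty[Match], List.empty[(String, String)], columns)) {
        case ((done, pending, cols), field @ (name, tpe)) =>
          cols.getOrElse(tpe, Nil) match {
            // A single column of the right type: match without comparing names.
            case col :: Nil =>
              (Match(name, col) :: done, pending, cols - tpe)
            // Several candidates: prefer the column whose name also matches.
            case candidates if candidates.contains(name) =>
              (Match(name, name) :: done, pending,
                cols.updated(tpe, candidates.filterNot(_ == name)))
            // Ambiguous by type with no name match (or no type match at all):
            // defer the field so "easy" matches are exhausted first.
            case _ =>
              (done, field :: pending, cols)
          }
      }
    (matched.reverse, deferred.reverse)
  }
}
```

Deferred fields would then be handled by a second pass, which is where the pre-2.6.0 first-of-type heuristic caused mismatches.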
A set of reserved CQL keywords that should not be used as column names. They are described here: http://docs.datastax.com/en/cql/3.1/cql/cql_reference/keywords_r.html.
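A sketch of such a keyword check, using an illustrative subset of the reserved words from that list:

```scala
object ReservedKeywordsSketch {
  // Illustrative subset of the reserved CQL keywords; see the DataStax
  // reference linked above for the full list.
  val reserved: Set[String] = Set(
    "add", "alter", "and", "apply", "asc", "batch", "by", "create",
    "delete", "desc", "drop", "from", "grant", "in", "index", "insert",
    "into", "keyspace", "limit", "modify", "on", "order", "primary",
    "rename", "revoke", "select", "set", "table", "to", "token",
    "truncate", "update", "use", "using", "where", "with"
  )

  // CQL identifiers are case-insensitive unless quoted, so compare lowercased.
  def isReserved(columnName: String): Boolean =
    reserved.contains(columnName.toLowerCase)
}
```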
This method will check for common Cassandra anti-patterns during the initialisation of a schema. If the schema definition violates valid CQL standard, this function will throw an error.
A perfect example is using a mixture of primary keys and clustering keys in the same schema. While a clustering key is also a primary key, once a clustering key is defined all other non-partition primary keys must become clustering keys and specify their order.
We could auto-generate this order, but we would risk making false assumptions about the desired ordering.
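The mixed-keys rule could be sketched as a simple validation over the declared key kinds; the types and names here are illustrative, not the library's:

```scala
object SchemaValidationSketch {
  sealed trait KeyType
  case object PartitionKey  extends KeyType
  case object PrimaryKey    extends KeyType
  case object ClusteringKey extends KeyType

  // If any clustering key is declared, plain (non-partition) primary keys
  // are invalid: they must be redeclared as clustering keys with an order.
  def validate(keys: List[KeyType]): Either[String, Unit] =
    if (keys.contains(ClusteringKey) && keys.contains(PrimaryKey))
      Left("Mixed primary and clustering keys: all non-partition primary keys must be clustering keys with an explicit order.")
    else
      Right(())
}
```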