public class HBaseStore<K,T extends PersistentBase> extends DataStoreBase<K,T> implements org.apache.hadoop.conf.Configurable
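An HBaseStore is normally obtained through Gora's DataStoreFactory rather than constructed directly. The sketch below is illustrative only: it assumes a running HBase cluster, a gora.properties on the classpath that selects HBaseStore as the datastore implementation, and a hypothetical persistent bean `WebPage` generated from an Avro schema; none of those names come from this page.

```java
import org.apache.gora.store.DataStore;
import org.apache.gora.store.DataStoreFactory;
import org.apache.hadoop.conf.Configuration;

public class HBaseStoreUsage {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // WebPage is a hypothetical Persistent bean; substitute your own.
    DataStore<String, WebPage> store =
        DataStoreFactory.getDataStore(String.class, WebPage.class, conf);
    if (!store.schemaExists()) {
      store.createSchema();   // creates the backing HBase table
    }
    // ... read and write records ...
    store.close();            // releases underlying resources
  }
}
```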
| Modifier and Type | Field and Description |
|---|---|
| static String | DEFAULT_MAPPING_FILE |
| static org.slf4j.Logger | LOG |
| static String | PARSE_MAPPING_FILE_KEY |

Fields inherited from class DataStoreBase: beanFactory, conf, datumReader, datumWriter, fieldMap, keyClass, persistentClass, properties, schema

| Constructor and Description |
|---|
| HBaseStore() |
| Modifier and Type | Method and Description |
|---|---|
| void | close(): Close the DataStore. |
| org.apache.hadoop.hbase.client.ResultScanner | createScanner(Query<K,T> query) |
| void | createSchema(): Creates the optional schema or table (or similar) in the datastore to hold the objects. |
| boolean | delete(K key): Deletes the object with the given key. |
| void | delete(T obj) |
| long | deleteByQuery(Query<K,T> query): Deletes all the objects matching the query. |
| void | deleteSchema(): Deletes the underlying schema or table (or similar) in the datastore that holds the objects. |
| Result<K,T> | execute(Query<K,T> query): Executes the given query and returns the results. |
| void | flush(): Forces the write caches to be flushed. |
| T | get(K key, String[] fields): Returns the object corresponding to the given key. |
| org.apache.hadoop.conf.Configuration | getConf() |
| HBaseMapping | getMapping() |
| List<PartitionQuery<K,T>> | getPartitions(Query<K,T> query): Partitions the given query and returns a list of PartitionQuerys, which will execute on local data. |
| int | getScannerCaching(): Gets the Scanner Caching optimization value. |
| String | getSchemaName(): Returns the schema name given to this DataStore. |
| void | initialize(Class<K> keyClass, Class<T> persistentClass, Properties properties): Initializes this DataStore. |
| T | newInstance(org.apache.hadoop.hbase.client.Result result, String[] fields): Creates a new Persistent instance with the values in 'result' for the fields listed. |
| Query<K,T> | newQuery(): Constructs and returns a new Query. |
| void | put(K key, T persistent): Inserts the persistent object with the given key. |
| boolean | schemaExists(): Returns whether the schema that holds the data exists in the datastore. |
| void | setConf(org.apache.hadoop.conf.Configuration conf) |
| HBaseStore<K,T> | setScannerCaching(int numRows): Sets the value for Scanner Caching optimization. |

Methods inherited from class DataStoreBase: equals, get, getBeanFactory, getFields, getFieldsToQuery, getKeyClass, getOrCreateConf, getPersistentClass, getSchemaName, newKey, newPersistent, readFields, setBeanFactory, setKeyClass, setPersistentClass, truncateSchema, write

public static final org.slf4j.Logger LOG
public static final String PARSE_MAPPING_FILE_KEY
public static final String DEFAULT_MAPPING_FILE
public void initialize(Class<K> keyClass, Class<T> persistentClass, Properties properties)
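When constructing the store directly instead of going through DataStoreFactory, initialize must be called before any other operation. A minimal sketch under that assumption, reusing the hypothetical `WebPage` bean (the factory normally handles this wiring for you):

```java
HBaseStore<String, WebPage> store = new HBaseStore<>();
// Configurable: supply the Hadoop/HBase configuration first.
store.setConf(new Configuration());
// DataStoreFactory.createProps() loads gora.properties from the classpath.
store.initialize(String.class, WebPage.class, DataStoreFactory.createProps());
```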
Description copied from interface: DataStore
Initializes this DataStore.
Specified by: initialize in interface DataStore<K,T extends PersistentBase>
Overrides: initialize in class DataStoreBase<K,T extends PersistentBase>
Parameters:
keyClass - the class of the keys
persistentClass - the class of the persistent objects
properties - extra metadata

public String getSchemaName()
Description copied from interface: DataStore
Returns the schema name given to this DataStore.
Specified by: getSchemaName in interface DataStore<K,T extends PersistentBase>

public HBaseMapping getMapping()
public void createSchema()
Description copied from interface: DataStore
Creates the optional schema or table (or similar) in the datastore to hold the objects.
Specified by: createSchema in interface DataStore<K,T extends PersistentBase>

public void deleteSchema()
Description copied from interface: DataStore
Deletes the underlying schema or table (or similar) in the datastore that holds the objects.
Specified by: deleteSchema in interface DataStore<K,T extends PersistentBase>

public boolean schemaExists()
Description copied from interface: DataStore
Returns whether the schema that holds the data exists in the datastore.
Specified by: schemaExists in interface DataStore<K,T extends PersistentBase>

public T get(K key, String[] fields)
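For example, a field-projected read against the hypothetical `WebPage` store (the field name "title" is an assumption about the bean, not part of this API):

```java
// Fetch only the "title" field of the row keyed by the URL.
WebPage partial = store.get("http://example.org/", new String[] {"title"});
// Pass null to retrieve every mapped field.
WebPage full = store.get("http://example.org/", null);
```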
Description copied from interface: DataStore
Returns the object corresponding to the given key.
Specified by: get in interface DataStore<K,T extends PersistentBase>
Parameters:
key - the key of the object
fields - the fields required in the object. Pass null to retrieve all fields

public void put(K key, T persistent)
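A write is buffered until flush() or close() is called. A minimal sketch (setTitle is an assumed setter on the hypothetical `WebPage` bean):

```java
WebPage page = store.newPersistent();      // inherited from DataStoreBase
page.setTitle("Example");                  // assumed setter on the bean
store.put("http://example.org/", page);    // buffered write
store.flush();                             // force the buffered put out to HBase
```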
Specified by: put in interface DataStore<K,T extends PersistentBase>
Parameters:
persistent - Record to be persisted in HBase

public void delete(T obj)
public boolean delete(K key)
Specified by: delete in interface DataStore<K,T extends PersistentBase>
Parameters:
key - the key of the object

public long deleteByQuery(Query<K,T> query)
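For instance, deleting an entire key range in a single call (the key values are illustrative):

```java
Query<String, WebPage> query = store.newQuery();
query.setStartKey("http://example.org/a");
query.setEndKey("http://example.org/z");
// Deletes all the objects matching the query.
long removed = store.deleteByQuery(query);
```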
Description copied from interface: DataStore
Deletes all the objects matching the query.
Specified by: deleteByQuery in interface DataStore<K,T extends PersistentBase>
Parameters:
query - matching records to this query will be deleted

public void flush()
Description copied from interface: DataStore
Forces the write caches to be flushed.
Specified by: flush in interface DataStore<K,T extends PersistentBase>

public Query<K,T> newQuery()
Description copied from interface: DataStore
Constructs and returns a new Query.
Specified by: newQuery in interface DataStore<K,T extends PersistentBase>

public List<PartitionQuery<K,T>> getPartitions(Query<K,T> query) throws IOException
Description copied from interface: DataStore
Partitions the given query and returns a list of PartitionQuerys, which will execute on local data.
Specified by: getPartitions in interface DataStore<K,T extends PersistentBase>
Parameters:
query - the base query to create the partitions for. If the query is null, then the data store returns the partitions for the default query (returning every object)
Throws:
IOException

public Result<K,T> execute(Query<K,T> query)
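Query results are consumed through Gora's Result iterator. A sketch, again projecting an assumed "title" field:

```java
Query<String, WebPage> query = store.newQuery();
query.setFields("title");                // project a single assumed field
Result<String, WebPage> result = store.execute(query);
try {
  while (result.next()) {                // advance to the next key/value pair
    String key = result.getKey();
    WebPage page = result.get();
    System.out.println(key + " -> " + page);
  }
} finally {
  result.close();                        // release the underlying scanner
}
```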
Description copied from interface: DataStore
Executes the given query and returns the results.

public org.apache.hadoop.hbase.client.ResultScanner createScanner(Query<K,T> query) throws IOException
Throws:
IOException

public T newInstance(org.apache.hadoop.hbase.client.Result result, String[] fields) throws IOException
Creates a new Persistent instance with the values in 'result' for the fields listed.
Parameters:
result - result from a HTable#get()
fields - List of fields queried, or null for all
Throws:
IOException

public void close()
Description copied from interface: DataStore
Close the DataStore.
Specified by: close in interface Closeable
Specified by: close in interface AutoCloseable
Specified by: close in interface DataStore<K,T extends PersistentBase>

public org.apache.hadoop.conf.Configuration getConf()
Specified by: getConf in interface org.apache.hadoop.conf.Configurable
Overrides: getConf in class DataStoreBase<K,T extends PersistentBase>

public void setConf(org.apache.hadoop.conf.Configuration conf)
Specified by: setConf in interface org.apache.hadoop.conf.Configurable
Overrides: setConf in class DataStoreBase<K,T extends PersistentBase>

public int getScannerCaching()
Gets the Scanner Caching optimization value.
See Also: Scan.setCaching(int)

public HBaseStore<K,T> setScannerCaching(int numRows)
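Because setScannerCaching returns the store itself, the call can be chained. Larger values fetch more rows per scanner RPC during scans at the cost of client memory; the value 1000 below is only an example:

```java
// Fetch 1000 rows per scanner RPC instead of the default; this mirrors
// org.apache.hadoop.hbase.client.Scan.setCaching(int).
Result<String, WebPage> result =
    store.setScannerCaching(1000)
         .execute(store.newQuery());
```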
Sets the value for Scanner Caching optimization.
Parameters:
numRows - the number of rows for caching >= 0
See Also: Scan.setCaching(int)

Copyright © 2010-2014 The Apache Software Foundation. All Rights Reserved.