capreolus.extractor.lce_bertpassage

Module Contents

Classes

LCEBertPassage

Extracts passages from the document to be later consumed by a BERT-based model.

Attributes

logger

capreolus.extractor.lce_bertpassage.logger[source]
class capreolus.extractor.lce_bertpassage.LCEBertPassage(config=None, provide=None, share_dependency_objects=False, build=True)[source]

Bases: capreolus.extractor.bertpassage.BertPassage

Extracts passages from the document to be later consumed by a BERT-based model. Does NOT use all the passages: the first passage is always used, and the prob config option controls the probability of each remaining passage being selected.

Gotcha: in TensorFlow, the train tfrecords have shape (batch_size, maxseqlen) while the dev tfrecords have shape (batch_size, num_passages, maxseqlen). This is because during inference we want to pool over the scores of the passages belonging to a doc.
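The selection rule can be pictured with a short sketch. This is not part of the capreolus API; the helper name and its num_passages/prob arguments are only assumptions based on the config option mentioned above:

    import random

    def sample_passage_indices(num_passages, prob, seed=None):
        # Hypothetical helper, not part of capreolus: the first passage is
        # always kept, and every later passage is kept independently with
        # probability `prob`.
        rng = random.Random(seed)
        kept = [0]
        for i in range(1, num_passages):
            if rng.random() < prob:
                kept.append(i)
        return kept

    # With 8 passages and prob=0.1, training usually sees passage 0 plus at
    # most one or two sampled passages; at inference time all passages are
    # kept so their scores can be pooled per doc.
    print(sample_passage_indices(num_passages=8, prob=0.1, seed=0))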

module_name = LCEbertpassage[source]
config_spec[source]
create_tf_train_feature(self, sample)[source]

Returns a set of features from a doc. Of the num_passages passages present in a document, only a subset is used.

params: sample - a dict where each entry has shape [batch_size, num_passages, maxseqlen]

Returns a list of features. Each feature is a dict, and each value in the dict has shape [batch_size, maxseqlen]. The output shape differs from the input shape because we sample from the passages.
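A rough illustration of the shape change, using assumed key names rather than the real feature layout:

    import numpy as np

    # Input mirrors the documented shape: each value is
    # (batch_size, num_passages, maxseqlen). The key names are assumptions.
    batch_size, num_passages, maxseqlen = 2, 4, 256
    sample = {
        "pos_bert_input": np.zeros((batch_size, num_passages, maxseqlen), dtype=np.int64),
        "pos_mask": np.ones((batch_size, num_passages, maxseqlen), dtype=np.int64),
    }

    kept = [0, 2]  # e.g. the always-kept first passage plus one sampled passage
    features = [
        {name: value[:, i, :] for name, value in sample.items()}
        for i in kept
    ]

    # Each feature dict now holds (batch_size, maxseqlen) tensors.
    assert features[0]["pos_bert_input"].shape == (batch_size, maxseqlen)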

parse_tf_train_example(self, example_proto)[source]
id2vec(self, qid, posid, negids=None, label=None)[source]

See parent class for docstring