
HTRC Extracted Features Dataset (2015)
Page-level features from 4.8 million volumes

Note that this is a beta data release; feedback is welcome.

A great deal of useful research can be performed non-consumptively with pre-extracted features. For this reason, we've prepared a data export of features extracted from an out-of-copyright collection of digitized text in the HathiTrust.

Features are notable or informative characteristics of the text. We have processed a number of useful features, including part-of-speech-tagged token counts, header and footer identification, and various line-level information, all provided per page. Providing token information at the page level makes it possible to separate text from paratext (an example of the latter: thirty pages of publishers’ ads at the back of a book). We also break each page into three parts: header, body, and footer. The specific features extracted from the text are described in more detail below.

The primary precalculated feature we provide is the token (unigram) count, on a per-page basis. Term counts are specific to the part-of-speech usage of the term, so a term used as both a noun and a verb, for example, will have separate counts for each of those uses. We also include line information, such as the number of lines with text on each page, and counts of the characters that start and end lines on each page. This information can illuminate genre and volume structure: for instance, it helps distinguish poetry from prose, or body text from an index.

This is a beta release, and we would love to hear about how you use it, or what else you would like to see!

Review the documentation below or jump straight to the downloads.

Data Stats

# of volumes 4,801,237
# of pages 1,825,317,899
Median pages/volume 330

More information about HathiTrust datasets.

Feature File Documentation

The HTRC Extracted Features dataset has two data files for each volume: a basic features file and an advanced features file. For most users, the basic features should suffice. Both files include the volume metadata.

Metadata

A small amount of bibliographic metadata for identifying the volume is included in this dataset. See also: “Where can I find detailed bibliographic metadata?”.

Volume

The volume corresponds to the current work (e.g., a book) as it appears in the HathiTrust index.

id: A unique identifier for the current volume. This is the same identifier used in the HathiTrust and HathiTrust Research Center corpora.

Metadata

schemaVersion: A version identifier for the format and structure of this metadata object. metadata.schemaVersion is separate from features.schemaVersion below.

dateCreated: The time this metadata object was processed. metadata.dateCreated is not necessarily the same as features.dateCreated below.

title: Title of the given volume.

pubDate: The publication year.

language: The primary language of the given volume.

htBibUrl: The HathiTrust Bibliographic API call for the volume.

handleUrl: The persistent identifier for the given volume.

oclc: The array of OCLC number(s).

imprint: The publication place, publisher, and publication date of the given volume.

Basic Features

The extracted features data is provided in JSON form.

Features

The features extracted from the content of the volume.

schemaVersion: A version identifier for the format and structure of the feature data (HTRC generated).

dateCreated: The time this batch of feature data was processed and recorded (HTRC generated).

pageCount: The number of pages in the volume.

pages: An array of JSON objects, each representing a page of the volume.
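
For orientation, here is a minimal Python sketch that loads one basic feature file and walks the fields listed above. The file name is hypothetical, and the bzip2 compression is an assumption about how the synced files are stored; if your copy is plain JSON, use open() instead of bz2.open().

import bz2
import json

# Hypothetical file name; substitute one of your downloaded files.
path = "example.basic.json.bz2"

with bz2.open(path, mode="rt", encoding="utf-8") as f:
    vol = json.load(f)

print(vol["metadata"]["title"], vol["metadata"]["pubDate"])
print("pages:", vol["features"]["pageCount"])

# Each entry of features.pages is one page object (see "Page" below).
first = vol["features"]["pages"][0]
print("seq:", first["seq"], "tokens:", first["tokenCount"])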

Page

Pages are contained within volumes; each has a sequence number and information about its header, body, and footer.

Page-level information

seq: The sequence number. See notes on ID usage.

tokenCount: The total number of tokens in the page.

lineCount: The total number of non-empty lines in the page.

emptyLineCount: The total number of empty lines in the page.

sentenceCount: Total number of sentences identified in the page using OpenNLP. See “How are tokens parsed?” below.

languages: Automatically inferred language likelihoods for the page, generated with Shuyo Nakatani's Language Detection library (see its language code reference).

The fields for header, body, and footer are the same, but apply to different parts of the page. Read about the differences between the sections.

tokenCount: The total number of tokens in this page section.

lineCount: The number of lines containing characters of any kind in this page section. This reflects the layout of the page; for sentence counts, see the sentenceCount field.

emptyLineCount: The number of lines without text in this page section.

sentenceCount: The number of sentences found in the text in this page section, parsed using OpenNLP.

tokenPosCount: An unordered list of all tokens (characterized by part of speech using OpenNLP), and their corresponding frequency counts, in this page section. Tokens are case-sensitive, so a capitalized “Rose” is shown as a separate token. There will be separate counts, for instance, for “rose” (noun) and “rose” (verb). Words separated by a hyphen across a line break are rejoined. No other data cleaning or OCR correction was performed. Details on POS parsing and types of tags used.
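
As an illustration, here is a small Python sketch that collapses a section's tokenPosCount into case-folded word counts. It assumes the JSON shape {token: {POS-tag: count}}, consistent with the description above; verify against an actual file.

from collections import Counter

def fold_pos_counts(section):
    # Collapse a section's tokenPosCount into case-folded word counts,
    # summing over part-of-speech tags. Assumes the shape
    # {token: {pos_tag: count}} described above.
    counts = Counter()
    for token, pos_counts in section["tokenPosCount"].items():
        counts[token.lower()] += sum(pos_counts.values())
    return counts

# e.g., word frequencies for the body of one page:
# body_counts = fold_pos_counts(page["body"])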

Advanced Features

This extracted features data is also provided in JSON form. The volume and metadata sections are the same as those included with the basic extracted features, and the page count information is provided again in the features data.

Features

The features extracted from the content of the volume.

schemaVersion: A version identifier for the format and structure of the feature data (HTRC generated).

dateCreated: The time this batch of feature data was processed and recorded (HTRC generated).

pageCount: The number of pages in the volume.

pages: An array of JSON objects, each representing a page of the volume.

Page

Pages are contained within volumes; each has a sequence number and information about its header, body, and footer.

Page-level information

seq: The sequence number. See notes on ID usage.

beginLineChars: Counts of the initial character of each line in this page section (ignoring whitespace).

endLineChars: Counts of the last character on each line in this page section (ignoring whitespace).

capAlphaSeq: (body only) The maximum length of a sequence of lines beginning with capital letters in alphabetical order.


Get the Data

This feature dataset is licensed under a Creative Commons Attribution 4.0 International License.

The data is accessible using rsync, which is typically installed already on Mac and Linux systems; Windows users can use it through Cygwin.

Sample Files

A sample of 100 extracted feature files is available for download through your browser: sample.tar.

Also, thematic collections are available to download: DocSouth (87 volumes), EEBO (355 volumes), ECCO (505 volumes).

Rsync

Rsync will download each feature file individually, following a pairtree directory structure.

To sync all the basic feature files:

rsync -av sandbox.htrc.illinois.edu::pd-features/basic/ .

Note that this data is 1.19 terabytes! Only download all of it if you know what you're doing.

To sync all the advanced feature files:

rsync -av sandbox.htrc.illinois.edu::pd-features/advanced/ .

A randomly sorted listing of all the basic files is available at the following location:

rsync -azv sandbox.htrc.illinois.edu::pd-features/listing/pd-basic-file-listing.txt .

There is also an advanced file listing at pd-features/listing/pd-advanced-file-listing.txt.

Users hoping for more flexible file listings can use rsync's --list-only flag.

To rsync only the files in a given text file:

rsync -av --files-from FILE.TXT sandbox.htrc.illinois.edu::pd-features/ .
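
Because the listing above is randomly sorted, its first N lines already constitute a random sample. Here is a short Python sketch that writes a FILE.TXT of 1,000 entries, assuming the listing has been synced to the working directory:

import itertools

# Take the first 1,000 entries of the randomly sorted listing as a
# random sample, for use with rsync --files-from.
with open("pd-basic-file-listing.txt", encoding="utf-8") as listing, \
     open("FILE.TXT", "w", encoding="utf-8") as out:
    out.writelines(itertools.islice(listing, 1000))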

File names

Volume feature file names use that volume's ID, with the following characters substituted: ":" becomes "+", and "/" becomes "=". This means that any list of HathiTrust public domain volume IDs can be used to derive the corresponding feature file names.
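
A small Python helper illustrating the substitution. The ".basic.json.bz2" suffix is an assumption about the full naming scheme, so check against a name from the file listing above:

def feature_file_name(volume_id, kind="basic"):
    # Substitute the characters noted above: ":" -> "+", "/" -> "=".
    clean = volume_id.replace(":", "+").replace("/", "=")
    # The suffix is an assumption; verify against the file listing.
    return "%s.%s.json.bz2" % (clean, kind)

print(feature_file_name("uc2.ark:/13960/t4mk66f1d"))
# uc2.ark+=13960=t4mk66f1d.basic.json.bz2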

Questions

How are tokens parsed?

Hyphenation of tokens at the end of a line was corrected using custom code. Apache OpenNLP was used for sentence segmentation, tokenization, and part-of-speech (POS) tagging. No additional data cleaning or OCR correction was performed.

OpenNLP uses the Penn Treebank POS tags.

Can I use the page sequence as a unique identifier?

The seq value is always sequential from the start of the volume. In this version of the processing, the seq value was extracted from the file name. In some limited cases, where the file-name labeling does not align with the seq given in the METS file, our pages are out of alignment with HathiTrust's. In the future, we will use the METS file to specify the seq number. Each scanned page of a volume has a unique sequence number, but it is specific to the current version of the full text: in theory, updates to the OCR that add or remove pages will change the sequence. The practical likelihood of such changes is low, but uses of the page sequence as an identifier should be cautious.

A future release of this data will include persistent page identifiers that remain unchanged when there are sequential changes.

Where’s the bibliographic metadata? Who wrote the book, when is it from, etc.?

This dataset is foremost an extracted features dataset, with minimal metadata included as a convenience. For additional metadata, e.g. subject classifications, HathiTrust offers Hathifiles, which can be paired with our feature dataset through the volume id field.

The metadata included in this dataset combines MARC metadata from HathiTrust with additional information from Hathifiles:

  • imprint: 260a from the HathiTrust MARC record; 260b and 260c from Hathifiles.
  • language: MARC control field 008 from Hathifiles.
  • pubDate: extracted from Hathifiles. See also: details on HathiTrust's rights-determination.
  • oclc: extracted from Hathifiles.

Additionally, schemaVersion and dateCreated are specific to this feature dataset.

What do I do with beginning- or end-of-line characters?

The characters at the start and end of a line can be used to differentiate text from paratext at a page level. For instance, index lines tend to begin with capitalized letters and end with numbers. Likewise, lines in a table of contents can be identified through Arabic or Roman numerals at the start of a line.
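
As an illustration, here is a rough Python heuristic over the advanced features, assuming beginLineChars and endLineChars map characters to counts as described above; the 0.6 threshold is arbitrary and should be tuned:

def looks_like_index(page, threshold=0.6):
    # Flag pages whose body lines mostly begin with a capital letter
    # and end with a digit -- a pattern typical of index pages.
    body = page["body"]
    begins, ends = body["beginLineChars"], body["endLineChars"]
    total_b, total_e = sum(begins.values()), sum(ends.values())
    if not total_b or not total_e:
        return False
    caps = sum(n for ch, n in begins.items() if ch.isupper())
    digits = sum(n for ch, n in ends.items() if ch.isdigit())
    return caps / total_b >= threshold and digits / total_e >= threshold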

What is the difference between the header, body, and footer sections?

Because repeated headers and footers can distort word counts in a document, but also help identify document parts, we attempt to identify repeated lines at the top or bottom of a page and provide separate token counts for those forms of paratext. The “header” and “footer” sections will also include tokens that are page numbers, catchwords, or other short lines at the very top or bottom of a page. Users can of course ignore these divisions by aggregating the token counts for header, body, and footer sections.
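
For example, here is a Python sketch that sums tokenPosCount over a page's three sections, keeping the (token, POS) distinction. It assumes the {token: {POS-tag: count}} shape described under Basic Features:

from collections import Counter

def page_token_counts(page):
    # Sum tokenPosCount over header, body, and footer.
    totals = Counter()
    for name in ("header", "body", "footer"):
        for token, pos_counts in page[name]["tokenPosCount"].items():
            for pos, n in pos_counts.items():
                totals[(token, pos)] += n
    return totals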



Tools

If you've built tools or scripts for processing our data, let us know and we'll feature them here!

Projects

Let us know about your projects and we'll link to them here.
