Automatic Metadata Extraction Incorporating Visual Features from Scanned Electronic Theses and Dissertations
dc.contributor.author | Choudhury, Muntabir | en |
dc.contributor.author | Jayanetti, Himarsha R. | en |
dc.contributor.author | Wu, Jian | en |
dc.contributor.author | Ingram, William A. | en |
dc.contributor.author | Fox, Edward | en |
dc.date.accessioned | 2024-01-22T13:06:34Z | en |
dc.date.available | 2024-01-22T13:06:34Z | en |
dc.date.issued | 2021-09-27 | en |
dc.description.abstract | Electronic Theses and Dissertations (ETDs) contain domain knowledge that can be used for many digital library tasks, such as analyzing citation networks and predicting research trends. Automatic metadata extraction is important for building scalable digital library search engines. Most existing methods, such as GROBID, CERMINE, and ParsCit, are designed for born-digital documents, so they often fail to extract metadata from scanned documents such as ETDs. Traditional sequence tagging methods rely mainly on text-based features. In this paper, we propose a conditional random field (CRF) model that combines text-based and visual features. To verify the robustness of our model, we extended an existing corpus and created a new ground truth corpus consisting of 500 ETD cover pages with human-validated metadata. Our experiments show that the CRF with visual features outperformed both a heuristic baseline and a CRF model with only text-based features. The proposed model achieved 81.3%-96% F1 on seven metadata fields. The data and source code are publicly available on Google Drive and in a GitHub repository. | en |
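The abstract describes a CRF sequence tagger whose per-token features mix text cues with visual layout cues. The sketch below illustrates that idea and is not the authors' released code: it assumes tokens arrive as dicts with hypothetical text, font_size, x, and y attributes recovered from OCR layout output, and it uses the sklearn-crfsuite library with illustrative BIO-style labels for the metadata fields.

# Minimal sketch of a CRF tagger combining text-based and visual features.
# Token attributes ("font_size", "x", "y") are hypothetical stand-ins for
# layout information produced by an OCR engine; the feature set is not the
# paper's exact one.
import sklearn_crfsuite
from sklearn_crfsuite import metrics

def token_features(tokens, i):
    tok = tokens[i]  # e.g. {"text": "Virginia", "font_size": 14.0, "x": 0.31, "y": 0.08}
    feats = {
        # text-based features
        "lower": tok["text"].lower(),
        "is_upper": tok["text"].isupper(),
        "is_title": tok["text"].istitle(),
        "is_digit": tok["text"].isdigit(),
        # visual features (assumed to come from OCR layout output)
        "font_size": tok["font_size"],
        "rel_x": round(tok["x"], 1),  # coarse horizontal position on the page
        "rel_y": round(tok["y"], 1),  # coarse vertical position on the page
    }
    if i > 0:
        feats["prev_lower"] = tokens[i - 1]["text"].lower()
    else:
        feats["BOS"] = True  # beginning-of-sequence marker
    return feats

def page_to_features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# X: one feature-dict sequence per cover page; y: BIO-style label sequences
# such as "B-title", "I-title", "B-author", "O" for the metadata fields.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
# crf.fit([page_to_features(p) for p in train_pages], train_labels)
# y_pred = crf.predict([page_to_features(p) for p in test_pages])
# print(metrics.flat_f1_score(test_labels, y_pred, average="weighted"))

Because python-crfsuite accepts numeric feature values directly, the visual cues (font size, coarse page position) can sit alongside the string-valued text features in the same per-token dict.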
dc.description.notes | Peer reviewed: yes (full paper) | en |
dc.description.version | Submitted version | en |
dc.format.extent | Pages 230-233 | en |
dc.format.extent | 4 page(s) | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.doi | https://doi.org/10.1109/jcdl52503.2021.00066 | en |
dc.identifier.eissn | 2575-8152 | en |
dc.identifier.isbn | 9781665417709 | en |
dc.identifier.issn | 2575-7865 | en |
dc.identifier.orcid | Ingram, William [0000-0002-8307-8844] | en |
dc.identifier.orcid | Fox, Edward [0000-0003-1447-6870] | en |
dc.identifier.uri | https://hdl.handle.net/10919/117431 | en |
dc.identifier.volume | 2021-September | en |
dc.language.iso | en | en |
dc.publisher | IEEE | en |
dc.relation.uri | http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000760315700026&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=930d57c9ac61a043676db62af60056c1 | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Digital Libraries | en |
dc.subject | Optical Character Recognition | en |
dc.subject | Text Mining | en |
dc.subject | Metadata Extraction | en |
dc.subject | CRF | en |
dc.subject | BiLSTM-CRF | en |
dc.title | Automatic Metadata Extraction Incorporating Visual Features from Scanned Electronic Theses and Dissertations | en |
dc.title.serial | 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL 2021) | en |
dc.type | Conference proceeding | en |
dc.type.dcmitype | Text | en |
dc.type.other | Proceedings Paper | en |
dc.type.other | Meeting | en |
dc.type.other | Book in series | en |
pubs.finish-date | 2021-09-30 | en |
pubs.organisational-group | /Virginia Tech | en |
pubs.organisational-group | /Virginia Tech/Engineering | en |
pubs.organisational-group | /Virginia Tech/Engineering/Computer Science | en |
pubs.organisational-group | /Virginia Tech/Library | en |
pubs.organisational-group | /Virginia Tech/All T&R Faculty | en |
pubs.organisational-group | /Virginia Tech/Engineering/COE T&R Faculty | en |
pubs.organisational-group | /Virginia Tech/Library/Library assessment administrators | en |
pubs.organisational-group | /Virginia Tech/Library/Dean's office | en |
pubs.organisational-group | /Virginia Tech/Library/Information Technology | en |
pubs.organisational-group | /Virginia Tech/Graduate students | en |
pubs.organisational-group | /Virginia Tech/Graduate students/Doctoral students | en |
pubs.start-date | 2021-09-27 | en |