Unsupervised Transcription of Historical Documents
Printing-press era documents are difficult for OCR systems to transcribe because they are extremely noisy. However, the noise originates from processes that are causally well understood: thickened glyphs result from over-inking, for example, and vertical offsets from slop in the mechanical baseline. We present a generative probabilistic model, inspired by historical printing processes, for transcribing images of documents from the printing-press era. By jointly modeling the text of the document and the noisy (but regular) process of rendering glyphs, our unsupervised system is able to decipher font structure and transcribe images into text more accurately. Overall, our approach gives state-of-the-art results on two datasets of historical document images.
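The generative story sketched in the abstract — latent text rendered through regular nuisance processes such as over-inking and baseline wander — can be illustrated with a toy model. The glyph shapes, offset range, and inking probability below are illustrative assumptions, not the system's actual parameterization.

```python
import random

# Toy 3x3 glyph templates as sets of (row, col) pixels.
# The actual system learns font shapes unsupervised from the images.
GLYPHS = {
    "l": {(0, 1), (1, 1), (2, 1)},
    "o": {(0, 0), (0, 2), (1, 0), (1, 2), (2, 0), (2, 2)},
}

def render(char, offset, over_inked):
    """Deterministic rendering step: shift the glyph down by `offset`
    (baseline slop) and, if over-inked, bleed each stroke one pixel right."""
    pixels = set()
    for r, c in GLYPHS[char]:
        pixels.add((r + offset, c))
        if over_inked:
            pixels.add((r + offset, c + 1))
    return pixels

def sample_token(char, rng):
    """Generative story for one printed glyph token: draw the latent
    nuisance variables (vertical offset, inking), then render."""
    offset = rng.choice([0, 1, 2])        # baseline wander
    over_inked = rng.random() < 0.3       # over-inking event
    return render(char, offset, over_inked)
```

Because the noise enters through a small set of interpretable latent variables, an unsupervised learner can invert this process: infer the offset and inking for each token while jointly estimating the glyph templates and the underlying text.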
Speaker Details
Taylor Berg-Kirkpatrick is a 5th-year graduate student in computer science at the University of California, Berkeley. He works with Professor Dan Klein on applying unsupervised learning techniques to natural language problems. Taylor also completed his undergraduate degree in mathematics and computer science at Berkeley, where he won the departmental Dorothea Klumpke Roberts Prize in mathematics. As a graduate student, Taylor has received both the Qualcomm Innovation Fellowship and the National Science Foundation Graduate Research Fellowship.
- Series: Microsoft Research Talks
- Speakers: Taylor Berg-Kirkpatrick, Jeff Running