PaperProof

PaperProof is a paper-digital proof-editing application developed in the Global Information Systems (GlobIS) research group at ETH Zurich that allows users to edit digital documents via gesture-based markup of printed versions. It interprets the pen strokes made by users on paper and automatically executes the intended editing operations in the digital source document.


PaperProof maintains a mapping between the printed and digital document instances, which allows gestures such as a stroke through a word to be mapped back to the corresponding structural word element within the document. Since this mapping is logical rather than positional, it remains valid even if the digital instance of the document has been edited in parallel.
Fig. 1: Paper document
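
The idea can be made concrete with a minimal Java sketch. All names below (PrintDigitalMapping, PrintedRegion, bind, resolve) are hypothetical illustrations rather than the actual PaperProof API; the essential point is that printed regions are bound to stable logical element identifiers instead of character offsets.

    import java.awt.geom.Rectangle2D;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: printed regions are bound to logical element IDs
    // rather than character positions in the digital document.
    public class PrintDigitalMapping {

        // A region on a printed page (page number plus bounding box).
        public record PrintedRegion(int page, Rectangle2D bounds) {}

        private final Map<PrintedRegion, String> regionToElementId = new HashMap<>();

        // Called when the document is printed: bind each word's printed
        // region to the ID of its structural element in the source document.
        public void bind(PrintedRegion region, String elementId) {
            regionToElementId.put(region, elementId);
        }

        // Resolve a pen position to the logical word element it was printed
        // from; returns null if the position is not bound to any element.
        public String resolve(int page, double x, double y) {
            for (Map.Entry<PrintedRegion, String> entry : regionToElementId.entrySet()) {
                PrintedRegion region = entry.getKey();
                if (region.page() == page && region.bounds().contains(x, y)) {
                    return entry.getValue();
                }
            }
            return null;
        }
    }

Because parallel edits to the digital document change element positions but not element identities, a pen stroke captured on paper can still be resolved to the correct structural element at synchronisation time.
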
Currently, PaperProof is integrated with the open-source word processor OpenOffice Writer. The iDoc publishing framework supports the transformation of the digital document into an interactive paper document. Interaction with this paper document is based on Anoto digital pen and paper technology and is implemented using the iServer/iPaper framework.
Fig. 2: OpenOffice digital document
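
The resulting event flow might be sketched as follows. All type names are illustrative stand-ins and not the actual iServer/iPaper API: the Anoto pen delivers position samples, the iServer/iPaper layer resolves them to an active region, and PaperProof consumes the strokes for later recognition.

    // Illustrative sketch of the event flow; these are not the actual
    // iServer/iPaper interfaces.
    public class PenEventFlow {

        // A position sample delivered by the Anoto digital pen.
        public record PenSample(String pageAddress, double x, double y, long time) {}

        // Role played by the iServer/iPaper framework: resolve a sample to
        // the active region it falls into (or null if there is none).
        public interface RegionResolver {
            String resolve(PenSample sample);
        }

        // Role played by PaperProof: collect the strokes of an active region
        // for later gesture and handwriting recognition.
        public interface StrokeConsumer {
            void consume(String regionId, PenSample sample);
        }

        private final RegionResolver resolver;
        private final StrokeConsumer consumer;

        public PenEventFlow(RegionResolver resolver, StrokeConsumer consumer) {
            this.resolver = resolver;
            this.consumer = consumer;
        }

        // Forward each pen sample that falls into an active region.
        public void onPenSample(PenSample sample) {
            String regionId = resolver.resolve(sample);
            if (regionId != null) {
                consumer.consume(regionId, sample);
            }
        }
    }
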
PaperProof provides a gesture-based interface for triggering editing operations on the corresponding digital instance of the document. It supports five proof-editing operations: insert, delete, replace, move and annotate.
Fig. 3: Editing Gestures
The editing commands are triggered by an ordered sequence of one or more pen-based gestures, optionally followed by the user's textual input. We use the iGesture framework to recognise the specific gestures and the MyScript Intelligent Character Recognition (ICR) engine from VisionObjects to translate the handwritten text into digital string representations. An identified operation is stored in a special buffer until the paper-digital synchronisation is performed. Optionally, an acoustic and/or visual message informs the user when the gesture recogniser has identified an operation.
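
A hedged sketch of this pipeline in Java, with hypothetical names (ProofEditingBuffer, EditingOperation, DigitalDocument) standing in for the actual iGesture and MyScript interfaces:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical sketch of the command pipeline: recognised operations are
    // buffered until the paper-digital synchronisation is triggered.
    public class ProofEditingBuffer {

        public enum OperationType { INSERT, DELETE, REPLACE, MOVE, ANNOTATE }

        // Target element ID plus the ICR result of any handwritten text.
        public record EditingOperation(OperationType type, String targetElementId, String text) {}

        private final Queue<EditingOperation> pending = new ArrayDeque<>();

        // Called whenever the gesture recogniser has identified an operation.
        public void onOperationRecognised(EditingOperation operation) {
            pending.add(operation);
            notifyUser(operation); // optional acoustic and/or visual feedback
        }

        // Apply all buffered operations to the digital source document in the
        // order they were made on paper.
        public void synchronise(DigitalDocument document) {
            while (!pending.isEmpty()) {
                document.apply(pending.poll());
            }
        }

        private void notifyUser(EditingOperation operation) {
            System.out.println("Recognised operation: " + operation.type());
        }

        // Hypothetical interface to the word processor back end.
        public interface DigitalDocument {
            void apply(EditingOperation operation);
        }
    }

Buffering keeps the paper and digital instances decoupled: the user can mark up an entire printout away from the computer and apply all operations in a single synchronisation step.
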

The following operations and corresponding gestures are defined in PaperProof (a sketch of the corresponding dispatch logic follows the list):

DELETE: To issue a delete command, the user simply sketches a horizontal line gesture striking through the content to be removed. The corresponding digital elements are then deleted from the source document.

REPLACE: The replacement of information is performed by first marking the content to be removed with a horizontal line gesture. Next, the user writes the substitute text on paper. The ICR result is used as a replacement for the original information within the digital source document.

INSERT: To insert content, a user first specifies the position by drawing an inverted caret gesture. Next, they write down the new information, which is recognised by the ICR software and inserted into the digital document.

ANNOTATION: To annotate structural elements such as paragraphs or sentences, the user first encloses them between opening and closing horizontal angular bracket gestures. Next, the annotation is written on paper. Finally, the ICR-recognised textual information is added to the original document.

SIDE ANNOTATION: PaperProof also supports side annotations. In this case, the target of the annotation is indicated by a vertical line gesture. As with the previous operation, the annotation is handwritten next to the annotated content and finally inserted into the digital artefact.

MOVE: To move specific pieces of information to different positions, another composite gesture is used. First, the target entity is marked by sketching an enclosing pair of opening and closing angular bracket gestures. Then, the user points to the new location of the designated content with an upwards vertical line gesture.
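
Taken together, the dispatch from recognised gesture sequences to editing operations could look roughly like the following sketch. The gesture names and disambiguation rules (e.g. a horizontal line followed by handwritten text meaning REPLACE rather than DELETE) are assumptions for illustration only; the actual recognition is performed by iGesture.

    import java.util.List;

    // Illustrative dispatch of recognised gesture sequences; gesture names
    // and disambiguation rules are assumptions, not the PaperProof internals.
    public class GestureDispatcher {

        public enum Gesture {
            HORIZONTAL_LINE, INVERTED_CARET, OPENING_BRACKET,
            CLOSING_BRACKET, VERTICAL_LINE, UPWARDS_VERTICAL_LINE
        }

        public static String dispatch(List<Gesture> sequence, boolean hasHandwrittenText) {
            if (sequence.equals(List.of(Gesture.HORIZONTAL_LINE))) {
                // a strike-through alone deletes; followed by text it replaces
                return hasHandwrittenText ? "REPLACE" : "DELETE";
            }
            if (sequence.equals(List.of(Gesture.INVERTED_CARET)) && hasHandwrittenText) {
                return "INSERT";
            }
            if (sequence.equals(List.of(Gesture.OPENING_BRACKET, Gesture.CLOSING_BRACKET))
                    && hasHandwrittenText) {
                return "ANNOTATE";
            }
            if (sequence.equals(List.of(Gesture.VERTICAL_LINE)) && hasHandwrittenText) {
                return "SIDE ANNOTATION";
            }
            if (sequence.equals(List.of(Gesture.OPENING_BRACKET, Gesture.CLOSING_BRACKET,
                    Gesture.UPWARDS_VERTICAL_LINE))) {
                return "MOVE";
            }
            return "UNRECOGNISED";
        }
    }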
