Monday, November 24, 2014

Fitts' law as a research and design tool in human-computer interaction (Paper Review)








Bibliography:

MacKenzie, I. Scott. "Fitts' law as a research and design tool in human-computer interaction." Human-computer interaction 7.1 (1992): 91-139.


Link: 


Summary:

This paper surveys how Fitts' law has been applied in human-computer interaction and reviews the results of subsequent research using Fitts' law or modified forms of it.
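MacKenzie is known for advocating the Shannon formulation of Fitts' law, MT = a + b·log2(D/W + 1). A minimal sketch of that model follows; the intercept and slope values are illustrative placeholders, not figures from the paper:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (ID), in bits."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds): MT = a + b * ID.
    a and b are illustrative; in practice they are fit to
    observed pointing data by linear regression."""
    return a + b * index_of_difficulty(distance, width)

# Farther or narrower targets have a higher ID and take longer.
print(index_of_difficulty(256, 32))  # log2(9) ≈ 3.17 bits
```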


Comments:

I'm taking Embodied Interaction, and it seems like this model might fit in with that paradigm.


Research Ideas:

I would be interested to see if using 3D hand gestures could be analyzed using this model.


Wednesday, November 19, 2014

Using Entropy to Distinguish Shape Versus Text in Hand-Drawn Diagrams (Paper Report)



Bibliography:


Bhat, Akshay, and Tracy Hammond. "Using Entropy to Distinguish Shape Versus Text in Hand-Drawn Diagrams." IJCAI. Vol. 9. 2009.

Link:


http://www.aaai.org/ocs/index.php/IJCAI/IJCAI-09/paper/download/592/906

Summary:


Entropy, in the information-theoretic sense, is used to distinguish text from shapes: text is assumed to be more random, and therefore to have greater entropy, than shapes. To encode the entropy of a stroke, the authors use an alphabet of seven letters, each describing the angle a stroke point forms with its two temporal neighbors (with one letter reserved for endpoints). Some preprocessing is done: each stroke is normalized and resampled so that its points are equidistant, and strokes that are close together in time and space are grouped under the assumption that they belong to the same class (text or shape). Finally, the probability of each 'letter' is used to calculate the entropy according to the following formula:
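The formula image did not survive here; assuming it is the standard Shannon entropy over the seven-letter angle alphabet, the computation can be sketched as follows (the letter sequences are made-up examples, not encodings from the paper):

```python
import math
from collections import Counter

def shannon_entropy(letters):
    """Shannon entropy, in bits, of a sequence of alphabet letters:
    H = -sum(p(x) * log2(p(x))) over the distinct letters x."""
    counts = Counter(letters)
    n = len(letters)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical encodings: a jagged text stroke cycles through many
# angle letters, while a smooth shape stroke mostly repeats one.
text_like = "ABCGFBEDAC"
shape_like = "AAAABAAAAA"
print(shannon_entropy(text_like) > shannon_entropy(shape_like))  # True
```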


To calculate the classification confidence, the authors use:


where b is the entropy value at which the confidence of a TEXT classification is 0.5.

They trained on course-of-action (COA) diagrams and tested classification on mechanics drawings, with favorable results.

Comments:


I think this approach is very clever. I'm interested in how entropy differs between users from different locales.

Research Ideas:


Find out in what other domains entropy might be relevant for classification.