Browsing by Author "Younes, Rabih"
Now showing 1 - 3 of 3
- Classifier for Activities with Variations
  Younes, Rabih; Jones, Mark T.; Martin, Thomas L. (MDPI, 2018-10-18)
  Most activity classifiers focus on recognizing application-specific activities that are mostly performed in a scripted manner, leaving very little room for variation within the activity. These classifiers are mainly good at recognizing short scripted activities performed in a specific way. In reality, especially when considering daily activities, humans perform complex activities in a variety of ways. In this work, we aim to make activity recognition more practical by proposing a novel approach to recognizing complex heterogeneous activities that can be performed in a wide variety of ways. We collect data from 15 subjects performing eight complex activities and test our approach while analyzing it from different aspects. The results show the validity of our approach and show that it performs better than state-of-the-art approaches that attempted to recognize the same activities in a more controlled environment.
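For context only, the sketch below shows a conventional windowed-feature activity classifier of the kind such work typically compares against; it is not the approach proposed in the paper, and the sensor stream, window length, and labels are hypothetical placeholders.

```python
# Illustrative sketch only: a generic windowed-feature activity classifier,
# NOT the approach proposed in the paper. The data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical accelerometer stream: (n_samples, 3 axes) with per-sample labels.
signal = rng.normal(size=(6000, 3))
labels = rng.integers(0, 8, size=6000)          # eight complex activities

def windowed_features(data, y, width=100, step=50):
    """Slice the stream into overlapping windows and compute simple statistics."""
    feats, targets = [], []
    for start in range(0, len(data) - width, step):
        win = data[start:start + width]
        feats.append(np.concatenate([win.mean(axis=0), win.std(axis=0),
                                     win.min(axis=0), win.max(axis=0)]))
        targets.append(np.bincount(y[start:start + width]).argmax())  # majority label
    return np.array(feats), np.array(targets)

X, y = windowed_features(signal, labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # baseline accuracy estimate
```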
- Predicting Spatial Visualization Problems’ Difficulty Level from Eye-Tracking Data
  Li, Xiang; Younes, Rabih; Bairaktarova, Diana; Guo, Qi (MDPI, 2020-03-31)
  The difficulty level of learning tasks is a concern that often needs to be considered in the teaching process. Teachers usually adjust the difficulty of exercises dynamically according to students’ prior knowledge and abilities to achieve better teaching results. In e-learning, because there is no teacher involvement, the difficulty of a task often exceeds the ability of the student. In attempts to solve this problem, several researchers have investigated the problem-solving process using eye-tracking data. However, although most e-learning exercises take the form of fill-in-the-blank and multiple-choice questions, previous research focused on building cognitive models from eye-tracking data collected from more flexible problem forms, which may lead to impractical results. In this paper, we build models to predict the difficulty level of spatial visualization problems from eye-tracking data collected from multiple-choice questions. We use eye tracking and machine learning to investigate (1) the differences in eye movement among questions of different difficulty levels and (2) the possibility of predicting the difficulty level of problems from eye-tracking data. Our models achieve an average accuracy of 87.60% on eye-tracking data from questions the classifier has seen before and an average of 72.87% on questions it has not yet seen. The results confirm that eye movement, especially fixation duration, contains essential information about the difficulty of a question and is sufficient to build machine-learning-based models that predict difficulty level.
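As an illustration of the general idea rather than the authors’ actual models, the sketch below trains a classifier on a few fixation-based summary features to predict a question’s difficulty level; the synthetic data, feature names, and label scheme are assumptions made for demonstration.

```python
# Illustrative sketch only, not the models reported in the paper: predicting
# a question's difficulty level from simple fixation-based features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials = 300

# Hypothetical per-trial fixation summaries for one question attempt.
n_fixations = rng.integers(20, 80, size=n_trials)                  # count
mean_fix_dur = rng.gamma(shape=2.0, scale=150.0, size=n_trials)    # ms
total_fix_dur = n_fixations * mean_fix_dur                         # ms
X = np.column_stack([n_fixations, mean_fix_dur, total_fix_dur])

# Hypothetical difficulty labels (0 = easy, 1 = medium, 2 = hard).
y = rng.integers(0, 3, size=n_trials)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())  # near chance on random data
```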
- ViTA: A flexible CAD-tool-independent automatic grading platform for two-dimensional CAD drawings
  Younes, Rabih; Bairaktarova, Diana (SAGE, 2022-01-01)
  Grading engineering drawings takes a significant amount of an instructor’s time, especially in large classrooms. In many cases, teaching assistants help with grading, adding levels of inconsistency and unfairness. To help automate the grading of CAD drawings, this paper introduces a novel tool that can completely automate the grading process after students submit their work. The introduced tool, called Virtual Teaching Assistant (ViTA), is a CAD-tool-independent platform that can work with exported drawings originating from different CAD software with different export settings. Using computer vision techniques applied to exported images of the drawings, ViTA not only recognizes whether or not a two-dimensional (2D) drawing is correct, but also detects many important orthographic- and sectional-view mistakes, such as mistakes in structural features, outline, hatching, orientation, scale, line thickness, colors, and views. We show ViTA’s accuracy and its relevance to the automated grading of 2D CAD drawings by evaluating it on 500 student drawings created with three different CAD software packages.
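To give a flavor of image-based checking of exported drawings, the sketch below compares a student’s exported drawing against a reference image using basic OpenCV operations; this is not ViTA’s actual pipeline, and the file names and similarity threshold are hypothetical.

```python
# Illustrative sketch only: comparing a student's exported drawing against a
# reference image. This is NOT ViTA's actual grading pipeline.
import cv2
import numpy as np

def drawing_matches(student_path, reference_path, threshold=0.95):
    """Binarize both images, align sizes, and report how many pixels agree."""
    student = cv2.imread(student_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    if student is None or reference is None:
        raise FileNotFoundError("Could not read one of the drawing images.")

    # Resize the student image to the reference size so exports from different
    # CAD tools (with different resolutions) can still be compared.
    student = cv2.resize(student, (reference.shape[1], reference.shape[0]))

    # Threshold to black-and-white line work to ignore export-specific shading.
    _, s_bin = cv2.threshold(student, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    _, r_bin = cv2.threshold(reference, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    agreement = float(np.mean(s_bin == r_bin))
    return agreement >= threshold, agreement

# Hypothetical usage with placeholder file names:
# ok, score = drawing_matches("student_view.png", "reference_view.png")
# print(ok, score)
```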