Romanian Journal of Human-Computer Interaction
Vol. 11, No. 1, 2018
ISSN 1843-4460
Contents
Combining Visual and Textual Attention in Neural Models for Enhanced Visual Question Answering
Cosmin Dragomir, Cristian Ojog, Traian Rebedea
1 - 27

Measuring the educational value of the online discussion groups: a gender analysis
Dragos-Daniel Iordache, Costin Pribeanu
28 - 39

Game Development and Evaluation of the EvoGlimpse Video Game
Bianca-Cerasela-Zelia Blaga, Dorian Gorgan
40 - 62

Modern techniques of web scraping for data scientists
Mihai Gheorghe, Florin-Cristian Mihai, Marian Dârdală
63 - 75

Liftoff - ReaderBench introduces new online functionalities
Gabriel Gutu-Robu, Maria-Dorinela Sîrbu, Ionuț Paraschiv, Mihai Dascălu, Philippe Dessus, Ștefan Trăușan-Matu
76 - 91
Abstracts
Combining Visual and Textual Attention in Neural Models for Enhanced Visual Question Answering
Cosmin Dragomir, Cristian Ojog, Traian Rebedea
University Politehnica of Bucharest
Splaiul Independenței nr. 313, sector 6, 060042, Bucharest
E-mail: cosmin.gabriel.dragomir@gmail.com, crisojog@gmail.com, traian.rebedea@cs.pub.ro
Abstract: While visual information is essential for humans because it models our environment, language is our main method of communication and reasoning. These two human capabilities interact in complex ways, so problems involving both visual and natural language data have been widely explored in recent years. Visual question answering aims at building systems able to process questions expressed in natural language about images or even videos. Such systems could significantly improve the quality of life of visually impaired people by giving them real-time answers about their surroundings. Unfortunately, the relations between images and questions are complex, and current solutions that exploit recent advances in deep learning for text and image representation are not reliable enough. To improve these results, the visual and textual representations must be fused into the same multimodal space. In this paper we present two different solutions to this problem. The first performs reasoning on the image using soft attention mechanisms computed given the question. The second applies soft attention not only to the image, but to the text as well. Although our models are more lightweight than state-of-the-art solutions for this task, we achieve near-top performance with the proposed combination of visual and textual representations.
Keywords: Visual question answering, deep learning, natural language processing, computer vision, visually impaired users.
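To illustrate the kind of question-conditioned soft attention the abstract describes, the sketch below shows a minimal attention layer over image region features, written in PyTorch. It is an illustrative reconstruction, not the authors' model; layer names and dimensions are assumptions.

```python
# Minimal sketch of soft attention over image regions, conditioned on a question
# encoding (illustrative only; dimensions and layer names are assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftVisualAttention(nn.Module):
    def __init__(self, img_dim=2048, q_dim=1024, hidden_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)   # project region features
        self.q_proj = nn.Linear(q_dim, hidden_dim)       # project question encoding
        self.score = nn.Linear(hidden_dim, 1)            # scalar attention score per region

    def forward(self, img_regions, question):
        # img_regions: (batch, n_regions, img_dim); question: (batch, q_dim)
        joint = torch.tanh(self.img_proj(img_regions) + self.q_proj(question).unsqueeze(1))
        weights = F.softmax(self.score(joint), dim=1)    # attention weights over regions
        attended = (weights * img_regions).sum(dim=1)    # weighted sum of region features
        return attended, weights
```

The attended visual vector can then be fused with the question representation in a shared multimodal space before answer prediction.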
Cite this paper as:
Dragomir, C., Ojog, C., Rebedea, T. Combining Visual and Textual Attention in Neural Models for Enhanced Visual Question Answering. Revista Romana de Interactiune Om-Calculator 11(1), 1-27, 2018.
Measuring the educational value of the online discussion groups: a gender analysis
Dragoș-Daniel Iordache1,2, Costin Pribeanu1,3
1National Institute for R&D in Informatics – ICI Bucharest
Blvd. Maresal Averescu, No. 8-10, 011455, Bucharest, Romania
E-mail: dragos.iordache@ici.ro; costin.pribeanu@ici.ro
2University of Bucharest, Romania
Blvd. Mihail Kogalniceanu, No. 34-36, 050107, Bucharest, Romania
3Academy of Romanian Scientists
Splaiul Independenței, Nr. 54, Bucharest, Romania
Abstract: Social media technologies provide various forms of educational support. Online discussion groups have been created by educators to maintain a closer relationship with students. Although online discussion groups are widely used in universities, research on this topic has been carried out mainly through qualitative studies, and few approaches measure the educational support provided by online discussion groups. The main objective of this paper is to analyze gender differences in the perception of the educational support provided by online discussion groups. Educational support has been conceptualized as a global factor that manifests along three dimensions: support for teaching, support for personal development, and support for professional formation. An invariance analysis was carried out and showed metric invariance of the model. The gender analysis results show that both female and male students consider that the discussion groups stimulate collaborative learning, facilitate sending projects to the teacher, and stimulate initiative in learning. At the same time, female students scored higher on almost all items.
Keywords: Online discussion groups, gender differences, educational support, social media technology, multidimensional model, invariance analysis.
Cite this paper as:
Iordache, D.D., Pribeanu, C. Measuring the educational value of the online discussion groups: a gender analysis. Revista Romana de Interactiune Om-Calculator 11(1), 28-39, 2018.
Game Development and Evaluation of the EvoGlimpse Video Game
Bianca-Cerasela-Zelia Blaga, Dorian Gorgan
Technical University of Cluj-Napoca, Computer Science Department
26-28 G. Barițiu street, 400027, Cluj-Napoca, Romania
E-mail: {zelia.blaga, dorian.gorgan}@cs.utcluj.ro
Abstract: Game development is a complex task that requires a lot of hard work and patience, because it brings together various elements such as 3D objects, collision detection, scripting, sound management, animation, rendering, control, and artificial intelligence. Video games are also interactive applications; therefore, they need to be designed in a way that enables a high level of usability. In this paper, we present the development methodology steps followed in creating a video game. We also propose a heuristic evaluation aimed at answering questions that determine whether the game meets the usability requirements. The goal is to gain knowledge in game development through a hands-on experiment and to estimate the level of usability of the final product.
Keywords: Development methodology, heuristic evaluation, game design, game implementation, video games, usability.
Cite this paper as:
Blaga, B.C.Z., Gorgan, D. Game Development and Evaluation of the EvoGlimpse Video Game. Revista Romana de Interactiune Om-Calculator 11(1), 40-62, 2018.
Modern techniques of web scraping for data scientists
Mihai Gheorghe, Florin-Cristian Mihai, Marian Dârdală
The Bucharest University of Economic Studies
6 Piata Romana, 1st district, Bucharest, 010374 Romania
E-mail: mihai.gheorghe@gdm.ro, fcmihai@gmail.com, dardala@ase.ro
Abstract: Since the emergence of the World Wide Web, an outstanding amount of information has become easily available through a web browser. Harvesting this data manually for scientific purposes is not feasible, and automated collection has evolved into a distinct field, web scraping. Although in the early days data could be collected automatically in a structured format with any programming language able to process a text block (the HTML response to an HTTP request), the evolution of modern web pages now requires more complex techniques. This article identifies problems a data scientist may encounter when trying to harvest web data, describes modern procedures and tools for web scraping, and presents a case study on collecting data from the website of Bucharest's Public Transportation Authority in order to use it in a geo-processing analysis. The paper addresses data scientists with little or no prior experience in automatically collecting data from the web, in a way that does not require extensive knowledge of Internet protocols and programming technologies, thus enabling rapid results for a wide variety of web data sources.
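As a small illustration of the modern techniques the paper surveys, the sketch below renders a JavaScript-heavy page with a headless browser before parsing it. The URL and CSS selector are placeholders, not the actual case-study target.

```python
# Minimal sketch: scrape a JavaScript-rendered page with headless Chrome,
# then parse the resulting DOM. URL and selector are placeholders.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup

options = Options()
options.add_argument("--headless")            # run the browser without a visible window
driver = webdriver.Chrome(options=options)

driver.get("https://example.com/routes")      # placeholder URL
html = driver.page_source                     # DOM after JavaScript execution
driver.quit()

soup = BeautifulSoup(html, "html.parser")
cells = [td.get_text(strip=True) for td in soup.select("table td")]  # placeholder selector
print(cells)
```

The key point is that the parser sees the page as rendered by the browser, not the raw HTTP response, which is what plain request-based scraping would retrieve.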
Keywords: Human-computer interaction, web scraping, data harvesting, content mining.
Cite this paper as:
Gheorghe, M., Mihai, F.C., Dardala, M. Modern techniques of web scraping for data scientists. Revista Romana de Interactiune Om-Calculator 11(1), 63-75, 2018.
Liftoff - ReaderBench introduces new online functionalities
Gabriel Gutu-Robu1, Maria-Dorinela Sîrbu1, Ionuț Paraschiv1, Mihai Dascălu1,2, Philippe Dessus3, Ștefan Trăușan-Matu1,2
1University Politehnica of Bucharest
313 Splaiul Independentei, Bucharest, Romania
E-mail: gabriel.gutu@cs.pub.ro, maria.sirbu@cti.pub.ro, ionut.paraschiv@cs.pub.ro, mihai.dascalu@cs.pub.ro, stefan.trausan@cs.pub.ro
2Cognos Business Consulting S.R.L.
35 Blvd. Marasesti, sector 4, 040251, Bucharest, Romania
3Univ. Grenoble Alpes, LaRAC
F-38000 Grenoble, France
E-mail: philippe.dessus@univ-grenoble-alpes.fr
Abstract: Natural Language Processing (NLP) has become a trending domain in recent years for many researchers and companies due to its wide applicability and new advances in technology. The aim of this paper is to introduce an updated version of our open-source NLP framework, ReaderBench (http://readerbench.com/), designed to support both students and tutors in multiple learning scenarios that encompass one or more of the following dimensions: Cohesion Network Analysis of discourse, textual complexity assessment, keyword extraction using Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA) and word2vec semantic models, as well as the analysis of online communities and discussions. The latest version of our ReaderBench framework (v4.1) includes: a) new features, Application Programming Interfaces (APIs) and visualizations (e.g., sociograms, analysis of interaction between participants inside a community), b) a new web interface written in Angular 6, and c) the integration of new technologies to increase performance (i.e., spaCy and AKKA), as well as modularity and ease of deployment (i.e., Artifactory and Maven modules). ReaderBench is a fully functional framework capable of enhancing the quality of learning processes conducted in multiple languages (English, French, Romanian, Dutch, Spanish, and Italian), covering both individual and collaborative assessments.
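To give a flavor of one of the semantic techniques the framework builds on, the sketch below extracts topic keywords with an LDA model via gensim. This is not ReaderBench's own API; the toy corpus and parameters are assumptions for illustration.

```python
# Minimal sketch of keyword/topic extraction with LDA using gensim
# (illustrative only; not the ReaderBench API).
from gensim import corpora, models
from gensim.utils import simple_preprocess

docs = [
    "students collaborate on the online discussion forum",
    "the tutor assesses textual complexity of student essays",
    "semantic models capture cohesion between discussion contributions",
]
tokens = [simple_preprocess(d) for d in docs]           # lowercase and tokenize
dictionary = corpora.Dictionary(tokens)                 # map tokens to integer ids
corpus = [dictionary.doc2bow(t) for t in tokens]        # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)
for topic_id, words in lda.show_topics(num_topics=2, num_words=5, formatted=False):
    print(topic_id, [w for w, _ in words])              # top keywords per topic
```

In practice, such topic keywords would be combined with LSA and word2vec similarities to characterize cohesion between text fragments.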
Keywords: ReaderBench Framework, Natural Language Processing, Semantic Models, Cohesion Network Analysis, Computer-Supported Collaborative Learning.
Cite this paper as:
Gutu-Robu, G., Sirbu, M.D., Paraschiv, I., Dascalu, M., Dessus, P., Trausan-Matu, S. Liftoff - ReaderBench introduces new online functionalities. Revista Romana de Interactiune Om-Calculator 11(1), 76-91, 2018.