Deliverable 5 - Worked Hours Report
In this deliverable, we integrated components that had previously been developed in isolation. We began with the robot's foundational elements: the motor-driven movements of the head and arms, together with the simulation of emotions through the display and speech. This integration ensures these processes run in tandem and in the intended sequence.
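As an illustration of this coordination, the minimal Python sketch below runs display, movement, and speech in parallel and waits for all three to finish before the next behaviour starts. The driver functions are placeholders, not the project's actual interfaces.

```python
import threading

# Hypothetical drivers; the real code talks to the robot's motor
# controllers, display, and text-to-speech engine.
def move_head_and_arms(gesture):
    print(f"[motors] performing gesture: {gesture}")

def show_emotion(emotion):
    print(f"[display] showing emotion: {emotion}")

def speak(text):
    print(f"[speech] saying: {text}")

def express(emotion, gesture, text):
    """Run display, movement, and speech in tandem, then wait for all three."""
    workers = [
        threading.Thread(target=show_emotion, args=(emotion,)),
        threading.Thread(target=move_head_and_arms, args=(gesture,)),
        threading.Thread(target=speak, args=(text,)),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()  # sequence point: the next behaviour starts only after all finish

if __name__ == "__main__":
    express("happy", "wave", "Hello! Shall we start studying?")
```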
Next, we added the features that make up the learning mode: the lockable compartment, presence detection, and break reminders. When starting an activity, the user is asked whether they want to use the lockable compartment. If they confirm, they press the corresponding button to activate it. The lock remains open until the user manually opens the door; once the door is closed, a magnetic sensor registers the closure and triggers the next set of instructions, so a reward can be placed inside the compartment.
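A minimal sketch of this compartment flow, assuming hypothetical helpers for the button, the lock actuator, and the magnetic door sensor (the real code reads the robot's hardware instead of the console):

```python
import time

def wait_for_button():
    input("Press Enter to simulate the compartment button... ")

def release_lock():
    print("[lock] compartment unlocked")

def engage_lock():
    print("[lock] compartment locked")

def door_is_closed():
    # The real code polls the magnetic sensor; here we simulate it.
    return input("Is the door closed? (y/n) ").strip().lower() == "y"

def set_up_compartment():
    wait_for_button()      # user confirms they want the compartment
    release_lock()         # lock stays open until the user opens the door
    print("Place the reward inside and close the door.")
    while not door_is_closed():
        time.sleep(0.5)    # keep polling the magnetic sensor
    engage_lock()          # closure registered, next instructions follow
    print("Reward stored. Starting the activity.")

if __name__ == "__main__":
    set_up_compartment()
```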
Alternatively, if the user chooses not to use the lockable compartment, they are asked whether they want to use the presence detector during activities. The presence detector uses a camera mounted on the robot's body: the base motor rotates the robot to sweep a scan, and face detection runs on the frames captured during the sweep.
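The scan could look roughly like the sketch below, which uses OpenCV's Haar-cascade face detector and a hypothetical rotate_base() helper for the base motor; the scan angles and camera index are placeholders.

```python
import cv2

def rotate_base(angle):
    # Placeholder for the base motor command.
    print(f"[base] rotating to {angle} degrees")

def face_detected(frame, cascade):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def presence_scan(scan_angles=(-45, 0, 45)):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    camera = cv2.VideoCapture(0)
    try:
        for angle in scan_angles:
            rotate_base(angle)
            ok, frame = camera.read()
            if ok and face_detected(frame, cascade):
                return True
        return False
    finally:
        camera.release()

if __name__ == "__main__":
    print("User present:", presence_scan())
```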
When both features are active, the lockable compartment unlocks once the activity is complete. The presence detector also engages after a period of inactivity: the robot asks the user to press a button and then performs a presence detection scan. If the user is detected, the activity continues; if there is no user interaction, the activity is terminated.
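A rough sketch of this inactivity check, reusing a presence_scan() like the one above; the timeout and helper names are placeholders.

```python
import time

INACTIVITY_LIMIT_S = 120  # placeholder timeout

def wait_for_button(timeout_s):
    """Return True if the user presses the button within the timeout."""
    # The real code polls a physical button; here we simulate a press.
    time.sleep(0.1)
    return True

def presence_scan():
    return True  # see the camera-based scan sketched earlier

def handle_inactivity():
    if wait_for_button(INACTIVITY_LIMIT_S) and presence_scan():
        print("User detected, resuming the activity.")
        return True
    print("No user interaction detected, terminating the activity.")
    return False
```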
For the break reminders feature, the robot implements structured breaks during activities: a 5-minute break after every 25 minutes of studying, and a longer 15-minute rest after four 25-minute study intervals, each followed by its 5-minute break. Users therefore take regular short breaks and a more substantial rest after sustained periods of focused study, which encourages a balanced and healthy study routine.
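The schedule amounts to a simple timer loop. The sketch below uses the 25/5/15-minute values described above, with a scalable "minute" so the demonstration can be shortened, as was done for the video.

```python
import time

# 25-minute study blocks, 5-minute short breaks, 15-minute long break
# after every fourth block.
STUDY_MIN, SHORT_BREAK_MIN, LONG_BREAK_MIN = 25, 5, 15

def run_break_schedule(total_blocks=4, minute=60):
    for block in range(1, total_blocks + 1):
        print(f"Study block {block}: focus for {STUDY_MIN} minutes.")
        time.sleep(STUDY_MIN * minute)
        if block % 4 == 0:
            print(f"Long break: rest for {LONG_BREAK_MIN} minutes.")
            time.sleep(LONG_BREAK_MIN * minute)
        else:
            print(f"Short break: rest for {SHORT_BREAK_MIN} minutes.")
            time.sleep(SHORT_BREAK_MIN * minute)

if __name__ == "__main__":
    # Pass a shorter 'minute' (here 1 second) to speed up a demo run.
    run_break_schedule(total_blocks=4, minute=1)
```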
With the foundational features in place, the focus shifts to the learning mode itself, a crucial component of the system. The learning mode first retrieves its data from the database, which stores the custom learning material (questions and answers) as well as performance data from previous activities.
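Retrieval could look like the following sketch, assuming a simple SQLite layout with questions and performance tables; the actual schema may differ.

```python
import sqlite3

def load_chapter(db_path, chapter_id):
    """Fetch the chapter's questions/answers and past performance records."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT question, answer FROM questions WHERE chapter_id = ?",
            (chapter_id,))
        questions = cur.fetchall()
        cur.execute(
            "SELECT activity, score, finished_at FROM performance "
            "WHERE chapter_id = ? ORDER BY finished_at DESC",
            (chapter_id,))
        history = cur.fetchall()
        return questions, history
    finally:
        conn.close()
```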
With these components integrated, the learning mode introduces three distinct activity types: reading, listening, and assessment.
In the reading activity, upon entering a chapter, the user reads its content, with periodic reminders encouraging breaks for a more effective and enjoyable learning process. To demonstrate the reading mode, we shortened the break reminder times to keep the video brief.
Demonstration of the reading mode with the initial configuration of the lockable compartment, presence detection, and break reminders
In the listening activity, the user learns by listening to music.
Demonstration of the feelings chapter listening mode
The assessment activities gauge and reinforce knowledge through a series of sequential questions. If the user finds a question challenging, they can skip it: after three consecutive incorrect responses, an option to bypass the question is presented. This adaptive approach lets users tailor the experience to their own understanding and pace, fostering a more personalized and effective learning journey.
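A minimal sketch of this assessment loop, where the skip option appears after three consecutive incorrect answers; the question data and prompts are placeholders.

```python
MAX_WRONG_BEFORE_SKIP = 3

def run_assessment(questions):
    """questions: list of (prompt, expected_answer) pairs."""
    score = 0
    for prompt, expected in questions:
        wrong = 0
        while True:
            answer = input(f"{prompt} ").strip().lower()
            if answer == expected.lower():
                score += 1
                break
            wrong += 1
            if wrong >= MAX_WRONG_BEFORE_SKIP:
                # Offer to bypass the question after repeated misses.
                if input("Skip this question? (y/n) ").strip().lower() == "y":
                    break
            print("Not quite, try again.")
    return score

if __name__ == "__main__":
    demo = [("Which emotion does a smile usually show?", "happiness")]
    print("Score:", run_assessment(demo))
```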
Demonstration of the assessment mode of the feelings chapter
Finally, we have the conversation mode, shown in the previous deliverable and now integrated with the display, movement, and speech. In addition, we measured the robot's response time after a question is asked.
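Timing the response can be sketched as follows; listen() and generate_reply() are placeholders for the actual conversation pipeline.

```python
import time

def listen():
    # Stand-in for speech recognition.
    return input("You: ")

def generate_reply(question):
    # Stand-in for the conversation backend.
    return "That's a good question, let's think about it together."

def converse_once():
    question = listen()
    start = time.perf_counter()          # timer starts once the question is in
    reply = generate_reply(question)
    elapsed = time.perf_counter() - start
    print(f"Robot ({elapsed:.2f} s): {reply}")

if __name__ == "__main__":
    converse_once()
```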
Demonstration of conversation mode
As next steps, we need to refine the display of emotions, develop the overall flow of activities, and complete the integration with the database.