Syllabus

Introduction

Cognitive robotics and social robotics are related but distinct fields within robotics. Cognitive robotics focuses on creating robots that can perform tasks requiring advanced perception, reasoning, and decision-making, such as recognizing objects, understanding natural language, learning from experience, and making predictions about the future. Cognitive robots are designed to process and analyze large amounts of data and to make decisions based on that data. Social robotics, on the other hand, focuses on creating robots that can interact and communicate with humans in a natural and effective way. Social robots are designed to understand and respond to human emotions, social cues, and body language, and to generate expressive, natural-sounding speech and gestures. They are intended for a variety of applications, such as customer service, education, and healthcare. Both cognitive and social robotics require advanced technologies such as machine learning, computer vision, and natural language processing; the difference is that cognitive robotics emphasizes the robot’s ability to process information and make decisions, while social robotics emphasizes the robot’s ability to interact with humans.

Cognitive and social robotics is becoming increasingly important in industrial production. It enables the development of robots that work in collaboration with human workers, improving the overall safety and performance of the workplace, and the creation of robots that interact with their environment and make decisions based on their perceptions, providing greater flexibility and autonomy in the production process. Using robots in a way that leverages their cognitive and social capabilities can have a profound impact on industrial production: by enabling robots to work in a more collaborative and adaptive manner, it becomes possible to reduce the time and effort required to perform tasks, improve product quality and reliability, and minimize the impact of machine downtime and failure. For example, in a manufacturing setting, robots equipped with cognitive abilities can monitor production processes in real time, identify potential problems, and make decisions to address those problems before they lead to equipment failures or production slowdowns. Additionally, social robots can interact with human workers in a production environment, assisting with tasks, monitoring worker safety, and collecting data that can be used to improve production processes.

Total Hours

This course unit comprises 100 hours: 28 hours of lectures, 14 hours of lab work, and 58 hours of individual study and work.

General Objective

The general objective of this course unit on cognitive and social robotics for industrial production is to equip students with the knowledge and skills necessary to design, develop, and integrate socially intelligent robots into various industrial settings. The course covers the principles of cognitive and social robotics, including human-robot interaction, machine learning, computer vision, and natural language processing. Students will learn how to design socially intelligent robots that can interact with human operators in a natural and intuitive way, and how to use these robots to enhance the efficiency, safety, and overall quality of industrial production processes. Through hands-on projects and case studies, students will gain practical experience in developing socially intelligent robots and integrating them into real-world industrial environments.

Specific Objectives / Learning Outcomes

The specific objectives or learning outcomes of this course unit are:

  • Understanding the concepts and principles of cognitive and social robotics, and their applications in industrial production.
  • Acquiring knowledge of the various sensing, perception, and decision-making capabilities of cognitive and social robots, and their impact on industrial production.
  • Developing skills in programming, controlling, and integrating cognitive and social robots into industrial production processes.
  • Becoming familiar with the latest technologies and trends in cognitive and social robotics for industrial production, including IoT, cloud computing, and machine learning.
  • Applying the knowledge and skills gained from the course to real-world industrial production problems, through hands-on laboratory work, projects, and case studies.
  • Evaluating the potential benefits and challenges of integrating cognitive and social robots into industrial production processes, and proposing solutions to overcome these challenges.
  • Developing a critical perspective on the ethical, social, and economic implications of cognitive and social robotics for industrial production.

Professional Competencies

The professional competencies that a student can gain from this course unit include:

  • Knowledge and understanding of the basics of cognitive and social robotics and its applications in industrial production.
  • Knowledge and understanding of the fundamental principles of human-robot interaction, including the design and implementation of interfaces, algorithms, and software for social robots.
  • Knowledge and understanding of the ethical, legal, and societal implications of deploying social robots in industrial production, including data privacy, security, and other important considerations.
  • Ability to develop, implement, and evaluate systems for integrating social robots into industrial production, including design and development of software and algorithms.
  • Ability to work effectively with other professionals and stakeholders, including industrial engineers, technicians, and production managers, to design, develop, and integrate social robots into industrial production.
  • Understanding of how to use data and analytics to optimize the deployment and performance of social robots in industrial production, including how to develop and implement predictive maintenance strategies.
  • Ability to evaluate and interpret the results of experiments and case studies to inform decision-making and improve the performance of social robots in industrial production.
  • Ability to communicate effectively with a wide range of stakeholders, including engineers, technicians, production managers, and customers, to explain the benefits and limitations of social robots in industrial production and to promote their effective integration into the workplace.

Cross Competencies

The course unit on Cognitive and Social Robotics for industrial production develops the following cross-competencies in students:

  • Problem-Solving: Students will develop their problem-solving skills as they learn how to design, program, and integrate social robots into industrial production systems.
  • Innovation: Students will learn how to identify new opportunities for using social robots in industrial production, and how to develop innovative solutions to improve production processes.
  • Teamwork: Students will learn how to work in teams, collaborating with engineers and technicians from different disciplines to design and implement social robotics solutions for industrial production.
  • Communication: Students will develop their communication skills as they work with stakeholders to explain the benefits and limitations of social robots, and how they can be used in industrial production.
  • Adaptability: Students will learn how to work in an ever-changing environment, adapting to new technologies and changing business requirements as the use of social robots in industrial production evolves.
  • Cross-cultural understanding: Students will gain an understanding of the cultural and ethical implications of using social robots in industrial production, and will learn how to work with stakeholders from diverse backgrounds.
  • Ethical awareness: Students will develop their ethical awareness as they learn about the responsible use of social robots in industrial production, including data privacy and security, human-robot interaction, and ethical decision-making.

Alignment to Social and Economic Expectations

The outcomes of the course align with social and economic expectations by preparing students to meet the demands of Industry 4.0 and contribute to the development of more sustainable and human-centered production systems. The skills and knowledge gained through the course enable students to effectively address the challenges posed by Industry 4.0, drive innovation, and play a role in shaping the future of work in the industrial sector. By fostering a technologically advanced and socially responsible workforce, the course unit helps to promote economic growth and social well-being, supporting the transition towards a more sustainable and equitable future. While the adoption of new technologies may result in some job losses for repetitive or monotonous operations, it also creates new job opportunities in areas such as technology development, implementation, and maintenance.
Evaluation

Assessment methods

For the lectures portion of the course unit on cognitive and social robotics in industrial production, the following assessment methods are used:

  • Quizzes: In-class quizzes or online quizzes to test the students’ understanding of key concepts and theories covered in the lectures.
  • Written assignments: Individual assignments that require students to apply their knowledge and skills to solve a real-world problem or case study.
  • Midterm and Final Exams: These exams consist of multiple choice, short answer, and essay questions and assess the students’ overall understanding of the course material.

For the lab work portion of the course, the following assessment methods are used:

  • Lab reports: Students are required to write lab reports documenting their experiments, results, and analysis. These reports are graded on the quality of their writing, methodology, results, and analysis.
  • Oral presentations: Students are required to present their lab work to the class, which is assessed based on the quality of their presentation skills, content, and interaction with the audience.

Assessment criteria

For lectures, the assessment criteria for this course unit on cognitive and social robotics in industrial production are:

  • Knowledge and Understanding: Assessment of the student’s ability to comprehend and apply the concepts, theories, and principles of cognitive and social robotics in industrial production.
  • Analytical and Problem Solving Skills: Assessment of the student’s ability to analyze complex problems, evaluate different solutions, and make informed decisions related to cognitive and social robotics in industrial production.
  • Communication Skills: Assessment of the student’s ability to communicate their ideas, designs, and solutions in a clear, concise, and effective manner.
  • Teamwork and Collaboration Skills: Assessment of the student’s ability to work effectively in a team and collaborate with others to achieve a common goal.
  • Application of Technology: Assessment of the student’s ability to apply appropriate technologies, tools, and software for cognitive and social robotics in industrial production.

For lab work, the assessment criteria could include:

  • Technical Skills: Assessment of the student’s ability to use and apply the technical skills and knowledge acquired in the course to cognitive and social robotics solutions in industrial production.
  • Quality of Work: Assessment of the student’s ability to produce high-quality work that meets the requirements and standards set for cognitive and social robotics in industrial production.
  • Creativity and Innovation: Assessment of the student’s ability to think creatively and apply innovative cognitive and social robotics solutions in industrial production.
  • Attention to Detail: Assessment of the student’s ability to pay close attention to details and ensure that cognitive and social robotics solutions are accurate, complete, and well-documented.
  • Time Management: Assessment of the student’s ability to manage their time effectively and deliver completed lab work within the specified timeframe.

Quantitative performance indicators to assess the minimum level of performance (mark 5 on a scale from 1 to 10)

For the lectures:

  • Attendance and participation in class discussions – The student should attend at least 80% of the lectures and actively participate in class discussions.
  • Homework and Quizzes – The student should complete all homework assignments and quizzes with a minimum score of 60%.
  • Midterm Exam – The student should achieve a minimum score of 50% on the midterm exam.

For the lab work:

  • Lab attendance and participation – The student should attend and participate in all scheduled lab sessions.
  • Lab reports – The student should submit all lab reports on time, with a minimum score of 60% on each report.
  • Lab assignments – The student should complete all lab assignments with a minimum score of 60%.
  • Lab exams – The student should achieve a minimum score of 50% on the lab exams.

For the final exam:

  • Correct answers to at least 70% of the lecture-related questions.
  • The student should be able to demonstrate an understanding of the basic concepts and theories related to cognitive and social robotics and their applications in industrial production, with a minimum score of 50% on multiple-choice questions or short answer questions.
  • The student should be able to explain and analyze real-life case studies and their results, with a minimum score of 50% on case study analysis questions.
  • The student should be able to demonstrate a basic knowledge of the technologies, tools, and methodologies used in the design and implementation of cognitive and social robotics, with a minimum score of 50% on matching or labeling questions.
  • The student should be able to apply the concepts and theories learned in the lectures to solve practical problems, with a minimum score of 50% on problem-solving questions.
  • The student should be able to critically evaluate the benefits and challenges of using cognitive and social robotics in industrial production, with a minimum score of 50% on essay questions.
  • Evidence of the ability to apply learned concepts and theories to practical scenarios, as demonstrated by the number of correctly answered application-based questions.
  • Display of critical thinking skills, as evidenced by the number of correct answers to questions requiring analysis and synthesis of information.
  • Overall exam performance, measured in terms of the total number of correct answers and expressed as a percentage of the total exam score. A minimum score of 50% or above is set as the benchmark for a mark of 5.

Lectures

Block 1 – Foundations (Units 1–4)

1. Introduction to Cognitive and Social Robotics

  • Definitions, scope, and differences between cognitive and social robotics
  • Overview of human cognition & social interaction relevant to robotics
  • Historical evolution & landmark projects

2. Human–Robot Interaction (HRI) Principles

  • Communication channels (verbal, non-verbal, proxemics)
  • Social presence and trust in robots
  • HRI evaluation metrics

3. Cognitive Architectures for Robotics

  • Symbolic vs. sub-symbolic cognition
  • Classical architectures (Soar, ACT-R, BDI)
  • Hybrid and emergent architectures

4. Perception and Multimodal Sensing

  • Vision, speech, tactile sensing, physiological monitoring
  • Sensor fusion and contextual understanding
  • Challenges in dynamic environments

Block 2 – Core Methods & Algorithms (Units 5–8)

5. Natural Language Understanding in Robots

  • Dialogue systems, intent recognition, semantic parsing
  • Integration with speech recognition/synthesis
  • Context-aware conversation management

6. Emotion Recognition and Expression

  • Facial expression analysis, voice emotion detection
  • Emotional state modeling
  • Expressive behaviors in robots

7. Learning and Decision-Making in Cognitive Robots

  • Reinforcement learning, imitation learning, and case-based reasoning
  • Decision-making under uncertainty (POMDPs, Bayesian approaches)
  • Explainable AI in cognitive decision-making

8. Memory and Knowledge Representation

  • Short-term vs. long-term memory in robots
  • Ontologies, semantic graphs, episodic memory
  • Adaptive learning from interaction history

Block 3 – Platforms & Implementation (Units 9–10)

9. Introduction to Furhat and Other Social Robots

  • Hardware and software architectures (Furhat, Pepper, NAO)
  • Development environments & APIs
  • Design considerations for social embodiment

10. Practical Programming of a Social Robot

  • Dialogue scripting, gestures, and multimodal cues
  • Integration with external APIs (IoT, vision, databases)
  • Mini-lab: implement a multi-turn interaction scenario

Block 4 – Applications (Units 11–12)

11. Social Robotics in Industry and Public Spaces

  • Roles in manufacturing, logistics, customer service
  • Case studies: social robots as team mediators or trainers
  • Deployment challenges and ROI considerations

12. Cognitive Robotics in Maintenance and Assistance

  • Predictive maintenance in industrial settings
  • Cognitive robots for healthcare, education, and elderly care
  • Mixed human–robot teams

Block 5 – Advanced Perspectives & Future (Units 13–14)

13. Ethics, Social Impact, and Regulation

  • Bias, privacy, autonomy, and accountability in robots
  • Standards and legal frameworks (EU AI Act, ISO 13482)
  • Cultural adaptation in social robot design

14. Future Trends in Cognitive and Social Robotics

  • Swarm and collective cognition
  • Bio-inspired and affective robotics
  • Long-term research challenges and open questions

Lab Work

1. Team Projects (Specialization Track – 7 Sessions)

General Framework

  • Teams of 6–10 students.
  • Each team is assigned one robot (Misty II, Furhat, NAO, Pepper).
  • Work across 7 sessions (~14 hours).
  • Deliverable: live end-of-semester demo + short project report.
  • All projects integrate at least one AI module (LLM, vision, planning, sentiment analysis, etc.).

Shared Narrative: “Robotic Reception Desk” → each robot contributes a different role.

Misty II – Embodiment + Expressive Emotion

Objective: Demonstrate Misty as an emotional greeter that responds to visitor input with expressive cues.
Steps:

1. Use the simulator → learn the API calls for LEDs, face animations, and head/arm movement.
2. Connect sentiment analysis (text or speech → sentiment score).
3. Map sentiment → Misty’s emotional expression (happy, sad, surprised, neutral).
4. Add simple gestures (nodding, tilting the head, waving).
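Steps 2–3 amount to a small mapping function. A minimal sketch in Python, assuming a sentiment score in [-1, 1]; the expression, LED, and gesture names are illustrative, and on the robot they would be sent through Misty's REST API:

```python
def map_sentiment(score: float):
    """Map a sentiment score in [-1, 1] to an expression preset,
    an LED colour (RGB), and a gesture name (all names illustrative)."""
    if score >= 0.8:
        return "surprised", (255, 165, 0), "wave"   # strongly positive input
    if score >= 0.3:
        return "happy", (0, 255, 0), "nod"
    if score <= -0.3:
        return "sad", (0, 0, 255), "head_tilt"
    return "neutral", (255, 255, 255), "nod"
```

Keeping the mapping in one pure function makes the sentiment → expression table easy to document and to tune during the lab sessions.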

Deliverables:

  • Live demo: Misty greets a visitor and changes expression based on input.
  • Short documentation: how the sentiment → expression mapping works.
    • Required Sections:
      • Project Overview: short description of Misty’s role at the reception; intended user interaction (what the visitor experiences).
      • System Architecture: diagram of inputs (speech/text), sentiment analysis, the mapping function, and Misty API outputs (LEDs, gestures); tools & libraries used (sentiment library, SDK, simulator).
      • Implementation Details: how the sentiment → expression mapping was defined; list of gestures/expressions created.
      • Demo Scenario: 1–2 example interactions (visitor input → Misty’s response).
      • Challenges & Limitations: what worked smoothly and what didn’t (e.g., latency, sentiment misclassification).
  • Evaluation (30 pts): Emotion variety (10), API integration (10), AI integration (10).

Furhat – Dialogue-Rich Interaction with Face & Gaze

Objective: Show Furhat as the conversation hub of the reception.
Steps:

1. Use the simulator → build a state-based dialogue flow.
2. Integrate an LLM (OpenAI, Llama, Gemini) for Q&A.
3. Implement gaze + face animation synced to the dialogue.
4. Add at least one “social skill” (empathy, humor, politeness).
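The state-based flow in step 1 can be prototyped independently of the Furhat SDK (where states are defined in Kotlin). A minimal Python sketch with hypothetical state names, where any input that is not a farewell falls through to the LLM-backed answer state:

```python
# Hypothetical reception states; "faq" stands in for the LLM-backed answer state.
STATES = {
    "greet":   {"say": "Welcome! How can I help you?", "next": "listen"},
    "listen":  {"say": None, "next": None},           # branches on the user's input
    "faq":     {"say": "<LLM answer goes here>", "next": "listen"},
    "goodbye": {"say": "Have a nice day!", "next": None},
}

def transition(state, user_input):
    """Return the next state; only 'listen' branches on the user's input."""
    if state != "listen":
        return STATES[state]["next"]
    if "bye" in user_input.lower():
        return "goodbye"
    return "faq"   # anything else is routed to the LLM
```

Sketching the flow as a plain table first makes it straightforward to port the same states into the SDK's dialogue framework later.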

Deliverables:

  • Live demo: Furhat answers visitor questions naturally, with gaze and expressions.
  • Short documentation: dialogue flow + LLM integration.
    • Required Sections:
      • Project Overview: role of Furhat at the reception desk; types of dialogue supported (info, casual talk, etc.).
      • Dialogue Flow Design: diagram of the state-based flow + the LLM integration point; how turn-taking and fallback responses were handled.
      • Multimodal Behavior: description of gaze control, facial animations, and how they sync to the dialogue.
      • AI Component: which LLM was used (OpenAI, Gemini, Llama); prompting strategy (short description).
      • Demo Scenario: 1–2 dialogues with annotated gaze/animation responses.
  • Evaluation (30 pts): Dialogue naturalness (10), gaze/animation sync (10), AI integration (10).

NAO – Humanoid Gestures, Movements, Social Play

Objective: Demonstrate NAO as the friendly physical presence at the reception.
Steps:

1. Learn Choregraphe/Webots → program gesture sequences.
2. Sync speech + gestures (e.g., waving, pointing, dancing).
3. Add a “social play” routine (e.g., teaching a simple movement, playing a short game).
4. Optionally, integrate vision (recognize a visitor → trigger a gesture).
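The speech-gesture synchronization in step 2 is essentially a timeline-merging problem. A minimal sketch, assuming timestamped events; on the robot each entry would map to a NAOqi speech or animation call, and the names here are illustrative:

```python
def build_timeline(speech_events, gesture_events):
    """Merge timestamped (seconds, text) speech events and
    (seconds, gesture) gesture events into one timeline ordered by time."""
    timeline = [(t, "say", text) for t, text in speech_events]
    timeline += [(t, "gesture", g) for t, g in gesture_events]
    return sorted(timeline)

# Example: greet while waving, then point at the schedule.
demo = build_timeline(
    speech_events=[(0.0, "Hello, I am NAO!"), (2.5, "The schedule is here.")],
    gesture_events=[(0.2, "wave"), (2.7, "point")],
)
```

Executing a single ordered timeline, instead of firing speech and motion independently, is what keeps the wave aligned with the greeting.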

Deliverables:

  • Live demo: NAO introduces itself and performs a social routine.
  • Short documentation: gesture + speech coordination.
    • Required Sections:
      • Project Overview: NAO’s social role at the reception; type of “social play” routine chosen.
      • Gesture Programming: how gestures were created (Choregraphe, Webots); synchronization with speech (timing details).
      • Interaction Design: explanation of the routine/game structure (input → gesture + speech).
      • AI/Creativity Component: optional vision recognition or rule-based triggers; how creativity was applied (unique routine).
      • Demo Scenario: step-by-step interaction script for the live demo.
  • Evaluation (30 pts): Motion variety (10), synchrony (10), creativity/AI use (10).

Pepper – Multimodal Assistant (Tablet + Speech + Vision)

Objective: Show Pepper as the information assistant.
Steps:

1. Build HTML content for the chest tablet (menu, buttons, images).
2. Link tablet input ↔ speech output.
3. Integrate a vision module (face detection or object recognition).
4. Add AI (an LLM to answer questions, or a vision model for recognition).
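The tablet-speech link in step 2 reduces to a dispatch table from button ids to a spoken reply plus the next tablet page. A minimal sketch with made-up menu entries; on the robot the page would be displayed via Pepper's tablet service and the reply spoken via text-to-speech:

```python
# Hypothetical menu: button id -> (spoken reply, next tablet page).
MENU = {
    "schedule": ("Here is today's schedule.", "schedule.html"),
    "safety":   ("Please wear your badge at all times.", "safety.html"),
    "demo":     ("Starting the demo now.", "demo.html"),
}

def handle_tap(button_id):
    """Return (speech, tablet_page) for a tap; unknown ids get a fallback."""
    return MENU.get(button_id, ("Sorry, I don't know that option.", "menu.html"))
```

Keeping speech and page together in one table guarantees the tablet content and the spoken answer never go out of sync.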

Deliverables:

  • Live demo: Pepper greets a visitor, shows options on the tablet, speaks answers, and reacts to visual cues.
  • Short documentation: multimodal orchestration (tablet + speech + vision).
    • Required Sections:
      • Project Overview: Pepper’s reception role (tablet + speech + vision).
      • System Design: diagram of tablet inputs ↔ speech outputs ↔ vision module; tools used (HTML, SDKs, AI modules).
      • AI/Recognition Component: how Pepper processes visual cues OR integrates an LLM for Q&A.
      • Multimodal Orchestration: how tablet, speech, and vision are synchronized.
      • Demo Scenario: example of a user selecting a menu option and Pepper’s full response.
  • Evaluation (30 pts): Tablet-speech integration (10), vision/AI use (10), interaction design (10).

2. Individual Projects (AI & Cognition – Simulators)

Misty II – Cognitive Emotion Mapping

Objective: Show Misty as an emotional companion that understands and reacts cognitively to user states.
AI Component:

  • Sentiment analysis on input text (positive/negative/neutral).
  • Simple cognitive rule: if the same sentiment repeats 3 times → Misty changes strategy (e.g., if the user stays “sad,” Misty tries a “cheerful dance”).
  • Interaction: Misty responds with LEDs, facial display, and head movement.
  • Deliverable: demo where Misty adapts its emotional responses over time (not just once).
    • Overview & Objective: the emotional adaptation idea.
    • AI Component: sentiment analysis method; rule for cognitive adaptation (3× repetition → new behavior).
    • Implementation: how Misty’s LEDs, head, and gestures were mapped.
    • Demo Case: example of repeated “sad” inputs and Misty’s adaptive reaction.
    • Limitations
  • Learning Focus: APIs + affective computing + cognitive adaptation.
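The 3× repetition rule needs only a few lines of state. A sketch, assuming sentiment labels arrive as strings; the "mirror"/"escalate" behaviour names are illustrative:

```python
from collections import deque

class AdaptiveResponder:
    """Mirror the user's sentiment, but switch strategy (e.g. a cheerful
    dance for a persistently sad user) once the same sentiment repeats 3 times."""

    def __init__(self):
        self.history = deque(maxlen=3)   # sliding window of recent sentiments

    def react(self, sentiment):
        self.history.append(sentiment)
        if len(self.history) == 3 and len(set(self.history)) == 1:
            self.history.clear()             # start counting again after escalating
            return f"escalate_{sentiment}"   # e.g. "escalate_sad" -> cheerful dance
        return f"mirror_{sentiment}"         # default: match the user's expression
```

The returned behaviour name would then be looked up in the LED/gesture mapping, so adaptation happens over the whole interaction rather than per input.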

Furhat – LLM-Driven Social Dialogue

Objective: Build a cognitive conversational agent that adapts its style to user input.
AI Component:

  • LLM integration for open answers.
  • Cognitive layer: classify user intent (question, personal statement, request).
  • Furhat adapts gaze & expressions to the intent class (e.g., a question → direct gaze; an emotional statement → empathetic expression).
  • Interaction: natural turn-taking with simulated gaze and an expressive face.
  • Deliverable: demo where Furhat converses and adapts posture/gaze depending on the type of input.
    • Overview & Objective: adaptive social dialogue.
    • AI Component: LLM used; intent classifier (rule-based/ML).
    • Implementation: how gaze & expressions were tied to intent; dialogue examples.
    • Demo Case: example of a question vs. an emotional input.
    • Limitations
  • Learning Focus: dialogue cognition, multimodal coordination, intent recognition.
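The intent classifier in the cognitive layer can start as a rule-based stub and be swapped for a trained model later. A sketch with hypothetical intent and behaviour labels:

```python
QUESTION_WORDS = {"what", "where", "who", "how", "when", "why", "which"}

def classify_intent(utterance):
    """Crude rule-based intent classifier: question / request / statement."""
    text = utterance.lower().strip()
    words = text.split()
    if text.endswith("?") or (words and words[0] in QUESTION_WORDS):
        return "question"
    if any(cue in text for cue in ("please", "could you", "can you")):
        return "request"
    return "statement"   # default: treat as a personal statement

# Hypothetical mapping from intent class to Furhat's nonverbal behaviour.
BEHAVIOUR = {
    "question": "direct_gaze",
    "request": "attentive_nod",
    "statement": "empathetic_expression",
}
```

Separating classification from the behaviour table lets the demo swap in an ML classifier without touching the gaze/expression logic.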

NAO – Cognitive Action Planner

Objective: Program NAO to combine speech + gestures with a simple planning logic.
AI Component:

  • Symbolic planning (e.g., if a user asks NAO to “show two greetings,” NAO must plan a sequence: wave + bow).
  • Knowledge base of “actions” (wave, bow, step, clap) that can be combined.
  • Interaction: user input → NAO dynamically builds a short action plan and executes gestures with synchronized speech.
  • Deliverable: demo where the user gives 1–2 high-level requests and NAO autonomously chooses + executes a sequence.
    • Overview & Objective: action planning from a high-level request.
    • Action Knowledge Base: list of primitive actions (wave, bow, clap, etc.).
    • AI Component: simple planner logic (rules, symbolic AI).
    • Implementation: how actions are sequenced + synchronized with speech.
    • Demo Case: example: “show two greetings” → sequence executed.
    • Limitations
  • Learning Focus: planning + embodied execution, combining symbolic AI with humanoid motion.
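The planner can stay symbolic and tiny: a knowledge base tagging primitive actions by category, plus a rule that extracts a count and a category from the request. A sketch with illustrative action names and a deliberately crude parser:

```python
# Primitive actions tagged by category (illustrative knowledge base).
KNOWLEDGE_BASE = {
    "wave": "greeting", "bow": "greeting",
    "clap": "celebration", "dance": "celebration",
    "step": "locomotion",
}
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3}

def plan(request):
    """Turn e.g. 'show two greetings' into an executable action sequence."""
    words = [w.rstrip("s") for w in request.lower().split()]   # crude singularisation
    count = next((NUMBER_WORDS[w] for w in words if w in NUMBER_WORDS), 1)
    category = next((w for w in words if w in KNOWLEDGE_BASE.values()), None)
    return [a for a, c in KNOWLEDGE_BASE.items() if c == category][:count]
```

Each returned action name would then be dispatched to a pre-built gesture (Choregraphe behaviour or keyframe sequence) with a matching spoken line.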

Pepper – Multimodal Cognitive Assistant

Objective: Design Pepper as a decision-making assistant using speech + tablet.
AI Component:

  • Tablet offers multiple choices (e.g., “Show schedule,” “Explain safety rules,” “Play a demo”).
  • Pepper uses an LLM or rule-based reasoning to adapt its answer to the selected context.
  • Cognitive twist: Pepper remembers the previous choice → tailors the follow-up (if the user asks twice about safety, Pepper adds new detail).
  • Interaction: the user taps the tablet OR speaks a request → Pepper responds via speech + updated tablet content.
  • Deliverable: demo where Pepper provides context-aware, adaptive multimodal help.
    • Overview & Objective: context-aware multimodal assistant.
    • AI Component: LLM or rule-based reasoning for adaptive answers; memory: how previous choices are stored.
    • System Design: interaction diagram: tablet ↔ speech ↔ memory.
    • Demo Case: example of the user asking about safety twice → Pepper adapts.
    • Limitations
  • Learning Focus: multimodality, context memory, cognitive adaptation.
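The "remembers the previous choice" twist needs only a per-topic counter that selects progressively more detailed answers. A sketch with made-up answer text:

```python
from collections import Counter

class ContextAssistant:
    """Return progressively more detailed answers when a topic repeats."""

    ANSWERS = {   # hypothetical content, ordered from basic to detailed
        "safety": [
            "Please wear protective glasses in the workshop.",
            "Also note that emergency exits are marked in green.",
        ],
    }

    def __init__(self):
        self.asked = Counter()   # per-topic memory across the conversation

    def answer(self, topic):
        n = self.asked[topic]
        self.asked[topic] += 1
        replies = self.ANSWERS.get(topic, ["I have no information on that."])
        return replies[min(n, len(replies) - 1)]   # the last answer repeats
```

The same counter can drive the tablet: each repeated request swaps in a more detailed page alongside the more detailed spoken reply.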

Supporting Infrastructure

To run the activities for this course unit, students can work in our labs with the following technologies:

  • Several Aldebaran robots (4 NAO robots, 2 Pepper robots)
  • Furhat robot
  • Computers for programming in various languages
  • Industrial robots connected to the cloud with video perception (ABB, KUKA, Dobot Magician)
  • Autonomous mobile platforms