[This document is taken from an assignment which involved creating a mini inworld learning activity to demonstrate the use of artificial intelligence in virtual worlds teaching practice.]

Background and Context

The mini-project is part of a virtual world learning activity aimed at developing knowledge and understanding of the types, causes, communication needs and impact of dual sensory loss (DSL).

The primary audience for this activity is new employees and health and social care professionals.

The topic is taught as part of a one-day, face-to-face training session on deafblind awareness. Prior to 2014 it was taught as part of a two-day session, but due to budget and time constraints the content must now be delivered in a single day. This places significant pressure on both the trainer and the participants.

The topic uses a mix of simulation, video and discussion, and requires significant person-to-person interaction (with a limited pool of trained tutors). There are also issues with the video footage of deafblind people, including:

  • permissions, ethics and mental capacity;
  • lack of interactivity – the videos are numerous and relatively lengthy;
  • ‘old’ footage showing people who have since died or left the organisation; and
  • costs of replacing footage.

Delivering the session as blended learning, with some elements delivered via virtual worlds and artificial intelligence, would: enable the curriculum to be retained and developed; replace 2D ‘passive’ videos with 3D activities, adding interest (Kapp 2013; Kapp & O’Driscoll 2007); retain corporate knowledge (CIPD 2012); decrease costs and improve availability of training (Nebolsky et al 2004); and improve the quality of the teaching and learning experience for both the tutor and the participants by reducing time pressures.

There has also been a significant change in the nature of the participants over the last few years – new employees are used to learning in different ways: their use of technologies to learn for themselves means they are familiar with using Google and YouTube amongst others to acquire information (Brown 2000; CIPD 2012 & 2014; JISC 2007 & 2009). In part this reflects the concept of the digital native (Prensky 2001) but extends beyond that group into the wider workforce.

Virtual world activities incorporating gamification and artificial intelligence offer the opportunity to “create an emotionally compelling context for the player and build on nostalgia, curiosity, visual appeal and employees’ interest” (CIPD 2012 p13).

 

Aims and learning objectives

The overall aim of the activity is to introduce learners to the range of communication methods used by people with a DSL.

The mini-activity focuses on one communication method and cause of DSL, and it is estimated that the mini-activity interaction will take about 15 minutes.

Learning Objectives

By the end of the mini activity described below, learners will be able to:

  • Describe the communication method ‘deafblind manual alphabet’.
  • Describe one cause of acquired DSL.
  • Describe three effects of acquired DSL on daily living.

 

Description of the learning activity

The artificial intelligence (AI) element of the activity involves non-player characters (NPCs), which represent the individuals with DSL, and a heads-up display (HUD) containing an AI agent.

For this mini-activity the NPC represents an individual with acquired DSL, who uses deafblind manual for receptive communication and speech for expressive communication.

[Diagram: receptive communication is communication that an individual receives; expressive communication is communication that an individual produces.]

The participants have been asked to interview the individual with acquired DSL (the NPC) as part of a case study, on which they will give feedback in a group session held on a separate occasion.

 

Mini Case Study 

Kathy is looking forward to talking to you about her dual sensory loss.

You have arranged to meet her in her home at:

{SLURL}

When you meet her, sit on the red poseball (and accept the HUD you will be offered) to communicate with her.

You can ask Kathy about the:

  • type and cause(s) of her dual sensory loss
  • communication method(s) she uses
  • impact of her dual sensory loss on her daily life

You will discuss Kathy’s case study when you meet with the others in the group session on: {date, time and SLURL}

[Illustration: the case study notecard.]

 

The participant will encounter the NPC in a ‘natural’ setting – in this case the NPC’s own home – and will need to sit on a poseball to communicate with the NPC.

Once seated on the poseball, the participant is placed into the correct pose for communicating with the NPC – next to the NPC with their left hand supporting the left wrist of the NPC and their right hand over the left hand of the NPC ready to perform the deafblind manual alphabet.

[Image: the NPC seated on a sofa, waiting for the participant to sit on the red poseball.]
[Image: the participant seated next to the NPC, posed appropriately for the communication method which will be used – deafblind manual.]

 

When the participant sits on the poseball they are offered a self-attaching temporary HUD. The HUD attaches to the viewer screen and provides a menu of questions which the participant can ask the NPC.

[Image: the HUD as it appears on the participant’s screen inworld, with the menu of questions to choose from.]

 

When the participant chooses a question from the HUD the correct animation is played so that the participant simulates communicating that message using the deafblind manual alphabet. The message is also relayed to local chat.

The NPC replies using the appropriate communication method (simulated) and animation, and the message is also relayed to local chat.

For the wider activity, of which this mini-activity forms part, once all of the questions have been asked the participant is offered a ‘badge’ for that communication method. Collecting all the ‘badges’ – one for each simulation of a DSL and communication method – entitles the participant to an award level and prize.
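The interaction loop described above can be modelled in outline. In the virtual world this logic would be scripted in LSL inside the HUD and poseball; the Python below is purely an illustrative sketch, and every name and reply in it is an assumption rather than part of the actual activity:

```python
# Illustrative model of the HUD interview loop: the participant picks
# questions from a fixed menu, each exchange is relayed to 'local chat',
# and a badge is earned once every question has been asked.

QUESTIONS = {
    "type":   "What type of dual sensory loss do you have?",
    "cause":  "What caused your dual sensory loss?",
    "impact": "How does it affect your life?",
}

def run_interview(choices, npc_reply):
    """choices: menu keys the participant selects, in order.
    npc_reply: callable returning the NPC's scripted reply for a key.
    Returns (chat_log, badge_awarded)."""
    chat_log = []
    asked = set()
    for key in choices:
        # The participant's animation simulates deafblind manual;
        # the text of the question is relayed to local chat.
        chat_log.append(f"Participant (deafblind manual): {QUESTIONS[key]}")
        # The NPC replies using speech; the reply is also relayed.
        chat_log.append(f"Kathy (speech): {npc_reply(key)}")
        asked.add(key)
    badge_awarded = asked == set(QUESTIONS)  # all menu questions asked?
    return chat_log, badge_awarded
```

For example, `run_interview(["type", "cause", "impact"], replies)` would award the badge, while asking only one question would not.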

 

Design, rationale and pedagogy

 

Delivering learning via a virtual world is not simply a matter of moving the learning over and delivering it in the same manner: creating the right context; selecting appropriate learning outcomes; guiding rather than telling; and building in incentives, are all key to success (Kapp & O’Driscoll 2010 pp210-216).

The chosen topic for this mini-activity lends itself well to the affordances of virtual worlds, in particular the learning archetype ‘simulation’ (Chase & Scopes 2012) and using AI as a pedagogical agent. It suits a constructivist approach to learning, in which participants can use authentic tasks in context to reflect on and construct knowledge, develop a better conceptual understanding, and work collaboratively with their peers to apply knowledge to practice (Dede 1995; Dickey 2003; Jonassen et al 1995).

Delivering this topic in virtual worlds would not be practical without using an AI agent/NPC, as a tutor would otherwise be needed to play the role of the individual with DSL each time. The knowledge needed to play this role is held by only a few trainers within the organisation, who cannot provide this support for every session. Using an AI agent allows this corporate knowledge to be captured and reused in an innovative manner (CIPD 2012), aiding knowledge management.

In terms of cybergogy, the outcomes for the simulation are a mix of cognitive, social and affective/emotional learning; however, virtual world cognitive and dextrous skills will also be needed for participants to complete the activity (Chase & Scopes 2012) – see Appendix Two for a session plan extract:

  • cognitive and dextrous skills – use of virtual worlds;
  • cognitive – remembering, understanding and applying knowledge of DSL and communication methods;
  • emotional – engaging and empathising with people with DSL;
  • social – collaborating with others.

Although previous activities will have built some virtual world skills, participants will have varying levels of digital literacy and competency in using virtual worlds – their dextrous and cognitive skills in relation to virtual worlds may be at level one or two (Chase & Scopes 2012). For this reason, the activity is structured to be as simple as possible without losing the benefits of interacting with the NPC:

  • a large red poseball with hover text keeps the instructions to a minimum;
  • when the participant sits on the poseball, the HUD is automatically offered and, if accepted, attaches to the viewer screen;
  • the HUD presents a menu of questions which can be asked.

The HUD is the AI agent: it contains the programming for the interactions and triggers the matching animations through the poseball. Restricting the question choices via the HUD means a matching animation for the deafblind manual alphabet can be played for each question – the animations can be captured using motion capture (MOCAP) technology, uploaded into the virtual world, and triggered via the poseball. Although the animation cannot be sufficiently detailed to teach the deafblind manual alphabet, this approach ensures the closest practical match between the question and the communication method, maintaining fidelity and immersion by providing a realistic experience (Dede 1995; Dalgarno & Lee 2010).
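The design consequence of a fixed question menu can be shown with a small sketch: because the HUD only offers a closed set of questions, each menu entry can be mapped deterministically to a pre-captured MOCAP animation. Free-text input would break this mapping. All identifiers below are invented for illustration:

```python
# Sketch of the question-to-animation lookup that a fixed HUD menu
# makes possible. Animation asset names are assumptions.

ANIMATIONS = {
    "Q_TYPE":   "anim_dbm_type_of_loss",
    "Q_CAUSE":  "anim_dbm_cause_of_loss",
    "Q_IMPACT": "anim_dbm_impact_daily_life",
}

def animation_for(question_id):
    """Return the uploaded animation asset for a menu question.
    Raises ValueError for anything outside the menu - which is exactly
    why the HUD restricts participants to a fixed set of choices."""
    try:
        return ANIMATIONS[question_id]
    except KeyError:
        raise ValueError(f"No captured animation for {question_id!r}")
```

In production this lookup would sit in the HUD’s LSL script, with the poseball playing the returned animation on the participant’s avatar.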

The simulation in a virtual world has significant advantages in terms of cognitive overhead (Kapp 2013) for this task, as the participants do not have to imagine the context – they are ‘there’. Communicating with an individual who has DSL is complex; a simulation in a virtual world offers the benefits of experiencing all the aspects involved in as realistic a setting as is possible – creating an ‘authentic task’ (Nebolsky et al 2004). This means that the cognitive processes are as similar to those that will be performed in the workplace as possible to aid transfer of learning into practice (Kapp 2013); the learning is ‘situated’ in the context within which it will be used.

Immersion and the added incentive from the gamification element of collecting badges and awards for accomplishment (Chou 2013; Kapp 2015) should stimulate interest and increase motivation (CIPD 2012; Moreno-Ger et al 2009); however, there is a danger of decreasing motivation if participants cannot see the benefits and feel ‘lost’ (Moore & Pflugfelder 2012). For this reason it is important to scaffold the activity carefully to guide the participants through the simulation and learning, without specifying the exact learning outcomes, to increase retention and recall of knowledge – careful matching of the simulation to practice is key to this (Kapp & O’Driscoll 2010 pp212 & 214).

Relaying the messages to local text enables group participation in the activity for learners who wish to work collaboratively with others, with one participant acting as the communicator-interpreter. This simulates the real-world situation where only one person at a time can communicate with someone who has a DSL, maintaining the ‘situated’ nature of the relationship between simulation and context. It would be possible to change the awards process so that a group award could be issued for ‘badge’ collection rather than an individual award, increasing the competitive, gamification and collaborative dimensions of the activity.

For the NPC to assist with developing the ‘emotional’ or empathetic understanding of DSL, the NPC representing the individual with DSL must be sufficiently anthropomorphic and visually appealing to engage the participant (Gulz & Haake 2006; Ho & MacDorman 2010; Veletsianos 2010), but not create feelings of unease or discomfort associated with the ‘uncanny valley’ (Mori 1970). Creating this engagement with the NPC is crucial to minimise the ‘deficit model’ of disability by providing realistic positive representations of people with DSL (Herbert 2000; Pfeiffer 2002). The most applicable NPC is an avatar-based bot – an avatar operated and logged into the virtual world by a computer, rather than being controlled by a human (Linden Lab 2011). This allows for the highest level of fidelity and customisation of appearance and animation, although it has added overheads for maintaining the login.

 

Assessment and evaluation

The assessment for the activity will involve a whole-group activity, including revisiting the simulation and AI agent with the tutor present. Consolidating learning using group activities is an important aspect of the constructivist learning approach. An AI question ball will present questions to the group via local chat and record scores for individuals answering correctly – adding a further element of competition and gamification along with assessment of learning.
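The question ball’s scoring behaviour can be sketched as follows. Again, the production version would be an LSL script listening on local chat; this Python model is illustrative only, and the scoring rule (first correct answer wins the point) is an assumption:

```python
# Illustrative model of the 'AI question ball': questions are posed in
# local chat and the first participant to answer correctly scores.

def quiz(question_answers, responses):
    """question_answers: list of (question, correct_answer) pairs.
    responses: for each question, a list of (participant, answer)
    in the order they appeared in local chat.
    Returns a per-participant score dict."""
    scores = {}
    for (question, correct), attempts in zip(question_answers, responses):
        for name, answer in attempts:
            # Case-insensitive match so chat capitalisation doesn't matter.
            if answer.strip().lower() == correct.lower():
                scores[name] = scores.get(name, 0) + 1
                break  # only the first correct answer scores the point
    return scores
```

A group-award variant, as suggested for the badge collection, would simply sum these individual scores into a single team total.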

As this mini-activity replaces a taught version, evaluation will involve a quasi-experimental approach comparing the knowledge and practice of at least three groups of learners – a group following the current curriculum; a group using the blended approach; and a group following the current curriculum and then given access to the virtual world simulations as an extension activity. This should enable comparison of learning outcomes and learner experience.

 

References

Brown, J.S. (2000) ‘Growing Up Digital: How the Web Changes Work, Education, and the Ways People Learn’, Change. 32 (2), pp. 11–20.

Chase, S. & Scopes, L. (2012) ‘Cybergogy as a framework for teaching design students in virtual worlds’. In: Achten, H., Pavlíček, J., Hulín, J. & Matějovská, D. (eds) Digital Physicality: Proceedings of the 30th International Conference on Education and research in Computer Aided Architectural Design in Europe. pp. 125–133.

Chou, Y. (2013) Octalysis: Complete Gamification Framework [online]. Available from: http://www.yukaichou.com/gamification-examples/octalysis-complete-gamification-framework/#.Uqrsbycxg3M [Accessed 13 December 2013].

CIPD (2012) From e-learning to ‘gameful’ employment, CIPD.

CIPD (2014) Learning and Development 2014 [online]. CIPD. Available from: https://www.cipd.co.uk/binaries/learning-and-development_2014.pdf [Accessed 29 December 2014].

Dede, C. (1995) ‘The Evolution of Constructivist Learning Environments: Immersion in Distributed, Virtual Worlds’, Educational Technology. 35 (5) pp. 46–52.

Dalgarno, B. & Lee, M.J.W. (2010) ‘What are the learning affordances of 3-D virtual environments?’, British Journal of Educational Technology. 41 (1) pp. 10–32 [online]. Available from: http://edtc6325teamone2ndlife.pbworks.com/f/6325%2BLearning%2Baffordances%2Bof%2B3-D.pdf [Accessed 22 December 2013].

Dickey, M.D. (2003) ‘Teaching in 3D: Pedagogical Affordances and Constraints of 3D Virtual Worlds for Synchronous Distance Learning’, Distance Education. 24 (1) pp. 105–121.

Gulz, A. & Haake, M. (2006) ‘Design of animated pedagogical agents – A look at their look’, International Journal of Human-Computer Studies. 64 (4) pp. 322–339.

Herbert, J.T. (2000) ‘Simulation as a Learning Method to Facilitate Disability Awareness’, Journal of Experiential Education. 23 (1) pp. 5–11.

Ho, C. & MacDorman, K.F. (2010) ‘Revisiting the uncanny valley: Developing and validating an alternative to the Godspeed indices’, Computers in Human Behavior. 26 (6) pp. 1508–1518.

JISC (2007) In their own words: Exploring the learner’s perspective on e-learning, [online]. Available from: http://webarchive.nationalarchives.gov.uk/20140702233839/http://jisc.ac.uk/media/documents/programmes/elearningpedagogy/iowfinal.pdf [Accessed 8 January 2015].

JISC (2009) Effective Practice in a Digital Age, [online]. Available from: http://webarchive.nationalarchives.gov.uk/20140702233839/http://jisc.ac.uk/publications/programmerelated/2009/effectivepracticedigitalage.aspx [Accessed 8 January 2015].

Jonassen, D., Davidson, M., Collins, M., Campbell, J. & Haag, B.B. (1995) ‘Constructivism and computer‐mediated communication in distance education’, American Journal of Distance Education. 9 (2), pp. 7–26.

Kapp, K.M. (2013) ‘Once Again, Games Can and Do Teach!’, Learning Solutions Magazine [online]. Available from: http://www.learningsolutionsmag.com/articles/1113/once-again-games-can-and-do-teach [Accessed 7 January 2015].

Kapp, K.M. (2015) ‘2014 Reflections on Gamification for Learning’, Kapp Notes [online]. Available from: http://karlkapp.com/2014-reflections-on-gamification-for-learning/ [Accessed 7 January 2015].

Kapp, K.M. & O’Driscoll, T. (2007) ‘Escaping Flatland: Learning via the First-Person Interface’ [online]. Available from: http://wadatripp.wordpress.com/2007/10/22/escaping-flatlandlearning-via-the-first-person-interface/ [Accessed 7 January 2015].

Kapp, K.M. & O’Driscoll, T. (2010) Learning in 3D: Adding a New Dimension to Enterprise Learning and Collaboration. San Francisco, CA: Pfeiffer.

Linden Lab (2011) ‘Bot’, Second Life Wiki [online]. Available from: http://wiki.secondlife.com/wiki/Bot [Accessed 9 January 2015].

Moore, K. & Pflugfelder, E.H. (2012) ‘On being bored and lost (in virtuality)’. In: Hunsinger J. & Krotoski A. (eds.). Learning and Research in Virtual Worlds. Abingdon, Oxon: Routledge. pp. 152–156.

Moreno-Ger, P., Burgos, D. & Torrente, J. (2009) ‘Digital Games in eLearning Environments’, Simulation & Gaming. 40 (3) pp. 669–687.

Mori, M. (1970) ‘The Uncanny Valley’, Energy. 7 (4), pp. 33–35.

Nebolsky, C., Yee, N., Petrushin, V.A. & Gershman, A.V. (2004) ‘Corporate Training in Virtual Worlds’, Journal of Systemics, Cybernetics and Informatics. 2 (6) pp. 31–36.

Pfeiffer, D. (2002) ‘The Philosophical Foundations of Disability Studies’, Disability Studies Quarterly. 22 (2) pp. 3–23.

Prensky, M. (2001) ‘Digital Natives, Digital Immigrants’, On the Horizon. 9 (5) [online]. Available from: http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf [Accessed 30 January 2011].

Veletsianos, G. (2010) ‘Contextually relevant pedagogical agents: Visual appearance, stereotypes, and first impressions and their impact on learning’, Computers & Education. 55 (2) pp. 576–585.

 

Appendix One – Sample Questions for the HUD

 

Personal Information

  1. What is your name?
  2. How old are you?
  3. Where do you live?
  4. What is your job?
  5. Are you married?
  6. Do you have children?

Deafblind questions

  1. What type of dual sensory loss do you have?
  2. What caused your dual sensory loss?
  3. What form of communication do you use?
  4. How old were you when you became deafblind?
  5. What is the deafblind manual alphabet?
  6. Is it difficult to learn?
  7. Is it difficult to use?
  8. Is it slow to use?
  9. How do you talk to someone who doesn’t know the deafblind manual alphabet?
  10. How does it affect your life?

 

Appendix Two – Extract from session plan

 

Time: 15 mins

VW activity: Simulation – involving AI agent representing a person with DSL

Learning domains (Chase & Scopes 2012) are shown in brackets against each group of steps.

Navigating and operating the simulation (Cognitive lvl 1 & 2; Dextrous lvl 1 & 2):

  • Locate simulation using SLURL / TP
  • Follow hover text instructions
  • Sit on poseball
  • Read local chat for instructions
  • Accept HUD
  • Interact with HUD
  • Read local chat for responses

Identify (Cognitive lvl 1 – 5; Emotional lvl 4 & 5):

  • Type of DSL
  • Cause of DSL
  • Impact of DSL on daily life
  • Type of communication and practical requirements (i.e. positioning)

Then:

  • Collect badge
  • Detach HUD

If collaborating with others (Social lvl 3):

  • Use local / group chat
  • Agree who will question the AI / record responses
  • Discuss responses

DSL – dual sensory loss