The 29th International Conference on Computers in Education (ICCE 2021) is organized by the Asia-Pacific Society for Computers in Education (APSCE). ICCE 2021 will be held from Monday, November 22 to Friday, November 26, 2021. Pre-conference events (e.g., the Doctoral Student Consortium, workshops, and tutorials) will be conducted on the first two days, and the main conference will begin on November 24, 2021.
Accepted papers in the main conference, workshops, Early Career Workshop, Doctoral Student Consortium, and Work-in-Progress Posters will be published in the proceedings, which will be submitted to Elsevier for inclusion in Scopus. The proceedings of the main conference (excluding posters) will also be submitted to Thomson Reuters for inclusion in the Conference Proceedings Citation Index.
The keynote on the topic of a digital learning ecosystem for transforming the classroom into a learning community will present lessons learned from the Khon Kaen University (KKU) Smart Learning project, which has been implemented at more than 200 junior high schools in northeastern Thailand for five years. The presentation will cover three main topics: (1) the initiation and background of the KKU Smart Learning Academy; (2) the KKU Smart Learning model, i.e., the principles and concepts for developing students’ learning competencies that emerged from the research team’s research and development within the project; and (3) an overview of how the KKU Smart Learning model has been used in schools, and the process of building a digital learning ecosystem in classrooms that promotes students’ learning. Lastly, the conclusion will cover the lessons we have learned from this work.
Keywords: Digital learning ecosystem, Smart learning, Junior high schools, Northeast Thailand, Learning community
Entering a modern car is like entering a computer with wheels, seats, and windows. Similarly, entering a classroom is like entering a large digital system with chairs, windows, and a board. The input devices of this system are not a keyboard and mouse, but an entire classroom equipped with sensors. The output device of this system is not a screen but a set of digital elements distributed throughout the class. The output is, of course, not a simple reflection of the input: input data are processed by multiple operators that aggregate, compare, and visualize them. The resulting dashboards are used to monitor learners’ progress in order to decide when, and with whom, to intervene. They are also used to compile data from constructivist activities to support the debriefing phase, as well as to predict the completion time of an activity. Monitoring, debriefing, and timing are central processes in classroom orchestration.
Determining how, when, and whether to provide personalized support is a well-known challenge called the assistance dilemma. A core problem in solving the assistance dilemma is the need to discover when students are unproductive so that the tutor can intervene. This is particularly challenging for open-ended domains, even those that are well-structured with defined principles and goals. In this talk, I will present a set of data-driven methods to classify, predict, and prevent unproductive problem-solving steps in the well-structured, open-ended domains of logic and programming. Our approaches leverage and extend my work on the Hint Factory, a set of methods for building data-driven intelligent tutor supports using prior student solution attempts. In logic, we devised a HelpNeed classification model that uses prior student data to determine when students are likely to be unproductive and need help learning optimal problem-solving strategies. In a controlled study, we found that students who received proactive assistance in logic when we predicted HelpNeed were less likely to avoid hints during training, and produced significantly shorter, more optimal posttest solutions in less time. In a similar vein, we have devised a new data-driven method that uses student trace logs to identify struggling moments during a programming assignment and determine the appropriate time for an intervention. We validated our algorithm’s classification of struggling and progressing moments against expert ratings of whether an intervention was needed on a 20% sample of the dataset. The results show that our automatic struggle-detection method can accurately identify struggling students within less than 2 minutes of work, with 77% accuracy. We further evaluated a sample of 86 struggling moments, identifying six reasons human tutors gave for intervening, ranging from missing key components to needing confirmation and next steps.
This research provides insight into the when and why of programming interventions. Finally, we explore the range of supports that data-driven tutors can provide, from progress tracking to worked examples and encouraging messages, and their importance for compassionately promoting persistence in problem solving.
The advancement of artificial intelligence (AI) technologies has attracted the attention of researchers around the globe. However, it remains a challenging task for educational technology researchers to apply AI technologies in school settings, not to mention designing AIED (Artificial Intelligence in Education) studies. In this talk, Prof. Hwang will introduce the basic concepts and applications of AI; following that, potential research issues of AIED in the mobile era are presented. In addition, several examples are given to demonstrate how AI can be used to improve teaching and learning outcomes. Finally, several approaches to designing and implementing AIED research are demonstrated.
Judging from what we hear and read, there seem to be as many supporters as detractors of Massive Open Online Courses (MOOCs). However, MOOCs are still a growing phenomenon and rely on technology to reach out to potential learners in populated cities as well as remote rural areas. Higher education in particular has embraced this education “outlet” as a way to cater for an increasing demand for high-quality online course materials, covering the needs of professionals who would like to engage in lifelong learning, and satisfying the need to be at the forefront of educational developments and gain greater international visibility. However, currently available MOOC platforms are in many respects limited in terms of courseware design and implementation, as they are based on the template approach to software authoring. This limitation is compounded in MOOCs intended for language learning, one of the most cognitively demanding disciplines learners can be confronted with. These MOOCs are commonly referred to as Language MOOCs, or LMOOCs. Based on Prof. Gimeno’s experience in designing four upper-intermediate-level MOOCs for learners of English as a Foreign Language, which have attracted over 200,000 learners to date from 258 different countries, she will discuss the findings deriving from over 17,000 learner responses to a survey conducted longitudinally over a period of two and a half years, shedding light on some of the factors involved in learner motivation, expectations, and learning styles. Additionally, as lack of guidance and scaffolding are factors that can lead to learner drop-out, she will discuss the solutions that were implemented to overcome these deficiencies.
In line with this, as some of the more challenging areas in LMOOC design relate to providing opportunities for learners to practise speaking and writing skills, she will discuss ways of designing activities to support learner interaction and communication, considering that these must satisfy learners who come from very different educational backgrounds and cultures.
Game learning analytics is the collection and analysis of users’ gameplay interaction data to provide better evidence-based insight into the educational process with serious games. The application of game learning analytics enables a more data-driven, scientific approach to improving every step of the serious games lifecycle. This includes not only obtaining a better understanding of players’ learning and of what actually happens when a game is deployed in an educational scenario, but also enhancing the earlier steps of design and implementation and the overall quality of serious games. However, there is still a long way to go, as learning analytics in games are not yet widespread and, in fact, very few serious games have been scientifically validated. The talk will introduce game learning analytics, its possible contributions to improving the serious games lifecycle, and the requirements (e.g., data standards, ethical considerations) for its systematization and generalization in real educational settings.
To date, the digital environment has evolved rapidly around three key genres of innovation: search, social, and smart. If we take stock of where we are right now, there is a mix of potential drivers of change: technology can empower and enhance our experience and productivity, or it can frustrate and disrupt it. There is therefore both an upside and a downside to each innovation. When it comes to one of the most basic questions we all ask as children trying to make sense of things, namely “why?”, we do not yet have access to mature technologies that can scaffold it. This basic question is also fundamental to learning. This presentation will scan through some promising innovations that might inform the way we teach, learn, and research in the digital environment, and not all of these innovations are about technology. Moreover, “questions that matter” are always contextual. Within these constraints, the question of how to facilitate questioning within the digital environment is the key theme explored in this presentation.