Vibe-Coded Interactives: Cross-Subject Findings from SLS
Background
The Authoring Copilot (ACP) in the Singapore Student Learning Space (SLS) enables teachers to rapidly generate a wide range of interactive learning components, including quizzes, simulations, data visualisations, and images, directly within SLS modules. As of March 2026, teachers can also create educational games through platforms such as Padlet Arcade and Canva using vibe coding.
These interactives are designed to support dynamic, hands-on learning by encouraging students to actively manipulate content, explore scenarios, and receive immediate feedback. Such tools aim to enhance both engagement and conceptual understanding across subjects and educational levels.
Recent research on LLM-based educational interactives highlights key design principles, including pedagogical alignment, subject-specific design, adaptive scaffolding, feedback quality, accessibility, and the importance of human oversight. While these frameworks provide useful theoretical grounding, they remain largely STEM-focused and have not been extensively validated in authentic classroom LMS environments. Consequently, there is limited empirical evidence on how such interactives function across diverse subject areas in real-world teaching contexts.
With the introduction of ACP interactives in December 2025 and the growing adoption of vibe coding in educational tools, it is timely to examine both the prevalence and the pedagogical quality of these interactives in practice.
The Study
Research Questions
- RQ1: How are primary and secondary teachers using vibe-coded interactives (e.g. across subjects, levels, lesson purposes, and types such as quizzes, games, and simulations)?
- RQ2: How do teachers perceive the usefulness, quality, and effort involved in using vibe coding to generate interactives for teaching, learning, and assessment?
Based on the two research questions, the following are the most likely outcomes (findings), grounded in the study design and literature framing:
🔍 RQ1: How are teachers using vibe-coded interactives?
Likely Outcomes
1. Usage will be broad but uneven across subjects
- Strongest adoption in Science/Math (simulations, visualisations)
- Moderate in Languages (vocab games, quizzes)
- Emerging in Humanities (timeline tools, scenario-based interactives)
➡️ This aligns with the known STEM bias in existing research.
2. Interactives will cluster into a few dominant types
Most teacher-created interactives will likely fall into:
- Quizzes (MCQ / short-answer with auto-feedback)
- Simple games (drag-and-drop, matching)
- Simulations (mainly Science)
Less common: deep inquiry-based or open-ended simulations.
➡️ Suggests surface interactivity outpaces deep learning design.
3. Primary vs Secondary differences
- Primary: more gamified, visual, engagement-focused
- Secondary: more exam-oriented, concept reinforcement, practice-heavy
4. Used mainly as supplements, not core pedagogy
Interactives will be embedded within lessons (not standalone), often used for:
- Engagement starters
- Practice / consolidation
- Quick checks for understanding
➡️ Confirms integration into “broader pedagogy rather than isolation”.
5. Variation in design quality
Expect a mix of:
- Some high-quality interactives (aligned, scaffolded)
- Many “clickable but shallow” ones
➡️ Wide variation driven by teacher expertise, prompt quality, and time constraints.
🧠 RQ2: How do teachers perceive usefulness, quality, and effort?
Likely Outcomes
1. High perceived usefulness (especially for engagement)
Teachers will report:
- Increased student motivation
- Better participation
- More active learning
➡️ Strong alignment with the definition of interactives as engagement tools.
2. Mixed views on pedagogical quality
Teachers will likely say interactives are:
- Good for practice and reinforcement
- Less strong for misconception diagnosis and deep conceptual understanding
➡️ Confirms the gap between interactivity and pedagogical quality.
3. Significant human effort still required
Despite AI generation, teachers must:
- Edit prompts
- Fix inaccuracies
- Align content to the syllabus
- Improve feedback
➡️ Reinforces the “human-in-the-loop” necessity.
4. Prompting skill becomes a key factor
Teachers who iterate on their prompts and refine outputs will produce much better interactives; those who accept the first output will end up with lower quality.
5. Feedback quality is a major limitation
Common issues teachers will report:
- Feedback too generic
- Not diagnostic
- Not adaptive
➡️ Matches the literature’s emphasis on adaptive scaffolding gaps.
6. Time-effort paradox
Teachers will likely report that the initial version is faster to create, but the time needed to refine it is still significant.
➡️ Outcome: “AI saves time, but only if you know how to use it well.”
📊 Overall Synthesis (Most Important Finding)
The likely BIG conclusion:
👉 Vibe coding lowers the barrier to creating interactives, but does NOT guarantee pedagogical quality
🎯 Implications You Can Expect to Write
From both RQs combined:
1. Need for better prompting supports
- Templates
- Subject-specific examples
2. Need for design frameworks embedded in ACP
- Feedback design
- Scaffolding
- Misconception targeting
3. Need for teacher professional development
- Prompt engineering
- Evaluation of AI outputs
4. Opportunity for subject-specific interactives
- Move beyond generic tools
💡 Summarised in one sharp line:
Vibe-coded interactives are widely adopted and effective for engagement, but their pedagogical impact depends heavily on teacher expertise, prompting skill, and post-generation refinement.
Each of these three factors can be strengthened:
- Teacher expertise: practise with the resources below.
- Prompting skills: see the prompt library for educational simulations at https://sg.iwant2study.org/ospsg/index.php/ai-prompt-library/1366-prompt-library-for-educational-simulations
- Post-generation refinement: use external tools such as the following.
Please download and install one of these tools:

Claude Code by Anthropic: https://claude.com/download
Download the installer and open your simulation folder in Claude Code. It can launch a browser with Playwright, observe which events fire, generate tracking code, inject it, rerun the simulation, and iterate until storeState() sends real data.
OpenAI Codex by OpenAI: https://openai.com/codex/
Similar workflow: run the simulation, observe behaviour, generate code, verify it, fix issues, and repeat locally with full browser access.
Visual Studio Code by Microsoft: https://code.visualstudio.com/download (with generous credits)
A strong alternative development environment for testing and debugging.

AntiGravity by Google: https://antigravity.google/ (with generous credits)

Trae by ByteDance: https://www.trae.ai/download
These tools run on your own computer with full terminal and browser access. That allows them to repeatedly test and refine the xAPI integration until it is actually working — something a web-based, single-pass API call cannot reliably do for complex simulations.
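To make the iteration target concrete, here is a minimal sketch of the kind of xAPI statement a refined simulation's storeState() might send, plus the sanity checks a coding agent could rerun after each refinement pass. This is an illustration, not SLS's actual API: the buildStatement and isRealData helpers, the verb URI, and the activity ID are all assumptions chosen to follow the xAPI statement shape (actor, verb, object).

```javascript
// Sketch only: a storeState()-style payload as a minimal xAPI statement.
// Verb and activity IDs below are illustrative, not SLS endpoints.

function buildStatement(learnerEmail, verbId, activityId, result) {
  return {
    actor: { mbox: `mailto:${learnerEmail}`, objectType: "Agent" },
    verb: {
      id: verbId, // e.g. "http://adlnet.gov/expapi/verbs/completed"
      display: { "en-US": verbId.split("/").pop() },
    },
    object: { id: activityId, objectType: "Activity" },
    result: result, // e.g. { success: true, score: { scaled: 0.8 } }
    timestamp: new Date().toISOString(),
  };
}

// The check an agent can rerun on every iteration: is this "real data"?
// (Required xAPI fields present; any scaled score within [-1, 1].)
function isRealData(stmt) {
  if (!stmt.actor || !stmt.verb?.id || !stmt.object?.id) return false;
  const scaled = stmt.result?.score?.scaled;
  if (scaled !== undefined && (scaled < -1 || scaled > 1)) return false;
  return true;
}

// Example: a statement the simulation might emit on completion.
const stmt = buildStatement(
  "student@example.edu",
  "http://adlnet.gov/expapi/verbs/completed",
  "https://example.org/simulations/pendulum", // hypothetical activity ID
  { success: true, score: { scaled: 0.8 } }
);
console.log(isRealData(stmt)); // true
```

In the workflow above, the agent's job is to inject tracking code until the payload the browser actually sends passes checks like isRealData, rather than stopping at code that merely runs without errors.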