Deep Reinforcement Learning: Insights from the Rubik’s Cube
DOI:
https://doi.org/10.31926/but.ens.2024.17.66.2.1

Keywords:
games, algorithm, problem solving, reinforcement learning, cognitive experiment, deep Q-network

Abstract
This study explores the Rubik’s Cube as a medium for investigating both human cognition and artificial intelligence. Through two experiments—one involving novice human participants and the other employing a Deep Q-Network (DQN) reinforcement learning agent—the research examines how different systems learn to solve complex problems. Human participants demonstrated improvement over time, highlighting adaptability, individual strategy development, and learning without formal guidance. In contrast, the DQN agent learned to solve the cube through trial-and-error interactions within a simulated environment, guided by reward feedback and policy refinement. While the AI model achieved high solving accuracy, it required extensive computational resources and lacked generalization beyond the specific task. The findings underscore key differences and potential complementarities between human and machine intelligence. This comparison offers insights into the strengths and limitations of both approaches, reinforcing the value of hybrid systems and continued cross-disciplinary research in understanding intelligent behaviour.
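To make the training procedure the abstract refers to concrete, the sketch below shows a minimal DQN update step in PyTorch. It is an illustration only, not the study's implementation: the state encoding (STATE_DIM), the twelve-move action space, the network sizes, and all hyperparameters are assumptions chosen for readability, and the cube environment itself is left out.

import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 24 * 6   # assumed one-hot sticker encoding of a 2x2x2 cube
N_ACTIONS = 12       # assumed quarter-turn move set
GAMMA = 0.99         # discount factor

class QNet(nn.Module):
    """Small MLP mapping a flattened cube state to one Q-value per move."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

policy_net = QNet()
target_net = QNet()
target_net.load_state_dict(policy_net.state_dict())
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-4)
replay = deque(maxlen=100_000)   # experience replay buffer

def select_action(state, epsilon):
    """Epsilon-greedy choice: explore randomly, otherwise take the argmax Q."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return policy_net(state.unsqueeze(0)).argmax(dim=1).item()

def train_step(batch_size=64):
    """One gradient step on the temporal-difference loss from a replay sample."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.stack, zip(*batch))
    q = policy_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Target network stabilizes learning; reward feedback drives the update.
        target = r + GAMMA * target_net(s2).max(dim=1).values * (1.0 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In an outer loop, the agent would scramble the cube, act with a decaying epsilon, store (state, action, reward, next state, done) tuples in the replay buffer, and periodically copy the policy network's weights into the target network; this is the trial-and-error, reward-guided refinement the abstract describes.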


