The holy grail of AI has always been to enable computers to learn the way humans do. Even the most powerful AIs today, however, still rely on predefined rules, like the rules of chess or Go. Human learning, by contrast, is often messy and inferential, picking up the rules of life as we go. DeepMind has long worked toward such AIs, using games as its environment and test suite. Google's sister company focused on AI research has just revealed its latest achievement, MuZero, an AI that can master a game without being taught the rules beforehand.