(Bloomberg) -- Alphabet Inc.’s artificial intelligence division Google DeepMind is making the maze-like game platform it uses for many of its experiments available to other researchers and the general public.

DeepMind is putting the entire source code for its training environment -- which it previously called Labyrinth and has now renamed DeepMind Lab -- on the code-hosting site GitHub, the company said Monday. Anyone will be able to download the code and customize it to help train their own artificial intelligence systems. They will also be able to create new game levels for DeepMind Lab and upload them to GitHub.

The decision to make this AI testbed available to the public is further evidence of DeepMind's embrace of greater openness around its research. Last month, the company announced a partnership with Activision Blizzard Inc. to turn the popular video game StarCraft II into a testbed for any artificial intelligence researcher who wanted to try to create an AI system that could master the complex game.

Putting its Lab code on GitHub will let other researchers see whether DeepMind's own breakthroughs can be replicated, and will let those scientists measure the performance of their own AI agents on the exact same tests DeepMind uses, one of the company's co-founders, Shane Legg, said in an interview. "They can try to beat our results if they want," he said.

OpenAI, a rival research shop set up by billionaire entrepreneur Elon Musk, venture capitalist Peter Thiel and Sam Altman, a founder of Silicon Valley startup accelerator Y Combinator, made its own AI training platform, called OpenAI Gym, available to the public in April. 

OpenAI also announced Monday that it was making public an interface called Universe that lets an AI agent “use a computer like a human does: by looking at screen pixels and operating a virtual keyboard and mouse,” the company said in a statement. In short, it’s a go-between that lets an AI system learn the skills needed to play games or operate other applications. Researchers can use tools in OpenAI’s Gym to measure how these agents perform.
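The loop described above -- observe raw screen pixels, emit a keyboard or mouse event, receive a score -- can be sketched as a toy program. This is an illustrative stand-in, not the real Universe or Gym API; the environment, its target pixel, and the agent here are all invented for demonstration.

```python
# Toy sketch of a Universe-style interaction loop (hypothetical API, not
# OpenAI's): the agent only sees pixels and can only click or press keys.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Action:
    key: Optional[str] = None             # e.g. a simulated "ArrowUp" press
    click: Optional[Tuple[int, int]] = None  # simulated mouse click at (x, y)

class ToyScreenEnv:
    """Stand-in environment: exposes a raw pixel grid, accepts
    keyboard/mouse actions, and returns a reward."""
    def __init__(self, width: int = 8, height: int = 8):
        self.width, self.height = width, height
        self.target = (3, 5)  # the one bright pixel the agent must click

    def observe(self):
        # The "screen": a 2D grid of pixel intensities, target lit up.
        return [[255 if (x, y) == self.target else 0
                 for x in range(self.width)]
                for y in range(self.height)]

    def step(self, action: Action) -> float:
        # Reward 1.0 for clicking the bright pixel, else 0.0.
        return 1.0 if action.click == self.target else 0.0

def brightest_pixel_agent(pixels) -> Action:
    # "Looks at screen pixels" and clicks the brightest one.
    best = max(((x, y) for y, row in enumerate(pixels)
                for x, _ in enumerate(row)),
               key=lambda p: pixels[p[1]][p[0]])
    return Action(click=best)

env = ToyScreenEnv()
reward = env.step(brightest_pixel_agent(env.observe()))
print(reward)  # → 1.0
```

The point of the design is the narrow interface: because the agent's inputs are pixels and its outputs are generic keyboard/mouse events, the same agent code can in principle be pointed at any game or application without a per-program integration.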

Universe is rolling out with 1,000 different titles and training environments, and the company called on video game developers to give OpenAI permission to include their games in the future.

Legg denied that DeepMind’s decision to open its Lab to the public was motivated by competition with OpenAI or past criticism that Google was too proprietary about its AI breakthroughs. “The machine learning research community has always been very open," he said. "We publish 100 research papers a year and we have open-sourced a bunch of our agents before."

DeepMind is best known for having created an AI agent that beat the world’s top-ranked human player at the ancient strategy game Go. The achievement is considered a major breakthrough in computer science because Go has so many possible moves that it cannot be mastered through brute-force calculation; an AI system instead needs something akin to human intuition to play the game successfully.

While AlphaGo, the DeepMind agent that mastered Go, was not trained on its Lab platform, the company has used its Lab environment for several of its cutting-edge efforts in which a system must master perception, memory, planning and navigation. These include a recent experiment in which the company’s researchers significantly reduced the time it takes to train an AI agent to navigate through the game environment and score points by finding digital apples scattered through the maze.

Legg said that Lab is a superior AI training environment because its game world is more complex than those of other available platforms. The AI agent in Lab controls a hovering sphere from a "first-person" point of view and can look and move in any direction, which is not the case in other AI training environments, he said.