Over the last week I took Unity Learn’s ML-Agents course for a couple of reasons. The first was to explore the application and viability of Machine Learning in our VR projects, and the second was to familiarize myself with a new way of creating and developing AI. While I’m glad I saw this journey through ’til the end, the result of the experiment was marred by Unity’s disjointed, non-unified approach to Machine Learning. Lemme explain:
The Good!
![HB](https://sites.psu.edu/andykennedy/files/2021/09/Hummingbird-300x275.png)
The course itself is well taught and well explained by someone who knows what they’re doing. Adam Kelly was a good teacher and I’m interested in learning more from him.
Watching the AI grow in intelligence over millions (and I do mean millions) of runs was honestly breathtaking. Seeing the little guy go from banging his head on the floor and ceiling to consistently executing the task he was given was a joy, even if it took multiple hours to get there.
Inside of Unity, everything is fairly easy to understand. From writing the agent’s logic to attaching the trained Neural Network to the agent, as long as you stay inside Unity, everything goes pretty smoothly.
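For context, here’s roughly the shape of that Unity-side work. This is a minimal sketch assuming a recent com.unity.ml-agents package (the Unity.MLAgents namespace); the class name, fields, and reward values are illustrative, not the course’s actual Hummingbird code.

```csharp
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;

// Hypothetical agent: fly toward a target and get rewarded for reaching it.
// Requires a Behavior Parameters component with 6 vector observations and
// 2 continuous actions to match the methods below.
public class HummingbirdAgent : Agent
{
    [SerializeField] private Transform target;   // hypothetical goal object
    [SerializeField] private float moveSpeed = 5f;

    // Reset the scene at the start of each training episode.
    public override void OnEpisodeBegin()
    {
        transform.localPosition = Vector3.zero;
    }

    // Tell the neural network what the agent can "see" each step.
    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.localPosition);  // 3 floats
        sensor.AddObservation(target.localPosition);     // 3 floats
    }

    // Apply the actions the network (or the heuristic) chose.
    public override void OnActionReceived(ActionBuffers actions)
    {
        var move = new Vector3(actions.ContinuousActions[0], 0f,
                               actions.ContinuousActions[1]);
        transform.localPosition += move * moveSpeed * Time.deltaTime;

        // Reward shaping: succeed when close enough to the target.
        if (Vector3.Distance(transform.localPosition, target.localPosition) < 1f)
        {
            AddReward(1f);
            EndEpisode();
        }
    }

    // Manual controls so you can test the agent without a trained brain.
    public override void Heuristic(in ActionBuffers actionsOut)
    {
        var continuous = actionsOut.ContinuousActions;
        continuous[0] = Input.GetAxis("Horizontal");
        continuous[1] = Input.GetAxis("Vertical");
    }
}
```

You override a handful of methods (reset, observe, act) and the package handles the rest, which is exactly why the Unity side feels so smooth.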
The Bad…
![HB Profile](https://sites.psu.edu/andykennedy/files/2021/09/HumminbirdProfiler-300x199.png)
It takes quite a while to train up the brain. It took my system over 2 hours and 3 million runs across 8 agents to reach a Neural Network I felt comfortable using, whereas for an AI that simple I could write a state machine that I know would work the moment I hit Play.
If you want a new Neural Network (brain), you have to train the system all over again, so one little change means your computer becomes an AI render farm for the next couple of hours.
You need to do a lot of prep for this project. ML-Agents seems like the easiest part, at first, since all you need to do is install it through the Package Manager. But then you find out it lives on its own repo, and you learn this because you need to download the corresponding Python package from there, because the actual machine learning is done in Python. That means you need to download a Python interface (they recommend Anaconda). Then install the right version of pip via Anaconda. Then set up an Anaconda environment for Machine Learning. Then make sure your ML-Agents Python package’s version is compatible with your ML-Agents Unity package (this took me easily half an hour to debug and understand what was going on). Then, once you’ve set up a trainer configuration file so Python knows what parameters to use to train the AI, you can finally start testing!
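For reference, the whole Python-side dance boils down to something like the following. The version numbers are placeholders; the right mlagents version depends on which release of the Unity package you have installed.

```bash
# Hedged sketch of the Python-side setup; version numbers are placeholders
# and must match whatever ML-Agents release your Unity package corresponds to.
conda create -n mlagents python=3.8   # fresh Anaconda environment
conda activate mlagents
python -m pip install --upgrade pip   # get a pip version the package accepts
pip install mlagents==0.28.0          # the Python trainer, from the ML-Agents repo

# Start training against a trainer configuration file, then press Play in the editor.
mlagents-learn trainer_config.yaml --run-id Hummingbird01
```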
Now, in all fairness, Adam does a pretty good job of walking you through all these steps. But since he doesn’t explain the Python side of things in much depth, it helps to already be familiar with Python as a language and Anaconda as an interface.
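Speaking of the Python side: the trainer configuration file mentioned above is just YAML. Here’s a hedged example in the newer schema; the behavior name has to match the Behavior Name set on the agent in Unity, and the hyperparameter values below are illustrative defaults, not the course’s actual settings.

```yaml
behaviors:
  Hummingbird:              # must match the agent's Behavior Name in Unity
    trainer_type: ppo
    hyperparameters:
      batch_size: 2048
      buffer_size: 20480
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 256
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 3000000      # in the ballpark of the 3 million runs mentioned above
    time_horizon: 128
    summary_freq: 10000
```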
The Conclusion!
I’m hard pressed to find a practical use for this in our line of work. On the one hand, machine learning systems and VR thematically fit together as two pieces of futurist technology. On the other hand, any AI system I’d want to put in a VR game would be simple enough that writing a plain state machine (or a Yarn Spinner equivalent, for dialogue) would be faster and more reliable, as the sketch below shows.
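To be concrete about the alternative: this is the kind of hand-rolled state machine I mean. It’s a sketch with illustrative names, but it covers the same seek-a-target behavior, works the moment you hit Play, and retrains in zero hours.

```csharp
using UnityEngine;

// A deliberately simple state machine: a handful of states, explicit
// transitions, and no training time. Names here are illustrative.
public class SimpleBirdAI : MonoBehaviour
{
    private enum State { Idle, SeekFlower, Drink }
    private State state = State.Idle;

    [SerializeField] private Transform flower;   // hypothetical target
    [SerializeField] private float speed = 5f;

    private void Update()
    {
        switch (state)
        {
            case State.Idle:
                if (flower != null) state = State.SeekFlower;
                break;

            case State.SeekFlower:
                transform.position = Vector3.MoveTowards(
                    transform.position, flower.position, speed * Time.deltaTime);
                if (Vector3.Distance(transform.position, flower.position) < 0.5f)
                    state = State.Drink;
                break;

            case State.Drink:
                // Drink, then go idle and wait for the next target.
                state = State.Idle;
                break;
        }
    }
}
```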
I think this is a system to put on the back burner for a while. At the very least I wouldn’t recommend using it until Unity has a more unified and simpler way to integrate the Python elements into the design of this tool.