More recently, the machine learning community has recognized that access to rich, well-curated training data is a major driver of improvement in AI algorithms.
“Bias”, as some would call it, is being studied by ethicists and engineers alike; some of it can be mitigated through improved data labelling and preprocessing. In many cases, however, no available training data will suffice. What if one could capture the required data in novel states, further improving metrics in specific cases? If you cannot find the data, why not make it?
A burgeoning solution to this problem is simulation, and the synthetic data it can generate, as seen at companies such as Unity, which has partnered with DeepMind to build simulated environments for training AI agents. Uber, Ascent Robotics, and NVIDIA join the list of companies that use internal simulators to improve their algorithms.
In this talk, I delve into the use of fabricated worlds to tackle real-world problems and discuss how these methods have found their way into numerous ML applications today, from digital twins to neural network design.