So, heads down and back to refining our data pipelines for processing video from drones. We focused again on our AniML software and put together a few Proofs of Concept (POCs) for companies with some interesting use cases (thermal cameras on fixed-wing drones). Then, just for fun, Zac custom-hacked DJI video feeds so we could collect data and run demos with consumer-level drones.
Anyway, AniML proved its worth! Our software can automatically detect (“There is an animal”), classify (“the animal is a cow”) and geo-tag (“the cow is near the road at this latitude and longitude”). It’s a pretty remarkable cycle for a drone payload to achieve, and it’s starting to verge on AI. But first, we need heaps more data to be able to optimize ML models for a range of AniMLs.
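For the curious, the geo-tag step of that cycle can be sketched roughly like this. It assumes a straight-down (nadir) camera and flat ground, which is a big simplification, and every name and number here is illustrative rather than AniML’s actual code: a detection’s pixel offset from the frame centre maps to a ground offset from the drone’s GPS position.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch only — not AniML's real payload code.
@dataclass
class Detection:
    label: str   # e.g. "cow", from the classify step
    px: float    # detection centre, pixels right of frame centre
    py: float    # detection centre, pixels below frame centre

def geo_tag(det, drone_lat, drone_lon, altitude_m, metres_per_pixel_per_m):
    """Project a detection to lat/lon, assuming a nadir camera over flat ground."""
    # Ground distance per pixel scales linearly with altitude for a fixed lens.
    scale = altitude_m * metres_per_pixel_per_m   # metres per pixel
    north_m = -det.py * scale                     # image "down" is south
    east_m = det.px * scale
    # Roughly 111,320 m per degree of latitude; longitude shrinks with cos(lat).
    lat = drone_lat + north_m / 111_320
    lon = drone_lon + east_m / (111_320 * math.cos(math.radians(drone_lat)))
    return lat, lon
```

A detection dead-centre in the frame simply inherits the drone’s own coordinates; off-centre detections are nudged by the scaled pixel offset.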
Now, while we don’t have enough data to automatically detect every species on earth, we did design a really handy fix for one of the big challenges in Machine Learning:
Every ML model is totally reliant on the quality of the dataset it’s trained on. Shit in, shit out. Labelling that training data is a totally tedious process. So we came up with a fix for our own problem and created Exponential Labelling. You’ll hear more about exponential labelling this year as it matures and we roll it out from internal tool to product. But essentially, think of it as a virtual assistant that helps you label objects in images or video.
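The “virtual assistant” idea can be sketched as a model-assisted labelling loop: the current model pre-labels each image, confident proposals are accepted automatically, and a human only reviews the uncertain ones, with their corrections queued up for retraining. This is a generic sketch of that pattern, not Exponential Labelling itself; the threshold and function names are assumptions.

```python
# Hedged sketch of model-assisted labelling — names and threshold are illustrative.
CONFIDENCE_AUTO_ACCEPT = 0.9  # assumed cut-off, not AniML's actual value

def assist_label(images, model_predict, ask_human):
    """Pre-label images with a model; route only low-confidence ones to a human."""
    labels, review_queue = {}, []
    for img in images:
        proposal, confidence = model_predict(img)
        if confidence >= CONFIDENCE_AUTO_ACCEPT:
            labels[img] = proposal                  # accept the model's label
        else:
            labels[img] = ask_human(img, proposal)  # human confirms or corrects
            review_queue.append(img)                # feed back into retraining
    return labels, review_queue

# Demo with a stub model and a stub human reviewer:
labels, queue = assist_label(
    ["a.jpg", "b.jpg"],
    model_predict=lambda img: ("cow", 0.95) if img == "a.jpg" else ("deer", 0.40),
    ask_human=lambda img, proposal: "kangaroo",
)
```

The “exponential” payoff comes from the feedback loop: each round of corrections makes the next model better, so the human reviews an ever-smaller slice of the data.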
There’s still a little more development work to do, but we’re really happy with the structure and progress of how the code is coming together! At the very least it’s proven a really valuable internal tool that beats anything we could find on the market, but the whole concept is designed to be very scalable, so we’re going to see how far we can take it.