The agricultural drone market is expected to exceed US $4 billion by 2022, with imaging sensors and software projected to capture the largest share of that growth. Huge amounts of attention are focused on payloads for crop farms, yet nobody has entered the livestock sector, because existing players lack the technology to do so. Identifying and tracking live targets has typically been reserved for the budgets of the Defence industry.

Using Thermal Imagery and Computer Vision we can identify anything with a heat signature (e.g. animals), and with Machine Learning we can automatically classify each animal. Integrating these Artificial Intelligence techniques with the UAV control system lets us run passive surveys of a property, or have the UAV actively search for and track animals.
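The detection pipeline itself isn't described here, but the heat-signature step can be sketched simply: threshold a thermal frame, then group warm pixels into connected blobs whose bounding boxes are handed on to the classifier. This is a minimal illustration in pure NumPy, not SkyDog's actual implementation; the function name, threshold, and frame values are all assumptions.

```python
import numpy as np
from collections import deque

def detect_hot_blobs(frame, threshold):
    """Return bounding boxes (row0, col0, row1, col1) of 4-connected
    regions warmer than `threshold` in a 2-D thermal frame."""
    mask = frame > threshold
    visited = np.zeros_like(mask, dtype=bool)
    boxes = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # BFS flood fill over this warm region, tracking its extent
                q = deque([(r, c)])
                visited[r, c] = True
                r0 = r1 = r
                c0 = c1 = c
                while q:
                    y, x = q.popleft()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return boxes

# Synthetic 8x8 "thermal" frame: ~20 C background, one warm 2x2 blob at ~38 C
frame = np.full((8, 8), 20.0)
frame[2:4, 5:7] = 38.0
print(detect_hot_blobs(frame, threshold=30.0))  # → [(2, 5, 3, 6)]
```

A production system would use a proper connected-component routine (e.g. `scipy.ndimage.label`) and feed each crop to the Machine Learning classifier, but the thresholding logic is the same.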

This gives farmers oversight of what’s on their property, with insight typically reserved for military applications. In the same way that crop monitoring with UAVs has become critical for maximising crop yield, SkyDog will give early-adopter grazing properties an advantage. When is the right time to muster? Do your stock have access to water? How many new calves (or missing stock) do you have since yesterday? While it is undeniably more difficult technologically to monitor moving cattle than stationary crops, we believe we have a solution that will revolutionise how farmers do business.

SkyDog will give farmers, conservationists, pest controllers, and wildlife monitors tactical eyes-on-the-ground coverage of animals on their property, and oversight for stock monitoring and control. Increasing levels of autonomy leave them free to focus on the parts of the job they love, while SkyDog provides awareness of what’s around them. SkyDog does the dull work by night to let farmers make smarter decisions during the day.

We envision a drone for every farm, and a payload on every drone. But first, the mission is to turn our prototype into a robust module capable of withstanding the conditions of Australian farmland.

The prototype has been developed over the last ten months, beginning with basic computer vision and progressively integrating capabilities to the point where we can post-process footage and achieve classification.

We are currently running field trials to integrate UAV control system feedback with collected footage. First, we align GPS data with the processed footage to confirm the geotagging capability. Second, live altitude data gives the AI a better estimate of the apparent size of the objects it is looking for, which increases the accuracy of the computer vision and in turn lets the Machine Learning stage process faster.
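The geometry behind the altitude-based size prior can be sketched briefly. For a nadir-pointing camera, altitude and field of view give the ground footprint of the image, hence the ground sample distance (metres per pixel), and from that the expected pixel extent of an animal of known physical size. The camera parameters and function name below are illustrative assumptions, not SkyDog's actual sensor spec.

```python
import math

def expected_pixel_size(object_size_m, altitude_m, fov_deg, image_width_px):
    """Estimate how many pixels wide an object of known physical size
    should appear, assuming a nadir-pointing camera with a given
    horizontal field of view."""
    # Ground footprint width covered by the image at this altitude
    footprint_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    # Ground sample distance: metres of ground per image pixel
    metres_per_pixel = footprint_m / image_width_px
    return object_size_m / metres_per_pixel

# A ~2.5 m cow seen from 50 m with an assumed 57-degree FOV,
# 640-pixel-wide thermal sensor: roughly a 29-pixel-wide target
px = expected_pixel_size(2.5, 50.0, 57.0, 640)
print(round(px, 1))
```

With this prior the detector can reject blobs far outside the expected size range at the current altitude, which is where the accuracy and speed gains come from.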