My first venture into computer vision came in 2014, when I prototyped an automated conveyor-belt sorting system for a production factory in Indonesia. The goal was to sort through many tonnes of pebbles to find the best ones for a company making tiles destined for feature walls in luxury resorts around the world.

The idea was to mount one camera above the conveyor belt and one to the side, and classify each pebble by size (height, width, depth, roundness) and colour. These days I would achieve that with stereo vision, but at the time I knew no better. The prototyping process taught me the basics of image processing and computer vision, and I ended up with a solution that tracked pebbles as they moved along the belt, using RGB and HSV analysis for blob detection. In hindsight, a rudimentary approach. Still, the classification worked: with a timed delay, a servo motor further down the belt could sort the pebbles left or right according to whether they were suitable, or let them continue along the belt for further sorting.
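To illustrate the idea (this is not the original code, and the numbers are made up for the example), the colour side of that blob detection can be sketched in a few lines of plain Python: convert each pixel to HSV, keep the pixels whose hue falls in the band you care about, and take their bounding box. In practice you would use an image-processing library rather than looping over pixels, but the principle is the same.

```python
import colorsys

def detect_blob(pixels, width, height, hue_range, min_sat=0.4, min_val=0.4):
    """Return the bounding box (x0, y0, x1, y1) of the pixels whose
    colour, converted to HSV, falls inside hue_range, or None if no
    pixel matches. pixels is a flat row-major list of (r, g, b) tuples."""
    lo, hi = hue_range
    xs, ys = [], []
    for y in range(height):
        for x in range(width):
            r, g, b = pixels[y * width + x]
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            # Keep only strongly coloured, bright pixels in the hue band,
            # so the grey belt itself never matches.
            if lo <= h <= hi and s >= min_sat and v >= min_val:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

# Synthetic 10x8 "frame": a grey belt with one orange pebble.
W, H = 10, 8
frame = [(128, 128, 128)] * (W * H)
for py in range(3, 7):
    for px in range(4, 8):
        frame[py * W + px] = (255, 128, 0)  # orange, hue ~0.08

box = detect_blob(frame, W, H, hue_range=(0.02, 0.15))
```

From a bounding box like this you get position and apparent size per frame; the timed delay before the servo fires then follows from the distance between the camera and the sorter divided by the belt speed.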

There were some laughs at my fixation with pebbles for a time, but a short while later the same techniques I’d used for classifying pebbles were applied to video footage from an Autonomous Underwater Vehicle (AUV) for non-destructive monitoring of scallops on the sea floor. This helped a friend with his thesis work, saved some scallops, and confirmed the power of computer vision in my mind.

Fast forward three years, and we are prototyping an ambitious project: a UAV with thermal and night vision cameras capable of detecting, tracking and classifying anything with a heat signature. Our client wants to find animals on his property, and he wants it done by a drone that conducts an autonomous search while he waits by his car. The applications of this capability keep growing in our minds, and after this prototype we intend to develop it for the agricultural industry and for search and rescue.

Three of us were interested in the project. Zac Pullen, our UAV engineer, can automate the drone’s control system and search behaviour. Harry Hubbert brings the machine learning capability, through which the drone can recognize an animal’s species once we have a good photo of it. The remaining technology gap was advanced computer vision: how can the drone detect and track unknown animals when every animal is different and the environment is constantly changing? We needed to solve this to give the UAV the capability to home in on targets and capture night vision footage for classification. After a month of sleepless nights, I’ve taken the basic principles that started with pebbles and built a computer vision solution that is close to achieving just that.

Computer vision is a fun area to dive into. For one, it makes you marvel at the power of the human brain. Beyond that, the potential for increasing the autonomy of systems is open-ended, and increasingly critical. Part 2 of this blog will look deeper into the project.

James Keane
