UAV Computer Vision (Part 2) – introducing thermal imaging

We’ve been busy over the last few months, and the blog posts haven’t been flowing as expected, but that’s because exciting things have been taking place in the background. Our network is growing and there’s increasing interest in our developments. We’re about to start looking for the right investor to partner with to take our flagship prototype to market. SkyDog is a UAV payload for detecting, classifying and geotagging animals with any off-the-shelf UAV. More on that soon, but first I want to continue this development blog and provide context for the problem we set out to solve.

The brief was for a UAV that could detect, classify, geotag, and track (if necessary) an animal for tactical and strategic land oversight. This is for farmers, pest-controllers, hunters, and wildlife researchers who all have different interests in knowing where different animals are on their property.

We decided to develop a payload which could be attached to any off-the-shelf drone to make life as easy as possible for our customers. The payload has a FLIR Boson thermal camera to pick out animals at night, combined with a microprocessor running custom Computer Vision, Machine Learning (ML), and ‘backseat driver’ Artificial Intelligence (AI) to process the thermal video feed and enable adaptive manoeuvring through the flight controller, while providing real-time updates of the geotagged locations of animals to the operator. Nothing like it existed on the market, nor in the academic research papers we read, which meant it was going to be a particularly tricky problem to solve.

So how did we do this? Usually, with any robotics or Computer Vision (CV) problem, the first thing to do is see how other people have solved the problem, or at least find a solution to something similar. In our case, however, nobody had achieved anything close to what we were aiming for – or at least, they weren’t sharing it online. There was nothing beyond the basic CV tracking functions embedded in off-the-shelf UAVs. These functions use various CV techniques to follow a set of feature points across frames to provide limited tracking capability. Some libraries automatically identify humans, or faces at least, but there is nothing out there yet for animals.

The catch is that the strong tracking functions all rely on knowing what you want to track in the first place. For us, the whole point of the payload software is to find an animal, and then determine whether or not it is worth tracking! And all this from a UAV with changing heights, angles, and speeds, in an environment with changing temperatures and terrain, looking for animals with a range of sizes, orientations, and body temperatures. Are they in herds, or are they lying down under foliage? There were a lot of variables in play.

Which is why we needed the thermal camera to at least give us an edge. The number of variables also meant we needed an integrated approach to developing our AI: the CV would detect a possible target and feed it into the ML for classification; if it was what we were looking for, the system would geotag the confirmed target and pass it along to the user’s land management software. If required, the CV would enable the UAV to track the target (circle it for more footage, etc.) according to the user’s aims.
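To make the geotagging step concrete, here is a minimal sketch (not our flight code) of how a target sighted along the camera axis can be projected onto the ground from the UAV’s GPS fix, altitude and heading, under a flat-earth approximation. The function name and camera-tilt convention are illustrative assumptions.

```python
import math

def geotag_target(lat, lon, alt_m, heading_deg, cam_tilt_deg):
    """Estimate the ground position of a target sighted along the camera
    axis, given the UAV position and a forward-looking camera tilt.

    Flat-earth approximation: fine over the short ranges involved here.
    cam_tilt_deg is the camera angle below horizontal (90 = straight down).
    """
    # Horizontal distance to where the camera axis intersects the ground
    ground_range = alt_m / math.tan(math.radians(cam_tilt_deg))
    # Offset north/east along the UAV heading
    d_north = ground_range * math.cos(math.radians(heading_deg))
    d_east = ground_range * math.sin(math.radians(heading_deg))
    # Convert metres to degrees (approx. 111,320 m per degree of latitude)
    target_lat = lat + d_north / 111_320
    target_lon = lon + d_east / (111_320 * math.cos(math.radians(lat)))
    return target_lat, target_lon
```

In practice the projection also has to account for gimbal attitude, lens distortion and the target’s position within the frame, but this captures the core geometry of turning a detection into a map coordinate.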

I think we’ve ended up with the simplest solution (which is of course the best, but never easy to see until it’s done). And I think that’s why we’ve had success where not many others have. But I’ll leave it at that for the moment, and let’s get more technical in the next post. I might even touch on some of the tricks to take CV to the level of adaptive thresholding for a camera embedded on a UAV tracking moving targets across changing terrain.

James Keane

Estimating Sea Surface Temperatures with UAV

In this article I am going to talk about a subject that is close to my heart, perhaps because I spent a great deal of my precious time in my final year of university at the Australian Maritime College studying it: estimating Sea Surface Temperature (SST) using Unmanned Aerial Vehicles (UAVs).

[Image: Zac’s drone]

As you read this I will endeavour to give you a greater understanding of the use of thermal imaging on UAVs, and of how similar technology can be applied across a multitude of areas and disciplines, from remote sensing for environmental monitoring to animal detection and classification. With technology constantly improving, the possibilities of autonomous systems are endless and truly exciting. We at GreenRoom Robotics strive to push the boundaries, developing technology to improve the cost efficiency and ease of the tasks that make your business run smoothly. Follow our progress through these blog posts as we design, test and deploy our newest creations.

The question you might be asking yourself at this point is: why do we need a UAV to estimate the temperature of oceans, lakes and estuaries? We currently have a number of ways to do this, so why aren’t they good enough? The current methods fall into two main categories:

  • In Situ (Physical Measurements)
    – Buoys (Drifting or Fixed)
    – Ocean Gliders
    – Measurements from ships
  • Remote
    – Manned aerial flights
    – Satellites

These systems are proven and (for the remote category at least) very efficient at covering large areas in a relatively short period of time. Remote satellite systems such as Landsat 8 can survey the entire ocean surface of the world in around 16 days. This is great on a macro scale; however, if precise measurements for a particular area are required, especially near land, issues arise. The figure below illustrates the problem: because the pixels are so large, between 60 and 200 metres across, any pixel that covers both land and sea is discarded due to the difference in emissive properties between the reflective water and the less reflective land forms. As seen in the image, the river systems are left data-less because the pixel size is not small enough.

[Figure: satellite-derived ocean SST, with data gaps along coastlines and river systems]
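A toy illustration of the mixed-pixel problem described above: downsample a fine land/water mask onto the coarse sensor grid and discard any coarse pixel whose footprint covers both land and water. This is a simplified sketch of the idea, not the actual satellite processing chain.

```python
def mask_mixed_pixels(land_mask, footprint):
    """Downsample a fine land/water mask to coarse sensor pixels, flagging
    any coarse pixel whose footprint covers both land and water.

    land_mask: 2D list of 0 (water) / 1 (land) at fine resolution.
    footprint: number of fine cells per coarse pixel along each axis.
    Returns a 2D list of 'water', 'land', or None (mixed -> discarded).
    """
    rows = len(land_mask) // footprint
    cols = len(land_mask[0]) // footprint
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            cells = [land_mask[r * footprint + i][c * footprint + j]
                     for i in range(footprint) for j in range(footprint)]
            if all(v == 1 for v in cells):
                row.append('land')
            elif all(v == 0 for v in cells):
                row.append('water')
            else:
                row.append(None)  # mixed land/sea pixel: no usable SST
        out.append(row)
    return out
```

Every estuary or narrow river narrower than one coarse footprint ends up entirely as `None`, which is exactly the data gap the UAV system is designed to fill.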

To get precise measurements in these areas, other systems such as manned aerial flights can be utilised. These can provide high resolution measurements and cover significant areas; however, gaining data in this fashion is expensive and takes a significant amount of planning and ground work.

This is where UAVs can be applied to significantly reduce the cost, and increase the ease, of gaining measurements for a small area. These systems can be easily deployed at short notice to give very high resolution data. The purpose of the developed system was to provide a way to gain measurements for scientific environmental monitoring, particularly for looking into the effects of drought on lake and river systems.

The system developed is shown below. It carried a relatively small 80×60 pixel thermal imaging sensor with the capacity to take an image with a logged location. The UAV can operate for up to 50 minutes, depending on the mass of the sensor being carried. The drone system was built from scratch in every way, including the thermal unit. The airframe was made out of carbon fibre for its strength-to-weight ratio, with all of the mounts and fixtures 3D printed in ABS plastic. The thermal camera sits on an electronically stabilised gimbal to ensure the captured thermal image is taken parallel to the ground.
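As a rough back-of-envelope check on what an 80×60 sensor can resolve, the ground footprint of a single pixel follows from altitude and field of view. The 51° horizontal FOV below is an assumed figure typical of small thermal cores, not a measured spec of this system.

```python
import math

def ground_sample_distance(alt_m, fov_deg, pixels):
    """Width of one pixel on the ground for a nadir-pointing camera."""
    # Ground swath width spanned by the full field of view at this altitude
    swath = 2 * alt_m * math.tan(math.radians(fov_deg) / 2)
    return swath / pixels

# At 40 m altitude, with an assumed 51-degree horizontal FOV across 80 pixels:
gsd = ground_sample_distance(40.0, 51.0, 80)
```

That works out to roughly half a metre of ground per pixel at 40 m, a few hundred times finer than the satellite footprints discussed earlier.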


Developing a system from the ground up ensured that it could be fully customised for whatever payload was required. For this project a significantly smaller UAV could have been used; however, a large octocopter (eight-armed) system was chosen with future needs in mind, such as carrying larger and more expensive equipment like hyperspectral cameras.

The thermal imaging system was tested on a small section of the Tamar River in Launceston, Tasmania. This was done at the end of the project to prove the system had the capacity to gain high resolution data for an area; with more time, larger areas could be mapped. Below is an excerpt of the data collected and processed. The plots show both the flight path and the final collected data.

[Figure: flight path and processed temperature data from the Tamar River test]

With technology ever improving, the limits of UAV systems such as this are always being raised. Sensors are becoming smaller and processing power is constantly increasing, meaning processing can be done in flight and reviewed in real time. This is what GreenRoom is striving to do with its new animal detection and tracking system: track and plot in real time for the user, providing a truly powerful interface between the system and the user. The lessons learnt from this project are being used heavily in the development of the new system. We can’t wait to get a prototype in the air and show the world what we can achieve!

For access to the full version of the thesis visit the following link:

Estimating SST With UAV – Zac Pullen Thesis 2015

Watch data collection day!

UAV Computer Vision (Part 1) – From Pebbles

My first venture into computer vision was in 2014 when I prototyped an automated sorting system for conveyor belts for a production factory in Indonesia. The goal was to sort through many tonnes of pebbles in order to find the best ones for a company making tiles destined for feature walls in luxury resorts around the world.

The idea was to have one camera above and one camera to the side of the conveyor belt that would classify pebbles according to size (height, width, roundness, depth) and colour. These days I would achieve that with stereo vision, but at the time I knew no better. The prototyping process taught me the basics of image processing and computer vision, and I ended up with a solution that tracked pebbles as they moved along the conveyor belt using RGB and HSV analysis for blob detection. In hindsight, a rudimentary approach. Still, the classification worked, and with a timed delay a servo motor further down the conveyor belt could sort the pebbles left or right according to whether they were appropriate, or let them continue along the belt for further sorting.
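A stripped-down sketch of that kind of pipeline: threshold pixels by hue after an RGB-to-HSV conversion, then group neighbouring hits into blobs with a flood fill. This is illustrative pure Python, not the original conveyor-belt code, and the thresholds are arbitrary.

```python
import colorsys
from collections import deque

def find_blobs(image, hue_range, min_pixels=1):
    """Find connected regions whose hue falls inside hue_range.

    image: 2D list of (r, g, b) tuples, values 0-255.
    hue_range: (low, high) hue in [0, 1) as used by colorsys.
    Returns a list of blobs, each a list of (row, col) coordinates.
    """
    h_lo, h_hi = hue_range
    rows, cols = len(image), len(image[0])
    # Binary mask of pixels whose hue sits inside the target band
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            rr, gg, bb = image[r][c]
            h, s, v = colorsys.rgb_to_hsv(rr / 255, gg / 255, bb / 255)
            if h_lo <= h <= h_hi and s > 0.2:  # skip washed-out pixels
                mask[r][c] = True
    # 4-connected flood fill to group mask pixels into blobs
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blob, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(blob) >= min_pixels:
                    blobs.append(blob)
    return blobs
```

Each blob’s bounding box then gives the size estimate, and tracking blob centroids across frames (plus a timed delay) is enough to drive the sorting servo.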

There were some laughs at my fixation with pebbles for a time, but a short while later these same techniques I’d used for classifying pebbles were applied to video footage taken from an Autonomous Underwater Vehicle (AUV) to achieve non-destructive monitoring of scallops on the sea floor. This helped a friend with his thesis work, saved some scallops, and confirmed the power of computer vision in my mind.

Fast forward 3 years and we are prototyping an ambitious project – having a UAV with thermal vision and night vision cameras capable of detecting, tracking and classifying anything with a heat signature. Our client wants to be able to find animals on his property, and wants it to be done with a drone that can conduct autonomous search while he waits by his car. The applications of this capability continue to grow in our minds, and following this prototype we intend to develop the capability for the agricultural industry and for search and rescue.

Three of us were interested in the project. Zac Pullen, our UAV engineer, can automate the drone control system and search behaviour. Harry Hubbert has the Machine Learning capability (through which the drone can recognise an animal’s species once we have a good photo of it). The technology gap remaining was advanced computer vision capability: how can the drone detect and track unknown animals when every animal is different and the environment is constantly changing? We needed to solve this to give the UAV the capability to home in on targets and take night vision footage for classification. After a month of sleepless nights, I’ve taken the basic principles that started with pebbles and built a computer vision solution close to achieving just that.

Computer vision is a fun area to dive into. For one, it makes you marvel at the power of the human brain. Further, the potential for increasing the autonomy of systems is endless, open, and critical. Part 2 of this blog will look deeper into the project.

James Keane

The Lloyd’s Register Unmanned Marine Systems Code – Why does it matter?

I haven’t seen a lot of fuss out there about the recent release (14 June 17) of the Lloyd’s Register (LR) Code for Unmanned Marine Systems (UMS). I only say this because it is the first real set of rules and regulations applied to AUVs and USVs, and I think it is a pretty big deal.

For those who don’t know what LR is, or what they do, here it is as concisely as I can put it. LR is one of a handful of maritime classification societies that produce, maintain and update sets of codes, rules and guidelines for the design and construction of maritime vessels. They make these documents so that ship and submarine designers have a clear understanding of the requirements they need to meet to ensure that their design is safe to be commercially operated in its intended operating environment.

So, what is the significance of this? Well, up until now, autonomous and unmanned systems have been advancing so quickly that the laws and regulations haven’t really kept up. Everyone is coming up with revolutionary ideas to automate the shipping industry, map the ocean floors and much more, but how do you insure something that isn’t bound by some sort of assurance or classification organisation? The answer is: you can’t.

This is why I am excited and intrigued by this release from LR. It provides a starting point from which unmanned systems can truly be applied on a large scale in the maritime industry. I know a lot of you will be saying, “What about the IMO?” and, “Nothing has been certified under this code, and who actually checks it?” I completely agree: there is still a long way to go before we see driverless container ships navigating the world’s oceans, but this is a great step in the right direction and I can’t wait to see where it goes.

If you want to have a read of the new LR code, the link is attached below. It is only 39 pages and I think it is really interesting.

I hope you enjoyed this article. Feel free to contact us with your thoughts and views – we would love to hear what you think as well.

Harry Hubbert

Thermal Vision, Drones and Livestock

GreenRoom Inflight Robotics (a subsidiary of GreenRoom Robotics) are undertaking an ambitious project combining Unmanned Aerial Vehicles (UAVs), Computer Vision and Machine Learning. We are aiming to improve the way in which drones are applied across a number of industries, and for varying applications.

This seems to be a fairly common claim among modern tech companies, but I hope that by the end of this article you will agree with me, as you’ll see we’re developing some useful applications for UAVs, and robotics in general, in response to direct market need.

I shall explain what we are doing: what, but not how (because then you would probably just take the idea and do it yourself). Nevertheless, here is a sneak peek at this seemingly basic, but innovative, new program called ThermalVis.

ThermalVis is a program that can be integrated with any reasonably sized UAV (fixed wing or rotary). It takes input from the vehicle’s control system and cameras (initially thermal imaging cameras), processes this data on a microcomputer board, and then either tracks or maps the position of animals or objects in a given area, depending on what the user wants to do. But this isn’t all it does: it relays the information back to the operator, plotting what animal it has found and where it has found it. If you want to track or find one particular species, we can train it to do that as well.

If you haven’t heard of Machine Learning (ML, often discussed alongside Neural Networks and Artificial Intelligence) then it is worth having a look at. The field is not new, with the first real neural network credited to Frank Rosenblatt back in 1957, but in recent years it has exploded and is now being applied in all areas of robotics, computer science, and pattern recognition. I won’t bore you with the details, but essentially it is a computer model that roughly mimics the structure of neurons in the human brain, making it able to be ‘trained’ to recognise patterns and, in our case, images.
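Since Rosenblatt gets a mention, here is his 1957 perceptron learning rule in a few lines, trained on a toy problem (the logical AND of two inputs). Modern networks stack many such units with smoother update rules, but the idea of learning from labelled examples is the same.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt's perceptron learning rule on 2D inputs.

    samples: list of ((x1, x2), label) with label 0 or 1.
    Returns (weights, bias) after training.
    """
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy example: learn the logical AND of two inputs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After a handful of epochs the learned weights and bias separate the one positive example from the three negatives, which is all a single perceptron can ever do: draw one straight line through the input space.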

Combining ML with industry-standard computer vision (essentially, analysing images for abnormalities) has enabled us to create a system that will find an animal and then determine what species it is. This would be essentially useless handed to a person on the ground, who could probably do better with their own eyes and brain, which is where UAVs come into it.

Modern UAVs are increasing in range, stability and capability, and their industrial applications are booming. We aim to take advantage of this technology and integrate our image detection and classification programs with ROS (Robot Operating System) to make a user-friendly, robust and capable system that achieves the goals of the client, whatever they want to find, track or record. ROS provides the control software, interfaces and supporting structure we are using to combine the information from our ThermalVis program to achieve tracking, mapping and any other functions a customer may want.

Initially ThermalVis is being launched for the agricultural industry (for locating and counting livestock, checking fencelines/troughs, etc.). Simultaneously we’re prototyping for pest monitoring and wildlife tracking. We are excited and passionate about bringing the product to the world. Like I said at the start of this article, we aim to improve the way that UAVs are used in industries around the world. Watch this space.

Our first prototype will take flight toward the end of this year, integrating thermal vision and night vision for a client in Brisbane, Qld.

Harry Hubbert

Autonomous Underwater Vehicle (AUV) Localisation and Homing

Autonomous Underwater Vehicles (AUVs) are taking bigger roles in science, industry, and defence. Essentially robotic submarines, or subsea drones, AUVs lag their aerospace counterparts because the physics of the subsea medium makes robotics advancements more difficult. Subsea robotics cannot rely upon GPS for navigation, and communication is hugely limited once the drone dives, resulting in a high degree of uncertainty in navigation and manoeuvring. These factors push for increasing levels of autonomy and demand that increasing decision-making capability be embedded onboard the AUV. This means there are outsized benefits to be gained from even basic advancements in subsea Artificial Intelligence (AI).

Localisation is seen as a critical step in increasing the artificial intelligence of robotic systems. My engineering honours thesis on AUV localisation and homing to a single beacon was commissioned with two goals in mind: firstly, expedited recovery of vehicles deployed underneath ice, and secondly, advancement toward automated docking with a surface vessel (manned or unmanned).

AUVs are being deployed beneath ice for various reasons. For example, Environmental Scientists map the underside of icebergs to validate ice-thickness models generated by satellite, and NASA trains astronauts to work alongside robotic systems for over-the-horizon exploration in an environment hostile to humans. AUVs are often deployed through relatively small holes cut through the ice. Recovery after these missions can be a significant task, especially given a case where an AUV gets ‘lost’ under the ice. Having an AUV capable of locating and returning to a beacon dipped through the ice is thus an important redundancy.

Launch and recovery of AUVs from surface vessels is also still a major challenge. While there are various Launch and Recovery Systems (LARS) in operation, there is also a push for the AUV to home to an automated docking capability. This has been achieved by various groups, such as the Woods Hole Oceanographic Institution (WHOI), and work continues in this area around the world. Of course, different vessels and operational requirements call for a range of solutions. Covert defence operations, for example, will benefit from an AUV capable of localising and homing to an Unmanned Surface Vessel (USV). Long term, the goal is to have AUVs, USVs and Unmanned Aerial Vehicles (UAVs) working with swarm capability for multi-platform fleet autonomy.

We achieved AUV localisation and homing to a single beacon in Iceland in 2015, working alongside Teledyne Gavia to deploy a Gavia AUV enhanced with Mission Oriented Operating Suite Interval Programming (MOOS-IvP) as a backseat driver. MOOS-IvP-GAVIA was able to take in range reports generated from Long BaseLine (LBL) beacon ‘pings’ and generate a homing behaviour using a custom algorithm. This demonstrated an increased level of autonomy, wherein the AUV was effectively able to update its original mission plan and execute the new plan without human involvement.
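The custom algorithm itself isn’t published here, but the underlying idea of range-only, single-beacon localisation can be sketched: ranges taken from several known vehicle positions define circles around the beacon, and subtracting one circle equation from the others leaves a linear system that can be solved for the beacon position by least squares. This is an illustrative reconstruction, not the MOOS-IvP-GAVIA implementation.

```python
def locate_beacon(fixes):
    """Estimate a beacon position from range reports taken at known
    vehicle positions (range-only localisation, linearised least squares).

    fixes: list of (x, y, range) in metres, at least three entries,
    with the vehicle positions not all on one line.
    Returns (bx, by), the estimated beacon position.
    """
    x0, y0, r0 = fixes[0]
    # Subtracting the first range circle from each of the others gives a
    # linear equation a*bx + b*by = c per fix.
    rows = []
    for x, y, r in fixes[1:]:
        a = 2 * (x - x0)
        b = 2 * (y - y0)
        c = x * x - x0 * x0 + y * y - y0 * y0 - (r * r - r0 * r0)
        rows.append((a, b, c))
    # Solve the normal equations of the 2-unknown least-squares system
    saa = sum(a * a for a, b, c in rows)
    sab = sum(a * b for a, b, c in rows)
    sbb = sum(b * b for a, b, c in rows)
    sac = sum(a * c for a, b, c in rows)
    sbc = sum(b * c for a, b, c in rows)
    det = saa * sbb - sab * sab
    bx = (sbb * sac - sab * sbc) / det
    by = (saa * sbc - sab * sac) / det
    return bx, by
```

Once a beacon estimate is in hand, a homing behaviour reduces to steering toward it and refining the estimate as each new range report arrives.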

Real-time adaptive manoeuvring will be key to efficient robotics operations in the subsea environment. As AUVs become more ‘aware’ of their environments they become increasingly useful, adapting their mission plans according to pre-programmed priorities. For instance, an AUV conducting survey operations could identify an item of interest (e.g. a shipwreck or a mine) and re-task itself to conduct a closer inspection of the area before continuing its original mission plan.

Keep an eye out for the research of Fletcher Thompson who has just been awarded a fellowship from the Institute of Marine Engineering Science & Technology (IMarEST) to extend his graduate research in autonomous multi-platform fleet capabilities at the Australian Maritime College.

James Keane

The beginnings of GreenRoom

GreenRoom grows from a society of engineers who graduated from the Australian Maritime College (AMC) in Launceston, Tasmania. All met while studying Engineering honours degrees in Naval Architecture, Ocean Engineering, and Marine and Offshore Systems. The AMC draws like-minded individuals, which made four years of college a unique think-tank filled with extracurricular projects and activities. Surfing, kitesurfing, hiking and diving, winning awards at international engineering competitions, and running an autonomous technologies society all became part of daily college life.

Having graduated and gone our separate ways, the team remains committed to excellence in our own pursuits, and our network has grown to reach across the spectrum of marine engineering. Some have gone on to serve with the Royal Australian Navy, others have taken up doctoral studies, and others have moved into graduate industry positions.

GreenRoom is our framework to grow and deliver our expertise, projects and networks as we transition increasingly into the commercial realm.