Detecting wildlife in unmanned aerial systems imagery using convolutional neural networks trained with an automated feedback loop

Connor Bowley
Marshall Mattingly
Andrew Barnas
Susan N. Ellis-Felege, University of North Dakota
Travis Desell, University of North Dakota


Using automated processes to detect wildlife in uncontrolled outdoor imagery in the field of wildlife ecology is a challenging task. This is especially true in imagery provided by an Unmanned Aerial System (UAS), where the relative size of wildlife is small and visually similar to its background. This work presents an automated feedback loop which can be used to train convolutional neural networks (CNNs) with extremely unbalanced class sizes, alleviating some of these challenges. This work utilizes UAS imagery collected by the Wildlife@Home project, which has employed citizen scientists and trained experts to review the collected UAS imagery and classify it. The classified data is used as input to CNNs which seek to automatically mark which areas of the imagery contain wildlife. The output of the CNN is then passed to a blob counter which returns a population estimate for the image. The feedback loop was developed to help the CNNs better differentiate between wildlife and the visually similar background, and to deal with the disparity between the number of wildlife training images and background training images. Utilizing the feedback loop dramatically reduced population count error rates from previously published work: from +150% to −3.93% on citizen scientist data, and from +88% to +5.24% on expert data.
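The blob-counting step described above can be illustrated with a generic connected-component count over a binary mask, such as a thresholded CNN output where 1 marks pixels classified as wildlife. This is a minimal sketch of the general technique, not the authors' implementation; the function name and 4-connectivity choice are assumptions for illustration.

```python
# Illustrative blob counter: counts 4-connected regions of "wildlife" pixels
# in a 2D binary mask (e.g., a thresholded CNN output map). Each connected
# blob is treated as one animal, so the blob count is the population estimate.
# NOTE: hypothetical sketch, not the implementation from the paper.

def count_blobs(mask):
    """Count 4-connected blobs of 1s in a 2D binary mask (list of lists)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1                 # found a new, unvisited blob
                stack = [(r, c)]           # iterative DFS flood fill
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return blobs

# Two separate blobs of positive pixels -> population estimate of 2
mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
print(count_blobs(mask))  # -> 2
```

In practice a blob counter applied to real CNN output would also filter blobs by size to suppress single-pixel noise, but the core idea is the same: each connected region of positive pixels contributes one individual to the count.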