Center for Digital Agriculture

Economic Incentives for Adoption of Semi-Autonomous Robots for Weed Control in Corn-Soybean Systems

Speaker: Chengzheng Yu
Graduate Advisor: Dr. Madhu Khanna
Abstract: The rapid development of herbicide resistance in weed populations is threatening the productivity of corn and soybeans and raising demand for robotic approaches to weed control that can detect weeds under the canopy and mechanically remove them. This study is developing an integrated economic-weed ecology model to examine the economic determinants of a landowner's choice of weeding strategy and the effects of the weed environment, robot characteristics, and farm size on robot adoption decisions. We examine the profit-maximizing choices of the timing of weed management with robots, the number of robots needed, and whether to own or rent them. We show the effects of robot cost, effectiveness at weed control, and the extent of weed resistance on the profitability of robotic weeding.
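
The profit comparison at the heart of such a model can be illustrated with a toy calculation. The sketch below is our own simplification, not the authors' model; every parameter value (prices, loss fractions, robot costs) is a hypothetical placeholder.

```python
# Illustrative sketch (not the authors' model): comparing per-season profit
# of owning versus renting weeding robots for a given farm size.
# All parameter values are hypothetical placeholders.

def season_profit(revenue_per_acre, acres, weed_loss_frac, robot_cost):
    """Profit after weed-induced yield loss and robot expenses."""
    gross = revenue_per_acre * acres * (1.0 - weed_loss_frac)
    return gross - robot_cost

def own_cost(n_robots, price, lifetime_seasons, upkeep):
    """Amortized per-season cost of owning n robots."""
    return n_robots * (price / lifetime_seasons + upkeep)

def rent_cost(n_robots, daily_rate, days_needed):
    return n_robots * daily_rate * days_needed

acres = 500
# Effective robotic weed control lowers the residual yield loss.
loss_without, loss_with = 0.15, 0.03

baseline = season_profit(600, acres, loss_without, 0.0)
owning = season_profit(600, acres, loss_with, own_cost(2, 50_000, 5, 2_000))
renting = season_profit(600, acres, loss_with, rent_cost(2, 300, 20))

best = max([("no robots", baseline), ("own", owning), ("rent", renting)],
           key=lambda kv: kv[1])
print(f"most profitable strategy: {best[0]} (${best[1]:,.0f})")
```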

Multi-Sensor Fusion based Robust Row Following for Compact Agricultural Robots

Speaker: Mateus Gasparino
Graduate Advisor: Dr. Girish Chowdhary
Abstract: Navigation through crop rows is one of the most important tasks for agricultural robots. Such navigation supports many field tasks, such as data acquisition, weeding, seed planting, and spraying. Under-canopy agricultural navigation has been a challenging problem because GNSS and other positioning sensors are prone to significant errors caused by attenuation and multi-path effects from crop leaves and stems. To address this problem, we present the development, application, and experimental results of a real-time LiDAR-based and GNSS-based navigation framework. Our system fuses IMU and LiDAR measurements using an Extended Kalman Filter (EKF) running on onboard hardware, and a real-time model predictive control (MPC) framework is applied to a fully autonomous mobile robotic platform designed for in-field phenotyping in corn fields. The EKF estimates the robot states, and a nonlinear MPC is designed based on the robot kinematics. Our system is validated extensively in real-world field environments over a distance of 50.88 km on multiple robots in different field conditions across different locations. We report state-of-the-art distance-between-interventions results, showing that our system can safely navigate without intervention for 386.9 m on average.
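
To make the fusion step concrete, here is a minimal sketch of an EKF that fuses IMU/odometry propagation with LiDAR-derived row measurements. It illustrates the general technique, not the paper's exact filter; the state layout and noise values are assumptions.

```python
import numpy as np

# Minimal EKF sketch (our illustration, not the paper's exact filter):
# state x = [lateral offset from row center, heading error].
# IMU/odometry propagates the state; a LiDAR row fit provides measurements.

dt = 0.1
Q = np.diag([1e-3, 1e-3])   # process noise (assumed)
R = np.diag([1e-2, 1e-2])   # LiDAR measurement noise (assumed)

def predict(x, P, v, omega):
    """Propagate with a linearized unicycle model."""
    F = np.array([[1.0, v * dt],   # d(offset) ~ v*sin(heading)*dt
                  [0.0, 1.0]])
    x = np.array([x[0] + v * np.sin(x[1]) * dt,
                  x[1] + omega * dt])
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Fuse a LiDAR measurement z = [offset, heading]."""
    H = np.eye(2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2) * 0.1
for _ in range(100):
    x, P = predict(x, P, v=0.5, omega=0.02)        # IMU/odometry step
    x, P = update(x, P, z=np.array([0.05, 0.01]))  # LiDAR row fit
print(x)
```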

Learned Visual Navigation for Under-Canopy Agricultural Robots

Presenter: Arun Narenthiran Sivakumar
Team: Sahil Modi, Mateus Valverde Gasparino, Che Ellis, Andres Eduardo Baquero Velasquez, Girish Chowdhary*, Saurabh Gupta*
Graduate Advisor: Dr. Girish Chowdhary
Abstract: We describe a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows beneath the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, the high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges by building a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images from low-cost cameras, and model predictive control for accurate control in challenging terrain. Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average, outperforming a state-of-the-art LiDAR-based system (286 meters per intervention) in extensive field testing spanning over 25 km.
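
A toy sketch of the modular structure described above: a learned perception stub predicting heading and lateral offset from a monocular image, followed by a short-horizon predictive controller. Both components are placeholders, not CropFollow's actual models.

```python
import numpy as np

# Conceptual sketch of a modular pipeline like the one described:
# a learned perception model predicts (heading error, lateral offset)
# from a monocular image, and a short-horizon predictive controller
# picks the steering rate. Both pieces are placeholders.

def perceive(image):
    """Stand-in for the learned monocular perception model."""
    return 0.05, 0.10  # (heading error [rad], lateral offset [m]) - dummy

def mpc_steer(heading, offset, v=0.5, dt=0.1, horizon=10):
    """Grid-search a constant steering rate over a short horizon."""
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(-0.5, 0.5, 21):
        h, d, cost = heading, offset, 0.0
        for _ in range(horizon):
            d += v * np.sin(h) * dt
            h += u * dt
            cost += d**2 + 0.1 * h**2 + 0.01 * u**2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

heading, offset = perceive(image=None)
print("steering rate:", mpc_steer(heading, offset))
```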

ApproxRobotics: The Cost and Accuracy Tradeoff for Small Mobile Robots

Speaker: Hashim Sharif
Graduate Advisor: Dr. Vikram Adve
Abstract: Autonomous robots increasingly rely on high-dimensional visual information for perception, planning, and control. The deep neural network pipelines that perform inference on these robots require significant computational resources, which increases the robots' energy consumption and cost. This has been a barrier to adopting learning-based methods in cost-constrained domains such as agriculture. In our work, we show that structured pruning of neural networks can enable agricultural robots to employ lower-cost computational hardware without losing task robustness in visual navigation and visual phenotyping tasks. We expose key trade-offs between computational cost and prediction accuracy in the perception module of the autonomous navigation stack of a production agricultural robot. Our key finding is that, for the closed-loop control systems used in robots, it is often possible to relax the accuracy of the CNNs used for computer vision without significantly hurting end-to-end task outcomes. We show that computational approximations enable us to deploy a state-of-the-art vision-based autonomous navigation pipeline and a real-time video analytics task on a single resource-constrained Raspberry Pi 4. Our results show that it is possible to use learning-based control on small mobile robots with low-cost compute hardware.
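
One plausible way to realize structured pruning with PyTorch's built-in utilities is sketched below; the actual ApproxRobotics pipeline may differ, and the layer sizes and pruning fraction here are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hedged sketch of structured channel pruning with PyTorch's built-in
# utilities -- one plausible realization, not the paper's exact pipeline.

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)

# Zero out 30% of output channels (dim=0) per conv layer by L2 norm.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.ln_structured(module, name="weight", amount=0.3, n=2, dim=0)
        prune.remove(module, "weight")  # make the pruning permanent

x = torch.randn(1, 3, 224, 224)
print(model(x).shape)
# Note: pruning zeroes whole channels; a compaction/re-export step is
# needed to turn the sparsity into actual speedups on the Pi.
```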

Multi-Modal Failure Detection for Uncertain Field Environments

Speaker: Tianchen Ji
Graduate Advisor: Dr. Katie Driggs-Campbell
Abstract: To achieve high levels of autonomy, robots require the ability to detect and recover from failures and anomalies with minimal human supervision. Multi-modal sensor signals can provide more information for such anomaly detection tasks; however, fusing high-dimensional and heterogeneous sensor modalities remains a challenging problem. We propose a new deep neural network architecture, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments. The model leverages the representational power of the VAE to extract robust features from high-dimensional inputs for supervised learning tasks. We show that our method achieves superior failure identification performance compared to baseline methods and that our model learns interpretable representations. We also deploy our method effectively in the field, detecting and classifying common failure modes encountered during crop-row navigation.
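
A minimal sketch of a supervised VAE in the spirit described: a VAE encoder/decoder with a classifier head on the latent code, trained with reconstruction, KL, and classification losses. Layer sizes and loss weighting are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Toy supervised VAE: the classifier head shares the VAE's latent code.
class SVAE(nn.Module):
    def __init__(self, in_dim=128, z_dim=16, n_classes=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu, self.logvar = nn.Linear(64, z_dim), nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))
        self.clf = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), self.clf(mu), mu, logvar

model = SVAE()
x, y = torch.randn(8, 128), torch.randint(0, 4, (8,))
recon, logits, mu, logvar = model(x)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = (nn.functional.mse_loss(recon, x) + kl
        + nn.functional.cross_entropy(logits, y))
loss.backward()
print(float(loss))
```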

Structured Prediction on Top-K Assignments for Multi-Object Tracking and Segmentation

Speaker: Anwesa Choudhuri
Graduate Advisor: Dr. Girish Chowdhary
Abstract: Multi-object tracking and segmentation (MOTS) is important for understanding dynamic scenes in video data. Existing methods perform well at multi-object detection and segmentation on independent video frames, but tracking objects over time remains a challenge. Most MOTS methods formulate tracking locally, i.e., frame by frame, leading to sub-optimal results. In this work, we formulate a global method for MOTS. We first find the top-K local assignments of objects between consecutive frames and develop a structured prediction formulation to score assignment sequences. We then use dynamic programming to find the global optimizer of this formulation in polynomial time. On challenging datasets, this method achieves state-of-the-art tracking results.
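
The global formulation can be illustrated with a Viterbi-style toy: given K candidate assignments per frame with local scores, plus pairwise consistency scores between consecutive frames, dynamic programming recovers the highest-scoring sequence in polynomial time. All scores below are random placeholders.

```python
import numpy as np

# Toy sketch of the global idea, not the paper's exact scoring function.
rng = np.random.default_rng(0)
T, K = 5, 3                        # frames, candidate assignments per frame
local = rng.random((T, K))         # score of each candidate assignment
trans = rng.random((T - 1, K, K))  # consistency between consecutive choices

dp = local[0].copy()
back = np.zeros((T, K), dtype=int)
for t in range(1, T):
    # scores[i, j]: best sequence ending in candidate j via candidate i.
    scores = dp[:, None] + trans[t - 1] + local[t][None, :]
    back[t] = scores.argmax(axis=0)
    dp = scores.max(axis=0)

# Backtrack the optimal assignment sequence.
seq = [int(dp.argmax())]
for t in range(T - 1, 0, -1):
    seq.append(int(back[t][seq[-1]]))
print("best sequence:", seq[::-1], "score:", float(dp.max()))
```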

A Berry Picking Robot with a Hybrid Soft-Rigid Arm: Design and Task Space Control

Speaker: Benjamin Walt
Graduate Advisor: Dr. Girish Krishnan
Abstract: We present a hybrid rigid-soft arm and manipulator for performing tasks that require dexterity and reach in cluttered environments. Our system combines the dexterity of a variable-length soft manipulator with the rigid support capability of a hard arm. The hard arm positions the extendable soft manipulator close to the target, and the soft manipulator navigates the last few centimeters to reach and grab the target. A novel magnetic sensor and a reinforcement learning based controller are developed for end-effector position control of the robot. A compliant gripper with an IR reflectance sensing system is designed, and a k-nearest neighbor classifier is used to detect target engagement. The system is evaluated in several challenging berry-picking scenarios.
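
As a concrete illustration of the engagement detector, the sketch below classifies gripper state from IR reflectance readings with a k-nearest-neighbor classifier. The sensor layout, readings, and labels are fabricated for illustration only.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical data: each sample is a set of readings from IR reflectance
# sensors inside the gripper; labels describe what the gripper holds.
X = np.array([[0.9, 0.8, 0.9],   # berry engaged
              [0.8, 0.9, 0.8],
              [0.1, 0.2, 0.1],   # empty gripper
              [0.2, 0.1, 0.2],
              [0.5, 0.6, 0.2],   # partial grasp / leaf
              [0.6, 0.5, 0.3]])
y = ["engaged", "engaged", "empty", "empty", "partial", "partial"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([[0.85, 0.8, 0.85]]))  # -> likely "engaged"
```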

Addressing Accuracy Issues in Soft Pneumatic Arms using Visual Servoing 

Presenter: Shivani Kamtikar
Graduate Advisor: Dr. Girish Chowdhary
Abstract: Reduced availability of labor during harvest season is a major problem in agriculture, especially for crops such as strawberries and cherry tomatoes that are still harvested manually. Automating berry harvesting to address the labor shortage with traditional mechanical grippers suffers from limited reachability, low dexterity, and high manufacturing cost. As a step toward this goal, a hybrid rigid-soft manipulator that can reach the periphery as well as the interior of the plant has been developed by the Monolithic Systems Lab (MSL) and the Distributed Autonomous Systems Lab (DASLab). While rigid-arm control and dynamics are well studied, the complex dynamics of the soft arm make it difficult to develop a control policy for the hybrid arm to reach targets. Current state-of-the-art soft continuum arms (SCAs) have a mean tip-position error of 2.5 cm; successfully using an SCA or its variants for berry picking requires accurate control of the tip position. This talk focuses on visual servoing of the soft arm, with the aim of closing the accuracy gap in soft pneumatic arms using visual feedback. Our research focuses on training a neural network for relative pose estimation, which we evaluate on our soft arm to validate its performance in accurately reaching targets.
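
The closed-loop idea can be sketched as follows: a placeholder relative-pose estimator stands in for the learned network, and the controller iterates small corrections until the residual tip error is within tolerance. The gain and tolerance values are assumptions.

```python
import numpy as np

# Conceptual visual-servoing loop for the soft arm. The estimator and
# the actuation mapping below are placeholders, not the trained network.

def estimate_relative_pose(image, tip, target):
    """Stand-in for the learned relative pose estimation network."""
    return target - tip  # ideal estimate, for illustration only

tip = np.array([0.0, 0.0, 0.0])
target = np.array([0.10, 0.05, 0.20])
gain, tol = 0.5, 0.005  # meters

for step in range(50):
    err = estimate_relative_pose(None, tip, target)
    if np.linalg.norm(err) < tol:
        break
    tip = tip + gain * err  # actuation step toward the target
print(f"converged in {step} steps, residual "
      f"{np.linalg.norm(err) * 100:.2f} cm")
```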

Grasp Detection for Berry Harvesting

Presenter: Samhita Marri 
Graduate Advisor: Dr. Girish Chowdhary
Abstract: With the labor shortage in agriculture, especially in the fruit-harvesting sector, developing autonomous berry-picking robots is crucial, particularly given the growing population. While perception, planning, and control of manipulators are equally important in realizing a truly autonomous agent, in this presentation I will focus on perception, particularly grasp detection. Object detection and instance segmentation methods are the first steps toward localizing the berries on a plant, but they do not offer insight into the end-effector pose needed for a successful grasp. I will give an overview of current grasp detection methods, ranging from structured setups such as grasping rigid objects on a table to the unstructured harvesting world, along with preliminary results of our ongoing research on a self-supervised approach.
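
To illustrate why detection alone is not enough, the toy sketch below derives a naive grasp pose (centroid plus approach angle from the mask's principal axis) from an instance mask. This is a simple geometric baseline, not the self-supervised method under development.

```python
import numpy as np

# Toy instance mask for a detected berry; 1 = berry pixels.
mask = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 0, 1, 1]])
ys, xs = np.nonzero(mask)
pts = np.stack([xs, ys], axis=1).astype(float)

center = pts.mean(axis=0)
# Principal axis of the mask (via SVD) suggests a gripper orientation.
_, _, vt = np.linalg.svd(pts - center)
angle = np.arctan2(vt[0, 1], vt[0, 0])
print(f"grasp center {center}, approach angle {np.degrees(angle):.1f} deg")
```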

Leaf Angle Estimation with Sensor Fusion

Speaker: Junzhe Wu
Graduate Advisor: Dr. Girish Chowdhary
Abstract: In this leaf angle estimation method, a neural network is trained to detect target leaves; point-cloud data from a depth camera is then fused with the camera's image data to estimate the leaf rolling angle and a grasp point. The method produces consistent leaf rolling angle estimates, both quantitatively and qualitatively, across multiple corn leaves, including leaves at several different angles.
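
The fusion step might look like the following sketch: select the 3-D points that fall inside a detected leaf's mask, fit a plane by SVD, and read the leaf angle off the plane normal. The point data here is synthetic, and the pipeline details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic leaf points lying roughly on a plane tilted ~30 deg.
xy = rng.uniform(-0.1, 0.1, size=(200, 2))
z = np.tan(np.radians(30)) * xy[:, 0] + rng.normal(0, 0.002, 200)
pts = np.column_stack([xy, z])

centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centered)
normal = vt[-1]  # plane normal = direction of least variance

# Leaf inclination = angle between the plane normal and vertical.
angle = np.degrees(np.arccos(abs(normal[2]) / np.linalg.norm(normal)))
print(f"estimated leaf angle: {angle:.1f} deg")  # ~30
```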

FedSSL: A Framework for Object Detection Leveraging Semi Supervised and Federated Learning

Speaker: Garvita Allabadi
Graduate Advisor: Dr. Vikram Adve
Abstract: Machine learning techniques are being employed on farm robots for agricultural tasks such as harvesting, navigation, and weeding. For this purpose, the robots collect large amounts of image and video data. However, using this data to train machine learning models is challenging, as labeling the raw data requires significant time, effort, and expertise. In this work, we explore semi-supervised learning to leverage the unlabeled data and improve model performance. In addition, because farm robots have limited compute and communication capabilities, we also explore techniques such as federated learning to optimize the process. This work presents a generalized technique for object detection that leverages semi-supervised and federated learning, with a focus on berry detection for harvesting.
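
The federated piece can be illustrated with a FedAvg-style weighted average of per-client parameters; the arrays below are placeholders for model weights, and the dataset sizes are hypothetical.

```python
import numpy as np

# Each robot trains locally (possibly on pseudo-labeled data) and only
# model weights travel to the server, which averages them by data size.
def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg-style)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three robots with different amounts of local data.
clients = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
sizes = [100, 300, 600]
print(federated_average(clients, sizes))
```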

Robot Sound Interpretation: Representation Learning for Verbal Robot Commands

Speaker: Peixin Chang
Graduate Advisor: Dr. Katie Driggs-Campbell
Abstract: We explore the interpretation of sound for robot decision-making, inspired by human speech comprehension. Previous work uses an end-to-end deep neural network to directly interpret sound commands for visual-based decision-making, but suffers from the need for auxiliary inputs such as explicit labels for the images and sounds. We propose a method for learning a representation that associates images and sound commands with minimal supervision. Using this representation, we automatically create a reward function for reinforcement learning to learn a policy for robot control. We demonstrate our approach on three robot platforms: a TurtleBot3, a Kuka-IIWA arm, and a Kinova Gen3 robot, each of which hears a command word, identifies the associated target object, and performs precise control to navigate to the target. We empirically show that our network generalizes to different sound types and robotic tasks, and we successfully transfer the policy learned in simulation to a real-world Kinova Gen3 robot. Through this preliminary work, we hope to demonstrate a framework for natural human-robot communication that guides robot control. This interaction paradigm should facilitate effective human-robot teaming in many agricultural settings, from directing the TerraSentia to cooperatively tending crops with FarmBot.
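
As a sketch of the representation-to-reward idea, the snippet below embeds a sound command and a visual observation in a shared space and uses their cosine similarity as an automatically generated reward. The encoders and feature dimensions are untrained placeholders, not the paper's networks.

```python
import torch
import torch.nn.functional as F

# Placeholder encoders projecting each modality into a shared space.
sound_encoder = torch.nn.Linear(40, 32)   # e.g., from MFCC features (assumed)
image_encoder = torch.nn.Linear(512, 32)  # e.g., from CNN features (assumed)

def reward(sound_feats, image_feats):
    """Reward is high when the observation matches the spoken command."""
    zs = F.normalize(sound_encoder(sound_feats), dim=-1)
    zi = F.normalize(image_encoder(image_feats), dim=-1)
    return F.cosine_similarity(zs, zi, dim=-1)

print(float(reward(torch.randn(40), torch.randn(512))))
```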

An Intelligent Tutor for Amateur Farmers

Presenter: Qingyun Wang
Graduate Advisor: Dr. Heng Ji
Abstract: One major challenge for amateur farmers performing tasks is the lack of explicit multimodal structured information. For example, to assist a beginner in planting sunflowers, an ideal AI agent should be able to guide the farmer step by step, with each step described in natural language along with illustrative images of specific objects (e.g., sunflower seeds, a pot) and actions (e.g., pouring water). We propose to leverage the large-scale procedural knowledge base Wikihow to reach this goal, learning a novel schema induction and action prediction framework from Wikihow data about gardening. Existing work has exploited the text descriptions in Wikihow for event prediction. However, to create such an intelligent assistant for the agriculture domain, we need to tackle two challenges: (1) lack of domain knowledge (e.g., the difference between a seed and a seedling); and (2) lack of visual illustrations. We propose a knowledge-graph-guided neural generation system with two novel components: (1) construction and integration of domain-specific knowledge from scientific literature and domain-specific ontologies and databases; and (2) a multi-modal common semantic embedding space to represent the natural language descriptions and their corresponding images and videos in Wikihow. We will demonstrate a system that takes a task's goal as input and generates detailed instructions for each step in natural language along with illustrative images. Moreover, our approach is highly generalizable and capable of predicting procedures for unseen tasks or concepts.

smol: Sensing Soil Moisture using LoRa

Speaker: Daniel Kiv 
Graduate Advisor: Dr. Deepak Vasisht
Abstract: Technologies for environmental and agricultural monitoring are on the rise; however, the industry lacks small, low-power, and low-cost sensing devices. One such monitoring tool is the soil moisture sensor. Soil moisture has significant effects on crop health and yield, but commercial monitors are very expensive and require manual operation or constant attention. This calls for a simple, low-cost solution based on a novel technology. In this work we introduce smol: Sensing Soil Moisture using LoRa, a low-cost system that measures soil moisture using the received signal strength indicator (RSSI) and transmission power. The device is small and can be deployed in the field to collect data automatically with little manual intervention. We designed and tested our measurement-based prototype in both indoor and outdoor environments. With proper regression calibration, we show that soil moisture can be predicted from LoRa signal parameters.
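
The calibration step can be sketched as a simple regression from RSSI (at a fixed transmission power) to volumetric soil moisture; the readings below are fabricated for illustration only.

```python
import numpy as np

# Hypothetical calibration data: wetter soil attenuates the LoRa signal,
# so lower RSSI corresponds to higher volumetric water content (VWC).
rssi = np.array([-68, -72, -75, -80, -84, -90], dtype=float)  # dBm
moisture = np.array([5, 12, 18, 26, 33, 41], dtype=float)     # % VWC

slope, intercept = np.polyfit(rssi, moisture, deg=1)
predict = lambda r: slope * r + intercept
print(f"predicted moisture at -78 dBm: {predict(-78.0):.1f}% VWC")
```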

Pasture Monitoring in Simulated Environments

Speaker: Kulbir Ahluwalia 
Graduate Advisor: Dr. Girish Chowdhary
Abstract: In a world where rising hunger and malnourishment are pressing problems, robotics research in agriculture offers hope in the form of innovative solutions. Precision agriculture using Unmanned Aerial Vehicles (UAVs) to monitor crop growth is a promising approach to covering large areas in a reasonable time. Analyzing the point clouds obtained from a UAV reveals the spatial distribution of plants and the growth in different regions of the farm, helping farmers focus their resources on the regions where they are needed most. This supports getting the best possible yield from the same or fewer resources.

In this presentation, we look at a simulation-based approach to gathering point clouds of grass pastures for precision agriculture. The growth of a pasture is monitored using deployments of an autonomously controlled UAV that collects point clouds of the pasture.
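
One analysis such point clouds enable is a per-region growth map: bin the points into a ground grid and use each cell's mean canopy height as a growth proxy. The sketch below uses synthetic points and an arbitrary grid size.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic pasture point cloud: x, y in meters, z = canopy height.
pts = rng.uniform([0, 0, 0.0], [20, 20, 0.3], size=(5000, 3))

cell = 5.0  # meters per grid cell
ix = (pts[:, 0] // cell).astype(int)
iy = (pts[:, 1] // cell).astype(int)
grid = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        sel = (ix == i) & (iy == j)
        grid[i, j] = pts[sel, 2].mean() if sel.any() else np.nan
print(np.round(grid, 3))  # low-mean cells flag slow-growing regions
```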

Safety Risk Assessment of the Usage of Robots in Agriculture

Speaker: Guy Roger Aby
Graduate Advisor: Dr. Salah Issa
Abstract: The implementation of artificial intelligence (AI) based machinery and equipment (drone sprayers, driverless tractors, and robots) in agriculture seems to be associated with several advantages, such as increased productivity, reduced exposure to hazardous conditions, reduced environmental impact, and reduced labor needs. However, a systematic review of peer-reviewed published articles revealed that the hazards created by AI technology in agriculture are often overlooked. In this presentation, I will review several safety risk assessment steps, which will later be used to conduct a technical safety risk assessment of an artificial intelligence sprayer. In general, highlighting potential hazards associated with AI technologies and providing solutions to address these hazards would facilitate the adoption of AI in agriculture.