Assistant Professor, Agricultural & Biological Engineering, U of I College of ACES | Biography
Abstract Written by Breanna Larson
The application of computer vision models can aid the analysis and development of systems in agriculture. Current initiatives focus on macro-scale detection tasks, while applications at smaller scales require further deep learning work. Automated particle detection and tracking will enable more efficient agricultural processes. This study focuses on detecting and extracting small helium-filled soap bubbles through machine learning, as well as tracking the bubbles' movement with a volumetric particle tracking velocimetry (VPTV) system. Eight cameras captured images of helium-filled bubbles propelled at varying velocities and frame rates. The collected images were then segmented using ITK-SNAP to create a training dataset of five hundred images for a Mask R-CNN model. The model is evaluated using Intersection over Union scores on a randomized test dataset. The current model will require a larger dataset to reach high accuracy; however, it is showing promising results in precise detection. While currently based on an indoor simulation, this model could in the future be applied through transfer learning to sprayer or droplet detection, air quality monitoring, or microorganism detection.
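As a minimal sketch of the evaluation metric named above (not the study's actual code), Intersection over Union between a predicted and a ground-truth binary bubble mask can be computed from the two masks directly; the toy 4x4 masks below are made up for illustration:

```python
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU of two same-shape binary masks: |A ∩ B| / |A ∪ B|."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, truth).sum()
    return inter / union

# tiny example: two 2x2 blobs overlapping in one pixel
a = np.zeros((4, 4)); a[0:2, 0:2] = 1
b = np.zeros((4, 4)); b[1:3, 1:3] = 1
print(mask_iou(a, b))  # 1 intersecting pixel / 7 union pixels ≈ 0.1429
```

In practice the same function would be applied per instance mask on the held-out test images and the scores averaged.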
Associate Professor, Agricultural & Biological Engineering, UIUC College of ACES | Biography
Abstract Written by Feidra Gjata
Animal behavior can provide key insights into animal health and welfare. Currently, each finishing pig in a commercial barn is typically observed for less than one second per day. There is a need for an automated detection tool to monitor animal behavior in commercial settings as a support tool for animal caretakers. This study focuses on the automated detection of activity levels in finishing pigs in a commercial setting to understand behavioral differences influenced by various feed supplements. Our machine learning model, using YOLOv8 and single images, was first trained on labeled images from a large commercial farm video dataset. The labels were activity levels (categories 1, 2, and 3, defined by a proprietary method) assigned by a trained reviewer. The models were applied to one hour of data from each of six cameras to assess their ability to classify videos by activity index, and summaries were created for the activity distribution across the six pens. Mislabels were summarized by model label and correct label. The models exhibited varying degrees of accuracy, particularly when rating activity level 2. Preliminary results also suggest that there is a difference in activity patterns among the pens, influenced by the feed supplement, and that the model detected those differences. This study demonstrates the initial efficacy of machine learning models in automating the detection and classification of pig activity levels from single images. Future work may focus on refining the models to improve accuracy and scalability, as well as advancing the potential of automated monitoring to capture and analyze behavioral differences.
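The mislabel summary described above can be sketched as a confusion count over (correct label, model label) pairs; the labels below are invented for illustration, not taken from the study:

```python
from collections import Counter

def confusion_counts(true_labels, model_labels):
    """Count (correct, predicted) pairs across all scored frames."""
    return Counter(zip(true_labels, model_labels))

# hypothetical reviewer labels vs. model outputs for six frames
true = [1, 2, 2, 3, 2, 1]
pred = [1, 2, 3, 3, 1, 1]
cm = confusion_counts(true, pred)
print(cm[(2, 2)])            # 1: level-2 frame labeled correctly
print(cm[(2, 3)], cm[(2, 1)])  # 1 1: level-2 frames confused with 3 and 1
```

Reading off the row for level 2 shows how often that class is confused with its neighbors, matching the accuracy pattern the abstract reports.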
Abstract written by Camiya Knox
This project utilized previously collected behavior data from individually housed research pigs and focuses on using a pig's location in the pen to predict its behavior. Ultimately, this may advance automated behavior tracking using computer vision to help with real-time monitoring, enabling more timely interventions for animal care. An existing dataset was previously created in behavioral observation software (BORIS) and consisted of young pigs' behaviors with timestamps and a summary of how frequently and for how long each behavior was performed. Individual pig data were extracted from this software, including the centroid coordinates of the pigs, waterer, and feeder, as well as the behavior label for each frame. These values were used to create a cluster plot of location, behavior, and time. The data for each pig were also summarized as the total number of occurrences of each behavior label. Active behaviors such as exploring, locomotion, and nosing tended to occur toward the outer perimeter of the pen. Behaviors such as elimination were performed at the edges of the pen environment. Counts of some behaviors differed between pigs by thousands of occurrences, showing that some pigs were far more active. This analysis serves as a preliminary data exploration to identify specific behaviors to explore further for automatic prediction using only pig centroid coordinates. Future work will consist of a correlation analysis and visualizing the time sequence of behaviors.
Abstract written by Isabella Fonseca
Respiratory illness is a prevalent issue in pork production systems, resulting in economic and animal losses each year. Behavioral analysis may detect the early onset of respiratory illness in breeding herds, increasing the overall health and welfare of the animals. The purpose of this study was to determine whether behavior and posture differences could be used as a tool to identify emerging respiratory illness in pigs. In this study, a cohort of 10 pigs was split between two main treatment groups: a control group and pigs administered a lipopolysaccharide (LPS) immune challenge. The pigs were recorded continuously using video monitoring, and three unexpected mortalities were observed. The mortalities were identified postmortem as resulting from an undetected subclinical respiratory illness. An ethogram containing 29 behavior and posture labels was created to study the behavior of the sows within a specific time frame. Behavioral Observation Research Interactive Software (BORIS) was used to analyze the time frame of 10:30 a.m. to 12:00 p.m., 30-90 minutes after LPS injection. The analysis was split into three treatment groups (control, LPS + recovery, and LPS + death). The collected data for each treatment group were normalized and transformed, then compared using an ANOVA, a Kruskal-Wallis rank sum test, and a Dunn-Bonferroni post hoc test. Differences in feeding, drinking, sham chewing, and head shaking behaviors were found between the control group and the LPS treatment groups for both bout length and number of occurrences, with all four occurring less often in the LPS groups. These results serve as a preliminary analysis to inform analysis of the larger time period covered by the full dataset. Additionally, the results of this study indicate a promising future for the application of automated behavioral analysis in agricultural systems for animal monitoring.
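The Kruskal-Wallis comparison named above can be sketched as follows; the group values are toy numbers (not the study's bout data), and for brevity the statistic is computed without a tie correction:

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic over k groups (assumes no tied values).
    H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), with R_i = rank sum."""
    pooled = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # distinct values only
    n = len(pooled)
    h = 0.0
    for g in groups:
        r = sum(rank[x] for x in g)
        h += r ** 2 / len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# hypothetical bout counts per pig for the three treatment groups
control = [14, 16, 18]
lps_recover = [8, 10, 12]
lps_death = [2, 4, 6]
print(round(kruskal_h(control, lps_recover, lps_death), 3))  # 7.2
```

A large H relative to the chi-squared distribution with k-1 degrees of freedom indicates at least one group differs, after which a Dunn-Bonferroni test identifies which pairs.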
MS Student
Agricultural & Biological Engineering
PhD Student
Animal Sciences
Associate Professor, Communication, UIUC College of LAS | Biography
Abstract Written by Kennedy Shorter
This study explored public perceptions of “programmable plants” by analyzing survey responses using Sarah Tracy’s thematic analysis methodology. The research aimed to decode diverse visions and understandings of programmable plants, informing future communication and engagement strategies. Qualitative surveys were conducted to engage various communities and gather insights into their knowledge, desires, and needs regarding programmable plants. The analysis identified six key themes: Technology and Robotics, Genetic Engineering and Manipulation, Control and Purposeful Influence, General and Unfamiliar, Communication and Experimentation, and Growth and Survival. Respondents associated programmable plants with advanced technology and robotics, genetic modification, external control, and purposeful influence. Some participants expressed confusion or unfamiliarity with the concept, while others envisioned plants used for communication or experiments and altering plant growth and survival traits. The results highlighted a range of perceptions, from advanced technology to genetic manipulation to control over plant behavior. The themes also revealed the need for more precise communication and education about programmable plants. These findings provided valuable insights for guiding future research, public engagement, and outreach efforts in programmable plant technology, extending the implications of previous studies. Understanding these themes can help spark the development of prototypes and messaging to create positive opinions and enhance outreach efforts, contributing to the larger CROPPS project focused on tackling climate change and promoting sustainability.
PhD Student
Communication
Assistant Professor, Animal Sciences, UIUC College of ACES | Biography
Abstract written by Sina Yarmohammadi
Understanding piglet behavior is crucial for enhancing management conditions in the swine production industry and addressing welfare concerns. This study investigates the utilization of computer vision models, specifically YOLOv8, to automatically detect and identify piglet behavior during the critical first week after weaning. Six behaviors (At_Drinker, At_Feeder, Lying_Laterally, Lying_Sternally, Sitting, and Standing) were observed in 16 pens, analyzed from continuous video recordings over a period of seven days. The dataset contained 600 manually labeled images and was divided into a train-validation split, with 80% of the images used for training and 20% for validation. The YOLOv8 model was trained for 250 epochs. We assessed the model's performance by calculating metrics such as mean Average Precision (mAP) and F1 scores across model sizes, namely YOLOv8n-seg and YOLOv8s-seg. The YOLOv8s-seg model achieved the highest mAP of 96.9%, followed by YOLOv8n-seg at 96.3%. The overlap between the identified behavioral segments and piglets' uncertain behaviors indicates strong potential for enhancing animal welfare and management in swine production systems.
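The 80/20 train-validation split described above can be sketched in a few lines; the file names below are hypothetical, not from the study's dataset:

```python
import random

def split_dataset(paths, train_frac=0.8, seed=42):
    """Shuffle image paths reproducibly and split train/validation."""
    paths = sorted(paths)           # stable order before shuffling
    rng = random.Random(seed)       # fixed seed for reproducibility
    rng.shuffle(paths)
    cut = int(len(paths) * train_frac)
    return paths[:cut], paths[cut:]

# 600 labeled frames drawn from 16 pens (names invented for illustration)
images = [f"pen{i % 16}_frame{i}.jpg" for i in range(600)]
train, val = split_dataset(images)
print(len(train), len(val))  # 480 120
```

Keeping the split deterministic via a seed makes mAP and F1 comparisons across model sizes (e.g. YOLOv8n-seg vs. YOLOv8s-seg) fair, since both see identical validation images.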
Abstract Written by Mary Ann Fox
Effective pasture management is crucial for optimizing cattle feeding and preventing overgrazing. Traditional methods of measuring forage availability, such as cutting samples and drying, are destructive and labor-intensive. This study aims to develop a correlation model between destructive and non-destructive evaluation methods to assess forage availability in fescue (Festuca) pastures. Eight paddocks at the Beef and Sheep Research Field Laboratory were planted with fescue and evaluated at different times for varying canopy heights. Paddocks were evaluated for up to four weeks after the first cut for hay production. The methods included forage mass (FM) measured with a standard destructive method, compared to non-destructive methods: Normalized Difference Vegetation Index (NDVI) values captured at 100, 150, and 200 feet, a ruler, and plate meter (PM) readings. Regression analyses were conducted in SAS 9.4, and Pearson's correlation coefficients were calculated. Results indicated that the NDVI-FM correlations are best modeled by polynomial functions, with coefficients of determination (R²) of 0.56, 0.64, and 0.51 at 100, 150, and 200 ft, respectively. PM showed a strong relationship with FM (R² = 0.80), whereas the ruler method did not yield a significant fit. Pearson's correlation analysis confirmed significant correlations between FM and NDVI (r = 0.762, 0.761, and 0.772; p < 0.05 at 100, 150, and 200 ft heights, respectively). In conclusion, PM and NDVI, regardless of the height used, demonstrate the potential of integrating non-destructive technologies for accurate forage availability assessment.
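A minimal sketch of the reported analysis, mirroring the SAS workflow in numpy with made-up values (not the study's data): a second-order polynomial fit of forage mass on NDVI and a Pearson correlation.

```python
import numpy as np

def poly_r2(x, y, degree=2):
    """R² of a least-squares polynomial fit of y on x."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# illustrative paired readings: NDVI vs. forage mass (kg DM/ha)
ndvi = np.array([0.42, 0.55, 0.61, 0.70, 0.74, 0.81])
forage_mass = np.array([900., 1400., 1700., 2300., 2500., 3100.])

r2 = poly_r2(ndvi, forage_mass)
r = np.corrcoef(ndvi, forage_mass)[0, 1]  # Pearson's r
print(round(r2, 3), round(r, 3))
```

The same fit-and-score step would be repeated per flight height (100, 150, 200 ft) and per method (PM, ruler) to produce the R² and r values reported above.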
Abstract written by Farhiya Abdalla
Advances in integrating technology into livestock production have brought many benefits, most notably the additional support it has provided to livestock producers. The automated ability to manage and monitor the health and welfare of livestock has reduced the need for intensive, costly, and tedious labor. The ability to individually identify cows would allow producers to address their specific needs and increase overall efficiency. By using computer vision to achieve this, facial recognition technology has proven to be highly beneficial in distinguishing between animals. This technology offers a significant advantage by reducing reliance on skilled farmers to visually identify the differences between cows. This is particularly important when considering the number of cows to be managed, as it helps to minimize inequitable treatment. This project aims to develop a Holstein cow identification system using computer vision. Two-minute facial videos of 44 Holstein cows were recorded using the Intel RealSense D455e camera. The videos were categorized according to the identification of each cow, and then 500 frames (10-11 frames per animal) were extracted to be labeled as cow faces. The images were imported into Label Studio and annotated with bounding boxes around the faces. The YOLOv8 detection model was then trained to detect the facial region of the animal. The trained model was then used to crop the facial regions of 56 images per animal (2464 images for 44 Holstein cows), which were used to develop the individual animal classification model (YOLOv8 classify model). To ensure the model’s effectiveness, the data sets for both the detection and classification models were partitioned into training and validation sets in an 80/20 ratio. This partitioning allowed evaluation of the model’s performance and accuracy in detecting and distinguishing the cows. 
The results showed a mean precision of 99.5% for the detection model and an accuracy of 99.6% for individual face identification with the dairy cow classification model. These results indicate that, given a robust training pipeline, computer vision can efficiently classify individual Holstein cows' faces, allowing each cow to be tracked and monitored as an individual.
PhD Student
Agricultural & Biological Engineering
Post Doc
Animal Sciences
PhD Student
Animal Sciences
Assistant Professor, Electrical and Computer Engineering, U of I Grainger College of Engineering | Biography
Abstract Written by Cassidy Wall
Recently, there has been growing interest in autonomous agricultural robots that can help alleviate the current labor crisis and enable more sustainable and informed practices. Agricultural fields, however, are challenging operating domains for robots due to their unpredictable environments and uncertain terrain. Remote teleoperation of these robots can help compensate for navigation failures. However, connectivity in the field is highly unreliable due to limited coverage and blockage from plant growth. This project aims to develop a user interface that alerts users when they are navigating a robot toward a dead zone in a corn field, where communication disruption and delays in the live video feed can occur. Dead zones pose significant risks by causing delays in the video feed that users rely on to monitor and control the robot effectively. Without a proper live feed, there is an increased risk of the robot being harmed, of the user missing critical observations while navigating the field, and of a notable drop in robot utility. To address this issue, our interface integrates a live video feed with predictive algorithms to identify potential dead zones and fill in visual delays before they disrupt operation. By providing alerts, our interface helps users stay in tune with what is happening in the field and offers them the choice to avoid the area, ensuring a smoother experience and better robot performance.
Professor, Integrative Biology, UIUC College of LAS | Biography
Abstract written by Charlotte Klurfeld
A substantial majority of federal crop indemnity cases are caused by climatic disasters, a proportion that is predicted to increase as climate change exacerbates extreme weather (Swain, 2020). It will become crucial to accurately predict the risk and liability facing each agricultural stakeholder based on a standard and universal metric. Using over two decades of data from the NASA Gravity Recovery and Climate Experiment (GRACE) remote sensing satellite project, this project created several types of regression and machine learning models assessing the relationship between surface, root zone, and groundwater moisture percentiles and climatic crop indemnity claims across the contiguous United States. The regression models also incorporated county-level temperature and precipitation data as additional variables. Standard multivariable linear regressions, quadratic regressions, and Support Vector Regressions were conducted, with the regression combining climate and soil data performing most accurately. This analysis is a first step toward understanding whether specific edaphic factors are associated with increased crop yield loss from extreme climate events.
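A hedged sketch of the multivariable linear regression step (the numbers are toy values, not GRACE or indemnity data): ordinary least squares with an intercept via numpy.

```python
import numpy as np

def fit_ols(X, y):
    """Return OLS coefficients (intercept first) via least squares."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# illustrative columns: soil-moisture percentile, mean temp (°C), precip (mm)
X = np.array([[10., 22., 40.],
              [35., 25., 80.],
              [60., 27., 120.],
              [80., 30., 60.],
              [95., 33., 20.]])
y = np.array([5.0, 3.2, 2.1, 2.8, 6.0])  # indemnity claims, arbitrary units

beta = fit_ols(X, y)
pred = np.column_stack([np.ones(len(X)), X]) @ beta
print(np.round(beta, 3))
```

The quadratic variant adds squared columns to X, and the Support Vector Regression swaps the solver while keeping the same feature matrix.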
Assistant Professor, Agricultural & Biological Engineering, U of I Grainger College of Engineering | Biography
Abstract Written by Kyeaira Faustin
A woodchip bioreactor is a trench dug at the edge of a field and filled with woodchips, aiming to reduce the amount of NO3 discharged downstream. The woodchips provide a carbon source, the NO3 serves as an energy source, and microorganisms live in the water; together, they transform NO3 into N2 through denitrification. Nitrate sensors are used to collect the data needed because manual sampling is labor-intensive and costly, and sensors are more accurate in assessing the current state of NO3 leaving the field and entering water bodies. The objective of this study is to compare nitrate concentrations from manually collected samples with measurements collected by the sensors.
Assistant Professor, Agricultural & Biological Engineering, U of I Grainger College of Engineering | Biography
Abstract written by Asher Sprigler
The extended incubation of male and infertile eggs decreases the hatchability rate for layer hens and leads to a significant loss of energy through the incubation process: around one week in the case of infertile eggs and up to twenty-one days in the case of male eggs. The culling of male layer chicks, as they are deemed economically unviable, also raises a significant global welfare concern. This study tests the feasibility of using non-invasive, non-destructive hyperspectral imaging in the 400-1000 nm range to classify both the sex and the fertility of unhatched chicks to avoid these losses. Edge detection was used to segment the eggs from their backgrounds; after applying spectral pre-processing methods such as Standard Normal Variate, Multiplicative Scatter Correction, and Savitzky-Golay filtering, several standard classifiers such as CatBoost, XGBoost, Random Forest, and SVM were used to classify both targets. Following these results, a variety of feature selection methods, including Lasso, PCA, and Sequential Feature Selection, were used to identify the important spectral features. Accuracies of 83% and 98% were achieved for sex and fertility prediction, respectively. The fertility results demonstrated that using fewer spectral bands (30) could both increase the effectiveness of the model and decrease the computational power required, with all fertile eggs classified correctly and only one infertile egg incorrectly categorized as fertile. Sex classification still requires the full spectral range to detect its more complex patterns. Overall, this reveals a new level of proficiency for hyperspectral imaging in classifying both attributes prior to incubation.
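As a sketch of one of the pre-processing steps named above, Standard Normal Variate (SNV) centers and scales each spectrum by its own mean and standard deviation; the two 5-band "spectra" below are toy values, not real hyperspectral data:

```python
import numpy as np

def snv(spectra: np.ndarray) -> np.ndarray:
    """Row-wise SNV: (x - mean) / std, computed per spectrum."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# two toy spectra spanning the 400-1000 nm range
raw = np.array([[0.2, 0.4, 0.6, 0.8, 1.0],
                [1.0, 1.1, 1.2, 1.3, 1.4]])
corrected = snv(raw)
print(np.round(corrected.mean(axis=1), 6))  # ~0 for each spectrum
```

Because SNV normalizes each egg's spectrum independently, it removes per-sample scatter effects before the classifiers (CatBoost, XGBoost, etc.) see the data.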
Research Scientist, Center for Digital Agriculture, National Center for Supercomputing Applications | Biography
Written by Manolis Huerta-Stylianou
Traditionally, image-based learning models have relied on real data for training. This poses several issues, however: using real data requires processing large volumes of information and is often infeasible because collecting and labeling such data is labor-intensive and time-consuming. In agriculture in particular, this is a challenge when working with large numbers of plants. Bridging the gap between simulated and real environments has been shown to address the challenges of training on real data while still producing regularized results. Recent contributions on the sim2real gap have shown that, using a shared latent space, variational autoencoders can generate segmentation maps based on simulated data without requiring data from real environments. Using a VQ-VAE architecture with a latent space shared between real and simulated tomato plants, we provide an application that bridges the sim2real gap in agricultural tasks. Without overfitting, our model uses simulated data to successfully produce segmentations of real plant images.
Abstract written by Sona Javadi
This research focuses on evaluating 2D point tracking models to enhance autonomous navigation in under-canopy agricultural environments. The primary goal is to address the inherent challenges of navigating through tightly spaced crop rows where conventional navigation systems struggle due to issues like reduced RTK-GPS accuracy and noisy LiDAR measurements. We developed various 2D point tracking models to determine their potential effectiveness in detecting and following crop rows. These models measure the distance from the rows and the angle relative to the rows, using these parameters to guide the robot’s movement. By identifying optimal 2D points and connecting them to form a reference path, the models aim to enable precise and reliable navigation. Future work will involve testing these models in real-world conditions to validate their performance and determine their impact on improving the accuracy and reliability of under-canopy navigation systems. This research contributes to the ongoing development of efficient autonomous systems and explores new possibilities for precision agriculture and better management of crop fields.
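The row-relative geometry described above (distance from the rows and angle relative to them) can be sketched with a least-squares line fit through the tracked 2D points; the point coordinates below are hypothetical, expressed in the robot's frame with the robot at the origin and x pointing forward:

```python
import math

def row_offset_and_angle(points):
    """Fit y = m*x + b through tracked row points and return
    (perpendicular distance to the row, heading error in radians)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx ** 2)   # slope
    b = (sy - m * sx) / n                            # intercept
    distance = abs(b) / math.sqrt(1 + m ** 2)        # point-to-line distance
    angle = math.atan(m)                             # angle vs. robot heading
    return distance, angle

# toy track: a row 0.4 m to the left, parallel to the robot's heading
pts = [(0.5, 0.4), (1.0, 0.4), (1.5, 0.4), (2.0, 0.4)]
d, a = row_offset_and_angle(pts)
print(round(d, 3), round(a, 3))  # 0.4 0.0
```

These two quantities are exactly the control inputs the abstract describes: the lateral offset keeps the robot centered between rows, and the angle corrects its heading.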
PhD Student
Electrical & Computer Engineering
PhD Student
Computer Science
Assistant Professor, Crop Sciences, UIUC College of ACES | Biography
Abstract Written by Attia Dean
Genomic safe harbors are areas in the genome that are ideal sites for transgene insertion, where the inserted gene will be maximally expressed. For a gene to be expressed, it needs to be in an area that is accessible so it can be transcribed into RNA and translated into protein. Areas at the ends of chromosomes, away from centromeres, tend to be better because they are more easily accessible. Open chromatin, which lacks methyl groups and often includes acetyl groups, is also more accessible. Lastly, genomic safe harbors need to lie outside of genes and their promoters, terminators, and other regulatory regions; any insertion into these areas interferes with native gene function and can massively decrease fitness.
CRISPR COPIES is a program that identifies genomic safe harbors. To identify genomic safe harbors in tomato, we ran CRISPR COPIES on the IGB Biocluster, a supercomputer on campus. This provided a list of several hundred potential genomic safe harbors that we are narrowing down to the best sites for gene insertion. Sites in ideal locations with low methylation, high GC content, and a far distance from sites with similar target sequences (which could lead to off-target cutting) are favored.
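One of the filters described above, GC content, is simple to compute per candidate site; the sequences and the 40-60% window below are made up for illustration, not CRISPR COPIES output:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# hypothetical candidate safe-harbor sequences
candidates = {
    "site_A": "ATGCGCTTACGGCCAT",   # moderate GC
    "site_B": "ATATATTTAAATATAT",   # AT-rich, likely filtered out
}

# keep sites inside an illustrative GC window of 40-60%
kept = {name: s for name, s in candidates.items()
        if 0.40 <= gc_content(s) <= 0.60}
print(sorted(kept))  # ['site_A']
```

In the real pipeline this score would be combined with methylation level and off-target distance to rank the several hundred candidates.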
Future work will target these sites using the CRISPR-Cas12 endonuclease to insert UV-GFP into protoplasts. This will allow us to measure the expression of our inserted transgenes. Identifying reliable locations for insertion has the potential to aid crop production by providing a faster and more cost-effective way to add genes that could provide disease or herbicide resistance, drought resistance, increased fruit size, and more.
Assistant Professor, Agricultural & Biological Engineering, U of I College of ACES | Biography
Abstract written by Moises Rodriguez
This research investigates the feasibility of using artificial intelligence (AI) to identify horseradish and weeds in farmer field data, with the goal of assisting horseradish farmers in effectively managing weeds while minimizing harm to crops and the environment. The primary hypothesis was that an AI model could accurately classify horseradish and weeds using image data. We collected images through two methods: (1) manual capture with a camera on a tripod, and (2) a robotic platform. Data were gathered from drone and proximal images of horseradish farms in Collinsville, Illinois. These images were annotated with bounding boxes around horseradish and weeds to train the object detection model. The study demonstrated that the AI model effectively differentiates between horseradish and weeds, showing improved accuracy with the annotated datasets. The findings suggest that AI can be a valuable tool for weed management in horseradish production. The project plans to enhance the robotic platform to not only detect but also act on identified weeds. Currently, weed removal relies on manual labor; the future vision, however, includes integrating the trained AI model into the robotic platform for autonomous weed detection and removal. Future research will focus on refining the model with larger datasets and exploring its application in autonomous weeding systems.
Assistant Professor, Animal Science, U of I College of ACES | Biography
Abstract written by Veeraraju Elluru
We propose an image segmentation pipeline for generating out-of-distribution cattle segmentation masks at scale, employing an integration of Variational Autoencoder (VAE) and Generative Adversarial Network (GAN) architectures. Initially, the VAE is pre-trained in an unsupervised setting on a comprehensive cattle dataset to extract and encode latent representations, which are pivotal for downstream segmentation tasks. Following this, a GAN framework is employed to fine-tune these representations, enabling the network to learn and adapt to site-specific features with minimal manual labeling—a process that drastically reduces the labor-intensive annotation workload.
Our innovative few-shot self-supervised network excels in producing precise segmentation masks for large, unlabeled datasets, significantly mitigating the need for extensive labeled data.
The proposed pipeline demonstrates superior generalization capabilities for out-of-distribution datasets and surpasses existing self-supervised semantic segmentation methodologies for cattle. Our approach represents a substantial advancement in the realm of automated image segmentation, providing a scalable, efficient, and highly accurate solution tailored for agricultural applications characterized by data heterogeneity and scarcity.
Donald B. Gillies Professor, Computer Science, U of I Grainger College of Engineering | Biography
Abstract written by Sofia Zasiebida
Precision agriculture utilizes technological advancements to empower farmers to manage crops at the individual level. A major advancement in this field is under-canopy robots developed to monitor cornfield conditions within the row. Current computer vision models can identify weeds, pests, and areas of nutrient deficiency from images collected by these robots. A main roadblock to usability is that these models can only detect problematic areas without pinpointing their location across extensive fields. Additionally, the robot's GPS signal becomes unreliable for locating the robot and its collected data as the canopy thickens. This research addresses these challenges by integrating computer vision with GPS data to develop a pipeline that identifies and locates corn crops even in later growth stages. A YOLOv8 detection model and BoT-SORT tracker were used on the Ultralytics Hub platform to identify, track, and generate unique identifiers for individual corn stems as the robot collects video while navigating the row. These IDs were synchronized with GPS data collected by the robot, resulting in a map displaying IDs and GPS coordinates for a row of crops. The model was 76% accurate at generating IDs for each corn stem. A major challenge to tracking was leaf occlusion of the stems, which was partially solved by adjusting the camera angle and location. Occlusion will remain a significant obstacle to accurate tracking, as it causes overgeneration of IDs. Future work on this project would include improving tracking accuracy to reduce ID switches and automating the map creation. Including this model in a farm management system could be a potential use case for this project.
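The synchronization step described above, pairing each tracked stem ID with the robot's GPS log, can be sketched as nearest-timestamp matching; the IDs, timestamps, and coordinates below are hypothetical:

```python
def nearest_fix(ts, gps_log):
    """gps_log: list of (timestamp, lat, lon); return the closest fix."""
    return min(gps_log, key=lambda fix: abs(fix[0] - ts))

# toy GPS log along one row (seconds, latitude, longitude)
gps_log = [(0.0, 40.1000, -88.2000),
           (1.0, 40.1001, -88.2000),
           (2.0, 40.1002, -88.2001)]

# toy tracker output: (stem ID, timestamp of first detection)
stem_detections = [("stem_1", 0.1), ("stem_2", 1.4), ("stem_3", 1.9)]

# map each stem ID to the (lat, lon) of its nearest GPS fix
stem_map = {sid: nearest_fix(t, gps_log)[1:] for sid, t in stem_detections}
print(stem_map["stem_1"])  # (40.1, -88.2)
```

The resulting dictionary is effectively the row map the abstract describes: one georeferenced entry per unique stem ID, ready to plot or export.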
Abstract written by Austin Veal
The need for efficient and precise agricultural robots is crucial in modern farming to increase productivity and reduce labor costs. This project focuses on enhancing an agricultural robot’s durability and autonomous navigation capabilities to address these challenges. The robot’s robustness is improved through 3D modeling and 3D printing of parts designed to withstand outdoor environments, including water resistance and protection from plant interactions. Enhancing durability is essential to ensure continuous operation in diverse agricultural conditions, overcoming the limitations of previous models. For navigation, the robot uses GPS and LIDAR to execute precise end-row turns, optimizing its efficiency in agricultural tasks. Full autonomy is vital for agricultural robots to perform repetitive tasks accurately without human intervention. The navigation system employs GPS for general positioning, especially at field edges, and LIDAR for guiding the robot along rows and avoiding obstacles. Key objectives include enabling the robot to detect row ends, developing turning mechanisms to align with the next row, and integrating these functions into a master script for autonomous operation. This script coordinates the individual scripts, ensuring seamless navigation, continuous monitoring, and error handling. By addressing these challenges, the project enhances the robot’s overall performance in full-field autonomous navigation, contributing to more efficient and sustainable agricultural practices.
Abstract written by Sophie Reznik
Precision agriculture leverages advanced technologies such as sensors, cameras, GPS, AI, and machine learning to gather and analyze data on various farming aspects, including soil health, crop conditions, weather patterns, and resource usage. The primary goal is to optimize agricultural practices, increasing crop yield, efficiency, and profitability while minimizing environmental impact. This project aims to achieve these goals through the development of MyFarm, a virtual farm management system featuring an integrated image retrieval and analysis pipeline. MyFarm includes several modules: Sensor Data, Satellite Images, Farm View, and the Crop Wizard chat. The pipeline facilitates the transfer of robot-collected field images from the cluster server to the Farm View display and subsequently to the Crop Wizard chat. This system enables farmers to effortlessly view data collected from their farms and monitor crop growth via the Farm View page. When issues arise, specific images can be downloaded and analyzed by prompting the Crop Wizard chat, a Large Language Model that uses a RAG-based approach for agricultural applications, such as pest and weed detection. Crop Wizard provides quick and accurate diagnoses, enhancing decision-making and improving farm management efficiency. Enhancements such as adding a direct forwarding feature from Farm View to Crop Wizard, rather than the current download-and-upload mechanism, will further improve usability. Additionally, developing geolocation for crops in Farm View images will allow farmers to detect patterns of concern. Future plans envision MyFarm as a comprehensive farm management system, accessible as a mobile app with user-specific accounts, capable of connecting with field-deployed robots for agricultural operations.
Assistant Professor, Computer Science, U of I Grainger College of Engineering | Biography
Abstract written by Irene Pi
Real-world conversations heavily involve visual information, and multimodal vision language models expand large language models (LLMs) beyond text. We focused on improving the visual understanding of multimodal models through Retrieval-Augmented Generation (RAG) in agriculture, a field where precision and detail are especially important. However, existing datasets lack a natural-domain focus. First, we curated two diverse datasets of knowledge-intensive multiple-choice questions on biological images. The first dataset contains questions based on iNat biological images of plants and animals and Wikipedia context. The second dataset includes a modified version of question-answer pairs, each associated with images, from Extension.org. Second, we present a model architecture that leverages the vision capabilities of BioCLIP and the advanced language understanding of LLaVA. Early experiments demonstrate that the combination of BioCLIP and LLaVA outperforms LLaVA alone and the union of CLIP and LLaVA on our presented datasets. These results underscore the potential of our approach to advance visual understanding in multimodal models, particularly within the agricultural sector.
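The retrieval step of such a RAG pipeline can be illustrated with a minimal sketch: given an image embedding (as a model like BioCLIP would produce) and pre-embedded knowledge passages, rank the passages by cosine similarity and hand the best match to the language model as context. The toy 3-dimensional embeddings and entry names below are invented for illustration and are not from the study.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def retrieve(query_emb, corpus):
    """Return (name, embedding) passages ranked by similarity to the query."""
    return sorted(corpus, key=lambda item: cosine(query_emb, item[1]), reverse=True)

# Hypothetical pre-embedded knowledge passages (stand-ins for Wikipedia /
# Extension.org context); real embeddings would have hundreds of dimensions.
corpus = [
    ("soybean rust entry", [0.9, 0.1, 0.0]),
    ("corn borer entry",   [0.0, 0.8, 0.6]),
]
image_emb = [1.0, 0.0, 0.1]  # stand-in for a BioCLIP image embedding
print(retrieve(image_emb, corpus)[0][0])  # soybean rust entry
```

The retrieved passage would then be concatenated into the LLaVA prompt; the ranking logic itself is model-agnostic.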
Abstract Written by Aruna Gauba
Images are a fundamental component of agriculture questions. Plants can be affected by a variety of diseases, insects, and other conditions, which can only be identified with an expert understanding of biological image features.
In our project, we build on existing vision and language tools to give a Vision Language Model the ability to comprehend agricultural images.
Associate Professor, Agricultural & Biological Engineering, UIUC College of ACES | Biography
Abstract written by 2023 REU student Jelena Herriott.
Abstract
Ractopamine hydrochloride is a dietary supplement that increases growth rate when added to feed. The downside to using this supplement is an increase in agonistic behavior in pigs, so an alternative supplement needs to be identified. The goal of this project is to develop an activity index model that can distinguish between various pig activity levels. This activity index will be developed by training a model that identifies low, medium, and high pig activity from image frames extracted from video. This Pig Activity Index Model (PAIM) is being developed to aid worker efficiency on farms, pig health, and the monitoring of pig outcomes, such as during trials for new feed supplements. Pig activity can be used to determine the health status of pigs based on changes in activity levels, and the outcomes of this project will be used in future health detection tools.
This version of the PAIM utilized a learning model that recognizes image differences to represent overall activity level. The development of the PAIM model used over 300 frames of groups of pigs representing a variety of behaviors and postures in six different pens of the same pig barn. Frames were categorized by low, medium, and high activity level. Low pig activity is defined as lying and/or sitting. Medium pig activity is defined as maintenance behavior, including urinating, defecating, or feeding. High activity is defined as exploring the pen, socializing, or agonistic behavior.
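The abstract does not disclose the study's model, but the underlying idea of scoring activity from image differences can be sketched in plain Python: take the mean absolute pixel difference between consecutive grayscale frames and threshold it into the three classes. The cut-off values here are hypothetical, not the study's definitions.

```python
def activity_score(frame_a, frame_b):
    """Mean absolute pixel difference between two equal-sized
    grayscale frames (lists of rows of 0-255 intensity values)."""
    total = n = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            n += 1
    return total / n

def activity_level(score, low=5.0, high=20.0):
    """Map a difference score onto the abstract's three classes.
    The cut-offs low/high are illustrative placeholders."""
    if score < low:
        return "low"       # lying and/or sitting
    if score < high:
        return "medium"    # maintenance behavior
    return "high"          # exploring, socializing, agonistic behavior

# Two tiny 2x2 "frames"; one pixel changes between them.
still = [[10, 10], [10, 10]]
moved = [[10, 40], [10, 10]]
print(activity_level(activity_score(still, moved)))  # medium
```

A trained classifier would replace the fixed thresholds, but the frame-differencing signal is the same.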
Abstract written by 2023 REU student Scotteria Scott.
Alternative Title: Using Computer Vision Tools to Track Animal Movements
Abstract
Many studies have shown that Computer Vision (CV) tools can help strengthen animal research through deep learning. CV plays a significant role in food production, animal welfare, and animal behavior. The current goal of the study is to analyze an animal’s location and correlate it with behavior in order to predict the animal’s movement and activity. Within this project, a Standard Operating Procedure (SOP) was created to document the steps of animal tracking and detection. The SOP covers the following detection and tracking tools: YOLOv4, Label Studio, Detectron2, and Deep Simple Online Realtime Tracking (SORT). The detection models produce bounding boxes from which x and y coordinates are derived to track the animal. This method is important because it can help detect problems in animals before they occur and help animal scientists create solutions proactively. Pairing an animal’s location with its behavior can provide opportunities to further computer vision work in animal management. By the end of this study, progress will have been made toward correlating an animal’s location with its behavior, making farm life easier.
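The coordinate-extraction step described above, reducing each detector bounding box to a single (x, y) point that a tracker such as SORT can follow across frames, can be sketched as follows; the box values are hypothetical.

```python
def track_point(box):
    """Reduce a detector bounding box (x1, y1, x2, y2) to the (x, y)
    center point used as the animal's location in a track."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

# One hypothetical frame of detections -> one location per animal.
detections = [(40, 60, 120, 180), (300, 50, 420, 200)]
print([track_point(b) for b in detections])  # [(80.0, 120.0), (360.0, 125.0)]
```

Logging these points frame by frame yields the x/y trajectories that the abstract pairs with behavior labels.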
Abstract written by 2023 REU student Emma Fuentes.
Alternative Title: Relevant Behavior and Posture for Emerging Respiratory Infection in Pigs
Abstract
Behavioral analysis could be utilized for the early detection of respiratory illness in sows. When executed with labor-reducing computer technology, this could enable early treatment, which could mitigate the economic and animal welfare costs of illnesses such as porcine reproductive and respiratory syndrome virus (PRRS), which burdens American breeding herds by an estimated $117.71 per breeding female and $302.6 million annually in additional costs and lost revenue (Holtkamp et al., 2012). The Behavioral Observation Research Interactive Software (BORIS; Friard & Gamba, 2016) was used to analyze behavioral footage of 10 sows subjected to a lipopolysaccharide (LPS) immune challenge, which resulted in 3 mortalities. Postmortem evaluation revealed evidence of pre-existing subclinical respiratory conditions. A behavior ethogram, which consisted of 26 behavior labels with paired postures, behaviors, and animal identifications, was developed and applied between 10:30 a.m. and 11:00 a.m., shortly after the second injections. This selected time frame served as a preliminary sample for anticipated further analysis between 10:30 a.m. and 11:30 a.m. on the day prior to the trial and on the days of the first and second injections. Behavior observations were continuously recorded for the selected time frame.
The goal of the behavior analysis of this study was to detect pigs with subclinical respiratory infections. Information about an animal’s health status gleaned from behavioral analysis could lead to achieving a more generous timeline for management intervention. Additionally, the methods of this experiment are supportive of an optimistic future for eventually training computer vision models for animal behavior analysis, which would relieve the pork industry of strain from labor shortages.
Associate Professor, Communication, UIUC College of LAS | Biography
Abstract written by 2023 REU student Iradatulah Sulayman.
Abstract
This study addresses the need to understand the career paths, research projects, and social-ethical implications of research in bioinformatics and its application to society. Our goal was to explore how students, graduates, and professionals in bioinformatics reached their current positions and to examine their perspectives on the social and ethical aspects of their work. We wanted to understand whether society and experts in other fields have enough information about their research and how this engagement, or lack thereof, might perpetuate a narrative. The research methods comprised in-depth interviews and literature reviews. In our interviews, qualitative methods of questioning were used, and thematic analysis was applied to the data to identify recurring patterns and key themes. We used these findings to determine which questions and areas need further study and focused on aspects of bioinformatics that prior research finds controversial.
This study is important because it sheds light on the research and educational backgrounds of our interviewees while exploring their opinions on how their work affects society. It highlights their field’s significant contributions to digital biology and the agricultural system. It also dives into the ethical implications of their research regarding the food system and the responsible use of technology. This study lays the groundwork for future research in the field of bioinformatics regarding ethical considerations and will help continue to build and encourage responsible, impactful research practices. The findings contribute to the ongoing ethical discourse in bioinformatics research and its impact on society.
Assistant Professor, Animal Sciences, UIUC College of ACES | Biography
Abstract written by 2023 REU student Kennedie Manuel.
Alternative Title: Deep Learning Approach to Interpreting Tail Movement as an Estrus Sign in Gilts
Abstract
Timely estrus detection is vital to optimizing artificial insemination (AI) in gilts. Inaccurate insemination timing from missed estrus events is a major contributor to AI failure and can result in a loss of time, money, and reproductive output. Conventional methods rely heavily on human intervention, placing an undue burden on a shrinking farm labor force. Some researchers have found that there may be a relationship between tail movement and estrus. This study aimed to create a computer vision-based object detection model using YOLO v8 (You Only Look Once) to detect estrus in gilts, focusing on tail movements. Digital images of gilts were labeled for tail movement using Label Studio. The model was capable of detecting two tail positions (tail up, tail down) with an overall mean average precision (mAP) of 0.821 at 0.5 intersection over union (IoU). The mAP for tail up and tail down was 0.892 and 0.750, respectively. These results are satisfactory for detecting tail movement in gilts automatically. If appropriately implemented, this artificial intelligence model may lay the groundwork for expanding estrus detection technology to increase farm efficiency and farrowing rates.
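The IoU criterion behind the reported mAP can be sketched in a few lines: a predicted box counts as a correct detection at the 0.5 threshold used above when its overlap with the ground-truth box reaches 0.5. The box coordinates below are illustrative, not from the study.

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two hypothetical 10x10 boxes shifted by half their width overlap by 1/3.
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333
```

Averaging precision over recall levels for each class at this threshold, then over classes, yields the mAP@0.5 figures quoted in the abstract.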
Professor, Integrative Biology, UIUC College of LAS | Biography
Abstract written by 2023 REU student Munirat Ibrahim.
Alternative Title: Visualizing and Analyzing the Effects of Ozone and Rainfall Exclusion on Soil Moisture Profiles
Abstract
Understanding the impacts of environmental factors on soil moisture profiles is very important for predicting how crops will respond to global climate change. In this study, we examined the effects of increased ozone concentration and rainfall exclusion on soil moisture dynamics. Soybean was grown under two ozone concentrations and two rainfall levels at the SoyFACE (Soybean Free Air Concentration Enrichment) facility. SoyFACE enables researchers to obtain essential insights into the possible effects of carbon dioxide and ozone pollution on crop productivity by using open-air field plots and controlled release of the gases to predict the impacts of future climate conditions on agriculture. In addition to ozone exposure, drought harms plants by reducing soil moisture levels and decreasing the amount of water available to plants. Soil moisture can be measured at various depths by inserting multiple probes into the soil, each equipped with sensors that record moisture content at its respective depth. In order to understand the relationship between ozone exposure, drought, and soil moisture more clearly, we will visualize experimental soil moisture data using three-dimensional graphs. Three-dimensional graphs provide the advantage of improved visualization by representing numerous variables at once, allowing for a more thorough understanding of complex data. The intricate relationship between ozone concentration, rainfall, and soil moisture dynamics through time and space remains inadequately understood.
Our project aims to bridge this knowledge gap by utilizing agricultural data and mathematical coding techniques to analyze and visualize the effects of ozone and rainfall exclusion on soil moisture profiles.
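As a sketch of the data-preparation step such visualization implies, long-format probe readings of (depth, day, moisture) can be pivoted into a depth-by-day grid ready for a surface plot, e.g. with matplotlib's `Axes3D.plot_surface`. The readings below are invented for illustration; they are not SoyFACE data.

```python
def moisture_grid(records, depths, days):
    """Pivot long-format (depth_cm, day, moisture) readings into a
    depth x day grid suitable for a 3-D surface plot; cells with no
    reading stay None so gaps remain visible."""
    lookup = {(d, t): m for d, t, m in records}
    return [[lookup.get((d, t)) for t in days] for d in depths]

# Hypothetical readings from probes at 10 cm and 30 cm over two days.
records = [
    (10, 1, 0.31), (10, 2, 0.29),
    (30, 1, 0.25), (30, 2, 0.24),
]
grid = moisture_grid(records, depths=[10, 30], days=[1, 2])
print(grid)  # [[0.31, 0.29], [0.25, 0.24]]
```

The resulting grid maps directly onto the two horizontal axes (day, depth) and the vertical axis (moisture) of a three-dimensional graph.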
Professor, Crop Sciences, UIUC College of ACES | Biography
Assistant Professor, Crop Sciences, UIUC College of ACES | Biography
Abstract written by 2023 REU student Valeria Suss.
Abstract
Tomatoes (Solanum lycopersicum) rank as the second most-consumed crop in the United States and are a model organism for studying fruit development, metabolic processes, and genetics. Plant transformation is a type of gene transfer that can occur naturally in the environment via infection by certain pathogenic bacteria and has been utilized by scientists in the laboratory, allowing for significant advances in biotechnological research. Generation of transgenic plants allows for faster implementation of traits of interest, thus leading toward the development of more productive and resilient crops. Rhizobium rhizogenes, the causal agent of hairy root disease, causes the formation of transgenic or “hairy” roots on an infected plant.
The purpose of this study is to generate transformed tomato plants expressing an ultraviolet-excited Enhanced Green Fluorescent Protein (eYGFPuv) gene using hairy root transformation mediated by R. rhizogenes. Tomato seedlings were inoculated with a strain of R. rhizogenes (ATCC 15834) containing a plasmid encoding for eYGFPuv, and roots were monitored with a UV light for the expression of the reporter gene. We were able to induce hairy root transformation in tomato seedlings and observed eYGFPuv expression in all inoculated plants.