Understanding Field Trial Data: A Producer's Guide
Katelyn Miller, Field Crops and Forage Specialist
Southwest New York Dairy, Livestock and Field Crops Program
A few winters ago, I heard a presentation from Jaime Cummings reviewing fungicide trial data that has stuck with me ever since. Her presentation covered interpreting data from various corn trials: what makes good data, which questions to ask, and how to determine whether meaningful data was collected. Having completed a statistics class this fall, I now have a deeper understanding of interpreting research data than I had before, which is what got me reinvigorated on this topic.
You all have consultants or salesmen (or saleswomen) showing up on farm, receive newsletters chock-full of articles (thanks for reading CCC, by the way!), and attend winter meetings where data on best management practices or products are shared with you, and you have to determine whether it's something that fits your operation. Knowing all of this, being able to critically evaluate the information being shared is essential to making that determination.
Field data allows us to measure real-world impact in a way that controlled environments (like greenhouse studies) cannot, but both have their place. For in-field research, there are a multitude of factors to consider when judging the quality of the data being provided.
- Plot setup: Randomized, replicated plots within a field or across locations remove unconscious bias from trial setup and from natural in-field variability, including soil types, slopes, fertility, and moisture.
- Controls: In a study, there should be a comparison against an untreated check or a standard benchmark. Without comparing data against a "normal", there is no way to know where the observed responses came from. Was it the treatment, weather, fertility, management? If a baseline isn't clear, then the results aren't particularly useful.
- Repetition: As we know, no two years or fields are alike. With data spanning multiple years and various environments, there can be more confidence in the results. Single-site, single-year data can be heavily influenced by weather patterns, disease pressure, or soil conditions unique to that season.
- Experimental Design: Look for explanations of the study - what rates, plot sizes, timings, and so on were used. Was the equipment similar to what you use? Is it agronomically realistic? When this information is missing, the results can be difficult to interpret or replicate.
- Demonstrations: Demonstrations are a valuable learning tool, but they are not the same as scientific experiments. Side-by-side strips without replication, or a treated versus untreated area, do serve a purpose: they show what can happen. They don't show what will most likely happen.
- Statistical Analysis: Information beyond yield averages should be shared, as there is more to the data than those values. Look for measures of statistical significance, such as least significant difference, p-values, or confidence intervals, along with measures of variability. Just because one variety yielded more than another doesn't mean the difference was large enough to be meaningful.
- Transparency: Transparency matters no matter what the result is. That means reporting the methods and results regardless of whether the conclusions are "good" or "bad."
- Verbiage: Be mindful of what is being shared with you. Watch for selective reporting of best-performing locations, claims based only on testimonials, and percent increases without the underlying numbers.
At the end of the day, just because data doesn't align perfectly with the considerations laid out here doesn't mean the results are automatically invalid. Ask questions and think critically.
Another thing to think about is how data is presented, as the way it's shown in graphs matters to interpretation as well. Graphs can be easy to misinterpret, can represent data poorly, or can fail to tell the whole story. Things to look out for include odd values on the x-axis or y-axis, a missing legend, or chart types that are inappropriate for the data being shown.
Now, as I alluded to above, proper statistical analysis is important in determining data quality. I feel as though statistical terms get casually thrown around, so I think it's important to take time to break down the definitions and give some context to what they mean.
p-value: (probability value) the probability of seeing a difference this large by chance alone if the treatment truly had no effect. A lower p-value means higher confidence that the treatment, not chance, caused the result
least significant difference: (LSD) the minimum difference needed between two treatment means to say that one is truly different from the other
coefficient of variation: (CV) a measure of how much variation is in the data relative to the mean. A low CV means the results were consistent, while a high CV suggests high field variability.
standard deviation: (SD) a measure of how spread out the individual yields are from the average. When the value is low, yields were close to the average, while a high SD value means yields were inconsistent.
significance level: (confidence level) tells you how confident you can be that the treatment worked, rather than the results being due to random, uncontrolled factors
least squares means: adjusted averages that estimate what the mean performance would be if the dataset were perfectly balanced
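To make a few of these terms concrete, here is a minimal sketch in Python using invented yield numbers (the treatment names and values are hypothetical, not from any real trial). It computes the average, standard deviation, and coefficient of variation for two small sets of replicated plots, the same quantities a trial report would summarize:

```python
import statistics

# Hypothetical yields (bu/ac) from four replicated plots per treatment.
# These numbers are invented for illustration only.
treated = [182, 176, 189, 179]
untreated = [171, 168, 180, 173]

for name, yields in [("treated", treated), ("untreated", untreated)]:
    mean = statistics.mean(yields)
    sd = statistics.stdev(yields)   # standard deviation: spread around the average
    cv = 100 * sd / mean            # coefficient of variation, as a percent
    print(f"{name}: mean = {mean:.1f} bu/ac, SD = {sd:.1f}, CV = {cv:.1f}%")

# The raw difference between treatment averages. Whether it "counts"
# depends on whether it exceeds the LSD reported for the trial.
difference = statistics.mean(treated) - statistics.mean(untreated)
print(f"difference between means = {difference:.1f} bu/ac")
```

In this made-up example, both treatments have a CV around 3%, meaning the plots were consistent; whether the 8.5 bu/ac difference between averages is real would then depend on the LSD and p-value reported for the trial.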
Every day, you are sorting through information and using it to make management decisions. Being able to effectively interpret and analyze the information shared with you is an important skill. Hopefully, you'll now be better prepared to make decisions from the data being shared with you.
Upcoming Events
NYSDEC How to Get Certified Course
March 3, 2026
Ellicottville, NY
NYSDEC training course in preparation to take the pesticide applicator exam.
From Data to Dollars: Making Data-driven Decisions to Increase Farmers Market Success
March 3, 2026
The Cornell Agricultural Marketing Research Program and Penn State University are excited to present this new, 6-week course as part of Cornell's Farmers Market Research Project. The course is for farmers with experience selling at farmers markets who wish to increase their earnings through management and marketing practices.
Cornell Organic Field Crops & Dairy Conference
March 6, 2026
Waterloo, NY
Farmers, researchers, educators, and agricultural service providers from across the Northeast are invited to the 2026 Cornell Organic Field Crops & Dairy Conference, held Friday, March 6, 2026, from 8:00 a.m. to 4:30 p.m. at the Lux Hotel & Conference Center in Waterloo, N.Y.
Co-hosted by New York Soil Health and Cornell CALS, the annual conference brings together leaders in organic grain, dairy, and livestock systems to share practical tools, new research, and farmer-tested strategies to support resilient and profitable organic production.
Announcements
Cows, Crops & Critters Newsletter Sponsorship
TRYING TO REACH GROWERS AND AGRIBUSINESSES IN OUR SOUTHWEST REGION OF NEW YORK?
Weekly Email Update: Shared with 625+ households who have signed up with our program.
Monthly Paper Mailer: To reach our stakeholders and farmers who lack internet access, we send out a monthly mailer where your company's logo and contact information would be featured with a mailing list of 330+ households.
If you sponsor our weekly and monthly publications you reach approximately 955 households.