(NIFA is accepting proposals for conferences to identify opportunities and bottlenecks in generating, managing, and integrating data within the food and agricultural system. NIFA will also consider research proposals that apply or enhance big data activities and efforts. Applications submitted to the 2017 Agriculture and Food Research Initiative Foundational Program will be accepted through the deadlines specified for each program area.)
We live in an age in which large volumes of information—“big data”—are generated and collected rapidly to add value to our daily lives. Industry has harnessed data to target advertising, tailor medical treatments, and even develop the technology and infrastructure that allow us to carry miniature computers in our pockets, devices that pause whatever they are doing to accept an incoming phone call.
The agricultural community wants to use big data to help farmers make decisions that will increase yields and deliver safe, nutritious food to communities around the world. Researchers and farmers use predictive modeling to identify the best management practices for achieving the best crop and livestock performance under various environmental conditions. To make the most accurate predictions, models based on even the most advanced machine learning algorithms must be rooted in comprehensive datasets. Such datasets often include numerous weather and soil measurements as well as corresponding plant or animal performance assessments under multiple management regimes over multiple years.
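To make that idea concrete, here is a minimal sketch of the kind of predictive model described above, written in Python with scikit-learn. The file name and column names (rainfall, growing degree days, soil pH, yield) are hypothetical placeholders, not datasets referenced in this article; it simply illustrates fitting a model that relates weather and soil measurements to crop performance.

```python
# Minimal sketch: predicting crop yield from weather and soil measurements.
# The CSV file and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical multi-year field-trial dataset: one row per plot per year.
data = pd.read_csv("field_trials.csv")
features = ["rainfall_mm", "growing_degree_days", "soil_ph", "soil_organic_matter_pct"]
X, y = data[features], data["yield_bu_per_acre"]

# Hold out some plot-years to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("Mean absolute error:", mean_absolute_error(y_test, model.predict(X_test)))
```

A model like this is only as good as the breadth of conditions captured in its training data, which is why the article stresses comprehensive, multi-year, multi-site datasets.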
But how “big” is big data, anyway?
If you were to print each of the 2.5 billion DNA bases that make up the genome of a single corn plant, you would end up with a stack of printer paper taller than the Statue of Liberty. That’s pretty big, and it sheds light on only one facet of our multifaceted agricultural systems.
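For a rough sense of scale, a back-of-envelope calculation supports the comparison. The figures for characters per page and sheet thickness below are assumptions, not numbers from the article.

```python
# Back-of-envelope check of the paper-stack comparison.
genome_bases = 2.5e9         # corn genome size cited in the article
chars_per_page = 3000        # ~one single-spaced page of plain text (assumption)
sheet_thickness_m = 0.0001   # ~0.1 mm per sheet of printer paper (assumption)

pages = genome_bases / chars_per_page
stack_height_m = pages * sheet_thickness_m

print(f"{pages:,.0f} pages, stack ~{stack_height_m:.0f} m tall")
# Roughly 80 m of paper, comparable to the Statue of Liberty
# (about 46 m for the statue itself, about 93 m including the pedestal).
```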
USDA’s National Institute of Food and Agriculture (NIFA) launched the Food and Agriculture Cyberinformatics and Tools Initiative at a summit in October 2016 to promote effective use of data within the agricultural community. At this event and through a web-based “ideas engine,” stakeholders from academia, industry, and government emphasized the need to maintain standards to ensure that agricultural datasets follow “FAIR”—findable, accessible, interoperable, and reusable—principles. Stakeholders identified many priorities to achieve these goals, including:
- Develop a culture that supports and rewards communities of researchers for building on one another’s datasets, standardizing protocols, harmonizing experimental designs, and ensuring that datasets remain usable over the long term, minimizing the number of “orphan” datasets;
- Foster private-public partnerships while addressing tradeoffs among data availability, ownership, value, incentives for sharing, and privacy;
- Build a robust infrastructure that houses and provides open access to publicly funded datasets; and
- Train a workforce that can manage, analyze, and manipulate large datasets.
Ultimately, NIFA seeks to help the agricultural community harness big data in ways that ensure all people, at all times, have access to safe and nutritious food.
NIFA invests in and advances agricultural research, education, and extension, and promotes transformative discoveries that solve societal challenges.