Data-driven management of greenhouse high-wire fruiting vegetables using 3D scanning

Original paper: Ohashi, Y., Y. Ishigami, and E. Goto. 2020. Monitoring the growth and yield of fruit vegetables in a greenhouse using a three-dimensional scanner. Sensors. 20: 5270. doi:10.3390/s20185270

Controlled environment agriculture (CEA) for food crop production allows high productivity and efficient resource use, and its applications have been expanding rapidly worldwide. However, expansion is currently slowed by a shortage of experienced workers, and rising labor costs are a serious constraint on the potential growth of CEA. More automation is needed to reduce the labor required for the many tasks of crop management. Automated monitoring of plant growth and morphology (structure) can help growers make more science-based, data-driven decisions for crop management and minimize labor, energy, water, and other resource use.

Three-dimensional scanning technologies have developed rapidly for creating digital twins (models) used in various measurements and computational analyses, and affordable scanners and data-processing software are now available. The authors of this paper, a team of horticultural engineers at Chiba University (Matsudo, Japan), demonstrated the use of 3D scanning for greenhouse crop management. Their specific objective was to estimate key metrics of plant growth and productivity, such as leaf area, plant height, biomass, and fruit yield, for three major fruiting crops grown in greenhouses: tomato, cucumber, and sweet (bell) pepper.

The tests were conducted in a greenhouse at Chiba University using tomato, cucumber, and pepper plants of selected cultivars at different ages (sizes). A hand-held 3D scanner (DPI-8X, Opt Technologies, Tokyo) was used to scan individual plants as well as a group (canopy) of six plants grown with a typical soilless cultivation method at a planting density common in commercial greenhouses. Each scan generated a large set of data points with xyz coordinates (a point cloud), which was then converted to a digital ‘surface model’ for estimating leaf area and a digital ‘solid model’ for estimating fruit yield. The leaf areas were then used to estimate the biomass (dry weight) of leaves and whole plants using predetermined correlations between these variables. However, leaf area and plant biomass were estimated less accurately for tomato than for cucumber and pepper because of tomato’s more complex leaf morphology (compound leaves with many leaflets).
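The paper itself includes no code, but the leaf-area step can be illustrated with a minimal Python sketch. It assumes the scan has already been converted to a triangulated surface mesh (vertex coordinates plus triangle indices), and the regression coefficients linking leaf area to dry weight are placeholder values, not numbers from the study.

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Total area of a triangulated surface mesh built from the point cloud.

    vertices : (N, 3) array of xyz coordinates
    faces    : (M, 3) array of vertex indices, one row per triangle
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Each triangle's area is half the norm of the cross product of two edges.
    tri_areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    return tri_areas.sum()

# Placeholder allometric relation: leaf dry weight (g) as a linear function of
# leaf area (m^2). The slope and intercept are illustrative only; in practice
# they would come from destructive calibration measurements for each crop.
SLOPE_G_PER_M2 = 45.0
INTERCEPT_G = 0.0

def estimated_leaf_dry_weight(leaf_area_m2):
    return SLOPE_G_PER_M2 * leaf_area_m2 + INTERCEPT_G
```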

The authors also estimated leaf area at different heights within the canopy, demonstrating the capacity to quantify leaf area distribution inside the canopy. This information is especially useful for estimating how much light is available at various heights in the canopy. Knowing the light distribution can help growers apply more data-driven crop management practices, such as leaf pruning and plant density management, so that light use efficiency is maximized.
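As an illustration of this layer-by-layer idea, the sketch below bins triangle areas from the surface model into horizontal canopy layers and then applies a simple Beer-Lambert attenuation to approximate the relative light reaching each layer. The Beer-Lambert step is a standard canopy-light approximation rather than a method described in the paper, and the layer height, ground area, and extinction coefficient are assumed values.

```python
import numpy as np

def leaf_area_by_height(tri_centroid_z, tri_areas, layer_height=0.3):
    """Sum triangle areas of the surface model into horizontal canopy layers.

    tri_centroid_z : (M,) array of triangle centroid heights (m)
    tri_areas      : (M,) array of triangle areas (m^2)
    Returns the layer bin edges and the leaf area per layer, bottom to top.
    """
    edges = np.arange(tri_centroid_z.min(),
                      tri_centroid_z.max() + layer_height, layer_height)
    layer_area, _ = np.histogram(tri_centroid_z, bins=edges, weights=tri_areas)
    return edges, layer_area

def relative_light_profile(layer_area, ground_area, k=0.7):
    """Beer-Lambert style estimate of the light fraction reaching each layer,
    based on the cumulative leaf area index (LAI) above it. The extinction
    coefficient k = 0.7 is an assumed value; real values depend on the crop
    and its leaf angle distribution."""
    lai_per_layer = layer_area / ground_area
    cum_lai_above = np.cumsum(lai_per_layer[::-1])[::-1] - lai_per_layer
    return np.exp(-k * cum_lai_above)
```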

Yield prediction based on 3D scans showed reasonable correlation with measured yield (R2 > 0.7) for tomato and pepper. The RGB data obtained by the scanner were used to detect ripe fruit, whose point cloud data were converted to solid (voxel) models. Fruit volume estimated from the solid model was then converted to fruit mass (grams) using a predetermined fruit density (the ratio of mass to volume). Cucumber fruit was not analyzed because fruit shapes were extracted based on color (red or yellow), and cucumber fruit is green and therefore difficult to distinguish from leaves. For the same reason, immature (unripe) fruits of tomato and pepper were not extracted either. Overall, fruit yield was estimated least accurately compared with leaf area and the other metrics, largely because the far side of each fruit cannot be scanned.
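The fruit step can be sketched in the same spirit. The paper converted color-selected fruit points to a filled voxel model; the sketch below substitutes a convex hull as a rough stand-in for that solid model, assumes the points belonging to a single fruit have already been isolated, and uses placeholder values for the red-dominance threshold and fruit density (coordinates are assumed to be in centimeters).

```python
import numpy as np
from scipy.spatial import ConvexHull

def extract_ripe_fruit_points(xyz, rgb, red_ratio=1.4):
    """Keep points whose red channel clearly dominates green and blue.
    The threshold is arbitrary and would need tuning per crop and scene."""
    r, g, b = (rgb[:, i].astype(float) for i in range(3))
    mask = (r > red_ratio * g) & (r > red_ratio * b)
    return xyz[mask]

def fruit_mass_estimate(fruit_xyz, density_g_per_cm3=0.95):
    """Approximate fresh mass (g) of one fruit from its points (coordinates
    in cm). A convex hull stands in for the paper's filled voxel model; the
    density value is assumed, not taken from the study."""
    volume_cm3 = ConvexHull(fruit_xyz).volume
    return volume_cm3 * density_g_per_cm3
```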

Although dense canopies and overlapping leaves make high accuracy difficult, this preliminary effort to demonstrate the potential of a 3D scanner for crop management was successful. The coefficient of determination (R2) was almost always high (>0.8), except for the leaf area index (LAI) of tomato. The tools (hardware and software) used for scanning and data conversion are commercially available. Although more work is needed to improve accuracy and to streamline the logistics of measurement and data processing, this is a valuable demonstration of what can be done with a simple handheld 3D scanner or a similar tool in future greenhouse crop production.

5 thoughts on “Data-driven management of greenhouse high-wire fruiting vegetables using 3D scanning”

  1. The paper does not give many details about data acquisition, typical file sizes, etc. I would imagine it takes a long time to complete one scan, especially for a plant canopy, even though one scan covered only six plants.

    • After reading this paper I wondered what the commercial application of this technology would actually look like. For example, the current approach relied on a handheld scanner, which would obviously not save much in terms of labour, but I could envision a commercial system where a mounted scanner moves along a line between the rows to gather information. However, I think further development with the capability to estimate vegetative vs. generative growth would make this technology even more valuable to a grower.
      Regardless of commercial application, this sort of 3D scanning technology has great research value, as it would allow typically destructive measurements like leaf area to be taken at multiple points in time without having to sacrifice plants.

  2. Similarly, the paper does not provide enough information about the process of extracting the target information by removing the background and other unwanted areas or objects from the scan. The software they used may allow this, but more detail would have been helpful.

  3. I found the yield and fruit detection portion of the study very impressive and fascinating. Like Jason Hollick said above, I also thought about the application side of the technology and how it could be applied to a wider variety of greenhouse crops. The researchers’ choice of technology, a handheld 3D scanner, allowed them to make multiple predictions. Yet it had a limitation: the inability to detect cucumber fruits against a canopy of the same green color based on RGB values. I looked into the topic of fruit detection using imaging information to see if there are any advancements that could overcome this problem, and there are! With the incorporation of machine learning or deep learning, the ability to detect green fruits from the same RGB information can be enhanced. A relevant study can be found via this link https://www.semanticscholar.org/paper/DeepFruits%3A-A-Fruit-Detection-System-Using-Deep-Sa-Ge/9397e7acd062245d37350f5c05faf56e9cfae0d6.
    This is also exciting, as machine learning is the last topic we discussed and is gradually becoming a more useful and powerful tool in CEA research and application.

  4. I think it’s exciting that we are reaching the point where our technology can reliably track plant growth rates. While this technology still has a lot of room to improve, decent R2 values are exciting to see. I wonder if AI could be applied to improve the technology’s ability to handle complex leaf shapes so that these systems become more precise over time. I am also curious whether there is a way to differentiate a green fruit (e.g. cucumber) from its leaves other than by visible wavelengths, such as scanning with NIR or near-UV frequencies.
