Since Linnaeus, and still today, plant classification has been based on flowers.
Flowers, unlike flat leaves and linear stems, are essentially 3D structures.
This means that one can rarely see all the significant features in a single
flat image. Moreover, identical organs that are oriented differently (like petals)
will appear unequal, and there is possible confusion between an individual
flower and an inflorescence (a cluster of small flowers, as in the clover).
Color is often not a discriminating character, since it can vary within
a single species.
A typical flower is composed, from bottom to top, of 5 rings (or verticils)
of organs attached to a vertical axis:
Leaves also have discriminating characters, and are (generally) 2D.
These characters are:
- shape
- veins (better seen on the underside)
- different types of hairs, better distinguished by touch or with a lens
- sometimes color
So you see, plant images are a challenge for a general-purpose 2D image system.
In French:
Of course L-Systems are a link in the chain we envision:
So we can enter L-System models into the database. I see the following advantages to this representation:
What is not clear to me now (this is not an urgent issue) is:
Will we develop, or is there already, an XML vocabulary for L-Systems, or will
an L-System definition remain "just" a character string?
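To make the "just a character string" view concrete, here is a minimal sketch of L-System expansion; the axiom and rules are the classic Lindenmayer algae example, not a model from our database:

```python
# Minimal L-System expansion: rules map each symbol to its replacement
# string; symbols without a rule are copied unchanged.
def expand(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's classic "algae" system: A -> AB, B -> A
rules = {"A": "AB", "B": "A"}
print(expand("A", rules, 4))  # -> ABAABABA
```

Everything a graphical interpreter needs is in the string and the rule table, which is why the plain-string representation is so common.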
The advantage of an XML vocabulary would be a unified syntax,
from which a non-graphical processor could extract information of taxonomic
relevance. From an XML representation, it is easy to generate a standard
L-System definition, using an XSLT stylesheet.
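As a sketch of what such a vocabulary might look like (the element names lsystem, axiom, and rule are pure invention here, not an existing standard), the following Python flattens the XML back to the usual textual notation, playing the role an XSLT stylesheet would play:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML vocabulary for an L-System (invented element names).
doc = """
<lsystem name="algae">
  <axiom>A</axiom>
  <rule from="A" to="AB"/>
  <rule from="B" to="A"/>
</lsystem>
"""

root = ET.fromstring(doc)
axiom = root.findtext("axiom")
rules = {r.get("from"): r.get("to") for r in root.findall("rule")}

# Flatten to a plain-string definition.
text = axiom + " ; " + " ; ".join(f"{k} -> {v}" for k, v in rules.items())
print(text)  # -> A ; A -> AB ; B -> A
```

The same tree could just as easily be queried for taxonomic characters (branching angles, organ counts) without ever rendering the model, which is the point of having a structured representation.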
I'm currently evaluating free downloads, notably Geometra. The demo I saw on their site is a house, with sharp edges; the user must click on 2 or more points in the 2D pictures before any 3D information is computed.
The next stage in our project would be to generate from the 3D facets a compact, non-proprietary, preferably XML, clean definition of complex 3D geometry, in the form of unions and intersections of volumes defined by inequalities:
f(x,y,z) >= 0
and surfaces (e.g. NURBS and Bezier patches) defined by 3 functions R2 ---> R3:
(u,v) ---> (X(u,v), Y(u,v), Z(u,v))
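With the convention that a point is inside a volume when f(x,y,z) >= 0, unions and intersections reduce to max and min of the defining functions; a small sketch (the sphere and half-space here are arbitrary illustrative shapes, not part of the project):

```python
# Implicit volumes: a point is inside when f(x, y, z) >= 0.
def sphere(x, y, z, r=1.0):
    return r * r - (x * x + y * y + z * z)

def halfspace(x, y, z):
    return z  # the region z >= 0

def union(f, g):
    # Inside the union if inside f OR g: take the larger value.
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def intersection(f, g):
    # Inside the intersection if inside f AND g: take the smaller value.
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

# Upper half-ball: inside the unit sphere AND above the plane z = 0.
half_ball = intersection(sphere, halfspace)
print(half_ball(0, 0, 0.5) >= 0)   # -> True
print(half_ball(0, 0, -0.5) >= 0)  # -> False
```

This max/min formulation is one standard way of combining implicit volumes; a serialized XML form would only need to record the tree of set operations and the leaf equations.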
This "next stage" is probably still a research subject, but that should not prevent us from gathering pictures from different angles and generating 3D information from them.