Abstract
In this paper we study the application of Function-Described Graphs (FDGs) to 3D-object modeling and recognition. Given a set of topologically different 2D views of an object, an FDG is synthesized from the attributed adjacency graphs extracted from each view. It is shown that keeping qualitative information about the second-order joint probabilities between vertices in the object representation (the FDG) increases the object recognition ratio while decreasing the run time of the classification process.