As far as I remember, the first application I gave to entropy was helping a robot find its way in a semi-structured environment using only vision [Bonev, Cazorla, Escolano 2007]. Not in terms of high-level knowledge, but just to go where there seem to be things in the distance. When people take a walk, they don't get stuck trying to walk into a building; instead, they see something at the end of a street and start walking that way.
In an image representing 360º of the environment, that "something at the end of a street" is visually perceived as a more entropic region:
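As a minimal sketch of the idea, the following computes the Shannon entropy of the gray-level histogram in each angular sector of a panoramic image. The sector count, bin count, and the assumption that image columns span the full 360º are illustrative choices, not the exact parameters of the original system:

```python
import numpy as np

def angular_entropy(panorama: np.ndarray, n_sectors: int = 36,
                    n_bins: int = 64) -> np.ndarray:
    """Shannon entropy (in bits) of the gray-level histogram in each
    angular sector of a panoramic image.

    panorama: 2-D uint8 array whose columns span 0..360 degrees of azimuth.
    Returns one entropy value per sector.
    """
    h, w = panorama.shape
    sector_w = w // n_sectors
    entropies = np.empty(n_sectors)
    for i in range(n_sectors):
        sector = panorama[:, i * sector_w:(i + 1) * sector_w]
        counts, _ = np.histogram(sector, bins=n_bins, range=(0, 256))
        p = counts / counts.sum()
        p = p[p > 0]                       # 0 * log 0 is taken as 0
        entropies[i] = -np.sum(p * np.log2(p))
    return entropies
```

Open regions (the end of a street, a doorway down a corridor) contain many distant, varied structures, so their histograms are flatter and their entropy higher than that of a nearby flat wall.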
To avoid ambiguities, I forced the system to keep only the two most entropic regions at a time: the two ends of a corridor or a street. A Fourier approximation yields the following map: for each moment in the timeline there are only two hot regions along the angle axis. The robot should head toward one of them: the one which agrees with its current heading.
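A hedged sketch of that step, assuming the entropy profile is treated as a periodic signal over the angle axis: keeping only the lowest few Fourier harmonics smooths the profile down to a couple of dominant peaks, and the robot picks the peak closest to its current heading. The number of harmonics kept and the angle convention (column 0 mapped to -180º) are assumptions for illustration:

```python
import numpy as np

def heading_from_entropy(entropies: np.ndarray, current_heading: float,
                         n_harmonics: int = 3) -> float:
    """Low-pass Fourier approximation of the angular entropy profile,
    then head toward the entropy peak that agrees with current_heading.

    entropies: one value per angular sector (periodic over 360 degrees).
    current_heading: radians, in [-pi, pi).
    """
    n = len(entropies)
    spectrum = np.fft.rfft(entropies)
    spectrum[n_harmonics + 1:] = 0         # keep only the lowest harmonics
    smooth = np.fft.irfft(spectrum, n)

    # local maxima of the smoothed, periodic profile
    peaks = [i for i in range(n)
             if smooth[i] > smooth[i - 1] and smooth[i] > smooth[(i + 1) % n]]
    angles = np.array(peaks) * 2 * np.pi / n - np.pi

    # pick the peak with the smallest wrapped angular distance to the heading
    diff = np.angle(np.exp(1j * (angles - current_heading)))
    return angles[np.argmin(np.abs(diff))]
```

Selecting the peak nearest the current heading keeps the robot committed to one end of the corridor instead of oscillating between the two hot regions.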
This simplistic approach had to be aided by another vision-based mechanism for avoiding obstacles. I refer to them as visual sonars, but no range sensors and no GPS are used in the following video, only vision:
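One common way to implement a visual sonar, offered here only as an assumed sketch (the original system's details may differ): scan image columns upward from the bottom row, counting pixels that match a simple floor-color model; the count is a proxy for free distance along that ray, much like a sonar reading. The color model, tolerance, and ray count below are all hypothetical parameters:

```python
import numpy as np

def visual_sonar(image: np.ndarray, floor_color: np.ndarray,
                 tol: float = 30.0, n_rays: int = 16) -> np.ndarray:
    """Approximate free-space 'range' readings from a single camera image.

    For each of n_rays columns, scan upward from the bottom row until a
    pixel stops matching the floor-color model; the number of floor pixels
    traversed approximates the free distance along that ray.

    image: HxWx3 uint8; floor_color: length-3 reference color (e.g. sampled
    just in front of the robot); tol: per-pixel color distance threshold.
    """
    h, w, _ = image.shape
    cols = np.linspace(0, w - 1, n_rays).astype(int)
    ranges = np.zeros(n_rays, dtype=int)
    for k, c in enumerate(cols):
        for r in range(h - 1, -1, -1):     # bottom row is closest to the robot
            if np.linalg.norm(image[r, c].astype(float) - floor_color) > tol:
                break                      # first non-floor pixel: obstacle
            ranges[k] = h - r              # floor pixels seen so far
    return ranges
```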
[Escolano, Suau, Bonev 2009] F. Escolano, P. Suau, B. Bonev. "Information Theory in Computer Vision and Pattern Recognition". Springer, 2009 (hardcover).
[Bonev, Cazorla, Escolano 2007] B. Bonev, M. A. Cazorla, F. Escolano. "Robot Navigation Behaviors Based on Omnidirectional Vision and Information Theory". Journal of Physical Agents, September 2007.