Robots operate in an environment that is hard to describe in any detail. Physical stuff does not come with clean, unambiguous addresses or attributes (MIT property office tags aside). Compared with their computational analogues, physical objects seem disturbingly ineffable.
Robotics offers some sensible approaches for sidestepping this obstacle. Sometimes we can arrange for the world to look after the details for us, rather than attempting to describe them explicitly. We may use simple deictic descriptions that suffice for a given task, or rely on cognizant failure to switch between multiple fallible representation schemes.
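To make cognizant failure concrete, here is a minimal Python sketch; the perception schemes and their failure conditions are hypothetical stand-ins, not an existing system. Each scheme reports when its own assumptions break down, so control can fall back to the next fallible representation:

    class SchemeFailure(Exception):
        """Raised by a scheme that notices its own assumptions are violated."""

    def locate_with_blob_tracker(image):
        # Hypothetical scheme: fast, but fails under poor segmentation.
        raise SchemeFailure("blob merged with background")

    def locate_with_template_match(image):
        # Hypothetical fallback: slower, with different failure modes.
        return (120, 64)  # e.g. pixel coordinates of the target

    def locate(image):
        # Cognizant failure: each scheme announces its own breakdown,
        # and we switch representations rather than trusting bad output.
        for scheme in (locate_with_blob_tracker, locate_with_template_match):
            try:
                return scheme(image)
            except SchemeFailure:
                continue
        raise SchemeFailure("all schemes failed")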
But the problem recurs with a vengeance when we consider how a human and a robot might communicate in such an environment -- for example, to supply the robot with a task that changes from day to day. I propose that algorithms developed for machine learning can be co-opted for quickly communicating about physical stuff that would otherwise have no reliable description. In particular, I propose borrowing from conventional programming to describe the control flow of a task; using that segmentation to set up tightly focused supervised learning problems for the perceptual judgements the task requires; and subjecting that learning, in turn, to deictic constraints that operate as feature selection. More generally, I argue that since communication and learning lie on a continuum, they are best approached in tandem.
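To suggest how these pieces might fit together, here is a hedged sketch in Python; the scene and robot interfaces and the object attributes are assumptions made for illustration, not an implemented system. The control flow of the task is ordinary code; the one perceptual judgement that changes from day to day is a small supervised learner; and a deictic constraint restricts its features to the currently attended object:

    from sklearn.neighbors import KNeighborsClassifier

    def deictic_features(attended):
        # Deixis as feature selection: describe only the attended object,
        # relative to the agent, rather than the whole scene.
        return [attended.hue, attended.size, attended.bearing]

    class LearnedJudgement:
        """One tightly focused supervised learner per perceptual test."""
        def __init__(self):
            self.model = KNeighborsClassifier(n_neighbors=3)

        def train(self, objects, labels):
            # Labels come from the human teacher, e.g. pointing at examples.
            self.model.fit([deictic_features(o) for o in objects], labels)

        def __call__(self, attended):
            return self.model.predict([deictic_features(attended)])[0]

    is_target = LearnedJudgement()  # e.g. "is this the object you meant?"

    def fetch_task(robot, scene):
        # Conventional control flow segments the task; the learned
        # judgement (trained first via is_target.train) fills the gap.
        for obj in scene.objects:
            if is_target(obj):
                robot.grasp(obj)
                return
        robot.report("no matching object found")

The point of the segmentation is that each classifier then faces a narrow, teacher-labeled discrimination rather than open-ended scene description, which is what makes quick communication plausible.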