In addition to supporting typed input, some of these systems accept speech input using the Web Speech API Specification, currently supported only in the Google Chrome browser. The first time you press the record button, you will need to approve audio capture via a horizontal bar at the top of the page.
The Nutrition system aims to automatically extract food concepts from a user's spoken meal description and display the relevant nutrition information, thereby lowering the user burden compared to existing self-assessment methods. The language understanding component of the system has two phases: semantically labeling the foods and their properties (e.g., brands, quantities, and descriptions), and assigning those attributes to the corresponding food items. We use conditional random field (CRF) models trained on human-annotated Amazon Mechanical Turk (AMT) data for the semantic tagging and food segmentation tasks. The nutrition information is retrieved from the USDA and Semantics3 databases.
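The second phase described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: the label names (Food, Brand, Quantity, Description) and the attachment heuristic (properties attach to the next food item mentioned) are assumptions for the example; the real system's CRF output format and association rules may differ.

```python
# Illustrative sketch: attach property segments, as labeled by a
# CRF-style semantic tagger, to the food items they describe.
# Heuristic (an assumption for this sketch): a property attaches to
# the next food mentioned; leftover properties go to the last food.

def attach_attributes(tokens, labels):
    """Group labeled tokens into food items with attached properties."""
    foods = []    # each entry: {"food": ..., "properties": [...]}
    pending = []  # properties seen before their food item
    for token, label in zip(tokens, labels):
        if label == "Food":
            foods.append({"food": token, "properties": pending})
            pending = []
        elif label in ("Brand", "Quantity", "Description"):
            pending.append((label, token))
        # tokens labeled "Other" carry no nutrition content
    if pending and foods:
        foods[-1]["properties"].extend(pending)
    return foods

# Example meal description: "a bowl of Kellogg's cereal and two bananas"
tokens = ["a", "bowl", "of", "Kellogg's", "cereal", "and", "two", "bananas"]
labels = ["Other", "Quantity", "Other", "Brand", "Food",
          "Other", "Quantity", "Food"]
items = attach_attributes(tokens, labels)
# items[0] -> cereal with quantity "bowl" and brand "Kellogg's"
# items[1] -> bananas with quantity "two"
```

In practice the labels would come from the trained CRF tagger rather than being supplied by hand, and the resulting food items would then be looked up in the nutrition databases.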