Virtual assistants are the new interface to the Web.
We want them to have these properties:
- Any company should be able to participate and innovate, without privileging existing monopolies.
- Users should be able to define their own tasks, without learning complex programming languages.
- Users should control who has access to their data, for what purpose, and with whom it is shared.
- New services and devices should be easy to add, by anyone, supporting technological innovation.
Notifications are hard! Every website, every social network, and every app is constantly competing for our attention. No more: with Almond, you decide what you care about.
Commands in Almond can be monitored and filtered: you can ask to be notified whenever the result changes, when a certain condition is true, or only for a certain subset of the data. Conditions can use any result returned by a command, such as the title of an article or the body of a new email, and you can even run a second command and check whether its result satisfies the condition. Just state your conditions in English, and let Almond notify you.
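In Almond, this monitoring is expressed in natural language and compiled to ThingTalk behind the scenes. As a rough mental model only, the monitor-filter-notify pattern described above behaves like the following sketch (the function names `monitor`, `poll`, and `notify` are illustrative, not real Almond APIs):

```python
# Illustrative sketch of the monitor -> filter -> notify pattern.
# Not Almond's real implementation: real commands are written in
# English and compiled to ThingTalk.

def monitor(poll, condition, notify):
    """Notify only when the polled result has changed and the condition holds."""
    last = None
    for result in poll():
        if result != last and condition(result):
            notify(result)
        last = result

# Example: notify when a (fake) article title mentions "bitcoin".
articles = iter(["Markets rally", "Bitcoin hits new high", "Bitcoin hits new high"])
seen = []
monitor(lambda: articles,
        lambda title: "bitcoin" in title.lower(),
        seen.append)
# seen == ["Bitcoin hits new high"]  (the unchanged repeat is suppressed)
```

The key design point mirrored here is that a monitored command fires on *changes* that pass the filter, not on every poll, which is what keeps notifications under your control.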
Almond is the first virtual assistant that allows you to specify commands that combine two or more services at once. You can specify when to execute the command, what data to get, and what to do, and each part can be any of the primitives supported by Almond.
You can use compound commands for:
- "when I leave home, turn off the heating"
- "when I post to Twitter, copy the post to Facebook"
- "get the Bitcoin price and then send it to my colleague on Slack"
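The when/get/do structure behind these compound commands can be pictured as a small pipeline: a trigger produces events, a query fetches data, and an action consumes it. The sketch below is a hypothetical model of that composition, not Almond's actual runtime, and the Twitter/Facebook stand-ins are illustrative:

```python
# Hypothetical model of Almond's three-part compound commands:
# each part (when / get / do) can be any supported primitive.

def run_compound(when, get, do):
    """For every event from `when`, fetch data with `get`, then act with `do`."""
    for event in when():
        do(get(event))

# Example: "when I post to Twitter, copy the post to Facebook."
posts = lambda: ["hello world"]   # stands in for the Twitter trigger
identity = lambda post: post      # no extra query step needed here
copied = []                       # stands in for the Facebook action
run_compound(posts, identity, copied.append)
# copied == ["hello world"]
```

Because each slot accepts any primitive, the same skeleton covers all three examples above: swap the trigger, query, or action and the pipeline still composes.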
If you have used IFTTT, you'll love Almond.
Almond provides a uniform interface to your physical devices, your social network accounts, and many other services. With Almond, you can access anything on the Internet from your assistant.
Almond's capabilities are defined in Thingpedia, a crowdsourced repository of commands and interfaces to online services and the Internet of Things. Anyone can contribute new entries to Thingpedia, and with a small amount of training data, Almond can immediately interact with the new device or service.
Almond uses a state-of-the-art natural language understanding model. Its deep learning model understands more complex commands across more domains than any other assistant on the market: just train Almond with pairs of sentences and programs, and it will learn.
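To make "pairs of sentences and programs" concrete, a training example pairs an English utterance with a ThingTalk-style program. The programs below are illustrative sketches in the spirit of ThingTalk; the exact function signatures are defined by the corresponding Thingpedia entries:

```python
# A sketch of sentence/program training pairs. The ThingTalk-style
# programs are illustrative; real Thingpedia entries define the
# actual function names and parameters.

training_pairs = [
    ("get the latest xkcd",
     "now => @com.xkcd.get_comic() => notify;"),
    ("tweet hello world",
     'now => @com.twitter.post(status="hello world");'),
]

for sentence, program in training_pairs:
    print(f"{sentence!r} -> {program}")
```

Given enough such pairs, the semantic parser learns to map new, unseen sentences onto the corresponding programs.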
We collaborate with professors from the Stanford Natural Language Processing group, one of the world's leading hubs for NLP research, to continuously improve Almond. In our experiments, given sufficient training data, Almond understands user input with 88% accuracy, a marked improvement over the previous best known result of 64%.
As academics, we publish our research as open source, and all of our technology is freely available to the public. Anyone can use our algorithms in their products or in their own research. Learn more about our research and how you can use Almond's technology.
How do you teach your brand new virtual assistant to understand language? How do you represent the user's input? How do you acquire training data cheaply, before you have users at all? To answer these questions, we present our latest paper, "Genie: A Generator of Natural Language Semantic Parsers for Virtual Assistant Commands" (PLDI 2019).
It's midterm season, but the Almond team won't be distracted. We're proud to present a new round of updates, including our new community forum, opening up developer access to Thingpedia broadly, and a preview of our latest, state-of-the-art natural language understanding technology.
Prof. Monica Lam was invited to give a keynote at HiPEAC 2019, one of the premier international academic conferences for computer systems and computer architectures.
Here is a sample of what Almond can do, along with a few commands that our users and developers find interesting. It is not an exhaustive list! Commands can be combined in arbitrary ways, creating endless possibilities for your assistant.