Google AI experiment has you talking to books

10:55 15 April 2018. Source: Engadget

Google Lens can identify dog and cat breeds

Now that the Android-first Google Lens has finally rolled out to iOS devices, nearly all mobile users can appreciate its newest feature: identifying pet breeds. You can also use Lens to comb through your photos by breed, species (including animals that aren't cats or dogs) or emoji, which could be helpful if you want to find the latest pic of your sibling's pet but don't want to sift through your entire camera roll. The announcement came with a few reminders of what Google Lens could already do, like make photo books or videos of your favorite weird animals.

With that in mind, check out this little in-browser experiment from Google named Teachable Machine. Teachable Machine lets you use your webcam to train an extremely basic AI program. I taught it to recognize my houseplants and respond with relevant GIFs, but others have used it to make projects of their own.

Google AI Experiment #4: A.I. Duet: Or maybe you're more of a musician and want to experiment with piano chords; even if you are, there's an app for you.

Google Research is giving us a (fun) glimpse of how far natural language processing in artificial intelligence has come. Mountain View's research division has rolled out a pair of what it calls Semantic Experiences: websites with interesting activities that demonstrate AI's ability to understand how we speak. One of the two experiences is called "Talk to Books" because, well, you can use the website to talk to books, to a certain extent. You simply type in a statement or a question, and it finds whole sentences in books related to what you typed.

In the announcement post, notable futurist and Google Research Director of Engineering Ray Kurzweil and Product Manager Rachel Bernstein said the system doesn't depend on keyword matching. They trained its AI by feeding it a billion conversation-like pairs of sentences, so it could learn to identify what a good response looks like. Talk to Books can help you find titles that simple keyword searches might not surface. For instance, when I searched "He says he's the greatest detective who ever lived," one of the results highlighted a sentence that doesn't contain any of my query's keywords, because the AI associated the word "detective" with "investigator."
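Keyword-free matching like this rests on embeddings: sentences are mapped to vectors, and candidate responses are ranked by vector similarity rather than by shared words. Here is a minimal sketch of the idea with made-up 4-dimensional vectors; every number below is hypothetical, and real systems learn embeddings with hundreds of dimensions from training data like the sentence pairs mentioned above.

```python
import numpy as np

# Toy "sentence embeddings" (all values hypothetical; real models
# learn vectors with hundreds of dimensions from large corpora).
embeddings = {
    "The investigator solved every case":  np.array([0.8, 0.2, 0.1, 0.3]),
    "The recipe calls for two eggs":       np.array([0.0, 0.9, 0.7, 0.1]),
    "The rocket reached orbit yesterday":  np.array([0.1, 0.2, 0.9, 0.6]),
}

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean 'points the same way'."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query_vec, corpus):
    """Return the corpus sentence whose embedding is closest to the query."""
    return max(corpus, key=lambda s: cosine(query_vec, corpus[s]))

# Pretend this is the embedding of "He says he's the greatest detective
# who ever lived" -- no word overlap with the winning sentence below,
# only vector proximity.
query = np.array([0.9, 0.1, 0.0, 0.2])
print(best_match(query, embeddings))  # The investigator solved every case
```

The "detective"/"investigator" match in the article works the same way: the two words end up with nearby vectors, so sentences about one rank highly for queries about the other.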

Google Assistant finally works on Pixel C tablets

Google Assistant has been available on Pixel phones from the get-go and has spread to virtually every device that's even vaguely capable of handling it, but there has been one glaring exception: the Pixel C. Google confirmed to Engadget that the deployment started today.

Google is trying to create artificial intelligence (AI) capable of making art, and it has now taught it to play the piano. There's a video of Yotam Mann talking about the experiment, and you can play with it online.

I'm talking about Google's AI research. Obviously, the experiment in and of itself doesn't have any real-life uses. However, the technology behind it does show how good machines have become at recognizing the objects they see.

Google Research's other new website, called Semantris, offers word-association games, including a Tetris-like break-the-blocks experience. The two games can recognize both opposite and neighboring concepts, even sounds like "vroom" for a motorcycle or "meow" for a cat.


Advances in word vectors, an AI-training technique that lets algorithms learn relationships between words from actual language usage, have driven the progress in natural language processing over the past few years. According to Kurzweil and Bernstein, these websites show how AI's "new capabilities can drive applications that weren't possible before." They said other potential applications include "classification, semantic similarity, semantic clustering, whitelist applications (selecting the right response from many alternatives) and semantic search (of which Talk to Books is an example)." Google has also released a module for TensorFlow that other researchers and developers can use, so the tech giant's work could lead to more AI-powered applications that understand how we wield words better than their older counterparts do.
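What "learning relationships between words" means in practice is that related words end up with nearby vectors, so relationships can be probed with simple vector arithmetic. The sketch below uses hand-picked 3-dimensional vectors purely for illustration; real models such as word2vec or GloVe learn hundreds of dimensions from large text corpora, and none of these numbers come from an actual model.

```python
import numpy as np

# Hypothetical 3-dimensional word vectors, hand-picked for illustration.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.0, 0.5, 0.0]),
}

def nearest(target, vocab, exclude=()):
    """Return the word whose vector has the highest cosine similarity
    with `target`, skipping any words listed in `exclude`."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    candidates = {w: v for w, v in vocab.items() if w not in exclude}
    return max(candidates, key=lambda w: cos(target, candidates[w]))

# The classic relationship check: king - man + woman lands near "queen".
result = nearest(vectors["king"] - vectors["man"] + vectors["woman"],
                 vectors, exclude=("king", "man", "woman"))
print(result)  # queen
```

The same geometry underlies Semantris's association games: "vroom" scores against "motorcycle" because their learned vectors sit close together, even though the strings share nothing.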

Google is opening up VR180 to hardware makers and developers.
For Google's VR180 format to become successful, manufacturers and developers have to be on board, creating devices and churning out videos and apps that use it. Here's Google's summary of the info it released to the public: "For VR180 video, we simply extended the Spherical Video Metadata V2 standard. Spherical V2 supports the mesh-based projection needed to allow consumer cameras to output raw fisheye footage. We then created the Camera Motion Metadata Track so that you're able to stabilize the video according to the camera motion after video capture. This results in a more comfortable VR experience for viewers."
