
Google Whisk is a new way to create AI visuals using image prompts – here's how to try it

  • Google Whisk uses images as inputs instead of text-based prompts
  • It's built on Google’s Imagen 3 generative AI model
  • The experimental tool is free to try for users in the US

Google’s new AI tool makes it easier to create and remix your visual concepts. Instead of asking you to describe what’s in your mind’s eye, Whisk lets you input three image prompts: one for subject, one for scene and one for style. Whisk takes care of the rest, making it a more intuitive way to experiment with different ideas.

While most of the best AI image generators require you to write a detailed prompt, Whisk handles that behind the scenes. When you drop pictures into the web-based Whisk interface as inspiration, Google’s Gemini model automatically analyzes them and writes a detailed caption for each. These captions are then fed into the Imagen 3 model to create a matching image.

For example, you could drop in an image of a car as the subject and a photo of a rural landscape for the scene. You could then add a watercolor painting as the style to see what Whisk creates. Hit the button and you’ll get a pair of images based on your inputs.
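Under the hood, that flow amounts to "caption each input image, then combine the captions into a single generation prompt." The Python sketch below is only an illustration of that idea using the car, landscape and watercolor example above; the function names, placeholder captions and prompt format are stand-ins, not Google's actual Whisk, Gemini or Imagen 3 APIs.

from dataclasses import dataclass

@dataclass
class ImagePrompt:
    role: str   # "subject", "scene", or "style"
    path: str   # local path to the source image

def caption_image(prompt: ImagePrompt) -> str:
    # Stand-in for the Gemini step that writes a detailed caption for each
    # source image. A real implementation would call a vision-capable model.
    placeholder_captions = {
        "subject": "a vintage car with rounded fenders and chrome trim",
        "scene": "a rural landscape with rolling fields at golden hour",
        "style": "loose watercolor washes with soft, bleeding edges",
    }
    return placeholder_captions[prompt.role]

def build_text_prompt(captions: dict) -> str:
    # Combine the per-image captions into one text prompt for the
    # image generator (Imagen 3 in Whisk's case).
    return (
        f"Subject: {captions['subject']}. "
        f"Scene: {captions['scene']}. "
        f"Style: {captions['style']}."
    )

def generate_images(text_prompt: str, n: int = 2) -> list:
    # Stand-in for the Imagen 3 call; Whisk shows results in pairs.
    return [f"image_{i}.png <- generated from: {text_prompt}" for i in range(n)]

if __name__ == "__main__":
    inputs = [
        ImagePrompt("subject", "car.jpg"),
        ImagePrompt("scene", "rural_landscape.jpg"),
        ImagePrompt("style", "watercolor_sample.jpg"),
    ]
    captions = {p.role: caption_image(p) for p in inputs}
    prompt = build_text_prompt(captions)
    for result in generate_images(prompt):
        print(result)

In the real tool, that intermediate captioning step is what you can inspect and edit when you choose to reveal the underlying prompt, as described below.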

From here, it’s easy to remix the images. The interface allows you to specify additional text-based details to tweak the outcomes. You can also easily drop in different source images or roll the dice if you’re in need of inspiration. New results appear in pairs in the feed, making it an intuitive way to ideate. You can also choose to refine images by revealing the text prompt and adding more details.

Whisk it up

While Whisk is designed to eliminate the need for text-based prompts, Google includes the option to refine the written prompts because results won’t always match up to the source material.

In a blog post about the experimental tool, Google explains that Whisk “captures your subject’s essence, not an exact replica.” It’s only as effective as Gemini’s analysis of the images you submit. While this is generally very impressive, it also isn’t able to get inside your mind: you might expect Whisk to pull out one detail from an image, only for it to focus on another.

The post explains further: “Since Whisk extracts only a few key characteristics from your image, it might generate images that differ from your expectations. For example, the generated subject might have a different height, weight, hairstyle or skin tone. We understand these features may be crucial for your project and Whisk may miss the mark, so we let you view and edit the underlying prompts at any time.”

Even with these shortcomings, Whisk is an interesting application of Google’s existing AI tools. The underlying generative models are the same as if you were chatting with Gemini via its text interface. By relying on image inputs, though, Whisk is a more accessible and intuitive way for visual creators to play with their ideas.

Based on early feedback from digital creatives, Google refers to Whisk as “a new type of creative tool” which is intended for “rapid visual exploration, not pixel-perfect edits.”

How to try Google Whisk

Google Whisk is currently only available to users in the US. If you’re based there, you can try it out via your web browser at labs.google/whisk.

The experimental tool is completely free to play with. Data from your experience with Whisk will be fed back to Google to help refine and develop future AI products.
