Adobe hosted its annual Sneaks event at MAX 2019 tonight, co-hosted by writer and comedian John Mulaney. Sneaks offers a preview of new features Adobe is developing for its Creative Cloud apps, as well as experimental technologies that may or may not make it to production.
Using machine learning technology, a subject in a photo can be automatically identified and extracted from the scene. Project All In takes a second photo of the same scene, this one missing the extracted subject, and blends the subject seamlessly into the environment. The feature is designed for group photos where the photographer is behind the camera but would still like to be part of the shot.
Unwanted, repetitive sounds in an audio clip are automatically identified by selecting a single instance of the sound. With one click, every matching instance is removed. Project Sound Seek could be used to remove each occurrence of a speaker saying “uhh,” or even environmental sounds like a car horn.
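The general idea behind finding repeats of a selected sound can be sketched with template matching. This is a toy illustration only, not Adobe's actual method: the selected instance is slid across the clip, and spans that correlate strongly with it are muted. All names and thresholds here are hypothetical.

```python
# Toy sketch (not Adobe's method): find repeats of a selected sound
# via normalized cross-correlation over a 1-D signal, then mute them.

def norm_corr(a, b):
    # Pearson-style similarity between two equal-length windows.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def remove_matches(signal, template, threshold=0.95):
    # Slide the template across the signal; silence each strong match.
    out = list(signal)
    w = len(template)
    i = 0
    while i <= len(out) - w:
        if norm_corr(out[i:i + w], template) >= threshold:
            out[i:i + w] = [0.0] * w   # mute the matched span
            i += w
        else:
            i += 1
    return out

# A repeated "uhh" (the template) embedded in other audio samples:
uhh = [0.9, -0.9, 0.9, -0.9]
clip = [0.1, 0.2] + uhh + [0.05, 0.0, 0.1] + uhh + [0.2]
cleaned = remove_matches(clip, uhh)
```

Real audio would of course work on spectral features rather than raw samples, but the select-one, remove-all workflow maps onto the same pattern.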
Animating a character’s face to a voiceover track is currently a tedious process that requires a complex 3D mesh. Project Sweet Talk takes any image file, including illustrations, photos, and paintings, and automatically identifies the subject. The character’s facial expressions are mapped to the audio and the image is animated without any additional work from the user.
Based on Adobe Aero technology, augmented reality design mockups can be created by simply sketching on an iPad. The tool allows artists to record a video, insert an AR placeholder device into the 2D video, and track 3D planes. Graphics and freeform illustrations can be mapped to the AR placeholders.
With just a reference photograph, Adobe Sensei can apply textures to a simple sketch with one click, creating completely new photos. Image Tango can also be used to rapidly test multiple textures on a single object. In one example, stock images of various purses were used as reference images to re-texture an existing purse.
Fantastic Fonts turns typography into a series of infinitely customizable parameters. Simple adjustments like font weight can be manipulated, but the tool goes much further. Using an iPhone or iPad’s accelerometer, fonts can be animated and distorted with various motion effects. The tool enables customizations on live fonts previously only possible by outlining text.
Sensei-based body tracking transfers a character’s motion in a video clip to an animation. The body tracker identifies the arms, legs, and torso of up to 18 characters in a scene. Keyframes are generated and automatically applied to a rigged character. The effect is similar to motion capture, but works without any additional hardware.
Using Photoshop, a photo can be relit to change the mood of the scene. Algorithms identify the geometry of the photo and allow the sun to be repositioned. Depending on where the sun is placed, highlights and shadows are adjusted, and the color temperature of the photo changes. Using the same technology, even a video file can be relit.
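Once the geometry of a scene is known, repositioning the sun amounts to re-shading each surface against a new light direction. The sketch below is a toy illustration of that underlying idea using simple Lambertian (diffuse) shading, not Adobe's algorithm; the names and the two-pixel "image" are hypothetical.

```python
# Toy sketch (not Adobe's relighting): with per-pixel surface normals
# recovered, a "sun" can be moved and each pixel re-shaded.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = dot(v, v) ** 0.5
    return [x / n for x in v]

def relight(albedo, normals, sun_dir):
    # Diffuse (Lambert) term: brightness = albedo * max(0, N . L)
    L = normalize(sun_dir)
    return [a * max(0.0, dot(n, L)) for a, n in zip(albedo, normals)]

albedo = [1.0, 1.0]                # per-pixel base color intensity
normals = [[0.0, 0.0, 1.0],        # a surface facing the camera
           [1.0, 0.0, 0.0]]        # a surface facing to the right
noon = relight(albedo, normals, [0.0, 0.0, 1.0])    # sun overhead
sunset = relight(albedo, normals, [1.0, 0.0, 0.0])  # sun at the side
```

Moving the light flips which surface is lit, which is the effect the demo showed at photo scale, with color temperature shifts layered on top.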
Troublesome audio with distracting background noise or a muffled voice is corrected using Adobe Sensei technology. Cleaning up a rough audio file is a process that normally takes hours in a tool like Adobe Audition. Voiceover segments recorded in multiple locations can be analyzed, unifying the background ambience across clips.
Built inside of Illustrator, Project Glowstick turns lines and shapes into sources of light and shadow in an image. Light sources and shadows are defined with one click, and influence the coloring of other objects and shapes in an image.
Project About Face analyzes the pixels in an image to determine whether the photo has been digitally manipulated. The tool displays the probability that a photo was tampered with, and reveals a heat map of questionable pixels.