The source (Adobe MAX) demos a full range of incredible scenarios.
I find that Adobe is really pulling away from open source software with all this AI stuff. A few years ago it could be argued that GIMP, Inkscape, and Darktable could do almost everything that Photoshop, Illustrator, and Lightroom could, albeit with a jankier user interface.
But now none of the open source software can compete with AI generative fill, AI denoising, and now AI rotation.
Ok, that's VERY impressive. Now give me the possibility of exporting it as an .stl to 3D print and then we'll be talking. Just imagine drawing something in 2D and being able to print it as a fully 3D object; it gives me chills just thinking about it.
If you right-click on the video and select "Show controls", you'll not only be able to seek, you'll also be able to unmute it.
I don't know why it was embedded with the controls hidden.
This is the true power of generative AI, enabling new functionality for the user with simple UX while doing all the heavy lifting in the background. Prompting as a UX should be abstracted away from the user.
As someone who currently works in GenAI and analytics but paid their way through college doing design (for print media) and still keeps around old copies of Illustrator and Fireworks (running under Wine) as well as using Affinity Suite, this is STUPEFYINGLY more impressive than any LLM.
Still not enough to make me pay for Adobe Creative Suite (I just dabble these days), but the target demographic will be all over it.
I spent so many hours as a kid trying to do rotations with a pirated copy of Flash, and I never really got the hang of it. It always bothered me how deceptively hard rotation was; when I showed my parents my work, they did their best to act excited, but I could tell they weren't really impressed with the effort, because rotation doesn't seem that hard, at least to a lot of people.
This makes me irrationally happy.
Probably not quite the same kind of tech, but this kinda reminds me of the "3D" pixel art sprite editor thing in Smack Studio.
It looks cool and convenient for people like designers and other non-technical content creators. One natural follow-up would be: can we find many other similar operations that creative people use every day and tackle them under a unified framework?
Incredible, but a shame you'll have to use Adobe to get it.
I've seen a lot of cool shit from Adobe, but it's mostly rehashed stuff that's been cleaned up from public workflows we've seen done in ComfyUI and other Flux/Stable Diffusion based expansion workflows... like the IC-Light style relighting they demoed...
But this... this is really fuckin cool
There are actually multiple open source ML models for 2D-to-3D, which is clearly what they're doing. The difference from most of them is that this works on vectors.
There might actually be a similar open source model already.
But I think to create it you would build it from a database of 3D assets that you can render from many angles, probably quite similar to the way the existing 2D-to-3D models work. Maybe the typical 2D-to-3D models will even work out of the box, or with some kind of smoothing or parameterization. If you have a large database of parameterized 3D models and render them in 2D from different angles, then you can basically train on that data using the existing 2D-to-3D approach.
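The multi-view rendering idea above can be sketched in a few lines. This is a toy illustration, not Adobe's actual pipeline: the "asset" is just a hypothetical list of 3D points, and a real dataset would render full meshes from a large asset library. It shows how one parameterized 3D model yields many (viewing angle, 2D projection) training pairs.

```python
import math

def rotate_y(point, angle):
    """Rotate a 3D point around the Y axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project_2d(points, angle):
    """Orthographic projection onto the XY plane after a Y-axis rotation."""
    return [(rx, ry) for rx, ry, _ in (rotate_y(p, angle) for p in points)]

def make_training_pairs(asset, num_views=8):
    """One (angle, 2D view) pair per viewpoint for a single 3D asset."""
    return [
        (a, project_2d(asset, a))
        for a in (2 * math.pi * i / num_views for i in range(num_views))
    ]

# Toy asset: the eight corners of a cube centered at the origin.
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
pairs = make_training_pairs(cube, num_views=8)
```

Each pair is one supervised example: given the 2D view (and optionally the angle), the model learns to recover the underlying 3D shape.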
Good idea, but such a frustrating company to do business with as a consumer.
> Adobe's Brian Domingo told Creative Bloq that like other Adobe Innovation projects, there's still no guarantee that this feature will be released commercially.
Well, I confess I got a little confused here :/ . What's the purpose of such an innovative feature if it's never commercialized?!
This looks very cool. I really hope the results are not overly cherry-picked like Adobe's first version of the text-to-vector generation that only worked particularly well for the showcased art styles.
This captures the essence of what "modern AI" is great at! Relieving the tedium of a highly constrained task.
Great demo. This will really help animators and artists.
Looks like Adobe finally found a way to cut down on piracy.
None of these new AI features will work on a pirated copy because it's all server-side processing.
On mobile, I found the Project Turntable page on Adobe's site (with embedded video) more interesting than the linked CreativeBloq article:
https://www.adobe.com/max/2024/sessions/project-turntable-gs...
Well, when is some big bad company going to bully us into using their tools to convert 3D sculpts into flawlessly animatable models? I'd submit to their abuse and surrender my lunch money. Though not if it's Adobe; I still have some self-love.
Makes me think of
https://lookingglassfactory.com/looking-glass-go-spatial-pho...
which needs multiple views of your image from different angles and tries to fill in the rest with AI.
I thought this was one of those sarcastic headlines, highlighting the overuse of AI for basic processes.
Preserving the vector art after transforming is really cool. Anyone know the relevant papers, or was this original research done by Adobe?
Came here assuming they were using AI for "rotate 90°" ready to drop a rant, but this was actually impressive.
As someone who otherwise hates genAI, I must admit, this is actually a very cool demo and a very sensible application of AI.
How very strange: my partner was mocking up a room for our home just a few hours ago, and I asked whether an AI tool existed to rotate the incorrect angle of a sofa in a photo being used within the mock-up. And here it is on Hacker News just an hour later, exactly that tool.
Edit: apparently I misunderstood; it only works with vectors. Getting close, though, to the reality mentioned!
It took me a while to understand that the second picture is actually a muted video with hidden controls.
Amazing. This will give ancient GIFs a facelift.
Haven't been in the loop for a while, so stupid question: why do people hate Adobe?
I want the actual 3D models.
This looks like the perfect tech for a cel shaded game!
Better link with working video:
https://www.adobe.com/max/2024/sessions/project-turntable-gs...
NeRF or gaussian splatting?
Pretty incredible
SIGGRAPH from over a decade ago has entered the chat...
https://www.youtube.com/watch?v=Oie1ZXWceqM
It may not be AI, but this single video blew my mind back in *2013* and I find myself thinking about it often.
I'm pretty tired seeing AI slapped on everything but holy shit this is impressive.
Is there another source? None of the images loaded for me.
I am sure this is the right time for hobbyists to make their own movies and animations.
I personally started programming, in part, to make simple animations like the ones you see in Scratch, and it’s incredible how accessible the tools are today for anyone looking to bring their ideas to life.
One thing is you can't be lazy when drawing the initial vector. Take a car, for example: you can't just draw it from the top and expect it to generate a side view after rotating. You'd need to draw maybe an isometric version first.
People have been using 3D models for 2D graphics for at least a decade. 3D models rotate, by default.
This demo shows generating a 3D model from a simple 2D shape. It'll fall flat on its face trying to model anything non-trivial, which raises the question: who cares?
Also, you'll want to animate the 3D model - which this doesn't do, so you'll soon be back to your usual 3D toolkit anyway.
I'm making some big assumptions about Adobe's product ideation process, but: This seems like the "right" way to approach developing AI products: Find a user need that can't easily be solved with traditional methods and algorithms, decide that AI is appropriate for that thing, and then build an AI system to solve it.
Rather than what many BigTech companies are currently doing: "Wall Street says we need to 'Use AI Somehow'. Let's invest in AI and Find Things To Do with AI. Later, we'll worry about somehow matching these things with user needs."