The whole concept of AI has always been quite fascinating to me. Not so much the end-of-the-world Terminator stuff, but how AI relates to imagery. That a computer can generate somewhat realistic-looking images with very little input from the user is just mind-boggling. One of the companies at the forefront of AI imaging tech is NVIDIA. So, when NVIDIA reached out to ask if I’d like to test out the recently released NVIDIA Canvas and some of the AI-powered features in Photoshop and DaVinci Resolve on one of the also recently launched RTX Studio laptops, I jumped at the opportunity.

While NVIDIA has shown off several very cool AI image-generation technologies over the last couple of years, NVIDIA Canvas has been particularly intriguing to me. I suppose it makes sense, though. Being mostly on lockdown for the last 18 months has meant that I haven’t been out to explore and photograph the glorious Scottish landscape as much as I would like, so why not see if we can create something that resembles it here at home?

The laptop I’m using is the latest Acer ConceptD 5, which contains an 8-core (16-thread) 11th Gen Intel Core i7-11800H, 16GB of RAM, a 512GB PCIe NVMe SSD and an NVIDIA RTX 3060 Mobile GPU with 6GB of GDDR6 memory. I’ll have a more in-depth review of the laptop coming soon, but for now, let’s take a look at Canvas.

What is Canvas?

Canvas is essentially a software tool that lets you draw out a scene in rough shapes using various colours. These coloured shapes are then converted into various landscape elements, like hills, trees, water, etc. It can even simulate different times of day and lighting styles with just the click of a button. And you don’t have to be artistic in the least – which is fortunate for me. The basic interface of the software looks a little something like this. A minimal row of buttons across the top lets you create a new file, open or save a Canvas document, or export a PSD file. Yes, that’s right, a PSD file. You’ll find out why in a minute.

Down the left-hand side of the interface, we see several buttons that let us alter our source canvas. We can draw with a brush or with straight lines (handy for horizons), erase, change the size of the brush and various other things.

On the right, we have our different landscape element types – clouds, trees, grass, sand, hills, mountains, rivers, seas, etc. Below these we have layers. Yup, actual layers. This is why you have the option to export as a PSD. You can bring them into Photoshop and access each of them individually. And finally, at the bottom we have the different image styles that simulate different kinds of lighting and times of day.

Some things to note about Canvas: it’s currently in beta, it only runs on NVIDIA RTX GPUs, and the resolution of the images isn’t very high at the moment. Hopefully, that resolution is something NVIDIA will boost in future versions. For now, though, we’re stuck with 512×512 pixels.

How does it work?

Well, the basic premise is quite simple. As shown in the UI screenshot above, you’re confronted with two canvases. The one on the left is your basic shape drawing, defining the zones that the software will use to place various environmental elements such as clouds, water, sand, mountains, etc. On the right is the final image that the software generates. You pick the type of element you want to draw and then just draw it where you want it to be. You can draw on either canvas, although drawing on the right one can be a little laggy. Drawing on the left is pretty much instantaneous, and when you release your mouse (or lift your pen if you’re using a tablet) it generates a new final result on the right. You can change the overall colour scheme and look to different times of day or night using the buttons on the right, below the layers.

My first attempt at drawing a scene wasn’t so great, but it sort of got what I was going for. You can see that I used several layers here, which makes life much easier. The lowest layer is my sky in the background with its clouds, and on top of this is the basic landmass. The two layers above these just add a little more detail to parts of the land and the sky.
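Under the hood, Canvas is powered by NVIDIA’s GauGAN research, which turns a semantic segmentation map – a grid where every pixel is labelled with a material like “sky” or “water” – into a photorealistic image. As a minimal Python sketch of what that left-hand canvas represents as data (the label names and values here are my own illustrative assumptions, not Canvas’s actual internals):

```python
import numpy as np

# Hypothetical label IDs for a GauGAN-style segmentation map.
# Canvas's real internal labels aren't public; these are illustrative.
SKY, SEA, SAND, MOUNTAIN = 0, 1, 2, 3

# A 512x512 map where each pixel stores an element class, not a colour.
label_map = np.full((512, 512), SKY, dtype=np.uint8)
label_map[300:, :] = SEA                 # lower band becomes water
label_map[440:, :] = SAND                # a strip of beach at the bottom
label_map[180:300, 100:260] = MOUNTAIN   # a mountain on the horizon

# A GauGAN-style generator takes a map like this (typically one-hot
# encoded) and synthesises a plausible photo for each labelled region,
# which is why redrawing a zone on the left regenerates the right side.
```

This is also why you don’t need any drawing skill: the map only says where each material goes, and the model invents all of the texture and lighting itself.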

I used this same sort of approach for my other tests. I decided to move it away from the coastline this time, though, to see if we could have a mountain poking up out of the clouds. And, well, it sort of works.

One thing I noticed in that last one is a couple of repeated patterns – particularly in the clouds. It reminds me a little of the Content-Aware Fill feature in Photoshop, particularly in its early days. I think as time passes and the software develops, this kind of thing will become less common.

A third attempt shows just how simple you can make your scene. I could’ve done all this on a single layer, but using multiple layers makes it much easier to fix your mistakes. I was drawing with a Huion Kamvas 12 tablet, which is way easier than using a mouse, but I’d still want to erase and redraw things on the canvas on the left depending on how it looked on the right.

In one, you can see another artifact of the AI – and something that also happens with Content-Aware Fill in Photoshop. At hard edges, there’s a bit of a blend. I expected the mountain halfway up the left side to have a hard edge, like a cliff. Instead, it’s kind of blended into the sky. I don’t mind it in this instance, because it kind of looks like distant mist or rain. But it’s a little unpredictable at times exactly how things will be drawn.

At this point, one thing I hadn’t messed with much yet, but really wanted to try out, was trees. I’d done some bits on the others, just dabbing in a forest or jungle type zone in the distance, but I wanted to try painting some individual trees. Here, again, you can see that edge-blending issue I mentioned, particularly on the left tree.

I felt the beach was appropriate for trees in Canvas because it always seems to draw some kind of beach-type palm trees. Trying to get it to draw something like an oak turned out to be mostly futile, regardless of which scene style I picked. It would invariably either look like a palm or it would think it was a forest way off in the distance.

I decided it was time to see if I could recreate one of the scenes shown in the thumbnails that let you choose the lighting and time of day. I chose the first one because I thought it looked kind of neat and was fairly simple: a sandy foreground with a bit of grass foliage and a couple of flat-topped mountains in the background with a little bit of cloud.

I have to say, it got quite close, but it didn’t reproduce it exactly. One thing I would like to see added to Canvas at some point is a sun brush. Something that would let you just put that big fiery ball somewhere in the sky or let it show the sun peeking out between gaps in the clouds. Overall, it kind of got the elements I was after, although it put way more grass on there than I expected considering how much I drew on the canvas on the left. It certainly doesn’t look as dramatic as the thumbnail, but it’s not bad.

Recreating a real landscape

This is something else that I really wanted to try. The images above were all created from just doodling on various layers on the canvas to see what happened. For this one, I wanted to try to recreate a photo I shot along the River Lune in Lancaster, England, a few years ago.

This turned out to be surprisingly good. No, it’s not an exact match for my original photo – because Canvas has never seen my photo – but it’s pretty close!

The aspect ratio is different, obviously, with my original image being 3:2 and the Canvas recreation being a 1:1 square, but I was actually quite shocked by how good it looked. There are definitely some obvious differences, like the lack of reflection from the sky and trees in much of the water that we see in my original photo, but at first glance, you’d be forgiven for thinking it was just a bad smartphone photo.

I certainly don’t think landscape photographers have anything to worry about just yet when it comes to AI, but if it can eventually let you reproduce a scene from the real world accurately enough, it could be good for pre-visualising how it might look at different times of the day or night and planning future shoots. It could even have potential for portrait shooters, generating backgrounds to composite into portraits shot in the studio.

That low resolution, though…

We’re a ways off that just yet, though. As mentioned, Canvas produces 512×512 pixel images. It exports them as layered PSD files, with each of your individual drawing canvases preserved as a separate layer and the final AI render as the background layer.
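Because the export is just a standard PSD, you can also poke at it programmatically rather than only in Photoshop. Here’s a minimal Python sketch using the psd-tools library; the filenames are hypothetical stand-ins:

```python
from psd_tools import PSDImage  # pip install psd-tools

# Open a Canvas export and list its layers (hypothetical filename).
psd = PSDImage.open("canvas_export.psd")
for layer in psd:
    print(layer.name)  # the drawing layers plus the AI render

# Flatten everything into a single image for use elsewhere.
psd.composite().save("canvas_flat.png")
```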

But what if you want it bigger? How bad does it get when you scale it up to, say, 200%? Well, here’s one of the 512×512 pixel images I created at 100% as rendered by Canvas.

I scaled this up to 200% using two different methods: first with the standard Bicubic Smoother in Photoshop (the “Before” in the slider below), and second with Photoshop’s new Super Resolution (“After”). There isn’t a massive amount of detail in the original 512×512 pixel image, and once they’re scaled up to 1024×1024, the difference between the two methods is quite subtle but definitely noticeable – particularly at the water’s edge. Unfortunately, Photoshop seems to have enhanced the artifacts, too. I guess when AI scaling meets an AI-generated image, it’s not entirely sure how to handle things. I expect that as Canvas improves over time to produce even more photorealistic results, Photoshop will be able to do a much better job of scaling things up when needed.
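If you don’t have Photoshop handy, the plain bicubic half of that comparison is easy to reproduce; here’s a minimal Python sketch using Pillow. The filenames are illustrative, and there’s no drop-in open equivalent of Super Resolution here:

```python
from PIL import Image  # pip install Pillow

# Upscale the 512x512 Canvas render to 200% with bicubic resampling.
src = Image.open("canvas_render.png")  # hypothetical filename
upscaled = src.resize((src.width * 2, src.height * 2), Image.BICUBIC)
upscaled.save("canvas_render_2x.png")  # now 1024x1024
```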

Who’s it for?

This is a tough question. In its current state, NVIDIA Canvas is just a cool toy. It’s a neat proof-of-concept that doesn’t really have any kind of practical use for most people in the real world yet. At least, certainly not for photographers. I do see a couple of specific potential applications, though, where it could be very handy.

For both real-world and digital painters, it lets you very quickly sketch out scenes to get an idea of the overall form and scale of a landscape. The examples above, even trying to recreate one of my photos, didn’t take more than a couple of minutes each using my Huion graphics tablet. For real-world painters, it’s a reference you can look at on-screen or print out to have next to your actual work while you’re creating it. For digital painters, you can bring it right into Photoshop, use it as a background layer acting as a sort of template, and paint your details right on top of it.

It could also help 3D artists and filmmakers. Scaled up and blurred, the images generated by Canvas can easily act as a background for a digital set extension (there’s a rough sketch of that idea below). Even an image at this low a resolution can work for something like that in certain conditions. Sure, you might have to bring it into Photoshop and adjust the colours, contrast and brightness to match whatever else is in your foreground, but it’s certainly doable when you’re trying to simulate a shallow depth of field and the background’s out of focus anyway. Or maybe you’ll scale it up and use it as a template to composite higher-resolution photographs and textures. Either way, the potential is definitely there for digital set extensions.

I’d love to hear what other practical uses for Canvas you can think of. Feel free to add your thoughts down in the comments.
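For what it’s worth, here’s that set-extension idea as a minimal Python sketch using Pillow – upscale the render well past its native resolution, then blur it so it reads as an out-of-focus background plate. The filenames, target size and blur radius are all illustrative assumptions:

```python
from PIL import Image, ImageFilter  # pip install Pillow

# Turn a 512x512 Canvas render into a soft, defocused background plate.
bg = Image.open("canvas_render.png")                 # hypothetical file
bg = bg.resize((1920, 1920), Image.BICUBIC)          # well past native res
bg = bg.filter(ImageFilter.GaussianBlur(radius=8))   # hide upscale artifacts
bg.save("set_extension_plate.png")
```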

Overall thoughts

Right now, as I said, Canvas is pretty much just a toy. But it’s still in beta, and it’s a pretty cool toy nonetheless. It’s also a lot of fun. I think if the resolution can be increased and we get a little more control over the lighting, along with options for various types of environmental elements (like different types of trees or clouds), it could become a very powerful tool, particularly for the filmmaker and CG artist uses I mentioned above. Sure, you can generate those landscapes in CG software, too, but with the RTX 3060 GPU in the Acer ConceptD 5 laptop, Canvas does it much more quickly. Within a second of lifting my pen from the tablet, the canvas on the right had updated with the changes I’d made – even fairly large changes with huge blocks of colour.

Being able to recreate actual landscape photographs to get an idea of how a scene might look at different times of day is also very handy, and it would be very useful for shoot planning. Of course, it couldn’t replace actually scouting a real location at different times of the day, but that’s not always possible.

Hopefully, this software doesn’t just remain a perpetual beta that only ever serves as a proof of concept. I think it could become a very useful tool in the future for a number of different tasks. If you want to find out more about NVIDIA Canvas and maybe take it for a spin yourself, you can download it on the NVIDIA website and there’s a User Guide here. Do note, though, that to run Canvas you’ll need an NVIDIA RTX GPU. It won’t work on GTX GPUs.

Have you tried NVIDIA Canvas? What practical uses do you see for it?