
Photography Concepts for 3D Artists

Camera Series
Version 1.0, Updated May 2024 using practical experience and web research (C4D/Octane for the illustrations).

About This Guide

3D programs (mostly) strive to simulate reality. Because of this, camera systems in 3D apps often attempt to simulate real camera gear and properties so that the resulting images look more like what we’re used to seeing in photos or on film. This also makes the transition easier for artists who are used to working with real-world cameras.

This guide is completely app-agnostic. Many 3D apps and engines use similar terms and settings that we find in real cameras, so having an understanding of what these settings mean and do, and at least roughly knowing the scales for each one will make life a lot easier when we’re trying to achieve a particular look in our renders.

There’s a follow-up Octane Render Camera Guide which builds on the concepts found here and looks at how they’re implemented in Octane (specifically the C4D Plugin, but it’s valid for any DCC).


Intro  
A camera is a light-tight box with a hole in one end and a medium inside that's capable of capturing an image (a piece of film or a digital sensor). The hole (the aperture) is uncovered by a shutter for a particular amount of time, and the light from outside comes in and projects an image onto the medium.

The medium starts out blank. The longer the shutter is open, the more light comes in, and the brighter and more resolved the image gets. This process is called exposure. Eventually it hits a point where the exposure is perfect and the resulting image looks like what we see. If it’s open longer than that it starts to overexpose, or wash out. Less than that and it’s dark and dingy, so it’s important to get that timing right.

Pretty simple, right?

The biggest advancement to the original pinhole camera was the addition of a lens assembly to control the size of the aperture (using a diaphragm), and to distort and focus the incoming light using glass lens elements. This allowed for faster exposures, smaller kit, and lots of practical and artistic capabilities.

The medium still had to be exposed to light for a particular amount of time, so different formulas, sizes, and scales were set up to allow just enough light in to properly expose the image. Some really clever people pieced together a brilliant system where the time the shutter is open, the size of the aperture, and the sensitivity of the film could all be adjusted in pre-set stops to ensure proper exposure.

These three settings - aperture, shutter speed, and film sensitivity (ISO) - make up the exposure triangle. Let's say we have good exposure with a particular set of settings, but then need to increase the shutter speed to reduce blur. Because the shutter is now open for a shorter period of time, less light comes in and the image gets darker. If we want to keep that same exposure but still want the shutter that fast, we can move the shutter one stop faster and then open up the aperture one stop wider, and it will compensate exactly. We could also increase the sensitivity of the medium by one stop (by swapping rolls of film, or by changing the ISO on a digital sensor) instead of opening the aperture wider, and that would also produce the same exposure because the medium is more sensitive and resolves faster.
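
If we want to sanity-check that trade-off, here's a quick Python sketch (not tied to any particular app or engine). It treats relative exposure as shutter time times ISO divided by the f-number squared, which is the standard back-of-the-envelope relationship; the specific settings are just example values.

```python
import math

# Relative exposure is proportional to: shutter_time * ISO / (f_number ** 2).
# Only the ratio between two setups matters, not the absolute number.
def relative_exposure(shutter_time_s, f_number, iso):
    return shutter_time_s * iso / (f_number ** 2)

base = relative_exposure(1/60, f_number=4.0, iso=100)

# One stop faster shutter, one stop wider aperture (f/4 -> f/2.8): same exposure.
compensated_with_aperture = relative_exposure(1/120, f_number=4.0 / math.sqrt(2), iso=100)

# One stop faster shutter, one stop more sensitivity (ISO 100 -> 200): same exposure.
compensated_with_iso = relative_exposure(1/120, f_number=4.0, iso=200)

print(base, compensated_with_aperture, compensated_with_iso)  # all three match (up to floating point)
# Note: real cameras round these values (1/125, f/2.8), so in practice the
# numbers agree only approximately.
```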

Some 3D apps ignore the exposure triangle to make it easier on the artist, some respect it to try to make the camera experience more authentic, and some give us a choice. All of them are usually more interested in replicating the side effects of the shutter and aperture, like motion blur and shallow depth of field, as well as bloom, glare, distortion, and other lens-related things. These side effects are a big part of what we read as “realistic” in an image, and they give us more creative flexibility.

Exposure
Good exposure occurs when all of the light and shadow data stays within the range of what can properly be displayed by the file format and monitor, and doesn't get clipped on either end. Clipped highlights mean anything over a certain brightness is displayed as pure white, or “blown out”, like in the middle image above. Clipped shadows mean anything below a certain level goes to pure black and we lose darker detail. Clipped shadows can have artistic merit, whereas blown highlights are almost universally bad.

Since many 3D apps don’t tie exposure to aperture or shutter speed, our first and best line of defense against clipping is to control the lighting itself. We want to make sure that HDRIs are set properly, and that area and other physical lights have real-world values and aren’t too hot or too dim. Other options include exposure compensation controls in the engine, or exporting a high bit depth EXR, which retains highlight and shadow detail well beyond what our monitors can display, and compressing that in post. Those are good for touch-up work, but getting the light right from the get-go is always the best strategy.

Tone Mapping
Tone mapping is another way to help with bad exposure. Tone mapping schemes like ACES and AgX are becoming more accessible in render engines, and they usually do a great job of avoiding clipping by running all the values along a curve to fit them into something we can see, but they tend to crunch the midtones and make everything a lot higher contrast to achieve this if the lighting isn’t set up right.

Because different tone mapping choices have different looks, it’s always best to pick a target one first and then tailor the lighting to it, rather than the other way around.

Part I: The Basics

All the attributes we’re about to look at simulate real-world values in cameras and lenses, and this is one of the main reasons we want all our objects to be real-world scale. If we have giant people or tiny buildings, the calculations will get way off and it’ll be hard to predict how our 3D camera will behave.

Focal Length
In a physical camera, the focal length of a lens is the distance from the optical center of the lens to the sensor, measured in millimeters (mm). This is one of the most important settings and should always be considered early on since it affects how we compose our scene. Typical focal lengths run from 16mm to 600mm, and there are a few standard ones (16, 18, 24, 35, 50, 85, 105, 200, 300, 600). If a lens only has one focal length, it’s called a prime. If it can cover a range, it’s called a zoom. 3D lenses are all zooms.

Magnification & Field of View

The focal length determines the magnification and field of view of our objects when we look through the viewfinder. The larger the focal length value (or the “longer” the lens is), the larger objects appear to us from any given vantage point, and the narrower our field of view is (so fewer of them fit in the frame). Conversely, the shorter the focal length (or “wider” the lens), the smaller individual objects appear from the same vantage point, but the more of them we can get into our frame.
Made with LightStage assets and Symmetrical Garden 02 by Poly Haven
In the above illustration, all of the cameras are located at the same point in space facing the same direction. We can see how much of the scene we get with a 16mm lens compared to a 200mm, but also how far away the woman appears, even though neither the woman nor our camera has moved at all.
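
The relationship between focal length and field of view is simple enough to sketch in a few lines of Python. This assumes a rectilinear lens on a full-frame sensor (36mm wide), which is an assumption on my part; other sensor sizes just change the width value.

```python
import math

def horizontal_fov_degrees(focal_length_mm, sensor_width_mm=36.0):
    # Field of view of a rectilinear lens: 2 * atan(sensor width / (2 * focal length))
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f in (16, 24, 35, 50, 85, 105, 200):
    print(f"{f}mm -> {horizontal_fov_degrees(f):.1f} degrees horizontal")
```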

Distortion

We’re able to get these different properties because different focal lengths distort the light that comes into the camera (and the resulting image/render produced as well). This distortion can lead to some practical and creative effects, but we need to know how to properly use it.
It’s tricky to map human vision to a single focal length in a camera system since we have binocular vision and squishy biological eyes, but it’s generally accepted that it’s about 50mm. A lens with a 50mm focal length is called a “normal” lens for this reason and has the least amount of noticeable distortion of any focal length. Lenses that get further away from 50mm in either direction distort in different ways.
Made with 3dpeople, Kitbash, and Kloppenheim PureSky by Poly Haven
In this illustration, the three cameras in the scene are all aimed the same way, but they’re brought closer or further from the subject until the framing of the subject (the target or the woman) is exactly the same. This shows off one of the most important qualities of different focal lengths: Background compression.

Lenses longer than 50mm magnify distant objects more than foreground ones, which makes everything appear closer together than it really is. This totally changes the composition of the photo as we can see above, which is why the focal length should be chosen very early on in the process.
Made with 3dpeople, Kitbash, and Kloppenheim PureSky by Poly Haven
Focal length also messes with perspective. Again, normal lenses (50mm) work about the same way our eyes do, so parallel railroad tracks seem to converge back in the distance as we’d expect to see and tall buildings seem to “lean in” a certain amount when we’re looking up at them. Wider lenses amplify this effect by stretching out objects on the edges of the frame and pinching them in toward the center. Longer lenses do the opposite and appear to straighten out all the lines.

Common uses for focal length categories
All images in this section from unsplash.com
Ultrawides (<24mm) can be used creatively to give the viewer a sense of enormous scale or a unique perspective on otherwise everyday objects. They’re also used in tight spots like interior architecture or real estate promo shots where we physically can’t back the camera up any more or we’ll hit the wall.
Wides (~24-40mm) are good for landscapes or cityscapes where we want to get a lot of stuff in the frame, but want to avoid the crazy distortion of an ultrawide.

Normal lenses (~40-60mm) are great for simulating what we’re seeing - these give the viewer a sense of realism, like they’re standing in the scene and looking at what the camera is looking at.
Short telephotos (~75mm-120mm) are perfect for portraiture. They distort people in a very flattering way, and are great for isolating the subject from the background, particularly when Depth of Field effects are applied. They’re also excellent for product photography.

Telephotos/Super telephotos (~200mm+) in the real world are good for sports, wildlife, and other cases where we can’t (or shouldn’t) get physically closer to our subjects. In 3D, we can go wherever we want, so they’re only really good if we want to try to keep our parallel lines as parallel as possible and/or really compress the background in without having to shift our objects around. 

Focusing

Camera lenses (and our eyes, for that matter) don’t have the luxury of ideal optics - we can never get everything in perfect focus. No matter what our gear is or how we dial in the settings, things start to go out of focus as they move away from a certain point in space, whether further into the distance or closer to the sensor.

In 3D, it’s the opposite problem. Focus calculations are difficult for render engines, so most of the time the default state is for everything to be perfectly sharp. Developers have to go out of their way to torture render engines into giving imperfect focus. That perfection is one of the things that leads to unnatural-looking renders, since it’s not the way our eyes work, and it’s not the way we’re used to seeing printed and digital photographs over the last hundred-odd years.
If we draw an imaginary line perpendicular to the sensor, from its center out to some point in front of the camera, the length of that line is the focus distance. If we then draw an infinite plane parallel to the sensor at the focus distance, everything on that focal plane will be perfectly sharp and in focus.
There’s an area that straddles the focal plane where everything is acceptably in focus, meaning it seems sharp enough to our eyes that we’re not bothered by it. This area is referred to as the depth of field. Any objects that are outside of this area (both in front of and behind it) get more and more out of focus. Depth of field is referred to as “deeper” (more of the scene in focus) or “shallower” (less of the scene in focus).

Important: Depth of field is a product of the sensor size, focus distance, distances between objects in the scene, and aperture, but not the focal length. This is a common misconception in the camera world. Longer lenses appear to have shallower depth of field than wider ones, but that’s because of the background compression phenomenon we looked at earlier.

Let’s have a look at what affects the depth of field.

Focus distance’s effect on depth of field
Made with 3dpeople, Kitbash, and Kloppenheim PureSky by Poly Haven
The above example uses a 100mm focal length lens. We can see the depth of field expanding the further we step back (not zooming, which is changing the focal length, but physically moving the camera with the lens pegged at 100mm) while keeping the focal plane on the subject. When the camera is right up in her face (40cm, or 16” away), the depth of field is a tiny sliver, and we’re only getting one eye in focus. If we keep the camera settings the same and step back to 3 meters (9.8’ or so), we can now get her whole body pretty much in focus because the DoF is a bit deeper. When we go back even further to 15 meters (about 49’), we can also get the midground people nice and sharp. To get everything in focus from our subject all the way back to the clouds in the far background, we’ll need to get to the hyperfocal distance, which for this particular setup (100mm, f/1.2) is all the way back at 281 meters (~920’).
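
The hyperfocal distance mentioned above comes from a standard formula, sketched below in Python. The circle-of-confusion value is an assumption (about 0.03mm is a common full-frame figure, and it lands this setup in the same ballpark as the ~281m number); different engines and calculators use slightly different values.

```python
def hyperfocal_distance_m(focal_length_mm, f_number, coc_mm=0.03):
    # Hyperfocal distance H = f^2 / (N * c) + f, converted from mm to meters.
    h_mm = focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm
    return h_mm / 1000.0

print(f"{hyperfocal_distance_m(100, 1.2):.0f} m")  # ~278 m for 100mm at f/1.2
```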

Aperture size’s effect on depth of field
Made with 3dpeople, Kitbash, and Kloppenheim PureSky by Poly Haven
The other factor in depth of field is the aperture size. In the previous example the camera was fixed at a single aperture, so the only way to adjust depth of field was to move the camera. We do have control over this setting, though: at any given focus distance, making the aperture smaller makes the depth of field deeper, while making it larger makes the depth of field shallower. In the example above, we’re using the same 100mm lens. The camera is in the same spot, but the aperture gets smaller from left to right. At 0.3cm we can see the couple in the midground, and at 0.075cm, everything is pretty sharp.
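
To see the trend in numbers, here's a sketch using the standard near/far depth-of-field approximation. The 3 meter focus distance, the f-stops, and the 0.03mm circle of confusion are assumed example values, not settings from the illustration above.

```python
def dof_limits_m(focal_length_mm, f_number, focus_distance_m, coc_mm=0.03):
    # Near/far limits of acceptable focus using the hyperfocal approximation.
    f = focal_length_mm
    s = focus_distance_m * 1000.0
    h = f ** 2 / (f_number * coc_mm) + f  # hyperfocal distance in mm
    near = h * s / (h + (s - f))
    far = h * s / (h - (s - f)) if h > (s - f) else float("inf")
    return near / 1000.0, far / 1000.0

for n in (1.2, 4, 16):
    near, far = dof_limits_m(100, n, 3.0)
    print(f"100mm at f/{n}: sharp from about {near:.2f} m to {far:.2f} m")
```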

Aperture Scale

In most real-world cameras, the aperture is set using some sort of clicky wheel or slider that opens and closes the aperture to specific values called f-stops. Each full f-stop equates to one stop of light. What gets really confusing (especially to new photographers) is that the larger the f-stop number, the smaller the aperture is (so more is in focus), and vice-versa. We use terms like “wide open” for the lowest f-stop value the lens allows, or “stopped all the way down” for the highest f-stop value for this reason.

In common lenses, the scale usually starts around f/1.2 and goes to f/22. Because of the exposure triangle system and circle math and other annoying things, this scale is not linear. The standard full stops are 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, & 22, with f/1.2 being a common “wide open” maximum that sits about half a stop wider than f/1.4. Some lenses allow for larger/smaller apertures, and fractional stops as well. Also, as 3D artists, we’re not limited to this scale. We have the technology, so we can set it to f/0.00001, f/1.7273, or f/1000 if our engine allows it. Depending on the 3D software, we can also sometimes skip the f-stop system and just manually control the aperture itself (in cm).
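
The f-number and the physical aperture size are tied together by the focal length: the diameter is roughly focal length divided by f-number. Here's a quick sketch of that conversion (using a 100mm lens purely as an example), which is handy when an engine asks for the aperture in cm rather than as an f-stop.

```python
def aperture_diameter_cm(focal_length_mm, f_number):
    # Physical aperture diameter ~= focal length / f-number, converted to cm.
    return (focal_length_mm / f_number) / 10.0

for n in (1.2, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22):
    print(f"100mm at f/{n}: aperture diameter ~{aperture_diameter_cm(100, n):.2f} cm")
```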

Bokeh

Bokeh (warring factions in pronunciation - it’s either “Boh-KUH” or “Boh-KAY”) refers to the aesthetic quality of the out of focus area of an image. The shallower the depth of field, the more apparent this becomes (because more of the image is out of focus).
Made with LightStage assets and Symmetrical Garden 02 by Poly Haven
The ‘flavor’ of the bokeh is determined by a ton of factors. Lens construction, diaphragm shape, and the quality of the materials have a lot to do with it, but even the same lens can produce different-looking bokeh depending on the aperture, the focal length it’s set to, the distance to the background, what the background actually is, and several other factors.

Since this is such a subjective thing, there’s a lot of leeway on what’s considered “good” bokeh. The rule of thumb is this: If it’s distracting, it’s bad. If it complements the image, it’s good. That said, there are adjectives generally associated with pleasing bokeh like smooth, creamy, yummy, etc (photogs are a hungry lot). Unpleasant bokeh is referred to as chattery, noisy, distracting, or harsh.

Computer-generated bokeh tends to default to ‘perfect’ which can actually look pretty unnatural (there are no perfect real-world lenses, sorry fanboys), so engines often allow us to simulate defects and idiosyncrasies in the diaphragm and glass which lead to more interesting and realistic looking bokeh.

Shutter Speed

Similar to aperture, the main purpose of shutter speed in a real camera is to let more or less light into the system to expose a frame. The longer the shutter is open, the more light comes in and the brighter the image gets. The side effect of shutter speed is blur: the longer the shutter is open, the more smeary things get when they’re moving in relation to the camera and to each other. It doesn’t matter whether it’s the objects moving or the camera moving, the effect is the same - the difference in speed and direction is what makes the effect more or less pronounced for any given shutter speed.
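
A rough way to think about how much blur a given shutter speed produces: multiply the subject's speed (relative to the camera) by the time the shutter is open. The walking-pace value below is just an assumed example.

```python
def blur_length_cm(relative_speed_cm_per_s, shutter_time_s):
    # Distance the subject travels across the scene while the shutter is open.
    return relative_speed_cm_per_s * shutter_time_s

walking_pace = 140.0  # ~1.4 m/s, an assumed example value
for shutter in (1/30, 1/125, 1/1000):
    print(f"1/{round(1/shutter)} s: subject smears ~{blur_length_cm(walking_pace, shutter):.2f} cm")
```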

Object Motion Blur 
Object motion blur occurs when the camera is still, but objects in the scene are moving. The longer the shutter is open (or the slower the shutter speed), the more blur moving objects end up with. Still objects stay sharp if the camera is still because they aren’t moving in relation to the camera.

Camera Motion Blur
Camera motion blur occurs when the camera is moving and the objects are either still or moving at a different speed and direction than the camera. The closer an object’s speed and direction are to the camera’s movement, the less blur it appears to have. Some engines let us turn off camera motion blur but keep object motion blur on, or vice-versa, so we can art direct it a little better, but in a real camera anything moving relative to the sensor blurs with a slow enough shutter - including blur from our own shaky hands.

Panning

Panning is a technique used by photographers to try to make the camera move at the same speed and direction as a subject so that the subject appears to freeze while the rest of the background is moving. This can produce some very dramatic effects in both stills and motion pieces.
In the above example, the camera is parented to the center swan and the whole rig is moving left to right at 50cm/second. The background swans and floor are static.

Shutter Speed Scale

In most cases, shutter speed is measured in fractions of a second (we won’t cover shutter angle here, but some apps allow for that scale as well). It mostly doubles from stop to stop, but there are a few exceptions because of course there are.
A standard still camera will start at 1/2 second, then go to 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000, 1/2000, and 1/4000. Most cameras will also include a “bulb” mode to go longer if needed.

Common Shutter Speeds 

Speeds faster than 1/4000 sec are typically used in high-speed cameras for slow-mo video, or to freeze something that’s just moving stupidly fast in a controlled environment (bullet through an apple, etc).

1/500sec-1/4000sec is often used in action, sports, and nature photography to freeze faster moving subjects. 1/4000 will freeze hummingbird wings. 1/500 is good for putting a little blur on quick things.

1/60sec - 1/500sec is usually good enough to freeze most slower and predictable subjects if we’re walking around with a camera. We can still get some good blur happening on fast objects in this range.

1/5sec - 1/60sec is commonly used in poor light conditions to get decent exposure, and more often than not needs to be stabilized somehow. This is a good range to be in if we’re looking for realistic motion blur on objects that aren’t moving too fast or too slow.

Speeds slower than 1/5 sec are usually reserved for specialty applications like astrophotography or super dim light conditions. We can also use them creatively for light painting and other effects like that.

Lens Design
What we call a “lens” is almost always a series of glass elements and a diaphragm housed in a tube and spaced apart at exact intervals so that they work together to focus light on the sensor.
No lens element by itself is perfect. If we ever picked up an old-style magnifying glass, we probably noticed all manner of distortion and focusing issues in different parts of our ‘scene’ when we looked through it. Very early cameras used similar lenses, and the same optical issues made predicting the outcome of a photograph very difficult.
 
Pretty much immediately after the first cameras were out in the wild, people started coming up with different shapes and designs of the glass itself, and then began combining different elements into complex assemblies to counter all these optical issues (in addition to adding functionality).

Attention was also paid to other components of the assembly like the diaphragm shape, materials, and coatings, since those have a big impact on the final image.

Modern imaging systems are really good. With a hundred-plus years of tinkering under our belts, we’ve landed at the point where most images produced by most camera systems are corrected either in the lens design or in software after the fact, and they come out sharp, clear, and much closer to perfect than they’ve ever been.

So what do we do in our 3D simulations? Add all those issues back, of course :)

Optical imperfections may be undesirable in a lot of imaging applications, but creatively they add character to an image, and sometimes they can make a less-than-interesting subject a lot more interesting, or make it feel like it was shot in a different time. These imperfections are also largely responsible for interesting bokeh, since they affect focus and distortion.

There are several issues that plague lens designers that we can throw back in their faces by simulating in our 3D software. Some common categories we’ll look at here are geometric distortion, aberration, and lens construction artifacts.  

Geometric Distortion
Geometric distortion warps the geometry of the image. Because of the circular nature of a glass lens element, the most common types are barrel, pincushion (the inverse of barrel), and mustache (barrel with pincushion corners, or vice-versa). Fisheye lenses use heavy barrel distortion to get wider fields of view.
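
Barrel and pincushion distortion are often modeled with a simple polynomial applied to a point's distance from the image center. Here's a hedged sketch of that common model; the k values are made up for illustration, and real engines and lens profiles use more elaborate versions.

```python
def radial_distort(x, y, k1, k2=0.0):
    # x, y are normalized coordinates with the image center at (0, 0).
    # k1 < 0 pulls points toward the center (barrel), k1 > 0 pushes them
    # outward (pincushion). Mixing signs between k1 and k2 bends the curve
    # back the other way near the corners (mustache).
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

print(radial_distort(0.8, 0.0, k1=-0.2))  # barrel: the point moves inward
print(radial_distort(0.8, 0.0, k1=0.2))   # pincushion: the point moves outward
```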
 
Aberration
Aberration is what happens when light hitting certain parts of the lens focuses at a different rate or position than light hitting other parts, or doesn’t focus at all. Most aberration types are radial in nature, so the image stays sharp and clear in the middle of the frame and gets progressively worse toward the edges.

Assembly Construction Effects
Even if the glass itself is perfect, there’s still a matter of light bouncing around and splitting inside the housing. Common issues include bloom (light leaking), diffraction (sunburst spikes and other patterns), vignetting (shadowing around the corners), flares, and other artifacts and anomalies.

Part II: Special Gear

So now we’ve covered how a standard camera with a standard set of lenses works. Next, let’s have a look at a few pieces of specialty kit that often translate over to 3D apps.

Fisheye Lens
Most lenses, even though they’re round in shape, project objects in a rectilinear fashion rather than a spherical one. This means that straight lines stay straight and the image looks correct to us.

Fisheye lenses distort in a spherical fashion, so straight lines curve around the center of the frame. This allows for a much greater field of view (more stuff in the frame) than the widest standard ultrawide lens, but at the expense of heavy barrel distortion. These lenses are used practically in applications like security cameras where it doesn’t matter how distorted things are, so long as we can identify who is breaking into our warehouse. They’re also used creatively to bring a surreal sense to the photo or video.
Above on the left, we can see a scene through a 12mm rectilinear lens. This is about the widest lens we can buy today, and has a field of view of about 121 degrees. We’re on the ground tilted up, which shows pretty clearly that straight lines always stay straight, but the perspective distortion is pretty severe considering these sticks are only a hundred cm (40”) tall.

The next panel to the right shows what happens when we go to a 6mm rectilinear ultrawide. This lens isn’t found in any camera store (maybe it was made in a lab somewhere). This would have a ~140 degree field of view, so we get even more in the shot, but the converging line distortion is quite a bit more apparent.

To the right of that is a 180 degree fisheye. Fisheyes are usually measured in angle of view (degrees) instead of focal length because that’s the spec we’re more interested in for this type of lens. If we look at the bottom corners of the colored blocks in the scene above, it might appear that our 180° fisheye only goes about as wide as the 6mm (~140°) ultrawide, but that’s just because most lenses project a larger image circle than the sensor, so the corners aren’t blacked out. We’re effectively getting a crop, so we lose 20 degrees or so on either side, as well as at the top and bottom.

If we look to the right at the circular-type fisheye, we’ll see what a 180 degree lens can really capture. It’s quite a bit more, but since it’s grabbing a full circle, we end up with it not filling our rectangular frame very well, so usually we opt for the crop. Some 3D apps allow us to extend the corners though, which is nice.
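
The underlying difference between the two lens types is how the angle off the optical axis maps to a position on the sensor. A rectilinear lens uses r = f * tan(theta), which blows up as theta approaches 90 degrees (which is why no rectilinear lens can ever reach a 180 degree view), while one common fisheye mapping (equidistant) uses r = f * theta instead. A quick sketch, with an 8mm focal length picked purely as an example:

```python
import math

def rectilinear_radius_mm(focal_length_mm, angle_deg):
    # Distance from the image center for a rectilinear projection: r = f * tan(theta)
    return focal_length_mm * math.tan(math.radians(angle_deg))

def equidistant_fisheye_radius_mm(focal_length_mm, angle_deg):
    # One common fisheye mapping: r = f * theta (theta in radians)
    return focal_length_mm * math.radians(angle_deg)

for angle in (30, 60, 80, 89):
    print(f"{angle} deg off-axis: rectilinear {rectilinear_radius_mm(8, angle):7.1f} mm, "
          f"fisheye {equidistant_fisheye_radius_mm(8, angle):5.1f} mm")
```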

Alternate Projections

This requires a bunch of backstory to fully comprehend, and Wikipedia is a good place to start if you’re interested. In a nutshell, when we take a 3D object and make a 2D representation of it, we have to use some sort of projection method.
Most of the time we’re after something that looks fairly realistic, so we use a perspective projection in our 3D apps that produces an image similar to how our eyes and camera gear experience reality. Objects further away appear smaller than ones closer to us, straight lines appear straight, and parallel lines converge as they go into the distance (think railroad tracks). There are a few other projections that we use in 3D for special purposes.

Equirectangular (Spherical) Projection
Equirectangular projection captures a 360 degree view of the scene and distorts it in such a way that it maps (projects) properly to the inside of a sphere. There are real cameras out there that capture images like this so they can be fed back into a system that reprojects them in VR goggles or allows us to use them for environmental (HDRI) lighting. Some 3D engines allow for this as well for the same reasons.
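
The projection itself is just a mapping from every direction around the camera to a pixel in a 2:1 image: longitude across the width, latitude down the height. Here's a minimal sketch of that idea; the resolution and axis conventions are assumptions for illustration, and engines differ on which axis is "up" and where longitude zero sits.

```python
import math

WIDTH, HEIGHT = 2048, 1024  # a 2:1 panorama, example resolution

def direction_to_pixel(x, y, z):
    # Y is treated as up and -Z as straight ahead (an assumed convention).
    longitude = math.atan2(x, -z)                            # -pi .. pi
    latitude = math.asin(y / math.sqrt(x*x + y*y + z*z))     # -pi/2 .. pi/2
    u = (longitude / (2 * math.pi) + 0.5) * WIDTH
    v = (0.5 - latitude / math.pi) * HEIGHT
    return u, v

print(direction_to_pixel(0.0, 0.0, -1.0))  # straight ahead -> image center
print(direction_to_pixel(0.0, 1.0, 0.0))   # straight up -> top edge
```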

Parallel Projection

Parallel projection is a method of representing 3D objects on a 2D plane WITHOUT perspective. All parallel lines stay parallel and don’t converge as they go back into the distance. In fact, there really isn’t much of a concept of “the distance”. Objects can be in front of or behind other objects, but the sense of depth is completely flattened, so they just kind of overlap the way they do in a 2D illustration program and don’t change size as they move closer to or further from the camera the way they do in perspective.

There’s no perfect parallel lens in real life, but something like the James Webb space telescope would probably come close. It has a focal length of 131.4 meters (131,400mm), so the background compression is so extreme that the image would appear 2D if we could stage a shot way out in space (obviously it still wouldn't be - it observes physics, it doesn't rewrite it).

Parallel projection (more specifically, orthographic projection) is what we use for our side, top, and front views so we can accurately measure lengths and angles, and line things up exactly. Engineers and others who work with technical diagrams also use it in their 2D and 3D views to make representations of objects they can take measurements from.
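
The math difference is tiny but the result is dramatic: a perspective camera divides a point's position by its depth, while a parallel (orthographic) camera simply ignores depth. A minimal sketch, with arbitrary example numbers:

```python
def perspective_project(x, y, z, focal_length=50.0):
    # Points shrink toward the center as z (depth) grows.
    return (focal_length * x / z, focal_length * y / z)

def orthographic_project(x, y, z, scale=1.0):
    # Depth is ignored entirely; only a uniform scale/zoom applies.
    return (scale * x, scale * y)

near_point = (10.0, 5.0, 100.0)
far_point = (10.0, 5.0, 1000.0)

print(perspective_project(*near_point), perspective_project(*far_point))   # the far one is smaller
print(orthographic_project(*near_point), orthographic_project(*far_point)) # identical on screen
```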

Important:  A lot of 3D applications treat parallel projection cameras differently than perspective ones. There’s no concept of z-depth, so to magnify the objects in frame, there’s sometimes a “zoom” or “scaling” option instead of simply moving the camera back and forth in Z. Some might even disregard the X and Y position values of the camera and use dedicated X and Y offset values instead. Just be aware that some extra steps might be needed.
One of these 3D parallel projections is called isometric, meaning the scale along each axis is the same. Most of us probably don’t care about taking accurate measurements off a 3D render, but we do care about interesting and unique looks for our renders, and isometric projection delivers there. There are a ton of examples of what can be achieved with it on sites like Behance or Dribbble.
As we can see in the example above, really long perspective lenses like a 600mm compress the depth of the scene and straighten lines out, but it’s still not the same thing as parallel projection, and the effect breaks down even more when we start to rotate the objects unless we’re using true parallel projection.

Macro

All real-world lenses have a minimum focus distance (MFD), which - like it sounds - is the closest distance to the sensor that the lens is able to focus at. This is also where the magnification of any object on the focal plane is the highest. Because of the math and physics of optics, longer lenses typically have longer MFDs. This means we can’t just bust out a random 200mm lens, put it right up to a bug’s face and see its eye, because the MFD would probably be at least a meter away, meaning the bug would still be small in the frame if we wanted it to be in focus (we do).
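
An idealized thin-lens sketch shows why: magnification only gets large when the subject can sit close relative to the focal length. Real lenses are far more complicated, and MFD is usually quoted from the sensor rather than from the lens, so the distances below are made-up illustration values rather than specs.

```python
def thin_lens_magnification(focal_length_mm, subject_distance_mm):
    # Ideal thin lens: magnification = f / (subject distance - f).
    return focal_length_mm / (subject_distance_mm - focal_length_mm)

print(f"200mm lens, subject at 1.2m: {thin_lens_magnification(200, 1200):.2f}x")   # still small in frame
print(f"100mm macro-style lens, subject at 0.25m: {thin_lens_magnification(100, 250):.2f}x")
```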

Macro lenses are made to sacrifice the functionality, usability, and affordability of a general purpose lens so that this minimum focus distance can be a lot closer and we can magnify small things quite a bit more. All the angles, geometry, and measurements get a lot trickier at this scale, leading to a whole market of helper tools and software specifically for it.

Using our 3D software, we can just mash the camera right up against any subject, magnify it as much as we want, and get perfect focus and infinite depth of field, but the resolution of our meshes and textures has to account for this, otherwise we get a blurry or chunky mess due to lack of detail.

If we want to simulate photos taken with a real macro lens, we’ll need to study macrophotography as a whole a bit more to see what we need to consider, build or acquire specialty models and textures, and set up the lighting to mimic the kind we’d need for a real world macro shot.

Wrap Up

Hopefully this gives you a good background on real-world cameras that you can apply to your 3D renders to make them look more realistic, more photographic, and/or more creative.

If you’re an Octane user, check out the Octane Camera: Settings & Effects guide to learn how to apply this all to Octane Render. That guide is written for C4D, but most of the concepts are sound for any DCC.

Author Notes

OG027 Photography Concepts for 3D Artists Version 1.0, Updated May 2024 using practical experience and web research (Octane for the illustrations)

This guide originally appeared on https://be.net/scottbenson and https://help.otoy.com/hc/en-us/articles/212549326-OctaneRender-for-CINEMA-4D-Cheatsheet

All rights reserved.

The written guide may be distributed freely and can be used for personal or professional training, but not modified or sold. The assets distributed within this guide are either generated specifically for this guide and released as cc0, or sourced from cc0 sites, so they may be used for any reason, personal or commercial.