Unit 41 Assignment 1

Applications of 3D 

3D modelling is used in a wide range of areas, such as: models, product design, animation, TV, film, the web, games, education and architectural walk-throughs. I will go through each of these and explain the applications with examples.


One of the main applications of 3D is modelling. 3D modelling is a technique in computer graphics that allows us to produce a 3D digital object which we can then output onto a screen. A 3D modelling designer, more commonly referred to as an artist, will use a 3D development package such as 3D Studio Max. With this software they manipulate vertices in a virtual space in order to form a mesh: a mesh is simply a collection of vertices which together form an object.

These 3D objects can either be generated automatically for the artist or created manually. If created manually, the artist will typically begin with some sort of primitive type like a cube, sphere or plane, like we did in Unity. The primitive object is just a starting shape used as a foundation for modelling; the artist then builds upon this simple form and manipulates its vertices using various modelling tools to create a more complex model.
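At its simplest, the mesh described above can be stored as a list of vertices plus faces that index into that list. The sketch below is purely illustrative (the names and the choice of a unit cube are mine, not from any particular modelling package):

```python
# A minimal illustration of a mesh: a unit cube stored as a list of
# vertices plus faces that index into it.

from itertools import product

# Eight corner vertices of a unit cube, generated as all (x, y, z)
# combinations of 0 and 1.
vertices = [(x, y, z) for x, y, z in product((0.0, 1.0), repeat=3)]

# Six quad faces, each listing four vertex indices.
faces = [
    (0, 1, 3, 2),  # x = 0 side
    (4, 5, 7, 6),  # x = 1 side
    (0, 1, 5, 4),  # y = 0 side
    (2, 3, 7, 6),  # y = 1 side
    (0, 2, 6, 4),  # z = 0 side
    (1, 3, 7, 5),  # z = 1 side
]

# A simple "manipulate the vertices" step: scale the cube by 2 on every axis.
scaled = [(2 * x, 2 * y, 2 * z) for x, y, z in vertices]

print(len(vertices), len(faces))  # 8 vertices, 6 faces
```

Real modelling packages store far more per vertex (normals, UVs, weights), but the vertex-plus-face structure is the common core.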

(Author: Josh Petty, Created/Last Updated: Unknown, Last Accessed: 25/01/2021)


Product Design 

Picture source: https://3d-ace.com/press-room/articles/3d-product-design 

Product design is another application of 3D. It is used by companies for many different reasons, such as showing people what the end product will look like, or how an idea they have could appear.

For example, it is very common for 3D models to be used to generate a product prototype, which can then be inspected to evaluate the product's design concept, details, manufacturing costs, 360-degree appearance and more.

Another reason is precise measurement: today's 3D modelling software allows customers to see the size of the product relative to another object at the same scale, giving them a realistic expectation of the product's dimensions. There is also the top-quality animation that modern 3D modelling software can create, which helps a company persuade customers to desire the product by showing off all its features in the best way possible.

An additional reason is production. 3D modelling is spectacular for automating the manufacturing process of a product, as the data contained directly in the 3D model can be transmitted to the machinery, automatically starting production with precise accuracy.


Animation

Toy Story, released in 1995, was the first entirely computer-animated film. 3D animation adds detail, depth and lighting to even the dullest object, stepping your projects up a level. As a result, the uses of 3D animation have grown exponentially since the 1990s.

3D animation can be defined as three-dimensional moving images. Because these images are three-dimensional, they appear extremely realistic and lifelike, which makes 3D animation a great deal more eye-catching than a standard static image. Making a 3D animation involves three stages: modelling, rigging and animation. You need a rigged 3D character before the animation stage can even begin.

3D animation is very important because it allows businesses to communicate in a way that is more immersive and memorable. This communication is in most cases with customers, but in some cases is internal between staff. Its benefits include: creating more customer conversions; greatly improving brand recall and recognition; making older products more attractive and helping them stand out; and saving businesses time, money and resources. Because it is usually permanent in video form, as opposed to one-off training days, viewers can also revisit it.

Film and TV 

Picture source: https://www.looper.com/193954/the-disney-tron-series-you-never-got-to-see/

In film and TV, 3D modelling is used almost everywhere. As mentioned above, Pixar's Toy Story was the first ever entirely computer-animated film, released in 1995. Other films such as Tron also use 3D modelling. Tron was originally going to be a combination of animation and CGI; however, due to the high cost of animation, it was decided that CGI and live action would be much cheaper. As this shows, CGI can greatly reduce the workload of projects, saving the business money.

Picture source: https://filmstudies2270.wordpress.com/animation/the-evolution-of-animation-to-cgi-computer-generated-imagery-and-the-impact-of-james-camerons-avatar/#:~:text=Avatar%20utilizes%2060%25%20CGI%20imagery,Avatar%20since%20the%20early%201990s

Another example is Avatar. In Avatar the 3D models were created using the modelling software Maya. The film uses 60% CGI imagery, with most of the CG character animation filmed using revolutionary new motion-capture methods with live actors; the other 40% uses standard live-action techniques. James Cameron, the film's director, approached the filming of motion-capture sequences in a different way. He had the actors wear special bodysuits and head rigs equipped with a standard-definition camera that took constant images of their faces, as can be seen in the picture above. The data was then transmitted to another camera which in turn created a real-time image of the live actor wearing their CGI costume. This allowed Cameron to see the motion-capture results in real time as they were filmed, as opposed to waiting a lengthy amount of time for the computer to render the images.


Web

Picture source: https://www.vectary.com/3d-modeling-how-to/3D-banner-maker-how-to-create-3D-banner/

3D Web is a term that refers to integrating 3D content into web applications by rendering 3D objects in real time, with options for user interaction. Because web browsers are present in almost all end devices today, web 3D content is platform-independent and requires no special operating system or program. It works on many devices, including desktop computers, tablets, smartphones, TVs, VR glasses and more. These 3D web applications can be embedded into websites, apps, VR/AR applications, online shops and social media sites. Web 3D content allows a user to enter a virtual scene in which they can move and interact freely.

There are two main methods of implementing 3D in a web application: using one of the APIs WebGL or Three.js.

WebGL, short for Web Graphics Library, is a library that allows 3D graphics to be displayed directly in the browser without the need for plug-ins. WebGL itself is not a 3D engine; it is a JavaScript programming interface, based on OpenGL ES 2.0, that developers can use. WebGL uses the HTML5 canvas element to show WebGL content and integrate it directly into HTML.

Three.js is a lightweight JavaScript library that provides a much easier-to-use interface on top of WebGL. It allows developers to write less code but get more results, quickly. Three.js comes packaged with many standard functions for rendering complex scenes, whereas raw WebGL requires hundreds of lines of code just to render a basic small 3D object. As a result, Three.js has become a fixed standard in the development of real-time 3D applications, because its simple API calls greatly reduce development time.


Games

Video game creators use 3D modelling to create all the 3D environments, characters, assets and objects within their video game, which they can then texture, shade and animate to make the game more creative and interesting. They use an advanced 3D animation software program such as Maya, Houdini, 3DS Max, Blender or others to create these 3D models for their game.

A lot of 3D games that are played nowadays are created in a popular game engine called Unreal Engine which has these features for creating 3D models as can be seen below. 

Picture source: https://game-ace.com/blog/unreal-engine-3d-modeling-a-step-by-step-guide/ 

In order to create a static 3D asset that won’t be performing any complex animations you will need to use one of the many methods of modelling. 

Picture source: http://vik3d.blogspot.com/2014/03/2013-good-year-of-dragons.html 

The first modelling method is primitive modelling, which simply means creating a simple 3D object such as a cube and then modifying it by changing its attributes. Using the modelling toolkit, an artist is able to split, extrude, merge or delete components of the object to increase or decrease the complexity of the primitive object.

Another modelling method is curve modelling, a technique for making 3D assets by manipulating surfaces with curves that are controlled by weighted control points. By changing the weight of a specific point, an artist can pull the curves closer to that point. Such curves include splines, patches and NURBS.

Another modelling technique is converting, the process of creating polygons from NURBS surfaces. Individual polygons can also be generated by placing vertices to form polygon faces. Afterwards the create-polygon tool or the quad-draw tool can be used to extrude or split the faces of the polygon mesh.

Digital sculpting is another modelling method, which involves pushing, smoothing, grabbing, pulling and pinching objects in order to manipulate them into the form the artist desires. By combining these operations with volumetric and dynamic tessellation methods, the artist can generate a very high-resolution model.


Education

3D modelling is also used in education, for example in the medical industry. There, 3D modelling can be used to teach students a topic such as human anatomy with more visual examples, like a true-to-life digital 3D model of the human body, to better explain in depth the functions, shapes and positions of organs, joints, muscles and more. It also lets students get a 360-degree view of the human body, and because it is a digital model they can return to it any time they like, which is helpful when revising or checking notes later for an exam. 3D modelling is therefore very helpful in training people in the medical field, such as surgeons, as they get a better visual understanding of how things will appear during the different procedures they perform in their job.

Picture source: https://www.zygote.com/poly-models/3d-human-collections/3d-male-anatomy-collection 

Architectural walk-through 

Picture source: https://www.theengineeringdesign.com/tag/3ds-max-design-architectural-visualization/ 

In architectural design 3D modelling is often used. 3D modelling allows the client and the architects or designers to bring their project idea to life and get a more realistic understanding of how it will look. One of the great benefits is that it greatly speeds up the design process and also enables architects and designers to play around with different ideas. 

3D modelling also makes animation possible, allowing the client to take a virtual walkthrough of the building. This gives them a better feel for how things will be laid out and appear in real life, which greatly increases the chance of the project selling.

Another advantage is the realistic lighting, which can be used to demonstrate things like the warm glow coming from a busy kitchen, and objects such as furniture can be placed around the room to find the best possible combination. Having a bird's-eye view is an additional benefit, as the client can see how the design will look from the outside and spot adjustments that may need to be made.

Furthermore, with the assistance of the software, the architect or designer can track any error in the design that might have slipped in unintentionally, which reduces the chance of an aspect being overlooked. The software is usually embedded with features that can estimate the approximate time it will take to construct the project, its cost, and the manpower required to complete it on schedule.

All these benefits and features allow potential clients to weigh the project's suitability for their needs, and they streamline the whole communication process of planning and customising the project so there is the perfect balance between the client's vision and the architect's conceptualisation and design. As agreement between the different stakeholders grows, optimum resource usage and minimal wastage are assured, with 3D architectural walk-throughs bringing down the overall cost of projects.

Picture source: https://www.pinterest.com/pin/691302611534927932/ 

This is a picture of a home design using 3ds max software. 

Displaying 3D Polygon Animations 


Direct3D

Picture source: https://www.facebook.com/Direct3D/

Direct3D is a Microsoft-developed application program interface (API) that offers a series of 3D object manipulation commands and functions. Direct3D offers powerful 3D graphic rendering and acceleration services and enables developers, especially for higher end applications such as games, movies and animation, to access advanced graphical features and functions. 

Software developers can take advantage of all of these pre-written functions with the use of Direct3D commands. This helps programmers to write significantly fewer lines of code than if all the functions were to be written from ground up.  

Direct3D allows the handling of three-dimensional objects, including lighting and shadows, in a reasonably straightforward manner. It makes it easy to use specialised hardware functionality including alpha blending, buffering, mapping and special effects. Direct3D accesses all graphical hardware, such as video cards, graphics cards, standard processors, random access memory (RAM) and output devices, to show 3D objects and images on the screen.

(Author : Unknown, Created/Last Updated: Unknown, Last Accessed: 30/01/2021) 



OpenGL

Picture source: https://www.opengl.org//

OpenGL is an application programming interface (API) developed to render 2D and 3D images, short for ‘Open Graphics Library.’ It offers a standard collection of commands which can be used in various applications and on multiple platforms to manipulate graphics. Due to its extensive use in 3D graphics, OpenGL is widely associated with video games. It offers a convenient way for developers to build cross-browser games or port a game from one platform to another. For several CAD software, such as AutoCAD and Blender, OpenGL is often used as the graphics library. 

 A developer may use the same code to render graphics on a Mac, PC, or mobile device with the use of OpenGL. Nearly all modern operating systems and hardware devices support OpenGL, making it a convenient option for graphics development. Furthermore, numerous video cards and built-in GPUs are designed for OpenGL, helping them to more easily process OpenGL commands than other libraries of graphics. 

Examples of OpenGL commands include drawing polygons, assigning colours to shapes, adding polygon textures, zooming in and out, transforming polygons, and rotating objects. OpenGL is also used to control the effects of illumination, such as light sources, shading and shadows. It can also produce effects applied to a particular object or a whole scene, such as haze or fog.

(Author: Unknown, Last Updated: May 8 2018, Last Accessed: 30/01/2021) 


Graphics Pipeline 

Picture source: https://www.pcmag.com/encyclopedia/term/graphics-pipeline 

The graphics pipeline is the series of steps taken to turn a three-dimensional scene into a two-dimensional screen image. The data processed throughout the various stages consists of the properties given to the end points, referred to as vertices, which act as the control points of the geometric primitives used to render the images. Lines and triangles are the traditional primitives of 3D graphics. The properties given to each vertex include coordinates (x, y, z), RGB values, translucency, texture, reflectivity and other characteristics.

The graphics pipeline procedure consists of numerous stages performed by the graphics processing unit, commonly shortened to 'GPU'. These stages are:

Bus Interface/Front End 

System Interface for transmitting and receiving data and commands. 

Vertex Processing 

Each vertex is transformed into a 2D screen location, and illumination may be applied to determine its colour. For effects such as warping or shape deformation, a programmable vertex shader enables the application to perform custom transformations.
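The heart of this stage, mapping a 3D vertex to a 2D screen location, can be sketched with a simple pinhole perspective projection; the focal length and screen size below are arbitrary illustrative values:

```python
# Sketch of the core of vertex processing: projecting a camera-space
# point onto a 2D screen with a simple pinhole (perspective) projection.

def project(vertex, focal=1.0, width=800, height=600):
    """Map a camera-space point (x, y, z) with z > 0 to pixel coordinates."""
    x, y, z = vertex
    # Perspective divide: points further away (larger z) land nearer
    # the centre of the screen.
    ndc_x = focal * x / z
    ndc_y = focal * y / z
    # Map from normalised device coordinates (-1..1) into pixel space.
    px = (ndc_x + 1.0) * 0.5 * width
    py = (1.0 - ndc_y) * 0.5 * height  # flip y: screen y grows downward
    return px, py

print(project((0.0, 0.0, 5.0)))  # a point on the view axis → (400.0, 300.0)
```

Real pipelines do this with 4x4 matrices and homogeneous coordinates, but the perspective divide shown here is the essential step.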


Clipping

This stage excludes image sections that are not visible in the 2D screen view, such as the backs of objects or regions hidden by the application or window system.

Primitive Assembly, Triangle Setup 

Vertices are gathered into triangles and transformed. Data is compiled that will allow later stages to correctly generate the properties of each pixel associated with the triangle.


Rasterization

The triangles are filled with pixels called 'fragments', which may or may not end up in the frame buffer if there is no change to that pixel or if it ends up being obscured.

Occlusion Culling 

Hidden (occluded) pixels, those blocked by other objects in the scene, are removed.

Parameter Interpolation 

Values such as colour, fog and texture coordinates are calculated for each rasterized pixel.
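One common way such interpolation can be done is with barycentric coordinates, which weight each vertex's value by how close the pixel is to that vertex. A small illustrative sketch (the triangle and colours are made up):

```python
# Sketch of parameter interpolation: blend per-vertex colours at a pixel
# inside a triangle using barycentric coordinates.

def barycentric(p, a, b, c):
    """Barycentric weights of point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return w_a, w_b, 1.0 - w_a - w_b

def interpolate(p, tri, colours):
    """Interpolate per-vertex RGB colours at pixel p."""
    w = barycentric(p, *tri)
    return tuple(sum(wi * col[i] for wi, col in zip(w, colours))
                 for i in range(3))

tri = ((0.0, 0.0), (10.0, 0.0), (0.0, 10.0))
colours = ((255, 0, 0), (0, 255, 0), (0, 0, 255))  # red, green, blue corners
print(interpolate((0.0, 0.0), tri, colours))  # at vertex a → pure red
```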

Pixel Shader 

This stage adds textures and final colours to the fragments. A programmable pixel shader, also called a 'fragment shader', allows the application to combine a pixel's properties, such as colour, depth and on-screen location, with textures in a user-defined way to create custom shading effects.

Pixel Engines 

The final fragment colour, its coverage and its degree of transparency are mathematically combined with the data currently stored in the frame buffer at the relevant 2D location to create the final colour for the pixel stored at that location. The pixel's depth (Z) value is also output.
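The combining step described above can be sketched as standard 'source over destination' alpha blending, where the fragment's transparency decides how much of the existing frame-buffer colour survives:

```python
# Sketch of the blend in the pixel engines stage: a fragment's colour and
# transparency combined with the colour already in the frame buffer.

def blend(src_rgb, src_alpha, dst_rgb):
    """Standard alpha blend: out = src * a + dst * (1 - a)."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

# A 50%-transparent red fragment over a blue background pixel.
print(blend((255, 0, 0), 0.5, (0, 0, 255)))  # → (127.5, 0.0, 127.5)
```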

(Author: Unknown, Last Created/Updated: Unknown, Last Accessed: 30/01/2021) 



Lighting

Picture source: https://3d-ace.com/press-room/articles/how-light-3d-scene-overview-lighting-techniques#:~:text=A%20point%20light%20casts%20rays,Christmas%20tree%20lights%2C%20or%20others.

There are many excellent 3D lighting techniques; which technique is most suitable is largely determined by the setting. For example, certain techniques work well in an interior setting but make very little sense in exterior modelling.

Point or Omni Light 

In a 3D environment, a point or omni light shoots rays out in all directions from one tiny source; the rays have no particular size or shape. In a 3D scene, point lights provide a 'fill lighting' effect, as well as mimicking other light sources such as candles or Christmas tree lights.

Directional Light 

A directional light is the reverse of a point or omni light, representing a very distant light source (like moonlight): its rays travel parallel in a single direction. This form of 3D lighting is commonly used to mimic sunshine. You can change the location or colour of the light and rotate the directional light source to alter the lighting of the scene.
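Because its rays are parallel, a directional light can be evaluated with the same light direction at every surface point; a minimal sketch using Lambert's cosine law (the vectors below are illustrative):

```python
# Sketch of evaluating a directional light: the diffuse brightness of a
# surface follows Lambert's cosine law on the angle between the surface
# normal and the (shared) direction towards the light.

import math

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(surface_normal, light_dir):
    """Diffuse intensity in 0..1 for a directional light."""
    n, l = normalise(surface_normal), normalise(light_dir)
    # Dot product of normal and direction-to-light, clamped at zero so
    # surfaces facing away from the light are simply unlit.
    return max(0.0, sum(a * b for a, b in zip(n, l)))

print(lambert((0, 1, 0), (0, 1, 0)))   # light straight overhead → 1.0
print(lambert((0, 1, 0), (0, -1, 0)))  # light from below → 0.0 (unlit)
```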

Area light 

Area lights emit light from within a specified boundary of a certain size and shape (rectangular or circular). This type of light source is used both in architectural models and in product-lighting visualisation. Area lights create soft-edged shadows that make the rendering appear more realistic, accurate and natural. Area light is the inverse of directional light, since it shines in many directions and does not emit parallel rays.

Volume Light 

Volume light is extremely similar to an omni light, since it casts rays in all directions from a certain point. However, a volume light has a defined shape (any geometric primitive) and size: it only illuminates surfaces inside that fixed volume. Volume light is used to give the effect of smoke, fog and so on.

Ambient Light 

Ambient light is not comparable to any other form of light. It casts subtle rays in all directions, but it has no particular directionality and produces no shadows on the floor. It often supports a 3D scene by adding colour alongside the primary light source.

(Author: Unknown, Created/Last Updated: Unknown, Last Accessed: 30/01/2021) 


Viewing and Clipping 

In computer graphics, the major purpose of clipping is to remove objects, lines, or line segments that lie outside the viewing pane. The viewing transformation is insensitive to the position of points relative to the viewing volume, particularly points behind the observer, so it is important to remove these points before producing the view.

A polygon may also be clipped by defining the clipping window. For polygon clipping, the Sutherland-Hodgman polygon clipping algorithm is used. In this algorithm, every vertex of the polygon is clipped against each edge of the clipping window.

To obtain new polygon vertices, the polygon is first clipped against the left edge of the clipping window. These new vertices are then used to clip the polygon against the right edge, top edge and bottom edge of the clipping window in turn.
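A compact, illustrative implementation of the Sutherland-Hodgman approach for an axis-aligned rectangular window might look like this (the function names are my own):

```python
# Sketch of Sutherland-Hodgman clipping against a rectangular window
# (xmin, ymin, xmax, ymax): clip the polygon against one edge at a time.

def clip_edge(polygon, inside, intersect):
    """Clip a polygon (list of (x, y) vertices) against one window edge."""
    out = []
    for i, current in enumerate(polygon):
        previous = polygon[i - 1]  # wraps around to the last vertex at i = 0
        if inside(current):
            if not inside(previous):
                out.append(intersect(previous, current))  # entering the window
            out.append(current)
        elif inside(previous):
            out.append(intersect(previous, current))      # leaving the window
    return out

def clip(polygon, xmin, ymin, xmax, ymax):
    """Clip against the left, right, bottom and top edges in turn."""
    def cut(axis, bound, keep_greater):
        def inside(p):
            return p[axis] >= bound if keep_greater else p[axis] <= bound
        def intersect(p, q):
            t = (bound - p[axis]) / (q[axis] - p[axis])
            return tuple(p[k] + t * (q[k] - p[k]) for k in (0, 1))
        return inside, intersect
    for args in ((0, xmin, True), (0, xmax, False),
                 (1, ymin, True), (1, ymax, False)):
        polygon = clip_edge(polygon, *cut(*args))
        if not polygon:
            break
    return polygon

# A triangle poking out of the right side of a 10x10 window: the clipped
# polygon gains two new vertices along the window's right edge.
print(clip([(2.0, 2.0), (14.0, 2.0), (2.0, 8.0)], 0, 0, 10, 10))
```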

(Author: Unknown, Created/Last Updated: Unknown, Last accessed: 30/01/2021) 


Scan conversion 

Picture source: https://www.javatpoint.com/computer-graphics-scan-converting-a-point 

Scan conversion is the method of representing continuous graphics objects as a collection of discrete pixels. For example, a line is determined by its two endpoints and the line equation, while a circle is described by its centre point and radius.

The process of transforming any primitive object depicted on the graphics screen into its simplest form, a series of pixels, is called scan conversion or rasterization.
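A classic example of scan-converting a line is Bresenham's algorithm, which walks from one endpoint to the other choosing pixels with integer arithmetic only. A minimal sketch:

```python
# Sketch of line scan conversion with Bresenham's algorithm: step pixel
# by pixel from (x0, y0) to (x1, y1), tracking an integer error term to
# decide when to move in y as well as x.

def bresenham(x0, y0, x1, y1):
    """Return the list of pixels on the line from (x0, y0) to (x1, y1)."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:           # error says: step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:           # error says: step vertically
            err += dx
            y0 += sy
    return pixels

print(bresenham(0, 0, 4, 2))  # → [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```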

Therefore, the scan conversion process includes some specific rules and instructions to be followed. 

  • All objects drawn on the graphics display should appear equally bright: the brightness value must remain the same for each point so that the conversion is kept simple. 
  • Objects drawn on the graphics display should be rendered independently of their length, size and orientation. 

There are different techniques that can be used in order to perform scan conversion. 

Analog Method 

This conversion is performed using large numbers of delay cells and is ideal for analog video. It can be done with a special advanced scan-converter vacuum tube. The analog approach is also referred to as the 'non-retentive, memory-less, or real-time method.'

Digital Method 

In this system, an image is stored in a line or frame buffer at one speed (data rate) n1 and read out at another speed n2. Many image-processing techniques can be applied while the image is held in buffer memory, including forms of interpolation from basic to intelligent high-order comparisons, as well as motion detection. The digital method may be referred to as a 'retentive or buffered method.'

Scan conversion can be applied to the following objects: 

  • Line 
  • Point 
  • Polygon 
  • Rectangle 
  • Filled Region 
  • Arc 
  • Character 
  • Sector 
  • Ellipse 

(Author: Monika Sharma, Created: April 12 2020, Last Accessed: 30/01/2021) 


Texturing and Shading 

Picture source: https://www.britannica.com/topic/computer-graphics/Shading-and-texturing 

Visual presentation requires more than just form and colour; it is also important to correctly model texture and surface finish (e.g. matt, satin, glossy). The influence of these characteristics on the appearance of an object depends on the light, which could be diffuse, from a single source, or both. There are many methods for rendering light's interaction with surfaces; the simplest shading methods are flat, Gouraud and Phong.

No textures are used in flat shading, and only one colour tone is used for the whole object, with varying amounts of white or black applied to each face to mimic shading. The resulting model looks flat and unrealistic.

In Gouraud shading, textures such as wood, stone, stucco and others may be used. Each vertex of the object is given a colour that factors in illumination, and the program interpolates (calculates intermediate values) to create a smooth gradient across each face. This results in a much more natural-looking picture. Advanced computer graphics systems can render Gouraud-shaded images in real time.

When using Phong shading, each pixel takes every texture and all light sources into account. Generally speaking, it gives more realistic outputs but takes much longer. 
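As a rough sketch, the per-pixel calculation behind Phong shading can be modelled with the Phong reflection model, which sums an ambient, a diffuse and a specular term; the coefficients below are arbitrary illustrative values:

```python
# Sketch of the Phong reflection model: the per-pixel lighting sum of an
# ambient, a diffuse (Lambert) and a specular (highlight) term.

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalise(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def phong(normal, to_light, to_viewer,
          ambient=0.1, diffuse=0.7, specular=0.2, shininess=32):
    """Light intensity at one pixel: ambient + diffuse + specular terms."""
    n, l, v = normalise(normal), normalise(to_light), normalise(to_viewer)
    d = dot(n, l)
    if d <= 0:
        return ambient          # surface faces away: ambient light only
    # Mirror the light direction about the normal for the specular highlight.
    r = tuple(2 * d * nc - lc for nc, lc in zip(n, l))
    spec = max(0.0, dot(r, v)) ** shininess
    return ambient + diffuse * d + specular * spec

print(phong((0, 1, 0), (0, 1, 0), (0, 1, 0)))  # head-on light and viewer
```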

In reality, objects are not lit solely by a direct light source such as the Sun or a lamp, but also, more diffusely, by light reflected from other objects. In computer graphics this form of illumination is re-created by radiosity techniques, which model light as energy rather than as rays and consider the impact of every element in a scene on the appearance of each object.

(Author: David Hemmendinger, Created/Last Updated: Unknown, Last Accessed: 30/01/2021) 



Displays

Picture source: https://www.apple.com/uk-business/shop/product/HMUB2B/A/lg-ultrafine-5k-display

A display is an output device and projection mechanism that uses a liquid crystal display (LCD), cathode ray tube (CRT), light-emitting diode, gas plasma, or some other image-projection technology to present text, and quite often graphic images, to the computer user. The term generally covers both the screen or projector surface and the component that generates the information shown on the screen. On some computers the screen is a separate unit called a monitor; on others it is integrated with the processor and other parts of the machine.

As input to the process of displaying pictures on the screen, most displays use analog signals. This, together with the need to constantly refresh the image, means the device usually requires a display or video adapter. The video adapter takes the digital data sent by applications, stores it in video random access memory, and converts it to analog data for the screen using a digital-to-analog converter.

Displays typically treat input data as either character maps or bitmaps. In character-mapped mode, a display has a pre-allocated amount of pixel space for each character. In bitmap mode, it receives an exact representation of the screen image to be projected, in the form of a sequence of bits describing the colour values for particular x and y coordinates, starting from a given position on the screen.

(Author: Mayankjtp, Created: Dec 12 2019, Last Accessed: 30/01/2021)


(Author: Unknown, Last Updated: April 2005, Last Accessed: 30/01/2021) 



Radiosity

Picture source: http://www.formz.com/manuals/renderzonerendering/!SSL!/WebHelp/6_0_1_what_is_radiosity.html

Radiosity is a rendering process focused on a careful study of light reflections from diffuse surfaces. The images created by a radiosity renderer are defined by soft gradual shadows. Radiosity is usually used to render photographs of the interior of buildings, and for scenes composed of diffuse reflective surfaces, it can produce highly photo-realistic effects. 

Radiosity is a radiant energy metric, i.e. the quantity of energy (light) that over a period of time leaves a surface. Since the 1950s, engineers have used the radiosity principle to solve problems in radiative heat transfer and to quantify the volume of light energy transmitted between two surfaces. In 1984, researchers at Cornell implemented it as a method for rendering 3D images. 

Applying radiosity to graphics first requires the development of a computer model of the scene. In the model, the surfaces are broken into small areas called patches. An algorithm is then used to determine a radiosity value for each patch: the quantity of energy that the surface represented by the patch emits or reflects.
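The patch idea can be sketched with a toy version of the radiosity equation B_i = E_i + r_i * sum_j(F_ij * B_j), solved here by simple repeated substitution for just two facing patches; the emission, reflectance and form-factor values are made up purely for illustration:

```python
# Toy sketch of radiosity: two facing patches exchange light until the
# energy settles. B = radiosity, E = emission, r = reflectance,
# F[i][j] = form factor (fraction of light leaving i that reaches j).

E = [1.0, 0.0]            # patch 0 emits light; patch 1 does not
r = [0.5, 0.8]            # how much incoming light each patch reflects
F = [[0.0, 0.6],
     [0.6, 0.0]]

B = E[:]                  # initial guess: radiosity equals emission
for _ in range(50):       # iterate until the exchanged energy settles
    B = [E[i] + r[i] * sum(F[i][j] * B[j] for j in range(2))
         for i in range(2)]

print(B)  # patch 1 ends up lit purely by reflected light
```

Real renderers solve the same kind of linear system for thousands of patches, with form factors computed from the scene geometry.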

(Author: Unknown, Created: 26 June 2019, Last Accessed: 30/01/2021) 


Ray tracing 

Picture source: https://www.techradar.com/uk/news/ray-tracing 

Ray tracing is a rendering method that can create unbelievably realistic lighting effects. Essentially, an algorithm traces the path of light and then models how the light interacts with the simulated objects it eventually reaches in the computer-generated space.

Ray tracing provides significantly more lifelike shadows and reflections, along with much-improved translucence and scattering. The algorithm takes into account where the light falls and simulates the interplay much as the human eye would perceive real light, shadows and reflections; in the real world, for example, the way light strikes objects often determines the colours you see.
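At the heart of any ray tracer is a ray-object intersection test that finds what a ray of light eventually reaches. A minimal illustrative sketch for a sphere, assuming the ray direction is already normalised:

```python
# Sketch of a ray-sphere intersection test: solve the quadratic for the
# distance t along the ray at which it first touches the sphere.

import math

def ray_sphere(origin, direction, centre, radius):
    """Distance along a normalised ray to the first sphere hit, or None."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None               # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None  # ignore hits behind the ray origin

# A ray from the origin straight down +z towards a sphere 5 units away.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

A full tracer fires one such ray per pixel, then spawns further rays at each hit for shadows, reflections and refractions, which is where the cost explodes.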

Ray tracing is commonly used when creating computer-graphics imagery for films and TV shows, but that is because studios can leverage the resources of an entire server farm (or cloud computing) to get the job done, and even then it can be a slow, laborious process. Doing it on the fly has so far been much too taxing for current gaming hardware, making it impractical in real time.

(Authors: Bill Thomas, Andrew Hayward, Created: August 20 2019, Last Accessed: 30/01/2021) 


Rendering engines 

Picture source: https://helpcenter.archvision.com/knowledge/which-render-engines-support-rpc-technology 

The rendering engine in a software application is the component responsible for producing the graphical output. Essentially, the role of a rendering engine is to transform the application's internal model into a series of pixels that can be shown by a monitor (or another graphical device such as a printer).

The rendering engine could take a set of 3D polygons as input (along with camera and lighting data), for example in a 3D game, and use them to create the 2D images output to the display.

In text editing software, the rendering engine can input a string of characters and font data (and other properties such as images) and transform them to a well-formatted image that you can see on the screen or printed on a page. 

Rendering engines are also written to take advantage of graphics-card capabilities, since graphics cards are optimised for specific operations such as highly parallelised matrix arithmetic. Writing code for a rendering engine requires a good knowledge of geometry.

(Author: Unknown, Created: November 24, 2015, Last accessed: 30/01/2021) 


Distributed rendering 

Picture source: https://www.awsthinkbox.com/blog/distributed-rendering-a-guide#:~:text=Distributed%20rendering%20is%20a%20rendering,some%20of%20them%20to%20render. 

Distributed rendering is a rendering method in which an individual frame of a scene or image is rendered by several computers across a network. The frame is split into smaller regions, and each machine obtains some of them to render. Once each region has been rendered, it is returned to the client computer and merged with the other rendered regions to create the final image.

The primary benefit of distributed rendering over local rendering is that it lets you take advantage of the capabilities of your whole network of machines and render far quicker than with a single computer; in essence the idea is similar to multi-processing, distributing the workload to finish tasks more quickly and efficiently. It also gives you the ability to off-load the rendering work from your local machine onto your render machines, enabling you to continue working without taking any major hit to your computer's performance. 
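The splitting step can be sketched in a few lines of Python; the 1920x1080 frame size and 256-pixel tile size below are just example values of my own:

```python
def split_into_tiles(width, height, tile_size):
    """Split a frame into rectangular regions that can each be
    rendered by a different machine on the network."""
    tiles = []
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            # Edge tiles are clamped so they don't run past the frame.
            w = min(tile_size, width - x)
            h = min(tile_size, height - y)
            tiles.append((x, y, w, h))
    return tiles

tiles = split_into_tiles(1920, 1080, 256)
# Every pixel is covered exactly once, so stitching the rendered
# tiles back together reconstructs the full 1920x1080 frame.
covered = sum(w * h for (_, _, w, h) in tiles)
```

Each tuple describes one region a render machine would be handed; the merge step on the client is simply copying each returned region back into place.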

(Author: Unknown, Created: Jul 8 2016, Last Accessed: 30/01/2021) 



Picture source: https://enacademic.com/dic.nsf/enwiki/183101 

In 3D computer graphics, distance fog is a method used to increase the illusion of distance by replicating fog. 

Since many of the shapes in graphical landscapes are fairly flat and complicated shadows are difficult to render, many graphics engines use a “fog” gradient so that haze and aerial perspective gradually obscure objects as they recede from the camera. This method replicates the effect of light scattering, which makes distant objects appear lower in contrast. 
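A simple linear fog blend, of the kind many engines offer, can be sketched in Python as follows (the colours and distances are made-up example values):

```python
def apply_linear_fog(object_color, fog_color, distance, fog_start, fog_end):
    """Blend an object's colour towards the fog colour based on its
    distance from the camera (linear distance fog)."""
    # Fog factor: 1.0 = no fog (near the camera), 0.0 = fully fogged (far).
    f = (fog_end - distance) / (fog_end - fog_start)
    f = max(0.0, min(1.0, f))
    return tuple(f * o + (1.0 - f) * g
                 for o, g in zip(object_color, fog_color))

red = (1.0, 0.0, 0.0)    # object colour
grey = (0.5, 0.5, 0.5)   # fog colour
```

An object at or before `fog_start` keeps its own colour, one at or beyond `fog_end` is entirely fog-coloured, and anything in between is blended, which is exactly the gradual loss of contrast described above.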

Distance fog was also used in mid-to-late-nineties games because the processing power of the machines at the time was nowhere near capable of rendering far viewing distances; on top of that, distant geometry was simply clipped away.     

The result, however, was fairly irritating, as bits and pieces of polygons would flicker in and out of view. By adding a medium-range fog, the clipped polygons would blend in more convincingly from the haze, even if the effect could appear unrealistic in some scenarios (such as dense fog inside a building).  

This effect was used by several early Nintendo 64 and PlayStation titles, such as Turok: Dinosaur Hunter, Bubsy 3D, Star Wars: Rogue Squadron, and Superman. 

(Deza, Elena; Deza, Michel Marie (2009). Encyclopedia of Distances. Springer-Verlag. p. 513. http://www.liga.ens.fr/~deza/495-528.pdf) 


Shadows are a product of the absence of light due to occlusion. If the rays from a light source do not reach an object because another object is blocking them, that object is in shadow. Shadows bring a lot of realism to a lit scene and allow a viewer to observe the physical relationships between objects more easily, giving our scenes and objects a better sense of depth. This can be seen in the diagram below of a scene with and without shadows: 

Picture source: https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping 

You can see how the relationships between the objects become much more apparent with shadows. The fact that one of the cubes hovers above the others, for instance, is only really visible when we have shadows. 

However, shadows are somewhat difficult to implement, mainly because a perfect shadow algorithm has not yet been established in current real-time (rasterised graphics) research. There are a variety of good methods for approximating shadows, but they all have their own quirks and nuances that we have to keep in mind. 

One technique that is commonly used in video games and gives good results is shadow mapping; it does not require too much work from the machine, making it well suited to the capabilities of today's gaming hardware. 

Shadow mapping 

The concept of shadow mapping is fairly simple: the scene is rendered from the light's perspective, and everything we can see from the light's point of view is lit, while everything we cannot see must be in shadow, as the diagram below shows: 

The blue lines represent the fragments the light source can see; the black lines represent the fragments that are hidden and are therefore rendered as shadows. If we were to draw a line, or ray, from the light source to a fragment on the right-most box, we can see that the ray first hits the floating container before reaching the right-most container. As a result, the fragment of the floating container is illuminated, while the fragment of the right-most container receives no light and is therefore in shadow. 
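The core depth comparison behind shadow mapping can be sketched in Python; the tiny 2x2 shadow map and the bias value below are illustrative assumptions only, not real renderer output:

```python
def in_shadow(fragment_depth, shadow_map, u, v, bias=0.005):
    """A fragment is shadowed if something nearer to the light was
    recorded in the shadow map at its light-space position (u, v)."""
    closest_depth = shadow_map[v][u]   # nearest depth the light "saw" here
    # The small bias prevents surfaces from shadowing themselves
    # due to limited depth precision ("shadow acne").
    return fragment_depth - bias > closest_depth

# 2x2 shadow map: at (0, 0) the light saw something at depth 0.4
# (e.g. the floating container); everywhere else it saw nothing nearer
# than the far plane (depth 1.0).
shadow_map = [[0.4, 1.0],
              [1.0, 1.0]]
```

A fragment at depth 0.9 behind the recorded occluder at (0, 0) tests as shadowed, while the occluder itself (depth 0.4) and fragments under empty sky do not.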

Vertex Shader 

Picture source: https://www.youtube.com/watch?v=F7bpcyPhiH8 

A vertex shader is a graphics processing function that transforms vertex data values on the X (length), Y (height) and Z (depth) axes of 3D space by applying mathematical functions to an object. These changes range from colour differences, texture coordinates and spatial orientation to fog (how thick it appears at a certain elevation) and point size. 

Whenever a vertex shader is activated, it replaces the fixed-function pipeline for vertices. The shader does not operate on a whole primitive such as a triangle; it operates on a single vertex. It is unable to create or destroy vertices; a vertex shader can only transform them. The shader program executes once for each vertex to be processed. 
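As a rough illustration of this one-vertex-in, one-vertex-out behaviour, here is a Python sketch of a "vertex shader" that rotates a single vertex about the Y axis. Real vertex shaders run on the GPU in a shading language such as HLSL or GLSL, not Python; this only mirrors the idea.

```python
import math

def vertex_shader(vertex, angle):
    """Transform exactly one vertex (rotation about the Y axis).
    Like a real vertex shader, it neither creates nor destroys vertices."""
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

# The shader runs once per vertex, exactly like the real pipeline stage.
cube_corner = (1.0, 1.0, 0.0)
rotated = vertex_shader(cube_corner, math.pi / 2)
```

Rotating the corner (1, 1, 0) by 90 degrees about Y carries it to (0, 1, -1): one vertex went in, one transformed vertex came out.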

(Author: Unknown, Created/Last Updated: Unknown, Accessed: 31/01/2021) 


Pixel shader 

Picture source: https://www.youtube.com/watch?app=desktop&v=TDZMSozKZ20 

A pixel shader is a short piece of software that processes pixels and operates on the graphics processing unit. 

During rasterisation, a pixel shader is executed on the graphics card's GPU for each pixel. It gives us direct access to read and manipulate each individual pixel. This direct pixel control lets us perform a range of special effects, such as multi-texturing, per-pixel lighting, depth of field, cloud simulation, fire simulation, and advanced shadowing techniques. 
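As a rough illustration of the per-pixel idea, here is a Python sketch of a "pixel shader" that converts each pixel to greyscale. Real pixel shaders run on the GPU in a shading language; the luminance weights used here are the common Rec. 601 values.

```python
def pixel_shader(pixel):
    """Run independently on each pixel, here converting it to
    greyscale using standard (Rec. 601) luminance weights."""
    r, g, b = pixel
    lum = 0.299 * r + 0.587 * g + 0.114 * b
    return (lum, lum, lum)

# A tiny two-pixel "image": pure red, then pure green.
image = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
shaded = [pixel_shader(p) for p in image]
```

Because each pixel is processed independently of its neighbours, the GPU can run thousands of these invocations in parallel, which is what makes per-pixel effects cheap in practice.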

Pixel shader support is a function of your video card: either your video card is capable of performing pixel shading, or it is not.  

 (Author: Unknown, Created/Last Updated: Unknown, Accessed: 31/01/2021) 


(Author: Unknown, Created: August 5 2013, Accessed: 31/01/2021) 


Level of detail 

Picture source: http://mkrus.free.fr/CG/LODS/xrds/ 

The level of detail, often shortened to ‘LOD’ in computer graphics, refers to the sophistication of a 3D model image. When the model shifts away from the observer or according to other parameters such as object importance, viewpoint-relative speed or location, the amount of detail of an object may be reduced.  

Level of detail is very important as it improves the performance of rendering by minimizing the workload that is done on the graphics pipeline phases, which are usually vertex transformations.  Due to the object being viewed from a great distance or travelling at a rapid speed the reduced visual quality of that model is rarely noticed by the viewer. 
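A minimal LOD selector might look like the Python sketch below; the distance thresholds and polygon budgets are invented example numbers, not figures from any real engine:

```python
def select_lod(distance):
    """Pick a mesh of decreasing detail as the object moves away
    from the camera (hypothetical distance bands and polygon budgets)."""
    if distance < 10:
        return ("high", 6000)    # full-detail mesh, close to the camera
    elif distance < 50:
        return ("medium", 1500)  # reduced mesh for the mid-range
    else:
        return ("low", 300)      # barely noticeable at this distance

# Moving further away drops the polygon count the pipeline must process.
near, mid, far = select_lod(5), select_lod(25), select_lod(100)
```

Engines often also blend or fade between levels so the switch is not visible as a "pop", but the selection step itself is this simple.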

(Author: Unknown, Last Updated: 15 December 2020, Accessed: 31/01/2021) 


Geometric Theory 


Picture source: https://www.mathsisfun.com/geometry/vertices-faces-edges.html 

Vertices are the points where edges meet. A straight edge will have two vertices, one at each end. As it takes a large number of edges to construct a curve, more vertices are also needed to build a curve, while a box shape without any curves needs only a very small number of vertices. To define their exact position, vertices are given coordinates: each vertex has its own x, y and z coordinates. 

When editing a polygon in a 3D program, the vertices can be selected and easily moved to alter the model's form. Moving them can make a huge difference to a model that doesn't have many vertices, or a minor difference to a complex model, for example when smoothing it out. When editing a mesh, more vertices can also be added to a model to improve its smoothness or simply to add more detail. 


Picture source: https://www.cl.cam.ac.uk/teaching/1213/CompGraph/progenv.html 

A line is one of the simplest shapes you can represent in a 3D program. A single line connects two vertices, and more can be added to create a more complex form; lines may be either curved or straight. Vertices can be attached to a line that has already been generated, so they can be shifted around and positioned to create base shapes that can then be extruded, lathed, smoothed, or modified in several other ways. Splines are similar to lines but remain editable: splines can be extruded, extended along a single axis, or converted to a polygon, expanding the shape. 


Picture source: https://dev.to/isaacdlyman/explain-how-to-readwrite-bezier-curves-like-im-five 

Curves, in this context, are primarily used with splines and are known as Bezier curves. A Bezier curve is a line or “path” used to construct vector graphics. It is composed of two or more control points that define the line's size and shape: the first and last points mark the beginning and end of the path, while the path's curvature is defined by the intermediate points. 

Bezier curves are used to create smooth curved lines, which are common in vector graphics. Since they are defined by control points, Bezier curves can be resized without losing their smooth appearance. 
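The maths behind a cubic Bezier curve can be sketched with de Casteljau's algorithm, which builds a point on the curve purely from repeated linear interpolation between the control points (the control points below are arbitrary examples of my own):

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t (0..1) using
    de Casteljau's algorithm: repeated linear interpolation."""
    def lerp(a, b, t):
        return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))
    # Three rounds of interpolation collapse 4 control points to 1 curve point.
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

# The first and last control points mark the start and end of the path;
# the two middle points pull the curve upward.
start, end = (0.0, 0.0), (3.0, 0.0)
mid = bezier_point(start, (1.0, 2.0), (2.0, 2.0), end, 0.5)
```

Note that the curve passes exactly through the first and last control points (t = 0 and t = 1) but is only attracted towards the middle ones, which is why editing those handles reshapes the curve without breaking its smoothness.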

(Author: Unknown, Last Updated: April 3 2014, Last Accessed: 29/01/2021) 



Picture source: https://knowledge.autodesk.com/support/maya-lt/learn-explore/caas/CloudHelp/cloudhelp/2015/ENU/MayaLT/files/Polygons-overview-Introduction-to-polygons-htm.html 

Edges are the sides of a polygon, and they are present in both models and primitives. More precisely, an edge is defined as a connection between two vertices. When a model or primitive is converted to an editable poly, it is easy to move these edges around to create something different from what you already have. Edges can also be modified using curves, such as the Bezier curves mentioned before. 


Picture source: https://www.youtube.com/watch?v=jTZs8bRoWxE 

Traditionally, a polygon is a plane figure bounded by a closed path, or circuit. It is made up of a finite sequence of straight line segments called its edges or sides, and the points where the edges meet are its vertices or corners (in 3D graphics we would simply call these the vertices). The full polygon is called the body, or element. In 3D programs the faces can also be altered, and the vertices, edges, faces and something called a border can all be edited on a polygon to construct the shapes and models you want. The simplest polygon you can construct in a 3D application is a triangle. Polygons are most frequently formed from the extrusion of a shape or spline. 


Picture source: https://www.turbosquid.com/Search/3D-Models/free/element 

An element is the combination of edges, vertices and polygons that produces a 3D model; each 3D model is an element of its own. On a basic 3D model, combining more than one element into one expands the model. The element is often referred to as the body, and it is normally the complete single model or primitive you are dealing with. Having each model as an individual element makes it far easier to select and change it without modifying the whole model. 


Picture source: https://www.flatpyramid.com/3d-models/characters-3d-models/3d-face/ 

A face is a closed series of edges: a triangle face has three edges and a quad face has four. A polygon is a coplanar set of faces. In systems which support multi-sided faces, polygons and faces are equivalent; however, most rendering hardware supports only 3- or 4-sided faces, so polygons are represented as multiple faces.  


Picture source: https://docs.blender.org/manual/en/latest/modeling/meshes/primitives.html 

In 3D modelling we have primitives: three-dimensional geometric shapes that are the basic building blocks used to create more complex geometric models. It is possible to create most of these primitive objects by lathing or extruding 2D shapes, but the majority of 3D modelling software packages, such as Maya, 3D Studio Max, Blender, Houdini, Lightwave, AutoCAD and many more, come with these primitive objects by default for speed and convenience.  

The most commonly used 3D primitives are cubes, pyramids, cones and spheres. Just like 2D shapes, artists can assign a resolution level to them in order to make them appear smoother by boosting the number of sides and steps used to define them. It is a classic beginner mistake to rely on unmodified primitives. 

Primitive objects can be manipulated in multiple ways: they can be moved, rotated, grouped, hidden and transformed from one shape into another. Each primitive object has property values which determine its appearance, behaviour and other features. 3D modelling software will typically have a window where the artist can change a primitive's properties to suit their needs. 


Picture source: https://www.cs.cornell.edu/projects/stitchmeshes/ 

Meshes are a collection of vertices, edges and faces in 3D software that describe the form of an object or model. The most common type is the polygon mesh, composed of a set of vertices, edges and faces that make up the structure of a polyhedral object. In general, the faces consist of triangles, quadrilaterals or other simple convex polygons, because that makes rendering easier; they may also be made of more complex concave polygons or polygons that contain holes. 

Objects built from polygon meshes must store different types of elements, including vertices, edges, faces, polygons and surfaces. Depending on the program being used, a renderer may accept only 3-sided faces, so polygons must be built from many of these. Many renderers, however, support quads and higher-sided polygons, or can triangulate polygons on the fly, making it unnecessary to store a mesh in triangulated form. Besides polygon meshes, there are other mesh representations, such as vertex-vertex, face-vertex, winged-edge and render-dynamic meshes. 
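The face-vertex representation mentioned above can be sketched in Python: vertices are stored once, and each face is just a tuple of indices into the vertex list (the quad split into two triangles below is an example of my own):

```python
# Face-vertex representation of a flat unit quad split into two triangles.
# Vertices are stored exactly once; faces index into the vertex list,
# so the two triangles share the vertices along the diagonal.
vertices = [
    (0.0, 0.0, 0.0),   # index 0
    (1.0, 0.0, 0.0),   # index 1
    (1.0, 1.0, 0.0),   # index 2
    (0.0, 1.0, 0.0),   # index 3
]
faces = [
    (0, 1, 2),   # first triangle
    (0, 2, 3),   # second triangle
]

def face_corners(face):
    """Resolve a face's vertex indices to actual coordinates."""
    return [vertices[i] for i in face]
```

Sharing vertices this way means moving vertex 0 deforms both triangles at once, and the mesh stores 4 vertices rather than 6, which is exactly the saving these representations are designed for.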


Picture source: https://www.lugher3d.com/component/joomgallery/cars-and-bikes-mental-ray-render/fiat-500-3d-model-for-maya-wireframe-731 

A wireframe model is a skeletal description of a 3D object. There are no surfaces in a wireframe model; it is composed of only points, curves and lines which represent the edges of the object. Wireframe models are generated by placing 2D objects in 3D space. Some 3D wireframe objects, such as 3D polylines and splines, are also supported. This method of modelling can be the most time-consuming, since each object that makes up a wireframe model must be drawn and positioned separately. 

Coordinate Geometry 

The Cartesian coordinate system is the system used in all 3D applications, providing the sense of operating in three-dimensional space. It is used to describe the physical dimensions of space: width, length and height. The French mathematician Rene Descartes first developed the system in 1637 in an attempt to combine algebra and Euclidean geometry, and his work has played an important role in the growth of analytic geometry, calculus and cartography; much later it also became fundamental to other things, such as 3D applications. 

2D Coordinate System 

Picture source: https://mathinsight.org/cartesian_coordinates 

The 2-dimensional Cartesian system is described by two axes, X and Y; X is the horizontal axis and Y is the vertical. Together these axes form the xy plane, and the arrows on the axes illustrate that they extend infinitely in each direction. This implies that, depending on its position, a value on the x- or y-axis can be positive (+) or negative (-). The point where these axes meet is the origin, labelled O, which represents the centre of the coordinate system. 

To locate a point on the xy plane relative to the origin, we assign it a value on the x-axis and then the y-axis, written in the form (x, y). 

3D Coordinate System 

Picture source: https://www.skillsyouneed.com/num/cartesian-coordinates.html 

In the early 19th century a third axis of measurement, named ‘z’, was added to the system. This dimension is referred to as the depth axis; it runs perpendicular to the xy plane and extends infinitely in both directions. It is this third axis that enables us to locate any point in 3-dimensional space. 

If we wanted to locate a point in 3-dimensional space, we would write it in the format (x, y, z), this time including the z value. 
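Once a point has (x, y, z) coordinates, basic measurements follow directly; for example, its distance from the origin O comes from the 3D Pythagorean theorem, as in this short Python sketch (the point chosen is an arbitrary example):

```python
import math

def distance_from_origin(point):
    """Distance of a 3D point (x, y, z) from the origin O,
    by the 3D Pythagorean theorem: sqrt(x^2 + y^2 + z^2)."""
    x, y, z = point
    return math.sqrt(x**2 + y**2 + z**2)

# (2, 3, 6) lies exactly 7 units from the origin, since 4 + 9 + 36 = 49.
p = (2.0, 3.0, 6.0)
d = distance_from_origin(p)
```

The same formula applied to the difference of two points gives the distance between them, which is how 3D software measures lengths in a scene.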


Picture source: https://all3dp.com/2/surface-modeling-cad-simply-explained/ 

Surface modelling is known to be a more advanced methodology for representing objects than wireframe modelling. Compared to wireframe modelling, surface modelling defines objects with far less ambiguity, though its capabilities are not as many or as advanced as those of solid modelling. The technique commonly involves conversions between different three-dimensional modelling styles. 

The surface modelling methodology makes use of B-spline and Bezier mathematical techniques when it comes to manipulating curves. One of the distinctive features of surface models is that, unlike solid models, they cannot be cut open. Also unlike solid modelling, where the geometry has to be accurate, the objects used in surface modelling can be geometrically inaccurate. 

Mesh construction 

Box modelling 

Box modelling is a 3D modelling technique commonly used by beginners to the 3D modelling world. It can help create finely detailed models through a methodical series of steps, and it is also useful for sketching out ideas to see whether they work or not.  

An artist or designer will start off with a low-resolution primitive object such as a cube, pyramid, cylinder or sphere. It is usually best to start with a box-like object such as a cube, because cubes are much easier to manipulate, having only 8 vertices in total, and it is simpler to create box-like shapes than rounded, circular ones.  

After the artist has picked their shape, they will start blocking out the overall shape and form of the object they want to model. They can do this through a number of methods, such as extruding, scaling or rotating the faces and edges to work towards the final model. The artist will then add detail to the shape by manually adding edge loops or through the technique of subdivision. Subdivision is where the artist divides each of the faces into smaller, more detailed faces, which makes the shape look smoother and less blocky. 

Here is an example of a sci-fi spaceship created by Luca Giammattei, a 3D artist who specialises in modelling and texturing for visual effects and real-time production. 

Picture source: https://discover.therookies.co/2020/09/12/3d-modelling-techniques-for-film-and-games/ 

Luca gives the tip “Don’t rush the blocking stage, what you come up with will be the foundation of the entire project.” 

He expands on this by saying: “Lining up the block out with the concept can be useful to match at 99% the proportions. If you chose a 3D concept this part is easier, but if it was reshaped during the painting process, aligning it can be tricky. Try to not mess with the focal length and to match at least one side of it. The majority of concepts don’t have a perfect perspective. 

I usually break the blocking stage in various refinements steps. Getting the main shapes in place at first is very important. You can block pretty much everything with basic primitives. Don’t worry too much if meshes co-penetrate, remember to use fewer polygons as possible. 

To block the cockpit, the body and the wings of the Manta 33 I used one sphere and 4 cubes. I didn’t put too much emphasis on secondary and tertiary shapes.” 

(Author: Luca Giammattei, Created: 2020/09/12, Last Accessed: 27/01/2021) 


Extrusion modelling 

Most 3D modelling software, such as Maya, 3D Studio Max, Blender, Houdini, Lightwave, AutoCAD and many more, comes with an extrude tool in the package. The extrude tool is a simple tool that modellers will often start with when shaping a model. It is applied to a set of faces, or one individual face, and generates new faces of the same size and shape connected to all the existing edges of that face. There are two ways in which a mesh can be manipulated by extrusion: an artist can either collapse a face into itself or extrude it outwards. 

Picture source: https://3d-ace.com/press-room/articles/polygonal-3d-modeling-techniques 

Two examples of extrusion are: 

A modeller can transform a primitive such as a pyramid with a 4-edged base into a more complex shape with more vertices. They could do this by extruding the pyramid's base downward in the negative Y direction, which would give the modeller four new vertical faces between the base and the cap of the model. This kind of extrusion can be seen when modelling a house, as in the picture above, or table legs. 

Another example is the extrusion of edges, which is mainly used in contour modelling. It works by duplicating an edge and then pulling or rotating it in any direction, together with a new, automatically created polygonal face that connects the two edges. 
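The basic face-extrusion step can be sketched in Python: duplicate the face's vertices and offset them along the face normal. The unit square and extrusion amount below are example values of my own, and a real tool would also build the connecting side faces between the old and new edges:

```python
def extrude_face(face_vertices, normal, amount):
    """Extrude a flat face outwards along its normal: duplicate its
    vertices, each offset by the normal scaled by the amount."""
    nx, ny, nz = normal
    return [(x + nx * amount, y + ny * amount, z + nz * amount)
            for (x, y, z) in face_vertices]

# Extruding a unit square upward by 2 units produces the top cap of a
# box; a modelling tool would then stitch side faces between the two rings.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
cap = extrude_face(base, (0.0, 0.0, 1.0), 2.0)
```

Passing a negative amount would push the new face inwards instead, which corresponds to the "collapse a face into itself" direction mentioned above.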

Use of primitives 

Picture source: http://what-when-how.com/3d-animation-using-maya/modeling-primitives-wireframes-surfaces-and-normals-essential-skills-3d-animation-using-maya-part-1/ 


In summary, this technique of starting with a basic primitive object as the starting point for an artist's model is known as primitive-up modelling: the basic shape of a primitive object is modified and its attributes are changed to turn it into a much more complex shape, or to keep it simple. 


Polygon Count 

A polygon is basically a flat two-dimensional shape with straight sides that are fully closed and joined up; a polygon can have any number of sides. In 3D modelling, polygons are combined to create the faces that stitch together a model; however complex or simple a model is, polygons will always be involved in creating whatever 3D model the artist desires. Polygon count is simply the number of polygons used in a model, or anything else created in 3D modelling software, and it is a very important factor to consider because it can become one of your greatest constraints if not kept in mind. 

This is because every polygon in your model is composed of points known as vertices (vertices is the plural of vertex). All of the vertex data gets stored in a contiguous block of memory referred to as a vertex buffer, while the information about the shapes they represent is either coded directly into the rendering program or stored in another block of memory called an index buffer.  

As you can see, when you have thousands of polygons just to represent one object, this takes up a lot of memory, which begins to slow the computer down. On top of that, with tens or hundreds of thousands of vertices, the processor has to work through each of them, which takes time and eventually shows up as a drop in responsiveness. This constraint of polygon count becomes even more important for models used as assets in games, as it is simply extra work for the computer to handle. 
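To get a feel for the numbers, here is a rough Python sketch of the memory a vertex buffer and index buffer might occupy. The 8-floats-per-vertex layout (position, normal, UV) and 16-bit indices are common but assumed here, not universal:

```python
def vertex_buffer_bytes(vertex_count, floats_per_vertex=8):
    """Rough vertex buffer footprint, assuming each vertex stores
    position (3), normal (3) and UV (2) floats at 4 bytes each."""
    return vertex_count * floats_per_vertex * 4

def index_buffer_bytes(triangle_count, bytes_per_index=2):
    """Index buffer footprint: 3 indices per triangle,
    16-bit (2-byte) indices assumed."""
    return triangle_count * 3 * bytes_per_index

# A hypothetical 1,000-triangle game unit with ~600 unique vertices:
vb = vertex_buffer_bytes(600)
ib = index_buffer_bytes(1000)
total = vb + ib
```

Even under these modest assumptions one unit costs tens of kilobytes, so a strategy-game scene with hundreds of units multiplies that quickly, which is why low per-unit polygon budgets matter.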

As an example, in simulation or strategy games such as Zoo Tycoon and The Battle for Middle-earth, there are tons of character units within just one scene, so it is extremely important to keep the polygon count on the lower end, around 500–1000 polygons per unit. This is because the game needs to minimise frame drops so the gameplay is as smooth as possible for the end user, and so the PC does not unexpectedly crash due to the sheer amount of resources being used. 

Picture source: https://gamefabrique.com/games/the-lord-of-the-rings-the-battle-for-middle-earth-2/ 

Picture source: https://www.reddit.com/r/aoe2/comments/bmfuy6/zoo_tycoon/ 

Another example is an MMORPG such as World of Warcraft. A game like this would typically have a polygon count in the range of 2000–6000, because it focuses mainly on the single character the player is controlling, so they want that character to be detailed to a higher degree; otherwise it would look boring and disinterest the player.  

However, World of Warcraft and other MMORPGs have also had models that are much lower, around 100 polygons, as well as models that surpass 7500 polygons. This shows that games are not restricted to a specific polygon count range, but they try to stay within what the capabilities of modern PCs allow. 

Picture source: https://www.forbes.com/sites/hnewman/2019/05/15/world-of-warcraft-classic-feels-like-a-totally-different-game/?sh=12f6e74e5f77 

File size 

File size, which is simply the size of files such as 3D models and scenes, is quite important for developers to be mindful of so they can maintain a decent processing speed and keep the disk space required to a minimum. File sizes are commonly measured in these units: KB, MB, GB and TB. In terms of how many bytes each of these units holds, there is exponential growth from KB up to TB, as seen below: 

  • 1 KB = 1,024 Bytes 
  • 1 MB = 1,048,576 Bytes 
  • 1 GB = 1,073,741,824 Bytes 
  • 1 TB = 1,099,511,627,776 Bytes 
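These conversions can be expressed as a small Python helper, using the binary definition above where each step up is a factor of 1024:

```python
def to_bytes(value, unit):
    """Convert a size in KB/MB/GB/TB (binary units, 1 KB = 1024 bytes)
    to a byte count."""
    powers = {"KB": 1, "MB": 2, "GB": 3, "TB": 4}
    return value * 1024 ** powers[unit]

# e.g. the footprint of a hypothetical 20 GB game install, in bytes.
game_size = to_bytes(20, "GB")
```

The helper reproduces the table exactly, which makes it easy to compare, say, a 72 GB install against a 500 GB drive in consistent units.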

Disk space is determined by two factors: the hard drive or solid state drive that was bought and how much space it can hold before files need to be removed, and the size of the files stored on it, because the bigger the files, the less remaining disk space there will be and the slower the computer will operate. This is one of the main reasons why file size is a constraint and needs to be handled mindfully, not bloating up the end user's computer with large files that hog space on their drive and ultimately slow it down. Also, when the user goes to install a game, for example, the larger the files, the longer the installation will take; on top of that, the end user may not have a good, stable internet connection, potentially taking them a few days just to download a 20-gigabyte game. 

For example, World of Warcraft, an MMORPG with lots of 3D models and scenes, states on its website that it requires 100 gigabytes of space, preferably on a solid state drive, which is a ridiculous amount considering that the average computer user only has between 250–500 GB on their PC; they also recommend a solid state drive over a hard disk drive due to the performance issues a user may experience with a hard disk drive. Compare that to a game such as VALORANT, which requires only 10 gigabytes of space on either a hard disk or solid state drive, ten times less. 

Picture source: https://arstechnica.com/gaming/2020/04/valorant-closed-beta-the-tactical-hero-shooter-i-never-knew-i-wanted/ 

Rendering Time 

Rendering time can vary depending on factors such as the length and size of the model or animation being dealt with during 3D production; it can also vary between individual frames. Rendering can take anywhere from a few seconds to minutes, hours, days, and in extreme scenarios even weeks.  

A smaller, more amateur animation with less complex models would generally not take too long to render and save. However, for bigger, more professional companies and studios such as Pixar, Disney and DreamWorks, which deal with very complex, finely detailed and realistic animations, rendering time becomes a genuine concern, especially as these films last hours as well. These big companies, though, have very fast computers with the best processors, graphics cards and storage drives, optimised for rendering tasks, in order to make the rendering process much faster and to reduce the chance of errors creeping in.  

Rendering is not limited to animations; it also extends into real-time 3D rendering on the web, as mentioned before, which requires the machine to render and display content in real time, making the speed of this rendering depend on the computer's specifications. Rendering also extends into video games, on consoles and mobiles as well as computers.  

Typically, as these games boot up there will be a loading screen where the game takes time to render out certain scenes and models ahead of time, making them available for when the player comes to interact with them in real time, which smooths and enhances the whole gameplay experience. 

Picture source: https://metro.co.uk/2017/11/01/why-does-gta-v-take-so-long-to-load-7041927/ 

For example, GTA 5 is a game that requires 72 GB of disk space, so you can already tell that the loading screen will take quite a while; for most people it takes around 5–20 minutes for the game to load and render all the models and scenes, depending on the machine's processor speed, graphics card, memory and type of storage device. This is due to GTA 5 being an extremely detailed open-world game.  

An answer from ‘Frazzi Li’ on Quora goes into more detail as to why the GTA 5 loading screen generally takes so long: 

“For starters it’s going to unencrypt the .rpf archives into which the game is split, these archives are named from “x64a.rpf” all the way to around “x64x.rpf” , I’m not sure if they go all the way to Z now, haven’t looked at V files in quite some time 

Anyway these rpf files contain different individual files within them , and most contain about 20 different rpf archives inside them (rage package file (I think)) 

Now when you’re loading in , it needs to decide which vehicles to spawn it, and find the respective texture for those, the texture dictionaries are in a .YTD format , where they’re all stored as .DDS and compression using DXT mostly 

Now the cat’s name in the files is for example adder.yft , and it’s textures are stored in adder.ytd , which means there’s minimal searching going on, and shared textures between vehicles are stored in YTD’s also, this shared texture means that it’s always in the RAM, and it’s loaded in always at startup, now it needs to load the terrain, which itself is split into parts, also contained .rpf archives, honestly it depends where you’re spawning in, if it’s somewhere like Trevor’s airfield, there’s not a lot of props compared to spawning at Venice beach , meaning the game doesn’t have to load as many props or objects, which are in stored in a different format, with embedded textures, this makes it more efficient to load them in, however one of these props is for example, the front side of the cinema , meaning that a few hundred of these objects have to be loaded in from different archives located around the 70Gb game so you can imagine… 

Then follow the scripts , players, physics, shadows , and graphics overall which are done using draw calls.” 

(Author: Frazzi Li, Created: July 26 2017, Last Accessed: 27/01/2021) 


3D Development Software 

3DS Max 

3DS Max is a computer graphics software package that is used for making 3D models, animations and digital images. It is one of the most well known packages within the computer graphics industry and is renowned for its robust toolset for 3D artists.

3DS Max is owned by a company called Autodesk, which also owns other popular computer graphics software such as Maya and AutoCAD, which I will discuss later.

3DS Max is most commonly used for modelling characters, creating animations and rendering photorealistic images of buildings and other objects. In terms of speed and simplicity, it is one of the best. The program is able to handle the various stages of the animation procedure, such as visualization, layout, cameras, modelling, texturing, rigging, animation, VFX, lighting and rendering.

As a result of 3DS Max’s efficient workflow and powerful modelling tools, it can save artists a significant amount of time. For example, studios producing popular TV commercials and film special effects often use it to generate graphics for use alongside their live action work; 3DS Max was used in the production of the movies Iron Man and Spider-Man 3.

Here is a blog from ‘Felipe Fierro’, a character artist who was 19 years old at the time of writing. He has been a big fan of Spider-Man since he was 4 years old, drawing pictures of the character ever since.

He first starts off by creating the base mesh in 3DS Max and then exports the model into another tool called Mudbox, where he can begin to sculpt the body and some folds in order to give it a ‘spandex’ texture. He then uses it to make the textures, such as the webbing and the spandex tile, which he mentions took a lot of patience. He notes that “it’s important to use a good maps resolution in order to get more detail”. He then sends it all back over to 3DS Max, uses the default renderer, and sets up a simple lighting rig with two front lights and one back light. Finally, he renders at a big format like HD in order to see all the work done on the model and textures.

(Author: Felipe Fierro, Created: 12 Jun 2014, Last Accessed: 28/01/2021) 


Many other industries also use 3DS Max to create graphics that are mechanical or organic in nature: engineering, manufacturing, education and medicine all make use of it for their visualization needs. The real estate and architectural industries also use 3DS Max to create photorealistic images of buildings in the design phase, as it helps their clients accurately visualize what they are paying for and make changes based on their needs.

3DS Max offers a modelling technique called ‘polygon modeling’, which is very common in game design. It allows artists a high degree of control over individual polygons, giving them a greater range of detail and precision in their models. After a model is finished, they can then use other tools to create the needed materials and textures to make their model stand out more. Adding surface details like colours, gradients and textures leaves them with a much higher quality render and more aesthetic game assets.
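To make the idea of a polygon mesh more concrete, here is a minimal sketch (in plain Python, not any package’s actual API) of how a mesh is typically represented: a list of vertices and a list of faces that index into them. The names and layout here are illustrative assumptions.

```python
# A polygon mesh is just vertices (x, y, z positions) plus faces,
# where each face is a tuple of indices into the vertex list.

# Eight corner vertices of a unit cube, like the primitive shape an
# artist would start from before manipulating it into something complex.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom four corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top four corners
]

# Six quad faces, each referencing four vertex indices.
faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),  # bottom, top
    (0, 1, 5, 4), (2, 3, 7, 6),  # front, back
    (1, 2, 6, 5), (0, 3, 7, 4),  # right, left
]

def translate(verts, dx, dy, dz):
    """Manipulate every vertex at once, moving the whole mesh."""
    return [(x + dx, y + dy, z + dz) for x, y, z in verts]

moved = translate(vertices, 0, 0, 2)  # lift the cube 2 units upward
print(len(vertices), len(faces))      # 8 vertices, 6 faces
print(moved[0])                       # (0, 0, 2)
```

Modelling tools in programs like 3DS Max ultimately do the same kind of thing: they edit the vertex positions and face lists, just with far richer operations.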

(Author: Josh Petty, Created/Last Updated: Unknown, Last Accessed: 28/01/2021) 



Maya 

Maya is another 3D computer graphics software package, used in the development of video games, animated films, TV series, 3D applications and visual effects. The software has great potential, allowing the artist to create finely detailed models that help give ultra-realistic 3D effects for the viewer.

As mentioned before, Maya belongs to Autodesk, which also owns 3DS Max and AutoCAD. However, the software previously belonged to a corporation named ‘Alias Systems’, until Autodesk acquired all rights to it. The software can operate on Windows, Mac and Linux, meaning it is platform independent.

Maya is usually used in the film industry, where artists create animated movies and cartoons and add any special effects they want to a video. The program gives a wide scope for generating models with convincing, lifelike 3D effects. An artist who learns Maya is able to work on the kinds of games that are currently trendy in the market, and can play the role of a cameraman in any type of short movie, TV serial, film or advertisement. The tool is a basic need for every graphic designer who wants to turn their 3D imaginings into pictures, or transform small video clips into a short film by adding dynamic effects and other touches.

Maya is a highly developed piece of software with advanced features for users who are very skilled at 3D modelling, allowing them to maximize their skills. These advanced features include:

  • Motion graphic features like additional MASH nodes, 3D type and motion graphic toolset 
  • 3D animation features such as Parallel rig evaluation, Geodesic Voxel binding, General animation tools, Time editor and more 
  • 3D modelling features such as UV editor, polygon modelling, open subdivision support 
  • Dynamics and effects such as interactive hair grooming, deep adaptive fluid simulation, adaptive aero solver in Bifrost and more. 

One of the big advantages of working in Maya is its faster performance: its ‘Cached Playback’ feature allows the artist to check their animation directly in the viewport. Maya is also very responsive, which helps speed up the process for artists, as they can see their work from many different angles in real time, making their work much easier.

The other advantages are that completed work can be checked sooner, saving time because it renders faster; it has plenty of dynamic effects, which make the experience more realistic and immersive; and the video editing tools are versatile, in the sense that you can include the clips you want to add and filter the effects.

Top companies that use Maya in their work include: Accenture, Ark Info Solutions Pvt. Ltd, Visual Connections, Namco, Epic, Polyphony, Core, Square, EA, Nihilistic and many more.

Here is a blog from designers who worked on Planet of the Apes using Maya:

Florian says: “Maya is the foundation of our 3D pipeline. We’re actively testing Maya 2017 on other projects, but we had already locked into the 2016 version for this latest Apes feature – and that familiarity was helpful to us. Almost all of our polygonal modeling happens in Maya, as does all of the ape topology. 

For hair and grooming, we use our proprietary Wig software, running in Maya. The flexibility Maya offers in that regard is wonderful, since we can just design Maya plugins (like Wig, Lumberjack, and Totara) to meet our needs, and that’s exactly how we handled tricky stuff like the fur, or the trees and vines that form the backdrop of so much of the movie. We rely a lot on custom Python and open Maya tools, and that versatility is huge for us. “ 

(Author: Unknown, Last Updated: 22 Feb 2018, Last Accessed: 28/01/2021) 



Blender 

Blender is a free, open-source program for 3D computer graphics. It supports nearly every aspect of 3D development with its strong foundation of modelling capabilities and its robust texturing, rigging, animation and lighting tools, among a variety of others for creating 3D models. The program was developed by the Blender Foundation, a nonprofit organization created in 2002. In 2007 the spin-off Blender Institute was formed; it now hosts the foundation and serves as a home base for continued development and creative projects.

Despite Blender being a free tool, it is still powerful and valuable for a wide range of users, from the beginner hobbyist to the seasoned professional. For example, NASA even uses it for a lot of its public models. However, because it is continually refined by advanced users, it can present somewhat of a learning curve for total beginners. At the heart of Blender is the aim of being an accessible tool that lets people use their creative thinking to build whatever they want.

Blender is filled with lots of useful tools for artists, although some tools are more relevant to beginners than others. These tools include: 3D modeling, UV unwrapping, texturing, raster graphics editing, rigging and skinning, fluid and smoke simulation, particle simulation, soft body simulation, sculpting, animating, match moving, rendering, motion graphics, video editing, and compositing. These are all extremely important tools for the development of 3D applications. 

Blender’s interface is great for beginners as it is simple and straightforward: all your main tools are located on the left, all your properties and options on the right, and the main controls at the bottom. You also have the option to modify and change your viewports in a variety of ways; for example, if you have a dual monitor setup you can split the interface between the two monitors to give yourself more room to work with. Later versions of Blender come with a greatly improved system for entering quick commands, and radial menus that let you reach the tools you need in as few clicks as possible, making it faster and easier to use.

Picture source: https://blender.stackexchange.com/questions/16039/can-the-mesh-select-mode-menu-converted-to-a-pie-menu-in-blender-2-72 

At this website here you can download the various 3D models that NASA uses as mentioned before and play around with them yourselves in Blender: https://nasa3d.arc.nasa.gov/models 

Picture source: https://www.nasa.gov/content/3d-models 


Houdini 

Houdini is another 3D computer graphics program, developed by Side Effects Software, a twenty-five-year-old company based out of Toronto. Houdini was designed for artists who work in 3D animation and VFX for film, TV, video games and virtual reality. Side Effects’ former flagship product was called ‘PRISMS’, a package of 3D graphics tools that served as the basis for the development of Houdini.

Houdini differs from most other 3D animation software in that it uses a node-based procedural workflow, which makes it much easier to explore iterations as artists refine their work. Programs like Maya or Blender store changes in a user history, which makes it trickier to return to a previous version of a piece of work; Houdini’s node-based approach, which allows for multiple iterations, is what makes it easier for artists to make changes and further develop their animations and effects.

Houdini is best known for its advanced dynamic simulation tools, which allow artists to generate extremely realistic visual effects. The software features an efficient workflow well suited to small studios and individual artists, and new efficiencies in the software enable artists to achieve state of the art effects on less advanced hardware. Assets in Houdini are mainly created by connecting a series of nodes; the advantage of this system is that it enables artists to create detailed objects in fewer steps compared to other programs.
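To illustrate the node-based idea, here is a toy sketch in plain Python (not Houdini’s real `hou` API — the class and names are invented for illustration): each node transforms the output of the node wired into it, so tweaking an upstream parameter automatically changes every downstream result when the chain is re-evaluated.

```python
# A toy node graph: changing an upstream node and "cooking" again
# updates all downstream results, unlike a linear undo history.

class Node:
    def __init__(self, func, upstream=None, **params):
        self.func = func          # the operation this node performs
        self.upstream = upstream  # the node feeding into this one
        self.params = params      # tweakable parameters

    def cook(self):
        """Evaluate the chain from the source node down to here."""
        data = self.upstream.cook() if self.upstream else None
        return self.func(data, **self.params)

# A simple two-node chain: generate some points, then scale them.
source = Node(lambda _, n: list(range(n)), n=4)
scale = Node(lambda pts, factor: [p * factor for p in pts],
             upstream=source, factor=10)

print(scale.cook())        # [0, 10, 20, 30]

# Editing the upstream node's parameter and re-cooking updates the
# downstream result automatically - this is what makes iteration easy.
source.params["n"] = 2
print(scale.cook())        # [0, 10]
```

Houdini’s actual networks are of course vastly richer, but the principle of recomputing downstream nodes from editable upstream parameters is the same.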

The program comes with a powerful rendering engine named Mantra, but it also supports third-party rendering engines, for example RenderMan. Houdini has an open environment and allows scripting through a variety of APIs, with Python being the most common choice for most packages. Internal scripting enables users to automate certain tasks as well as create custom tools and plugins, making Houdini an extremely versatile piece of software.

Houdini is used by major VFX companies including: Walt Disney Animation Studios, Pixar, DreamWorks Animation, Double Negative, ILM, MPC, Sony Pictures Imageworks and more. 

Picture source: https://conceptartempire.com/what-is-houdini-software/ 


Picture source: https://www.awn.com/news/lightwave-3d-offers-free-90-day-lightwave-trial 

Lightwave 

Lightwave is another 3D computer graphics software package, developed by an organization named NewTek. Lightwave has been used in various places including films, television, motion graphics, digital matte painting, visual effects, video game development, product design, architectural visualization, virtual production, music videos, pre-visualization and advertising.

Lightwave is used for rendering 3D images, both animated and static ones. The program comes with a fast rendering engine that supports features that are more advanced such as: caustics, radiosity, realistic reflection and 999 render nodes.  

The 3D modelling component that Lightwave is packaged with supports both subdivision surfaces and polygon modelling. The animation component contains features like forward and inverse kinematics for character animation, dynamics and particle systems. Lightwave, just like Houdini, enables programmers to expand the tool’s capabilities: it includes an SDK offering Python, LScript and C language interfaces, so the tool can be customized to suit individual needs with custom plugins, effects and other features.

Companies that currently use Lightwave include: Viacom, Amazon, Tencent, PETERSON, RFA Engineering, Lockheed Martin and many more. 


Picture source: https://www.autodesk.co.uk/products/autocad/included-toolsets/autocad-architecture 

AutoCAD 

AutoCAD is a piece of software that allows users to professionally create and edit 2D geometry and 3D models with solids, surfaces and objects. The name AutoCAD derives from ‘Auto’, referring to Autodesk, the company that owns it alongside the other popular 3D programs 3DS Max and Maya, and ‘CAD’, which stands for ‘Computer Aided Design’. The great variety of editing possibilities it offers is what makes it one of the most well known CAD packages, and hence a tool widely used by engineers, architects, industrial designers and others.

The very first version of AutoCAD came with only a modifiable drawing and a small amount of functionality. However, even though the software was so simple at the time, it was a real revolution, as it enabled artists to replace traditional hand drawings with digital ones. It was also not designed for 3D design, being capable of only two-dimensional modelling.

The program is available on Windows and Mac, and the supported programming APIs include VBA, AutoLISP, Visual LISP, ActiveX Automation, ObjectARX and .NET. It comes down to the experience and programming needs of the user to determine which type of interface will be most suitable for them.

In AutoCAD there are four types of 3D modelling. One is solid modelling, where the artist can work with different masses; another is surface modelling, which gives the artist precise control of curved surfaces; the third is wireframe modelling, where different models and modifications can be made by creating a three-dimensional structure to use as reference geometry; and lastly mesh modelling enables the artist to freely sculpt shapes, smooth them and create folds. These 3D models can be exported in the .STL format, which makes it possible for them to be 3D printed.
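To show what a 3D-printable export actually contains, here is a minimal sketch of the ASCII variant of the STL structure: each triangle (“facet”) lists a normal vector and exactly three vertices. This is an illustrative writer written for this example, not AutoCAD’s own export code.

```python
# Build the text of a tiny ASCII STL file from a list of triangles.

def stl_ascii(name, triangles):
    """triangles: list of ((nx, ny, nz), [three (x, y, z) vertices])."""
    lines = [f"solid {name}"]
    for normal, verts in triangles:
        lines.append("  facet normal %g %g %g" % normal)
        lines.append("    outer loop")
        for v in verts:                       # STL facets are always triangles
            lines.append("      vertex %g %g %g" % v)
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# One right-angled triangle lying in the XY plane, normal pointing up.
tri = ((0, 0, 1), [(0, 0, 0), (1, 0, 0), (0, 1, 0)])
text = stl_ascii("demo", [tri])
print(text.splitlines()[0])   # solid demo
```

A real model would simply contain thousands of these facet blocks; 3D printers’ slicing software reads them back into triangles in exactly this way.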

Cinema 4D 

Picture source: https://www.domestika.org/en/courses/293-creation-of-characters-with-zbrush-and-cinema-4d/units/1406-let-s-make-our-first-character/lessons/5556-materials-lights-and-render-in-cinema-4d-2 

Cinema 4D is an all-purpose, comprehensive 3D software package made by the organization ‘Maxon’, which is based in Friedrichsdorf, Germany. Cinema 4D has been around for quite a while now, which has allowed the program to mature into a well-rounded, stable piece of software. It is used extensively in architectural design, product visualization, the medical industry, film and TV, advertising and the games industry.

One of its striking features is its Motion Graphics module named MoGraph and many other high level features. Cinema4D is also capable of procedural and polygonal modeling, materials, sculpting, UV Edit, MoGraph, animation, body paint 3D, camera, lighting, rendering, character rigging and Xpresso. 

The thing that makes Cinema 4D stand out in comparison to competitors such as Maya, 3DS Max, Blender and Houdini is its learning curve. Cinema 4D is very easy to learn, as it features an intuitive interface, a logical workflow, and lots of resources, tutorials and courses for learning the basics. Because its interface is generally much more intuitive than the other 3D applications, it enables beginners to get up and running with 3D concepts in little to no time.

Unlike most of the 3D software out there, Cinema 4D has its own programming language, named ‘C.O.F.F.E.E’, which can be used to develop platform-independent plug-ins. There are also an additional three software packages that Maxon have distributed: the XL bundle, which includes NET Render, PyroCluster, MOCCA, Thinking Particles and Advanced Render, and lastly the Studio bundle, which includes all of the modules.

Due to its increase in popularity, film studios have begun to use Cinema 4D effectively in the development of several famous films, such as The Chronicles of Narnia, The Girl With The Dragon Tattoo, The Polar Express and Open Season.

Picture source: https://www.pinterest.com/pin/40391727877949141/ 

Softimage XSI 

Picture source: https://www.pinterest.com/pin/706431891520566963/ 

Softimage XSI, otherwise known as Autodesk Softimage, is a discontinued 3D computer graphics program, with the last release being in 2015. It was used for producing 3D models, 3D computer graphics and computer animation. It was predominantly used in the film, advertising and video game industries in order to generate computer-created characters, objects and environments.

Autodesk Softimage contains various tools that are often used in computer graphics. The modelling tools allow the generation of polygonal or NURBS models, subdivision modelling works directly on the polygonal geometry, and all modelling operations are stored on the construction history stack, allowing artists to easily return to previous versions of their work.

Control rigs are generated using bones with automatic inverse kinematics, constraints and specialized solvers such as the spine or tail. There are also animation features such as layers and a mixer which enables the combination of animation clips non-linearly, just like modeling, animation operators are stored in their own separate construction history stack which allows users to change the underlying geometry of characters or objects that have been animated. 

It also has an FX tree, a built-in node-based compositor that can directly access image clips used in the scene. The FX tree can also apply compositing effects to image clips present in the fully rendered scene, which allows Softimage to render scenes that use textures modified in various ways within the same scene.

Just like the other 3D applications, Softimage comes with an extensive API and scripting environment that users can use to customize the software to their needs; the scripting languages that can be used are C#, VBScript, JScript and Python. In its time it was used in a few movie projects such as Predators, District 9 and Thor.

File formats 

3ds file 

Picture source: https://www.iconfinder.com/icons/229197/3ds_file_format_max_type_icon 

A file with a .3ds extension is a 3D Studio mesh file, which can be opened in Autodesk 3D Studio Max. Autodesk 3D Studio has used the 3DS file format since the 1990s, and has since evolved into 3D Studio Max for working with animation, modelling and rendering. A 3DS file contains data for the 3D representation of images and scenes, and is one of the most popular file formats for 3D data import and export.

It takes into consideration information like mesh data, lighting information, camera locations, smoothing group data, viewport configurations, bitmap references and attributes to form the vertices and polygons needed for a scene to be rendered. 3DS is a binary file format which follows a predefined structure for the storage and retrieval of data; this binary format makes 3DS files faster to read and smaller than text-based file formats. The data inside a 3DS file is stored as ‘chunks’.
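To make the chunk idea concrete, here is a small sketch of the layout: each chunk begins with a 2-byte ID and a 4-byte length (which includes the 6-byte header itself), all little-endian, followed by its data or sub-chunks. 0x4D4D is the real 3DS main chunk ID; the inner chunk ID and payload below are just illustrative.

```python
import struct

def make_chunk(chunk_id, payload):
    """Pack one chunk: <ID: uint16> <length: uint32> <payload>."""
    return struct.pack("<HI", chunk_id, 6 + len(payload)) + payload

def read_chunk(data, offset=0):
    """Unpack the header at `offset`; return (id, length, payload)."""
    chunk_id, length = struct.unpack_from("<HI", data, offset)
    payload = data[offset + 6 : offset + length]
    return chunk_id, length, payload

inner = make_chunk(0x0002, b"hi")   # a small illustrative data chunk
outer = make_chunk(0x4D4D, inner)   # 0x4D4D is the 3DS main chunk ID

cid, length, payload = read_chunk(outer)
print(hex(cid), length)             # 0x4d4d 14
print(read_chunk(payload)[2])       # b'hi'
```

Because every chunk declares its own length, a reader can skip any chunk type it does not understand, which is part of what makes the format compact and fast to parse.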

MB file 

Picture source: https://fileinfo.com/extension/mb 

An .mb file is a project file created with Maya, a 3D modelling and animation program; it stands for ‘Maya binary’. These Maya files are stored as binary scenes with the .mb extension. All of the textures, 3D models, animation data, lighting effects and other properties are stored in the .mb format, as opposed to Maya’s regular ASCII format, which has the .ma extension. The advantage of an .mb file is that the binary format is smaller and in some cases quicker to load, whereas the advantage of the .ma extension is that the ASCII text can be opened by any text editor and read just like a plain file. These files can be accessed on any operating system that can run Maya (Windows, Mac and Linux), so the format is platform independent.

LWO file 

Picture source: https://fileinfo.com/extension/lwo 

A file that ends in the .lwo file extension can be opened by the Lightwave software; it stands for ‘lightwave object’. Lightwave is a 3D computer graphics program used for 3D modelling, animation and rendering. The file stores the relevant information, such as the vertices, polygons and surfaces which describe an object’s shape and appearance; it can also contain references to image files that were used for the object’s textures. There is also the .lw file extension, a binary format file that can potentially be smaller and faster to load.

C4d file 

Picture source: https://www.svgrepo.com/svg/153019/c4d 

The C4D file, which stands for ‘Cinema 4D’, is a file extension primarily associated with the professional 3D computer graphics software Cinema 4D, developed by MAXON Computer GmbH. The C4D file format stores relevant information such as the scene, which includes one or more objects with a position, rotation, pivot points, meshes and animation information. C4D files can also be exported to image editing programs like Illustrator and Photoshop, and to video editing software such as Final Cut Pro and After Effects.

DXF file 

Picture source: https://www.lifewire.com/dxf-file-4138558 

Files with the DXF extension, which stands for Drawing Exchange Format, were first introduced in 1982. They were developed by Autodesk as a type of universal format for storing computer aided design models.

The purpose of this is to allow files stored in this format to be accessed by many different 3D modelling programs, so they can be imported or exported without hassle. DXF files are widely used in CAD software because they are a text-based ASCII format, which by default makes them much easier to integrate into these types of software. You can open DXF files with any software that supports them, for example Autodesk Viewer, eDrawings Viewer, TurboCAD, Adobe Illustrator, CorelCAD, Design Review and many more.
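The reason DXF is so easy to integrate is visible in its structure: the file is just alternating lines of a numeric “group code” and a value. The fragment below is a hypothetical minimal excerpt (not a complete valid DXF file), and the parser is an illustrative sketch.

```python
# A few DXF-style group-code/value pairs describing a line entity:
# group code 0 names the entity type, 8 is the layer name, and
# 10/20 and 11/21 are the start and end point x/y coordinates.
sample = """\
0
LINE
8
Walls
10
0.0
20
0.0
11
5.0
21
2.5
"""

def parse_pairs(text):
    """Return the (group_code, value) pairs that make up the file."""
    lines = text.splitlines()
    return [(int(lines[i].strip()), lines[i + 1].strip())
            for i in range(0, len(lines) - 1, 2)]

pairs = parse_pairs(sample)
print(pairs[0])   # (0, 'LINE')
print(pairs[1])   # (8, 'Walls')
```

Because the whole file reduces to these simple pairs, almost any program can read or write DXF with only a few lines of code, which is exactly why so many CAD tools support it.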

OBJ file 

Picture source: https://all3dp.com/1/obj-file-format-3d-printing-cad/ 

The .obj file extension, which can also be referred to as a ‘Wavefront 3D object file’, was developed by Wavefront Technologies. The file format is mainly used for a 3D object consisting of 3D coordinates, such as polygons, lines and points, along with texture maps and other object data. It is a standard 3D image format which, when exported, can be opened by various different 3D image editing programs.

These object files can either be stored in the ASCII format (.obj) or in the binary format (.mod). The binary format does not store the colour definitions for faces, whereas the ASCII plain-text structure can lead to huge OBJ file sizes if the stored 3D objects are complex and large.

Files stored with the .obj extension can be opened by software that supports it, such as Autodesk Maya, Blender and MeshLab, on all platforms (Windows, Mac and Linux), making it platform independent.
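Because the ASCII .obj format is plain text, a minimal reader is easy to sketch: “v” lines hold vertex coordinates and “f” lines hold 1-based vertex indices for each face. Real OBJ files also carry normals, texture coordinates and material references, which this illustrative sketch ignores.

```python
# A tiny OBJ file describing a single triangle.
sample = """\
# a single triangle
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""

def parse_obj(text):
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0] == "#":
            continue                       # skip blanks and comments
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # "f" entries may look like "1/2/3"; the part before the
            # first slash is the (1-based) vertex index.
            faces.append(tuple(int(i.split("/")[0]) - 1 for i in parts[1:]))
    return vertices, faces

verts, faces = parse_obj(sample)
print(len(verts), faces)   # 3 [(0, 1, 2)]
```

This simplicity is also the format’s weakness mentioned above: every coordinate is spelled out as text, so large, complex models produce very big files.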
