What is 3D modelling?
Three-dimensional (3D) modelling provides a visual concept of what a product, building or other asset will look like. It is used in many different areas, from architecture to toy making and beyond.
It is the process of developing a mathematical representation of the surfaces of an object, which can be inanimate or living. A model can be created manually, using specialised 3D production software that allows polygonal surfaces to be created and deformed, or by scanning a real-world object into a set of data points that represent it digitally. 3D modelling software is a class of 3D computer graphics software used to produce 3D models; individual programs of this class are called modelling applications or modellers.
There are two primary types of 3D model used in the film and games industries: non-uniform rational B-spline (NURBS) models and polygonal models. The main differences between the two are the way they are created and manipulated.
Applications of 3D
3D modelling is used across many industries, including engineering, architecture, entertainment, film, special effects, animation and gaming, interior design and commercial advertising. It is also used in the medical industry for interactive representations of anatomy, and in educational settings to help teach subjects such as chemistry. CAD/CAM software is used in manufacturing because it allows parts to be constructed, assembled and tested for functionality before large-scale manufacturing takes place.
In the games industry, 3D models take the form of characters, weapons and various other assets. 2D animation can be costly, as each frame must be drawn individually; 3D animation is often quicker and more cost effective, thanks to tools such as motion capture.
In the film industry, 3D modelling is used in stage and set design, to create sets for visual effects and to produce effects such as explosions in a safer environment, reducing the need for specialist crews and dangerous practical work. A 3D-modelled environment is usually cheaper to create and has fewer restrictions than a physical set.
In architecture, 3D modelling is used to show clients the finished design of a building, or to walk them through a design before construction commences.
3D modelling is in constant demand because it provides a way to develop and demonstrate a design without wasting resources: a product can be shown to others without actually having to make it.
Displaying 3D polygon animations
To display 3D polygon animation, you need an application programming interface (API) such as Direct3D (part of DirectX) on Windows. DirectX normally comes pre-installed on a Windows PC, but there are other options such as OpenGL, an open standard that provides similar functionality. These APIs take the geometry produced by an application and have the graphics hardware draw it on screen.
Vulkan is a relatively new API, released in 2016 by the Khronos Group. It is derived from AMD's Mantle API, which AMD donated to Khronos, and was designed as a more modern, lower-overhead successor to OpenGL.
Application Programming Interface
An application programming interface (API) is a way for software to communicate with other software. A good API makes it easier to develop a program by providing building blocks that the programmer puts together, allowing specific software or drivers to be used without having to write that code yourself.
The design of an API has a significant impact on how it is used. A well-designed API provides only the tools a developer would expect, creating a standard that everyone can build on, which makes life easier for developers and gives users a more consistent and familiar experience.
APIs appear in many contexts; for example, a games console's controller API lets a game read inputs from the controller without the developer having to write low-level code for the hardware.
DirectX and OpenGL are APIs that sit between graphics applications and the graphics card. They allow the same application code to run on hardware from AMD, Intel, NVIDIA and others, eliminating the need for the game developer to write separate code for each graphics card.
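As a rough illustration of that idea, the sketch below (in Python, with made-up backend names rather than real DirectX or OpenGL bindings) shows an application written against one abstract interface while different backends handle the hardware-specific work.

```python
# A minimal sketch, not a real graphics API: the game code targets one
# abstract Renderer interface, and swappable backends play the role that
# DirectX or OpenGL drivers play for real hardware.
from abc import ABC, abstractmethod

class Renderer(ABC):
    """Abstract rendering API that the game code programs against."""
    @abstractmethod
    def draw_triangle(self, vertices):
        ...

class FakeOpenGLBackend(Renderer):       # hypothetical backend for illustration
    def draw_triangle(self, vertices):
        print("OpenGL-style backend drawing", vertices)

class FakeDirect3DBackend(Renderer):     # hypothetical backend for illustration
    def draw_triangle(self, vertices):
        print("Direct3D-style backend drawing", vertices)

def game_frame(renderer: Renderer):
    # The game only knows about the Renderer interface,
    # not which graphics card or driver sits underneath.
    renderer.draw_triangle([(0, 0), (1, 0), (0, 1)])

game_frame(FakeOpenGLBackend())
game_frame(FakeDirect3DBackend())
```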
Direct3D targets the Microsoft Windows platform. The OpenGL API is an open standard, which means that hardware makers and operating system developers can freely create an OpenGL implementation as part of their system. OpenGL implementations exist for a wide variety of platforms; most notably, it is the dominant graphics API on Unix-like systems.
In the earliest days of 3D-accelerated gaming, performance and reliability were key factors and several 3D accelerator cards competed against each other, with software written for a specific brand of graphics card. Over time, however, OpenGL and Direct3D emerged as software layers above the hardware, and competition between the two grew as game developers chose one or the other. In the console world, proprietary native APIs are dominant, with some consoles providing an OpenGL wrapper around the native API, e.g. the PS3. The original Xbox supported Direct3D 8.1 as its native API, while the Xbox 360 supports a version of DirectX 9. Most console developers prefer to use the native API for each console to maximise performance, so OpenGL and Direct3D comparisons are relevant mostly for PC platforms.
Graphics Pipeline
When a 3D model has been created, the graphics pipeline is the process, carried out largely by the GPU, of turning that model into what is shown on the screen. It is commonly broken down into three main stages: application, geometry and rasterization, the output of which is what you see on your display. APIs such as DirectX or OpenGL are used to drive this process.
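The sketch below is a heavily simplified, illustrative take on the geometry and rasterization ideas: a 3D vertex is projected onto a 2D screen plane, then mapped to pixel coordinates. The function names, resolution and numbers are assumptions for the example, not part of any real API.

```python
# Simplified sketch of two pipeline ideas: perspective projection (geometry
# stage) and mapping normalised screen coordinates to pixels (the positions
# the rasterizer would fill in).

def project_vertex(x, y, z, focal_length=1.0):
    """Perspective projection: divide by depth so distant points converge."""
    return (focal_length * x / z, focal_length * y / z)

def to_pixel(px, py, width=640, height=480):
    """Map normalised screen coordinates (-1..1) to pixel coordinates."""
    sx = int((px + 1) * 0.5 * (width - 1))
    sy = int((1 - (py + 1) * 0.5) * (height - 1))  # flip y: screen origin is top-left
    return sx, sy

vertex = (0.5, 0.25, 2.0)          # a point in 3D camera space
screen = project_vertex(*vertex)   # geometry stage
print(to_pixel(*screen))           # where rasterization would place it
```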
Modelling
Three common approaches are used to create 3D models: spline (or patch) modelling, box modelling and poly modelling.
Box modelling is the most common way of creating polygon models. A primitive shape is used as the basis of the final model, and additional geometry is created by cutting, connecting and extruding edges, repeating these steps until the final form is reached.
Spline modelling, also referred to as patch modelling, was the first type of 3D modelling. It allows a curve to be created with the use of control points. This form of modelling is best for objects that will not be animated, as spline models require a number of modifications to be suitable for the animation process. Cars, furniture and real-estate models are often created using spline modelling.
Poly modelling is also referred to as edge extrusion. Starting from a small set of points that are gradually built upon, the model is created from the bottom up, making it one of the most precise techniques.
Extrusion modelling is the process of extruding and moving a flat plane around until it resembles a character or object; it is less widely used because it is very time consuming.
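To make the extrusion idea concrete, the sketch below shows only the vertex arithmetic behind a single extrusion step: a flat face's vertices are copied and pushed out along the face normal. The shapes and distances are made up for illustration; real modelling packages do this interactively and also build the connecting side faces.

```python
# Minimal sketch of extrusion: offset a copy of a face's vertices along the
# face normal. The side walls between the old and new vertices would become
# the new geometry in a real modeller.

def extrude_face(face_vertices, normal, distance):
    """Return the new (offset) vertices created by extruding a face."""
    nx, ny, nz = normal
    return [(x + nx * distance, y + ny * distance, z + nz * distance)
            for (x, y, z) in face_vertices]

quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]   # a flat square
top = extrude_face(quad, normal=(0, 0, 1), distance=2.0)
print(top)  # the square pushed 2 units along +z, forming a box with the original
```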
Rendering Techniques
Rendering is the automatic process of generating an image from a 2D or 3D model. Models are held collectively in a scene file, a data structure describing the virtual scene: its viewpoint, geometry, textures, lighting and shading. The scene data is then passed to a rendering program, e.g. Blender's renderer, to be processed and output as a digital image.
The importance of generating realistic images from electronically stored scenes has increased over the last few years. The two most popular methods for calculating realistic images are radiosity and ray tracing.
Ray tracing is a rendering technique that follows rays from the eye of the viewer back to the light sources. It can produce a very high degree of visual realism and a wide variety of optical effects, such as reflection and scattering, but it is best suited to applications where the image can be rendered slowly ahead of time and is poorly suited to real-time applications such as video games.
Radiosity is a method of rendering the lighting in a scene by bouncing light off illuminated objects so that they in turn illuminate other objects, which can give diffuse lighting a more realistic look than basic ray tracing. Ray tracing, by contrast, simulates the natural flow of light as rays projected in straight lines through the scene.
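The core step a ray tracer repeats millions of times is testing whether a ray hits an object. The sketch below shows that one step for a sphere; it is only a fragment of a ray tracer, and the scene values are invented for the example. A full renderer would then trace further rays towards the lights and reflections.

```python
# Sketch of the core ray-tracing step: does a ray from the eye hit a sphere,
# and if so, how far along the ray? Assumes the ray direction is unit length.
import math

def ray_sphere_hit(origin, direction, centre, radius):
    """Return the distance along the ray to the first hit, or None if it misses."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = centre
    lx, ly, lz = ox - cx, oy - cy, oz - cz      # vector from sphere centre to ray origin
    b = 2 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None                             # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# A ray from the eye at the origin looking down -z, towards a sphere at z = -5
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```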
Rendering Engines
In web browsers, a rendering (or layout) engine is software that draws text and images on the screen: it takes structured content from a document, e.g. HTML, and formats it based on the given style declarations. Examples include Blink, Gecko, EdgeHTML and WebKit. In 3D graphics, a rendering engine plays a similar role for 3D scenes.
Blender, for example, is a complete open source 3D modelling package with its own integrated rendering engine, and it can import most common file types, including 3ds Max files. Rendering engines carry out the rendering process, turning finished 3D models into final images ready to export.
Distributed Rendering Techniques
Distributed rendering (or parallel rendering) is a technique that uses the processing power of other machines on a network to render scenes. Rendering complex scenes, such as those that arise in scientific visualisation, medical visualisation, CAD applications and virtual reality, can require a large amount of resources. Recent research has also suggested that parallel rendering can be applied to mobile gaming to decrease power consumption and increase graphical fidelity.
Examples include Chromium, an open source package that provides parallel rendering for existing applications; Equalizer, an open source rendering framework and resource management system for multipipe applications; and Golem, an open source decentralised network that currently supports rendering Blender scenes.
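The sketch below shows the basic split-the-frame idea in miniature, using Python's multiprocessing module: each worker process computes its own rows of a tiny image independently. The "rendering" here is just a placeholder gradient; real systems such as Chromium, Equalizer or a render farm distribute far more work, but along similar lines.

```python
# Minimal sketch of tile/strip-based parallel rendering: split the frame into
# rows and let worker processes compute them independently.
from multiprocessing import Pool

WIDTH, HEIGHT = 8, 8

def render_row(y):
    # Stand-in for real per-pixel work: a simple colour gradient.
    return [(x * 255 // (WIDTH - 1), y * 255 // (HEIGHT - 1), 0)
            for x in range(WIDTH)]

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        image = pool.map(render_row, range(HEIGHT))  # rows rendered in parallel
    print(len(image), "rows rendered")
```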
Lighting
Lighting refers to the simulation of light in computer graphics. Most scenes contain one or more light sources, positioned to make the objects in the scene look more realistic and to give the viewer a better idea of what the model will look like. 3D objects can look flat and unconvincing if lighting is not handled properly, whereas well-chosen lighting can significantly enhance a project. 3D designers and animators use several lighting techniques to light a scene.
There are several 3D lighting techniques, such as point (or omni) light, directional light, spot light, area light, volume light and ambient light.
A point or omni light casts rays in every direction from a single small source in the 3D environment; it has no specific shape or size. Point lights can add fill lighting to a scene and can simulate small light sources such as candlelight.
A directional light represents a very distant light source: its rays travel parallel in a single direction, so it is often used to simulate sunlight.
A spot light casts a focused beam of light and is often used to simulate light fixtures such as desk lamps.
Area lights emit light from within a boundary of a set size and shape. They are often used in architectural visualisation and produce soft-edged shadows that make renders look more realistic. In contrast to a directional light, an area light's rays are not parallel and spread out in many directions.
A volume light is similar to an omni light in that it casts rays in all directions from a certain point, but it has a specified shape (any geometric primitive) and size, and it illuminates only surfaces within that volume.
Ambient light casts soft rays in every direction and casts no shadows. It is often used to add to the colour of the main light source in a 3D scene.
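Behind all of these light types sits the same basic calculation: how directly a surface faces the light. The sketch below shows a minimal diffuse (Lambertian) calculation for a single point light; the positions and intensity are invented for the example, and real engines add falloff, specular highlights, shadows and multiple lights on top.

```python
# Minimal sketch of diffuse (Lambertian) shading from one point light:
# the more directly the surface faces the light, the brighter it gets.
import math

def diffuse(surface_point, surface_normal, light_position, light_intensity=1.0):
    # Direction from the surface towards the light, normalised to unit length
    lx = light_position[0] - surface_point[0]
    ly = light_position[1] - surface_point[1]
    lz = light_position[2] - surface_point[2]
    length = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / length, ly / length, lz / length
    nx, ny, nz = surface_normal
    # Lambert's law: brightness is the cosine of the angle, never negative
    return max(0.0, nx * lx + ny * ly + nz * lz) * light_intensity

# A point on a floor facing straight up, lit by a light directly overhead
print(diffuse((0, 0, 0), (0, 1, 0), (0, 5, 0)))  # 1.0, fully lit
```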
Textures
Textures are 2D images that influence a 3D model's appearance, adding fine detail, surface texture or colour information to a computer-generated graphic. Advances in more complex mapping, such as height, bump, normal and reflection maps, have made it possible to approach photorealism in real time by reducing the number of polygons and lighting calculations needed for a realistic, functional 3D scene.
For example, when creating an axe, you would place a wood texture over the handle and a metal texture over the axe head to create a realistic image.
A pixel shader is a program applied to every pixel a model covers on screen. Pixel shaders are used with textures to give surfaces the appearance they would have in real life, for example a brick that looks grainy, as if covered in small pieces of sand; applying a pixel shader simulates the real-world 'texture' of an object.
Vertex shaders are functions used to manipulate vertex data. They can change values such as position and colour without changing how the data is stored, and manipulating vertex positions can create more fluid animations. They can also contribute to effects that change how the object is seen rather than the vertices themselves; fog, heat haze and motion blur can all be simulated with the help of vertex shaders.
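The sketch below mimics, on the CPU and in Python rather than in a real GPU shading language, what the per-pixel texturing step does: each pixel's UV coordinate is used to look up a colour in a 2D image. The tiny 2x2 "texture" is an invented placeholder.

```python
# Sketch of per-pixel texture sampling: a UV coordinate in [0, 1] x [0, 1]
# selects a texel from a small 2D image (here a 2x2 checkerboard).

TEXTURE = [
    [(200, 200, 200), (50, 50, 50)],
    [(50, 50, 50), (200, 200, 200)],
]  # rows of RGB tuples

def sample_texture(u, v):
    """Nearest-neighbour texture lookup for UV coordinates in [0, 1]."""
    h = len(TEXTURE)
    w = len(TEXTURE[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return TEXTURE[y][x]

print(sample_texture(0.1, 0.1))  # light cell
print(sample_texture(0.9, 0.1))  # dark cell
```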
Fogging
Distance fog is a technique used in 3D computer graphics to enhance the perception of distance by shading distant objects differently. Many graphics engines apply a fog gradient, so objects further away are progressively more obscured by haze, mimicking aerial perspective; more distant objects appear lower in contrast, especially in outdoor environments.
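One common formulation is linear fog, sketched below: objects nearer than a start distance keep their own colour, objects beyond an end distance take the fog colour, and everything in between is blended. The colours and distances here are illustrative assumptions.

```python
# Sketch of linear distance fog: blend an object's colour towards the fog
# colour based on its distance from the camera.

def apply_fog(colour, fog_colour, distance, fog_start=10.0, fog_end=50.0):
    # Fog factor: 1.0 = no fog, 0.0 = fully fogged
    f = (fog_end - distance) / (fog_end - fog_start)
    f = max(0.0, min(1.0, f))
    return tuple(f * c + (1 - f) * fc for c, fc in zip(colour, fog_colour))

red = (255, 0, 0)
grey_fog = (128, 128, 128)
print(apply_fog(red, grey_fog, distance=5.0))   # unchanged, close to the camera
print(apply_fog(red, grey_fog, distance=60.0))  # fully hidden by the fog
```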
In 1990s games, when processing power could not handle far viewing distances, clipping was used, which caused pieces of polygons to flicker in and out of view instantly. By applying distance fog, polygons were clipped only at a distance great enough for them to be obscured by the fog, fading in as the player approached. This effect was used in Turok: Dinosaur Hunter, Star Wars: Rogue Squadron, Spider-Man, Tony Hawk's Pro Skater and many others. Silent Hill worked fogging into the game's storyline, with the town covered by a dense layer of fog as a result of the player having entered an alternate reality. It worked so well that the technique continued to be used in each of the game's sequels, despite improved technology.
Shadowing
Shadowing is used when light is applied to a scene and works the same way it does in real life: when light hits an object, the object casts a shadow onto the scene, adding depth and realism.
Level of Detail
Level of detail refers to how detailed a model is. Models that will spend a lot of time close to the camera need to be detailed and realistic, whereas background objects do not need as much detail, as they are not the central focus.
Level of detail techniques increase rendering efficiency by decreasing the workload on graphics pipeline stages, usually vertex transformations. The reduced visual quality of the model often goes unnoticed because it has little effect on the object's appearance when it is distant or moving fast.
Most of the time level of detail is applied to geometric detail only; however, more recent techniques also include shader management to keep pixel complexity under control, and a form of level of detail management has been applied to texture maps, providing higher rendering quality.
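At its simplest, geometric level of detail is just a choice of mesh based on distance from the camera, as in the sketch below. The mesh names, triangle counts and distance thresholds are invented for illustration.

```python
# Sketch of distance-based level of detail selection: further from the
# camera, a lower-polygon version of the mesh is drawn.

LOD_LEVELS = [
    (10.0, "axe_high_5000_tris"),      # close to the camera: full detail
    (40.0, "axe_medium_1200_tris"),    # mid distance: reduced detail
    (float("inf"), "axe_low_200_tris") # far away: lowest detail, catches everything else
]

def choose_lod(distance_to_camera):
    for max_distance, mesh in LOD_LEVELS:
        if distance_to_camera <= max_distance:
            return mesh

print(choose_lod(3.0))    # axe_high_5000_tris
print(choose_lod(25.0))   # axe_medium_1200_tris
print(choose_lod(200.0))  # axe_low_200_tris
```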
Geometric Theory
A vertex is a point or corner on a shape where edges and faces come together and meet. Vertices can be manipulated to change the shape of a model.
A line is one of the shapes that can be created in a 3D program. Vertices can be added to an existing line and moved into specific positions to create base shapes that can then be extruded, lathed, smoothed and so on.
A face of an object is a flat surface bounded by a closed set of edges. An edge is the line where two faces meet. Edges are found in all models and primitives; when a model or primitive is converted to an editable polygon, these edges can be moved to create something different from the original.
'Curves' describe how rounded a shape is and are useful when modelling rounded forms.
A polygon is a closed, two-dimensional shape made up of three or more straight line segments connected end to end; traditionally, a plane figure bounded by a closed path or circuit. The complete polygon is called the body or the 'element'. A number of studios put polygon restrictions on their work to limit the amount of storage space used. The polygon count of a model is usually quoted as the number of triangles it contains.
An element is the name for an entire shape; for example, if you made a box in your application, the box would be an element.
Coordinate Geometry
The Cartesian coordinate system is used in 3D software to create the illusion of working in three-dimensional space. It is a popular way to represent the physical dimensions of space: width, length and height. A Cartesian coordinate system in a plane has two perpendicular axes, x and y; in three-dimensional space there are three axes, x, y and z.
Cartesian coordinates are used to calculate where a 2D or 3D model is located in the scene.
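As a small illustration of that, each vertex of a model can be stored as an (x, y, z) triple, and moving the model within the scene is just adding an offset to every vertex. The triangle and offsets below are made-up example values.

```python
# Sketch of Cartesian coordinates in practice: vertices are (x, y, z) triples,
# and translating a model adds the same offset to each vertex.

def translate(vertices, dx, dy, dz):
    return [(x + dx, y + dy, z + dz) for (x, y, z) in vertices]

triangle = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(translate(triangle, 5, 0, -2))  # the same triangle, moved within the scene
```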
Primitives
Primitives are the building blocks of 3D. Standard primitives include the box, cone, sphere, cylinder, tube and pyramid. They are simple polygonal models that 3D software can create directly, the simplest shapes the system can produce, and they generally serve as the starting point for a model. Other primitives include the hedra, torus knot, spindle and so on.
Meshes
Meshes are collections of vertices, edges and faces that define the shape of an object or model in 3D software. The most common type is the polygon mesh, whose faces are generally triangles, quadrilaterals or other simple convex polygons, which makes rendering easier.
A wireframe shows the same collection of vertices and edges without shaded faces, which makes the structure of the polyhedral object easy to see and quick to render, especially when the faces consist of a triangle mesh.
Mesh construction involves taking all the different components of the shape, i.e. vertices, edges, faces and polygons, and putting them together to make a solid 3D shape.
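A common way to store such a mesh, sketched below, is a list of vertex positions plus a list of faces, where each face refers to vertices by index; edges can then be derived from the faces. The single quad here is an invented example, not any particular file format.

```python
# Sketch of a simple polygon mesh data structure: vertex positions plus
# index-based faces, with edges derived from each face's index loop.

vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
faces = [(0, 1, 2, 3)]  # one quad, referring to vertices by index

def face_edges(face):
    """Each consecutive pair of indices in a face forms an edge."""
    return [(face[i], face[(i + 1) % len(face)]) for i in range(len(face))]

print(face_edges(faces[0]))  # [(0, 1), (1, 2), (2, 3), (3, 0)]
```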
3D development software, e.g. 3D Studio Max, Maya, LightWave, AutoCAD, Cinema 4D, Softimage|XSI
There are a number of 3D software packages available; the most common are 3ds Max and Maya.
Autodesk 3ds Max, formerly 3D Studio and then 3D Studio Max, is a professional 3D computer graphics program for making 3D animations, models, game assets and images. It is an industry standard in games development.
Autodesk Maya is a 3D computer graphics application that runs on Windows, macOS and Linux. It was originally developed by Alias Systems Corporation. It is used for animation, modelling, simulation and rendering, and is an industry standard in film.
LightWave is a 3D computer graphics package developed by NewTek. It has been used in film, television, motion graphics, video game development and product design, and is used for rendering both animated and static 3D images.
AutoCAD is a computer-aided drafting program used to create blueprints for buildings, bridges and more. It is used across a wide range of industries by architects, engineers and graphic designers.
File formats
There are a number of different 3D file formats, such as .3ds, .mb, .lwo, .c4d and .dxf. A file format is a standard way of encoding information for storage in a computer file; it specifies how bits are used to encode the information.
A .3ds file is a 3D image format used by Autodesk 3D Studio. It contains mesh data, bitmap references, lighting information and so on, and may also include object animation data.
The .mb file extension is associated with Autodesk Maya and is used for the binary scene files created by the software. The file contains 3D model data, animation data, lighting settings and so on.
The .lwo file extension is associated with LightWave 3D. It contains points, polygons and surfaces that describe the object’s shape and appearance. It may also contain references to image files used for object textures.
Plug-ins
The term plug-in refers to a separate piece of software, usually created by a third party, that is used within a program, normally to add functionality or to speed up a process.