Samiam’s Scribble Pad

February 23, 2014

Inverse-square decay of light, an alternative to traditional image-based lighting, and a move to incident light fields.

Filed under: Uncategorized — Tags: — admin @ 4:06 pm

Let me take you back, way back to year 9 of high school in 1985, where I was introduced to light, photography and the inverse-square law of light decay.

enlarger

In black and white photography, you expose a piece of photosensitive paper to light for a period of time using an enlarger, then develop that exposure with a chemical reaction in a tray, and the darkness develops in front of your eyes.

The more light the paper gets, the darker the colour will be. That is why you use a negative in the enlarger, so you mask off the black areas.

Here comes the inverse-square law. If you want to make a print on an A4 piece of paper, you might have worked out that you need to expose for 10 seconds to get the image that you want.

But then you want something that is bigger: an A3 print. You need to wind the enlarger back so that the image is projected onto a greater area. The lamp still has the same brightness and the negative still masks off the same amount of light.

The paper still has the same chemical response. So you expose the A3 sheet for the same 10 seconds. The image comes out very pale.

Why?

The inverse-square law of decay. Because the light is further away from the photosensitive paper, not as much light reaches it per unit area, so to get the same chemical reaction, and the same density of blacks, you need to expose for longer.

The rule is as follows: if you double the distance between you and a light source, you end up with one quarter the amount of light per unit area.

So it follows that to get the same exposure you might need to expose the A3 sheet for more like 40 seconds, compared to 10 seconds for the A4 sheet of photosensitive paper.
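As a back-of-the-envelope check, here is a minimal sketch of that scaling (the distances and the 10-second baseline are just illustrative numbers, not measurements from my darkroom):

def scaled_exposure(base_exposure_s, base_distance, new_distance):
    # Inverse-square law: light per unit area falls off with the square of
    # the distance, so the exposure time must rise by the square of the
    # distance ratio to deposit the same energy on the paper.
    ratio = float(new_distance) / float(base_distance)
    return base_exposure_s * ratio ** 2

# Doubling the lamp-to-paper distance quadruples the exposure: 10 s becomes 40 s.
print(scaled_exposure(10.0, 1.0, 2.0))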

Just to prove I am not making this up, look at this Wikipedia page: http://en.wikipedia.org/wiki/Inverse-square_law

So the inverse-square law is real; I have seen it in action when developing prints in 1985.

The reality is that luminance is measured in candela per square metre, see http://en.wikipedia.org/wiki/Candela_per_square_metre

So on a film set you can get an approximation of this by shooting a high dynamic range image, assembled from a number of low dynamic range images taken at different shutter speeds.

see:

http://www.researchgate.net/publication/220506295_High_Dynamic_Range_Imaging_and_Low_Dynamic_Range_Expansion_for_Generating_HDR_Content/file/d912f508ae7f8b733c.pdf
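Here is a minimal sketch of that merge step, assuming the pixel values are already linear (a real pipeline would also recover the camera response curve, as the paper above covers):

import numpy as np

def merge_ldr_to_hdr(images, exposure_times):
    # images: list of linear float arrays in [0, 1]
    # exposure_times: matching shutter times in seconds
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # trust mid-range pixels most
        num += w * (img / t)                # scale each exposure to relative radiance
        den += w
    return num / np.maximum(den, 1e-6)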

 

But two things are usually forgotten with this process:

 

  1. Calibration
  2. The effect of distance and the inverse square law

The calibration could easily be handled by using a light emitter of known output, captured in both an off state and an on state.

To do this, you have an LED of a known size at a known distance from the camera.

You measure the luminance at 1 cm from the light source with a radiometer (http://en.wikipedia.org/wiki/Radiometer) and get the light source's luminance in candela per square metre.

Then you create an HDRI of this same light source at a fixed distance, say 1 m.

If this is a standard LED then you don't need the radiometer every time.

If you had access to the radiometer you could just measure the output of your light sources in candela per square metre on set.

From this you can then derive what a pixel value on the film back of the camera taking the HDRI is equivalent to in candela per square metre.
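Here is a minimal sketch of that calibration (the radiometer reading and the pixel value are made-up numbers purely for illustration):

# Hypothetical calibration: the reference LED reads 5000 cd/m^2 on the
# radiometer and shows up in the linear HDRI with a pixel value of 12.5.
measured_luminance = 5000.0      # cd/m^2, from the radiometer
hdri_pixel_value = 12.5          # linear pixel value of the LED in the HDRI

cd_per_m2_per_pixel_unit = measured_luminance / hdri_pixel_value

def pixel_to_luminance(pixel_value):
    # Convert a linear HDRI pixel value to luminance in cd/m^2.
    return pixel_value * cd_per_m2_per_pixel_unit

print(pixel_to_luminance(1.0))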

Great!

So we have an HDRI and we know the energy levels of the light arriving at the film back of the camera taking the HDRI.

Now to go further.

If you want to know the energy levels of the light at the surfaces it is being emitted from, you need to reverse the inverse-square decay.

So if you have two light sources in your HDRI with equivalent pixel values, then the luminance of those two light sources is equivalent at the film back of the camera.

But what if those light sources were 1 m and 2 m away from the camera, both occupying the same pixel area in a cubic-cross HDRI?

It follows that the one 2 m away must be emitting 4 times the intensity of the light that is 1 m away.
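A minimal sketch of undoing that falloff (the pixel value and distances are invented for illustration; in practice the distance would come from projecting the HDRI onto surveyed or modelled geometry):

def luminance_at_source(pixel_luminance, distance_m, reference_distance_m=1.0):
    # Undo inverse-square falloff: scale the calibrated value measured
    # at the camera back up to the emitting surface's distance.
    return pixel_luminance * (distance_m / reference_distance_m) ** 2

# Two sources reading 400 cd/m^2 at the camera, one at 1 m and one at 2 m:
# the far one must be four times as intense.
print(luminance_at_source(400.0, 1.0))   # 400.0
print(luminance_at_source(400.0, 2.0))   # 1600.0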

Someone else has covered projecting the HDRI spherical projection onto geometry here:

https://www.fxguide.com/fxguidetv/fxguidetv-165-scott-metzger-on-mari-and-hdr/

This is valid for using the projection as a measure of the light at the surface of the object, as an albedo.

http://en.wikipedia.org/wiki/Albedo

But if you want to use this as a light source for illuminating synthetic objects with correct attenuation, you need to take into account the inverse-square falloff of the light from its surface to the film back where it is measured, and thus recover the luminance at the light source.

Furthermore, you can put some importance sampling into your light sources.

Here is a cool paper:

http://renderwonk.com/publications/s2010-shading-course/snow/sigg2010_physhadcourse_ILM.pdf

Anyway, this page http://webstaff.itn.liu.se/~jonun/web/IBL.php explains the concept a whole lot better than I can.

But this came to me at 11pm on a Saturday night, when I was trying to go to sleep, so I thought I would scribble it down on a piece of paper so the insomnia didn't get the better of me.

Now it is 3:32 on a Sunday afternoon, the lawn has been mowed and my blog entry is complete for now.

May 25, 2013

Thread about tiled normal and colour maps in the Maya viewport

Filed under: Uncategorized — Tags: — admin @ 12:19 pm

Thread on the Area Forums

The script that saved my bacon:

  • Multi Channel Setup MEL
  • Mel script itself

    Which used an Add/Multiply node instead of a multiLayer texture.

    Works with Mental Ray and the inbuilt renderer, but with 3delight and normal maps, not so much.

    Now our playblasts can look sweet with tiled UVs and Viewport 2.0

December 22, 2012

Sketchy model for printing

Filed under: Uncategorized — admin @ 3:46 pm

simple model

Here is the STL

An STL for printing

Based on a few conversations online, I have determined there is a 3D printer at the Grote St library of the Adelaide City Council, free to use.

So I knocked up a quick model in Wings3d.

Maia made one too, which she says is a jewellery holder.

Maia's Computer model of holding jewellery

I will upload her OBJ too, and a screenshot of her model.

But it is too large to upload without compression.

Sam

November 12, 2011

trying to calculate localVisibility with Spherical Harmonics

Filed under: Uncategorized — Tags: , , — admin @ 5:59 pm

Spherical Harmonics Coefficients of Local Visibility

In hindsight, baking a lookup table of vectors and their spherical harmonic coefficients might lead to artifacts, but with 1024 samples it doesn't look too bad.

Feel free to download the source and sing along; it's all based on the Sony paper from 2003, see the PDF from SCEA.

Python script that writes the Spherical Harmonics RenderMan header: sh1.py

Please find the resulting header file attached: SH.h

And a simple shader to calculate transmission based on the table of stratified samples above: localVisibility.sl

It's a bit clunky because I couldn't work out how to do two-dimensional arrays in RSL; I'm glad to fix it up if that is possible.

The preprocessor thing isn't that sweet either :/

As below:

#include "SH.h"
 
surface localVisibility(
        uniform float maxDistance = 10; 
        uniform string outputFolder = "";
)
{
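        /* The SH* macros below come from the generated SH.h header and expand
           to tables of the 1024 stratified sample directions (samplesVector)
           and the per-sample values of the nine SH basis functions
           (samplesCoeffs0 .. samplesCoeffs8). */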
        SHVECTOR
        SHSPH0
        SHSPH1
        SHCOEFF0
        SHCOEFF1
        SHCOEFF2
        SHCOEFF3
        SHCOEFF4
        SHCOEFF5
        SHCOEFF6
        SHCOEFF7
        SHCOEFF8
        vector Nworld = vector(transform("world",N));
        point Pworld = transform("world",P);
        uniform float numSamples = 1024;
        uniform float numCoeffs = 9;
        varying float results[9]={0,0,0,0,0,0,0,0,0};
        uniform float i,j;
        varying float faceforward = 0;
        varying float occl = 0;
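        /* For each sample direction in the hemisphere above the shading normal,
           trace a transmission ray out to maxDistance and accumulate the hit
           result projected onto each of the nine SH basis functions. */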
        for(i=0;i<numSamples;i=i+1){
                float Hs = samplesVector[i].Nworld;
                point destinationWorld = Pworld + samplesVector[i]*maxDistance;
                point destinationCurrent = transform("world","current",destinationWorld);
                if (Hs > 0){
                        faceforward += 1;
                        float isHit = comp(transmission(P,destinationCurrent),0);
                        if (isHit > 0)
                        {
                                occl += 1;
                                for(j=0;j<numCoeffs;j=j+1){
                                        varying float value = isHit;
                                        if (j == 0)
                                        {
                                                value *= samplesCoeffs0[i];
                                        }
                                        if (j == 1)
                                        {
                                                value *= samplesCoeffs1[i];
                                        }
                                        if (j == 2)
                                        {
                                                value  *= samplesCoeffs2[i];
                                        }
                                        if (j == 3)
                                        {
                                                value *= samplesCoeffs3[i];
                                        }
                                        if (j == 4)
                                        {
                                                value *= samplesCoeffs4[i];
                                        }
                                        if (j == 5)
                                        {
                                                value *= samplesCoeffs5[i];
                                        }
                                        if (j == 6)
                                        {
                                                value *= samplesCoeffs6[i];
                                        }
                                        if (j == 7)
                                        {
                                                value *= samplesCoeffs7[i];
                                        }
                                        if (j == 8)
                                        {
                                                value *= samplesCoeffs8[i];
                                        }
                                        results[j] += value;
 
                                        }       
                                }
                        } 
                }
        for (j=0;j<numCoeffs;j=j+1){    
                results[j] /= faceforward;
                }
        occl /= faceforward;
        faceforward /= numSamples;
        Ci = color(results[0],results[1],results[2]);
        Oi = 1;
        Ci *= Oi;
 
}

I don't think it is working yet, but it compiles and renders.

Sam

November 6, 2010

Open Computer Graphics Storage Formats

Filed under: Uncategorized — admin @ 12:42 pm

We all know that OpenEXR has pretty much standardised the storage format for image planes.

But now there are a few new contenders.

With the future of the industry being more widely outsourced, this standardisation between packages seems pretty important.

Some things were settled a long time ago:

  • Documents: PDF
  • Edits: EDL
  • Half-baked scene descriptions: FBX
  • Render Intermediates: RIB

But I really hope that people can start to be a little more uniform with the formats they are using.

October 31, 2010

Python Implementation of Spherical Harmonics Stratified Sampling

Filed under: Uncategorized — admin @ 7:32 pm

It's Sunday afternoon and it's time to write some code. This is pretty much lifted verbatim from Robin Green's 2003 paper, Spherical Harmonic Lighting: The Gritty Details.

So the next trick is to put this as a large table into a shader so I can bake out a set of 9 coefficients (the first 3 bands) for shadowed diffuse transfer; these I will store per sample in a point cloud to be looked up during a shading stage.

Then after that I can implement the image-based lights, and then I can do really quick image-based lighting using spherical harmonics.

Lots of fun!
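To make the end goal concrete, here is a minimal sketch (not code from the pipeline above, just an illustration with made-up numbers) of how the baked coefficients eventually get used: shadowed diffuse transfer reduces to a dot product between the light's SH projection and the per-sample transfer vector.

def shade_sample(light_coeffs, transfer_coeffs):
    # Shadowed diffuse transfer: outgoing radiance at a sample is the
    # dot product of the light's SH coefficients with the sample's
    # baked transfer coefficients (9 values for the first 3 bands).
    return sum(l * t for l, t in zip(light_coeffs, transfer_coeffs))

# Made-up coefficient vectors, purely for illustration.
light =    [1.0, 0.2, 0.5, -0.1, 0.0, 0.0, 0.3, 0.0, -0.2]
transfer = [0.8, 0.1, 0.4,  0.0, 0.0, 0.1, 0.2, 0.0, -0.1]
print(shade_sample(light, transfer))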

I made a big mess of this, but luckily Markus Kransler was able to fix it up.

Here is the amended code:

#!/usr/bin/env python 
 
class SHSample():
	sph=(0.0,0.0)
	vec=(0.0,0.0,0.0)
	coeff={}
	pass
 
def P(l,m,x):
	import math
	#Associated Legendre Polynomial P(l,m,x) at x
	pmm = 1.0
	if m > 0:
		somx2=math.sqrt(1.0-(x*x))
		fact = 1.0
		for i in xrange(1,m+1,1):
			pmm *= (-fact)*somx2
			fact += 2.0
	if l == m:
		return pmm
 
	pmmp1 = x * ((2.0*m)+1.0)*pmm
 
	if l == m+1:
		return pmmp1
 
	plm = 0.0
 
	for ll in xrange(m+2,l+1,1):
		plm = ((2.0*ll - 1.0) * x * pmmp1 - (ll + m - 1.0) * pmm) / (ll - m);
		pmm = pmmp1
		pmmp1 = plm
 
	return plm
 
def K(l,m):
	import math
	temp = float((((2.0*l)+1.0)*math.factorial(l-m))/(4.0*math.pi*math.factorial(l+m)))
	return math.sqrt(temp)
 
def SH(l,m,theta,phi):
	import math
	sqrt2 = math.sqrt(2.0)
	if m==0:
		return K(l,0)*P(l,0,math.cos(theta))
	elif m > 0:
		return sqrt2*K(l,m)*math.cos(m*phi)*P(l,m,math.cos(theta))
	else:
		return sqrt2*K(l,-m)*math.sin(-m*phi)*P(l,-m,math.cos(theta))
 
 
def setupSamples(sqrtNumSamples=64,numBands=4):
	import random,math
	counter = 0
	oneOverN = 1.0/float(sqrtNumSamples)
	samples = [SHSample() for i in range(sqrtNumSamples*sqrtNumSamples)]
 
	for i in range(sqrtNumSamples):
		for j in range(sqrtNumSamples):
			x = (i+ random.random())*oneOverN
			theta = 2.0*math.acos(math.sqrt(1-x))
 
			y = (j+ random.random())*oneOverN
			phi = 2.0*math.pi*y
 
			samples[counter].sph=(theta,phi)
 
			vec = (math.sin(theta)*math.cos(phi),\
			math.sin(theta)*math.sin(phi),\
			math.cos(theta))
			samples[counter].vec = vec
 
			tmpDict = {}
			for l in range(numBands):
				for m in xrange(-l,l+1,1):
					index = l*(l+1)+m
					sh= SH(l,m,theta,phi)
					tmpDict[index]=sh
			samples[counter].coeff=tmpDict
			counter +=1
	return samples
 
 
for i in setupSamples():
	print i.coeff

October 19, 2010

Some papers to read and implement

Filed under: Uncategorized — admin @ 12:51 pm

October 14, 2010

Point Cloud in Nuke using “PositionToPoints”, 3delight Renderman Shader Language and cortex-vfx

Filed under: Uncategorized — Tags: , , , , — admin @ 1:51 pm

Here is the end result:

Nuke Node "PositionToPoints" with 3d EXR inputs

First things first, I need a model to work with.

Model

Shader

surface bakeColourAndPosition(
uniform float diffuseAmount = 1;
varying color surfaceColour = color(0.18,0.18,0.18);
varying color opacityColour = color(0.99,0.99,0.99);
uniform string bakeFile="/tmp/out.bake";
)
{
   varying normal Nn = normalize(N);
   Ci = diffuse(Nn)*surfaceColour*diffuseAmount*Cs;
   Oi = opacityColour*Os;
   varying point Pworld = transform("current","world",P);
   bake(concat(bakeFile,"Position"),s,t,Pworld);
   bake(concat(bakeFile,"Colour"),s,t,Ci);
   Ci *= Oi;
}

This shader will produce two text “bakefile” files in the /tmp directory

Note: when these files are parsed later, the texture coordinates (the first two values on each line) are ignored; only the 3rd, 4th and 5th values are used.

  1. one named out.bakeColour with colour information
  2. one named out.bakePosition with position information

Output Bakefiles

  • Download .tar.gz here…

    Due to the SIMD nature of shaders, the line count of each of the bake files is the same, so line for line they contain corresponding Position and Colour information.

    They are in ASCII format so they are easy enough to parse with Python.

    Here is an example of their content:

    out.bakePositionmh
    3
    0 1 -0.07415867 0.17987273 -0.05079475
    0 1 -0.073529155 0.1800126 -0.051191031
    0 1 -0.07289961 0.18015243 -0.051587344
    0 1 -0.072270096 0.18029229 -0.051983685
    0 1 -0.07164058 0.18043211 -0.052379965
    0 1 -0.07101102 0.18057197 -0.052776248
    0 1 -0.07038155 0.1807118 -0.053172619

    Creating Position and Colour EXR files using cortex-vfx

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12
    13
    14
    15
    16
    17
    18
    19
    20
    21
    22
    23
    24
    25
    26
    27
    28
    29
    30
    31
    32
    33
    34
    35
    36
    37
    38
    39
    40
    41
    42
    43
    44
    45
    46
    47
    48
    49
    50
    51
    52
    53
    54
    55
    56
    57
    58
    59
    60
    61
    62
    63
    64
    65
    66
    67
    68
    69
    70
    71
    72
    73
    74
    75
    76
    77
    
    #!/usr/bin/env python
     
    import sys,os,math
     
    IECoreInstallPath = "/usr/lib/python2.6/site-packages"
     
    if IECoreInstallPath not in sys.path:
    	sys.path.append(IECoreInstallPath)
     
    from IECore import *
     
    bakeFolder = "/tmp"
     
    colorBakeFileLocation = os.path.sep.join([bakeFolder,"out.bakeColour"])
     
    positionBakeFileLocation = os.path.sep.join([bakeFolder,"out.bakePosition"])
     
     
     
     
    def parseBakeFile(bakeFileLocation):
    	data = []
    	counter = 0
    	bakeFile = open(bakeFileLocation,"r")
    	for line in bakeFile.readlines():
    		counter +=1
    		if counter > 2:
    			stuff = line.strip().split(" ")
    			if len(stuff) > 2:
    				data.append((float(stuff[2]),float(stuff[3]),float(stuff[4])))
    	print "Completed parsing %d lines of file %s" % (len(data),bakeFileLocation)
    	bakeFile.close()
    	return data
     
    colourData = parseBakeFile(colorBakeFileLocation)
     
    positionData = parseBakeFile(positionBakeFileLocation)
     
    if len(colourData) == len(positionData):
    	squareSize = int(math.sqrt(len(positionData))) +1
    	print "Square Size: %d, Excess Pixels : %d" % (squareSize,squareSize*squareSize - len(colourData))
    	width = squareSize
    	height = squareSize
    	x = FloatVectorData( width * height )
    	y = FloatVectorData( width * height )
    	z = FloatVectorData( width * height )
     
    	r = FloatVectorData( width * height )
    	g = FloatVectorData( width * height )
    	b = FloatVectorData( width * height )
     
    	for i in range(len(colourData)):
    		r[i]=colourData[i][0]
    		g[i]=colourData[i][1]
    		b[i]=colourData[i][2]
    		x[i]=positionData[i][0]
    		y[i]=positionData[i][1]
    		z[i]=positionData[i][2]
     
    	boxColour = Box2i( V2i( 0, 0 ), V2i( width-1, height-1 ) )
    	boxPosition = Box2i( V2i( 0, 0 ), V2i( width-1, height-1 ) )
     
    	imageColour = ImagePrimitive( boxColour, boxColour )
    	imagePosition = ImagePrimitive( boxPosition, boxPosition )
     
    	imagePosition["R"]= PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, x)
    	imagePosition["G"]= PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, y)
    	imagePosition["B"]= PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, z)
     
    	imageColour["R"]= PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, r)
    	imageColour["G"]= PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, g)
    	imageColour["B"]= PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, b)
     
    	writePosition = Writer.create( imagePosition, "/tmp/outPosition.exr" )
    	writeColour = Writer.create( imageColour, "/tmp/outColour.exr" )
    	writePosition.write()
    	writeColour.write()

    See more about cortex-vfx on Google Code.

    Using Nuke to read the Position and Colour Data

    File > Script Command > PositionToPoints

    If you weren't able to create your own pair of EXRs, you can download the pair here in .tar.bz2 format.

    So then you just need to connect them up to the input nodes for the PositionToPoints 3d node as follows:

    Nuke Node "PositionToPoints" with 3d EXR inputs

    If you thought this was useful, leave a comment; or if you thought it was stupid, leave a comment about how to improve it.

    Sam

September 30, 2010

Alias Wavefront OBJ Export in Maya Python with Examples

Filed under: Uncategorized — Tags: , , , , , — admin @ 5:45 pm

Before I start, I wanted to explain why I think this is a classic exercise.

If you can represent a mesh in OBJ text format, you pretty much understand how the polygonal data exists in the package you are using, and you have enough skill to pack it into a data structure and write it to disk.

So here is what you should get a good understanding of:

  • The classic Wavefront OBJ file format
  • The components of a polygon mesh
  • Using the Python Commands to access the polygon information from the scene
  • Writing data structures to disk

The Structure of a Wavefront OBJ

So we should start with the OBJ of a 1x1x1 cube.

An OBJ is great at encoding the surface of an object

which is made up of:

  • Position of the Surface: P
  • Normal of the Surface: N
  • Texture Values of the Surface: st

It is described by stating the values of the surface at discrete points in space along with some connectivity information of how those points are joined together in space as a mesh.

Using the connectivity information a small amount of discrete information can be interpolated continuously to form the surface.

The inner structure of an OBJ has lots of parts:

  • “g” 2 groups: groups of components belonging to one mesh
  • “v” 8 vertices: 3d points (x-pos, y-pos, z-pos) describing the position of the surface
  • “vt” 14 texture vertices: 2d points (u-pos, v-pos) describing the layout of the texture on the surface
  • “vn” 24 vertex normals: 3d vectors (x-direction, y-direction, z-direction) describing the angle of the surface
  • “f” 6 faces: each containing indices to 4 vertices, 4 texture vertices and 4 vertex normals, describing the connectivity of the positional, normal and texture information

A diagram of the difference between a face, a vertex, a vertex normal and a texture vertex would be great here, but you will have to use your imagination.

# This file uses centimeters as units for non-parametric coordinates.
 
mtllib cube.mtl
g default
v -0.500000 -0.500000 0.500000
v 0.500000 -0.500000 0.500000
v -0.500000 0.500000 0.500000
v 0.500000 0.500000 0.500000
v -0.500000 0.500000 -0.500000
v 0.500000 0.500000 -0.500000
v -0.500000 -0.500000 -0.500000
v 0.500000 -0.500000 -0.500000
vt 0.375000 0.000000
vt 0.625000 0.000000
vt 0.375000 0.250000
vt 0.625000 0.250000
vt 0.375000 0.500000
vt 0.625000 0.500000
vt 0.375000 0.750000
vt 0.625000 0.750000
vt 0.375000 1.000000
vt 0.625000 1.000000
vt 0.875000 0.000000
vt 0.875000 0.250000
vt 0.125000 0.000000
vt 0.125000 0.250000
vn 0.000000 0.000000 1.000000
vn 0.000000 0.000000 1.000000
vn 0.000000 0.000000 1.000000
vn 0.000000 0.000000 1.000000
vn 0.000000 1.000000 0.000000
vn 0.000000 1.000000 0.000000
vn 0.000000 1.000000 0.000000
vn 0.000000 1.000000 0.000000
vn 0.000000 0.000000 -1.000000
vn 0.000000 0.000000 -1.000000
vn 0.000000 0.000000 -1.000000
vn 0.000000 0.000000 -1.000000
vn 0.000000 -1.000000 0.000000
vn 0.000000 -1.000000 0.000000
vn 0.000000 -1.000000 0.000000
vn 0.000000 -1.000000 0.000000
vn 1.000000 0.000000 0.000000
vn 1.000000 0.000000 0.000000
vn 1.000000 0.000000 0.000000
vn 1.000000 0.000000 0.000000
vn -1.000000 0.000000 0.000000
vn -1.000000 0.000000 0.000000
vn -1.000000 0.000000 0.000000
vn -1.000000 0.000000 0.000000
s off
g pCube1
usemtl initialShadingGroup
f 1/1/1 2/2/2 4/4/3 3/3/4
f 3/3/5 4/4/6 6/6/7 5/5/8
f 5/5/9 6/6/10 8/8/11 7/7/12
f 7/7/13 8/8/14 2/10/15 1/9/16
f 2/2/17 8/11/18 6/12/19 4/4/20
f 7/13/21 1/1/22 3/3/23 5/14/24

Feel free to read the Wavefront OBJ File format Specification that I found on the internet.

If you want a copy of this file, your copy and paste buffer will work a treat or you could open up Maya and save out a unit cube on the origin.

It's a unit cube on the origin, named pCube1, using initialShadingGroup as a material.

You can see the eight vertices are 0.5 units away from the origin along each of the three axes, with the eight permutations of the signs of each axis. This makes the spacing between vertices 1 unit.

The vertex normals are unit length, facing either up, down, left, right, front or back. These normalised vectors are repeated a number of times, as there are only 6 discrete values but 24 records.

The texture vertices make squares that are 0.25 units in texture space in an upside-down T layout, with some of the texture vertices shared between texture faces.

The order in which the faces are created follows a left-hand rule: the order of the vertices specifies which is the inside of the face, as indicated by the fingers, and the direction of the face normal is given by the thumb.

The same left-hand rule applies to the texture vertices, determining whether the texture is mirrored or not.

Edges are implied as the edges between the vertices that make up a face.

The number of edges in a face is determined by how many position/texture/normal groups there are in the face definition.

Let's explain that a little further with a single-face OBJ.

mtllib cube.mtl
g default
v -0.5 -0.5 0.5
v 0.5 -0.5 0.5
v -0.5 0.5 0.5
v 0.5 0.5 0.5
vt 0 0
vt 1 0
vt 1 1
vt 0 1
vn 0 0 1
g zPlanePointFive
f 1/1/1 2/2/1 4/3/1 3/4/1

This is a Unit sized face on the Z Plane at +0.5 units in Z

On the last line (line 13) we can see the single face definition; it creates a four-sided face that goes in the following order:

f 1/1/1 2/2/1 4/3/1 3/4/1

v/vt/vn : vertex position / texture vertex / vertex normal, repeated once for each corner of the face

The edge created from the last vertex back to the first is implied but not explicity defined.
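As a tiny sketch of how such a face record can be pulled apart (this is not the exporter code below, just an illustration of the v/vt/vn grouping):

def parse_face(obj_face_line):
    # Split an OBJ 'f' record into (vertex, texture vertex, normal)
    # index triples. OBJ indices are 1-based; missing fields become None.
    corners = []
    for group in obj_face_line.split()[1:]:      # drop the leading 'f'
        fields = group.split("/")
        v = int(fields[0])
        vt = int(fields[1]) if len(fields) > 1 and fields[1] else None
        vn = int(fields[2]) if len(fields) > 2 and fields[2] else None
        corners.append((v, vt, vn))
    return corners

print(parse_face("f 1/1/1 2/2/1 4/3/1 3/4/1"))
# [(1, 1, 1), (2, 2, 1), (4, 3, 1), (3, 4, 1)]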

The order is as follows:

for vertex position: 1,2,4,3
for vertex texture position: 1,2,3,4
for vertex normals direction: 1,1,1,1

For Position we can see the face formed on the Z Plane

  • vert #1 : -x, -y (initial point, no edge)
  • vert #2 : +x, -y (edge #1, left to right)
  • vert #4 : +x, +y (edge #2, up)
  • vert #3 : -x, +y (edge #3, right to left)
  • back to initial vert : (edge #4, down, return to initial point)

For texture it goes around 0-1 UV space, left to right, up, right to left and then down.

All of the faces recycle the same face normal, the only face normal, face normal number ONE!!1!

Now that we understand the OBJ and how its components are connected, we just need to find the same info in Maya and we can write an OBJ exporter.

Getting at the Geometric Data for a Polygonal Object in Maya with Python

import maya
 
def log(message,prefix="Debug",hush=False):
	if not hush:
		print("%s : %s " % (prefix,message))
 
def getData(shapeNode):
 
    vertexValues = []
    vertNormalValues =[]
    textureValues =[]
 
    vertList = []
    vertNormalList = []
    vertTextureList = []
 
 
    oldSelection = maya.cmds.ls(selection=True)
    maya.cmds.select(shapeNode)
 
    #Verts
    numVerts = maya.cmds.polyEvaluate(vertex=True)
    log("NumVerts : %s" % numVerts)
    vertexValues = [maya.cmds.pointPosition("%s.vtx[%d]" % (shapeNode,i)) for i in range(numVerts)]
    log("Verticies:" +  str(vertexValues))
 
 
    #Normals
    faceNormals=[]
    numFaceNormals = 0
    for face in range(maya.cmds.polyEvaluate(face=True)):
        maya.cmds.select("%s.f[%d]" % (shapeNode,face))
        vertexFaces = maya.cmds.polyListComponentConversion(fromFace=True,toVertexFace=True)
        vertexFaces= maya.cmds.filterExpand(vertexFaces,selectionMask=70,expand=True)
        faceNormals.append([])
        for vertexFace in vertexFaces:
            vertNormalValues.append(maya.cmds.polyNormalPerVertex(vertexFace, query=True, xyz=True))
            numFaceNormals  += 1
            faceNormals[-1].append(numFaceNormals )
    log("Num Face Normals: " + str(numFaceNormals))
    log("Face Normals: " + str(vertNormalValues))
 
    #Texture Coordinates
    numTexVerts = maya.cmds.polyEvaluate(uvcoord=True)
    log("NumTexVerts: " + str(numTexVerts))
    textureValues = [maya.cmds.getAttr("%s.uvpt[%d]" % (shapeNode,i)) for i in range(numTexVerts)]
    log("Texture Coordinates: " + str(textureValues))
 
    #Faces
    numFaces = maya.cmds.polyEvaluate(face=True)
    log("NumFaces : %s" % numFaces)
    vnIter = 0
    faceValues = []
    for i in range(numFaces):
        log("Face %d of %d" % (i+1,numFaces))
        maya.cmds.select("%s.f[%d]" % (shapeNode,i))
 
        #Verts (v)
        faceVerts = maya.cmds.polyInfo(faceToVertex=True)
 
        #This is hacky and should be replaced with snazzy regex
        faceVerts =  [int(fv)+1 for fv in faceVerts[0].split(":")[-1].strip().replace("  "," ").replace("  "," ").replace("  "," ").replace(" ",",").split(",")]
        log("v: " + str(faceVerts) )
        vertList.append(faceVerts)
 
        #Normals (vn)
        maya.cmds.select("%s.f[%d]" % (shapeNode,i))
        log("vn: " + str(faceNormals[i]))
        vertNormalList.append(faceNormals[i])
 
        #Texture (vt)
        maya.cmds.select("%s.f[%d]" % (shapeNode,i))
        tex = maya.cmds.polyListComponentConversion(fromFace=True,toUV=True)
        tex= maya.cmds.filterExpand(tex,selectionMask=35,expand=True)
        tex=[int(i.split("map")[-1].strip("[]")) +1 for i in tex]
        log("vt: " + str(tex))
        #Order is incorrect, need to get in same order as vertex ordering
        tmpDict = {}
        for t in tex:
            maya.cmds.select("%s.map[%d]" % (shapeNode,t-1))
            vertFromTex = maya.cmds.polyListComponentConversion(fromUV=True,toVertex=True)
            tmpDict[int(vertFromTex[0].split("[")[-1].split("]")[0]) + 1] = t
        orderedTex=[]
        for i in vertList[-1]:
            orderedTex.append(tmpDict[i])
        vertTextureList.append(orderedTex)
 
        face = " ".join( ["%d/%d/%d" % (vertList[-1][i], vertTextureList[-1][i], vertNormalList[-1][i]) for i in range( len( vertTextureList[-1] ) ) ] )
 
        faceValues.append(face)    
        log("")
 
        log("f: " +  face)
        log("--")
    maya.cmds.select(oldSelection)
    return {"v":vertexValues,"vn":vertNormalValues,"vt":textureValues,"f":faceValues,"g":shapeNode}
 
print "GO!"
maya.cmds.file(new=True,force=True)
maya.cmds.polyCube(ch=True,o=True,w=1,h=1,d=1,cuv=4)
dataDict = getData("pCubeShape1")

OK, so apart from the messy code structure, there is not much to getting the point/vertex (v), normal (vn) and texture/UV/st (vt) data from the scene.

These are the three commands that make it possible to get to the data you want:

  • polyInfo
  • polyEvaluate
  • polyListComponentConversion

but we need to clean up the output with:

  • filterExpand

As well as getting the data out from the scene using:

  • pointPosition: P
  • polyNormalPerVertex: N
  • getAttr: st/UV

Writing the OBJ data to Disk using Python File I/O

So to finish it off we simply replace the end of the code with the following:

#Continues from the def "getData"
 
def writeData(dataDict):
	outString = "\ng default\n"
	for i in dataDict["v"]:
		log(str(i))
		outString+= "v %f %f %f \n" % (i[0],i[1],i[2])
	for i in dataDict["vt"]:
		log(str(i))
		outString+= "vt %f %f \n" % (i[0][0],i[0][1])
	for i in dataDict["vn"]:
		log(str(i))
		outString+= "vn %f %f %f \n" % (i[0],i[1],i[2])
	outString += "g %s\n" % dataDict["g"]
	for i in dataDict["f"]:
		log(str(i))
		outString += "f %s\n" % i
	outString += "\n"
	log(outString)
	return outString
 
print "GO!"
maya.cmds.file(new=True,force=True)
maya.cmds.polyCube(ch=True,o=True,w=1,h=1,d=10,cuv=4)
fileLocation = "/Users/hodgefamily/out.obj"
f = open(fileLocation,"w")
data = getData("pCubeShape1")
string = writeData(data)
f.writelines(string)
f.close()
maya.cmds.file(new=True,force=True)
maya.cmds.file(fileLocation,i=True,type="OBJ",rpr="out")
print "STOP"

A friend of mine took my code and cleaned it up. Thanks Katrin! You can download it here: Katrin's Source Code

July 14, 2010

SALA 2010 Stencil

Filed under: Uncategorized — Tags: , — admin @ 10:10 pm

SALA 2010 Stencils

I have finished the long part of creating my artwork for South Australian Living Artists Week 2010.

The piece will be titled either “Industry” or “Hive”; I'm still trying to make up my mind.

Now I just need to lay out my stencils and spray them up. The piece will be 60 cm x 90 cm. I am hoping to put it up on Saturday, but I will wait and see what the weather brings rather than trying to deal with the wind.

I will update the post once the exhibition is on
