Archive for the ‘Uncategorized’ Category

Replacing Capacitors in a Cisco 877

Sunday, August 23rd, 2015

Just when you think you have reached the level of maximum geekiness, you step it up a notch.

I work making visual effects for Hollywood movies, that is pretty geeky.

I am a mature aged student studying Computer Science, reading pure math on a Sunday, that is pretty geeky.

I have participated in a community wireless networking group, Air-Stream; in fact I designed their logo. That is pretty geeky.

As a result I ended up with a Cisco-made DSL router bringing the internet into my home.

I bought it second-hand off eBay in November 2014.

Installing that and being able to navigate Cisco’s operating system, IOS, is a pretty high level of geeky.

http://forums.whirlpool.net.au/archive/2333379

But when said modem starts playing up, “switch it off and switch it on again” gets too annoying.

http://forums.whirlpool.net.au/archive/2425478

You COULD throw it away and get a new one.

OR you could google the hell out of the issue and find out which capacitors are worn out.

See this thread here:

http://forums.whirlpool.net.au/archive/1737576

 

[Photos: IMG_6255, IMG_6257, IMG_6258, IMG_6259]

 

So this afternoon I will head to the local Jaycar and see if I can get my hands on some 6800µF 105°C 6.3V 15mm aluminium electrolytic capacitors for under $5 each.

Then on Wednesday night, I am going to turn up to the local hackspace after work and see if I can have a crack at soldering in the new parts.

Then I will be back to the previous glory of this:

[Image: ADSL2_November_2014_SNR_after_a_week]

[Image: November2014_ADSL2]

 

Why just have the internet when you can solder your own internet together?

Today’s life is all too prefabricated.

I got a recipe for coconut and pumpkin soup.

The recipe asked for a can of pumpkin.

I only had a pumpkin grown in the neighbour’s back yard, given to us as a gift.

Needless to say I was still able to make the recipe without getting the pumpkin put into a metal can and shipped halfway around the world.

Make your own pumpkin soup, make your own internet better.

Steering behaviour for wander as 1d noise of angular value.

Saturday, May 23rd, 2015

Found a cool tute on a wander behaviour for my cat simulator:

 

http://gamedevelopment.tutsplus.com/tutorials/understanding-steering-behaviors-wander--gamedev-1624

This is a classic Ordinary Differential Equation for solving 2d position and velocity.

I was thinking about this.

And it looks like a lot of steps to produce a fairly continuous angular rotation with a random nature to it.

Continuous in both its first and second derivatives.

Then I thought: that is exactly what simplex noise does, it produces gradients that are continuous in the first and second derivatives.

Also, this system only solves for the position and velocity of the avatar; from these you need to derive the transform from the position and the direction of travel.

What it doesn’t give you is the angular velocity, which is required if you need to work out how much of a turn-left or turn-right animation to blend in.

So I thought an alternative would be to work in polar coordinates for velocity, so you have an angular position and angular velocity, and either a constant or varying forward speed. A rough sketch of the idea is below.
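
Here is a minimal sketch of that idea in Python. Everything in it is made up for illustration: a cheap cosine-interpolated 1D value noise stands in for simplex noise (which would have smoother derivatives), the noise value drives the heading angle directly, and the angular velocity falls out as a finite difference.

#!/usr/bin/env python
# Sketch: wander as 1D noise of the heading angle (all values illustrative).
import math
import random

random.seed(1)
_lattice = [random.uniform(-1.0, 1.0) for _ in range(256)]

def smooth_noise(t):
    """Cosine-interpolated 1D value noise, roughly in [-1, 1]."""
    i0 = int(math.floor(t)) % 256
    i1 = (i0 + 1) % 256
    f = t - math.floor(t)
    w = (1.0 - math.cos(f * math.pi)) * 0.5  # smooth blend weight
    return _lattice[i0] * (1.0 - w) + _lattice[i1] * w

x, y = 0.0, 0.0
speed, dt, turn_scale = 1.0, 0.1, math.pi
for step in range(100):
    t = step * dt
    heading = turn_scale * smooth_noise(t)             # angular position
    next_heading = turn_scale * smooth_noise(t + dt)
    angular_velocity = (next_heading - heading) / dt   # for animation blending
    x += speed * math.cos(heading) * dt                # integrate position
    y += speed * math.sin(heading) * dt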

 

Anyway, now that I think about it, it is a bad idea, but it was fun while it lasted.

Inverse decay of light and an alternative to traditional image based lighting and a move to incident light fields.

Sunday, February 23rd, 2014

Let me take you back, way back to year 9 in high school in 1985, where I was introduced to light and photography and the inverse-square rule of light decay.

[Image: enlarger]

In black and white photography, you expose a piece of photosensitive paper to light for a period of time using an enlarger, then develop that exposure with a chemical reaction in a tray, and the darkness develops in front of your eyes.

The more light that the paper gets, the darker the colour will be. That is why you use a negative in the enlarger: to mask off the black areas.

Here comes the inverse-square law. If you want to make a print on an A4 piece of paper, you might have worked out that you need to expose for 10 seconds to get the image that you want.

But then you want something bigger: an A3 print. You need to wind back the enlarger so that the image is projected onto a greater area. The lamp still has the same brightness, and the negative still masks off the same amount of light.

The paper still has the same chemical response. So you expose the A3 sheet for the same 10 seconds. The image comes out very pale.

Why?

The inverse-square rule of decay. Because the light is further away from the photosensitive paper, not as much light reaches it per unit area, so to get the same chemical reaction, and the same density of blacks, you need to expose for longer.

The rule is as follows: if you double the distance between you and a light source, you end up with one quarter the amount of light per unit area.

So it follows that to get the same exposure you scale the time by the ratio of the projected areas: the A3 print has twice the area of the A4, so it needs more like 20 seconds, and an A2 print, at four times the area (the double-the-distance case above), would need 40 seconds.
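
A quick back-of-the-envelope sketch of that scaling; the paper areas are real, the 10 second base exposure is just the example above.

#!/usr/bin/env python
# Exposure time scales with the projected area of the print, which is the
# inverse-square law expressed per unit area.

def scaled_exposure(base_seconds, base_area, new_area):
    """Time for the same print density on a different sized projection."""
    return base_seconds * (new_area / base_area)

A4 = 0.210 * 0.297  # paper areas in square metres
A3 = 0.297 * 0.420
A2 = 0.420 * 0.594

print(scaled_exposure(10.0, A4, A3))  # ~20 seconds, twice the area
print(scaled_exposure(10.0, A4, A2))  # ~40 seconds, four times the area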

Just to prove I am not making this up, look at this Wikipedia page: http://en.wikipedia.org/wiki/Inverse-square_law

So the inverse-square law is real; I have seen it in action when developing prints in 1985.

The reality is that luminance is measured in candela per square metre; see http://en.wikipedia.org/wiki/Candela_per_square_metre

So on a film set you can get an approximation of this by shooting a high dynamic range image, assembled from a number of low dynamic range images at different shutter speeds.

see:

http://www.researchgate.net/publication/220506295_High_Dynamic_Range_Imaging_and_Low_Dynamic_Range_Expansion_for_Generating_HDR_Content/file/d912f508ae7f8b733c.pdf
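
That merge is conceptually simple. Here is a deliberately simplified sketch, assuming linear pixel values; a real pipeline would also recover the camera response curve first.

#!/usr/bin/env python
# Sketch: merge bracketed LDR exposures of one pixel into an HDR radiance
# estimate (linear pixel values in [0, 1] assumed, all numbers illustrative).

def merge_hdr(brackets):
    """brackets: list of (pixel_value, shutter_seconds) for one pixel."""
    total, weight = 0.0, 0.0
    for pixel, seconds in brackets:
        w = 1.0 - abs(pixel - 0.5) * 2.0  # trust mid-range values the most
        if w > 0.0:
            total += w * (pixel / seconds)  # this bracket's radiance estimate
            weight += w
    return total / weight if weight else 0.0

# an overexposed, a well-exposed and an underexposed reading of one pixel
print(merge_hdr([(1.0, 1.0 / 30), (0.4, 1.0 / 250), (0.05, 1.0 / 2000)]))  # ~100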

 

But two things are usually forgotten with this process:

 

  1. Calibration
  2. The effect of distance and the inverse square law

The calibration problem could easily be overcome by using a light emitter of known energy in an off state and an on state.

So to do this you have an LED of a known size at a known distance from the camera.

You measure the luminance at 1 cm from the light source with a radiometer (http://en.wikipedia.org/wiki/Radiometer) and get the light source’s luminance in candela per square metre.

Then you create an HDR of this same light source at a fixed distance, say 1 m.

If this is a standard LED then you don’t need the radiometer every time.

If you had access to the radiometer you could just measure the energy of your light sources in candela per square metre on set.

From this you can then derive what a pixel value at the film back of the camera taking the HDRI is equivalent to in candela per square metre.
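
As a sketch of that calibration step (every number here is hypothetical, not a measurement):

#!/usr/bin/env python
# One radiometer reading of the reference LED ties linear HDRI pixel values
# to absolute luminance; after that a single scale factor converts any pixel.

measured_luminance = 1200.0  # cd/m^2, hypothetical radiometer reading
led_pixel_value = 8.5        # linear pixel value of the same LED in the HDRI

cd_per_m2_per_pixel = measured_luminance / led_pixel_value

def pixel_to_luminance(pixel_value):
    """Convert a linear HDRI pixel value to candela per square metre."""
    return pixel_value * cd_per_m2_per_pixel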

Great!

So we have an HDRI and we know the energy levels of the light arriving at the film back of the camera taking the HDRI.

Now to go further.

If you want to know the energy levels of the light at the surface it is being emitted from, you need to reverse the inverse-square decay.

So if you have two light sources in your HDRI with equivalent pixel values, then the luminance of those two light sources is equivalent at the film back of the camera.

But what if those light sources were 1 m and 2 m away from the camera, both occupying the same pixel area in a cubic cross HDRI?

It follows that the one 2 m away must be 4 times the intensity of the light that is 1 m away.
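
Following that reasoning, a small sketch of undoing the falloff, again with hypothetical numbers:

#!/usr/bin/env python
# Scale a luminance measured at the camera back up to the emitting surface
# by reversing the inverse-square falloff over the travelled distance.

def intensity_at_source(luminance_at_camera, distance, reference_distance=1.0):
    ratio = distance / reference_distance
    return luminance_at_camera * ratio * ratio

same_pixel = 300.0  # hypothetical cd/m^2 seen at the film back for both
print(intensity_at_source(same_pixel, 1.0))  # 300.0 for the 1 m source
print(intensity_at_source(same_pixel, 2.0))  # 1200.0, 4x, for the 2 m source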

Someone else has covered projecting the spherical HDRI onto geometry here:

https://www.fxguide.com/fxguidetv/fxguidetv-165-scott-metzger-on-mari-and-hdr/

This is valid for using the projection as a measure of the light at the surface of the object, as an albedo.

http://en.wikipedia.org/wiki/Albedo

But if you want to use this as a light source for illuminating synthetic objects with correct attenuation, you need to take into account the inverse-square falloff of the light between its surface and the film back where it is measured, to recover the luminance at the light source.

Furthermore, you can put some importance sampling into your light sources.

Here is a cool paper:

http://renderwonk.com/publications/s2010-shading-course/snow/sigg2010_physhadcourse_ILM.pdf

Anyway, this page explains the concept a whole lot better than I can: http://webstaff.itn.liu.se/~jonun/web/IBL.php

But this came to me at 11pm on a Saturday night, when I was trying to go to sleep, so I thought I would scribble it down on a piece of paper so the insomnia didn’t get the better of me.

Now it is 3:32 on a Sunday afternoon, the lawn has been mowed and my blog entry is complete for now.

Thread about tiled normal and colour maps in the Maya viewport

Saturday, May 25th, 2013

Thread on the Area Forums

The script that saved my bacon:

  • Multi Channel Setup MEL
  • Mel script itself

    Which used an Add/Multiply node instead of a multiLayer texture.

    Works with Mental Ray and the inbuilt renderer, but with 3delight and normal maps, not so much.

    Now our playblasts can look sweet with tiled UVs and Viewport 2.0.

Sketchy model for printing

Saturday, December 22nd, 2012

[Image: simple model]


Here is the STL

An STL for printing

Based on a few conversations online, I have determined there is a 3D printer at the Grote St library of the Adelaide City Council, free to use.

So I knocked up a quick model in Wings3d.

Maia made one too, which she said is a jewellery holder.

Maia's Computer model of holding jewellery

I will upload her OBJ too, and a screenshot of her model, but it is too large to upload without compression.

Sam

trying to calculate localVisibility with Spherical Harmonics

Saturday, November 12th, 2011
[Image: Spherical Harmonics Coefficients of Local Visibility]

In hindsight, baking a lookup table of vectors and their spherical harmonics coefficients might lead to artifacts, but with 1024 samples it doesn’t look too bad.

Feel free to download the source and sing along; it’s all based on the Sony paper from 2003, see the PDF from SCEA.

Python executable that writes the Spherical Harmonics RenderMan header: sh1.py

Please find the resulting header file attached: SH.h

And a simple shader to calculate transmission based on the table of stratified samples above: localVisibility.sl

It’s a bit clunky because I couldn’t work out how to do two-dimensional arrays in RSL; I’m glad to fix it up if it is possible.

The preprocessor thing isn’t that sweet either :/

As below:

#include "SH.h"
 
surface localVisibility(
        uniform float maxDistance = 10;
        uniform string outputFolder = "";
)
{
        /* Macros from SH.h declare the baked tables: the stratified sample
           directions (samplesVector) and one array per SH coefficient
           (samplesCoeffs0..samplesCoeffs8), one array each because RSL has
           no two-dimensional arrays. */
        SHVECTOR
        SHSPH0
        SHSPH1
        SHCOEFF0
        SHCOEFF1
        SHCOEFF2
        SHCOEFF3
        SHCOEFF4
        SHCOEFF5
        SHCOEFF6
        SHCOEFF7
        SHCOEFF8
        vector Nworld = vector(transform("world",N));
        point Pworld = transform("world",P);
        uniform float numSamples = 1024;
        uniform float numCoeffs = 9;
        varying float results[9] = {0,0,0,0,0,0,0,0,0};
        uniform float i,j;
        /* renamed from "faceforward", which shadows the built-in shadeop */
        varying float hemisphereSamples = 0;
        varying float occl = 0;
        for (i=0; i<numSamples; i=i+1) {
                /* only sample directions in the hemisphere around N contribute */
                float Hs = samplesVector[i].Nworld;
                point destinationWorld = Pworld + samplesVector[i]*maxDistance;
                point destinationCurrent = transform("world","current",destinationWorld);
                if (Hs > 0) {
                        hemisphereSamples += 1;
                        /* transmission() is 1 where the ray out to maxDistance
                           is unoccluded and 0 where it is blocked */
                        float isHit = comp(transmission(P,destinationCurrent),0);
                        if (isHit > 0) {
                                occl += 1;
                                /* project the visibility onto each SH basis
                                   function for this sample direction */
                                results[0] += isHit * samplesCoeffs0[i];
                                results[1] += isHit * samplesCoeffs1[i];
                                results[2] += isHit * samplesCoeffs2[i];
                                results[3] += isHit * samplesCoeffs3[i];
                                results[4] += isHit * samplesCoeffs4[i];
                                results[5] += isHit * samplesCoeffs5[i];
                                results[6] += isHit * samplesCoeffs6[i];
                                results[7] += isHit * samplesCoeffs7[i];
                                results[8] += isHit * samplesCoeffs8[i];
                        }
                }
        }
        /* normalise by the number of hemisphere samples */
        for (j=0; j<numCoeffs; j=j+1) {
                results[j] /= hemisphereSamples;
        }
        occl /= hemisphereSamples;
        hemisphereSamples /= numSamples;
        /* visualise the first three coefficients as a colour for debugging */
        Ci = color(results[0],results[1],results[2]);
        Oi = 1;
        Ci *= Oi;
}

I don’t think it is working yet, but it compiles and renders.

Sam

Open Computer Graphics Storage Formats

Saturday, November 6th, 2010

We all know that OpenEXR has pretty much standardised the storage format for image planes.

But now there are a few new contenders for other parts of the pipeline.

With the future of the industry being more widely outsourced, this standardisation between packages seems pretty important.

Some things were settled a long time ago:

  • Documents: PDF
  • Edits: EDL
  • Half-baked scene descriptions: FBX
  • Render Intermediates: RIB

But I really hope that people can start to be a little more uniform with the formats they are using.

Python Implementation of Spherical Harmonics Stratified Sampling

Sunday, October 31st, 2010

It’s Sunday afternoon and it’s time to write some code. This is pretty much lifted verbatim from Robin Green’s 2003 paper, Spherical Harmonic Lighting: The Gritty Details.

The next trick is to put this as a large table into a shader so I can bake out a set of 9 coefficients (3 bands) for shadowed diffuse transfer; these I will store per sample in a point cloud to be looked up during a shading stage.

After that I can implement the image based lights, and then I can do really quick image based lighting using spherical harmonics; a sketch of that final lookup is below.

Lots of fun!
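
As a taste of why that will be quick: once the light and the transfer function are both projected into the same 9-coefficient (3-band) basis, shading a point collapses to a dot product. A minimal sketch with made-up coefficient values:

#!/usr/bin/env python
# Shadowed diffuse transfer: radiance is the dot product of the baked
# per-point transfer coefficients with the SH-projected light coefficients.

def sh_shade(transfer_coeffs, light_coeffs):
    return sum(t * l for t, l in zip(transfer_coeffs, light_coeffs))

# hypothetical values, as looked up from the point cloud and a light probe
transfer = [0.8, 0.1, -0.2, 0.05, 0.0, 0.02, -0.01, 0.0, 0.03]
light = [1.2, 0.3, 0.5, -0.1, 0.0, 0.1, 0.0, 0.05, -0.02]
print(sh_shade(transfer, light))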

I made a big mess of this but luckily Markus Kransler was able to fix it up:

Here is the amended code:

#!/usr/bin/env python

import math
import random


class SHSample(object):
    """One stratified sample: spherical coordinates, unit vector, SH coefficients."""
    def __init__(self):
        self.sph = (0.0, 0.0)
        self.vec = (0.0, 0.0, 0.0)
        self.coeff = {}


def P(l, m, x):
    """Evaluate the associated Legendre polynomial P(l,m,x) at x."""
    pmm = 1.0
    if m > 0:
        somx2 = math.sqrt(1.0 - (x * x))
        fact = 1.0
        for i in range(1, m + 1):
            pmm *= (-fact) * somx2
            fact += 2.0
    if l == m:
        return pmm
    pmmp1 = x * ((2.0 * m) + 1.0) * pmm
    if l == m + 1:
        return pmmp1
    plm = 0.0
    for ll in range(m + 2, l + 1):
        plm = ((2.0 * ll - 1.0) * x * pmmp1 - (ll + m - 1.0) * pmm) / (ll - m)
        pmm = pmmp1
        pmmp1 = plm
    return plm


def K(l, m):
    """Normalisation constant for the SH basis function (l, m)."""
    temp = ((2.0 * l) + 1.0) * math.factorial(l - m) / (4.0 * math.pi * math.factorial(l + m))
    return math.sqrt(temp)


def SH(l, m, theta, phi):
    """Evaluate the real spherical harmonic basis function y(l,m) at (theta, phi)."""
    sqrt2 = math.sqrt(2.0)
    if m == 0:
        return K(l, 0) * P(l, 0, math.cos(theta))
    elif m > 0:
        return sqrt2 * K(l, m) * math.cos(m * phi) * P(l, m, math.cos(theta))
    else:
        return sqrt2 * K(l, -m) * math.sin(-m * phi) * P(l, -m, math.cos(theta))


def setupSamples(sqrtNumSamples=64, numBands=4):
    """Build stratified samples over the sphere, with SH coefficients per sample."""
    counter = 0
    oneOverN = 1.0 / float(sqrtNumSamples)
    samples = [SHSample() for i in range(sqrtNumSamples * sqrtNumSamples)]
    for i in range(sqrtNumSamples):
        for j in range(sqrtNumSamples):
            # jitter within each grid cell, then map the unit square to the sphere
            x = (i + random.random()) * oneOverN
            theta = 2.0 * math.acos(math.sqrt(1.0 - x))
            y = (j + random.random()) * oneOverN
            phi = 2.0 * math.pi * y
            samples[counter].sph = (theta, phi)
            samples[counter].vec = (math.sin(theta) * math.cos(phi),
                                    math.sin(theta) * math.sin(phi),
                                    math.cos(theta))
            coeff = {}
            for l in range(numBands):
                for m in range(-l, l + 1):
                    index = l * (l + 1) + m
                    coeff[index] = SH(l, m, theta, phi)
            samples[counter].coeff = coeff
            counter += 1
    return samples


if __name__ == "__main__":
    for sample in setupSamples():
        print(sample.coeff)

Some papers to read and implement

Tuesday, October 19th, 2010

Point Cloud in Nuke using “PositionToPoints”, 3delight RenderMan Shader Language and cortex-vfx

Thursday, October 14th, 2010

Here is the end result:

Nuke Node "PositionToPoints" with 3d EXR inputs

First things first, I need a model to work with.

Model

Shader

surface bakeColourAndPosition(
        uniform float diffuseAmount = 1;
        varying color surfaceColour = color(0.18,0.18,0.18);
        varying color opacityColour = color(0.99,0.99,0.99);
        uniform string bakeFile = "/tmp/out.bake";
)
{
   varying normal Nn = normalize(N);
   Ci = diffuse(Nn)*surfaceColour*diffuseAmount*Cs;
   Oi = opacityColour*Os;
   /* world-space position, so the point cloud is camera independent */
   varying point Pworld = transform("current","world",P);
   /* bake() appends one line per shaded point: s, t, then the data */
   bake(concat(bakeFile,"Position"),s,t,Pworld);
   bake(concat(bakeFile,"Colour"),s,t,Ci);
   Ci *= Oi;
}

This shader will produce two text “bakefile” files in the /tmp directory

Note: when parsing, the texture coordinate columns are ignored; only the 3rd, 4th and 5th values are used.

  1. one named out.bakeColour with colour information
  2. one named out.bakePosition with position information

Output Bakefiles

  • Download .tar.gz here…

Due to the SIMD nature of shaders, the line count of each of the bake files is the same, so line for line they contain the position and colour information for the same points.

They are in ASCII format so they are easy enough to parse with Python.

Here is an example of their content:

out.bakePositionmh
3
0 1 -0.07415867 0.17987273 -0.05079475
0 1 -0.073529155 0.1800126 -0.051191031
0 1 -0.07289961 0.18015243 -0.051587344
0 1 -0.072270096 0.18029229 -0.051983685
0 1 -0.07164058 0.18043211 -0.052379965
0 1 -0.07101102 0.18057197 -0.052776248
0 1 -0.07038155 0.1807118 -0.053172619

Creating Position and Colour EXR files using cortex-vfx

#!/usr/bin/env python

import sys, os, math

# make the cortex-vfx (IECore) python module importable
IECoreInstallPath = "/usr/lib/python2.6/site-packages"

if IECoreInstallPath not in sys.path:
    sys.path.append(IECoreInstallPath)

from IECore import *

bakeFolder = "/tmp"

colorBakeFileLocation = os.path.sep.join([bakeFolder, "out.bakeColour"])

positionBakeFileLocation = os.path.sep.join([bakeFolder, "out.bakePosition"])


def parseBakeFile(bakeFileLocation):
    """Skip the two header lines and collect (x, y, z) from columns 3-5."""
    data = []
    counter = 0
    bakeFile = open(bakeFileLocation, "r")
    for line in bakeFile.readlines():
        counter += 1
        if counter > 2:
            stuff = line.strip().split(" ")
            if len(stuff) > 2:
                data.append((float(stuff[2]), float(stuff[3]), float(stuff[4])))
    print("Completed parsing %d lines of file %s" % (len(data), bakeFileLocation))
    bakeFile.close()
    return data

colourData = parseBakeFile(colorBakeFileLocation)

positionData = parseBakeFile(positionBakeFileLocation)

if len(colourData) == len(positionData):
    # pack the point list into the smallest square image that will hold it
    squareSize = int(math.sqrt(len(positionData))) + 1
    print("Square Size: %d, Excess Pixels: %d" % (squareSize, squareSize * squareSize - len(colourData)))
    width = squareSize
    height = squareSize
    x = FloatVectorData(width * height)
    y = FloatVectorData(width * height)
    z = FloatVectorData(width * height)

    r = FloatVectorData(width * height)
    g = FloatVectorData(width * height)
    b = FloatVectorData(width * height)

    for i in range(len(colourData)):
        r[i] = colourData[i][0]
        g[i] = colourData[i][1]
        b[i] = colourData[i][2]
        x[i] = positionData[i][0]
        y[i] = positionData[i][1]
        z[i] = positionData[i][2]

    # one image for position, one for colour, both the same size
    boxColour = Box2i(V2i(0, 0), V2i(width - 1, height - 1))
    boxPosition = Box2i(V2i(0, 0), V2i(width - 1, height - 1))

    imageColour = ImagePrimitive(boxColour, boxColour)
    imagePosition = ImagePrimitive(boxPosition, boxPosition)

    # position goes into the RGB channels of one EXR...
    imagePosition["R"] = PrimitiveVariable(PrimitiveVariable.Interpolation.Vertex, x)
    imagePosition["G"] = PrimitiveVariable(PrimitiveVariable.Interpolation.Vertex, y)
    imagePosition["B"] = PrimitiveVariable(PrimitiveVariable.Interpolation.Vertex, z)

    # ...and colour into the RGB channels of the other
    imageColour["R"] = PrimitiveVariable(PrimitiveVariable.Interpolation.Vertex, r)
    imageColour["G"] = PrimitiveVariable(PrimitiveVariable.Interpolation.Vertex, g)
    imageColour["B"] = PrimitiveVariable(PrimitiveVariable.Interpolation.Vertex, b)

    writePosition = Writer.create(imagePosition, "/tmp/outPosition.exr")
    writeColour = Writer.create(imageColour, "/tmp/outColour.exr")
    writePosition.write()
    writeColour.write()

See more about cortex-vfx on Google Code.

Using Nuke to read the Position and Colour Data

File > Script Command > PositionToPoints

If you weren’t able to create your own pair of EXRs, you can download the pair here in .tar.bz2 format.

So then you just need to connect them up to the input nodes for the PositionToPoints 3d node as follows:

Nuke Node "PositionToPoints" with 3d EXR inputs

If you thought this was useful, leave a comment; or if you thought it was stupid, leave a comment about how to improve it.

Sam