## computer graphics – Ray tracing tree

Suppose we have a light source. Which of the following statements are correct for the tree of recursive ray tracing (the ray tree) that is created when we compute the color of each pixel on the screen?

a) If the scene consists of one non-convex polyhedron (of arbitrary shape and position), then in general we cannot bound the height of the tree (even when the camera is outside the object).

b) If the scene consists of two convex polyhedra, then the height of the tree is at most 2, regardless of the positions of the camera and the objects (the camera is outside both objects).

c) If the scene consists of two non-convex polyhedra (of arbitrary shape and position), then in general we cannot bound the height of the tree (even when the camera is outside the objects).

d) If the scene consists of one convex polyhedron, then the height of the tree is at most 1, regardless of the positions of the camera and the object (the camera is outside the object).

e) If the scene consists of one non-convex polyhedron, then the height of the tree is at most 1, regardless of the positions of the camera and the object (the camera is outside the object).

I am confused by the distinction between convex and non-convex polyhedra; I can't see what difference it makes here.
I also don't understand how the camera being outside the objects helps bound the ray tree.
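For intuition (this toy model is mine, not from the original question), each reflection adds one level to the ray tree, and the depth is bounded only by how often the scene can re-intersect the reflected ray, or by an artificial recursion cutoff. A single convex object seen from outside can be hit at most once along a reflection path, while two objects facing each other can bounce a ray back and forth indefinitely:

```python
def ray_tree_height(possible_bounces, cutoff):
    """Toy model: each reflection spawns one child node in the ray tree.

    `possible_bounces` stands for how many times the scene geometry can
    re-intersect the reflected ray; `cutoff` is the renderer's recursion limit.
    """
    if possible_bounces == 0 or cutoff == 0:
        return 0
    return 1 + ray_tree_height(possible_bounces - 1, cutoff - 1)

# One convex object, camera outside: the reflected ray leaves the object and
# never hits it again, so the tree has height at most 1.
print(ray_tree_height(1, 10))      # -> 1

# Two objects facing each other: nothing but the cutoff bounds the height.
print(ray_tree_height(10**3, 10))  # -> 10
```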

## Unreal gives error when line tracing collision

I am currently using Unreal Engine 4.26.
I have been following this tutorial to make a parkour game in Unreal.

I followed the tutorial exactly, and made a walk/jump/double jump game.

However, when I made the wall climbing system, Unreal started giving an error.

The wall climbing still works, but it shows this error when it ends:

```
Blueprint Runtime Error: "Accessed None trying to read property CallFunc_BreakHitResult_HitActor". Blueprint:  Parkour_BP_Char Function:  Execute Ubergraph Parkour BP Char Graph:  EventGraph Node:  Branch
```

It has hundreds of copies of that error in the message log.

I would attach my Unreal project file, but I don't know how.

Here is the double jump blueprint setup:
Image 1
Image 2
Image 3

(Sorry about 3 images, it’s a large setup.)

Does anyone know how I can solve the error?
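Not an Unreal answer as such, but a hedged Python sketch of the control flow that usually fixes this: "Accessed None ... HitActor" means the Branch reads Hit Actor after a line trace that hit nothing, so an IsValid check should gate the branch. The `hit` dictionary below is a hypothetical stand-in for the Break Hit Result output:

```python
def handle_wall_trace(hit):
    """Gate the climb logic on a valid hit, mirroring an IsValid node.

    `hit` is a hypothetical stand-in for Unreal's Break Hit Result output:
    None (or an actor-less dict) models a line trace that missed, which is
    exactly the situation in which "Accessed None ... HitActor" fires.
    """
    if hit is None or hit.get("actor") is None:
        return False  # trace missed: bail out instead of branching on None
    return bool(hit["actor"].get("is_climbable"))

print(handle_wall_trace(None))                               # -> False
print(handle_wall_trace({"actor": {"is_climbable": True}}))  # -> True
```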

## raytracing – Ray tracing bug with diffuse material

I am trying to write a ray tracer in Python, following Ray Tracing in One Weekend. Basically I shoot rays from the eye and bounce them around recursively; each time they hit something they become weaker (this is actually reverse ray casting, but you get the idea). The output I am getting is incorrect.

My output:

Expected output:

The shadows are messed up. What could be wrong?

My code:

```
import numpy as np
import sys
import random
from PIL import Image
from math import *
from util import *

width  = 60
height = 60

samples = 20

#-----------------------------------------------------------------------

def reflected(vector, axis):
    return vector - axis * 2 * vector.dot(axis)

def RandomPointInSphere():
    p = None
    while True:
        p = Vector(random.uniform(0,1), random.uniform(0,1), random.uniform(0,1))*2 - Vector(1,1,1)
        if p.dot(p) < 1:
            break
    return p

def GetNearestObject(objects, ray):
    nearest_obj = None
    min_hit = Intersection(None, INF, None, None)

    for obj in objects:
        hit = obj.intersect(ray)
        if hit.distance < min_hit.distance:
            nearest_obj = obj
            min_hit = hit

    return min_hit

#-----------------------------------------------------------------------

def RayColor(objects, ray):
    # Part 1: Diffuse Material
    result = GetNearestObject(objects, ray)
    if result.point != None:
        P = result.point
        N = result.normal
        E = RandomPointInSphere()
        target = P + N + E

        newRay = Ray(ray.origin, (target - ray.origin).normalize())
        return RayColor(objects, newRay)*0.5
    else:
        t = 0.5 * (ray.direction.y + 1.0)
        color = Vector(1.0, 1.0, 1.0)*(1.0 - t) + Vector(0.5, 0.7, 1.0)*t
        color.x = min(color.x, 1.0)
        color.y = min(color.y, 1.0)
        color.z = min(color.z, 1.0)
        return Vector(1,1,1)

#-----------------------------------------------------------------------
def main():
    global bitmap

    eye    = Vector(0,0,1)
    ratio  = float(width) / height
    screen = Screen(-1, 1 / ratio, 1, -1 / ratio, 0)

    objects = []
    objects.append(Sphere(Vector(-0.2,0,-1), 0.7, Material(Vector(0.1,0,0), Vector(0.7,0,0), Vector(1,1,1), 100, 0.5)))

    objects.append(Sphere(Vector(0,-9000,0), 9000-0.7, Material(Vector(0.1,0.1,0.1), Vector(0.6,0.6,0.6), Vector(1,1,1), 100, 0.5)))

    light = Light(Vector(5,5,5), Material(Vector(1,1,1), Vector(1,1,1), Vector(1,1,1)))

    for frame in range(1):

        img    = Image.new(mode="RGB", size=(width, height), color=Color.WHITE)
        bitmap = img.load()  # create the pixel data

        #--------------------------------------------------------------
        sys.setrecursionlimit(10000)

        #breakpoint()

        deltaX = (screen.right - screen.left)/(width-1)
        deltaY = (screen.top - screen.bottom)/(height-1)

        for y in range(height):
            for x in range(width):
                pixel     = Vector(screen.left + x*deltaX, screen.top - y*deltaY, screen.z)
                direction = (pixel - eye).normalize()
                pixelRay  = Ray(eye, direction)

                # Part 1: Diffuse Material
                color = Vector(0,0,0)
                for s in range(samples):
                    color += RayColor(objects, pixelRay)

                color *= 1.0/samples
                #color = Vector(sqrt(color.x), sqrt(color.y), sqrt(color.z))
                bitmap[x, y] = (int(color.x*255), int(color.y*255), int(color.z*255))

            print("progress: %d %%" % ((y+1)/height*100.0))

        #--------------------------------------------------------------

        img.show()
        img.save("pic1.png")
        #img.save("images/fig" + f'{frame:06}' + ".png")
        #print("Saving ---> images/fig" + f'{frame:06}' + ".png")
        #img.close()

main()
```
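For comparison, here is how the diffuse bounce step is described in Ray Tracing in One Weekend, as a hedged sketch using plain Python lists instead of the post's `Vector` class: the secondary ray starts at the hit point `P` (not the eye) and aims at `P + N + E`, so its direction is simply `N + E`:

```python
import random

def random_in_unit_sphere():
    # rejection-sample a point inside the unit sphere
    while True:
        p = [random.uniform(-1, 1) for _ in range(3)]
        if sum(c * c for c in p) < 1:
            return p

def diffuse_bounce(P, N):
    """Return (origin, direction) of the next ray for a Lambertian hit.

    The new ray starts at the hit point P and points toward P + N + E,
    i.e. its direction is N + E, which always lies in the hemisphere of N
    because |E| < 1 implies N.(N + E) = 1 + N.E > 0.
    """
    E = random_in_unit_sphere()
    direction = [N[i] + E[i] for i in range(3)]
    return P, direction

origin, d = diffuse_bounce([0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
print(origin)    # -> [0.0, 0.0, 0.0]
print(d[1] > 0)  # -> True
```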

## directx – Compile shader and root signature of a ray tracing shader into a single binary using DXC

I'm new to DXR, so please bear with me.

If I understand correctly, when we want to compile a ray tracing shader using the DirectX Shader Compiler, we need to specify `lib_6_*` as the target profile.

Now assume I've got an HLSL file containing a single ray generation shader `RayGen` whose root signature is specified by a RootSignature attribute of the form

```
#define MyRS "RootFlags(LOCAL_ROOT_SIGNATURE)," \
             "DescriptorTable(" \
             "UAV(u0, numDescriptors = 1)," \
             "SRV(t0, numDescriptors = 1))"

[rootsignature(MyRS)]
void RayGen() {}
```

Using `IDxcCompiler::Compile`, I'm able to compile both the shader itself using the target profile `lib_6_3` and the root signature using the target profile `rootsig_1_1`. But if I understand correctly, it's not possible to invoke `IDxcCompiler::Compile` such that the created `IDxcBlob` contains both the shader and the root signature. (I've tried adding the argument `-rootsig-define MyRS` to the call that compiles the shader, but it seems the compiler expects a root signature specified this way to be a global root signature.)

So I end up with two `IDxcBlob`s. Is there any way to “merge” them into a single blob which can later be used both to specify the shader and in a call to `ID3D12Device5::CreateRootSignature`?

## correlation – Problem with tracing a marker in an image

My problem seems simple: I am trying to follow a marker across a number of images. Everything works fine, but when I update the marker from image to image I observe a weird oscillatory behavior. I isolated the problem, and it also appears in the case below, where I try to find the marker several times in the same image.

```
image = [![enter image here][1]][1];
size = 40 (*marker size*)
row = 256 (*marker center row*)
col = 1013 (*marker center column*)
Do[
 {
  rowMin = row - size,
  rowMax = row + size,
  colMin = col - size,
  colMax = col + size,
  marker = ImageTake[image, {rowMin, rowMax}, {colMin, colMax}], (*marker image*)
  corr = ImageCorrelate[image, marker, SquaredEuclideanDistance], (*correlate image and marker*)
  min = PixelValuePositions[corr, "Min"] // First,
  Print[min],
  row = min[[2]],
  col = min[[1]],
  Print[marker]
  }, {i, 1, 5}]
```

I would expect to see the marker image five times. Instead, it appears only every second time, and in between another section of the image appears.
Any thoughts on why this happens, and what I can do about it, would be very much appreciated.
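As a point of comparison (a hedged numpy sketch, not a diagnosis of the Mathematica code): a squared-distance template match returns a position in one fixed index convention, and the caller has to crop in exactly the same (row, column) order. Swapping the two coordinates between the match result and the crop produces precisely this kind of alternating jump:

```python
import numpy as np

def find_marker(image, marker):
    """Brute-force squared-distance template match.

    Returns the (row, col) of the best-matching top-left corner, always in
    numpy's row-major convention; the caller must crop with the same order.
    """
    ih, iw = image.shape
    mh, mw = marker.shape
    best_d, best_pos = None, None
    for r in range(ih - mh + 1):
        for c in range(iw - mw + 1):
            d = np.sum((image[r:r + mh, c:c + mw] - marker) ** 2)
            if best_d is None or d < best_d:
                best_d, best_pos = d, (r, c)
    return best_pos

rng = np.random.default_rng(0)
image = rng.random((30, 40))
marker = image[5:9, 10:14]         # crop with (row, col) indexing
print(find_marker(image, marker))  # -> (5, 10), same convention both ways
```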

Christian

## c++ – Path tracing: how to ensure the new direction vector is a valid direction vector with respect to a BSDF?

Given the BSDF function and the `Normal` vector of the intersection point in world space, how can I generate a new direction vector `wi` that is valid? Does the method for generating valid `wi`s change based on the BSDF?

Here's an example of what I'm thinking of doing for an ideal diffuse BSDF: I generate a new direction vector `wi` as a point on the unit hemisphere as follows, then compute the `dot` product of the produced vector with the `Normal` vector. If the `dot` product is positive, the direction vector `wi` is valid; otherwise I negate `wi`, as suggested here.

Here’s how I get a random `wi`:

```
float theta = 2 * M_PI * uniform01(generator);
float phi = acos(uniform01(generator));
float x = sin(phi) * cos(theta);
float y = sin(phi) * sin(theta);
float z = cos(phi);
Vector3f wi(x, y, z);

if (dot(wi, Normal) > 0) {
    return wi;
}
else {
    return -wi;
}
```

However, this doesn't seem to be the right approach, based on a conversation I had with someone recently. Apparently the new direction vector produced this way is not in the right space (I'm not sure whether it was world or object space) and would only work if my material is ideal diffuse, so I would have to apply some transformation to get the right `wi`. Is this correct? If so, can someone provide a solution that includes such a transformation? Also, is there a general way to ensure all of my produced `wi`s are valid with respect to the BSDF (not just for ideal diffuse)? Should I still use this method for importance sampling?
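To make the transformation in question concrete, here is a hedged Python sketch of the standard approach (assuming a normalized normal): draw the sample in a local frame where +z is "up", then rotate it into world space using an orthonormal basis built around the shading normal. The basis construction below is the branchless one from Duff et al., "Building an Orthonormal Basis, Revisited":

```python
def orthonormal_basis(n):
    """Tangent and bitangent so that (t, bt, n) is a right-handed orthonormal frame."""
    sign = 1.0 if n[2] >= 0.0 else -1.0
    a = -1.0 / (sign + n[2])
    b = n[0] * n[1] * a
    t = (1.0 + sign * n[0] * n[0] * a, sign * b, -sign * n[0])
    bt = (b, sign + n[1] * n[1] * a, -n[1])
    return t, bt

def to_world(local, n):
    """Rotate a tangent-space direction (local z = surface normal) into world space."""
    t, bt = orthonormal_basis(n)
    return tuple(local[0] * t[i] + local[1] * bt[i] + local[2] * n[i]
                 for i in range(3))

n = (0.0, 1.0, 0.0)
print(to_world((0.0, 0.0, 1.0), n))  # -> the normal itself: (0.0, 1.0, 0.0)
```

Once the frame is in place, samples for any BSDF lobe can be drawn in local coordinates with z as the "up" axis and mapped the same way, so the dot-product check becomes a mere safety net rather than the mechanism that puts `wi` in the right hemisphere.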

## Tracing Bitcoins to their original blocks

Can I always (without fail) find the specific set of blocks from which the bitcoins that I hold originated?

If yes, would such a trace-back be computationally costly?
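For intuition about the cost (a toy Python model of my own, not Bitcoin's real data structures): every non-coinbase transaction input references a previous transaction's output, so tracing coins back is a graph walk that terminates at coinbase transactions, and the work grows with the fan-in of the coin's ancestry rather than with the size of the whole chain:

```python
def trace_to_coinbase(txid, txs, seen=None):
    """Collect the coinbase ancestors of a transaction in a toy UTXO graph.

    `txs` maps txid -> list of parent txids (an empty list marks a coinbase,
    i.e. newly created coins where the trace ends).
    """
    if seen is None:
        seen = set()
    if txid in seen:
        return set()
    seen.add(txid)
    parents = txs[txid]
    if not parents:
        return {txid}
    out = set()
    for p in parents:
        out |= trace_to_coinbase(p, txs, seen)
    return out

# toy graph: c1 and c2 are coinbases; t3 spends outputs descended from both
txs = {"c1": [], "c2": [], "t1": ["c1"], "t2": ["c2"], "t3": ["t1", "t2"]}
print(sorted(trace_to_coinbase("t3", txs)))  # -> ['c1', 'c2']
```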