field of view – Geocoordinate calculation for aerial oblique image using camera and plane yaw, pitch, roll, and position data

I have a requirement to calculate the ground footprint of an aerial camera. The photos are TerraPhotos. The TerraPhoto user guide provides the camera position and plane orientation in the .IML file, and I also have the camera calibration file.

In the TerraPhoto guide, the yaw, pitch, and roll of the aircraft are defined as follows:

  • yaw (heading): measured clockwise from North
  • roll: positive if the left wing is up
  • pitch: positive if the nose of the aircraft is up

The camera calibration details are as follows:

(TerraPhoto calibration)
Version=20050513
Description= Nikon D800E BW 50mm
TimeOffset= 0.0000
Exposure= 0.00000
LeverArm= 0.0000 0.0000 0.0000
AntennaToCameraOffset= 0.0000 0.0000 0.0000
AttitudeCorrections(HRP)= -0.4546 0.7553 -34.7538
PlateSize= 7630.00000000 4912.00000000
ImageSize= 7630 4912
Margin= 0
FiducialRadius= 40
FiducialMarks= 0
Orientation= BOTTOM
PrincipalPoint(XoYoZo)= -77.40000000 112.80000000 -10476.54389508
LensModel=Balanced
LensK0=0.000000E+000
LensK1=0.000000E+000
LensK2=0.000000E+000
LensP1=0.000000E+000
LensP2=0.000000E+000

Here I see that an AttitudeCorrections entry is given for the camera. Hence, I believe it describes the orientation of the camera relative to the aircraft's local frame.

With respect to a given aerial photo, I have the following details, which I obtained from the .IML file (please check page 344 of the guide for more info).

Image=SLR2_443_20150326_144759_C_B_3489_DUBLIN_AREA_2KM2_FL_300_2232888
Time=402972.957799
Xyz=316440.819 234424.606 312.938
Hrp=-113.33234 2.03435 -1.87426
  • Image is the name of the image
  • Xyz is the camera position (easting, northing, and elevation)
  • Hrp is the aircraft heading (yaw), roll, and pitch

With this information at hand, I am attempting to calculate the ground coordinates of the image footprint, using the horizontal and vertical FoV.

I’ve been attempting this for some time, but I am still unable to estimate the geocoordinates properly. I also tried the pinhole model. I obtain results around the area of interest, but they do not match the actual geolocations.

I intend to use either the pinhole model or the horizontal and vertical field of view (FoV) to calculate the geocoordinates.
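
For reference, the FoV can be estimated from the calibration file, assuming (my assumption; please verify against the TerraPhoto documentation) that the third component of PrincipalPoint(XoYoZo) is the principal distance in the same pixel units as PlateSize:

import math

# Values from the calibration file above (assumed units: pixels).
plate_w, plate_h = 7630.0, 4912.0      # PlateSize
principal_distance = 10476.54389508    # |Zo| of PrincipalPoint(XoYoZo)

# Pinhole relation: FoV = 2 * atan((size / 2) / principal_distance)
fov_h = 2.0 * math.degrees(math.atan((plate_w / 2.0) / principal_distance))
fov_v = 2.0 * math.degrees(math.atan((plate_h / 2.0) / principal_distance))
print(round(fov_h, 1), round(fov_v, 1))  # roughly 40.0 and 26.4 degrees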

A guide in the right direction is appreciated.

The code for the FoV calculation is provided below.

import math
import numpy as np

def createRollMatrix(yaw, pitch, roll):
  '''
     Uses the Eigen-formatted rotation matrix,
     pulled directly from the Eigen base code to Python.
     Angles are given in degrees.
  '''
  # convert degrees to radians
  yaw = np.radians(yaw)
  pitch = np.radians(pitch)
  roll = np.radians(roll)

  su = np.sin(roll)
  cu = np.cos(roll)
  sv = np.sin(pitch)
  cv = np.cos(pitch)
  sw = np.sin(yaw)
  cw = np.cos(yaw)

  rotation_matrix = np.zeros((3, 3))

  rotation_matrix[0, 0] = cv*cw
  rotation_matrix[0, 1] = su*sv*cw - cu*sw
  rotation_matrix[0, 2] = su*sw + cu*sv*cw

  rotation_matrix[1, 0] = cv*sw
  rotation_matrix[1, 1] = cu*cw + su*sv*sw
  rotation_matrix[1, 2] = cu*sv*sw - su*cw

  rotation_matrix[2, 0] = -sv
  rotation_matrix[2, 1] = su*cv
  rotation_matrix[2, 2] = cu*cv

  return rotation_matrix

#### CAMERA misalignment angles (AttitudeCorrections(HRP) from the calibration file)
yaw = -0.4546
pitch = -34.7538
roll = 0.7553

#### aircraft's yaw, pitch, roll (Hrp from the .IML file)
yaw1 = -113.33234
pitch1 = -1.87426
roll1 = 2.03435

R = createRollMatrix(yaw, pitch, roll)
R2 = createRollMatrix(yaw1, pitch1, roll1)

Corrected_R = R2.dot(R)

# Extract the combined angles back from the rotation matrix. atan2 is used
# instead of atan so that angles outside (-90, 90) degrees, such as the
# -113 degree heading, are recovered in the correct quadrant.
yaw = math.atan2(Corrected_R[1, 0], Corrected_R[0, 0])
roll = math.atan2(Corrected_R[2, 1], Corrected_R[2, 2])
pitch = math.atan2(-Corrected_R[2, 0],
                   math.sqrt(Corrected_R[2, 1]**2 + Corrected_R[2, 2]**2))
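
As a quick sanity check (a minimal sketch; it assumes the extraction convention above matches createRollMatrix), re-composing the matrix from the extracted angles should reproduce Corrected_R:

recomposed = createRollMatrix(np.degrees(yaw), np.degrees(pitch), np.degrees(roll))
print(np.allclose(recomposed, Corrected_R))  # expect True while |pitch| < 90 degrees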

Subsequently, I use the following code to calculate the geocoordinates.

import math
import numpy as np 

# pip install vector3d
from vector3d.vector import Vector


class CameraCalculator:
    """Porting of CameraCalculator.java
    This code is a 1-to-1 Python port of the Java code:
        https://github.com/zelenmi6/thesis/blob/master/src/geometry/CameraCalculator.java
    referred in:
        https://stackoverflow.com/questions/38099915/calculating-coordinates-of-an-oblique-aerial-image
    The only parts not ported are those explicitly abandoned or not used at all by the main
    call to the getBoundingPolygon method.
    by: milan zelenka
    https://github.com/zelenmi6
    https://stackoverflow.com/users/6528363/milan-zelenka
    example:
        c=CameraCalculator()
        bbox=c.getBoundingPolygon(
            math.radians(62),
            math.radians(84),
            117.1, 
            math.radians(0),
            math.radians(33.6),
            math.radians(39.1))
        for i, p in enumerate(bbox):
            print("point:", i, '-', p.x, p.y, p.z)
    """

    def __init__(self):
        pass

    def __del__(self):
        pass

    @staticmethod
    def getBoundingPolygon(FOVh, FOVv, altitude, roll, pitch, heading):
        '''Get corners of the polygon captured by the camera on the ground. 
        The calculations are performed in the axes origin (0, 0, altitude)
        and the points are not yet translated to camera's X-Y coordinates.
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
            altitude (float): Altitude of the camera in meters
            heading (float): Heading of the camera (z axis) in radians
            roll (float): Roll of the camera (x axis) in radians
            pitch (float): Pitch of the camera (y axis) in radians
        Returns:
            vector3d.vector.Vector: Array with 4 points defining a polygon
        '''
        ray11 = CameraCalculator.ray1(FOVh, FOVv)
        ray22 = CameraCalculator.ray2(FOVh, FOVv)
        ray33 = CameraCalculator.ray3(FOVh, FOVv)
        ray44 = CameraCalculator.ray4(FOVh, FOVv)

        rotatedVectors = CameraCalculator.rotateRays(
                ray11, ray22, ray33, ray44, roll, pitch, heading)
        
        # The Java original used origin = Vector(0, 0, altitude) and translated
        # the results afterwards; here the camera position is passed in directly.

        # FW ---- SLR1
        # origin = Vector(316645.779, 234643.179, altitude)

        # BW ---- SLR2
        origin = Vector(316440.819, 234424.606, altitude)
        # origin = Vector(316316, 234314, altitude)
        intersections = CameraCalculator.getRayGroundIntersections(rotatedVectors, origin)

        return intersections


    # Ray-vectors defining the camera's field of view. FOVh and FOVv are interchangeable
    # depending on the camera's orientation
    @staticmethod
    def ray1(FOVh, FOVv):
        '''
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
        Returns:
            vector3d.vector.Vector: normalised vector
        '''
        ray = Vector(math.tan(FOVv / 2), math.tan(FOVh/2), -1)
        return ray.normalize()

    @staticmethod
    def ray2(FOVh, FOVv):
        '''
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
        Returns:
            vector3d.vector.Vector: normalised vector
        '''
        ray = Vector(math.tan(FOVv/2), -math.tan(FOVh/2), -1)
        return ray.normalize()

    @staticmethod
    def ray3(FOVh, FOVv):
        '''
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
        Returns:
            vector3d.vector.Vector: normalised vector
        '''
        ray = Vector(-math.tan(FOVv/2), -math.tan(FOVh/2), -1)
        return ray.normalize()

    @staticmethod
    def ray4(FOVh, FOVv):
        '''
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
        Returns:
            vector3d.vector.Vector: normalised vector
        '''
        ray = Vector(-math.tan(FOVv/2), math.tan(FOVh/2), -1)
        return ray.normalize()

    @staticmethod
    def rotateRays(ray1, ray2, ray3, ray4, roll, pitch, yaw):
        """Rotates the four ray-vectors around all 3 axes
        Parameters:
            ray1 (vector3d.vector.Vector): First ray-vector
            ray2 (vector3d.vector.Vector): Second ray-vector
            ray3 (vector3d.vector.Vector): Third ray-vector
            ray4 (vector3d.vector.Vector): Fourth ray-vector
            roll float: Roll rotation
            pitch float: Pitch rotation
            yaw float: Yaw rotation
        Returns:
            Returns new rotated ray-vectors
        """
        sinAlpha = math.sin(yaw) #sw OK
        sinBeta = math.sin(pitch) #sv OK
        sinGamma = math.sin(roll) #su OK
        cosAlpha = math.cos(yaw) #cw OK
        cosBeta = math.cos(pitch) #cv OK
        cosGamma = math.cos(roll) #cu OK
        m00 = cosBeta * cosAlpha
        m01 = sinGamma * sinBeta * cosAlpha - cosGamma * sinAlpha
        m02 = sinGamma * sinAlpha + cosGamma * cosAlpha * sinBeta
        m10 = sinAlpha * cosBeta
        m11 = sinAlpha * sinBeta * sinGamma + cosAlpha * cosGamma
        m12 = sinAlpha * sinBeta * cosGamma - cosAlpha * sinGamma
        m20 = -sinBeta
        m21 = cosBeta * sinGamma
        m22 = cosBeta * cosGamma
        
        # Matrix rotationMatrix = new Matrix(new double[][]{{m00, m01, m02}, {m10, m11, m12}, {m20, m21, m22}})
        rotationMatrix = np.array(((m00, m01, m02), (m10, m11, m12), (m20, m21, m22)))

        # Matrix ray1Matrix = new Matrix(new double[][]{{ray1.x}, {ray1.y}, {ray1.z}})
        # Matrix ray2Matrix = new Matrix(new double[][]{{ray2.x}, {ray2.y}, {ray2.z}})
        # Matrix ray3Matrix = new Matrix(new double[][]{{ray3.x}, {ray3.y}, {ray3.z}})
        # Matrix ray4Matrix = new Matrix(new double[][]{{ray4.x}, {ray4.y}, {ray4.z}})
        ray1Matrix = np.array((ray1.x, ray1.y, ray1.z))
        ray2Matrix = np.array((ray2.x, ray2.y, ray2.z))
        ray3Matrix = np.array((ray3.x, ray3.y, ray3.z))
        ray4Matrix = np.array((ray4.x, ray4.y, ray4.z))
        
        res1 = rotationMatrix.dot(ray1Matrix)
        res2 = rotationMatrix.dot(ray2Matrix)
        res3 = rotationMatrix.dot(ray3Matrix)
        res4 = rotationMatrix.dot(ray4Matrix)
        
        # The dot products above yield 1-D arrays, so index with a single subscript.
        rotatedRay1 = Vector(res1[0], res1[1], res1[2])
        rotatedRay2 = Vector(res2[0], res2[1], res2[2])
        rotatedRay3 = Vector(res3[0], res3[1], res3[2])
        rotatedRay4 = Vector(res4[0], res4[1], res4[2])
        rayArray = (rotatedRay1, rotatedRay2, rotatedRay3, rotatedRay4)
        
        return rayArray

    @staticmethod
    def getRayGroundIntersections(rays, origin):
        """
        Finds the intersections of the camera's ray-vectors 
        and the ground approximated by a horizontal plane
        Parameters:
            rays (vector3d.vector.Vector()): Array of 4 ray-vectors
            origin (vector3d.vector.Vector): Position of the camera. The computation were developed 
                                            assuming the camera was at the axes origin (0, 0, altitude) and the python
                                            results translated by the camera's real position afterwards.
        Returns:
            vector3d.vector.Vector
        """
        # Vector3d[] intersections = new Vector3d[rays.length];
        # for (int i = 0; i < rays.length; i++) {
        #     intersections[i] = CameraCalculator.findRayGroundIntersection(rays[i], origin);
        # }
        # return intersections

        # 1-to-1 translation without Python syntax optimisation
        intersections = []
        for i in range(len(rays)):
            intersections.append(CameraCalculator.findRayGroundIntersection(rays[i], origin))
        return intersections

    @staticmethod
    def findRayGroundIntersection(ray, origin):
        """
        Finds a ray-vector's intersection with the ground approximated by a plane
        Parameters:
            ray (vector3d.vector.Vector): Ray-vector
            origin (vector3d.vector.Vector): Camera's position
        Returns:
            vector3d.vector.Vector
        """
        # Parametric form of an equation
        # P = origin + vector * t
        x = Vector(origin.x,ray.x)
        y = Vector(origin.y,ray.y)
        z = Vector(origin.z,ray.z)
        
        # Equation of the horizontal plane (ground)
        # -z = 0
        
        # Calculate t by substituting z
        t = - (z.x / z.y)
        
        # Substitute t in the original parametric equations to get points of intersection
        return Vector(x.x + x.y * t, y.x + y.y * t, z.x + z.y * t)
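
For completeness, here is how I wire the pieces together (a minimal sketch; the FoV values come from my calibration-based estimate above, and roll, pitch, and yaw are the radian values extracted from Corrected_R, both of which are my assumptions rather than confirmed figures):

c = CameraCalculator()
# 312.938 is the camera elevation from the .IML record; strictly, the
# intersection assumes the height above the (flat) ground plane, so the
# terrain elevation should be subtracted if it is not zero.
bbox = c.getBoundingPolygon(math.radians(40.0), math.radians(26.4),
                            312.938, roll, pitch, yaw)
for i, p in enumerate(bbox):
    print("point:", i, '-', p.x, p.y, p.z)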

algorithms – Suffix tree: check existence of pattern P before position k in string T

I need to design an algorithm that, given a string T of length n and after O(n) preprocessing, checks in O(m) time, for any string P of length m and any value k between 1 and n, whether P appears in T before position k, using only a suffix tree.

Unfortunately, there are not any good bioinformatics books with fair examples and practical methodologies, and Dan Gusfield's book does not offer a solution manual.
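
One standard approach (a sketch of my understanding, not a reference solution): annotate every suffix tree node with the minimum suffix start index in its subtree during an O(n) DFS; a query then walks P down the tree in O(m) and compares that minimum against k. A minimal Python sketch, using a naive O(n^2) construction for brevity (Ukkonen's algorithm would give the O(n) preprocessing bound) and interpreting "P appears before position k" as an occurrence ending at or before k:

class Node:
    def __init__(self):
        self.children = {}              # first edge char -> (edge label, child node)
        self.min_suffix = float('inf')  # smallest suffix start index in the subtree

def build_suffix_tree(T):
    # Naive O(n^2) construction for illustration only.
    T += '$'                            # unique terminator
    root = Node()
    for i in range(len(T)):
        node, s = root, T[i:]
        while True:
            first = s[0]
            if first not in node.children:
                leaf = Node()
                leaf.min_suffix = i
                node.children[first] = (s, leaf)
                break
            label, child = node.children[first]
            j = 0
            while j < len(label) and j < len(s) and label[j] == s[j]:
                j += 1
            if j == len(label):
                node, s = child, s[j:]
            else:
                mid = Node()            # split the edge at the mismatch
                node.children[first] = (label[:j], mid)
                mid.children[label[j]] = (label[j:], child)
                leaf = Node()
                leaf.min_suffix = i
                mid.children[s[j]] = (s[j:], leaf)
                break
    return root

def annotate(node):
    # O(n) DFS filling min_suffix for internal nodes.
    for _, child in node.children.values():
        node.min_suffix = min(node.min_suffix, annotate(child))
    return node.min_suffix

def occurs_before(root, P, k):
    # O(m): walk P down the tree, then test the earliest occurrence.
    node, i = root, 0
    while i < len(P):
        entry = node.children.get(P[i])
        if entry is None:
            return False
        label, child = entry
        for ch in label:
            if i == len(P):
                break
            if ch != P[i]:
                return False
            i += 1
        node = child
    return node.min_suffix + len(P) <= k

root = build_suffix_tree("banana")
annotate(root)
print(occurs_before(root, "ana", 4))  # True: "ana" starts at index 1, ends at position 4
print(occurs_before(root, "ana", 3))  # False: the earliest occurrence ends at position 4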

opengl – Screen space reflections shown at incorrect position

I have been trying to add an SSR post-processing effect to my engine for a while now, but it always fails in the same way: reflections are not properly positioned below the object; instead they are skewed, and they appear and disappear as the camera moves.

I’m using a g-buffer system which renders positions and normals in view space (I have also tried to reconstruct the position from the depth buffer, but it gives the same result). The shaders are listed below.

I suspect that my normals might be wrong, but I’m also using SSAO with the same buffers, and that works just fine.

I have read several tutorials on this topic and tried them, but it always fails with this problem.

Screenshot showing reflections which are skewed.


Screenshot 2, rotated ~270 degrees, no reflections

G-buffer vertex shader:

#version 450 core

layout (location = 0) in vec3 in_position;
layout (location = 1) in vec3 in_normal;
layout (location = 2) in vec2 in_uv;
layout (location = 3) in vec3 in_tangent;
layout (location = 4) in vec3 in_bitangent;

out vec3 worldPosition;
out vec3 viewNormal;
out vec3 viewPosition;
out vec2 texCoord;
out mat3 TBN;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform mat3 normal_matrix;

void CalculateTBN(mat4 modelViewMatrix, vec3 tangent, vec3 bitangent, vec3 normal) {
    TBN = mat3(
        normalize(vec3(modelViewMatrix * vec4(tangent, 0.0))),
        normalize(vec3(modelViewMatrix * vec4(bitangent, 0.0))),
        normalize(vec3(modelViewMatrix * vec4(normal, 0.0)))
    );    
}

void main()
{
    vec4 position = vec4(in_position, 1.0);
    vec3 normal = in_normal;

    mat4 modelViewMatrix = view * model;
    CalculateTBN(modelViewMatrix, in_tangent, in_bitangent, normal);

    worldPosition = vec3(model * position);
    viewNormal = normalize(vec3(modelViewMatrix * vec4(normal, 0.0))); // w = 0: transform the normal as a direction so translation is ignored

    viewPosition = vec3(view * vec4(worldPosition, 1.0));
    texCoord = in_uv;    
    gl_Position = projection * view * model * position; 
}

G-buffer fragment shader:

#version 450 core

layout (location = 0) out vec3 g_position;
layout (location = 1) out vec3 g_normal;
layout (location = 2) out vec4 g_albedo;
layout (location = 3) out vec3 g_metallness_roughness;
layout (location = 4) out vec4 g_emissive;
layout (location = 5) out float g_depth;

in vec3 worldPosition; // Position in world space
in vec3 viewNormal; // Normal in view space
in vec3 viewPosition; // Position in view space
in vec2 texCoord; 
in mat3 TBN; 

vec2 uv = texCoord;

struct Material {
    float shininess;
    vec3 diffuse_color;
    bool is_solid;

    bool has_specular;
    bool has_normal;
    bool has_emissive;
    bool has_ao;
    bool has_metallic;
    bool has_roughness;
}; 

uniform Material material;
uniform bool force_solid = false;
uniform vec3 force_color = vec3(0.);
uniform float emissive_pow = 1.0;
uniform bool flip_uv = false;
uniform float mesh_transparency = 1.0;
uniform vec3 tint = vec3(0.); 

layout (binding = 0) uniform sampler2D albedoMap;
layout (binding = 1) uniform sampler2D normalMap;
layout (binding = 2) uniform sampler2D metallicMap;
layout (binding = 3) uniform sampler2D roughnessMap;
layout (binding = 4) uniform sampler2D emissiveMap;

float get_metallic(vec2 uv) {
    if (material.has_metallic) return texture(metallicMap, uv).r;
    return 1.;
}

float get_roughness(vec2 uv) {
    if (material.has_roughness) return texture(roughnessMap, uv).r;
    return 1.;
}

vec3 get_emissive(vec2 uv) {
    if (material.has_emissive) return texture(emissiveMap, uv).rgb * emissive_pow;
    return vec3(0.);
}

vec2 get_uv() {
    if (flip_uv) return vec2(uv.x, 1. - uv.y);
    return uv;
}

void main()
{
    vec3 normal; // renamed so it does not shadow the "in" viewNormal (the old else branch was a self-assignment)
    bool use_sampler = material.has_normal;
    if (use_sampler) {
        normal = texture(normalMap, texCoord).rgb;
        normal = normalize(normal * 2.0 - 1.0);
        normal = normalize(TBN * normal);
    }
    else {
        normal = normalize(viewNormal);
    }

    g_position = viewPosition;
    g_normal = normal;
    g_albedo.rgb = texture(albedoMap, get_uv()).rgb;

    float spec = (g_albedo.r + g_albedo.g + g_albedo.b)/3.0;
    g_albedo.a = spec;
    g_metallness_roughness.r = get_metallic(get_uv());
    g_metallness_roughness.g = get_roughness(get_uv());
    g_emissive.rgb = get_emissive(get_uv());
    g_emissive.a = mesh_transparency;

    g_depth = gl_FragCoord.z; // g_depth is a scalar float; it cannot be swizzled with .r
}

The normal buffer is declared as RGB32F, same for the position buffer.

And the SSR shader is declared like this:
(based on http://imanolfotia.com/blog/update/2017/03/11/ScreenSpaceReflections.html)

#version 450 core

layout (location = 0) uniform sampler2D gAlbedo;
layout (location = 1) uniform sampler2D gPosition;
layout (location = 2) uniform sampler2D gNormal;
layout (location = 3) uniform sampler2D gMetallicRoughness;

out vec4 FragColor;

uniform mat4 invView;
uniform mat4 projection;
uniform mat4 invProjection;
uniform mat4 view;
uniform float near = 0.1;
uniform float far = 100.0;
uniform vec2 resolution = vec2(1440.0, 810.0);
uniform vec3 cameraPos;

float Near = near;
float Far = far;

in vec2 TexCoords;
vec2 TexCoord = TexCoords;
vec2 texCoord = TexCoords;

uniform int raymarch_iterations = 60;
uniform float raymarch_step_size = 0.25;
uniform float raymarch_min_steps = 0.1;
uniform int numBinarySearchSteps = 10;

uniform vec3 skyColor = vec3(0.0);
uniform int binarySearchCount = 20;
uniform float LLimiter = 0.9;

// SSR based on tutorial by Imanol Fotia
// http://imanolfotia.com/blog/update/2017/03/11/ScreenSpaceReflections.html
#define GetPosition(texCoord) texture(gPosition, texCoord).xyz

vec2 BinarySearch(inout vec3 dir, inout vec3 hitCoord, inout float dDepth) {
    float depth;

    vec4 projectedCoord;
 
    for (int i = 0; i < binarySearchCount; i++) {
        projectedCoord = projection * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
 
        depth = GetPosition(projectedCoord.xy).z;
 
        dDepth = hitCoord.z - depth;

        dir *= 0.5;

        if (dDepth > 0.0) {
            hitCoord += dir;
        } else {
            hitCoord -= dir;
        }
    }

    projectedCoord = projection * vec4(hitCoord, 1.0);
    projectedCoord.xy /= projectedCoord.w;
    projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
 
    return vec2(projectedCoord.xy);
}

vec2 RayCast(vec3 dir, inout vec3 hitCoord, out float dDepth) {
    dir *= raymarch_step_size;
    
    for (int i = 0; i < raymarch_iterations; i++) {
        hitCoord += dir;

        vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5; 

        float depth = GetPosition(projectedCoord.xy).z;

        dDepth = hitCoord.z - depth;

        if ((dir.z - dDepth) < 1.2 && dDepth <= 0.0) {
            return BinarySearch(dir, hitCoord, dDepth);
        }
    }

    return vec2(-1.0);
}

#define Scale vec3(.8, .8, .8)
#define k 19.19

vec3 Hash(vec3 a) {
    a = fract(a * Scale);
    a += dot(a, a.yxz + k);
    return fract((a.xxy + a.yxx)*a.zyx);
}

// source: https://www.standardabweichung.de/code/javascript/webgl-glsl-fresnel-schlick-approximation
#define fresnelExp 15.0

float Fresnel(vec3 direction, vec3 normal) {
    vec3 halfDirection = normalize(normal + direction);
    
    float cosine = dot(halfDirection, direction);
    float product = max(cosine, 0.0);
    float factor = 1.0 - pow(product, fresnelExp);
    
    return factor;
}

void main() {
    float reflectionStrength = 1. - texture(gMetallicRoughness, texCoord).r; // metallic in r component
    if (reflectionStrength == 0.0) {
        FragColor = vec4(0., 0., 0., 1.); 
        return;
    }

    vec3 normal = texture(gNormal, texCoord).xyz;
    vec3 viewPos = GetPosition(texCoord);

    vec3 worldPos = vec3(invView * vec4(viewPos, 1.0)); // column-vector convention: matrix on the left; reuse the invView uniform
    vec3 jitt = Hash(worldPos) * texture(gMetallicRoughness, texCoord).g; // roughness in g component

    vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));

    vec3 hitPos = viewPos;
    float dDepth; 
    vec2 coords = RayCast(jitt + reflected * max(-viewPos.z, raymarch_min_steps), hitPos, dDepth);

    float L = length(GetPosition(coords) - viewPos);
    L = clamp(L * LLimiter, 0, 1);
    float error = 1 - L;

    float fresnel = Fresnel(reflected, normal);
    
    vec3 color = texture(gAlbedo, coords.xy).rgb * error * fresnel;

    if (coords.xy != vec2(-1.0)) {
        vec3 res = mix(texture(gAlbedo, texCoord), vec4(color, 1.0), reflectionStrength).rgb;
        FragColor = vec4(res, 1.0);
        return;
    }
    
    vec3 rescol = mix(texture(gAlbedo, texCoord), vec4(skyColor, 1.0), reflectionStrength).rgb;
    FragColor = vec4(rescol, 1.0);
}

If you have faced the same situation, or can point me to an example that gives more information here, please let me know; I have fought with this one for over two weeks now. All help is highly appreciated!

Thanks in advance!

ux designer – Is it bad practice to position the breadcrumb on the right side of the page header?

Most breadcrumbs are on the left of the page because it’s simply not possible to know exactly how far the trail will extend to the right as the user moves through the site.

Because this layout is so common, most users will expect to be able to find the breadcrumb trail on the left of the page.

It is, of course, possible to place it anywhere you wish, provided that you know you have room for it. The one caveat is that, if you are going to place it on the right side of the page, it should always extend to the right:

level 1 >
level 1 > level 2 >
level 1 > level 2 > level 3 >

and not extend leftwards:

                    level 1 >
          level 1 > level 2 >
level 1 > level 2 > level 3 >

to prevent the user from becoming confused about which link they want to click on.

xml – Critical CSS file position

Is there a way to include inline styles right after the <title> tag, as shown in this image:

(screenshot: an inline <style> block placed directly after <title> in <head>)

If I edit the default_head_blocks.xml file, I can only get this to show after the merged CSS file, which defeats the purpose of using critical CSS. One possible way is to add this bit of code into default.xml, but then the inline CSS is rendered right at the top of <head>, before all the <meta> tags, so I’m not sure whether this is bad practice or not:

<referenceBlock name="require.js">
    <action method="setTemplate">
        <argument name="template" xsi:type="string">Magento_Theme::/header/inline-css.phtml</argument>
    </action>
</referenceBlock>

Is there a way to create a link to a particular position inside a Google Doc?


java – 2D Isometric Movement – Straight Line to Mouse Position

I’m developing a 2D isometric game on an engine I made myself (as a way to practice Java); however, I’m stuck on what seems to be a math problem, since I’m very bad at math. I can’t figure out how to get the angle and change the X and Y velocity based on that angle.

I would appreciate some help implementing a 360-degree movement system based on the mouse position: the character should move in a straight line, at the angle of the mouse relative to the center of the screen (where the player is). The screen itself should work like a compass, and wherever the mouse clicks, the character should walk at that angle.

For example, my native resolution is 512×288, so the center point is 256×144. If I click at X = 256 with Y between 0 and 144, the character should walk in a perfectly straight line upwards; if Y is between 145 and 288, a perfect line downwards.

Any help is welcome!

Edit: The camera is always fixed on the character, so the character always renders in the middle of the screen.
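
A minimal sketch of the usual approach (shown in Python for brevity; the names are illustrative): take the vector from the screen center to the click, normalize it, and scale it by the movement speed. atan2 gives the compass-style angle if it is needed explicitly.

import math

def velocity_towards(mouse_x, mouse_y, speed,
                     center_x=256.0, center_y=144.0):
    # Vector from the screen center (the player) to the mouse click.
    dx = mouse_x - center_x
    dy = mouse_y - center_y
    distance = math.hypot(dx, dy)
    if distance == 0.0:
        return 0.0, 0.0            # clicked exactly on the player
    # Normalizing keeps the speed constant in every direction.
    return speed * dx / distance, speed * dy / distance

# Clicking straight above the center moves the character straight up
# (screen Y usually grows downwards, hence the negative vy).
vx, vy = velocity_towards(256, 0, speed=2.0)
print(vx, vy)  # 0.0 -2.0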


unreal 4 – Get Skeletal Mesh Vertex Position in Niagara

I have a custom HLSL node in a simulation stage in Niagara. One of the inputs to this node is of type Skeletal Mesh, and I want to get the position and normal of a given vertex on that skeletal mesh by the vertex index.

In other words, I want to do the equivalent of the GetSkinnedVertexDataWS node, but in HLSL. Is this possible?

I couldn’t find any documentation on it. I tried this code:

bool myBool;
float3 VertexPosition;
ParticleMesh.GetSkinnedVertexDataWS<Attribute="Position">(i, myBool, VertexPosition);
float3 VertexNormal;
ParticleMesh.GetSkinnedVertexDataWS<Attribute="Normal">(i, myBool, VertexNormal);

where ‘i’ is the index of the vertex I want (from a loop), but I get the error:

“‘GetSkinnedVertexDataWS_SystemParticleMesh_AttributePosition’: cannot convert output parameter from ‘float3’ to ‘bool’ SPH_Emitter, Particle GPU Compute Script”.

I also tried a few variations to no avail.

How can I solve this?
