opengl – Incorrect generation of UV sphere mesh

I follow the same algorithm to create the mesh of a UV sphere as described in this wiki: http://wiki.unity3d.com/index.php/ProceduralPrimitives#C.23_-_Sphere

My implementation is in C++. I am not sure what is wrong with it; the indices seem to be off. Does anybody know what I'm doing wrong?

Rendering: [screenshot of the procedurally generated UV sphere]

Mesh generation code:

struct Sphere {
    float radius_; 
    math::Vec3f center_; 
}; 

struct Vertex {
    math::Vec3f position_;
    math::Vec3f normal_;
    math::Vec2f texture_coordinate_;
};

void MeshGenerator::Generate(const math::Sphere& sphere,
                             std::vector<Vertex>& vertices,
                             std::vector<uint32_t>& indices) {

    constexpr uint32_t latitude_count = 16;
    constexpr uint32_t longitude_count = 24;

    vertices.clear();
    indices.clear();

    const uint32_t vertex_count = (longitude_count + 1) * latitude_count + 2;
    vertices.resize(vertex_count);

    // Generate Vertices
    vertices[0].normal_ = math::Vec3f::Up();
    vertices[0].position_ = (vertices[0].normal_ * sphere.radius_) + sphere.center_;
    vertices(0).texture_coordinate_ = math::Vec2f(0.0F, 1.0F);
    for(uint32_t lat = 0; lat < latitude_count; ++lat) {
        float a1 = math::kPi * static_cast<float>(lat + 1) / (latitude_count + 1);
        float sin1 = math::Sin(a1);
        float cos1 = math::Cos(a1);
        for(uint32_t lon = 0; lon <= longitude_count; ++lon) {
            float a2 = math::kTwoPi * static_cast<float>(lon == longitude_count ? 0 : lon) / longitude_count;
            float sin2 = math::Sin(a2);
            float cos2 = math::Cos(a2);
            Vertex vertex{};
            vertex.normal_.x_ = sin1 * cos2;
            vertex.normal_.y_ = cos1;
            vertex.normal_.z_ = sin1 * sin2;
            vertex.position_ = (vertex.normal_ * sphere.radius_) + sphere.center_;
            vertex.texture_coordinate_.x_ = static_cast<float>(lon) / longitude_count;
            vertex.texture_coordinate_.y_ = static_cast<float>(lat) / latitude_count;
            vertices[lon + lat * (longitude_count + 1) + 1] = vertex;
        }
    }
    vertices[vertex_count - 1].normal_ = math::Vec3f::Down();
    vertices[vertex_count - 1].position_ = (vertices[vertex_count - 1].normal_ * sphere.radius_) + sphere.center_;
    vertices[vertex_count - 1].texture_coordinate_ = math::Vec2f::Zero();

    // Generate Indices
    // Top
    for (uint32_t lon = 0; lon < longitude_count; ++lon) {
        indices.push_back(lon + 2);
        indices.push_back(lon + 1);
        indices.push_back(0);
    }

    // Middle
    for(uint32_t lat = 0; lat < latitude_count - 1; ++lat) {
        for(uint32_t lon = 0; lon < longitude_count; ++lon) {
            const uint32_t current = lon + lat * (longitude_count + 1) + 1;
            const uint32_t next = current + longitude_count + 1;

            indices.push_back(current);
            indices.push_back(current + 1);
            indices.push_back(next + 1);

            indices.push_back(current);
            indices.push_back(next + 1);
            indices.push_back(next);
        }
    }

    // Bottom
    for (uint32_t lon = 0; lon < longitude_count; ++lon) {
        indices.push_back(vertex_count - 1);
        indices.push_back(vertex_count - (lon + 2) - 1);
        indices.push_back(vertex_count - (lon + 1) - 1);
    }
}

The following is the OpenGL mesh rendering code. I left out the shader program and setup code.

... set view port ... 
... clear color/depth/stencil buffers ... 
... create/use shader program and set uniforms ... 
glGenVertexArrays(1, &vao_);
glGenBuffers(1, &vbo_);
glGenBuffers(1, &ebo_);

glBindVertexArray(vao_);

// Set vertex data
glBindBuffer(GL_ARRAY_BUFFER, vbo_);
glBufferData(GL_ARRAY_BUFFER,
             vertices_.size() * sizeof(decltype(vertices_)::value_type),
             vertices_.data(),
             GL_STATIC_DRAW);

// Set index data
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo_);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             indices_.size() * sizeof(decltype(indices_)::value_type),
             indices_.data(),
             GL_STATIC_DRAW);

// Set vertex attribute pointers
// Position
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position_));
// Normal
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal_));
// Texture Coordinate
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texture_coordinate_));

glDrawArrays(GL_TRIANGLES, 0, indices_.size());

Passwords – Do additional rules for the generation scheme of my diceware list reduce security?

I have read Simon Singh's The Code Book and am interested in playing with some of the ideas in it to improve my own understanding. I do not intend to use any of the following in a real-world setting; I am only interested in examining the security implications.

I want to create alternative diceware lists with quirks, e.g. lists where every word is typed with the left hand only, or where keystrokes alternate between hands. Suppose I can generate 7776 different strings and follow all the other diceware guidelines. Are all diceware lists equally secure?

In the German Enigma machine, no letter could be encrypted as itself (e.g. an 'a' could never be encoded as an 'a'). This detail helped crack the code. However, I do not think anything like that applies here, since the strength of the passphrase does not depend on encryption. I don't understand why 6 or 7 strings randomly selected from a list of 7776 wouldn't have the same entropy regardless of the list. In theory, the list could even consist of 7776 distinct binary strings, right?
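To make that reasoning concrete, this is the back-of-the-envelope calculation I have in mind (just a sketch, assuming each word is picked uniformly and independently from the list):

import math

list_size = 7776        # 6^5 entries, the standard diceware list size
words = 6               # words per passphrase

# Entropy depends only on the number of equally likely choices,
# not on what the individual entries look like.
bits = words * math.log2(list_size)
print(round(bits, 1))   # ~77.5 bits for 6 words; 7 words give ~90.4

Nothing in that calculation looks at what the entries actually are, which is why I suspect the quirks don't matter as long as there really are 7776 distinct entries.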

I understand that additional password generation rules sometimes reduce security. If an attacker knows my diceware list, does it matter whether each entry consists of only 15 unique left-hand characters? Is there less entropy?

Plotting – Automatic generation of file names from column values and automatic PlotLegends labels from file names?

I have a two-part question about automatically generating file names and plot legend labels.

Part 1:
I usually work with very large data matrices with columns of the form:

MASTERdataset: {a-parameter, b-parameter, c-parameter, x-Var, y-Var}

An example record is below:

masterDATA = {{1200, 700, 150, 285.29323135045837`, 
    124.81439541987501`}, {1200, 700, 150, 286.60945594708426`, 
    126.30947680625`}, {1200, 700, 150, 287.92561104172626`, 
    127.73505620875001`}, {1200, 700, 150, 289.24169417515805`, 
    129.08867440106252`}, {1200, 700, 150, 290.5577029045866`, 
    130.367888590125`}, {1200, 700, 150, 291.8736347915017`, 
    131.5702602656875`}, {1200, 700, 150, 293.1894874091171`, 
    132.69336264156252`}, {1200, 700, 150, 294.5052583400343`, 
    133.7347783193125`}, {1200, 700, 150, 295.8209451736646`, 
    134.692096710125`}, {1200, 700, 150, 297.1365455140142`, 
    135.5629218201875`}, {1200, 700, 150, 298.4520569668892`, 
    136.3448594556875`}, {1200, 700, 150, 299.7674771578958`, 
    137.0355352228125`}, {1200, 700, 150, 301.0828037094413`, 
    137.63257152881252`}, {1200, 700, 150, 302.39803426883464`, 
    138.13361568262502`}, {1200, 700, 150, 303.7131664753014`, 
    138.5363069099375`}, {1200, 700, 150, 305.02819799763193`, 
    138.83831400093752`}, {1200, 700, 150, 306.34312649281645`, 
    139.0372939459375`}, {1200, 700, 150, 307.65794964948805`, 
    139.1309353780625`}, {1200, 700, 150, 308.9726651454576`, 
    139.116916108125`}, {1100, 700, 150, 285.20258900103653`, 
    136.52080780656252`}, {1100, 700, 150, 286.51872642831677`, 
    138.33050686200002`}, {1100, 700, 150, 287.8347978496311`, 
    140.07419995162502`}, {1100, 700, 150, 289.15080077907106`, 
    141.7494011668125`}, {1100, 700, 150, 290.4667327507974`, 
    143.3536446684375`}, {1100, 700, 150, 291.7825912998558`, 
    144.8844655020625`}, {1100, 700, 150, 293.0983739766935`, 
    146.339414115`}, {1100, 700, 150, 294.41407833772337`, 
    147.716046920125`}, {1100, 700, 150, 295.729701949836`, 
    149.01193080800002`}, {1100, 700, 150, 297.0452423911218`, 
    150.2246438690625`}, {1100, 700, 150, 298.36069724507706`, 
    151.35176959956252`}, {1100, 700, 150, 299.6760641116824`, 
    152.3909079801875`}, {1100, 700, 150, 300.99134059121155`, 
    153.33965928456252`}, {1100, 700, 150, 302.30652430565584`, 
    154.195645504125`}, {1100, 700, 150, 303.62161287225007`, 
    154.95648387356252`}, {1100, 700, 150, 304.93660393503`, 
    155.61981842875002`}, {1100, 700, 150, 306.2514951283887`, 
    156.1832835626875`}, {1100, 700, 150, 307.5662841177902`, 
    156.64454473943752`}, {1100, 700, 150, 308.8809685567008`, 
    157.0012554253125`}, {1200, 650, 140, 294.568511670944`, 
    152.44652887431252`}, {1200, 650, 140, 295.5572385555695`, 
    153.244130479`}, {1200, 650, 140, 296.5459073608416`, 
    153.983652730375`}, {1200, 650, 140, 297.5345169136301`, 
    154.663922498125`}, {1200, 650, 140, 298.52306602239275`, 
    155.28374823993752`}, {1200, 650, 140, 299.5115535361726`, 
    155.84197899893752`}, {1200, 650, 140, 300.49997828288434`, 
    156.33744269000002`}, {1200, 650, 140, 301.48833910612274`, 
    156.768982907625`}, {1200, 650, 140, 302.4766348644878`, 
    157.135458251875`}, {1200, 650, 140, 303.46486439545527`, 
    157.43570619850001`}, {1200, 650, 140, 304.4530265778131`, 
    157.668605535625`}, {1200, 650, 140, 305.44112027103967`, 
    157.83301574143752`}, {1200, 650, 140, 306.4291443464566`, 
    157.92780813756252`}, {1200, 650, 140, 307.41709769382004`, 
    157.95187248018752`}, {1200, 650, 140, 308.4049791815327`, 
    157.904077172125`}, {1200, 650, 140, 309.39278771614994`, 
    157.7833287685625`}, {1200, 650, 140, 310.3805221893237`, 
    157.5885189215`}, {1200, 650, 140, 311.3681814999415`, 
    157.3185465185`}, {1200, 650, 140, 312.3557645686668`, 
    156.97233222306252`}, {1200, 650, 140, 313.34327029479664`, 
    156.54877533212502`}, {1200, 650, 140, 314.33069761222396`, 
    156.046809738625`}, {1200, 650, 140, 315.3180454445702`, 
    155.4653590640625`}, {1200, 650, 140, 316.3053127179819`, 
    154.80334945506252`}, {1200, 650, 140, 317.29249838358635`, 
    154.05973203875`}, {1150, 600, 140, 298.23873411795546`, 
    164.05272120981252`}, {1150, 600, 140, 299.2273312653466`, 
    164.975212389875`}, {1150, 600, 140, 300.2158717606931`, 
    165.8410515253125`}, {1150, 600, 140, 301.20435441922945`, 
    166.64905385056252`}, {1150, 600, 140, 302.1927780834343`, 
    167.3980618443125`}, {1150, 600, 140, 303.18114157670266`, 
    168.0868989015625`}, {1150, 600, 140, 304.169443745497`, 
    168.7144114848125`}, {1150, 600, 140, 305.1576834377865`, 
    169.27944756325002`}, {1150, 600, 140, 306.14585949313545`, 
    169.7808467010625`}, {1150, 600, 140, 307.13397078169055`, 
    170.2174790450625`}, {1150, 600, 140, 308.1220161542818`, 
    170.58819542525`}, {1150, 600, 140, 309.1099944821404`, 
    170.89186707275002`}, {1150, 600, 140, 310.0979046420354`, 
    171.1273707566875`}, {1150, 600, 140, 311.08574549827347`, 
    171.293570783625`}, {1150, 600, 140, 312.0735159488727`, 
    171.38936517175`}, {1150, 600, 140, 313.06121487255257`, 
    171.4136326405625`}, {1150, 600, 140, 314.0488411653967`, 
    171.36526927356252`}, {1150, 600, 140, 315.03639373319686`, 
    171.24318086262502`}, {1150, 600, 140, 316.0238714652131`, 
    171.04625666781251`}, {1150, 600, 140, 317.011273287306`, 
    170.77342254962502`}, {1150, 600, 140, 317.99859810628294`, 
    170.42358531543752`}, {1150, 600, 140, 318.98584484300574`, 
    169.995665827125`}, {1150, 600, 140, 319.9730124318714`, 
    169.48859848168752`}, {1150, 600, 140, 320.96009978832717`, 
    168.90129872643752`}};

I partition the matrix into submatrices that correspond to the values of the parameters (a, b, c) with:

SELECTfxn[data_, a_, b_, c_] := Select[data, #[[1 ;; 3]] == {a, b, c} &]

To plot the data or perform other manipulations, I need to drop the first three columns and assign a name built from the (a, b, c) parameters so that I can keep track of which dataset is which later. So far I have done this "by hand" with a lot of strategic copy/paste and find/replace. I have named them according to the following scheme:

a1200b700c150 = SELECTfxn[masterDATA, 1200, 700, 150][[All, {4, 5}]];
a1100b700c150 = SELECTfxn[masterDATA, 1100, 700, 150][[All, {4, 5}]];
a1200b650c140 = SELECTfxn[masterDATA, 1200, 650, 140][[All, {4, 5}]];
a1150b600c140 = SELECTfxn[masterDATA, 1150, 600, 140][[All, {4, 5}]];

That costs a lot of time and is prone to errors. I would like an operation that does the following:

AUTOfileNAMES[data_] := "run SELECTfxn[data, a, b, c] for every unique combination of (a, b, c) and assign names based on those (a, b, c) values, similar to the 'by hand' example above"

I am sure this is possible, but I cannot figure it out. Can someone help?

Part 2:
The second part of my question is how this information can be integrated into PlotLegends.

I plot the data in different groupings and look for patterns. Doing this by hand takes forever, and I often make mistakes in the PlotLegends labels. Is there a way to automatically assign legend labels of the following form:

a = 1200, b = 700, c = 150

where the values (1200, 700, 150) come from names of the form file = a1200b700c150?

Here is an example of what the end product should look like:

fontsize = 16;
SAMPLEdataset = {a1200b700c150, a1100b700c150, a1200b650c140, 
   a1150b600c140};
ListPlot[SAMPLEdataset, PlotStyle -> PointSize[Large], Frame -> True,
 Axes -> False, FrameLabel -> {"x", "y"},
 PlotLegends ->
  Placed[LineLegend[{"a=1200, b=700, c=150", "a=1100, b=700, c=150",
     "a=1200, b=650, c=140",
     "a=1150, b=600, c=140"}, (*LegendFunction -> "Frame",*)
    LegendMarkerSize -> 20, LabelStyle -> Directive[Bold, fontsize],
    LegendLayout -> "Column"], Right]]

Has anyone done this before?

Thanks a lot!

procedural generation – How is more uniform Perlin noise generated?

So I've been learning procedural generation lately (especially Perlin noise) and have somehow hit a wall. This is the result everyone else seems to get:
[image: smooth Perlin noise example]
and this is what I get instead:
[image: my noisy output]

This is my code (I'm using Python):

import random
import math
import PIL.Image

class Noise:

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def lerp(self, start, stop, t):
        return start*(1-t) + (stop*t)

    def smoothstep(self, t):
        return 3*(t**2) - 2*(t**3)

    def gradients(self):
        grads = []
        for _ in range(4):
            x = random.randrange(0, 361)
            y = random.randrange(0, 361)
            grads.append((math.cos(x), math.sin(y)))
        return grads

    def cvectors(self):
        botleft = (self.x, self.y)
        botright = (self.x-1, self.y)
        topleft = (self.x, self.y-1)
        topright = (self.x-1, self.y-1)
        cvecs = (botleft, botright, topleft, topright)
        return cvecs

    def makenoise(self):
        dot = []
        grads = self.gradients()
        cvecs = self.cvectors()
        for i in range(4):
            (x, y) = cvecs[i]
            (a, b) = grads[i]
            dot.append((x*a)+(y*b))
        st = self.lerp(dot[0], dot[1], self.smoothstep(x))
        uv = self.lerp(dot[2], dot[3], self.smoothstep(x))
        stuv = self.lerp(st, uv, self.smoothstep(self.y))
        return stuv

size = 128
img = PIL.Image.new('L', (size, size))
for x in range(size):
    for y in range(size):
        perlin = Noise(random.uniform(0, 1), random.uniform(0, 1))
        img.putpixel(tuple((x, y)), perlin.makenoise())
img.save("nois.png")

This is just a guess, but I think it's a scaling problem. Thanks in advance, and apologies for the terrible code.
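For reference, this is my understanding of how the gradient setup usually works: one fixed random gradient per integer lattice corner, shared by every sample that falls into that cell, with the sample position scaled so that several pixels land in each cell. A minimal sketch of that idea (perlin2d, cells and the file name are just my own placeholder names):

import math
import random
import PIL.Image

def fade(t):
    # same smoothstep curve as above
    return 3 * t * t - 2 * t * t * t

def lerp(a, b, t):
    return a * (1 - t) + b * t

def perlin2d(x, y, grads):
    # corners of the lattice cell containing (x, y)
    x0, y0 = math.floor(x), math.floor(y)
    x1, y1 = x0 + 1, y0 + 1

    def corner(ix, iy):
        gx, gy = grads[(ix, iy)]
        # dot product of the corner gradient with the offset to (x, y)
        return gx * (x - ix) + gy * (y - iy)

    u, v = fade(x - x0), fade(y - y0)
    top = lerp(corner(x0, y0), corner(x1, y0), u)
    bottom = lerp(corner(x0, y1), corner(x1, y1), u)
    return lerp(top, bottom, v)          # roughly in [-1, 1]

cells, size = 8, 128

# One random unit gradient per lattice corner, fixed for the whole image.
grads = {}
for ix in range(cells + 1):
    for iy in range(cells + 1):
        angle = random.uniform(0, 2 * math.pi)
        grads[(ix, iy)] = (math.cos(angle), math.sin(angle))

img = PIL.Image.new('L', (size, size))
for px in range(size):
    for py in range(size):
        value = perlin2d(px / size * cells, py / size * cells, grads)
        img.putpixel((px, py), int((value + 1) * 127.5))  # map [-1, 1] to [0, 255]
img.save("noise_sketch.png")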

Code generation – would C be a good compiler backend?

In this and this Stack Overflow question, the answers indicate that C is a bad idea as a compiler backend.

But why?

C has many compilers that optimize it heavily. Practically every platform has a C compiler, so C code can be compiled for virtually any existing architecture. Languages such as Nim and V also compile by generating C code.

So I don't understand why C would be a bad idea at all. In my opinion, it seems like a pretty good choice.

Geometry – Tilemap collision generation – tracing polygons with slopes

I use Godot for my game, but it has unfortunate quirks with tilemap collisions: physics objects bounce near tile seams and kinematic bodies often get stuck in them. For early alpha work, I could avoid this by not using tilemap collisions and tracing the contours by hand … well, that doesn't scale to 200 rooms (or however many I end up with) and easily leads to mismatched collision data, so I decided this process needs to be automated.

My tilemaps contain several different tile types: filled, diagonal, and 2:1 slopes.

My first attempt rasterized each of these tile types at half-tile resolution, marking each vertex as solid or not solid, and then merged all the vertices (with their overlaps) into a single vertex grid. I could then trace the whole thing by walking along the perimeter and probing ahead to detect corners. It turned out that this works well for convex corners, but concave corners cause serious problems. There also isn't enough data to properly recognize slopes, so the result often had extra bits sticking out or slopes that were slightly cut off.
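To make that first attempt concrete, here is a stripped-down sketch of just the contour-chaining step for fully filled tiles (no half-tile sampling, no slopes); boundary_loops and the grid layout are my own illustration, not Godot API:

def boundary_loops(solid):
    # solid[y][x] is True for a filled cell; returns closed vertex loops
    # traced so the solid area stays on one consistent side.
    h, w = len(solid), len(solid[0])

    def filled(x, y):
        return 0 <= x < w and 0 <= y < h and solid[y][x]

    # One directed edge per exposed cell side, keyed by its start vertex.
    # Note: a checkerboard corner (two solid cells touching only diagonally)
    # puts two edges on the same start vertex and needs a tie-break;
    # this sketch ignores that case.
    edges = {}
    for y in range(h):
        for x in range(w):
            if not filled(x, y):
                continue
            if not filled(x, y - 1):
                edges[(x, y)] = (x + 1, y)            # exposed top side
            if not filled(x + 1, y):
                edges[(x + 1, y)] = (x + 1, y + 1)    # exposed right side
            if not filled(x, y + 1):
                edges[(x + 1, y + 1)] = (x, y + 1)    # exposed bottom side
            if not filled(x - 1, y):
                edges[(x, y + 1)] = (x, y)            # exposed left side

    # Chain the edges head-to-tail into closed loops.
    loops = []
    while edges:
        start = next(iter(edges))
        loop = [start]
        current = edges.pop(start)
        while current != start:
            loop.append(current)
            current = edges.pop(current)
        loops.append(loop)
    return loops

# Example: a 2x2 solid block yields one square loop of 8 vertices.
print(boundary_loops([[True, True], [True, True]]))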

After that, I decided to attach some additional information to the vertex grid, for example whether a vertex lies on a slope edge, and to do clever things with how overlapping vertices combine, but the number of special cases explodes quickly.

Today I've been thinking a lot about adding vertex normals to every tile definition, but I realized this wouldn't work without also adding a number of different tile solidity types.

Are there better approaches than what I have in mind, ones that don't rely on brittle/lossy intermediate representations or ridiculous numbers of special cases?

Network – 18.04.3 LTS installation media do not recognize network controllers on a 10th generation NUC

tl;dr: If the network controllers are not recognized by the installation media when installing on a recently released computer, use an old USB-to-Ethernet adapter instead.

I tried to install 18.04.3 LTS on a NUC10i7FNK, but the kernel on the installation media was probably too old to recognize either the Ethernet or the WLAN controller.

Instead, I plugged in a USB-to-Ethernet dongle (all I had was an Apple USB2-to-100BaseT dongle, and it worked; I assume others would too). The computer was able to connect to the network and fetch updates during the installation.

I rebooted, ran apt update; apt upgrade, rebooted again, and both the wired and wireless controllers were recognized! (They may have been recognized even before the upgrade, but I am not going to reinstall just to check :))

Obviously this is not NUC-specific; it could happen with any sufficiently new machine.

I hope that saves someone some grief!