spring boot – managing frontend and backend parts with Jenkins

I have a two-part application built with Spring Boot and Angular, and I want to deploy it using continuous integration with Jenkins;
in other words, I want to make this application available to front-end clients. This is the DevOps part of my graduation project, and my question is how to deploy the two parts and make them communicate with each other through an application server, for example. I have microservices in the backend (Eureka, Zuul, Ribbon, …).
Tools: Git, Jenkins, GitLab, …
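For context, a common shape for this (a sketch, not the only way) is a Jenkins pipeline that builds both parts and ships the artifacts, with the Angular build served as static files and browser traffic reaching the Spring Boot services through the Zuul gateway's public URL. A minimal declarative Jenkinsfile follows; the folder names, build commands, and deploy step are placeholders to adapt:

pipeline {
    agent any
    stages {
        stage('Build backend') {
            // builds the Spring Boot services (Eureka, Zuul, business services)
            steps { sh 'mvn -f backend/pom.xml clean package' }
        }
        stage('Build frontend') {
            // produces the static Angular bundle in frontend/dist/
            steps { sh 'cd frontend && npm ci && npm run build' }
        }
        stage('Deploy') {
            // e.g. copy the jars and the dist/ folder to the target server,
            // or build and push Docker images here
            steps { sh './deploy.sh' }
        }
    }
}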

Payload Encryption Design for a Web Application with a Spring Boot Backend

We are working on designing a solution for a financial institution as a product, which comprises different application channels, e.g. Android and a web portal.

  • For the web portal, we thought of going with a Spring Security authorization server (OAuth2 + OpenID Connect),
    and there are use cases: SSO inbound, SSO outbound, and SSO between the product's internal applications.

With all this in mind:

  1. Keycloak was finalized for OIDC and OAuth2 + SSO integration.

Question

  • Let's say the web application (Angular + HTML5 + CSS) interacts with the backend over REST / plain web services in JSON.
  • The security team recommended message-level security: whatever is sent from browser to server, and from server back to browser, should be encrypted (TLS alone is not enough).
  • The security team tested with Burp Suite and indicated that private fields travel in the clear and are visible, so encryption is required for the stateless services.

How can this be achieved for the web portal and the Android application in general?
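For reference, one common pattern for this requirement (not the only one) is hybrid encryption of the payload, which is essentially what JWE (RFC 7516) standardizes: the client encrypts the JSON body with a fresh AES key and encrypts that AES key with the server's public key. Below is a browser-side sketch using the standard Web Crypto API; the function and field names are illustrative only, and a real design should use a vetted JOSE library rather than hand-rolled crypto.

// Hybrid-encrypt a JSON payload in the browser (illustrative sketch).
// Assumes serverPublicKey was imported earlier with crypto.subtle.importKey
// from a public key the backend publishes.
async function encryptPayload(payload: object, serverPublicKey: CryptoKey) {
  const data = new TextEncoder().encode(JSON.stringify(payload));

  // 1. Fresh AES-GCM key and IV for this request
  const aesKey = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 }, true, ["encrypt"]
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));

  // 2. Encrypt the body with the AES key
  const body = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, aesKey, data);

  // 3. Encrypt the AES key with the server's RSA-OAEP public key
  const rawKey = await crypto.subtle.exportKey("raw", aesKey);
  const wrappedKey = await crypto.subtle.encrypt(
    { name: "RSA-OAEP" }, serverPublicKey, rawKey
  );

  // Send all three parts; the server unwraps the key with its private key,
  // then decrypts the body.
  const b64 = (buf: ArrayBuffer) => btoa(String.fromCharCode(...new Uint8Array(buf)));
  return { key: b64(wrappedKey), iv: b64(iv.buffer), body: b64(body) };
}

The same hybrid scheme applies to the Android channel with the platform's crypto APIs, and the server reverses the steps before handing cleartext JSON to the service layer. Note that this protects field confidentiality in transit beyond TLS, but keys held in the browser remain reachable by whoever controls the client.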

magento2 – Domain level admin Permission In Magento 2 Backend

I have had experience with Magento for a long time. When creating multi-domain sites, I always thought Magento 2 had, by default, the option to have a specific admin for each domain (that's what I sold). I'm working on 2.3.5 and I don't see it anywhere. Am I missing something? Is it hidden somewhere? Maybe 2.4.1 has it? If it doesn't, please advise whether there is an extension for this already developed.

Thanks

Rust blog backend in Rocket and Diesel, lots of clones

I have a small blog backend in Rust with Rocket and Diesel on Postgres. It reads from and writes to the database fine, but to get it to where it is I used a lot of clone() calls on strings. I feel like it could be written more efficiently with lifetimes, but my grasp of lifetimes in Rust is quite tenuous and I get a lot of compiler complaints.

The blog is divided into collections called ‘atlases’ with sub pages in each atlas. Each page has properties including ancestors, content blocks, and user IDs and roles. Each request comes with a cookie containing a session ID. To get a page the program receives a POST request to an endpoint containing an atlas ID and a page ID. It uses the cookie to authenticate the request, then queries the database through a Diesel schema to get the content blocks, ancestors, and other bits and bobs belonging to the page, and returns them as JSON data.

The main.rs file is generally standard Rocket boilerplate. The relevant bits for the POST handler to get a page from an atlas:

mod atlas;
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate rocket_contrib;
#[macro_use]
extern crate diesel;

use response::{ResponseJson, ResponsePublicMedia, ResponseStaticFile, STATIC_FILES};
use rocket::http::CookieJar;
use rocket::Data;

pub mod schema;

#[database("postgres_conn")]
pub struct Db(diesel::PgConnection);

... // various endpoints and handlers

#[post("/endpoint/atlas.page.get/<atlas_id>/<page_id>")]
async fn tel_atlas_page_get(
    conn: Db,
    cookies: &CookieJar<'_>,
    atlas_id: String,
    page_id: String,
) -> ResponseJson {
    atlas::page_get(conn, cookies, atlas_id, page_id).await
}

...

#[launch]
fn rocket() -> rocket::Rocket {
    rocket::ignite()
        .mount(
            "/",
            routes![
                ...
                tel_atlas_page_get,
                ...
            ],
        )
        .attach(Db::fairing())
}

The handler internals file:

use crate::atlas;
use crate::response::ResponseJson;
use crate::users::auth;
use crate::utilities::ERR;
use crate::Db;

use rocket::http::CookieJar;
use rocket_contrib::databases::diesel::prelude::*;

pub async fn page_get(
    conn: Db,
    cookies: &CookieJar<'_>,
    atlas_id: String,
    page_id: String,
) -> ResponseJson {
    // Page data
    let a_clone_page = atlas_id.clone();
    let p_clone_page = page_id.clone();
    let page = match conn.run(|c| get_page(c, a_clone_page, p_clone_page)).await {
        Some(p) => p,
        None => return ResponseJson::message("page.404", ""),
    };

    // User ID
    let user_id = match cookies.get("token") {
        Some(cookie) => match auth::check(cookie).await {
            Some((_email, user_id, _name, _token)) => Some(user_id),
            None => None,
        },
        None => None,
    };

    // Atlas role
    let a_clone_role = atlas_id.clone();
    let role = match user_id.clone() {
        Some(u) => conn.run(|c| get_atlas_role(c, u, a_clone_role)).await,
        None => "none".to_owned(),
    };

    // Page users
    let a_clone_users = atlas_id.clone();
    let p_clone_users = page_id.clone();
    let users = match user_id.clone() {
        Some(u) => {
            conn.run(|c| get_users(c, u, a_clone_users, p_clone_users))
                .await
        }
        None => Vec::new(),
    };

    if page.public || role != "none" || (user_id.is_some() && !users.is_empty()) {
        // Page ancestors
        let a_clone_ancestors = atlas_id.clone();
        let p_clone_ancestors = page_id.clone();
        let ancestors = conn
            .run(|c| get_ancestors(c, a_clone_ancestors, p_clone_ancestors))
            .await;

        // Page children
        let a_clone_children = atlas_id.clone();
        let p_clone_children = page_id.clone();
        let children = conn
            .run(|c| get_children(c, a_clone_children, p_clone_children))
            .await;

        // Page blocks
        let timestamp_clone = page.timestamp.clone();
        let blocks = conn
            .run(|c| get_blocks(c, atlas_id, page_id, timestamp_clone))
            .await;

        // JSON encode
        let page_encoded = match serde_json::to_string(&page) {
            Ok(e) => e,
            Err(_) => return ResponseJson::error(ERR.generic),
        };
        let users_encoded = match serde_json::to_string(&users) {
            Ok(e) => e,
            Err(_) => return ResponseJson::error(ERR.generic),
        };
        let ancestors_encoded = match serde_json::to_string(&ancestors) {
            Ok(e) => e,
            Err(_) => return ResponseJson::error(ERR.generic),
        };
        let children_encoded = match serde_json::to_string(&children) {
            Ok(e) => e,
            Err(_) => return ResponseJson::error(ERR.generic),
        };
        let blocks_encoded = match serde_json::to_string(&blocks) {
            Ok(e) => e,
            Err(_) => return ResponseJson::error(ERR.generic),
        };
        ResponseJson::message(
            "atlas.page",
            &format!(
                "{{"page":{},"atlas_role":"{}","users":{},"ancestors":{},"blocks":{},"children":{}}}",
                page_encoded, role, users_encoded, ancestors_encoded, blocks_encoded, children_encoded
            ),
        )
    } else {
        ResponseJson::message("auth.forbidden", "")
    }
}

fn get_page(
    conn: &mut diesel::PgConnection,
    atlas_id: String,
    page_id: String,
) -> Option<atlas::Page> {
    use crate::schema::atlas_pages;

    // "maps" and "locations" have no rows of their own; they are derived
    // from the atlas "index" row.
    if &page_id == "maps" || &page_id == "locations" {
        match atlas_pages::table
            .select((
                atlas_pages::page_id,
                atlas_pages::atlas_id,
                atlas_pages::title,
                atlas_pages::image,
                atlas_pages::public,
                atlas_pages::parent,
                atlas_pages::timestamp,
            ))
            .filter(atlas_pages::atlas_id.eq(&atlas_id))
            .filter(atlas_pages::page_id.eq("index"))
            .first::<atlas::Page>(conn)
        {
            Ok(a) => Some(atlas::Page {
                page_id,
                atlas_id,
                title: a.title,
                image: a.image,
                public: a.public,
                parent: "index".to_owned(),
                timestamp: a.timestamp,
            }),
            Err(_) => None,
        }
    } else {
        match atlas_pages::table
            .select((
                atlas_pages::page_id,
                atlas_pages::atlas_id,
                atlas_pages::title,
                atlas_pages::image,
                atlas_pages::public,
                atlas_pages::parent,
                atlas_pages::timestamp,
            ))
            .filter(atlas_pages::atlas_id.eq(&atlas_id))
            .filter(atlas_pages::page_id.eq(&page_id))
            .first::<atlas::Page>(conn)
        {
            Ok(a) => Some(a),
            Err(_) => None,
        }
    }
}

fn get_users(
    conn: &mut diesel::PgConnection,
    user_id: String,
    atlas_id: String,
    page_id: String,
) -> Vec<atlas::AtlasUser> {
    use crate::schema::page_users;

    let user = match page_users::table
        .select((page_users::user_id, page_users::user_type))
        .filter(page_users::user_id.eq(&user_id))
        .filter(page_users::atlas_id.eq(&atlas_id))
        .filter(page_users::page_id.eq(&page_id))
        .first::<atlas::AtlasUser>(conn)
    {
        Ok(a) => a,
        Err(_) => return Vec::new(),
    };

    if user.user_type == "owner" {
        match page_users::table
            .select((page_users::user_id, page_users::user_type))
            .filter(page_users::atlas_id.eq(&atlas_id))
            .filter(page_users::page_id.eq(&page_id))
            .load::<atlas::AtlasUser>(conn)
        {
            Ok(a) => a,
            Err(_) => Vec::new(),
        }
    } else {
        vec![user]
    }
}

fn get_atlas_role(conn: &mut diesel::PgConnection, user_id: String, atlas_id: String) -> String {
    use crate::schema::page_users;

    match page_users::table
        .select((page_users::user_id, page_users::user_type))
        .filter(page_users::user_id.eq(&user_id))
        .filter(page_users::atlas_id.eq(&atlas_id))
        .filter(page_users::page_id.eq("index"))
        .first::<atlas::AtlasUser>(conn)
    {
        Ok(a) => a.user_type,
        Err(_) => "none".to_owned(),
    }
}

fn get_ancestors(
    conn: &mut diesel::PgConnection,
    atlas_id: String,
    page_id: String,
) -> Vec<atlas::Ancestors> {
    use crate::schema::page_ancestors;

    match page_ancestors::table
        .select((
            page_ancestors::page_id,
            page_ancestors::atlas_id,
            page_ancestors::title,
            page_ancestors::index,
            page_ancestors::ancestor,
        ))
        .filter(page_ancestors::atlas_id.eq(&atlas_id))
        .filter(page_ancestors::page_id.eq(&page_id))
        .order(page_ancestors::index.asc())
        .load::<atlas::Ancestors>(conn)
    {
        Ok(a) => a,
        Err(_) => Vec::new(),
    }
}

fn get_children(
    conn: &mut diesel::PgConnection,
    atlas_id: String,
    page_id: String,
) -> Vec<atlas::Page> {
    use crate::schema::atlas_pages;

    match atlas_pages::table
        .select((
            atlas_pages::page_id,
            atlas_pages::atlas_id,
            atlas_pages::title,
            atlas_pages::image,
            atlas_pages::public,
            atlas_pages::parent,
            atlas_pages::timestamp,
        ))
        .filter(atlas_pages::atlas_id.eq(&atlas_id))
        .filter(atlas_pages::parent.eq(&page_id))
        .order(atlas_pages::title.desc())
        .load::<atlas::Page>(conn)
    {
        Ok(a) => a,
        Err(_) => Vec::new(),
    }
}

fn get_blocks(
    conn: &mut diesel::PgConnection,
    atlas_id: String,
    page_id: String,
    timestamp: String,
) -> Vec<atlas::Block> {
    use crate::schema::page_blocks;

    match page_blocks::table
        .select((
            page_blocks::page_id,
            page_blocks::atlas_id,
            page_blocks::index,
            page_blocks::label,
            page_blocks::block,
            page_blocks::text,
            page_blocks::data,
            page_blocks::timestamp,
        ))
        .filter(page_blocks::atlas_id.eq(atlas_id))
        .filter(page_blocks::page_id.eq(page_id))
        .filter(page_blocks::timestamp.eq(timestamp))
        .order(page_blocks::index.asc())
        .load::<atlas::Block>(conn)
    {
        Ok(a) => a,
        Err(_) => Vec::new(),
    }
}
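As a side note on the clone() question: most of the clones exist only because each Db::run call moves its closure, and therefore owned Strings, onto the connection thread. If the queries are grouped into a single run call, the ids are moved exactly once and everything inside can borrow them, assuming the helpers are changed to take &str. A standalone sketch of that shape, where run stands in for Db::run and the helpers stand in for the Diesel queries:

use std::thread;

// Stand-ins for the Diesel helpers, rewritten to borrow the ids.
fn get_role(user_id: &str, atlas_id: &str) -> String {
    format!("role-of-{user_id}-in-{atlas_id}")
}

fn get_ancestors(atlas_id: &str, page_id: &str) -> Vec<String> {
    vec![format!("{atlas_id}/{page_id}/parent")]
}

// Stand-in for Db::run, which moves a closure onto a connection thread.
fn run<T, F>(job: F) -> T
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    thread::spawn(job).join().unwrap()
}

fn main() {
    let user_id = String::from("u1");
    let atlas_id = String::from("a1");
    let page_id = String::from("p1");

    // The Strings are moved into the closure exactly once; every query
    // inside borrows them, so no clone() calls are needed.
    let (role, ancestors) = run(move || {
        let role = get_role(&user_id, &atlas_id);
        let ancestors = get_ancestors(&atlas_id, &page_id);
        (role, ancestors)
    });

    println!("{role} {ancestors:?}");
}

Applied to page_get, the auth::check await would move ahead of a single conn.run call, and the *_clone_* bindings would disappear; no lifetime annotations are needed for this.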

“Router not found for request”: Configuring nginx reverse proxy so that backend on port 8080 is served to a path on port 80

I run an app called Trilium on a VPS.
Although I have it working on port 8080, I would like to set it up so that the port 8080 stuff is only on the backend, and going to mydomain.com/trilium presents the page that is currently shown when I go to mydomain.com:8080.

This is my current nginx configuration file for the site:


upstream websocket  {
      server 127.0.0.1:8080; #Trilium
}

server {
        listen 443 ssl;
        include /etc/nginx/includes/ssl.conf;
        server_name mydomain.com;
        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;
        # Path to the root of your installation
         client_max_body_size 0;

        location /trilium {
                proxy_pass https://websocket;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
        }
}

With this setup, going to mydomain.com:8080 loads the app start page. But when I go to mydomain.com/trilium, I get a page that says: {"message":"Router not found for request /trilium"}

On the backend, the Trilium app is throwing these errors:

Error: Router not found for request /trilium
    at /home/admin/trilium-linux-x64-server/src/app.js:69:17
    at Layer.handle (as handle_request) (/home/admin/trilium-linux-x64-server/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/home/admin/trilium-linux-x64-server/node_modules/express/lib/router/index.js:317:13)
    at /home/admin/trilium-linux-x64-server/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/home/admin/trilium-linux-x64-server/node_modules/express/lib/router/index.js:335:12)
    at next (/home/admin/trilium-linux-x64-server/node_modules/express/lib/router/index.js:275:10)
    at Layer.handle (as handle_request) (/home/admin/trilium-linux-x64-server/node_modules/express/lib/router/layer.js:91:12)
    at trim_prefix (/home/admin/trilium-linux-x64-server/node_modules/express/lib/router/index.js:317:13)
    at /home/admin/trilium-linux-x64-server/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/home/admin/trilium-linux-x64-server/node_modules/express/lib/router/index.js:335:12) {
  status: 404
}

I am guessing this is a problem with my nginx configuration, not the app itself. Any help figuring out what that problem is would be wonderful. I am not familiar enough with js to know which of these errors is the original one.

The lines pointed to by the first message:
/home/admin/trilium-linux-x64-server/src/app.js:69

// catch 404 and forward to error handler
app.use((req, res, next) => {
    const err = new Error('Router not found for request ' + req.url);
    err.status = 404;
    next(err);
});

I was thinking of making a minimal example using a simple "hello world" app in Python, but it seems that a Python app needs a WSGI server to communicate with nginx, so I am not sure it would even be an analogous example. Help with figuring out what sort of test app would make a good minimal example would be just as welcome as help with the configuration itself.
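For what it's worth, the error message suggests Trilium is receiving the literal path /trilium, which it has no route for. When proxy_pass carries a URI part (even just a trailing slash), nginx replaces the matched location prefix before forwarding, so a variant like the untested sketch below would hand / to the app; note also that if Trilium itself listens on plain HTTP on 8080, the upstream scheme should likely be http rather than https:

location /trilium/ {
        # trailing slashes here strip the /trilium/ prefix before proxying
        proxy_pass http://websocket/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
}

Even with the prefix stripped, an app served under a sub-path must generate its links relative to that path, so it is worth checking whether the app supports a base-path setting before debugging nginx further.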

ssl – HAProxy: redirect incoming HTTPS requests to an HTTP backend

I'm a newbie with HAProxy, and I want to use it to send incoming HTTPS requests to my HTTP backend servers.

I know how it is possible to do it with nginx, like this:

#SSL for all
server {
    listen 443 ssl ;
    server_name www.example.com;
    absolute_redirect off;
    proxy_redirect off;

    access_log /var/log/nginx/example.com-ssl-access.log;
    error_log /var/log/nginx/example.com-ssl-error.log;

    ssl_protocols TLSv1.2 TLSv1.1 TLSv1 ;
    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; 
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; 


    location / {
        proxy_pass http://bo.example.com;
    }
}

But I don't know how I can do it with HAProxy.

I have already tried several things, but each time I only got HTTPS-to-HTTPS redirects.

Can you help me?

This is my current HAProxy configuration:

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 5s
    user haproxy
    group haproxy
    daemon

    tune.ssl.default-dh-param 2048

defaults

    log     global

    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    stats enable
    stats hide-version
    stats refresh 30s
    stats uri /hastats

frontend www-http
        # Frontend listen port - 80
    bind *:80
    # Operating mode
    mode http

    reqadd X-Forwarded-Proto: http

    # Test URI to see if its a letsencrypt request
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt-backend if letsencrypt-acl

    # Set the default backend
    default_backend www-backend
    # Enable send X-Forwarded-For header
    #option forwardfor
    #option httpchk GET /
    # log reqs http
    #option httplog

    # acl
    #acl prod_acl  hdr(host) prod.local

    #use_backend apache_backend_servers if prod acl


# Define frontend ssl
frontend www-ssl
        bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
        reqadd X-Forwarded-Proto: https
        default_backend www-backend


# define backend

backend www-backend
    mode http
    option httpchk
    option forwardfor except 127.0.0.1

    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    
    redirect scheme http if { hdr(Host) -i example.com } { ssl_fc }
    balance roundrobin
    #Define the backend servers
    server  web1    XXX.XXX.XXX.101  check inter 3s port 80
    server  web2    XXX.XXX.XXX.102  check inter 3s port 80

backend letsencrypt-backend
    server letsencrypt 127.0.0.1:8080
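For reference, HAProxy terminates TLS natively: binding the frontend with ssl crt … and declaring the backend servers without the ssl keyword already gives HTTPS in, plain HTTP out, with no redirect rule involved. The redirect scheme http line in www-backend sends the browser a redirect instead of proxying, which is probably not wanted here. A stripped-down sketch of the relevant parts (addresses are placeholders):

frontend www-ssl
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    http-request add-header X-Forwarded-Proto https
    default_backend www-backend

backend www-backend
    balance roundrobin
    # no "ssl" keyword on the servers: HAProxy speaks plain HTTP to them
    server web1 XXX.XXX.XXX.101:80 check
    server web2 XXX.XXX.XXX.102:80 check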

git – Should frontend and backend be on separate GitHub repos?

We are new to git, but this fundamental question needs to be sorted out before we can begin. We're two devs who have been working standalone for a while, and now the time has come to adopt git (at the first sight of sending each other zips and poking the same files). I work on both front and back; he works only on the back, so teamwork only happens on the backend. It's a WordPress plugin that currently has a standalone backend and a standalone frontend, and they are installed separately. (Commercial, so no SVN here.) Obviously they will be merged into one, especially for production/release. What's the best practice here? My ideas:

  • A. 1 repo that clones into the /wp-content/plugins/ folder of our dev WP installations, ourplugin-front and ourplugin-back then .gitignore any other folders from plugins. One day when we are ready to forge the two, we’ll just create a common ourplugin folder and move the files there.
  • B. 2 repos, one for each side. Eventually one side will get abandoned when its files begin existing on the other one. We’d rename the winning repo, while losing versions/history of the transferred files.
  • C. 2 repos, but combining the actual repos once we no longer work standalone. Since I'm new to this, it might be a clusterf*ck, but I read that it's possible (see the command sketch after this list). Or we could decide what we want now and avoid this, as it'd turn into A. anyway.
  • D. 2 repos. Combine only at production build and do not store the built/combined version on git at all. Not sure what tool would pull from 2 repos, build, and combine things into one. Sounds fancy. Would need to keep the front up to date for the backend guy on his machine though (scheduled git pull or something).
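Regarding option C, merging two repositories later without losing history is possible, which also removes the main drawback of B: git can join unrelated histories. A rough sketch of the commands involved (the repo names are ours):

# inside the repo that will survive, e.g. ourplugin-back
git remote add front ../ourplugin-front
git fetch front
git merge --allow-unrelated-histories front/master
# then arrange the files into the final ourplugin/ layout and commit

(--allow-unrelated-histories is required since Git 2.9; older versions perform such merges without the flag.)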

why and when are queues used in backend architectures?

Since, instead of a lot of users waiting a bit, the user asking the backend to perform the synchronous task would potentially wait a long time?

No one said the work on the queue was being done synchronously.

  • Many such implementations will have a thread pool performing several pieces of work in parallel.
  • Other implementations will use a priority queue to ensure that quicker jobs are done first.
  • Even better implementations will use both, with a high-priority pool and a low-priority pool to draw work from.

The relevance of a queue is to ensure that requests, once received, are remembered until they can be processed; the sketch below shows the shape.
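Here a channel acts as the queue and a small pool of workers drains it in parallel; this minimal Rust sketch uses std::sync::mpsc as a stand-in for whatever queue the real architecture uses:

use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    // The queue: requests are remembered here until a worker is free.
    let (tx, rx) = mpsc::channel::<String>();
    let rx = Arc::new(Mutex::new(rx));

    // A small pool of workers processing jobs in parallel.
    let workers: Vec<_> = (0..4)
        .map(|id| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Hold the lock only long enough to pop one job.
                let job = rx.lock().unwrap().recv();
                match job {
                    Ok(job) => println!("worker {id}: {job}"),
                    Err(_) => break, // queue closed and drained
                }
            })
        })
        .collect();

    // The "front end" enqueues work and returns to the client immediately.
    for n in 0..10 {
        tx.send(format!("job {n}")).unwrap();
    }
    drop(tx); // close the queue so the workers exit

    for w in workers {
        w.join().unwrap();
    }
}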

Aren't most web servers implemented with queues in the first place?

Yes, but those queues are on the front end, and once your program receives the request, the timer starts. You have exactly X seconds to complete processing before the web server terminates the connection. A queue permits those requests to keep being processed beyond this limit.

Since a server can respond to multiple requests asynchronously?

Also, those requests are accepted very quickly (assuming a multi-threaded server), which could easily overwhelm your actual resource budget. Delaying work until later is something a queue is good for.

Queues are also useful for transferring longer running work to other back-end servers tooled for intensive processing.

Would this only be relevant if the external service uses a protocol other than HTTP, then?

No; queues are supremely useful data structures. They show up under many related names:

  • Futures
  • Rendezvous
  • Channels
  • Streams
  • Pipes

They are used everywhere.

But more specifically, the network protocol has nothing to do with how a server should be implemented.

http – How to get cookies from the backend of a Meteor or Node.js server

I built a small app to help me with some daily work tasks. The app can extract some HTML data from a website and process it according to my needs. The issue is that the website requires authentication and the cookies expire every hour, so every time I have to extract the cookies from the browser again and put them into the HTTP requests in my code. Is there an alternative way to get the cookie through an HTTP request?

Note: the website only authenticates through OTP.
Note 2: the session remains active, but the cookie expires.
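If the login flow can be scripted at all, the usual approach is to perform the authentication request from the Node/Meteor side and read the Set-Cookie headers off the response, rather than copying cookies out of the browser. A minimal Node sketch; the host, path, and body are placeholders, and the OTP still has to come from somewhere, which may be the real blocker here:

const https = require('https');

const req = https.request(
  {
    hostname: 'example.com',
    path: '/login',
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
  },
  (res) => {
    // Set-Cookie arrives as an array of strings, one entry per cookie.
    const cookies = res.headers['set-cookie'] || [];
    console.log(cookies);
    // Store these and send them back in a Cookie header on later requests,
    // repeating this login whenever the hourly expiry hits.
  }
);
req.end(JSON.stringify({ user: 'me', otp: '123456' }));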

Ordering posts by custom field named “date” in backend

I defined a custom field named “date” and now want to order the post list in the backend by this field.

I defined the filters to make ordering work, and the column header is clickable and shows the order arrow icon. The problem is that the post list gets ordered by publish date instead of by the custom field "date".

function crimes_columns($columns) {
    $columns['crimedate'] = 'Crime date';
    return $columns;
}
add_filter('manage_posts_columns', 'crimes_columns');

function crimes_show_columns($name) {
    global $post;
    switch ($name) {
        case 'crimedate':
            echo get_post_meta($post->ID, "date", true);
            break;
    }
}
add_action('manage_posts_custom_column',  'crimes_show_columns');

add_filter( 'manage_edit-sortable_columns', 'crimes_add_custom_column_make_sortable' );
function crimes_add_custom_column_make_sortable( $columns ) {
    $columns['crimedate'] = 'crimedate';
    return $columns;
}

add_action( 'pre_get_posts', 'crimes_orderby_meta' );
function crimes_orderby_meta( $query ) {
    if(!is_admin())
        return;
 
    $orderby = $query->get( 'orderby');
 
    if( 'crimedate' == $orderby ) {
        $query->set('meta_key','date');
        $query->set('meta_type', 'DATE');
        $query->set('orderby','meta_value_date');
    }
}

Do you think the problem is the name of the field? Do I have to rename the field to make it work? Or is there something wrong with my code?
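For comparison, WP_Query's documented orderby values for meta sorting are 'meta_value' and 'meta_value_num'; 'meta_value_date' is not one of them, which would explain the list falling back to publish-date order. With 'meta_type' set to DATE, plain 'meta_value' sorts chronologically. A sketch of the pre_get_posts handler along those lines; it is also worth double-checking that the sortable-columns filter name includes your post type (manage_edit-{post_type}_sortable_columns):

add_action( 'pre_get_posts', 'crimes_orderby_meta' );
function crimes_orderby_meta( $query ) {
    if ( ! is_admin() ) {
        return;
    }

    if ( 'crimedate' === $query->get( 'orderby' ) ) {
        $query->set( 'meta_key', 'date' );
        $query->set( 'meta_type', 'DATE' );     // compare stored values as dates
        $query->set( 'orderby', 'meta_value' ); // not "meta_value_date"
    }
}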