views – Auto add content to specific entity queue dependent on chosen taxonomy term

I have a People content type, which has an Expertise field which is a taxonomy term reference field.

There will then be an EntityQueue set up for each Expertise term.

What I’d like to do is automatically add all People profiles to the end of their relevant Expertise EntityQueue. For example, Management EntityQueue will auto populate with all people who are managers, and you will be able to reorder them by editing the EntityQueue, so the most senior managers appear first.

Looks like there was a module in Drupal 7 which added entities to queues by taxonomy term, but I’ve not found a way to easily do this in Drupal 9.

I could just create a view which shows all the People filtered by their Expertise, but that wouldn’t give the ability to reorder.
There is a Drupal 9 Auto Entityqueue module, but this only allows items of a specific content type to be added to an EntityQueue.
I could also add all people to one EntityQueue and then use a view to filter by the relevant Expertise, but I don’t really want one massive People queue, and it makes things tricky: some people might belong to more than one Expertise, their place in the full list of People would be “global,” and you couldn’t reorder them in an individual queue.
I have a feeling that setting up some rules which trigger on save is the way to go, but it’s not something I’ve ever touched, and I’m hoping there is another, less technical, way of achieving this.

web api – What would be the best way to queue asynchronous calls received simultaneously by a web service cluster?

I have an API where, on every call, multiple calls are made to multiple web services in the background; it can take up to 20 seconds to process a request, calling up to 10 different web services depending on the specific use case.

The API is developed using .NET Core hosted in IIS behind a web balancer, the web balancer uses a round robin logic to distribute the requests to the web server cluster.

The API has been in production for several years, but recently an API client started making two requests almost simultaneously. These requests are each sent to a different IIS server, so there is a race condition between the two web servers handling requests associated with the same customer. Since there isn’t any synchronization between the servers, the behavior is somewhat unpredictable; the API was designed to process requests associated with the same customer sequentially.

Our first approach was to ask the API client to simply wait until the first request is fulfilled before sending the second one. For reasons outside the scope of this question, the API client cannot be changed.

Our second approach was to ask the team responsible for the web balancer to send requests associated with the same customer to the same web server. If they could do this, we could easily check whether a request for a customer is currently being processed by the server and wait for the first request to complete before processing the next one. But, also for reasons outside the scope of this question, the web balancer logic cannot be changed.

So the question would be: what do you think could be a good solution to this issue? Almost every solution I can think of is not that simple and/or could add a lot of overhead to the API, since this is a very specific use case; normally, requests associated with the same customer arrive days, even weeks, apart.

Maybe synchronization to “lock” the customer, via a shared database or via communication between every web server in the cluster, with all the corner cases and overhead we would need to take into account. Or maybe there is already a product designed with this type of case in mind, or maybe we could just add another layer of web balancers where we have control over the logic and can therefore always send requests associated with the same customer to the same web server.

Any help would be greatly appreciated.

Best Regards, Mario


Sendmail not working with Queue group

I’ve tried to use queue groups, but they don’t seem to work.
Using sendmail v8.14.7.

In my sendmail.mc:

#######  Queue group ##############
define(`QUEUE_DIRECTORY',`/var/spool/mqueue')
QUEUE_GROUP(`slowmail', `P=/var/spool/mqueue/slowq, I=15m, F=f, R=1, r=1')
QUEUE_GROUP(`fastmail', `P=/var/spool/mqueue/fastq, I=1m, F=f, R=1, r=1')
FEATURE(`queuegroup',`slowmail')
define(`LOCAL_MAILER_QGRP',`fastmail')
In my access.db:
QGRP:example1.com        slowmail
QGRP:example2.com      fastmail

Unfortunately, when an email is sent to the domain example2.com it goes to slowmail. Everything goes to slowmail (apparently the default defined by `queuegroup`).

It seems that the access file is not being consulted.
FEATURE(`access_db') is defined above the queue group definitions.

Thanks for your help

Approval Queue Spam Link Highlighting

Admin submitted a new resource:

Approval Queue Spam Link Highlighting – Lists all links for messages that get caught in the spam phrase filters

When a thread/post/whatever gets caught in the spam phrase filters, it will also show a list of the links that have been placed (as they can easily be hidden within, e.g., commas or periods)

Screenshot from 2021-04-04 13-34-04.png

Useful spam phrases:

Code:
(url*
http*

Read more


message queue – Interactive and Batch traffic in one service

I am designing a workflow and am trying to avoid parallel deployments of the same service, so I am looking to have one service that handles both interactive and batch traffic. My main concern is how to ensure that my service can horizontally scale fast enough for large batch runs without interfering with the interactive traffic. Are there any design patterns for this? We are primarily using AWS technologies, Kubernetes, and JVM languages. It is also worth knowing that we will have two endpoints, with traffic going through /service/interactive and /service/batch. We could use a few different mechanisms to throttle, but I think it’s a bad experience for our batch users to have to retry if we throw a 429. We could also use something like a reply-to queue or a 2-way queue for the batch traffic, but how would we scale up our service to handle more traffic if we have defined a fixed dequeue rate? Can we set the dequeue rate at the queue level instead of at each instance of the service, and can that number change dynamically?

Really just looking for any patterns to handle both batch and interactive traffic in one service. Even if we have to have parallel implementations for interactive and batch traffic, how do we scale batch, since it all comes at once? The batches could come at different times throughout the day, so time-based scaling is not an option; besides, I have never been a fan of time-based scaling, as it is brittle.

Thanks in advance!

algorithms – dijkstra with adjacency list and minimum heap as queue vs adjacency matrix and a normal array as “queue”

Which one should be faster? I have written a script comparing the run times of both. Initially the implementation with an adjacency list and a minimum heap performs faster, but as the number of nodes/edges increases, the latter seems to perform faster.
Is this result expected?

c++ – How to avoid unnecessary copy while pushing/popping data into/from queue

In my learning course, I’ve implemented a message queue to which data gets pushed by one thread and later gets processed by another thread. My implementation is not very efficient, as it involves creating a minimum of three copies of the same data, which is not acceptable. Is there any way to avoid these unnecessary copies? This is my sample working code:

#include <iostream>
#include <string>
#include <list>
#include <thread>
#include <mutex>
#include <condition_variable>

struct Data {
    std::string topic {};
    std::string msg {};

    Data(const std::string& topic, const std::string& msg)
        : topic(topic), msg(msg) {}
};

std::mutex net_mutex {};
std::mutex pro_mutex {};

std::condition_variable net_cond_var {};
std::condition_variable pro_cond_var {};

std::list<Data> net_list;
std::list<Data> pro_list;

void pro_thread() {
    while (true) {
        std::unique_lock<std::mutex> ul(pro_mutex);

        pro_cond_var.wait(ul, [] { return not pro_list.empty(); });
        Data data = pro_list.front(); // third copy
        pro_list.pop_front();

        ul.unlock();

        // do processing
    }
}

void relay_thread() {
    while (true) {
        // relays received network data to different processing threads based upon topic

        std::unique_lock<std::mutex> ul(net_mutex);

        net_cond_var.wait(ul, [] { return not net_list.empty(); });
        Data data = net_list.front(); // second copy
        net_list.pop_front();         // bug fix: pop from net_list, not pro_list

        ul.unlock();

        if (data.topic == "A") { // push data into pro_list queue
            pro_mutex.lock();

            pro_list.emplace_back(data); // copies data yet again
            pro_cond_var.notify_one();

            pro_mutex.unlock();
        }
    }
}

void net_thread() {
    while (true) {
        // receives data from socket and pushes into net_list queue

        Data data("A", "Hello, world!");
        net_mutex.lock();

        net_list.emplace_back(data); // first copy
        net_cond_var.notify_one();

        net_mutex.unlock();
    }
}

int main() {
    std::thread net(net_thread);
    std::thread relay(relay_thread);
    std::thread pro(pro_thread);

    net.join();
    relay.join();
    pro.join();
}