pygame – Ensure that random enemies don't spawn too close to the player

I need a way to ensure that random enemies and objects don't spawn too close to the player. If they do, the player is more likely to collide with them and lose.

These enemies and objects spawn depending on the player's score and on some user events.

I tried using a recursive function in these objects' classes to check whether an object's coordinates satisfy the condition of not being near the player's coordinates; if they are valid (they satisfy that condition), I blit the object's image to the screen.

But I found that this idea isn't a good one and requires a lot of code, so I abandoned it.

So, is there another way to implement this?

NOTE:

Enemy coordinates are generated randomly like this:

self.x = random.randrange(2, 502, 20)
self.y = random.randrange(2, 502, 20)

Then, they're blitted to the screen at these coordinates:

screen.blit(myenemy.img, (self.x, self.y))
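
For what it's worth, the validity check doesn't need recursion: a small loop that simply re-rolls the coordinates until they are far enough from the player expresses the same idea in a few lines. Here is a minimal sketch, assuming the player's position is available as player_x/player_y and picking an arbitrary minimum distance (both are assumptions, not names from the question):

import random

MIN_SPAWN_DISTANCE = 100  # assumed minimum distance from the player, in pixels

def random_spawn_point(player_x, player_y, min_dist=MIN_SPAWN_DISTANCE):
    # Re-roll the random coordinates until they are far enough away.
    # The spawn area (roughly 500 x 500) is much larger than the excluded
    # circle, so this loop terminates quickly in practice.
    while True:
        x = random.randrange(2, 502, 20)
        y = random.randrange(2, 502, 20)
        # Compare squared distances to avoid the square root.
        if (x - player_x) ** 2 + (y - player_y) ** 2 >= min_dist ** 2:
            return x, y

An enemy's constructor could then do self.x, self.y = random_spawn_point(player_x, player_y) before the existing blit call.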

restore – How to ensure all log backups are restored on the secondary server in log shipping

I have some concerns regarding the restore job on the secondary server in log shipping for the scenario below:

I have two servers, primary and secondary. The backup, copy, and restore job frequency is 5 minutes.
Suppose a disaster happens and the primary server crashes, so we no longer have any access to it.

On the secondary server, 3 log files have been copied by the log shipping job but are not yet restored.

At present the 1st log file is being restored and the other two are still not restored.
Since our primary server has crashed, we have to bring the secondary server up and make it the primary server.

So, before making the secondary server the primary, how do I ensure that all 3 log files have been restored?
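
For reference, log shipping records the last copied and last restored file per secondary database in msdb, so one way to check is to compare those two values before failing over. A rough sketch in Python with pyodbc (the server name, driver string, and the exact monitor table are assumptions to verify on your own instance):

import pyodbc

# Placeholder connection details for the secondary server.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SECONDARY_SERVER;DATABASE=msdb;Trusted_Connection=yes;"
)

cursor = conn.cursor()
cursor.execute(
    "SELECT secondary_database, last_copied_file, last_restored_file "
    "FROM msdb.dbo.log_shipping_monitor_secondary"
)
for database, copied, restored in cursor.fetchall():
    # When the last restored file equals the last copied file, every
    # copied log backup has been applied to that secondary database.
    state = "all copied logs restored" if copied == restored else "restore pending"
    print(database, state, copied, restored)

Any remaining files can still be restored manually WITH NORECOVERY before the final RESTORE ... WITH RECOVERY that brings the database online.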

optimization – Is there a way to ensure local fonts load from CDN when CDN is enabled?

I've set up a WordPress site with one local font. It is stored in a fonts folder in the child theme and loaded with the following in the child theme's style.css:

@font-face {
    font-family: 'Last Paradise';
    src: url('fonts/LastParadise.eot');
    src: url('fonts/LastParadise.eot?#iefix') format('embedded-opentype'),
        url('fonts/LastParadise.woff2') format('woff2'),
        url('fonts/LastParadise.woff') format('woff'),
        url('fonts/LastParadise.ttf') format('truetype'),
        url('fonts/LastParadise.svg#LastParadise') format('svg');
    font-weight: normal;
    font-style: normal;
}

The site has a CDN, currently activated by the bunny.net CDN plug-in, although I've tried various other options, such as the CDN Enabler plug-in from KeyCDN and the CDN options in various caching plug-ins I've tested.

In all cases, the above-mentioned font does not load from the CDN; it is requested from the website URI.

I notice that a couple of other fonts loaded by plug-ins are also loading from the site URI, not the CDN URI.

Questions

  1. Is there a specific way to import/enqueue/reference a local font such that its URI will be picked up by a CDN plug-in, and thus converted to the CDN URI when the CDN is enabled? (I realise I could put the full CDN-based URI in the @font-face CSS above, but I'd like to know if there's a way to reference the font locally such that its URI is converted to the CDN URI when the CDN is enabled.)

  2. Is there a way to override (e.g. dequeue and then re-enqueue) fonts that are being loaded by plug-ins, so that I could force them to load from the CDN URI (by using the full CDN-based URI when re-referencing them)?

man in the middle – Tactics to ensure payload has not been modified

When sending a request (POST, PUT, etc.), I have a security requirement to ensure that the data in the payload has not been tampered with.

In other words, I need to know with certainty that the data received was entered by the user and has not been intercepted in flight and had data added or altered.

What are strategies and tactics to accomplish this?

I thought of using a key from a previous GET request and then hashing the whole payload along with a timestamp. However, I don't see this as a solution: if a reverse proxy or keylogger were listening to an end user's requests, it would just as soon know the key used for hashing and could overwrite the payload and recompute the hash with the server's expected key, producing what looks like a legitimate hash of the payload. Any ideas?
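
For reference, the keyed-hash idea described above is essentially an HMAC over the payload plus a timestamp. A minimal sketch in Python (the shared key and field names are placeholders; as the question already observes, this only helps if the attacker cannot observe the key itself):

import hashlib
import hmac
import json
import time

SHARED_KEY = b"replace-with-a-secret-only-client-and-server-know"  # placeholder

def sign_payload(payload: dict) -> dict:
    # Attach a timestamp and an HMAC-SHA256 signature over the canonical JSON.
    body = dict(payload, timestamp=int(time.time()))
    message = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return body

def verify_payload(body: dict) -> bool:
    # Recompute the HMAC on the server and compare in constant time.
    received = body.pop("signature", "")
    message = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(received, expected)

The timestamp lets the server reject stale or replayed requests; the hard part, exactly as the question notes, is keeping SHARED_KEY out of reach of whatever sits between the user and the server, which is what TLS and a trusted client normally provide.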

postgresql – SQL statements to ensure that the same location cannot be reserved more than once on the same day?

I have two tables, one for venues (3 locations: hall1, hall2 and hall3) and one for orders (a row could be 441, 2004, 50895, 'Requested', '2021-01-02 10:53', '2021-03-06 11:46', 7, 3), as follows:

CREATE TABLE loc (
    loc_id       INT NOT NULL,
    name_loc     VARCHAR(150) NOT NULL,
    description  VARCHAR(150) NOT NULL,
    type_loc     VARCHAR(150) NOT NULL,
                 CONSTRAINT pk_loc PRIMARY KEY (loc_id)
);

CREATE TABLE orders (
    o_id             INT NOT NULL, 
    o_code           INT NOT NULL, 
    type_id          INT NOT NULL, 
    degree           VARCHAR(150), 
    creation         TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    status           TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, 
    c_user           INT NOT NULL,
    loc_id           INT NULL,
    loc_dt           TIMESTAMP NULL,
                     CONSTRAINT pk_orders PRIMARY KEY(o_id),
                     CONSTRAINT fk_loc FOREIGN KEY ( loc_id ) REFERENCES loc ( loc_id )
);

I cannot change the table structure or add indexes, and I have to use transactions to prove my theory.
At first, I was trying to solve the problem as if there were no concurrency; I was doing something like this:

START TRANSACTION ISOLATION LEVEL READ COMMITTED;

UPDATE orders
SET loc_id = 1 -- 2 or 3
WHERE 
    o_id IN
    (
        SELECT o2.o_id
        FROM orders AS o2
        WHERE o2.loc_id IS NULL AND CURRENT_TIME != loc_dt
    );

COMMIT;

Which SQL statements do I need to ensure that the same location cannot be reserved more than once on the same day? And how can I test it, with and without concurrency?
I don't know whether my SQL statements are okay, or how to continue when two transactions are executing at the same time.

All suggestions are welcome.
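
Not an authoritative answer, but one approach that fits the "transactions only" constraint is to keep the availability check and the assignment in a single conditional UPDATE and run it under SERIALIZABLE isolation, retrying when PostgreSQL reports a serialization failure. Below is a rough sketch in Python with psycopg2; the DSN, the date handling, and the retry loop are assumptions:

import psycopg2
from psycopg2 import errors
from psycopg2.extensions import ISOLATION_LEVEL_SERIALIZABLE

conn = psycopg2.connect("dbname=mydb user=me")          # placeholder DSN
conn.set_isolation_level(ISOLATION_LEVEL_SERIALIZABLE)

def reserve(order_id, location_id, day):
    # Try to assign location_id to the order for the given day.
    # Returns False if that location is already reserved on that day.
    while True:
        try:
            with conn, conn.cursor() as cur:             # commits or rolls back automatically
                cur.execute(
                    """
                    UPDATE orders
                       SET loc_id = %s, loc_dt = %s
                     WHERE o_id = %s
                       AND NOT EXISTS (
                           SELECT 1 FROM orders o2
                            WHERE o2.loc_id = %s
                              AND o2.loc_dt::date = %s
                       )
                    """,
                    (location_id, day, order_id, location_id, day),
                )
                return cur.rowcount == 1                  # 0 rows => already taken that day
        except errors.SerializationFailure:
            # A concurrent transaction reserved at the same time; retry the attempt.
            pass

Under plain READ COMMITTED, two concurrent transactions can both pass the NOT EXISTS check and both commit, which is exactly the double booking to rule out; under SERIALIZABLE, one of them fails with a serialization error (SQLSTATE 40001) instead, which is what the retry loop handles.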

views – addField() does not properly ensure table?

When I create a view with a table display, configure the table columns to be sortable, and set Distinct to avoid duplicates, I get the following SQL error.

General error: 3065 Expression #1 of ORDER BY clause is not in SELECT list, references column ‘XYZ’ which is not in SELECT list

A proposed solution from this issue on drupal.org is to add a custom filter which iterates through all the fields set as sortable and adds them to the query. The query method of my custom filter plugin looks like this.

public function query() {
  // Force View to DISTINCT (because multiple matches makes duplicate
  // query results), and ensure all fields in DISTINCT exist in query to avoid
  // SQL error "ORDER BY clause is not in SELECT list": Table column click
  // to sort (and its default order) adds an additional field too late in the
  // game, so we preemptively add all these fields to the query to anticipate
  // it.
  $this->query->distinct = TRUE;

  // Fields the table sorting plugin might use which might not be in the query.
  $sortable_fields = array_keys(array_filter($this->view->getDisplay()->getOption('style')['options']['info'], function ($item) {
    return $item['sortable'] ?? FALSE;
  }));

  // Get field table meta and add to query:
  foreach ($sortable_fields as $field_name) {
    $field = $this->view->field[$field_name] ?? NULL;
    if ($field) {
      $this->query->addField($field->table, $field->realField);
    }
  }
}

This works when the fields/columns are stored in the base table, but it fails when a field that comes from a joined table is used. The SQL error in this case is the following.

Column not found: 1054 Unknown column ‘politician.first_name’ in ‘field list’

When I look at the query shown with the error, I see that the politician table is joined, but with politician_candidacies_mandates as its alias. I find the whole query somewhat confusing, but the base table of the view is sidejob.

SELECT COUNT(*) AS "expression" 
FROM 
(SELECT DISTINCT 
"sidejob"."id" AS "id", "politician"."first_name" AS "politician_first_name", "politician"."last_name" AS "politician_last_name", "politician"."party" AS "politician_party", "sidejob"."job_title" AS "sidejob_job_title", "sidejob"."job_title_extra" AS "sidejob_job_title_extra", "sidejob_organization"."name" AS "sidejob_organization_name", "sidejob"."category" AS "sidejob_category", "sidejob"."interval" AS "sidejob_interval", "sidejob"."income_level" AS "sidejob_income_level", "sidejob"."created" AS "sidejob_created", "sidejob"."changed" AS "sidejob_changed", "sidejob"."data_change_date" AS "sidejob_data_change_date", "candidacies_mandates_sidejob__mandates"."id" AS "candidacies_mandates_sidejob__mandates_id", "politician_candidacies_mandates"."id" AS "politician_candidacies_mandates_id", "sidejob_organization_sidejob"."id" AS "sidejob_organization_sidejob_id", 1 AS "expression" FROM {sidejob} "sidejob" 

LEFT JOIN {sidejob__mandates} "sidejob__mandates" ON sidejob.id = sidejob__mandates.entity_id AND sidejob__mandates.deleted = :views_join_condition_0 

LEFT JOIN {candidacies_mandates} "candidacies_mandates_sidejob__mandates" ON sidejob__mandates.mandates_target_id = candidacies_mandates_sidejob__mandates.id 

LEFT JOIN {politician} "politician_candidacies_mandates" ON candidacies_mandates_sidejob__mandates.politician = politician_candidacies_mandates.id 

LEFT JOIN {sidejob_organization} "sidejob_organization_sidejob" ON sidejob.sidejob_organization = sidejob_organization_sidejob.id
) "subquery"

When I dig into addField() and ensureTable(), I don't understand how I could ensure the correct relation is used. Maybe the whole approach is not a solution after all.

Any hints and ideas are highly appreciated.

How to ensure Unity does not rearrange the sub-meshes of a mesh on import?

Having a bit of a head-scratcher problem here. My artists are making multiple unique meshes in Maya that contain 3 materials. Each material is assigned to a sub-mesh in a specific order, and this order is identical across all the meshes being created. When bringing these meshes over to Unity, we have noticed that some meshes keep their material/sub-mesh order from Maya and some do not. It has been making code-generated material swaps a nightmare. So, a few questions:

1.] What is Unity’s logic when importing meshes and ordering the sub-meshes?

2.] Is there a simple way to make sure Unity does NOT reorder the sub-meshes/material order?

3.] Is there a way on Maya’s side when exporting to make sure the sub-meshes/material order is as intended?

Any help or ideas are appreciated. Thanks!

How can you ensure order of execution in concurrent tasks?

Here is what I am specifically doing:

  1. I have a thread-safe queue
  2. One ‘write’ thread constantly writes to the queue with data that comes from another service
  3. Multiple ‘read’ threads take from the queue, each thread taking several items at once and doing some processing with them
  4. Once each 'read' thread has processed its current batch of items taken from the queue, the batch is written to a database

The problem is that items need to be written to the database in step 4 in the same order that they come in step 1.

So, for instance, if I am processing some events, this rare case could happen:

  1. ‘Entity 1 Created’ event is added to the queue.
  2. Some other events are added to the queue.
  3. ‘Entity 1 Deleted’ event is added to the queue.
  4. Read thread #1 takes its next batch which includes ‘Entity 1 Created’ and a few others
  5. Read thread #2 takes its next batch which includes ‘Entity 1 Deleted’ and a few others
  6. If for any reason read thread #2 reaches the database sooner than read thread #1, the database will get the 'Entity 1 Deleted' request first; since the entity doesn't exist yet, it will do nothing. Then it will get the 'Entity 1 Created' request, add the entity, and the entity will remain undeleted.

Is there any way to prevent the problem at step 6 without completely destroying performance?
I could limit the 'read' threads to 1, but I am looking for ways to have multiple read threads.
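
One direction often suggested for this pattern (a sketch, not the only answer) is to make ordering a per-entity property instead of a global one: route every event for the same entity to the same reader, so different entities are still processed in parallel, but 'Entity 1 Created' and 'Entity 1 Deleted' can never race each other. A rough Python sketch, where entity_id and process_and_write are hypothetical names:

import queue
import threading

NUM_WORKERS = 4
# One queue per worker; all events for the same entity hash to the same
# worker, so they are handled in arrival order relative to each other.
worker_queues = [queue.Queue() for _ in range(NUM_WORKERS)]

def dispatch(event):
    # Called by the single 'write' thread for each incoming event.
    idx = hash(event["entity_id"]) % NUM_WORKERS   # hypothetical key field
    worker_queues[idx].put(event)

def process_and_write(event):
    # Hypothetical placeholder for the processing and the database write.
    pass

def worker(q):
    while True:
        event = q.get()
        process_and_write(event)
        q.task_done()

threads = [
    threading.Thread(target=worker, args=(q,), daemon=True)
    for q in worker_queues
]
for t in threads:
    t.start()

Batching can still be layered on top of this per worker; the only property that matters is that two events touching the same entity never end up on different workers.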
