PostgreSQL Delete Trigger does not delete row

I’m currently creating archive records for my database, where each table x has a corresponding x_archive table. My solution was to create a trigger for each table that needs replication, which inserts the deleted data into the archive table.

The trigger seems to run just fine, and the data is replicated to the archive table. The original data, however, is not deleted. The trigger only seems to take effect on the first delete query: if I run the same delete statement again (on the original data), both the row in the original table and the row in the archive table are deleted. I’ve also tried creating trigger functions tailored to insert into one specific table, but the result is the same. Any help is greatly appreciated.

Trigger function

CREATE OR REPLACE FUNCTION archive_record()
RETURNS TRIGGER AS $$
BEGIN
    RAISE NOTICE 'Running trigger';
    EXECUTE format('INSERT INTO %I.%I SELECT $1.*', TG_TABLE_SCHEMA, (TG_TABLE_NAME || '_archive'))
    USING OLD;
    RETURN NULL;
END;
$$ LANGUAGE PLPGSQL;

Example trigger

CREATE TRIGGER delete_test
    AFTER DELETE
    ON test
    FOR EACH ROW
    EXECUTE PROCEDURE archive_record();

Example table

create table test
(
    id                  serial primary key,
    name                varchar(128) not null
);

Example archive table

CREATE TABLE test_archive (
) INHERITS(test);

What’s the best way to upload a million-row spreadsheet as a Google spreadsheet?

I have an Excel spreadsheet with over a million rows that I would like to have as a Google sheet.

I tried uploading the file as a CSV to my Google Drive, which worked. Then I tried opening it as a Google sheet, but after more than 2 days it still hadn’t opened.

I tried copying and pasting from the CSV into the Google sheet. The page keeps saying it is unresponsive; I choose to wait, and that loop continues. I’m not sure if this is the right approach, or how long to expect to wait.

Can Google Sheets access my hard drive? Is there some code I could write in Google Sheets to upload the records one at a time?

I’m willing to wait as long as it takes. Maybe there’s a better approach I am missing?

I can code in VBA. Is there some way Excel could write to a Google sheet?

Ultimately, I would like to access the data in a Google Ads script. I saw that Google Ads scripts can access Google Sheets, hence my approach. But if there is another Google Cloud solution I can upload the data to, I am open to that as well.

Thanks

array – I want to Prepend the Row Number of the Matrix to all Elements within that row

Platform: Mathematica

Hello all!

I have a matrix of nested lists that looks like this:

matrix =
{{{1, 2}, {3, 4}, {5, 6}, {7, 8}, {9, 0}},
{{2, 3}, {4, 5}, {6, 7}, {8, 9}, {0, 1}},
{{1, 3}, {2, 4}, {3, 5}, {4, 6}, {5, 7}}}

and I want to Prepend the row number ‘x’ to each of the elements within each row like this:

{{{x, 1, 2}, {x, 3, 4}, {x, 5, 6}, {x, 7, 8}, {x, 9, 0}},
{{x, 2, 3}, {x, 4, 5}, {x, 6, 7}, {x, 8, 9}, {x, 0, 1}},
{{x, 1, 3}, {x, 2, 4}, {x, 3, 5}, {x, 4, 6}, {x, 5, 7}}}

so that the final product looks like this:

{{{1, 1, 2}, {1, 3, 4}, {1, 5, 6}, {1, 7, 8}, {1, 9, 0}},
{{2, 2, 3}, {2, 4, 5}, {2, 6, 7}, {2, 8, 9}, {2, 0, 1}},
{{3, 1, 3}, {3, 2, 4}, {3, 3, 5}, {3, 4, 6}, {3, 5, 7}}}

I have played around with something like MapIndexed[Prepend[#,x]&,matrix,{3}] which successfully gets me to the intermediate matrix as described above where “x” is prepended, but I can’t figure out how to make “x” conditionally equal the index of the row.
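Not Mathematica, but for reference the target transformation is easy to state in another language. A minimal Python sketch of the same “prepend the 1-based row number to every pair” operation:

```python
# Target transformation from the question: for row r (1-based), turn every
# pair {a, b} in that row into {r, a, b}.
matrix = [
    [[1, 2], [3, 4], [5, 6], [7, 8], [9, 0]],
    [[2, 3], [4, 5], [6, 7], [8, 9], [0, 1]],
    [[1, 3], [2, 4], [3, 5], [4, 6], [5, 7]],
]

# enumerate(..., start=1) supplies the 1-based row index directly.
result = [[[r] + pair for pair in row]
          for r, row in enumerate(matrix, start=1)]
```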

Your help is very much appreciated! Thanks so much in advance!!!

Best regards,
Taylor

google sheets – Google Sheets pivot table won’t show totals for more than one Row item

I’ve got Totals checked for both Common Name and for Pot Description. Why won’t more than one display? If I move Pot Description up, it will total, but then Common Name won’t.

If I untick Common Name totals, there is still no total.

If I delete Common Name, I still don’t get totals.

What am I missing?

screenshot 1

sql server – Creating row number for multiple columns?

I have the below data, and I want to take only one Location per Order ID, ordered by the distance. This data set takes customer ZIPs, compares them to store ZIPs, and returns the distance. I want to choose only the store they’re closest to.

I have this so far:

SELECT 
*
FROM (
    SELECT 
    t.*,
    row_number() over(PARTITION BY orderid ORDER BY dts) rn
    FROM (
        SELECT 
        location,
        orderid,
        group1,
        group2,
        group3,
        group4,
        group5,
        custid,
        dts,
        sum(qty) AS units,
        sum(bsk) AS demand
        FROM osfdist
        GROUP BY location, orderid, group1, group2, group3, group4, group5, custid, dts
    ) t 
) a 

But this just starts the increment at 1 for the first row ordered by Distance and then increments up to the total number of rows. I want it to say:

Downtown    1
Downtown    1
Downtown    1
Downtown    1
Coastal     2
Coastal     2
Coastal     2
Coastal     2

etc, so I can select where row_number=1 and only select the Downtown records.

Location | OrderID | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Customer ID | Distance | Qty | Sales
Downtown | 1 | FOOTWEAR | SHOES | SHOES (LOW) | M | RUN | abc123 | 8.724497523 | 1 | 90
Downtown | 1 | APPAREL | PANTS | PANTS (1/1) | F | FOO | abc123 | 8.724497523 | 1 | 22.5
Downtown | 1 | FOOTWEAR | SHOES | SHOES (LOW) | U | ORI | abc123 | 8.724497523 | 1 | 55
Downtown | 1 | APPAREL | SHORTS | SHORTS | M | TRA | abc123 | 8.724497523 | 3 | 50
Downtown | 1 | APPAREL | PANTS | TRACK PANT | F | ORI | abc123 | 8.724497523 | 1 | 35
Downtown | 1 | APPAREL | PANTS | TRACK PANT | M | ORI | abc123 | 8.724497523 | 1 | 35
Downtown | 1 | FOOTWEAR | SHOES | SHOES (LOW) | M | ORI | abc123 | 8.724497523 | 1 | 65
Downtown | 1 | APPAREL | JACKETS | LIGHT JACKET | F | OUT | abc123 | 8.724497523 | 2 | 100
Downtown | 1 | APPAREL | PANTS | PANTS (1/1) | M | TRA | abc123 | 8.724497523 | 1 | 27.5
Downtown | 1 | FOOTWEAR | SANDALS/SLIPPERS | SLIDES | M | RUN | abc123 | 8.724497523 | 1 | 17.5
Downtown | 1 | APPAREL | TIGHTS | TIGHT LONG | F | TRA | abc123 | 8.724497523 | 2 | 35
Coastal | 1 | APPAREL | PANTS | TRACK PANT | F | ORI | abc123 | 8.888442956 | 1 | 35
Coastal | 1 | FOOTWEAR | SHOES | SHOES (LOW) | M | RUN | abc123 | 8.888442956 | 1 | 90
Coastal | 1 | FOOTWEAR | SANDALS/SLIPPERS | SLIDES | M | RUN | abc123 | 8.888442956 | 1 | 17.5
Coastal | 1 | FOOTWEAR | SHOES | SHOES (LOW) | U | ORI | abc123 | 8.888442956 | 1 | 55
Coastal | 1 | APPAREL | PANTS | TRACK PANT | M | ORI | abc123 | 8.888442956 | 1 | 35
Coastal | 1 | APPAREL | PANTS | PANTS (1/1) | M | TRA | abc123 | 8.888442956 | 1 | 27.5
Coastal | 1 | FOOTWEAR | SHOES | SHOES (LOW) | M | ORI | abc123 | 8.888442956 | 1 | 65
Coastal | 1 | APPAREL | JACKETS | LIGHT JACKET | F | OUT | abc123 | 8.888442956 | 2 | 100
Coastal | 1 | APPAREL | SHORTS | SHORTS | M | TRA | abc123 | 8.888442956 | 3 | 50
Coastal | 1 | APPAREL | PANTS | PANTS (1/1) | F | FOO | abc123 | 8.888442956 | 1 | 22.5
Coastal | 1 | APPAREL | TIGHTS | TIGHT LONG | F | TRA | abc123 | 8.888442956 | 2 | 35
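For reference, the numbering described above (every row of the same location sharing one number, increasing per distinct location when ordered by distance) behaves like a dense rank rather than a plain row number. A small Python sketch of that numbering, using simplified (location, distance) pairs taken from the sample data:

```python
# Simplified sample: (location, distance) per detail row.
rows = [
    ("Downtown", 8.724497523),
    ("Downtown", 8.724497523),
    ("Coastal", 8.888442956),
    ("Coastal", 8.888442956),
]

# Dense rank: one number per distinct location, ordered by distance.
distinct = sorted({(dist, loc) for loc, dist in rows})
rank = {loc: i + 1 for i, (dist, loc) in enumerate(distinct)}

ranked = [(loc, rank[loc]) for loc, dist in rows]
```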

javascript – Rendering efficiency for retro 2d grid chipset games. Draw by texture, or by row col?

This was the closest question I found here: Help with a Fast 2D Grid-Based world rendering technique

I’m doing a basic 2d grid where each box is filled with a texture pulled from a chipset. I’m doing this in JavaScript.

There are two ways I can imagine going about it:

  1. By Row / Col: iterate over every row and column, finding the texture for each block, then drawing it. The pros are that the 2d array iterates better (should be fast access in RAM), as does writing pixels to the canvas (should also still be pretty fresh in RAM).

  2. By Texture: organize the blocks by texture (heavy load once, up front), then draw each texture filling in every block it occupies. The pro here is that I’m not constantly loading up a new texture to draw for each step.

I’m pretty sure #2 is better. The cost of changing texture buffers over and over (all done in RAM) should be significantly worse than what is lost by awkwardly iterating a relatively small array (the world map blocks), or by jumping around the canvas.
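The bucketing step of option 2 can be sketched outside JavaScript. A minimal Python illustration, with a hypothetical grid of texture ids, of grouping cell coordinates by texture once so each texture would be bound a single time per frame:

```python
from collections import defaultdict

# Hypothetical tile map: each cell holds a texture id (names are made up).
grid = [
    ["grass", "grass", "water"],
    ["water", "grass", "water"],
]

# Approach #2 ("by texture"): bucket cell coordinates by texture id,
# so the draw loop changes texture once per bucket, not once per cell.
by_texture = defaultdict(list)
for row, cells in enumerate(grid):
    for col, tex in enumerate(cells):
        by_texture[tex].append((row, col))
```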

Any insight on choosing between these two approaches is what I’m looking for.

mariadb – Get rows above and below a certain row, based on two criteria SQL (with SUM)

Say I have a table like so:

+----------+---------+------+---------------------+
|student_no|level_id |points|      timestamp      |
+----------+---------+------+---------------------+
|     4    |    1    |  70  | 2021-01-14 21:50:38 |
|     3    |    2    |  90  | 2021-01-12 15:38:00 |
|     1    |    1    |  20  | 2021-01-14 13:10:12 |
|     5    |    1    |  50  | 2021-01-13 12:32:11 |
|     7    |    1    |  50  | 2021-01-14 17:15:20 |
|     8    |    1    |  55  | 2021-01-14 09:20:00 |
|    10    |    2    |  99  | 2021-01-15 10:50:38 |
|     2    |    1    |  45  | 2021-01-15 10:50:38 |
+----------+---------+------+---------------------+

What I want to do is find the total points for each person (student_no) and show 5 of these rows in a table, with a certain row (e.g. where student_no = 5) in the middle and the two rows above and below it, in the correct order with the highest at the top. This will be like a scoreboard, but showing only the user’s total points (over all levels) with the two above and two below. Because points could be equal, the timestamp column also needs to be used as a tie-breaker: if two scores are equal, the first person to reach the score is shown above the other person.

I have tried the query below, but it does not output what I need.

SELECT 
    student_no, SUM(points)
FROM
    (
    (SELECT 
        student_no, SUM(points), 1 orderby
    FROM student_points a
    GROUP BY student_no
    HAVING
        SUM(points) > (SELECT SUM(points) FROM student_points WHERE student_no = 40204123)
    ORDER BY SUM(points) ASC LIMIT 3) 
     
     UNION ALL 
     
     (SELECT student_no, SUM(points), 2 orderby
    FROM student_points a
    WHERE student_no = 40204123) 
     
     UNION ALL 
     
     (SELECT student_no, SUM(points), 3 orderby
    FROM student_points a
    GROUP BY student_no
    HAVING
        SUM(points) <= (SELECT SUM(points) FROM student_points WHERE student_no = 40204123)
            AND student_no <> 40204123
    ORDER BY SUM(points) DESC LIMIT 3)
    ) t1
ORDER BY orderby ASC , SUM(points) DESC

This is a dbfiddle of what I am trying:
https://dbfiddle.uk/?rdbms=mariadb_10.4&fiddle=5ada81241513c9a0be0b6c95ad0f2947
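For reference, the window being asked for can be stated procedurally. A Python sketch (not SQL) built on the sample rows above, assuming student_no = 5 as the target row and ties broken by the earlier timestamp:

```python
from collections import defaultdict

# Sample rows from the question: (student_no, points, timestamp).
rows = [
    (4, 70, "2021-01-14 21:50:38"),
    (3, 90, "2021-01-12 15:38:00"),
    (1, 20, "2021-01-14 13:10:12"),
    (5, 50, "2021-01-13 12:32:11"),
    (7, 50, "2021-01-14 17:15:20"),
    (8, 55, "2021-01-14 09:20:00"),
    (10, 99, "2021-01-15 10:50:38"),
    (2, 45, "2021-01-15 10:50:38"),
]

# Total points per student, and the earliest timestamp for tie-breaking.
totals = defaultdict(int)
earliest = {}
for sid, pts, ts in rows:
    totals[sid] += pts
    earliest[sid] = min(earliest.get(sid, ts), ts)

# Highest total first; on equal totals, the earlier timestamp ranks higher.
board = sorted(totals, key=lambda s: (-totals[s], earliest[s]))

target = 5
i = board.index(target)
window = board[max(0, i - 2): i + 3]   # target plus two above and two below
```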

mysql – Query to find the second highest row in a subquery

The goal is to send notifications about the customer updates but only for the first one if there are consecutive updates from the customer in a ticketing system.

This is the simplified query that I’m using to get the data I need. There are a few more columns in the original query, and this subquery for threads is more or less required so I can also identify whether this is a new ticket or an update to an existing one (in case of an update, the role for the latest thread will be customer):

SELECT t.ref, m.role 
  FROM tickets t 
  LEFT JOIN threads th ON (t.id = th.ticket_id) 
  LEFT JOIN members m ON (th.member_id = m.id) 
 WHERE th.id IN ( SELECT MAX(id) 
                    FROM threads 
                   WHERE ticket_id = t.id
                )

It will return a list of tickets so the app can send notifications based on that:

+------------+----------+
| ref        | role     |
+------------+----------+
| 210117-001 | customer |
| 210117-002 | staff    |
+------------+----------+

Now, I want to send only a single notification if there are multiple consecutive updates from the customer.

Question:

How can I pull the last row and also the one before it, to identify whether this is a consecutive reply from the customer?

I was thinking about GROUP_CONCAT and then parsing the output in the app, but tickets can have many threads so that’s not optimal, and there are also a few more fields in the query, so it would violate the ONLY_FULL_GROUP_BY SQL mode.

db<>fiddle here
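For reference, the suppression rule can be stated procedurally. A Python sketch using hypothetical ticket data (the `210117-003` ref and its thread roles are made up for illustration), where a notification is sent only when the latest thread is from a customer and the one before it is not:

```python
# Hypothetical data: ticket ref -> roles of its threads, oldest first.
threads = {
    "210117-001": ["staff", "customer"],     # first customer reply -> notify
    "210117-002": ["customer", "staff"],     # latest is staff -> no notification
    "210117-003": ["customer", "customer"],  # consecutive customer updates -> suppress
}

def should_notify(roles):
    # Notify only if the last thread is a customer reply AND the previous
    # thread (if any) was not also from the customer.
    last = roles[-1]
    prev = roles[-2] if len(roles) > 1 else None
    return last == "customer" and prev != "customer"

to_notify = [ref for ref, roles in threads.items() if should_notify(roles)]
```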

database design – if Transaction 2 then attempted to UPDATE that row as well, a serialization failure would occur

Transaction 1
/* Query 1 */

 SELECT * FROM users WHERE id = 1;

Transaction 2

/* Query 2 */

UPDATE users SET age = 21 WHERE id = 1;
COMMIT; /* in multiversion concurrency
          control, or lock-based READ COMMITTED */

/* Query 1 */

SELECT * FROM users WHERE id = 1;
COMMIT; /* lock-based REPEATABLE READ */
                                  

Under multiversion concurrency control, at the SERIALIZABLE isolation level, both SELECT queries see a snapshot of the database taken at the start of Transaction 1. Therefore, they return the same data. However, if Transaction 2 then attempted to UPDATE that row as well, a serialization failure would occur and Transaction 1 would be forced to roll back.

If Transaction 2 attempted to UPDATE that row, how would a serialization failure occur?

list manipulation – Replacing a random ith row and column from a matrix

Currently I am trying to delete a randomly-chosen $i^{th}$ row and column from a square $n \times n$ matrix $A$. So far I have come up with the following code:

Drop[A, {RandomInteger[{1, 400}]}, {RandomInteger[{1, 400}]}]

The problem with this command is that the random integer chosen for the row is not the same as the random integer chosen for the column.

Is there a way of making them consistent, so that I drop the $i^{th}$ row and corresponding column while maintaining the randomness of selecting $i$?

And if the aim was to not delete the row and column entirely but to replace all their elements with, say 0, how would you go about it?

Thank you.
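Not Mathematica, but the intended behaviour is easy to sketch in Python with plain lists: draw one random index and reuse it for both the row and the column, for either deletion or zeroing. All names here are illustrative:

```python
import random

random.seed(0)  # deterministic for the example; drop for real randomness
n = 4
A = [[r * n + c for c in range(n)] for r in range(n)]

i = random.randrange(n)  # ONE shared random index for both row and column

# Drop row i and column i.
dropped = [[v for c, v in enumerate(row) if c != i]
           for r, row in enumerate(A) if r != i]

# Variant: replace row i and column i with zeros instead of deleting them.
zeroed = [[0 if (r == i or c == i) else A[r][c] for c in range(n)]
          for r in range(n)]
```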