postgresql – Updating multiple fields in a related row within a trigger

I have a search_results table that contains all of the fields a user could search for. It includes fields populated from multiple tables along with a computed search_document column for when the user is not searching by a specific field.

I have triggers on the associated tables to update the search_results table when data is updated, but I’m not sure how to handle when multiple fields are updated.

For example, here is one AFTER UPDATE on customer_tickets:

DECLARE
    search_result search_results%ROWTYPE;
    search_result_needs_update boolean DEFAULT false;
BEGIN
    SELECT * INTO search_result
    FROM search_results
    WHERE customer_ticket_id = NEW.id;

    IF
        NEW.ticket_type <> OLD.ticket_type
    THEN
        search_result_needs_update := true;
        search_result.customer_ticket_ticket_type := NEW.ticket_type;
    END IF;

    IF
        NEW.status <> OLD.status
    THEN
        search_result_needs_update := true;
        search_result.customer_ticket_status := NEW.status;
    END IF;

    IF
        search_result_needs_update = true
    THEN
        -- ???
    END IF;

    RETURN NEW;
END;

That last piece is what I’m having trouble with. I have the record variable with the updated column values, but I don’t know how to write it back to the corresponding row in search_results.

Another option would be to track the changed fields without loading the search_results row first, and then issue a single UPDATE at the end with all of the changed values.
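For what it’s worth, the missing piece could be a plain write-back of the record variable’s columns, along these lines (a sketch using the column names from the snippet above, untested):

```sql
UPDATE search_results
SET customer_ticket_ticket_type = search_result.customer_ticket_ticket_type,
    customer_ticket_status      = search_result.customer_ticket_status
WHERE customer_ticket_id = NEW.id;
```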

postgresql – multicolumn partitioning

Postgres 11

I want to create a table calendars that is partitioned both by range (one month of data per partition) and by a list of keys.

The reason is that:

  1. I want to prune months that are no longer updated.
  2. I need all the data to be partitioned by a list of ids for later joins with another partitioned table, listings, which is partitioned by the same list of keys.

The documentation says:

Declarative partitioning only supports range, list and hash partitioning, whereas table inheritance allows data to be divided in a manner of the user’s choosing.

Thus, if I get it right, this means that my task cannot be done with declarative partitioning, but probably can be done using inheritance.

So I tried to reproduce the documentation’s example with my modification:

CREATE TABLE measurement (
                city_id         int not null,
                logdate         date not null,
                peaktemp        int,
                unitsales       int
            ) PARTITION BY RANGE (logdate); -- this already does not accept mixed types of partitioning

Then I tried to create a partition with my mixed partitioning rules:

CREATE TABLE measurement_y2006m02 (
                CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01'
                        AND peaktemp > 0 AND peaktemp <= 10 )
                      ) INHERITS (measurement);

this gives:

cannot inherit from partitioned table "measurement"
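For what it’s worth, declarative partitioning does allow a partition to itself be partitioned, so a range-then-list nesting can be sketched like this (table and key names here are illustrative, not taken from the question):

```sql
CREATE TABLE calendars (
    key_id   int  NOT NULL,
    logdate  date NOT NULL
) PARTITION BY RANGE (logdate);

-- one month of data, itself partitioned by the list key
CREATE TABLE calendars_y2006m02 PARTITION OF calendars
    FOR VALUES FROM ('2006-02-01') TO ('2006-03-01')
    PARTITION BY LIST (key_id);

CREATE TABLE calendars_y2006m02_k1 PARTITION OF calendars_y2006m02
    FOR VALUES IN (1, 2, 3);
```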

Thank you!

concurrency – What is “special” about PostgreSQL update vs delete+insert

My understanding is that an update locks a tuple, marks it as deleted, and then adds a new tuple.

Essentially, a delete + insert.

But that’s not quite true: there seems to be something fundamentally different in MVCC between an update and a delete + insert.


Setup:

CREATE TABLE example (a int PRIMARY KEY, b int);
INSERT INTO example VALUES (1, 1);

Update (two concurrent sessions):

BEGIN; -- session A
UPDATE example SET b = 2 WHERE a = 1; -- session A
DELETE FROM example WHERE a = 1; -- session B (blocks)
COMMIT; -- session A
-- 0 rows in table example (1 row was deleted by session B)

Delete and insert (two concurrent sessions):

BEGIN; -- session A
DELETE FROM example WHERE a = 1; -- session A
INSERT INTO example VALUES (1, 2); -- session A
DELETE FROM example WHERE a = 1; -- session B (blocks)
COMMIT; -- session A
-- 1 row in table example (nothing was deleted by B)

Thus

UPDATE example SET b = 2 WHERE a = 1;

is different than

DELETE FROM example WHERE a = 1; -- session A
INSERT INTO example VALUES (1, 2); -- session A

How am I to understand the MVCC nature of update? Does the tuple have some sort of MVCC “identity” that is preserved across the update? What is it?
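For reference, the row versions involved can be observed through PostgreSQL’s hidden system columns (a diagnostic sketch, not part of the setup above):

```sql
-- ctid is the physical location of the current row version;
-- xmin/xmax are the inserting and deleting/locking transaction ids
SELECT ctid, xmin, xmax, a, b FROM example;
```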

java – Make current postgresql server accessible by other clients/users via intellij / Ubuntu

After a lot of trial and error I have managed to get a local server running with PostgreSQL databases, and I am able to connect to it from an IntelliJ project. The next step is to make it accessible to other users who work on the same project.

Status: the server was created via the Ubuntu terminal, and I can log in as the postgres user with a password.
I can connect to the server at localhost:5432 from IntelliJ with that user.

Next step: I want to make the server, with that database, accessible to other users. I realize this is not the simplest thing to do, but if anyone could outline how (in the simplest way possible), that would be great.
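The usual starting point for remote access (a sketch, assuming a default Ubuntu install where the config files live under /etc/postgresql/<version>/main/, and an example subnet) is to make the server listen on external interfaces and to allow the clients in pg_hba.conf, followed by a service restart:

```
# postgresql.conf
listen_addresses = '*'          # or a specific interface address

# pg_hba.conf -- allow password logins from the project's subnet (example range)
host    all    all    192.168.1.0/24    md5
```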

postgresql – What are some good autovacuum starting points in Postgres for high-write, high-update, and mostly-read table types?

What are good autovacuum settings (recommendations) for tables like the following?

  • High-write table: insert load ranges between 30 and 10,000 inserts per day. The table can idle for weeks without load, but gets bursts of inserts at least 3 times a week.

  • High-update table: it is partitioned, and its data is 3-8 times the size of the high-write table above for a single insert. A single row gets updated only once, but a burst of unique-key updates in a day can mean 30-10,000 keys needing updates.

  • High-read table: most of my tables are high-read. The data-warehouse tables that hold the results computed from the high-update table are set to fillfactor 80.

  • Deletes happen monthly and in batches: everything relating to a key gets deleted or moved to a backup.

Currently my fillfactor is set to 10-20 for the high-update tables.
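For context, both autovacuum and fillfactor can be overridden per table via storage parameters; the table name and numbers below are placeholders to show the syntax, not recommendations:

```sql
ALTER TABLE my_high_update_table SET (
    autovacuum_vacuum_scale_factor = 0.05,  -- vacuum after ~5% of the table changes
    autovacuum_vacuum_threshold    = 1000,  -- plus this fixed row threshold
    fillfactor                     = 20     -- fill pages only to 20%, leaving room for updates
);
```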

I am using RDS db.t3.large, but I switch to db.t3.micro during low traffic.

Additional question: does setting fillfactor really low slow down SELECTs?

postgresql – logical replication worker says the publication does not exist, although it seems to actually exist?

I’m trying to set up logical replication between two database instances.

I’ve created a publication on the main db, and a subscription on the replica. Specifically:

on main db:

create publication chris for all tables; 

on replica db:

CREATE SUBSCRIPTION chris
    CONNECTION 'host=localhost port=5540 dbname=finder connect_timeout=10'
    PUBLICATION chris;

However, for some reason the replica does not seem to recognize the publication; essentially ERROR: could not receive data from WAL stream: ERROR: publication "chris" does not exist. Logs from the replica:

Oct 20 18:18:31 test2 8y7qb8wz8ps875yrvpmg9c1zsffls4hy-unit-script-postgresql-start(289): 2020-10-20 18:18:31.013 GMT (289) LOG:  background worker "logical replication worker" (PID 3035) exited with exit code 1
Oct 20 18:18:36 test2 8y7qb8wz8ps875yrvpmg9c1zsffls4hy-unit-script-postgresql-start(289): 2020-10-20 18:18:36.020 GMT (3036) LOG:  logical replication apply worker for subscription "chris" has started
Oct 20 18:18:36 test2 8y7qb8wz8ps875yrvpmg9c1zsffls4hy-unit-script-postgresql-start(289): 2020-10-20 18:18:36.022 GMT (3036) ERROR:  could not receive data from WAL stream: ERROR:  publication "chris" does not exist
Oct 20 18:18:36 test2 8y7qb8wz8ps875yrvpmg9c1zsffls4hy-unit-script-postgresql-start(289):         CONTEXT:  slot "chris", output plugin "pgoutput", in the change callback, associated LSN 0/6BA6EC8
Oct 20 18:18:36 test2 8y7qb8wz8ps875yrvpmg9c1zsffls4hy-unit-script-postgresql-start(289): 2020-10-20 18:18:36.023 GMT (289) LOG:  background worker "logical replication worker" (PID 3036) exited with exit code 1
Oct 20 18:18:41 test2 8y7qb8wz8ps875yrvpmg9c1zsffls4hy-unit-script-postgresql-start(289): 2020-10-20 18:18:41.030 GMT (3039) LOG:  logical replication apply worker for subscription "chris" has started
Oct 20 18:18:41 test2 8y7qb8wz8ps875yrvpmg9c1zsffls4hy-unit-script-postgresql-start(289): 2020-10-20 18:18:41.033 GMT (3039) ERROR:  could not receive data from WAL stream: ERROR:  publication "chris" does not exist
Oct 20 18:18:41 test2 8y7qb8wz8ps875yrvpmg9c1zsffls4hy-unit-script-postgresql-start(289):         CONTEXT:  slot "chris", output plugin "pgoutput", in the change callback, associated LSN 0/6BA6EC8
Oct 20 18:18:41 test2 8y7qb8wz8ps875yrvpmg9c1zsffls4hy-unit-script-postgresql-start(289): 2020-10-20 18:18:41.034 GMT (289) LOG:  background worker "logical replication worker" (PID 3039) exited with exit code 1

The main db’s logs strangely also show the same error (ERROR: publication "chris" does not exist):

Oct 20 18:22:56 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:22:56.545 GMT (514) CONTEXT:  slot "chris", output plugin "pgoutput", in the change callback, associated LSN >
Oct 20 18:23:01 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:23:01.555 GMT (515) LOG:  starting logical decoding for slot "chris"
Oct 20 18:23:01 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:23:01.555 GMT (515) DETAIL:  Streaming transactions committing after 0/6BA6E60, reading WAL from 0/6BA6E28.
Oct 20 18:23:01 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:23:01.555 GMT (515) LOG:  logical decoding found consistent point at 0/6BA6E28
Oct 20 18:23:01 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:23:01.555 GMT (515) DETAIL:  There are no running transactions.
Oct 20 18:23:01 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:23:01.556 GMT (515) ERROR:  publication "chris" does not exist
Oct 20 18:23:01 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:23:01.556 GMT (515) CONTEXT:  slot "chris", output plugin "pgoutput", in the change callback, associated LSN >
Oct 20 18:23:06 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:23:06.566 GMT (518) LOG:  starting logical decoding for slot "chris"
Oct 20 18:23:06 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:23:06.566 GMT (518) DETAIL:  Streaming transactions committing after 0/6BA6E60, reading WAL from 0/6BA6E28.
Oct 20 18:23:06 pgtestpublic 22rj0nr8pn1k0r3q8q6pxixp6hn2zi4i-unit-script-postgresql-start(260): 2020-10-20 18:23:06.566 GMT (518) LOG:  logical decoding found consistent point at 0/6BA6E28

However I do see the correct expected output from:

select * from pg_catalog.pg_publication;

on the main database.
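A sanity check worth adding (a sketch; note that publications are per-database, so these must be run in the exact database named by dbname in the subscription’s connection string):

```sql
-- confirm the publication exists in *this* database
SELECT pubname FROM pg_publication;

-- confirm which tables it actually covers
SELECT * FROM pg_publication_tables WHERE pubname = 'chris';
```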

What could I investigate further, and what is the likely cause of the error?

postgres – How to install drupal 9 with PostgreSQL using Docksal?

I use Docksal for Drupal development.

Recently I decided to install Drupal 9 with PostgreSQL as a database using Docksal.

Firstly, I cloned this git-repository.
https://github.com/docksal/boilerplate-drupal9

Secondly, I modified some files according to this manual:
https://github.com/docksal/docksal/issues/193#issuecomment-376343111

Lastly, I ran the “fin init” command and saw an error message:

(warning) Failed to drop or create the database. Do it yourself before
installing. ERROR 2002 (HY000): Can’t connect to MySQL server on ‘db’
(115)

(notice) Starting Drupal installation. This takes a while.

In install.core.inc line 2298:
Database settings:
Array

Frankly speaking, I can’t understand what MySQL has to do with it.
My DB driver should be PostgreSQL, and my default database should also be PostgreSQL.

Has anybody encountered this problem?
Is it possible to install Drupal 9 + PostgreSQL via the docksal service?

foreign key – PostgreSQL: create a foreign key when the referenced table name is repeated in the key value

I am working on a database that does not contain any foreign keys. When I open a table I see keys in a strange format, because the name of the referenced parent table is stored inside the value.

Example:

CREATE TABLE foo 
  ( 
  id SERIAL,
  name text
  );

CREATE TABLE bar
  ( 
  id SERIAL,
  foo_id text,
  name text
  );

INSERT INTO foo("name")
VALUES ('john');

INSERT INTO bar("foo_id","name")
VALUES ('foo.' || 1, 'doe');
INSERT INTO bar("foo_id","name")
VALUES ('foo.' || 1, 'kelly');

Can I create a foreign key with this type of structure? I have never seen this before.
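If changing the schema is an option, one possible direction (a sketch only, assuming Postgres 12+ for generated columns, and noting that foo.id first needs a primary key or unique constraint to be referenced) is to derive a plain integer column from the composite value and point the foreign key at that:

```sql
ALTER TABLE foo ADD PRIMARY KEY (id);

-- extract the numeric part after "foo." into a real integer column
ALTER TABLE bar
    ADD COLUMN foo_ref int
    GENERATED ALWAYS AS (split_part(foo_id, '.', 2)::int) STORED;

ALTER TABLE bar
    ADD CONSTRAINT bar_foo_fk FOREIGN KEY (foo_ref) REFERENCES foo (id);
```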