Recovery with PostgreSQL logical replication

I’m using PostgreSQL 13 on Debian 10.6 and learning about logical replication.

I’ve set up logical replication with one publisher and one subscriber of a single table. I’m wondering what my options are for recovering data (or rolling back) when, for example, someone on the publisher side accidentally updates all the data in the table with the wrong value, or even deletes everything from it. With logical replication these unintentional changes will of course be applied to the subscriber as well.
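
For reference, my setup is the standard minimal one, roughly like this (object names are placeholders):

-- On the publisher:
CREATE PUBLICATION my_pub FOR TABLE my_table;

-- On the subscriber:
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=pubhost dbname=mydb user=repuser'
    PUBLICATION my_pub;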

I’ve searched online relentlessly but have had no luck finding out what my options are.
I read about PITR, but I think that’s better suited to physical replication, whereas I want to test rolling back changes on a specific database on a server.

sql – ON DELETE SET DEFAULT not working in PostgreSQL?

I’m trying to have the default value 'SEM CURSO' set in the aluno table when a curso is deleted, but the way I’m doing it, the foreign key value isn’t being set to the default but to NULL. Am I writing the script incorrectly, or does PostgreSQL not support ON DELETE SET DEFAULT?

The strange thing is that even if PostgreSQL didn’t support it, the script runs without errors.

Here it is:

CREATE TABLE curso (
    nome varchar(30) primary key default 'SEM CURSO'
);

CREATE TABLE aluno (
    nro_matric int primary key,
    nome varchar(50),
    curso_id varchar(30),
    foreign key (curso_id) references curso(nome) on delete set default on update cascade
);

insert into curso(nome) values ('Sistemas de Informação');
insert into curso(nome) values ('Ciencia da Computação');

insert into aluno(nro_matric, nome, curso_id) values (201701, 'Marcio Alves', 'Sistemas de Informação');
insert into aluno(nro_matric, nome, curso_id) values (201702, 'Caio Ribeiro', 'Sistemas de Informação');
insert into aluno(nro_matric, nome, curso_id) values (201703, 'Leticia Alves', 'Ciencia da Computação');
insert into aluno(nro_matric, nome, curso_id) values (201704, 'Fabio Antonio', 'Sistemas de Informação');

delete from curso where nome = 'Ciencia da Computação';

select * from aluno;
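
(Note: ON DELETE SET DEFAULT resets the referencing column, aluno.curso_id, to that column’s own default; no default is declared on it above, hence NULL. A sketch of the variant that should behave as intended, assuming the 'SEM CURSO' row exists in curso:)

CREATE TABLE aluno (
    nro_matric int primary key,
    nome varchar(50),
    -- the default must be declared on the referencing column itself
    curso_id varchar(30) default 'SEM CURSO',
    foreign key (curso_id) references curso(nome)
        on delete set default on update cascade
);

-- the row the default points at must exist,
-- or the foreign key check fails at delete time
insert into curso(nome) values ('SEM CURSO');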

PostgreSQL extended statistics

I have a 1 TB read-only database where performance is critical. It’s difficult to predict queries since they are dynamically generated by the users (the whole thing is basically a visualization platform atop a large collection of medical studies, and users select what they want to visualize). Queries can often be complex and involve 10+ joins. I recently learned about the extended statistics feature, but I find little information online about when best to use it (other than what’s in the documentation).

The DB is pretty well normalized, but makes extensive use of materialized views, which are de-normalized. Is there any performance penalty or other issue with creating extended statistics (functional dependencies and “top n” most-common-value lists) for every pair of columns? It would result in, say, 500 statistics objects across some 70 tables. Time for ANALYZE or inserts is not relevant, only read performance. Also, is there a tool or code snippet to help me do this?

I’m using PostgreSQL 12, and it is already optimized as far as possible with respect to indexing.
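
(On the last point: a generator is straightforward to write against the system catalogs. An untested sketch that emits one CREATE STATISTICS statement per column pair for every ordinary table in the public schema; review the output before running it:)

SELECT format(
         'CREATE STATISTICS %I (dependencies, mcv) ON %I, %I FROM %I.%I;',
         c.relname || '_' || a1.attname || '_' || a2.attname,  -- statistics object name
         a1.attname, a2.attname,                               -- the column pair
         n.nspname, c.relname)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a1 ON a1.attrelid = c.oid
JOIN pg_attribute a2 ON a2.attrelid = c.oid
WHERE c.relkind = 'r'                                -- ordinary tables only
  AND n.nspname = 'public'
  AND a1.attnum > 0 AND a2.attnum > a1.attnum        -- each unordered pair once
  AND NOT a1.attisdropped AND NOT a2.attisdropped;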

Does PostgreSQL have some ability to “compare” a backup with the live database automatically and neatly?

I make full backups of my PG databases each day. I store these pg_dump archives in a directory on the same computer, as well as on offline disks, for up to a year before they are culled.

I’m constantly worried that my “live” data is being either deleted or modified by accident or through malice. It would be extremely useful to be able to do something like:

pg_compare.exe "path to pg_dump of my database X" "Database X" --deletes --updates

And then it would return a JSON array of any records found in the dump but not in the live database, as well as any changes in existing data found between the backup and the live database. (It would not care about new INSERTed rows, of course.)

Basically (translated to human form):

1. The row in table "invoices" with "id" 4825 has changed its numeric "amount" column cell from "49" to "499".
2. The row in table "bookkeeping" with "id" 13459 is no longer present.
3. ...

At the very least, this would help against honest mistakes and go a long way toward easing my paranoia and fear. When working in pgAdmin 4, for example, I’m always worried that I may have selected a row out of sight as I press the “trash can” icon and then the “accept changes” icon. I don’t want to lose data.

It’s important that this can be done in an automated fashion, constantly behind the scenes (or maybe once a day if it’s resource-demanding). It must not involve manual creation of databases or a bunch of complex custom SQL queries. And please, no extensions! I hate extensions.

Is there such an ultra-useful feature built into PG?
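
(For context: short of a built-in feature, the closest plain-SQL approximation is restoring the dump into a scratch schema and diffing each table with EXCEPT, which is exactly the kind of manual work the question hopes to avoid. For the invoices example, assuming the restored copy lives in a schema named backup:)

-- rows that exist in the backup but are missing or changed in the live
-- table; new rows INSERTed into the live table are ignored, as desired
SELECT * FROM backup.invoices
EXCEPT
SELECT * FROM public.invoices;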

postgresql – Postgres insert query automation

In our organization, we are creating a new table that will hold all the employees’ tasks, the time spent on each task, and various other fields; basically an activity-tracking table from which we can extract data, as we have some tools for data visualisation.

Some of the team members have no Postgres experience at all. We currently have two ways: uploading the CSV, or giving them the specific query so that they can update the table. But both of these are manual tasks, so is there any way or tool through which these inserts can be automated and made easy for everyone?
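
(For reference, the CSV route can at least be scripted so it runs from a scheduled job rather than by hand; a minimal sketch using psql’s \copy, with placeholder table and file names:)

-- runnable from a script or cron job via psql
\copy activity_tracker FROM 'daily_tasks.csv' WITH (FORMAT csv, HEADER true)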

postgresql – Default Users for Postgres

The default user is created during installation of the server instance, but you can query for a list of all current users with this query:

SELECT usename AS role_name,
  CASE 
     WHEN usesuper AND usecreatedb THEN 
       CAST('superuser, create database' AS pg_catalog.text)
     WHEN usesuper THEN 
        CAST('superuser' AS pg_catalog.text)
     WHEN usecreatedb THEN 
        CAST('create database' AS pg_catalog.text)
     ELSE 
        CAST('' AS pg_catalog.text)
  END role_attributes
FROM pg_catalog.pg_user
ORDER BY role_name desc;

You can change users and their passwords using ALTER USER. You can also disable the login for the default user.
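
For example (role names are placeholders):

ALTER USER app_user WITH PASSWORD 'new-password';  -- change a password
ALTER USER postgres WITH NOLOGIN;                  -- disable login for the default user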

postgresql – How to check if a sequence is above a certain number and if not change it in Postgres

I have a problem where my SQL sequence has to be above a certain number to fix a unique-constraint error. I started to write an IF statement that checks for that number and, if the sequence is below it, increases it to that number. The statement is for Postgres.

I got the separate parts running, but combining them with IF throws an error and I don’t know why.

First, to select the current number:

SELECT nextval('mySequence')

Then to update the number:

SELECT setval('mySequence', targetNumber, true)

In my attempts, the full statement looks something like this:

IF (SELECT nextval('mySequence') < targetNumber)
THEN (SELECT setval('mySequence', targetNumber, true))
END IF;

and the error is

ERROR:  syntax error at »IF«

Can someone explain what I did wrong here? The error message isn’t giving me much to work with. I would appreciate your help.
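
(Note: IF is PL/pgSQL, not plain SQL, so it only works inside a function or a DO block. A sketch of the same logic, with 42 standing in for targetNumber:)

DO $$
BEGIN
    -- nextval() also advances the sequence by one as a side effect
    IF nextval('mySequence') < 42 THEN
        -- PERFORM replaces SELECT when the result is discarded
        PERFORM setval('mySequence', 42, true);
    END IF;
END
$$;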

postgresql – How to use a Postgres variable in the SELECT clause

With MSSQL it’s easy: the @ marking the start of every variable name lets the parser know that it is a variable, not a column.

This is useful for things like injecting constant values when a SELECT provides the input to an INSERT while copying from a staging table.

declare @foo varchar(50) = 'bar';
select @foo;

How do you express this in Postgres?
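
(Postgres has no @-style session variables in plain SQL; the usual substitutes are a psql client-side variable or, inside SQL itself, a CTE that carries the constant. Sketches with placeholder table names:)

-- In psql, a client-side variable:
--   \set foo bar
--   SELECT :'foo';

-- In plain SQL, a CTE can inject the constant into an INSERT ... SELECT
-- when copying from a staging table:
WITH vars AS (
    SELECT 'bar'::varchar(50) AS foo
)
INSERT INTO target_table (col_a, col_b)
SELECT s.col_a, v.foo
FROM staging_table s
CROSS JOIN vars v;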