PostgreSQL – how do I list all UNIQUE indexes for a particular database?

I want to get a list of all UNIQUE indexes (for unique column combinations) for a specific database in PostgreSQL. I searched, but could not find the query for it.

I found these two queries:

SELECT indexname FROM pg_indexes;
SELECT * FROM pg_indexes WHERE schemaname = 'public';

but neither fits my needs.
I only need the ones that are unique indexes (for unique column combinations).
Many Thanks.
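For reference, unique indexes are flagged in the pg_index catalog, so something like the following should list them for the current database (a sketch, untested; run while connected to the target database):

```sql
-- List unique indexes in the current database
SELECT n.nspname AS schema_name,
       t.relname AS table_name,
       i.relname AS index_name
FROM pg_index ix
JOIN pg_class i     ON i.oid = ix.indexrelid
JOIN pg_class t     ON t.oid = ix.indrelid
JOIN pg_namespace n ON n.oid = t.relnamespace
WHERE ix.indisunique
  AND n.nspname NOT IN ('pg_catalog', 'pg_toast');
```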

Postgresql Vacuum and Replication – Database Administrators Stack Exchange

I have a cascade replication architecture with 3 postgresql db

[replication architecture diagram]

Because the master is getting too big (~170 GB), I wanted to run a cleanup job over the weekend that performs several DELETE operations on millions of rows in batches, with a VACUUM on the table immediately after each one.
Unfortunately, my cleanup script could not complete because the disk on DB2 filled up (pg_xlog?):

    2019-06-21 17:41:08.770 UTC [1136] FATAL:  could not extend file "base/34163166/44033600.20": No space left on device
    2019-06-21 17:41:08.770 UTC [1136] HINT:  Check free disk space.
    2019-06-21 17:41:08.770 UTC [1136] CONTEXT:  xlog redo at 662/6A087C30 for Heap/INSERT+INIT: off 1
    2019-06-21 17:41:09.188 UTC [13036] FATAL:  could not write to file "pg_xlog/xlogtemp.13036": No space left on device

I still have to run my script, but I'm wondering how to do it so that DB2 doesn't fall over (replication stopped), or whether I should reconfigure it. Also, I'm not really sure why DB2's disk filled up. Do you have any idea?
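Before re-running the script, it may help to check on DB2 what is pinning WAL in pg_xlog; a sketch (untested, and what applies depends on your version and setup):

```sql
-- On DB2: is a replication slot for the downstream standby holding WAL back?
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
-- How much WAL the server is configured to keep around
SHOW wal_keep_segments;
SHOW max_wal_size;
```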

Thank you for your help, cheers

How can I set up users / groups in PostgreSQL so that each user has permissions to objects created by other users in the same group?

I have created a group (role) named "employee" and some users who are members of it and inherit its rights. The database is owned by the group "employee".

The goal: to set things up so that all users can work with all objects in the database.

The problem: I cannot expect users to set the owner to "employee" when they create a new object, because they use various limited interfaces to work with the database. When a user creates a schema or a table, it is created with that user as the owner. This means that the other users have no rights on that schema / table.

I use PostgreSQL 11.2.
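One commonly suggested direction for this kind of setup is ALTER DEFAULT PRIVILEGES, which applies to objects a role creates in the future. A sketch (untested; the user name alice is hypothetical, the group name employee is taken from the question):

```sql
-- Run once per creating user (or have an admin run it on their behalf):
-- grant the group full rights on objects that alice creates in schema public
ALTER DEFAULT PRIVILEGES FOR ROLE alice IN SCHEMA public
    GRANT ALL ON TABLES TO employee;
ALTER DEFAULT PRIVILEGES FOR ROLE alice IN SCHEMA public
    GRANT ALL ON SEQUENCES TO employee;
```

Note that default privileges are recorded per creating role, so this has to be set up for each member of the group.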

Query Performance – SELECT optimization in Postgresql 10

I want to select the latest unique rows based on time.

    SELECT DISTINCT ON (title) *
    FROM eco.tracks
    WHERE id > (SELECT id FROM eco.tracks
                WHERE time_track < ((SELECT time_track FROM eco.tracks ORDER BY id DESC LIMIT 1) - INTERVAL '300 seconds')
                ORDER BY id DESC LIMIT 1)
    ORDER BY title, time_track DESC;

It takes 20 s, which is too slow.
If I replace the id subquery with its actual value, it takes 2 ms:

    SELECT DISTINCT ON (title) *
    FROM eco.tracks WHERE id > 48000000
    ORDER BY title, time_track DESC;

This query

    SELECT id FROM eco.tracks WHERE time_track < ((SELECT time_track FROM eco.tracks ORDER BY id DESC LIMIT 1) - INTERVAL '300 seconds') ORDER BY id DESC LIMIT 1

also takes only 2 ms.

What is wrong?!
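One way to test whether planning of the nested subquery is the culprit is to compute the cutoff id separately, for example in a CTE, which in PostgreSQL 10 acts as an optimization fence. A sketch, untested:

```sql
WITH cutoff AS (
    SELECT id
    FROM eco.tracks
    WHERE time_track < ((SELECT time_track FROM eco.tracks ORDER BY id DESC LIMIT 1)
                        - INTERVAL '300 seconds')
    ORDER BY id DESC
    LIMIT 1
)
SELECT DISTINCT ON (title) t.*
FROM eco.tracks t, cutoff
WHERE t.id > cutoff.id
ORDER BY title, t.time_track DESC;
```

Comparing EXPLAIN (ANALYZE) output of this form against the original would show whether the planner chose a different plan for the correlated version.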

PostgreSQL triggers to track table changes

I'm trying to create a trigger (Postgres 9.6) to track changes to a table. That's my approach:

CREATE OR REPLACE FUNCTION taxon_history() RETURNS trigger AS
$BODY$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO history.taxon (operacao, "data", tecnico, original_oid, taxon)
        VALUES ('DELETE', current_timestamp, current_user, old.oid, old.taxon);

    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO history.taxon (operacao, "data", tecnico, original_oid, taxon)
        VALUES ('DELETE', current_timestamp, current_user, old.oid, old.taxon);

    ELSIF TG_OP = 'INSERT' THEN
        INSERT INTO history.taxon (operacao, "data", tecnico, original_oid, taxon)
        VALUES ('INSERT', current_timestamp, current_user, new.oid, new.taxon);
    END IF;
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER history_taxon
AFTER INSERT OR UPDATE OR DELETE ON taxon
FOR EACH ROW EXECUTE PROCEDURE taxon_history();

However, when something changes in the taxon table, no record is added to the history table. I also get no error message, so I'm in the dark about why nothing happens. What am I doing wrong?
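As a first sanity check (a sketch, untested), it may be worth confirming that the trigger actually exists on the table and is enabled:

```sql
-- User-defined triggers on table taxon; tgenabled = 'O' means enabled
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'taxon'::regclass
  AND NOT tgisinternal;
```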

postgresql – Materialized view permissions prevent the user from being dropped

This is a legacy database for me; PG version 9.6.12 running on a Linux AMI.

I have a user ('example_user') that I want to drop. I have confirmed that 'example_user' owns no objects. But when I run DROP USER example_user; I get the following error (truncated for brevity):

exampledb=# DROP USER example_user;
ERROR:  role "example_user" cannot be dropped because some objects depend on it
DETAIL:  privileges for materialized view "example-prod".view_foo
privileges for materialized view "example-prod".view_bar
privileges for materialized view "example-dev".view_foo
privileges for materialized view "example-dev".view_bar

I've tried about 15 different REVOKE statements to kill the privileges, and in some cases Postgres does not even complain. For example:

exampledb=# revoke all privileges on all tables in schema public from "example_user";

# OR

revoke all privileges on "example-prod".view_foo from "example_user";

I've tried countless variations of REVOKE against each schema, view, and database, and nothing seems to work. The privileges are not removed, and I get the same error messages when I try to drop the user. I'm not sure whether it's related, but pg complains if I don't double-quote the user.

How can these specific (or indeed all) privileges be removed from this user? Are there other strategies for dropping this user, given that I don't need to keep any of its objects or privileges?

Many Thanks!
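For what it's worth, one approach sometimes suggested for this situation is DROP OWNED BY, which, despite its name, also revokes all privileges granted to the role in the current database (a sketch, untested; it must be run in every database where the role has grants):

```sql
-- Run in exampledb (and any other database where example_user has privileges)
DROP OWNED BY example_user;
-- Then, from any database:
DROP USER example_user;
```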

Postgresql – Postgres – Vacuum never finishes on a large / busy table

We have a fairly active PG database hosted on AWS. Recently we received notifications like the following:

    The age of the transaction ID reached 750 million. Autovacuum parameter values for [autovacuum_vacuum_cost_limit, autovacuum_vacuum_cost_delay, autovacuum_naptime] are updated to make the automatic vacuum more aggressive.

I also noticed that disk usage for this particular table was rapidly increasing. Here is the space used:

    "oid": "16413",
    "table_schema": "public",
    "table_name": "connections",
    "row_estimate": 1.01476e+07,
    "total_bytes": 518641270784,
    "index_bytes": 478458511360,
    "toast_bytes": 30646272,
    "table_bytes": 40152113152,
    "total": "483 GB",
    "index": "446 GB",
    "toast": "29 MB",
    "table": "37 GB"

Then, while analyzing something else, we found a long-running vacuum process (from 5 days ago):

    "pid": 14747,
    "duration": "14:11:41.259451",
    "query": "autovacuum: VACUUM ANALYZE public.connections (to prevent wraparound)",
    "state": "active"

(This one was actually new, but it looked just like the previous one, which never finished.)

To confirm, I can see that the connections table has not been autovacuumed since the 15th, and there is a lot to clean up:

    "relid": "16413",
    "schemaname": "public",
    "relname": "connections",
    "seq_scan": 19951154,
    "seq_tup_read": 226032655046,
    "idx_scan": 41705151351,
    "idx_tup_fetch": 375484186787,
    "n_tup_ins": 8029742,
    "n_tup_upd": 13217694302,
    "n_tup_del": 542670,
    "n_tup_hot_upd": 96750657,
    "n_live_tup": 10237553,
    "n_dead_tup": 887751401,
    "n_mod_since_analyze": 350036721,
    "last_vacuum": null,
    "last_autovacuum": "2019-06-15 17:05:51.526792+00",
    "last_analyze": null,
    "last_autoanalyze": "2019-06-15 17:06:27.310486+00",
    "vacuum_count": 0,
    "autovacuum_count": 4190,
    "analyze_count": 0,
    "autoanalyze_count": 4165

I've read a lot about configuring autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor differently for very active tables. That's all well and good, but it doesn't look like the vacuum ever gets through once it's running.

I've also read about tuning autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay to make it more aggressive in the work it needs to do.

I've tried changing some of these settings for the table itself, but I'm not sure they take effect when set only for that particular table.

What is the best way to get this table vacuumed?

Also, would restarting the database affect all of this?
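For reference, the cost-based throttling settings can be overridden per table, and a manual VACUUM is not throttled by default; a sketch (untested; the values are illustrative, not recommendations):

```sql
-- Make autovacuum on this one table much less throttled
ALTER TABLE public.connections SET (
    autovacuum_vacuum_cost_delay = 0,
    autovacuum_vacuum_cost_limit = 2000
);
-- Or run a manual, unthrottled vacuum and watch its progress output:
VACUUM (VERBOSE) public.connections;
```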

How to get the number of columns in PostgreSQL

I have the query below:

    select emp_id,
           sum(case when leave_type = 'Planned' then 1 else 0 end) Planned,
           sum(case when leave_type = 'Not Informed' then 1 else 0 end) NotInformed,
           sum(case when leave_type = 'Informed' then 1 else 0 end) Informed
    from the table where activity_type = 'Leave'
    group by emp_id

The results are below:

[screenshot of the result set]

How can I get the count above shown in a separate column, so that for 567 it shows 3 and for 619 it shows 4? Please help.
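If the goal is simply a per-employee total alongside the three category columns, a count(*) may be what is wanted; a sketch (untested, with a hypothetical table name leaves, and the interpretation of "number of columns" assumed):

```sql
select emp_id,
       sum(case when leave_type = 'Planned' then 1 else 0 end) as planned,
       sum(case when leave_type = 'Not Informed' then 1 else 0 end) as notinformed,
       sum(case when leave_type = 'Informed' then 1 else 0 end) as informed,
       count(*) as total   -- total 'Leave' rows per employee
from leaves                -- hypothetical table name
where activity_type = 'Leave'
group by emp_id;
```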

PostgreSQL – correlation between max_connections and numbackends

I'm trying to figure out why the numbackends field in pg_stat_database never goes above 16 (and often stays the same after a reboot). I have set max_connections to 100 and max_locks_per_transaction to 64. shared_buffers is set to 128 MB. We are about to move the database off the server it currently shares with the application. Obviously there is resource contention, but is there any other property or field I should pay attention to?

Running PostgreSQL 9.4 on RHEL 6.

I've also just dropped the OS memory caches, and the output of "free -h" shows 16G used, 15G free, 139M shared, 39M buffers, and 7.6G cached.
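It might also be worth comparing numbackends against the live session list, to see whether connections are actually being opened at all (a sketch, untested):

```sql
SELECT datname, numbackends
FROM pg_stat_database
WHERE datname = current_database();

SELECT count(*) AS sessions
FROM pg_stat_activity
WHERE datname = current_database();
```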

Any help would be appreciated.

postgresql – Postgres databases must be physically stored in a subfolder of the Ubuntu user's home directory

I'm setting up an Ubuntu 18.04 server that will grant SSH access to many different users, including access to the server's PostgreSQL instance and the ability to create their own Postgres databases.

What I want to achieve is the following. When users create their own Postgres databases, these new databases should automatically be physically stored in a subfolder of the user's home directory.

Note that I know that, in addition to the default Postgres location, there are ways to explicitly select alternate physical disk locations for databases at creation time. However, it does not seem possible to configure Postgres to enforce that all databases created by a particular user are automatically stored in that Ubuntu user's home directory, that is, without the users having to select their own subdirectories as the target of the new database.

Sure, I could automatically create an alternate Postgres location (a tablespace) in each Ubuntu user's home directory. However, that still does not force databases created by the user to be physically stored there instead of in the default Postgres installation location.

Any hints would be much appreciated.
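For what it's worth, the closest built-in mechanism is probably a per-user tablespace plus a per-role default_tablespace; a sketch (untested; the user name alice is hypothetical). Note that default_tablespace affects new tables and indexes, while a whole database still takes its tablespace from the template, or from an explicit TABLESPACE clause, at CREATE DATABASE time, so this does not fully enforce the requirement:

```sql
-- The directory must exist, be empty, and be owned by the postgres OS user
CREATE TABLESPACE alice_ts OWNER alice LOCATION '/home/alice/pgdata';
ALTER ROLE alice SET default_tablespace = alice_ts;
```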