Compress JSON strings stored in PostgreSQL, e.g. with MessagePack?

JSON strings are currently stored in a PostgreSQL 11 table in a text column. For example, a row can contain the text column asks with the string:

{"0.000295":1544.2,"0.000324":1050,"0.000325":40.1,"0.000348":0}

Question: Is it possible to store it in a format that takes up less space? Spending some CPU cycles to serialize / deserialize the JSON is an acceptable trade-off for using less storage. The JSON data does not have to be searchable.

I am particularly interested in JSON encoding / compression schemes such as MessagePack combined with zlib, but how can we use this when inserting the record into the PostgreSQL table?

Note: We use Node.js with Knex.js to communicate with PostgreSQL. Currently, JSON objects are converted to strings using Node's JSON.stringify function.
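One possible direction, sketched below: switch the column from text to bytea and let the Node.js side hand Knex a Buffer containing the MessagePack + zlib output. The table and column names are assumptions for illustration, and note that PostgreSQL already compresses large text values automatically via TOAST, so it is worth comparing sizes with pg_column_size before committing to the change.

-- Sketch: store pre-compressed bytes instead of JSON text.
-- Table / column names are made up for illustration.
CREATE TABLE order_book (
    id   bigserial PRIMARY KEY,
    asks bytea          -- MessagePack + zlib output produced in Node.js
);

-- From Knex, a Buffer bound as a parameter arrives as bytea; a literal insert
-- would look like this ('\xdeadbeef' is placeholder bytes, not real data):
INSERT INTO order_book (asks) VALUES ('\xdeadbeef'::bytea);

-- Compare the stored size against the old text representation:
SELECT pg_column_size(asks) AS bytes_on_disk FROM order_book;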

postgresql – Fill a table with X rows of default values using one large insert query? Or can it be done somehow by default?

I have a table, let's say calendar

Table Calendar --> idCal (int) , numberOfFields (int)

And each calendar has a number of fields assigned to it:

Table Field --> idField(int), textField(text), idCal (int) *fk

Now, every time a user registers, they are assigned a calendar. Once the calendar is created, I select its numberOfFields value and generate an insert query similar to the following:

INSERT INTO Field (textField, idCal) VALUES ('', 'idOfTheGeneratedCalendar'), ('', 'idOfTheGeneratedCalendar') ......

This continues until I have a number of rows corresponding to numberOfFields from the calendar table. Each idField (int) starts at 0 and auto-increments up to the total number of fields.

I do this for every user. The point is… is there a better way to do this than building large insert queries, each with around 3000 values, in a for loop? Should I be concerned?
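One way to avoid building the 3000-value VALUES list on the client is to generate the rows server-side with generate_series. A sketch, using the table and column names from the question (the literal 42 stands in for the id of the newly created calendar):

-- Insert numberOfFields empty rows for one calendar in a single statement.
INSERT INTO Field (textField, idCal)
SELECT '', c.idCal
FROM   Calendar AS c
CROSS  JOIN generate_series(1, c.numberOfFields)
WHERE  c.idCal = 42;  -- placeholder for the generated calendar's id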

Penetration test – PostgreSQL exploit from ExploitDB is not loaded into msfconsole

I'm trying to get a PostgreSQL exploit (32847.txt – Low Cost Function) from ExploitDB to run in msfconsole. After pulling my hair out trying to find out why it won't load, I'm here. I use Kali Linux, 64-bit Debian, in VirtualBox on a Windows host.

So far I've done the following:

Downloaded the corresponding exploit from ExploitDB and put the .txt into a folder I created, …/.msf4/modules/exploits/postgresql. When I run ls on the command line, it's there. No problem.

Opened the terminal again and updated the database with updatedb.

Started msfconsole, and the exploit is not loaded into msfdb (I cannot reach it via "use exploit/…"). I tried updating the database, and I tried copying the exploit directly into the exploits folder both via the GUI and on the command line.

My error is "Error loading module: exploit/postgresql/32847.txt"

Did I put the exploit in the wrong folder, or is it something else?

Please help!

postgresql – MAX(UPPER(range)) cannot be accelerated with a GiST index on the range?

To get the last session, I have to SELECT MAX(UPPER(range)) AS end FROM sessions WHERE session_id = ?. I have a GiST index on (session_id, range). This query is extremely slow, taking almost 30 seconds. I added a plain btree index on (session_id, UPPER(range)) and that reduced it to less than a millisecond, but it seems like the GiST index should already allow queries on the range's upper bound. Is there a way to do this with just one index? Am I doing something wrong, either in the query or in the index? Should I use an index type other than GiST?
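For what it's worth, a GiST index does not provide an ordered scan, so it cannot feed a MAX() aggregate directly; the expression btree described above can. A minimal sketch of that two-column approach, assuming the range column is a tstzrange (the actual range type is not given in the question):

-- Expression index that matches both the equality filter and the MAX() lookup.
CREATE INDEX sessions_session_id_upper_idx ON sessions (session_id, upper(range));

-- This can now be answered with a single index lookup
-- (123 is a placeholder for the bound session_id parameter).
SELECT max(upper(range)) AS "end"
FROM   sessions
WHERE  session_id = 123;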

postgresql – How do I create an "empty" partial index or an equivalent index in Postgres?

I have a Boolean column that is false for > 99.9% of the rows. I need to efficiently get all the rows where this column is true.

What is the best option? Create an index on the column? Create a partial index over the rows where the column is true? But then I don't know which column(s) the partial index would / should contain.

This doesn't parse, but is there a way to do something like CREATE INDEX mytable_cond ON mytable () WHERE cond = TRUE;, where the index literally contains zero columns?
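A sketch of the usual workaround, using the column name cond from the question: an index has to cover at least one column or expression, so index the boolean itself (or any cheap expression) and keep all the false rows out with a WHERE clause.

-- Partial index containing only the ~0.1% of rows where cond is true.
CREATE INDEX mytable_cond ON mytable (cond) WHERE cond;

-- Queries must repeat the predicate so the planner can match the partial index.
SELECT * FROM mytable WHERE cond;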

It takes a long time to retrieve data from a PostgreSQL database with millions of rows

I've been working on a system where users can register, create a book club, and invite other people (members) to join. Users and members can both add books to the club and can also vote for books that other members have added. I recently added a lot of data to check whether the database performs well. After that I found that it takes a long time to get the data I actually want: all the books in a club, including their votes and the names of the members who voted for them.

My database diagram (created via dbdiagram.io, check it out)

Database diagram
In order to query the database freely without much effort, I chose Hasura, an open-source service that can create a GraphQL backend by just looking at the data structure (I use PostgreSQL). I use the following query to get the data I want:

query GetBooksOfClubIncludingVotesAndMemberName {
  books(
    where: {
      club_id: {_eq: "3"}, 
      state:{_eq: 0 }
    }, 
    order_by: [
      { fallback: asc },
      { id: asc }
    ]
  ) {
    id
    isbn
    state
    votes {
      member {
        id
        name
      }
    }
  }    
}

This query is of course converted into an SQL statement:

SELECT
  coalesce(
    json_agg(
      "root"
      ORDER BY
        "root.pg.fallback" ASC NULLS LAST,
        "root.pg.id" ASC NULLS LAST
    ),
    '[]'
  ) AS "root"
FROM
  (
    SELECT
      row_to_json(
        (
          SELECT
            "_8_e"
          FROM
            (
              SELECT
                "_0_root.base"."id" AS "id",
                "_0_root.base"."isbn" AS "isbn",
                "_7_root.ar.root.votes"."votes" AS "votes"
            ) AS "_8_e"
        )
      ) AS "root",
      "_0_root.base"."id" AS "root.pg.id",
      "_0_root.base"."fallback" AS "root.pg.fallback"
    FROM
      (
        SELECT
          *
        FROM
          "public"."books"
        WHERE
          (
            (("public"."books"."club_id") = (('3') :: bigint))
            AND (("public"."books"."state") = (('0') :: smallint))
          )
      ) AS "_0_root.base"
      LEFT OUTER JOIN LATERAL (
        SELECT
          coalesce(json_agg("votes"), '[]') AS "votes"
        FROM
          (
            SELECT
              row_to_json(
                (
                  SELECT
                    "_5_e"
                  FROM
                    (
                      SELECT
                        "_4_root.ar.root.votes.or.member"."member" AS "member"
                    ) AS "_5_e"
                )
              ) AS "votes"
            FROM
              (
                SELECT
                  *
                FROM
                  "public"."votes"
                WHERE
                  (("_0_root.base"."id") = ("book_id"))
              ) AS "_1_root.ar.root.votes.base"
              LEFT OUTER JOIN LATERAL (
                SELECT
                  row_to_json(
                    (
                      SELECT
                        "_3_e"
                      FROM
                        (
                          SELECT
                            "_2_root.ar.root.votes.or.member.base"."id" AS "id",
                            "_2_root.ar.root.votes.or.member.base"."name" AS "name"
                        ) AS "_3_e"
                    )
                  ) AS "member"
                FROM
                  (
                    SELECT
                      *
                    FROM
                      "public"."members"
                    WHERE
                      (
                        ("_1_root.ar.root.votes.base"."member_id") = ("id")
                      )
                  ) AS "_2_root.ar.root.votes.or.member.base"
              ) AS "_4_root.ar.root.votes.or.member" ON ('true')
          ) AS "_6_root.ar.root.votes"
      ) AS "_7_root.ar.root.votes" ON ('true')
    ORDER BY
      "root.pg.fallback" ASC NULLS LAST,
      "root.pg.id" ASC NULLS LAST
  ) AS "_9_root";

When executing this statement with EXPLAIN ANALYZE in front of it, it tells me that it took about 9217 milliseconds to finish. See the analysis output below:

                                                                         QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=12057321.11..12057321.15 rows=1 width=32) (actual time=9151.967..9151.967 rows=1 loops=1)
   ->  Sort  (cost=12057312.92..12057313.38 rows=182 width=37) (actual time=9151.856..9151.865 rows=180 loops=1)
         Sort Key: books.fallback, books.id
         Sort Method: quicksort  Memory: 72kB
         ->  Nested Loop Left Join  (cost=66041.02..12057306.09 rows=182 width=37) (actual time=301.721..9151.490 rows=180 loops=1)
               ->  Index Scan using book_club on books  (cost=0.43..37888.11 rows=182 width=42) (actual time=249.506..304.469 rows=180 loops=1)
                     Index Cond: (club_id = '3'::bigint)
                     Filter: (state = '0'::smallint)
               ->  Aggregate  (cost=66040.60..66040.64 rows=1 width=32) (actual time=49.134..49.134 rows=1 loops=180)
                     ->  Nested Loop Left Join  (cost=0.72..66040.46 rows=3 width=32) (actual time=0.037..49.124 rows=3 loops=180)
                           ->  Index Only Scan using member_book on votes  (cost=0.43..66021.32 rows=3 width=8) (actual time=0.024..49.104 rows=3 loops=180)
                                 Index Cond: (book_id = books.id)
                                 Heap Fetches: 540
                           ->  Index Scan using members_pkey on members  (cost=0.29..6.38 rows=1 width=36) (actual time=0.005..0.005 rows=1 loops=540)
                                 Index Cond: (id = votes.member_id)
                                 SubPlan 2
                                   ->  Result  (cost=0.00..0.04 rows=1 width=32) (actual time=0.000..0.000 rows=1 loops=540)
                     SubPlan 3
                       ->  Result  (cost=0.00..0.04 rows=1 width=32) (actual time=0.000..0.000 rows=1 loops=540)
               SubPlan 1
                 ->  Result  (cost=0.00..0.04 rows=1 width=32) (actual time=0.001..0.002 rows=1 loops=180)
 Planning Time: 0.788 ms
 JIT:
   Functions: 32
   Options: Inlining true, Optimization true, Expressions true, Deforming true
   Timing: Generation 4.614 ms, Inlining 52.818 ms, Optimization 113.442 ms, Emission 81.939 ms, Total 252.813 ms
 Execution Time: 9217.899 ms
(27 rows)

With table sizes of:

   relname    | rowcount
--------------+----------
 books        |  1153800
 members      |    19230
 votes        |  3461400
 clubs        |     6410
 users        |        3

That takes far too long. In my previous design I had no indexes at all, which slowed things down. I've since added indexes, but I'm still not happy that I have to wait this long. Can I improve anything in the data structure, or something else?
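Two things that might be worth trying, based only on the plan above and not on the actual schema (so treat both as assumptions): the high per-loop cost of the index-only scan on votes suggests book_id may not be the leading column of the member_book index, and the 540 heap fetches suggest the visibility map is stale.

-- 1. An index that leads with book_id, so each book's votes become a cheap lookup:
CREATE INDEX votes_book_id_member_id_idx ON votes (book_id, member_id);

-- 2. Refresh the visibility map and statistics so index-only scans stay index-only:
VACUUM ANALYZE votes;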

postgresql – Pagination when sorting on a non-unique text column

What are the ways to paginate a table on a non-unique text column?

With a unique timestamp column you can index it and run the following to page forward:

SELECT * FROM users WHERE created > '' ORDER BY created ASC LIMIT 20;

If it is a numeric primary key:

SELECT * FROM users WHERE id > 100 ORDER BY id ASC LIMIT 20;

But what can you do if the table is sorted by a non-unique text column (first_name, for example)?

The first thing that comes to mind is the OFFSET method, but it carries a performance penalty on large tables.

Alternatively, you could use a cursor, but the Internet agrees that this is not recommended in a public-facing web application.
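A sketch of the usual keyset (seek) approach: add a unique column as a tie-breaker, so the sort key as a whole becomes unique again. The column names follow the examples above; ('Mallory', 42) stands in for the (first_name, id) values of the last row on the previous page.

-- Composite index so the (first_name, id) ordering can be served by the index.
CREATE INDEX users_first_name_id_idx ON users (first_name, id);

-- Row-value comparison: everything "after" the last row of the previous page.
SELECT *
FROM   users
WHERE  (first_name, id) > ('Mallory', 42)
ORDER  BY first_name, id
LIMIT  20;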

Python – SQLite to PostgreSQL Not Migrated – Django 3.0

Situation

  • I have created a Django 3.0 project with some applications.
  • Then I created an authentication application acc
  • All of this was done on an SQLite database
  • Before that, I had tried a PostgreSQL database in an early application, and that worked well
  • but now if I switch the database in settings.py from SQLite to PostgreSQL, I get an error message when I try to log in
  • If I switch settings.py back to SQLite, everything works fine (e.g. authentication, user login, users doing things on the website with their own settings)
  • I use decorators.py to restrict which pages logged-in users can access (such as the login and registration pages). This leads to errors when I switch to PostgreSQL. The only HttpResponse I use here is the one that contains the error message

decorators.py

from django.http import HttpResponse
from django.shortcuts import redirect
...
def allowed_users(allowed_roles=()):
    def decorator(view_func):
        def wrapper_func(request, *args, **kwargs):

            group = None
            if request.user.groups.exists():
                group = request.user.groups.all()[0].name

            if group in allowed_roles:
                return view_func(request, *args, **kwargs)
            else:
                return HttpResponse('NO AUTHORISATION TO BE HERE')
        return wrapper_func
    return decorator

ERROR

This happens when I log in while settings.py is set to PostgreSQL. When I log out, everything works fine again. If I use SQLite, I can log in and everything works fine.

ValueError at /
The view accounts.decorators.wrapper_function didn't return an HttpResponse object. It returned None instead.
Request Method: GET
Request URL:    http://localhost...
Django Version: 3.0
Exception Type: ValueError
Exception Value: The view accounts.decorators.wrapper_function didn't return an HttpResponse object. It returned None instead.
Exception Location: /Users/.../python3.7/site-packages/django/core/handlers/base.py in _get_response, line 126
Python Executable:  /Users/.../bin/python3
Python Version: 3.7.3
.....

Request information
USER MYUSERNAME
GET No GET data
POST No POST data
FILES  No FILES data
COOKIES ...
...

Tried

  • I have data and tables in both databases: PostgreSQL from some earlier attempts, and SQLite for everything else.
    • I tried to migrate from SQLite to PostgreSQL with this guide.
    • I have successfully made a copy of the SQLite database
    • but when I changed the settings to Postgres and try python manage.py migrate, it says Running migrations: No migrations to apply.
    • python manage.py loaddata db.json
    • The users are migrated from SQLite (I can log in with them, and I get the expected errors if I enter a wrong username or password, just as in the SQLite-only setup), but I do not see any of the data tables in PostgreSQL when I look it up with an IDE (a query to double-check this is sketched after this list)
  • I have spoken to other people on forums, and many said that the decorators file is the problem, but the problem only occurs when changing databases.
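Not a fix, but a quick sanity check from psql: list which tables actually exist in the PostgreSQL database that the PostgreSQL entry in settings.py points at (this assumes the default public schema), to see whether migrate / loaddata created anything at all.

-- Run against the same database the Django PostgreSQL settings point at.
SELECT table_name
FROM   information_schema.tables
WHERE  table_schema = 'public'
ORDER  BY table_name;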