Network Attached Storage – Can I set up Synology DSM on just one hard drive, copy my data onto it, and then set up RAID 1?

Yes, that will work. When installing DSM on the first hard drive, make sure to choose "SHR" as the volume/RAID type.

In the "Storage Manager" synology, you manage your hard drives, volumes and storage pools. After installing DSM on the first drive, do exactly what you said: Copy all the data from MyCloud. After copying the data, insert the second hard drive into the NAS.

To add it to RAID, navigate to Storage Manager, select the storage pool you created when you installed DSM for the first time, and click Add Drive.

At https://www.synology.com/en-uk/knowledgebase/DSM/help/DSM/StorageManager/storage_pool_expand_add_disk you will find Synology's specific instructions.

All in all, this should work, but I would strongly recommend keeping a backup copy of the data somewhere else in case something goes wrong. Syncing and copying that much data puts significant wear on the drives.

Retrieving data from a PostgreSQL database with millions of rows takes a long time

I've worked on a system where users can register, create a book club, and invite other people (members) to join. Both users and members can add books to the club and can also vote for books that other members have added. I recently inserted a lot of data to check whether the database performs well, and found that it takes a long time to get the data I actually want: all the books in a club, including their votes and the names of the members who cast those votes.

My database diagram (created with dbdiagram.io):

[Database diagram]
To be able to query the database freely without much effort, I chose Hasura, an open-source service that creates a GraphQL backend just by looking at the data structure (I use PostgreSQL). I use the following query to get the data I want:

query GetBooksOfClubIncludingVotesAndMemberName {
  books(
    where: {
      club_id: {_eq: "3"}, 
      state:{_eq: 0 }
    }, 
    order_by: [
      { fallback: asc },
      { id: asc }
    ]
  ) {
    id
    isbn
    state
    votes {
      member {
        id
        name
      }
    }
  }    
}

This query is, of course, translated into an SQL statement:

SELECT
  coalesce(
    json_agg(
      "root"
      ORDER BY
        "root.pg.fallback" ASC NULLS LAST,
        "root.pg.id" ASC NULLS LAST
    ),
    '[]'
  ) AS "root"
FROM
  (
    SELECT
      row_to_json(
        (
          SELECT
            "_8_e"
          FROM
            (
              SELECT
                "_0_root.base"."id" AS "id",
                "_0_root.base"."isbn" AS "isbn",
                "_7_root.ar.root.votes"."votes" AS "votes"
            ) AS "_8_e"
        )
      ) AS "root",
      "_0_root.base"."id" AS "root.pg.id",
      "_0_root.base"."fallback" AS "root.pg.fallback"
    FROM
      (
        SELECT
          *
        FROM
          "public"."books"
        WHERE
          (
            (("public"."books"."club_id") = (('3') :: bigint))
            AND (("public"."books"."state") = (('0') :: smallint))
          )
      ) AS "_0_root.base"
      LEFT OUTER JOIN LATERAL (
        SELECT
          coalesce(json_agg("votes"), '[]') AS "votes"
        FROM
          (
            SELECT
              row_to_json(
                (
                  SELECT
                    "_5_e"
                  FROM
                    (
                      SELECT
                        "_4_root.ar.root.votes.or.member"."member" AS "member"
                    ) AS "_5_e"
                )
              ) AS "votes"
            FROM
              (
                SELECT
                  *
                FROM
                  "public"."votes"
                WHERE
                  (("_0_root.base"."id") = ("book_id"))
              ) AS "_1_root.ar.root.votes.base"
              LEFT OUTER JOIN LATERAL (
                SELECT
                  row_to_json(
                    (
                      SELECT
                        "_3_e"
                      FROM
                        (
                          SELECT
                            "_2_root.ar.root.votes.or.member.base"."id" AS "id",
                            "_2_root.ar.root.votes.or.member.base"."name" AS "name"
                        ) AS "_3_e"
                    )
                  ) AS "member"
                FROM
                  (
                    SELECT
                      *
                    FROM
                      "public"."members"
                    WHERE
                      (
                        ("_1_root.ar.root.votes.base"."member_id") = ("id")
                      )
                  ) AS "_2_root.ar.root.votes.or.member.base"
              ) AS "_4_root.ar.root.votes.or.member" ON ('true')
          ) AS "_6_root.ar.root.votes"
      ) AS "_7_root.ar.root.votes" ON ('true')
    ORDER BY
      "root.pg.fallback" ASC NULLS LAST,
      "root.pg.id" ASC NULLS LAST
  ) AS "_9_root";

Executing this statement with EXPLAIN ANALYZE in front of it shows that it takes about 9217 milliseconds to finish. See the plan below:

                                                                         QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=12057321.11..12057321.15 rows=1 width=32) (actual time=9151.967..9151.967 rows=1 loops=1)
   ->  Sort  (cost=12057312.92..12057313.38 rows=182 width=37) (actual time=9151.856..9151.865 rows=180 loops=1)
         Sort Key: books.fallback, books.id
         Sort Method: quicksort  Memory: 72kB
         ->  Nested Loop Left Join  (cost=66041.02..12057306.09 rows=182 width=37) (actual time=301.721..9151.490 rows=180 loops=1)
               ->  Index Scan using book_club on books  (cost=0.43..37888.11 rows=182 width=42) (actual time=249.506..304.469 rows=180 loops=1)
                     Index Cond: (club_id = '3'::bigint)
                     Filter: (state = '0'::smallint)
               ->  Aggregate  (cost=66040.60..66040.64 rows=1 width=32) (actual time=49.134..49.134 rows=1 loops=180)
                     ->  Nested Loop Left Join  (cost=0.72..66040.46 rows=3 width=32) (actual time=0.037..49.124 rows=3 loops=180)
                           ->  Index Only Scan using member_book on votes  (cost=0.43..66021.32 rows=3 width=8) (actual time=0.024..49.104 rows=3 loops=180)
                                 Index Cond: (book_id = books.id)
                                 Heap Fetches: 540
                           ->  Index Scan using members_pkey on members  (cost=0.29..6.38 rows=1 width=36) (actual time=0.005..0.005 rows=1 loops=540)
                                 Index Cond: (id = votes.member_id)
                                 SubPlan 2
                                   ->  Result  (cost=0.00..0.04 rows=1 width=32) (actual time=0.000..0.000 rows=1 loops=540)
                     SubPlan 3
                       ->  Result  (cost=0.00..0.04 rows=1 width=32) (actual time=0.000..0.000 rows=1 loops=540)
               SubPlan 1
                 ->  Result  (cost=0.00..0.04 rows=1 width=32) (actual time=0.001..0.002 rows=1 loops=180)
 Planning Time: 0.788 ms
 JIT:
   Functions: 32
   Options: Inlining true, Optimization true, Expressions true, Deforming true
   Timing: Generation 4.614 ms, Inlining 52.818 ms, Optimization 113.442 ms, Emission 81.939 ms, Total 252.813 ms
 Execution Time: 9217.899 ms
(27 rows)

The table row counts are:

   relname    | rowcount
--------------+----------
 books        |  1153800
 members      |    19230
 votes        |  3461400
 clubs        |     6410
 users        |        3

That is far too long. In my previous design I had no indexes at all, which slowed things down; I've added indexes since, but I'm still not happy that I have to wait this long. Can I improve anything in the data structure, or something else?
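
In case it helps anyone reading along: the plan above shows that the per-book vote lookup dominates, since the index-only scan on member_book is estimated at a cost of about 66,000 and runs 180 times, with 540 heap fetches. A sketch of what could be tried first, assuming member_book has member_id as its leading column (an assumption; the plan does not show the column order), with a hypothetical index name:

-- Hypothetical index with book_id leading, so each per-book vote
-- lookup can descend the b-tree directly instead of walking a
-- large part of an index led by member_id.
CREATE INDEX votes_book_member ON votes (book_id, member_id);

-- The plan reports "Heap Fetches: 540"; vacuuming refreshes the
-- visibility map so index-only scans stay index-only.
VACUUM (ANALYZE) votes;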

Merging tables and retrieving rows in transaction ORDER in SQL Server

I have three tables, BASE_Customer, BASE_Invoice, and BASE_Payment, with the following structure:

[Screenshot: table structure]

Query:

CREATE TABLE BASE_Customer
(
    CustomerId INT IDENTITY(1,1),
    CustomerName VARCHAR(45),
    PRIMARY KEY(CustomerId)
)
INSERT INTO BASE_Customer (CustomerName) VALUES ('LEE')

CREATE TABLE BASE_Invoice
(
    InvoiceId INT IDENTITY(1,1),
    InvoiceDate DATE,
    CustomerId INT NOT NULL,
    InvoiceMethod VARCHAR(45) NOT NULL, -- CASH or CREDIT
    Amount MONEY,
    PRIMARY KEY(InvoiceId)
)
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-16', 1, 'CASH', 1000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-16', 1, 'CREDIT', 2000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-17', 1, 'CREDIT', 500);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-18', 1, 'CREDIT', 2000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-18', 1, 'CASH', 150);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-19', 1, 'CREDIT', 1000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-19', 1, 'CREDIT', 3000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-19', 1, 'CASH', 1000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-20', 1, 'CREDIT', 2250);

CREATE TABLE BASE_Payment
(
    PaymentId INT IDENTITY(1,1),
    PaymentDate DATE,
    CustomerId INT NOT NULL,
    InvoiceId INT NULL,
    PaymentMethod VARCHAR(45) NOT NULL, -- CASH or CREDIT, ADVANCE
    Amount MONEY,
    PRIMARY KEY(PaymentId)
)
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-16', 1, 1, 'CASH', 1000);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, PaymentMethod, Amount) VALUES ('2020-01-16', 1, 'CREDIT', 500);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-18', 1, 4, 'ADVANCE', 500);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, PaymentMethod, Amount) VALUES ('2020-01-18', 1, 'CREDIT', 2000);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-18', 1, 5, 'CASH', 150);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, PaymentMethod, Amount) VALUES ('2020-01-18', 1, 'CREDIT', 5000);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, PaymentMethod, Amount) VALUES ('2020-01-19', 1, 'CREDIT', 1200);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-19', 1, 7, 'ADVANCE', 500);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-19', 1, 8, 'CASH', 1000);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-20', 1, 9, 'ADVANCE', 750);
  • The Customer table is linked to both BASE_Invoice and BASE_Payment (CustomerId is a foreign key in both tables).
  • When a row with the CASH invoice method is inserted into BASE_Invoice, a payment with the CASH payment method is inserted into BASE_Payment at the same time.

  • When a row with the CREDIT invoice method is inserted into BASE_Invoice, sometimes a payment with the ADVANCE payment method is inserted, and sometimes no payment is inserted at all.

  • Payments with the CREDIT payment method appear only in the payment table (they are not tied to a specific invoice).

I need to merge all the tables and get the output in transaction order, as shown in the screenshot below.

[Screenshot: expected output, one row per transaction in order]

The problem is how to merge the tables and preserve the order of the transactions.
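
For reference, a minimal sketch of the usual approach: UNION ALL the invoice and payment rows into one derived table with a source label, then sort. The table and column names come from the DDL above; the aliases (TransactionDate, Source, SourceId) are mine, and since the expected screenshot is missing, the tie-breaking rule for equal dates is my assumption (date, then invoices before payments, then the identity value):

SELECT t.TransactionDate,
       t.Source,
       t.CustomerId,
       t.Method,
       t.Amount
FROM (
    -- Invoices and payments, stacked into one transaction list.
    SELECT InvoiceDate   AS TransactionDate,
           'INVOICE'     AS Source,
           CustomerId,
           InvoiceMethod AS Method,
           Amount,
           InvoiceId     AS SourceId
    FROM BASE_Invoice
    UNION ALL
    SELECT PaymentDate,
           'PAYMENT',
           CustomerId,
           PaymentMethod,
           Amount,
           PaymentId
    FROM BASE_Payment
) AS t
-- 'INVOICE' sorts before 'PAYMENT', so invoices come first on ties.
ORDER BY t.TransactionDate, t.Source, t.SourceId;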

magento2.3 – Magento 2.3 & pwa Studio problem: How can I access a different memory to update data or to retrieve data other than that of the standard memory?

The installation consists of several websites with multiple stores and multiple store views.

I am building functionality that loads the appropriate store view depending on the customer's location, along with that store's product catalog and prices.

However, when I request the current store's information via a GraphQL query, I always end up in the default store.

  • How can I read information from another store, or set values in another store?

  • Is there a GraphQL query, or a REST API call, that switches the store so I can retrieve the products of a given store by its store ID or code?
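
A minimal sketch of the direction that matches Magento's documented GraphQL behavior: sending the Store HTTP header with a store view code scopes the query to that store view, and the REST API similarly accepts the store code in the URL path (/rest/<store_code>/V1/...). The endpoint URL, store code, and function name below are placeholders:

// Sketch: query the Magento 2 GraphQL endpoint for a specific store view.
// 'https://example.com/graphql' and 'german' are placeholder values.
async function fetchProductsForStoreView(storeCode: string): Promise<unknown> {
  const response = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The Store header selects the store view; without it,
      // Magento answers for the default store view.
      'Store': storeCode,
    },
    body: JSON.stringify({
      query: `{
        products(search: "", pageSize: 10) {
          items { sku name }
        }
      }`,
    }),
  });
  return response.json();
}

// Usage: fetchProductsForStoreView('german').then(console.log);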

magento2.3 – Magento 2.3 & pwa studio: How can I access a different memory to update data or to retrieve data other than that of the standard memory?

The store consists of different websites with multiple stores and multiple store views.

I am developing a functionality to call up the respective business view depending on the location of the customer and to call up the product catalog with the prices.

However, when I get the information from current memory, I'm always in standard memory using the GraphQL query.

  • How can I get the information from another store or set values ​​in another store?

  • Is there a GraphQl query or can I ask the rest API to change the store to retrieve the products from a store using the store ID or code?

magento2.3 – PWA Studio: When working with several stores, is it possible to update or retrieve data from a store other than the default one?

As a React developer, I was asked to develop a PWA with PWA Studio. The installation consists of various websites with multiple stores and multiple store views.

I am building functionality that loads the appropriate store view depending on the customer's location, along with that store's product catalog and prices.

But when I request the current store's information via a GraphQL query, I always end up in the default store. How can I read information from another store, or set values in another store? Is there a GraphQL query, or a REST API call, that switches the store so I can retrieve the products of a given store by its store ID or code?

Retrieving data from the Angular router

I update my page title and a text in an HTML element (via the pageTitle variable) in Angular 9. ActivatedRoute is used here to get a snapshot. I was wondering whether I could extract the same data from the router event I subscribe to, and whether I'm using the correct RouterEvent; I'm not sure whether doing so would make the code clearer. All feedback is welcome :).

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute, Router, NavigationEnd } from '@angular/router';
import { Title } from '@angular/platform-browser';

@Component({
  selector: 'app-main-nav',
  templateUrl: './main-nav.component.html',
  styleUrls: ['./main-nav.component.scss']
})
export class MainNavComponent implements OnInit {
  public pageTitle = null;

  constructor(
    private activatedRoute: ActivatedRoute,
    private router: Router,
    private title: Title
  ) {}

  ngOnInit() {
    this.router.events
    .subscribe( event => {
      if( event instanceof NavigationEnd )
        this.processRouteData();
    });
  }

  private processRouteData() {
    let data = this.activatedRoute.snapshot.firstChild.data;

    this.title.setTitle( data.title );
    this.pageTitle = data.title;
  }

}
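
As for reading the data from the event itself: a NavigationEnd event only carries the navigation id and URLs, not the route's data, so (to my understanding) the snapshot still has to come from ActivatedRoute. For comparison, a sketch of the same ngOnInit with the instanceof check moved into an RxJS filter operator:

import { filter } from 'rxjs/operators';

// Variant of ngOnInit: the type guard in filter() narrows the
// event stream to NavigationEnd before subscribe() runs.
ngOnInit() {
  this.router.events
    .pipe(
      filter((event): event is NavigationEnd => event instanceof NavigationEnd)
    )
    .subscribe(() => {
      const data = this.activatedRoute.snapshot.firstChild?.data;
      if (data && data.title) {
        this.title.setTitle(data.title);
        this.pageTitle = data.title;
      }
    });
}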

Primary key – is there a database type that can directly retrieve an element with a pointer in O(1) time?

I am currently using LevelDB, a key→value store. The only data I insert is small binary blobs (nodes of a large Merkle tree). When I put a value into the database, I don't care what the key is; I only need some key back to use as a reference (from the parent nodes that will be written later).

As you can see, a key→value store does the job, but since I don't need control over key generation, I suspect it is unnecessarily slow: for a key→value database to look up a key, a B-tree (or similar) is probably used, which means that finding a single value takes O(log n) disk reads.

For my use case, it should be possible to get closer to O(1) reads and writes: I put a new data item into the database, and it returns something like a direct address that I can use as a reference.

(This can already be done in a plain file by calculating offsets, but I need a "real" transactional, ACID database.)
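
To make that parenthetical concrete, a toy sketch of the offset-as-key idea (the file name and helper names are made up; there are no transactions and no crash safety, which is exactly what's missing):

import { appendFileSync, closeSync, openSync, readSync, statSync, writeFileSync } from 'fs';

const PATH = 'nodes.log'; // placeholder file name
writeFileSync(PATH, Buffer.alloc(0), { flag: 'a' }); // ensure the file exists

// Append a blob; its byte offset in the file becomes its "key".
function put(blob: Buffer): number {
  const offset = statSync(PATH).size; // current end of file
  appendFileSync(PATH, blob);
  return offset; // O(1): no tree traversal, the offset is the address
}

// One positioned read; the caller must remember the blob's length.
function get(offset: number, length: number): Buffer {
  const buf = Buffer.alloc(length);
  const fd = openSync(PATH, 'r');
  readSync(fd, buf, 0, length, offset);
  closeSync(fd);
  return buf;
}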

Does such a thing exist? I tried to search, but I don't know what to call it other than a "pointer-based database".

json rpc – retrieving historical orphaned blocks (chain tips) without old nodes

I'm trying to find orphaned blocks on the Bitcoin network. As I understand it, the best and really the only way to get data on blocks that are no longer in the main chain/branch (including orphaned blocks) is the getchaintips RPC command. What this command returns seems to depend on how long the particular node has been running, since blocks that are not in the main chain give newer nodes no reason to download them.
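
For anyone who hasn't used it, this is roughly what the command reports; the heights and hashes below are made-up placeholders, but the field names are those of the standard bitcoind getchaintips RPC:

$ bitcoin-cli getchaintips
[
  {
    "height": 625000,
    "hash": "00000000000000000000aaaa...",
    "branchlen": 0,
    "status": "active"
  },
  {
    "height": 620000,
    "hash": "00000000000000000000bbbb...",
    "branchlen": 2,
    "status": "valid-fork"
  }
]

branchlen is the length of the branch hanging off the main chain, and status distinguishes the active tip from valid-fork, valid-headers, headers-only, and invalid tips.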

Unfortunately, I no longer have access to the node or data that I set up in 2013, so I can only process a limited number of orphaned blocks (my current node is only a few months old).

Alternatively, I ran the command against an online RPC service, but the earliest recorded block height there is 283,421. Fortunately, I can also use it to call getblock and actually view the information from discarded blocks, but it does not list any blocks before 283,421 (mined in January 2014).

The online block explorer Blockchain.info also tracks orphaned blocks, but unfortunately it only goes back to block #291,122.

While googling a bit, I found a more "extensive" list on Pastebin that goes back to block height 179,641. However, I cannot use getblock on any of those hashes from my node, since my node did not exist when those blocks were originally mined.

My question: is there any way to query discarded or orphaned blocks without needing a node old enough to have witnessed them?