PostgreSQL 9.4 database cluster move to Debian Buster

I have a PostgreSQL folder backup (no dump) from PostgreSQL 9.4.

On my Debian Buster server I want to get this old database working again, so I re-installed postgresql-9.4 following this tutorial: https://wiki.postgresql.org/wiki/Apt#Quickstart

Now it says

pg_createcluster 9.4 main --start
Configure existing cluster (configuration: /etc/postgresql/9.4/main, data: /var/lib/postgresql/9.4/main, owner: 117:120)
Error: move_conffile: Required configuration file /var/lib/postgresql/9.4/main/postgresql.conf does not exist

I was able to create and start a new, second cluster following this answer: https://stackoverflow.com/a/48624460/1069083

But this one is empty. My old main cluster contains all my data.

How do I start the old main cluster?
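
One way this is sometimes handled (a sketch, not verified against your setup): on Debian the configuration lives under /etc/postgresql, so a plain data-directory backup usually has no postgresql.conf inside it, which is exactly what pg_createcluster complains about. Copying a 9.4 postgresql.conf (and pg_hba.conf / pg_ident.conf) into the data directory first, then registering the existing data directory, might work; the backup paths below are placeholders:

# assuming the old data directory was restored to /var/lib/postgresql/9.4/main
sudo cp /path/to/old-config/postgresql.conf /path/to/old-config/pg_hba.conf /path/to/old-config/pg_ident.conf /var/lib/postgresql/9.4/main/
sudo chown -R postgres:postgres /var/lib/postgresql/9.4/main
sudo pg_createcluster 9.4 main --datadir=/var/lib/postgresql/9.4/main --start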

Cluster monitoring of Kubernetes by Prometheus

Kubernetes setup for Prometheus and Grafana

I set this up with reference to the above and ran the following:

kubectl apply --filename https://raw.githubusercontent.com/giantswarm/kubernetes-prometheus/master/manifests-all.yaml
[root@instance-1 ~]# kubectl get pods --namespace=monitoring
NAME                                  READY   STATUS      RESTARTS   AGE
alertmanager-78cbf8f796-crk8k         1/1     Running     0          42m
grafana-core-7f65444f84-2rg6q         1/1     Running     0          42m
grafana-import-dashboards-h4bp5       0/1     Completed   0          42m
kube-state-metrics-5f4c7f9d47-s2ndv   1/1     Running     0          42m
node-directory-size-metrics-57lm5     2/2     Running     0          42m
node-directory-size-metrics-5ncxd     2/2     Running     0          42m
prometheus-core-5c96ddd598-srk4l      1/1     Running     0          42m
prometheus-node-exporter-b8wfz        1/1     Running     0          42m
prometheus-node-exporter-rbfkh        1/1     Running     0          42m
[root@instance-1 ~]# kubectl get svc --namespace=monitoring
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
alertmanager               NodePort    10.19.254.177   <none>        9093:30576/TCP   44m
grafana                    NodePort    10.19.244.179   <none>        3000:31362/TCP   44m
kube-state-metrics         ClusterIP   10.19.241.158   <none>        8080/TCP         44m
prometheus                 NodePort    10.19.241.218   <none>        9090:30472/TCP   44m
prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         44m

Everything appears to be running, but I cannot see the Prometheus and Grafana GUIs; the browser only says the site cannot be reached.
Please tell me how to access the GUIs.
Since the services are created as NodePort, I assume they should be reachable via a URL, but I am stuck. Please help.
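
For what it's worth, a sketch of how GUIs exposed like this are usually reached (assuming the node IPs are reachable from the machine running the browser; the port-forward variant avoids NodePort entirely):

# NodePort: open http://<node-ip>:31362 for Grafana and http://<node-ip>:30472 for Prometheus
kubectl get nodes -o wide        # shows each node's INTERNAL-IP / EXTERNAL-IP

# alternative: tunnel from the workstation instead of using NodePort
kubectl --namespace monitoring port-forward svc/grafana 3000:3000
kubectl --namespace monitoring port-forward svc/prometheus 9090:9090
# then browse to http://localhost:3000 and http://localhost:9090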

Execute DDLs in Galera Cluster

I have a 3-node production MariaDB Galera cluster with many schemas. I have to run ALTERs quite often, and since it is Galera, DDLs cause cluster-wide locking; currently wsrep_OSU_method = TOI. I know of some options:

  • RSU method: but then we have to execute the DDL on every single node ourselves,
    since the change is not replicated to the other nodes.
  • pt-online-schema-change: does not work if triggers already exist, and all tables have triggers in our production.
  • gh-ost: does not work with InnoDB.

So I am thinking about shutting down 2 of the 3 nodes, applying the long-running DDL on the remaining node, and then starting the other nodes again so that they catch up via IST.
Will this still lock all schemas on node 1 while I execute the DDLs with the other 2 nodes down and wsrep_OSU_method = TOI still set on node 1?
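
For reference, the per-node RSU flow from the first option would look roughly like this (just a sketch; the database and table names are placeholders, and the statement has to be repeated on every node because RSU does not replicate the DDL):

# run on each node in turn; RSU applies the DDL only to the local node
mysql -e "SET SESSION wsrep_OSU_method='RSU';
          ALTER TABLE mydb.mytable ADD COLUMN new_col INT NULL;
          SET SESSION wsrep_OSU_method='TOI';"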

repmgr – The PostgreSQL cluster has two primary databases after a network failure

My test environment had a network failure over the weekend, and afterwards two primary databases were running in the cluster.

 ID | Name   | Role    | Status    | Upstream | Location | Priority | Connection string
----+--------+---------+-----------+----------+----------+----------+----------------------------------------------------------
 1  | node-1 | primary | ! running |          | default  | 100      | host=node-1 user=repmgr dbname=repmgr connect_timeout=2
 2  | node-2 | primary | * running |          | default  | 100      | host=node-2 user=repmgr dbname=repmgr connect_timeout=2

Is there a way to put the new primary back into standby and continue WAL replication, instead of running a full standby clone?

PgSQL version: 10.7

Repmgr version: 4.3

Any help is greatly appreciated.
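
One possible direction (a sketch only, not tested here): repmgr 4.x provides a node rejoin subcommand that can reattach a former primary as a standby, optionally using pg_rewind, instead of doing a full standby clone. The config path below is an assumption; the conninfo matches the cluster output above:

# on the node to be demoted (e.g. node-1), with PostgreSQL stopped on it
repmgr node rejoin -f /etc/repmgr.conf \
    -d 'host=node-2 user=repmgr dbname=repmgr connect_timeout=2' \
    --force-rewind --verbose --dry-run
# drop --dry-run once the checks pass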

SQL Server – Setting up a SQL cluster between two sites in active/passive mode with no shared storage

We need to install SQL Server in 2 locations, 1 primary and 1 DR. We do not want to have to pay for the DR license, so we need to know how to set up a SQL cluster / SQL replication / AlwaysOn / BAG or any other method such that the SQL Server service at the DR site stays stopped and only starts when the primary is unavailable.

Currently we have a BAG installed and the SQL Server service is running at both sites, although we only ever use one location at a time. The auditors classified it as active/active, while we thought it was active/passive.

Thank you in advance for your suggestions.

Failover Cluster – Azure Cloud Witness with HTTP proxy

The Windows Server documentation states:

Proxy considerations with Cloud Witness

Cloud Witness uses HTTPS (default port 443) to communicate with the Azure Blob service. Make sure the HTTPS port is accessible through the network proxy.

How do I configure the network proxy on WSFC cluster members for this feature only? I want to avoid configuring a system-wide proxy.

File Cluster – Updating a file on a different server

Hello,
We have 2 servers with Nginx, FTP and the usual software.

The main site runs on Server1 and files are uploaded there. There is a file called Feed.php on Server1 that is updated every minute with new content.
Our next project runs on Server2, and Feed.php also has to be available on Server2.
The problem is: how can I keep Feed.php on Server2 up to date when the file is updated on Server1?
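
One simple approach (a sketch, assuming SSH key-based access from Server1 to Server2; paths and user names are placeholders): push the file with rsync from a cron job on Server1, matching the one-minute update interval:

# /etc/cron.d/sync-feed on Server1
* * * * * www-data rsync -az /var/www/html/Feed.php deploy@server2:/var/www/html/Feed.php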

Load Balancing – Firewall cluster active/active

I want to build a load-balanced, highly available firewall cluster. I was thinking of using Proxmox for the cluster and creating 2 nodes for load balancing and 2 more for HA.

The problem is, how can I load balance?

I thought of using LACP, or some other technique that can balance the incoming traffic on the switch in front of the nodes, but I'm not sure whether that is possible.

Do you know if anyone has ever tried this?

Thanks, and sorry for my English.

Access Control List – Configuring a Consul cluster with ACLs enabled

Hello everyone and thanks for reading.

I'm pretty new to Consul. I have been reading and practicing with the documentation for some time and was able to properly configure Consul on a few nodes.

Now I want to enable ACLs so that I can manage the security of my Consul cluster, but I cannot get it up and running. I am following this guide: https://learn.hashicorp.com/consul/security-networking/production-acls#create-the-agent-policy.

My scenario:

  • Node 1: the 'bootstrap' node. IP: 172.20.10.41
  • Node 2: the 'slave' node. IP: 172.20.10.40

What I expect:

  • Set up and run ACLs to control which processes/nodes connect to the cluster and read/write information.

I can enable ACLs on a Consul agent and run it with the following command:

consul agent -server -bootstrap -config-dir=/etc/consul/conf.d/agent.json -data-dir=/tmp/consul/ -ui -client=0.0.0.0

Here is my agent.json file:

{
  "primary_datacenter": "dc1",
  "acl": {
    "enabled": true,
    "default_policy": "allow",
    "down_policy": "extend-cache"
  }
}

As soon as Consul is running, I run

# consul acl bootstrap

which gives me

AccessorID:   3c354e3c-2d1c-24b1-41ce-0645fdd6c3e7
SecretID:     1e026ae6-8902-eae2-6a18-6b0fb36bbed4
Description:  Bootstrap Token (Global Management)
Local:        false
Create Time:  2019-05-03 12:41:18.038389106 -0300 -03
Policies:
   00000000-0000-0000-0000-000000000001 - global-management

Then I create a policy and a token that allow all node operations:

# consul acl policy create -name "agent-write-policy" -description "Agent Write Policy" -rules @agent_write_policy.hcl -token "1e026ae6-8902-eae2-6a18-6b0fb36bbed4"

And

# consul acl token create -description "agent write token" -policy-name "agent-write-policy" -token "1e026ae6-8902-eae2-6a18-6b0fb36bbed4"

AccessorID:   7324d2d0-f82f-cea8-44d1-82c2d07cd35a
SecretID:     11dfcacf-7eae-a286-f108-990c1963fb29
Description:  agent write token
Local:        false
Create Time:  2019-05-03 12:30:11.292590345 -0300 -03
Policies:
   0171cfc2-06f3-6702-9c46-df117eb1bd53 - agent-write-policy
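
The agent_write_policy.hcl referenced above is not quoted anywhere; a typical rules file for this purpose (an assumed example following the linked guide, not necessarily the exact file used here) looks something like:

# agent_write_policy.hcl (assumed contents)
node_prefix "" {
  policy = "write"
}
service_prefix "" {
  policy = "read"
}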

Then I go to my second server node and start Consul:

# consul agent -server -data-dir=/tmp/consul/ -config-dir=/etc/consul/conf.d/agent.json

My agent.json file:

{
  "primary_datacenter": "dc1",
  "acl": {
    "enabled": true,
    "default_policy": "allow",
    "down_policy": "extend-cache",
    "tokens": {
      "default": "11dfcacf-7eae-a286-f108-990c1963fb29"
    }
  }
}

On my second node I then run

# consul join 172.20.10.41

Error joining address '172.20.10.41': Unexpected response code: 403 (ACL not found)
Failed to join any nodes.

I also tried adding -token="" to the join command.

If I disable ACLs on node 2, I can join the cluster, but node/service information is not synchronized.

2019/05/03 12:35:26 [WARN] agent: Node info update blocked by ACLs
2019/05/03 12:35:51 [WARN] agent: Coordinate update blocked by ACLs
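
One thing I am unsure about (a guess, not a confirmed fix): the linked guide also applies the token as the agent token at runtime, in addition to the default token in the config; something along these lines, reusing the SecretIDs shown above:

# on node 2
consul acl set-agent-token -token "1e026ae6-8902-eae2-6a18-6b0fb36bbed4" agent "11dfcacf-7eae-a286-f108-990c1963fb29"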

What am I doing wrong?

Maybe there are many things I am doing wrong. If any of you has a beginner-friendly guide for me, I would be very grateful.

Thanks for your time (and sorry for my bad English).

Best regards.

Cluster configuration for multiple data centers in Cassandra

We decided to use Cassandra as our database and now want to run a cluster with two datacenters, DC1 and DC2. DC2 is for backup purposes and is a replica of DC1.
Initially we expect the data size in the DB to be about 2 terabytes, and that size will eventually grow larger.

Since I am new to Cassandra, I need some help here in estimating the server/node configuration for this cluster:

  • How many nodes are enough to handle this data, and what hardware configuration (RAM, disk/storage, number of CPU cores) should each node have?

If I want to increase capacity in the future, is it better to add disks to existing nodes or to add new nodes? Which is right?

I would be grateful if someone could help me with at least rough estimates.
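
For a starting point, a very rough back-of-envelope that is often quoted (the roughly 1 TB of data per node is only a rule of thumb for open-source Cassandra, and replication factor 3 per datacenter is an assumption, not something from this design):

2 TB of raw data x RF 3          ≈ 6 TB stored per datacenter
6 TB / ~1 TB per node            ≈ 6 nodes per datacenter, before headroom for compaction and growth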

Note: We are using OpenSource Cassandra version 3.11