java – Pattern for syncing databases with undo option

Sounds like a nightmare.

In theory, if you publish the events to a message queue, you can have them processed against the master database and have undo events work fine.

However, with multiple clients all processing transactions, I’m not sure I would trust it not to go out of sync. Really, each client should potentially be getting a lock on all databases before writing to maintain atomicity (this would, I assume, kill any performance advantage).

I think I would go with making the local DBs read-only caches and having all writes processed by an API which writes to the master DB.

This lets you use simple one-way replication from the master, and undos will work fine.

If the local DBs need to see their local writes before a sync arrives from the master, then you have a bigger problem: they are not seeing the updates from other clients either, which they need, so your system is already broken.

This would be a step on the way to getting rid of the local DBs entirely and running everything through the API, which can cache, invalidate caches, and scale better than a central DB.

But I imagine that is the “Big Rewrite” option, which is never palatable.

mariadb – I deleted a folder from /var/lib/mysql and now all of my databases seem to be unaccessible

I was receiving this error when trying to drop a database:

ERROR 1010 (HY000): Error dropping database (can't rmdir './redpopdigital@002ecom', errno: 39 "Directory not empty")

So I went into /var/lib/mysql and just did an rm -rf. I did not know that this would screw with literally every other database.

Now it seems all of my databases are inaccessible.

I tried this as a troubleshooting step:

ubuntu@blainelafreniere:~$ mysqlcheck --repair blainelafreniere -u root -p
Enter password: 
blainelafreniere.wp_commentmeta
Error    : Table 'blainelafreniere.wp_commentmeta' doesn't exist in engine
status   : Operation failed
blainelafreniere.wp_comments
Error    : Table 'blainelafreniere.wp_comments' doesn't exist in engine
status   : Operation failed
blainelafreniere.wp_links
Error    : Table 'blainelafreniere.wp_links' doesn't exist in engine
status   : Operation failed

What’s weird is, all of the data appears to be chilling in /var/lib/mysql… but I can’t access it for some reason?

Is there any hope of recovering the data from /var/lib/mysql or am I completely screwed?

Thanks.

Storing external databases’ data into one

I need to store different datasets coming from a provider, where each dataset has its own release path.
These datasets can be combined to get the full picture of the data available from the provider.

I know from the publisher’s documentation which combinations of versions are allowed.
My pain point is that I need to keep track of the version of each dataset.

Example of data:

“Main” dataset from publisher “ABC” has version “1.0”.
“Ext” dataset from publisher “ABC” has version “release_3”

My schema is as follows:

Provider
ProviderId

Dataset
DatasetId
ProviderId

Version
VersionId
DatasetId

Main
MainId
VersionId

Ext
ExtId
MainId
VersionId

Based on that, is it a problem that the FK “VersionId” in tables “Main” and “Ext” references different records in table “Version”?

I’m afraid that users querying the DB will not expect the Versions to diverge, since the FK name is identical in both tables.
Unfortunately, that’s the reality of the data provided by the publisher.

Is there a better design to accomplish the same result?

NB: It is possible that in the future, datasets from different providers need to be combined.
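One way to keep this model but make the divergence explicit could be to name each foreign key after the dataset whose version it tracks, so no reader assumes the two versions agree. A sketch (all identifiers and types are illustrative, not a prescription):

```sql
-- Sketch: per-dataset version FK names; both still reference Version.
CREATE TABLE Main (
    MainId        INT PRIMARY KEY,
    MainVersionId INT NOT NULL REFERENCES Version (VersionId)
);

CREATE TABLE Ext (
    ExtId        INT PRIMARY KEY,
    MainId       INT NOT NULL REFERENCES Main (MainId),
    ExtVersionId INT NOT NULL REFERENCES Version (VersionId)
);
```

With distinct column names, a query joining Main and Ext shows both versions side by side instead of one ambiguous "VersionId".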

Thanks

Route connections to individual PostgreSQL databases

I have this technical problem. There are PostgreSQL databases “a1“, “a2” … sitting on server “A“. There are also databases “b1“, “b2” … residing on server “B“. Database names do not overlap: if there is a database “xyz” on server “A”, then there is no “xyz” on server “B”, and vice versa.

I need to set up a server “X” which acts as a gateway: it listens for PostgreSQL connections and, depending on which database the connection requests, routes that request to either server “A” or server “B“. Server “X” knows whether any given database lives on server “A” or “B“.

I looked into pgbouncer. It allows per-database mapping, but the pass-through mode is not straightforward.

I wonder whether there is a simpler existing daemon/agent/proxy that allows per-database connection routing.
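For reference, pgbouncer’s per-database mapping lives in the [databases] section of pgbouncer.ini, one line per incoming database name. A sketch (hostnames and ports are placeholders):

```ini
; Sketch of pgbouncer.ini routing: each incoming database name is
; mapped to the backend server that actually hosts it.
[databases]
a1 = host=serverA.example.com port=5432
a2 = host=serverA.example.com port=5432
b1 = host=serverB.example.com port=5432
b2 = host=serverB.example.com port=5432
```

The catch the question already notes: this is connection pooling with authentication handled by pgbouncer itself, not transparent pass-through.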

recovery – Databases missing after exporting server and rebuilding master DB

We had a server running Microsoft SQL Server 2014 databases on Hyper-V. I exported it (with the C: and D: drives) and opened it on another machine.

I could not start the MSSQLSERVER service because it said the master db needed to be rebuilt.

I’ve rebuilt the master db, but now the other databases are gone: I cannot see any databases except master, model, msdb, and tempdb.

I have all the databases (mdf and ldf files) in the folder D:\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA. How do I restore them without losing functionality?
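If the .mdf/.ldf pairs are intact, reattaching them one by one is the usual route. A sketch, not a guaranteed recovery: the database name and file names are placeholders, and the path assumes the standard data folder from the question:

```sql
-- Sketch: reattach one user database from its surviving files.
CREATE DATABASE MyDb
ON (FILENAME = 'D:\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\MyDb.mdf'),
   (FILENAME = 'D:\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\MyDb_log.ldf')
FOR ATTACH;
```

Repeat per database; logins, jobs and anything else that lived in the rebuilt master/msdb would need to be recreated separately.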

sql – Sites with ready-made databases

Are there sites with ready-made databases, with CREATE TABLE statements and INSERTs?
I want to practice some SQL commands on a database that already contains plenty of data, without having to run each insert into the database myself.

databases – How can MongoDB be hacked on a private network

We had MongoDB on a server that listened only on its own interface, but our client required us to move MongoDB to another server. So I first asked them whether the network was private, or whether I should set up a firewall before starting the replica, and then changed the other configurations to point at the new server.
They said it was a private network.

I added the old server’s own private IP, 192.168.0.6, to its bind address and started the replica. I left it running all night, and in the morning I noticed we had been hacked: one of the databases was named README_TO_RECOVER_YOUR_DATA. There is a lot of information about this ransomware, but that’s not what I’m worried about; we had backups and all the stuff, so we could recover most of the data.

The thing that bothers me is HOW. I checked the logs:

 2020-11-06T07:45:51.771+0100 I NETWORK  (conn4033) end connection 176.123.5.15:63894 (5 connections now open)
 2020-11-06T07:45:58.346+0100 I NETWORK  (conn4031) end connection 192.168.0.6:42949 (4 connections now open)
 2020-11-06T07:45:58.346+0100 I NETWORK  (initandlisten) connection accepted from 192.168.0.6:43007 #4035 (5 connections now open)
 2020-11-06T07:45:59.148+0100 I NETWORK  (initandlisten) connection accepted from 176.123.5.15:64089 #4036 (6 connections now open)
 2020-11-06T07:45:59.427+0100 I NETWORK  (initandlisten) connection accepted from 176.123.5.15:64092 #4037 (7 connections now open)
 2020-11-06T07:45:59.878+0100 I COMMAND  (conn4037) dropDatabase database_1 starting
 2020-11-06T07:46:02.041+0100 I NETWORK  (initandlisten) connection accepted from 176.123.5.15:64108 #4038 (8 connections now open)
 2020-11-06T07:46:04.560+0100 I NETWORK  (initandlisten) connection accepted from 176.123.5.15:64141 #4039 (9 connections now open)
 2020-11-06T07:46:06.828+0100 I COMMAND  (conn4037) dropDatabase database_1 finished
 2020-11-06T07:46:06.829+0100 I COMMAND  (conn4037) command database_1 .$cmd command: dropDatabase { dropDatabase: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:58 locks:{ Global: { acquireCount: { r: 3, w: 2,
  W: 1 } }, Database: { acquireCount: { w: 1, W: 1 } }, oplog: { acquireCount: { w: 1 } } } 6950ms
 2020-11-06T07:46:06.829+0100 I COMMAND  (conn4038) dropDatabase database_2 starting
 2020-11-06T07:46:06.903+0100 I NETWORK  (conn4036) end connection 176.123.5.15:64089 (8 connections now open)
 2020-11-06T07:46:07.056+0100 I NETWORK  (conn4037) end connection 176.123.5.15:64092 (7 connections now open)
 2020-11-06T07:46:07.174+0100 I COMMAND  (conn4038) dropDatabase database_2 finished
 2020-11-06T07:46:07.174+0100 I COMMAND  (conn4038) command database_2 .$cmd command: dropDatabase { dropDatabase: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:61 locks:{ Global: { acquireCount: { r: 3, w:
  2, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 4424340 } }, Database: { acquireCount: { w: 1, W: 1 } }, oplog: { acquireCount: { w: 1 } } } 4769ms
 2020-11-06T07:46:07.174+0100 I COMMAND  (conn3983) getmore local.oplog.rs query: { ts: { $gte: Timestamp 1604626286000|594 } } cursorid:72911333337 ntoreturn:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned
 :2 reslen:207 locks:{ Global: { acquireCount: { r: 8 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 6617185 } }, Database: { acquireCount: { r: 4 } }, oplog: { acquireCount: { r: 4 } } } 9624ms
 2020-11-06T07:46:07.174+0100 I NETWORK  (conn4038) end connection 176.123.5.15:64108 (6 connections now open)
 2020-11-06T07:46:07.315+0100 I WRITE    (conn4039) insert READ_ME_TO_RECOVER_YOUR_DATA.README query: { content: "All your data is a backed up. You must pay 0.04 BTC to 15iXDfXsjseSASsm5P8uQMSj5fmLQuHNMn 48 hours for recover it. After 48 hours expiration we will l...", _id: ObjectId('5fa4f12cfcf3b186c639216b') } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 4, w: 4 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2380706 } }, Database: { acquireCount: { w: 3, W: 1 } }, Collection: { acquireCount: { W: 1 } }, oplog: { acquireCount: { w: 2 } } } 2522ms
 2020-11-06T07:46:07.316+0100 I COMMAND  (conn4039) command READ_ME_TO_RECOVER_YOUR_DATA.$cmd command: insert { insert: "README", ordered: true, documents: ( { content: "All your data is a backed up. You must pay 0.04 BTC to 15iXDfXsjseSASsm5P8uQMSj5fmLQuHNMn 48 hours for recover it. After 48 hours expiration we will l...", _id: ObjectId('5fa4f12cfcf3b186c639216b') } ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:80 locks:{ Global: { acquireCount: { r: 4, w: 4 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2380706 } }, Database: { acquireCount: { w: 3, W: 1 } }, Collection: { acquireCount: { W: 1 } }, oplog: { acquireCount: { w: 2 } } } 2522ms
 2020-11-06T07:46:07.316+0100 I NETWORK  (conn4039) end connection 176.123.5.15:64141 (5 connections now open)

How is it possible that a public IP like 176.123.5.15 connected to the database? I know I opened it without any passwords or firewall, but was it exposed externally?
Is it possible that the client’s local network was already compromised?
What should our next steps be? I guess we will add a firewall, but will that be enough?

databases – Why is it a big deal here on this website and on the internet about the “ISP spying on people’s browsing activities”?


Because in the end, ISPs are going to delete all user data as per their own data-retention policies. If everything is eventually going to be deleted, what’s the danger to anyone? Why is it a big deal? I don’t understand.

bash – check whether file1’s databases exist in file2 (presence in file2 means running); if they exist, do not send that list

Check whether each database in file1 exists in file2 (presence in file2 means it is running); if it exists, do not include it in the list that is sent.

Task1:

===========
file1

SDC5 MGVD MGVD-DMV1 —(MGVD is in file2, so do not display this line)
SDC5 MPKK MPKK-DMV1
MSH MFZV MFZV-DMV1
MSH MFSM MFSM-TMST
NSH HDNO HDNO-DEV1 —(HDNO is in file2, so do not display this line)
MSH HDNO HDNO-TMST —(HDNO is in file2, so do not display this line)

file2

KSH MGVD MFZV-DMV1 102703.270517 RUNNING 27-OCT-2020
KSH HDNO HDNO-TMST 102703.270409 RUNNING 27-OCT-2020
KSH MFIT MFIT-TMST 102613.261353 RUNNING 26-OCT-2020
NSH MGDD MGDD-DMV1 102603.261024 RUNNING 26-OCT-2020
KSH MGAY MGAY-DMV3 102603.260802 RUNNING 26-OCT-2020
KSH MGBN MGBN-DMV2 102603.260544 RUNNING 26-OCT-2020
KSH MFFF MFFF-TMST 102603.260453 RUNNING 26-OCT-2020
TSH HDMM HDMM-DMV1 102603.260515 RUNNING 26-OCT-2020

DC SOURCE TARGET
CDC5 MPKK MPKK-DMV1
ASH MFZV MFZV-DMV1
ASH MFSM MFSM-TMST

Send the result by mail, formatted as a coloured table.
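Task 1’s filter could be sketched as a small shell function (a sketch: the function name is invented, and it assumes whitespace-separated columns with the database name in column 2 of both files):

```shell
# Sketch for Task 1: print the lines of the candidate file whose
# column 2 does not appear as column 2 anywhere in the running file.
filter_not_running() {
  # $1 = running list (file2), $2 = candidate list (file1)
  # First pass (NR==FNR) records file2's column-2 values; second pass
  # prints only file1 lines whose column 2 was never seen.
  awk 'NR==FNR {seen[$2]; next} !($2 in seen)' "$1" "$2"
}
```

On the example data, `filter_not_running file2 file1` would print only the SDC5 MPKK, MSH MFZV and MSH MFSM lines, dropping the MGVD and HDNO ones.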

Task 2: If usage of the /backup_DB mount point is greater than 70%, take column 2 of file3 as input and check whether the directories exist both with the -MC suffix (MPKK-MC, MFZV-MC, MFSM-MC) and without it (MPKK, MFZV, MFSM). For each directory that exists, get its size and send a mail. If usage reaches 75%, remove those directories, sending a warning mail before removing.
file3
DC SOURCE TARGET
CDC5 MPKK MPKK-DMV1
ASH MFZV MFZV-DMV1
ASH MFSM MFSM-TMST

Then send a mail, formatted as a coloured table:

Your /backup_DB partition remaining free space is critically low. Used: 77%
120T 92T 29T 77% /backup_DB

DC SOURCE TARGET SIZE
CDC5 MPKK MPKK-DMV1 1.1TB
ASH MFZV MFZV-DMV1 dir does not exist
ASH MFSM MFSM-TMST 10GB

And after removal, send a mail, formatted as a coloured table:

Your /backup_DB partition is under control now. Used: 68%
120T 92T 39T 68% /backup_DB
DC SOURCE TARGET Source backup status
CDC5 MPKK MPKK-DMV1 Source Removed
ASH MFSM MFSM-TMST Source Removed
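The Task 2 thresholds could be sketched as a small shell function (a sketch: the function name and action strings are invented, and the df/du/mail wiring is omitted):

```shell
# Hypothetical sketch of the Task 2 threshold decision.
backup_action() {
  # $1 = used percentage of /backup_DB (integer, no % sign)
  if [ "$1" -ge 75 ]; then
    echo "warn-then-remove"   # 75%+: send warning mail, then remove the dirs
  elif [ "$1" -gt 70 ]; then
    echo "report-sizes"       # over 70%: mail the directory sizes
  else
    echo "ok"                 # under control, nothing to do
  fi
}

# Real usage would feed it the live figure, e.g. (GNU df):
# backup_action "$(df --output=pcent /backup_DB | tail -1 | tr -dc '0-9')"
```

The directory-existence check, `du -sh` sizing and the coloured HTML mail would hang off the two non-ok branches.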

mysql – How to hot merge MariaDb databases from multiple servers into a single one

Current Configuration

I currently have 3 servers:

  • server australia with the database canberra
  • servers belgium and germany are a galera cluster with the database bruxelles.

Target Configuration

My final goal is to merge both databases into a single galera cluster:

  • servers australia, belgium and germany will be a galera cluster with the databases canberra and bruxelles.

Constraints and leeways

australia, belgium and germany are public production servers. They all handle read and write queries from various clients 24/7 (SELECT, INSERT, UPDATE, DELETE), so I cannot allow downtime or read-only mode for any of them.

canberra and bruxelles schemas are obviously distinct, so there is no risk of collision when merging.

canberra and bruxelles schemas are stable (no ALTER), so I can copy the schemas from one server to another at any time, without any need to keep the schemas in sync.

The servers are addressed by clients through DNS aliases:

  • australia is behind pacific.example.com
  • belgium and germany are behind europe.example.com (round-robin handled by HAProxy).

It is easy to add or change DNS aliases (e.g. edit europe.example.com to include australia in the round-robin, or to exclude germany).

I am not sure the galera information is relevant to this particular issue, so any answer that works for 2 servers (say australia and belgium) should translate easily to my setup.

Naive approach

My plan was the following:

  1. Initiate a replication where australia is the master and belgium is the slave.
  2. Have pacific.example.com target germany instead of australia (no more writes on the master; all writes land on the slave side)
  3. Shut down australia
  4. Overwrite australia as a new galera cluster member behind europe.example.com.
  5. Reboot belgium removing slave config
  6. Have pacific.example.com target the galera cluster.

However, I found myself unable to initiate the replication: trying to make belgium a slave of australia results in the destruction of the bruxelles database.
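One likely culprit, stated as an assumption: if belgium is provisioned from a full copy of australia’s data directory, everything else on it gets overwritten. Provisioning instead from a dump of only the canberra schema, and filtering replication to that schema on the slave, should leave bruxelles untouched. A sketch of the slave-side filter (replicate-do-db is a real MariaDB option; the rest depends on your setup):

```ini
# Sketch for belgium's server config: replicate only the canberra
# schema from australia, so the local bruxelles schema is not touched.
[mysqld]
replicate-do-db = canberra
```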

Question

How would you do it?