API Design – Retry a failed REST API request – Java / Postgres

We have a REST API that calls a third-party REST API to send e-mail. The third-party API is not very reliable and occasionally fails with a 500.

Our customers do not want to resubmit anything themselves and have instead asked us to build a retry mechanism for failed emails.

We use Spring Retry to implement the Retry and Circuit Breaker patterns, and in the fallback method we store the failed request somewhere (DB vs. file is still an open question).
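
To make the current setup concrete, here is a minimal sketch of the retry/fallback arrangement, assuming plain Spring Retry annotations; EmailRequest, ThirdPartyEmailClient, and FailedRequestStore are placeholder types for illustration, not our actual classes:

import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import org.springframework.web.client.HttpServerErrorException;

// Placeholder types (in our code these would be the real client and store).
record EmailRequest(String url, String headersJson, String body) {}
interface ThirdPartyEmailClient { void send(EmailRequest request); }
interface FailedRequestStore { void save(EmailRequest request, int lastStatus); }

@Service
public class EmailSender {

    private final ThirdPartyEmailClient client;
    private final FailedRequestStore failedRequests;

    public EmailSender(ThirdPartyEmailClient client, FailedRequestStore failedRequests) {
        this.client = client;
        this.failedRequests = failedRequests;
    }

    // Retry transient 5xx failures a few times with a short backoff
    // (assumes @EnableRetry on a configuration class).
    @Retryable(value = HttpServerErrorException.class,
               maxAttempts = 3,
               backoff = @Backoff(delay = 2000))
    public void send(EmailRequest request) {
        client.send(request);
    }

    // Invoked by Spring Retry once the attempts above are exhausted:
    // persist the request so the hourly job can pick it up later.
    @Recover
    public void storeForLater(HttpServerErrorException e, EmailRequest request) {
        failedRequests.save(request, e.getStatusCode().value());
    }
}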

We have a scheduled job that runs every hour, picks up any requests that have exhausted the initial retry attempts, and tries to resend the emails.

My question is: what is a good way to save the failed request?

  1. Save the request as-is, with body, URL, and headers, in a blob/text column in the DB, so that it is easy for the scheduled job to resend it.
  2. Write the failed request to a file somewhere and resend it from there.
  3. Reconstruct the API request from scratch, using the data the customer submitted that is already stored across various database tables (account numbers, user names, URLs), retrieving the API keys and rebuilding the URLs.

We are leaning towards option 3: it requires more development work, but we already have all the data stored and can use it to reconstruct the entire request. Is there anything I am missing, or are there best practices or design patterns I could use here?
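
For comparison, option 1 would roughly amount to the sketch below: keep one row per failed call with the raw URL, headers, and body, and let the hourly job replay it. The type names (FailedRequest, FailedRequestRepository, RawRequestSender), the cron expression, and the column layout are illustrative assumptions only, not our real code:

import java.time.Instant;
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// One row per failed call: enough raw material to replay it verbatim.
record FailedRequest(long id, String url, String headersJson, String body,
                     int attempts, Instant lastTriedAt) {}

// Placeholder persistence / HTTP interfaces (e.g. backed by a failed_request table).
interface FailedRequestRepository {
    List<FailedRequest> findPending();
    void delete(long id);
    void recordAttempt(long id);
}
interface RawRequestSender { void resend(String url, String headersJson, String body); }

@Component
public class FailedRequestResender {

    private final FailedRequestRepository store;
    private final RawRequestSender sender;

    public FailedRequestResender(FailedRequestRepository store, RawRequestSender sender) {
        this.store = store;
        this.sender = sender;
    }

    // Runs at the top of every hour (assumes @EnableScheduling);
    // replays stored requests and drops the ones that succeed.
    @Scheduled(cron = "0 0 * * * *")
    public void resendFailedRequests() {
        for (FailedRequest request : store.findPending()) {
            try {
                sender.resend(request.url(), request.headersJson(), request.body());
                store.delete(request.id());
            } catch (Exception e) {
                store.recordAttempt(request.id());  // keep it for the next run
            }
        }
    }
}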

linux – iptables 1.8.2: Failed to initialize nft: Protocol not supported

I run Debian and am trying to set some firewall rules with iptables, but I only get this error message:

iptables/1.8.2 Failed to initialize nft: Protocol not supported

It does not matter what kind of rule I try to set; I always get the same error. I tried to Google the error without finding anything.

These are the rules I want to set:

iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

Does anyone have an idea? Thanks a lot.

Partition – macOS Catalina – Unable to mount external disk – check/read of the omap object failed: Illegal byte sequence

Hope someone can help.

I have an external hard drive with 3 partitions, and one of them has given up the ghost. I can repair the other two, but the third, my backup, cannot be fixed. I've been digging a bit and here is what I tried. /dev/disk5s3 / /dev/disk6 is the culprit.

I tried to check …

diskutil verifyVolume disk6s1
Started file system verification on disk6s1 Store
Verifying file system
Volume is already unmounted
Performing fsck_apfs -n -x /dev/rdisk6s1
File system check exit code is 78
Restoring the original state found as unmounted
Error: -69845: File system verify or repair failed
Underlying error: 78

But that did not work.

Here is a diskutil list:

/dev/disk5 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *4.0 TB     disk5
   1:                        EFI EFI                     209.7 MB   disk5s1
   2:       Microsoft Basic Data Boot                    100.0 GB   disk5s2
   3:                 Apple_APFS Container disk6         2.9 TB     disk5s3
   4:          Apple_CoreStorage Time Machine            999.9 GB   disk5s4
   5:                 Apple_Boot Boot OS X               134.2 MB   disk5s5

/dev/disk6 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +2.9 TB     disk6
                                 Physical Store disk5s3
   1:                APFS Volume Store                   1.7 TB     disk6s1

A verifyVolume of disk5s3 shows this:

√ /Volumes % diskutil verifyVolume /dev/disk5s3
Started file system verification on disk5s3
Verifying storage system
Performing fsck_apfs -n -x /dev/disk5s3
Checking the container superblock
Checking the space manager
Checking the space manager free queue trees
Checking the object map
error: (oid 0x3038dc) om: invalid o_type (0x40000003, expected 0x4000000b)
error: check/read of the omap object failed: Illegal byte sequence

The error seems kind of strange. I migrated from Mojave to Catalina a few days ago and I am not sure whether the volume still worked after that.

Analyzing the disk5 partition table gives the following:

/Volumes % sudo gpt -r show disk5

Password:
       start        size  index  contents
           0           1         PMBR
           1           1         Pri GPT header
           2          32         Pri GPT table
          34           6         
          40      409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
      409640   195312496      2  GPT part - EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
   195722136      262144         
   195984280  5664860592      3  GPT part - 7C3457EF-0000-11AA-AA11-00306543ECAC
  5860844872  1952862864      4  GPT part - 53746F72-6167-11AA-AA11-00306543ECAC
  7813707736      262144      5  GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
  7813969880           7         
  7813969887          32         Sec GPT table
  7813969919           1         Sec GPT header

As you can see, it is index 3. The other partitions mount perfectly fine.

I found some information from the user Klanomath (https://apple.stackexchange.com/users/93229/klanomath),

but it does not really help me because I am not sure what to do. I do not want to lose the other partitions …

√ /Volumes % sudo dd if=/dev/disk5 bs=512 skip=195984280 count=1 | hexdump
1+0 records in
1+0 records out
0000000 a8 87 a3 75 03 ea 8e e1 01 00 00 00 00 00 00 00
512 bytes transferred in 0.072951 secs (7018 bytes/sec)
0000010 ad b6 02 00 00 00 00 00 01 00 00 80 00 00 00 00
0000020 4e 58 53 42 00 10 00 00 36 dd 34 2a 00 00 00 00
0000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0000040 02 00 00 00 00 00 00 00 95 e8 ff 32 dd 02 40 03
0000050 a3 67 58 81 70 d6 9e 23 b0 97 00 00 00 00 00 00
0000060 ae b6 02 00 00 00 00 00 1c 01 00 00 f4 6c 00 00
0000070 01 00 00 00 00 00 00 00 1d 01 00 00 00 00 00 00
0000080 6a 00 00 00 4c 44 00 00 68 00 00 00 02 00 00 00
0000090 45 44 00 00 07 00 00 00 00 04 00 00 00 00 00 00
00000a0 dd 37 30 00 00 00 00 00 01 04 00 00 00 00 00 00
00000b0 00 00 00 00 64 00 00 00 02 04 00 00 00 00 00 00
00000c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

The interesting bit is of course:

0x40000003, expected 0x4000000b

Any help would be appreciated.

Cheers,

Tom

Design – How do I recover from failed writes?

I am writing the patching logic for a game, and patching changes several files.

package main

import "os"

// UpdateInfo describes a single write: data is written into file at offset at.
type UpdateInfo struct {
    file string
    data []byte
    at   int64
}

// AtomicUpdate applies every update received on ui to the corresponding file.
func AtomicUpdate(ui <-chan UpdateInfo) error {
    for v := range ui {
        // 0o644 so newly created files end up readable
        file, err := os.OpenFile(v.file, os.O_RDWR|os.O_CREATE, 0o644)
        if err != nil {
            return err
        }
        _, err = file.WriteAt(v.data, v.at)
        file.Close()
        if err != nil {
            return err
        }
    }
    return nil
}

ui represents a single atomic set of modifications that must all be applied to keep the state consistent. The consequence of an inconsistent state is that users may need to re-download everything, which is pretty bad but not catastrophic.

The question, then, is: what is a reasonable design for recovering from a failure in AtomicUpdate, where the state may be left inconsistent?

There is also the possibility of an interruption (e.g. by the user) during AtomicUpdate, which does not even allow normal error handling. Should I treat this as a problem, and how can I recover from it?

18.04 – mount: mounting /cow on /root failed: Invalid argument, overlay mount failed

I am using an XPS 13 9365 whose BIOS data was wiped by a friend, who then considered it dead. I tried to boot from a USB boot disk (a current LTS release, 18.04 or so) and got to the language selection / setup menu. I chose noapic via the F6 option. In the BIOS, the SATA and Secure Boot options are set to RAID and Legacy.

But when I try to install, I am dropped into a tiny shell window with the following message:

(initramfs) mount: mounting /cow on /root failed: Invalid argument, failed to mount the overlay

Any ideas?

Try resending data to previously failed sites

Hello again, @Sven
I have some private blog accounts that I imported into SER and to which I wanted to publish an article. It went as planned, except that one of the blogs was unavailable because of technical problems.
Some time later I managed to get the site back online, and I have been waiting for SER to post to this last site as well. That is not happening.
I had enabled "retry to submit to previously failed sites" and set it to 10; after the site came back online I increased the retries to 20. Still, nothing happened.
My question is whether SER will ever post to that last site, the one that was unreachable.
If not, presumably because SER has gone through the specified number of retries and stopped trying: would it be possible to add a timer there, where we can set the time that must pass between two retries?

electrum – Esplora on a local elementsregtest – parse failed: data not consumed entirely when explicitly deserializing

This is about Esplora (the block explorer) and its electrs backend API. Is it possible to run Esplora against just a local elementsregtest?

When I start electrs, this is the error I get back:

DEBUG - Server listening on 127.0.0.1:44224
DEBUG - Running accept thread
INFO - NetworkInfo { version: 180101, subversion: "/Elements Core:0.18.1.1/" }
INFO - BlockchainInfo { chain: "liquidregtest", blocks: 1, headers: 1, bestblockhash: "9cc7c8fb1c8e2e1e8ed184f5e31548eb5859b74e7552ba8841c41aeeb24d0ae3", pruned: false, verificationprogress: 0.334, initialblockdownload: Some(false) }
DEBUG - opening DB at "./db/liquidregtest/newindex/txstore"
DEBUG - 0 blocks were added
DEBUG - opening DB at "./db/liquidregtest/newindex/history"
DEBUG - 0 blocks were indexed
DEBUG - opening DB at "./db/liquidregtest/newindex/cache"
DEBUG - downloading all block headers up to 9cc7c8fb1c8e2e1e8ed184f5e31548eb5859b74e7552ba8841c41aeeb24d0ae3
TRACE - downloading 2 block headers
ERROR - server failed: Error: failed to parse header 000000a021cab1e5da4718ea140d9716931702422f0e6ad915c8d9b583cac2706b2a9000ac20a615d9b0d4df3e3ac2cb7018a07bd314d6bb715a57adead7c03e208b3658890e9d5d01000000012200204ae81572f06e1b88fd5ced7a1a000945432e83e1551e6f721ee9c00b8cc332604b00000000010151
Caused by: parse failed: data not consumed entirely when explicitly deserializing

The command to start electrs is:

cargo run --features liquid --release --bin electrs -- -vvvv --daemon-dir ~/.elements/elements-0.18.1.1/elementsdir/ --daemon-rpc-addr 127.0.0.1:18886 --cookie user:password --network liquidregtest -v

And there actually is an elementsd running on 127.0.0.1:18886 whose chain is liquidregtest (the configuration file contains the line chain=liquidregtest). In fact, I have successfully requested an address with elements-cli and run generatetoaddress, which produced the "9cc7c8fb1c8e2e1e8ed184f5e31548eb5859b74e7552ba8841c41aeeb24d0ae3" you see in the DEBUG output (so the block was created successfully).

Failed to import Bootstrap Paragraphs submodule configuration

I am trying to implement a hero with Xeno Hero in Drupal 8 Bootstrap Paragraphs. An error occurred while attempting to import the submodule configuration. I am not sure I am doing it right, but this is the error I got:

The configuration cannot be imported because it failed validation for the following reasons:
The configuration paragraphs_type.paragraph.xeno_hero.default depends on configurations (field.field.paragraph.xeno_hero.xeno_content, field.field.paragraph.xeno_hero.xeno_invert, field.field.paragraph.xeno_hero.xeno_offset, field.field.paragraph.xeno_hero.xeno_overlay, field.field.paragraph.xeno_hero.xeno_parallax, paragraphs.paragraphs_type.xeno_hero) that will not exist after import.

What I did was a single item import, but I do not know which configuration type I should choose, so I tried entity form display and entity view display as well as field.

Could someone just point me in the right direction or give me clues that I could follow to get it right?