applications – Android app for PGP Verify APK File Signature

GPG signatures are proof that the distributed files were signed by the owner of the signing key.
Attackers cannot create valid signatures without that key (note that they are, however, perfectly able to create valid hashes!).

In order to verify GPG signatures, you need an app that can import the signer’s public key.
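On a desktop, the whole flow looks like the sketch below; it generates a throwaway key and signature purely to demonstrate the import-then-verify steps such an app would need (all names and files here are placeholders, not a real publisher’s key):

```shell
# Use a throwaway GnuPG home so the real keyring is untouched
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# --- publisher side (stand-in): create a key and a detached signature ---
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key demo-publisher@example.org default default never
echo 'pretend this is an APK' > app.apk
gpg --batch --pinentry-mode loopback --passphrase '' \
    --output app.apk.sig --detach-sign app.apk

# --- verifier side: import the public key, then verify the signature ---
gpg --armor --export demo-publisher@example.org > publisher.asc
gpg --import publisher.asc
gpg --verify app.apk.sig app.apk && echo 'signature OK'
```

An Android app offering this feature would need to cover exactly the two verifier-side steps: key import and signature verification.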

Is there an application to verify such files?

Is there an app with such a feature in the Google Play Store? And if not, why not?

magento2.3 – Integration test setup fails The entity ID is incorrect. Verify the ID and try again

I have an existing store with a collection of existing plugins.
I’m trying to set up an integration test to cover some new features, but the test doesn’t get past the installation:

Module 'Manadev_SpecialCategories':
Installing schema...
In EavSetup.php line 296:

  [Magento\Framework\Exception\LocalizedException]
  The entity ID is incorrect. Verify the ID and try again.

I checked the code in the stack trace and I see that this plugin, in its setup script, tries to select the ID from eav_entity_type where entity_type_code is catalog_category.

But in the testing database this table is completely empty, so the setup script fails.
Is there a way to bypass this, or to trigger the Magento seeders, so that the integration tests run smoothly?
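One thing worth checking (an educated guess, not a confirmed fix): if the integration framework reuses a stale test database, the core setup that seeds eav_entity_type may never run before third-party setup scripts. The stock dev/tests/integration/phpunit.xml.dist has a cleanup switch that forces a fresh install on each run:

```xml
<!-- dev/tests/integration/phpunit.xml.dist -->
<php>
    <!-- drop and reinstall the test database before the run,
         so core seed data (entity types etc.) exists again -->
    <const name="TESTS_CLEANUP" value="enabled"/>
</php>
```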

malware – How to verify there is no malicious code in an opensource library?

You’ll need to go through the source code and decide for yourself whether it’s safe for your use or not. If you deem it safe, you can compile the code yourself and deploy it (as the already-compiled code might differ from the source code).

How can I monitor it through some logs (e.g. an access log for incoming connections)? I would like to monitor outbound transfers and see what data has been transferred.

There are different tools to do that. If you want to block the connections, you can use a firewall.

But if you want to monitor the traffic, you can use tools like Wireshark to capture the packets and see what’s incoming and what’s outgoing.
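To the log side of the question: if outbound traffic passes through a proxy or web server that writes an access log, even simple text tools surface suspicious transfers. A toy sketch (the log format, field positions, and hostnames are all invented for illustration):

```shell
# Build a tiny access-log-style sample to work on
cat > access.log <<'EOF'
10.0.0.5 - - [01/Jan/2024:10:00:00] "POST http://collector.evil.example/upload HTTP/1.1" 200
10.0.0.5 - - [01/Jan/2024:10:00:05] "GET http://cdn.example/lib.js HTTP/1.1" 200
10.0.0.5 - - [01/Jan/2024:10:00:09] "POST http://collector.evil.example/upload HTTP/1.1" 200
EOF

# Count destinations that received outbound POSTs (field 5 is the method,
# field 6 the URL in this invented format); prints the targets with counts
awk '$5 == "\"POST" {print $6}' access.log | sort | uniq -c | sort -rn
```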

And as pointed out in the comments by @Steffen, the licence does not guarantee bug-free or malice-free code. What is a bug or an undesired feature for you might be a necessary feature for someone else.

php – How to verify there is no malicious code in an opensource library?

I am planning to use an open-source library in my project instead of developing from scratch. How can I verify that there is no malicious code in the library, and that nobody can access my files through it? Currently, Visual Studio Code has implemented Workspace Trust, and some extensions are disabled even when the extensions are licensed by a trusted source. So I would like to know: are all open-source libraries licensed only after the libraries have been properly verified? If yes, please disregard this question.

Simply put: if I run some code and other code runs in the background that is not in my favour, how can I get alerted? And if I call some function, is there any possibility that it posts data to a different server? If yes, how can I monitor it?

magento2 – How to verify an encrypted module admin configuration from my database?

When a module configuration is saved in encrypted mode, I just see dots in the admin panel, as the image below shows.

[image: the encrypted value displayed as dots in the Magento 2 admin panel]

When I check in the database via Magerun, the result is the one below.

n98-magerun2 config:store:get narvar_accord/narvar_settings/narvar_auth
+-------------------------------------------+---------+----------+------------------------------+
| Path                                      | Scope   | Scope-ID | Value                        |
+-------------------------------------------+---------+----------+------------------------------+
| narvar_accord/narvar_settings/narvar_auth | default | 0        | UmFmYWVsIENvcnJlYSBHb21lcwo= |
+-------------------------------------------+---------+----------+------------------------------+

When I run the same command with the --decrypt parameter, the value is unreadable.

+-------------------------------------------+---------+----------+----------------------------------------------------------+
| Path                                      | Scope   | Scope-ID | Value                                                    |
+-------------------------------------------+---------+----------+----------------------------------------------------------+
| narvar_accord/narvar_settings/narvar_auth | default | 0        | և�2��ڢ�[ |
+-------------------------------------------+---------+----------+----------------------------------------------------------+

How could I see the decrypted value for that configuration?
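As a quick sanity check before fighting with --decrypt, it is worth confirming the stored value is actually a Magento-encrypted payload at all: Magento’s encryptor normally prefixes values with key/cipher markers (e.g. 0:3:...), while the sample value above is plain base64 and simply decodes (shown here only to illustrate the check):

```shell
# A Magento-encrypted value usually looks like "0:3:<base64>"; a bare
# base64 string like this one was probably never encrypted at all.
echo 'UmFmYWVsIENvcnJlYSBHb21lcwo=' | base64 -d
```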

openssl won’t verify certs beyond intermediate CA, error 20 even when using CApath or CAfile

Ultimately, I am trying to configure an OCSP server on Ubuntu 20.04, but I cannot even verify any certs issued by my intermediate CA yet.

I have configured a root CA called ca-root.mydomain.org. I have also configured an intermediate CA called ca-sub.mydomain.org. Finally, there is my future OCSP server, ocsp-server.mydomain.org.

First, I make a self-signed cert ca_root_cert_file. Then I have the ca-root sign a cert for ca-sub.mydomain.org, ca_sub_cert_file. I then create a cert chain pem file “sub-chain.pem”. It contains the sub-ca cert, then the ca-root cert, in that order.

Next, I copy both ca_root_cert_file and ca_sub_cert_file to a “$CA_ROOTS_HASHES_DIR” directory, and copy all the root certs in /etc/ssl/certs there as well. I run the openssl utility c_rehash -v "$CA_ROOTS_HASHES_DIR". I expect I can now use this directory as the argument for the -CApath parameter of openssl verify.

Next, I have the ca-sub sign a cert for ocsp-server.mydomain.org. I then create a cert chain pem file “ocsp_signer_chain.pem”. It contains the ocsp-server cert, the sub-ca cert, then the ca-root cert, in that order. I don’t expect to need this ocsp_signer_chain.pem, but I have it.

I can use openssl verify to verify ca_sub_cert_file:

`openssl verify -verbose -show_chain -CApath "$CA_ROOTS_HASHES_DIR" "$ca_sub_cert_file"`
OK
Chain:
depth=0: C = US, ST = California, L = Pacifica, O = Mydomain, CN = ca-sub.mydomain.org (untrusted)
depth=1: C = US, ST = California, L = Pacifica, O = Mydomain, CN = ca-root.mydomain.org, emailAddress = deft@mydomain.org

But I can’t verify the OCSP server’s cert, ocsp_server_cert_file. I always get error 20 at 0 depth lookup: unable to get local issuer certificate.
I’ve tried -CAfile with sub-chain.pem vs. ocsp_signer_chain.pem vs. -CApath "$CA_ROOTS_HASHES_DIR".
I’ve tried with and without -untrusted "$ca_sub_cert_file":

openssl verify -verbose -show_chain -CApath "$CA_ROOTS_HASHES_DIR" -untrusted "$ca_sub_cert_file" "$ocsp_server_cert_file"
C = US, ST = California, L = Pacifica, O = Mydomain, CN = ocsp-signer.mydomain.org
error 20 at 0 depth lookup: unable to get local issuer certificate
error ocsp.mydomain.org_ocspserver_ocsp-signing.crt: verification failed

What am I doing wrong? I’ve been searching for days, but the answers I’ve found all end with using -CApath or -CAfile.

I’m surprised that even when verifying ca_sub_cert_file, openssl reports “ca-sub.mydomain.org (untrusted)”. I expected that having the cert in CA_ROOTS_HASHES_DIR would make it trusted. :/

linear algebra – unable to verify this Gaussian elimination output

I was trying the following code to obtain the Gaussian elimination matrix (the echelon form, not the reduced echelon form), since Mathematica does not have a built-in function for it. I used the code from this post: https://community.wolfram.com/groups/-/m/t/475750

I know that the Gaussian elimination phase is not unique. Two people can get different outputs. However, one should still be able to obtain one output from the other using the allowed row operations, correct?

The output I get from the above code, I am not able to transform into the output I get by hand (which I also checked with Maple). The output of Maple’s Gaussian elimination, I can transform into the one I got by hand.

So I am not sure what is going on. Here is the code, used as-is from the above link; I just changed the matrix to the one I am using:

(mat = {{2, -1, 3}, {3, 1, -2}, {2, -2, 1}}) // MatrixForm
{lu, p, c} = LUDecomposition[mat]
u = Normal[lu*SparseArray[{i_, j_} /; j >= i -> 1, Dimensions[mat]]]
MatrixForm[u]

[image: Mathematica output showing mat and u in MatrixForm]

The second Matrix above is the Gaussian elimination. I will now show Maple’s output (which agrees with mine, after transformation)

restart
A:=Matrix([[2,-1,3],[3,1,-2],[2,-2,1]]);
LinearAlgebra:-GaussianElimination(A)

[image: Maple output showing A and its Gaussian elimination]

The second matrix above is the Gaussian elimination. We see the first row is the same. Multiplying the last row given by Mathematica by 2/5 gives the last row from Maple’s result. So far so good.

But the second row is the problem. Multiplying the second row given by Mathematica by -5/2 gives 0, 5/2, 5 and not what Maple shows, which is 0, 5/2, -13/2. The last entry is not the same.
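For reference, the by-hand elimination (no row swaps) that reproduces the rows I attribute to Maple above:

```
R2 <- R2 - (3/2)*R1 :  (3,  1, -2) - (3/2)*(2, -1, 3)      = (0, 5/2, -13/2)
R3 <- R3 - R1       :  (2, -2,  1) - (2, -1, 3)            = (0,  -1,   -2)
R3 <- R3 + (2/5)*R2 :  (0, -1, -2) + (2/5)*(0, 5/2, -13/2) = (0,   0, -23/5)
```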

There should be a way to transform Maple’s result into Mathematica’s and vice versa, using the allowed row operations. Correct?

I do not see how to do this.

Is Mathematica’s Gaussian elimination result from the above code correct? If so, what legal row operations can be used to transform it into Maple’s output (which I know is correct, as it agrees with what I obtained by hand)? I also verified Maple’s result using code posted in find-elementary-matrices-that-produce-rref.

Version 12.3

gnupg – Can’t get `gpg --auto-key-retrieve --verify` to work

I am trying to automate the compilation of the newest GCC on my dev machine, and I’d like to automatically verify the signature of the tarball too. However, I can’t get gpg --auto-key-retrieve to work:

gcc# gpg --auto-key-retrieve --verify gcc-11.1.0.tar.xz.sig tarballs/gcc-11.1.0.tar.xz
gpg: Signature made Tue Apr 27 12:39:44 2021 CEST
gpg:                using RSA key 6C35B99309B5FA62
gpg: Can't check signature: No public key

If I manually retrieve the key, it works just fine (and the gpg --verify succeeds as well):

gcc# gpg --recv-keys 6C35B99309B5FA62
gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
gpg: key 6C35B99309B5FA62: public key "..." imported
gpg: Total number processed: 1
gpg:               imported: 1

(default keyserver used in the above is https://keys.openpgp.org:443)

I’ve tried:

  • --keyserver-options auto-key-retrieve,
  • manually specifying the server,
  • auto-key-locate (though I’ve learned this has nothing to do with my use case).

What am I doing wrong?

I’m running gpg version 2.2.19 on Ubuntu 20.04.2 and working on files originally from http://ftp.gnu.org/gnu/gcc/gcc-11.1.0/.
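For completeness, one thing I have since read but not verified (treat it as a guess): on GnuPG ≥ 2.1 the actual network lookups are done by dirmngr, so a keyserver set only on the gpg command line or in gpg.conf may not be the one dirmngr uses. The config split would look like:

```
# ~/.gnupg/gpg.conf
keyserver-options auto-key-retrieve

# ~/.gnupg/dirmngr.conf  (dirmngr performs the actual keyserver traffic)
keyserver hkps://keys.openpgp.org
```

After editing, restart the daemon with `gpgconf --kill dirmngr` so the new setting is picked up.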

authentication – Could blockchain be useful for a protocol to verify content from a trusted publisher in the way I’m thinking of?

The problem with static software whitelisting is that in the real world, employees with versatile jobs need to run unexpected programs. The company sets up a whitelist to limit what programs can run – cool, security! But the next thing you know, someone needs to run something for a time-sensitive task and can’t; deliverables are late, the company loses money, and the whitelist is dead.

I’m thinking of a more dynamic mitigation strategy:

A scenario where we can verify all programs that run on monitored hosts in our network on a wide scale. Let’s say it’s fine for unverified files to be downloaded by some hosts on our network, but if we detect that same file on perhaps 20% of the hosts on our network, we block it unless:

  • we can call out to its origin through some protocol
  • that origin is a publisher we trust (whitelisted)
  • the protocol allows us to verify the authenticity of the hash we
    scanned against the one they published
  • all done in a cryptographically secure way, i.e. HTTPS

You could say files already downloaded via HTTPS, such as JavaScript, are already protected against MITM alteration by HTTPS itself, but what if:

  • The attacker compromises the trusted publisher and falsifies the hash and file they’re publishing, so victims around the world reach out to verify the hash, get a good match, and trust this malicious file.

So I’m imagining another requirement:

Let’s imagine we trust a publisher, but want to prepare for the eventuality that they’re breached.

  • The protocol involves a global publisher blockchain where many trusted publishers maintain a blockchain verifying file hashes.

Is there anything wrong with this scheme? Some vulnerability or logistical issue I’m missing?


In case I wasn’t clear, an example:

  • A nation-state actor hacks Google
  • Google, following a standard API, sends an HTTPS POST to trustedpublishers.org containing the hash of the file they’re about to publish, with a mandatory human personnel validation step to sign off that the file is untainted and secure.
  • trustedpublishers.org forwards this new transaction to Google and every other trusted publisher with a membership in their trust org, who each do the work, similar to the “mining” done with cryptocurrencies, to propagate the change into the blockchain.
  • Google pushes an update to the JavaScript running on Google.com
  • For the first time, an employee of Company C opens Google Chrome; a malicious version of this new JavaScript file is downloaded, and Company C’s anti-virus does some investigation.
  • The user’s host executes the JS; no latency is experienced, nor is the execution of the script halted.
  • Company C’s AV reaches out via HTTPS GET to the API at trustedpublishers.org and also checks with a few endpoints mirrored by members of trustedpublishers.org to make sure everyone agrees with the hash presented.

The hash can’t be validated:

Depending on the network admin’s config choice:

  • The network admin is alerted and the file is immediately blocked from running

or

  • Time passes, 20% of the hosts on Company C’s network have now executed this file
  • Further executions of the file are blocked and the network admin is alerted to investigate and either whitelist the hash or not.
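As a concrete sketch of the AV-side check in the example above (the API URL, endpoint shape, and file name are all invented for illustration; trustedpublishers.org is the question’s hypothetical org, not a real service):

```shell
# Stand-in for the file that just landed on the endpoint
echo 'console.log("new google js")' > downloaded.js

# Hash it locally
hash=$(sha256sum downloaded.js | awk '{print $1}')

# Ask the trust org (and ideally several member-run mirrors) whether this
# hash matches what the claimed publisher committed to the chain
if curl -fsS --max-time 5 "https://trustedpublishers.org/api/v1/hashes/$hash" >/dev/null; then
  echo 'hash validated'
else
  echo 'hash not validated: apply the configured policy (alert/block)'
fi
```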