encryption – Combining confidentiality, authenticity and data integrity to form a secure URL

There are clients who can share links to their profile info, which should be confidential. The link expires in 5 minutes. The client sets an Auth Code, so that whoever gets the link can access the client's data only if they provide the correct Auth Code.

Restrictions:

  1. The link data should be encrypted.

  2. Every client has a different key.

  3. The client can generate the link in offline mode.

I looked at this answer https://security.stackexchange.com/a/63134/238870

But there the same secret key is shared between all clients. Given that the profile in my problem contains sensitive information, I can't rely on that solution.
The solution also assumes the clients are online and that the server knows the IV (I may be misunderstanding it).

What I came up with is combining asymmetric encryption with a digital signature:

Clients encrypt the link data (Auth Code, timestamp, user ID) with the server's public key and sign the data with their private key. The combination of the ciphertext and the signature is the link data.

At the server, the encrypted data is decrypted with the server's private key; from it I get the user ID, by that ID I look up the client's public key, and then I verify the signature.
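
To make the scheme concrete, here is a minimal sketch of the client side using libsodium; the library, the names, and the payload layout are my own assumptions, and key generation, encoding, and error handling are omitted (sodium_init() is assumed to have succeeded):

// Sketch of the proposed scheme (libsodium; illustrative only).
// Client side: seal the payload to the server's public key, then sign
// the resulting ciphertext with the client's signing key.
#include <sodium.h>
#include <string>
#include <vector>

std::vector<unsigned char> make_link_data(
        const std::string& payload,   // Auth Code | timestamp | user ID
        const unsigned char server_pk[crypto_box_PUBLICKEYBYTES],
        const unsigned char client_sk[crypto_sign_SECRETKEYBYTES])
{
    // Anonymous public-key encryption to the server (sealed box).
    std::vector<unsigned char> sealed(crypto_box_SEALBYTES + payload.size());
    crypto_box_seal(sealed.data(),
                    reinterpret_cast<const unsigned char*>(payload.data()),
                    payload.size(), server_pk);

    // Detached signature over the ciphertext.
    unsigned char sig[crypto_sign_BYTES];
    crypto_sign_detached(sig, nullptr, sealed.data(), sealed.size(), client_sk);

    // Link data = ciphertext || signature (base64/QR encoding not shown).
    sealed.insert(sealed.end(), sig, sig + crypto_sign_BYTES);
    return sealed;
}

With this construction the fixed overhead is roughly 48 bytes for the sealed box plus 64 bytes for the Ed25519 detached signature, before any text encoding.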

The problem with my solution is that it's costly (asymmetric encryption), and the link gets too long, which is not very handy when sharing links via QR code, a main feature of the application.

c++ – Dilemma over authenticity of gcov-generated code coverage percentage where unit tests are not technically correct

When I joined my company as a newcomer, I explored the unit test suite of the product code. It uses the gtest framework. But when I checked the tests, they were all testing whole functionality by calling real functions and asserting on the expected output. Below is one such test case as an example:

TEST(nle_26, UriExt1)
{
    // `datafile` and `scan_callBack_26` are defined elsewhere in the suite.
    int threadid = 1;
    // Constructs the real engine from a real config file.
    std::shared_ptr<LSEng> e = std::make_shared<aseng::LSEng>(threadid, "./daemon.conf");
    std::shared_ptr<LSAttrib> attr = e->initDefaultLSAttrib();
    e->setLSAttrib(attr);
    // Loads the actual database (~45 MB).
    std::shared_ptr<DBOwner> ndb = e->initDatabase(datafile, e->getLogger());
    e->loadASData(ndb);
    e->setVerbose();

    std::shared_ptr<NewMessage> m = std::make_shared<NewMessage>(e->getLogger());
    ASSERT_TRUE(m != nullptr);
    ASSERT_TRUE(e != nullptr);
    // Feeds a real message file through the real scanning path.
    m->readFromFile("../../msgs/nle1-26-s1");
    e->scanMsg(m, &scan_callBack_26, nullptr);
    std::map<std::string, std::vector<std::string>> Parts = e->verboseInfo.eventParts;
    std::vector<std::string> uris = Parts["prt.uri"];
    ASSERT_EQ(uris.size(), 2);
    ASSERT_EQ(uris[0], "mailto:www.us_megalotoliveclaim@hotmail.com");
    ASSERT_EQ(uris[1], "hotmail.com");
}

All the tests in the unit test directory follow the same pattern:

  1. Creating and initialising the actual objects
  2. Calling the actual functions
  3. Starting the actual daemon
  4. Loading an actual database of around 45 MB
  5. Sending actual mail to the daemon for parsing by calling the actual scanMsg function, etc.

So all the tests look more like functional tests than unit tests.

But the critical part is that on the official intranet site, the code coverage of this product is reported as 73%, computed using gcov.

Now, code coverage tools like gcov compute coverage based on the following parameters:

  1. How often each line of code executes
  2. What lines of code are actually executed
  3. How much computing time each section of code uses.

Since these tests run the actual daemon, load the real database, and call real functions to scan the message, all three parameters above will register something, so I doubt the coverage will be completely zero.

But the questions bothering me are:

  1. Black-box testing also does functional testing just like this, so what's the difference between the above and a functional test? In black-box testing, testers who are unaware of the internal code write test cases for the functionality specified by the requirements. How are the tests above different from that? And can the gcov-generated coverage for this test suite be trusted, or is it misleading?

  2. The gcov coverage data is based on a test suite whose unit tests are all technically incorrect; does that mean the actual code coverage may even be zero?

  3. In a unit test, we mock function calls using a Google Mock-like framework rather than making the actual calls; the purpose of a unit test is to test the code itself, smallest unit by smallest unit. Since the tests above look more like functional tests, can gcov generate reliable code coverage data from them? (A sketch of the mocking style I mean follows this list.)
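
For contrast with the test above, a unit test in the mocked style would replace the engine's collaborators with fakes. The interface below is hypothetical; the real classes would first need a seam (virtual methods or a template parameter) before they could be mocked at all:

// Hypothetical sketch of the Google Mock style (not the real codebase).
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <string>

// An assumed dependency interface; nothing like this exists in the
// product code as quoted above.
class MsgSource {
public:
    virtual ~MsgSource() = default;
    virtual std::string readFromFile(const std::string& path) = 0;
};

class MockMsgSource : public MsgSource {
public:
    MOCK_METHOD(std::string, readFromFile, (const std::string&), (override));
};

TEST(UriExtraction, UsesCannedMessageInsteadOfRealFiles) {
    MockMsgSource source;
    // No daemon, no 45 MB database: the dependency returns canned data.
    EXPECT_CALL(source, readFromFile(::testing::_))
        .WillOnce(::testing::Return("To: mailto:user@example.com"));

    // In a real test the mock would be injected into the unit under
    // test; here we only demonstrate the mechanism.
    std::string msg = source.readFromFile("any-path");
    EXPECT_NE(msg.find("mailto:"), std::string::npos);
}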

This has been haunting me for the last two days, so I thought I'd put it on the table for the experts.

Awaiting wonderful insights 🙂

Thanks.

development – How to cryptographically verify the authenticity and integrity of Android Studio releases (with gpg?)

For a given Android Studio release published by Google, how can I cryptographically verify the authenticity and integrity of the .tar.gz file that I downloaded before I copy it onto a USB drive and attempt to install it on my laptop?

Today I wanted to download Android Studio, but the download page said nothing about how to cryptographically verify the integrity and authenticity of their release after download.

https://developer.android.com/studio#downloads

I expected to see a message on the download page telling me:

  1. The fingerprint of their PGP release signing key,
  2. A link to further documentation, and
  3. Links to (a) a manifest file (e.g. SHA256SUMS) and (b) a detached signature of that manifest file (e.g. SHA256SUMS.asc, SHA256SUMS.sig, SHA256SUMS.gpg, etc.)

Unfortunately, the only information I found on the download page was how to verify the integrity of the tarball using a SHA-256 checksum from a table on the same page. Obviously, this checks integrity but not authenticity. And it provides no real security, because it is not out-of-band from the .tar.gz itself.
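
For comparison, an authenticity check verifies a detached signature over the checksum manifest against a publisher key obtained out-of-band. As a conceptual sketch only, using libsodium's Ed25519 signatures (Google does not actually publish releases in this form):

// Illustrative only: verify a detached signature over a checksum
// manifest. The publisher's public key must come from somewhere other
// than the download page, or the check proves nothing.
#include <sodium.h>
#include <vector>

bool manifest_is_authentic(const std::vector<unsigned char>& manifest,
                           const unsigned char sig[crypto_sign_BYTES],
                           const unsigned char publisher_pk[crypto_sign_PUBLICKEYBYTES])
{
    return crypto_sign_verify_detached(sig, manifest.data(),
                                       manifest.size(), publisher_pk) == 0;
}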

How can I perform cryptographic integrity and authenticity verification of Google's Android Studio releases?

digital signature – Proving authenticity of a message from a message app in case of deletion

Say you want to prove that you received a certain message from someone.
This can be difficult because many messaging apps (like Facebook Messenger) allow the sender to delete messages on the recipient's end.

Of course, you could screenshot the message, but then you still need to prove 2 things:

  1. The time of the message.
  2. The authenticity of your screenshot.

1 is easy if you upload a hash to public databases, as described in this question: Proving creation time/date of a screenshot
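
Concretely, the commitment in point 1 is just a digest of the image file published somewhere append-only. A minimal sketch using libsodium's SHA-256 (an assumed tool choice; error handling omitted):

// Sketch: hash a screenshot so the digest can be published as a
// timestamped commitment.
#include <sodium.h>
#include <fstream>
#include <iterator>
#include <vector>

std::vector<unsigned char> screenshot_digest(const char* path)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<unsigned char> bytes((std::istreambuf_iterator<char>(in)),
                                     std::istreambuf_iterator<char>());
    std::vector<unsigned char> digest(crypto_hash_sha256_BYTES);
    crypto_hash_sha256(digest.data(), bytes.data(), bytes.size());
    return digest;
}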

2 is a bit more difficult. You'd have to prove somehow that you didn't just forge the screenshot. Note: this is a little different from proving that the other person sent it; it is proving that you received the message from a certain account.

Is there a way to establish proof of this?

How to cryptographically verify the authenticity and integrity of Mozilla releases (Firefox, Thunderbird, etc)

For a given software release published by Mozilla, how can I cryptographically verify the authenticity and integrity of the file that I downloaded before I execute it to install that program?

Today I wanted to download Firefox and Thunderbird for Windows, but the download page said nothing about how to verify the release after download.

I expected to see a message telling me the fingerprint of their release signing key, a link to further documentation, and links to (a) a manifest file (e.g. SHA256SUMS) and (b) a detached signature of that manifest file (e.g. SHA256SUMS.asc, SHA256SUMS.sig, SHA256SUMS.gpg, etc.)

How can I perform this verification with Mozilla products, such as Firefox and Thunderbird?

hash – How can the authenticity of releases on GitHub and GitLab be ensured? Can their hashsums change?

To help ensure the authenticity of packages, some projects on GitHub and GitLab add hashsums to the descriptions of releases on the Releases page. Sometimes, at least here, the hashsums are made part of the release's filename.

However, many projects don't add these hashsums to their releases. They aren't added automatically, and they aren't always posted in a findable way somewhere else on the Web.

I proposed adding hashsums to the Releases page of Kodi, but was told that the hashes of files in the Releases of a GitHub project can change in GitHub's case. Is that really true? Doesn't this undermine the ability to authenticate builds/files distributed this way?

I think ensuring authenticity of builds requires at least:

  • them to be reproducible (reproducible-builds)
  • a mechanism to verify that one has the reproducible software and not something else
  • a mechanism to allow developers to authorize, sign and audit the software/changes to it
  • a mechanism to ensure that the reproducible software is installed – and remains unaltered – and not something else.

I thought that a very easy-to-implement and convenient step towards this, for projects distributed via GitHub or GitLab, would be simply adding hashsums (e.g. the SHA256 hashsum of the tarball or .deb file). What would a more complete mechanism look like? And more importantly: can the hashsums of GitHub releases really change? Why, and wouldn't this mean that the authenticity of releases there can't be ensured? How come some projects add them to the release's description in that case?

cryptography – Methods to Prove Data Authenticity from Potentially Compromised Sources?

I've been thinking about this problem for some time, and I wanted to ask whether there are any known methods, or research papers, about how to prove the "authenticity" or correctness of data originating from a potentially compromised source (remote server, process, etc.). Specifically, imagine you have service A and service B; service B sources data from A but is worried that A has been compromised, such that even if the data is signed by A, B can't trust that it was generated by code written by A's developers. Is it possible for B to prove to itself that data from A is authentic, i.e. that it was indeed generated by the expected code and not injected or generated by an attacker who has compromised A?

One solution I've been thinking about is using a sort of distributed ledger or blockchain, so that multiple nodes compute the same data. Doing so raises the bar: an attacker would have to compromise N% of the services producing the needed data. It naturally provides replication, and I can use an appropriate consensus protocol, but of course it introduces overhead and efficiency concerns, and I would need to think hard about side effects being performed more than once.
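
A minimal sketch of that quorum check, with hypothetical types and libsodium signatures (the consensus protocol itself and the side-effect problem are ignored here):

// Sketch: B accepts a value only when at least `threshold` replicas of
// A, each with its own pinned key, signed exactly the same bytes.
#include <sodium.h>
#include <array>
#include <cstddef>
#include <vector>

struct ReplicaReply {
    std::vector<unsigned char> value;                          // reported data
    std::array<unsigned char, crypto_sign_BYTES> sig;          // replica signature
    std::array<unsigned char, crypto_sign_PUBLICKEYBYTES> pk;  // pinned per replica
};

bool accept(const std::vector<ReplicaReply>& replies,
            const std::vector<unsigned char>& candidate,
            std::size_t threshold)
{
    std::size_t votes = 0;
    for (const auto& r : replies) {
        if (r.value != candidate)
            continue;  // this replica reported different bytes
        if (crypto_sign_verify_detached(r.sig.data(), r.value.data(),
                                        r.value.size(), r.pk.data()) == 0)
            ++votes;   // a valid, matching vote
    }
    // An attacker must now hold `threshold` distinct replica keys.
    return votes >= threshold;
}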

If there is only one node capable of generating the data, such as a sensor node, and it is compromised, I'd imagine all hope is lost; but I also wouldn't be surprised if there is some clever crypto scheme that attempts to solve this problem as well.

I hope it's clear what the question is. Thank you.