unit testing – Is inversion of control the opposite of the “tell, don’t ask” principle?

First, let me explain what I mean by inversion of control and tell, don’t ask in this context. I have 3 objects: ClassA, ClassB and ClassC. ClassA is a consumer of the target object ClassB.

Inversion of control here means two things:

  • passing required arguments into each method on the target object ClassB
  • assuming that target objects are made up of atomic methods, controlled either by the consumer ClassA or by public methods on ClassB

Tell, don’t ask here means the following scenario:

  • we have a target task ClassC::d(). We can either fetch arguments ClassB::e and ClassB::f in ClassA and inject them into ClassC::d(), or, alternatively, instantiate ClassB inside ClassC. In the latter case, ClassC::d() pulls and modifies the encapsulated ClassB instance and uses it however we deem fit, be it by calling other local methods or by utilizing other classes, as opposed to returning a value after d’s atomic operation

Are these terms mutually exclusive? I got here by comparing both patterns. In pattern 1, I appreciate atomic methods for their testability: I don’t have to bother about complex dependency chains.
However, in pattern 2, TDA (given this understanding of it) is obviously cleaner, and the consumer only ever worries about ClassC. This reveals that pattern 1 equally creates a kind of dependency problem as regards testing. At what point is it acceptable for an action in a class to be responsible for deciding what parameters it runs with?
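
To make the comparison concrete, here is a minimal sketch of both patterns (in Go; all names are placeholders invented for illustration):

package main

// Pattern 1 (inversion of control): the consumer fetches the arguments
// from B and injects them into C1's atomic method.
type B struct{ e, f int }

type C1 struct{}

func (C1) D(e, f int) int { return e + f } // atomic: trivially testable in isolation

// Pattern 2 (tell, don't ask): C2 encapsulates its own B and decides for
// itself what it runs with; the consumer only ever talks to C2.
type C2 struct{ b B }

func (c C2) D() int { return c.b.e + c.b.f }

func main() {
    b := B{e: 1, f: 2}
    _ = C1{}.D(b.e, b.f) // pattern 1: the consumer orchestrates
    _ = C2{b: b}.D()     // pattern 2: the consumer just tells C2 to act
}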

One of my primary motives in all this is to create an environment that enables unit tests and a clear, uncluttered API for consumers.

continuous integration – Clarifying the steps in a CI/CD pipeline, namely whether unit testing should be done while building a Docker image or before

I’m building a Build and Deployment pipeline and looking for clarification on a couple of points. In addition, I’m trying to implement Trunk-Based Development with short-lived branches.

The process I have thus far:

  1. Local development is done on the main branch.

  2. Developer, before pushing to remote, rebases on remote main branch.

  3. Developer pushes to short-lived branch: git push origin main:short_lived_branch.

  4. Developer opens PR to merge short_lived_branch into main.

  5. When the PR is submitted, it triggers the PR pipeline, which has the following stages:

    1. Builds the microservice.
    2. Unit tests the microservice.
    3. If passing, builds the Docker image with a test-latest tag and pushes it to the container registry.
    4. Integration testing with other microservices (still need to figure this out).
    5. Cross-browser testing (still need to figure this out).
  6. If the PR pipeline is successful, the PR is approved, commits are squashed, and merged to main.

  7. The merge to main triggers the Deployment pipeline, which has the following stages:

    1. Builds the microservice.
    2. Unit tests the microservice.
    3. If passing, builds the Docker image with a release-<version> tag and pushes it to the container registry.
    4. Integration testing with other microservices (still need to figure this out).
    5. Cross-browser testing (still need to figure this out).
    6. If passing, deploys the images to the Kubernetes cluster.

I still have a ton of research to do on the integration and cross-browser testing, as it isn’t quite clear to me how to implement it.

That being said, my questions thus far really have to do with the process overall, unit testing and building the Docker image:

  1. Does this flow make sense or should it be changed in any way?

  2. Regarding unit testing and building the Docker image, I’ve read some articles that suggest doing the unit testing during the building of the Docker image, basically eliminating the first two stages in my PR and Deployment pipelines. Some reasons given:

    • You are testing the code and not the containerized code, which is what will actually be run.
    • Even if unit testing passes, the image could be broken, and it will be even longer before you find out.
    • Building on that, it increases the overall build and deployment time. From my experience, the first two stages in my pipelines for a specific service take about a minute and a half. Then building and pushing the image takes another two and a half minutes, about four minutes overall. If the unit tests were incorporated into the Docker build, it could possibly shave a minute or more off the first three stages in my pipeline.

    Would it be bad practice to eliminate the code build and unit testing stages and just move unit testing into the Docker build stage?
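
For reference, the approach those articles describe usually means a multi-stage Dockerfile, where a failing test aborts the image build. A minimal sketch, assuming a Go service; base images and paths are placeholders:

# build stage: compile, then run the unit tests; if a test fails,
# `docker build` fails and no image is produced
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app ./cmd/app
RUN go test ./...

# runtime stage: ships only the compiled binary, not the toolchain or tests
FROM gcr.io/distroless/base-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]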

Thanks for weighing in on this while I’m sorting it out.

usability testing – How do you prototype systems that are normally connected to Active Directory or other complex external systems?

I am working on a product with a quite typical setup for enterprise software: it is usually connected to the organization’s Active Directory, authenticates its users against it, and fetches their group membership information from it. Permissions within the product are assigned to the groups that come from AD. For tiny installations and in test scenarios it is possible to add local users and groups, but in production usage it is almost always integrated with Active Directory.

We are planning to make some pretty significant changes to how permission settings can be made, and the mockups for the changes tested well when local users & groups were used. We would now like to see whether the interface works well in a more realistic scenario, where the product is connected to AD and we have thousands of users and groups.

I was wondering whether you have any experience or insight on how to do user tests in such a situation. Creating and maintaining a fake, internet-facing AD installation seems to be overkill for this purpose, and it would also cause problems during the test, as it would be impossible to connect the real AD to the wireframe we want to test. Creating a mock AD user management interface would also take tons of time and would probably still be quite far from how that UI normally works.

Do you have any experience with this, or more generally with doing wireframe tests of systems that in production are normally connected to large, complex external systems?

testing – Efficiently updating a common repository used by multiple other repositories

Suppose we have a project consisting of many microservices, all of which use a common library. The common library has been put into a separate git repository, and each microservice is also in its own individual git repository.

When the time comes to make a change to the common library, how should that be done? Because all of the microservices use it, it seems it would be necessary to clone all the microservice repositories that were not already cloned locally, update each of them to point to the new version of the common library, publish the new version of the common library locally, and then run all of their tests. And then, in principle, this has to be done on the CI system as well, because otherwise there could be a subtle difference in the local environment that makes the change happen to build OK locally, but not in CI!

If we don’t do this, but simply do the lazy thing and update only the common library and the particular microservice we are working on at the moment, we run the risk that we accidentally break something in another microservice and that this only becomes apparent later when the dependency on the common library gets bumped in the latter microservice. If the common library were an open source project and the microservices depending on it were third party code, we could just say “tough luck – you fix it on your side, or raise a PR to fix it on our side. It’s not our responsibility to babysit your repositories.” But since they are our repositories, they are our responsibility – so we shouldn’t really break them gratuitously with a poorly-thought-out change to the common library.
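
For concreteness, suppose the pinning is done with Go modules (a hypothetical sketch; the module paths are made up):

// go.mod of one microservice: the common library is pinned to an exact
// version, so a new library release breaks nothing here until this line
// is bumped, which is exactly when the breakage would surface.
module example.com/org/orders-service

go 1.22

require example.com/org/common v1.4.2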

However, the approach of testing all common library changes everywhere that I have outlined is laborious – and doesn’t scale particularly well, either. (Imagine if Google had their billions of lines of in-house code, not in a mono-repo but in this kind of setup – how would they be able to make safe changes to shared libraries in a scalable way?)

What sorts of approaches can be used to better manage such updates and make the QA more efficient?

unit testing – How to test variable values due to different database types?

I am working on a program that needs to work on values fetched from different database types: currently we support 12 different database types. My code applies some business logic to the values fetched from the database, a score is calculated, and the values are ordered. Due to differences of 10^-6 between values fetched from different databases, the ordering changes depending on the database type. I have written tests for this business logic.
For production, I believe a difference in ordering due to a difference of 10^-6 is acceptable. (Especially because we say that data integrity and quality are the user’s responsibility for our product.) Also, our tests for fetching values from the databases test up to a 10^-5 precision.
What is the best way to test this? Namely, ordering changes due to small differences between database types.
Note: Two databases that give different results are SQLite3 and MariaDB.
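
For illustration, one option is to compare scores with an explicit tolerance instead of asserting an exact order, so that backend noise of ~10^-6 cannot flip a test. A minimal sketch in Go (names and data are made up; the 10^-5 tolerance comes from the existing fetch tests):

package scoring_test

import (
    "math"
    "testing"
)

const tolerance = 1e-5 // same precision the fetch tests already use

// almostEqual reports whether two scores agree within the tolerance.
func almostEqual(a, b float64) bool {
    return math.Abs(a-b) <= tolerance
}

// Scores that differ by less than the tolerance are treated as ties,
// so a 1e-6 difference between SQLite3 and MariaDB cannot change the outcome.
func TestScoresAgreeAcrossBackends(t *testing.T) {
    sqliteScores := []float64{0.500001, 0.300000}  // hypothetical fetched values
    mariadbScores := []float64{0.500002, 0.300001} // hypothetical fetched values

    for i := range sqliteScores {
        if !almostEqual(sqliteScores[i], mariadbScores[i]) {
            t.Errorf("score %d differs beyond tolerance: %g vs %g",
                i, sqliteScores[i], mariadbScores[i])
        }
    }
}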

java – How to run testing values to get to a result?

I have some variables like A, B, C and D. Each one of them has a specific value. And I have the variable X, whose value I supply. For example:
A has a value of 20
B has a value of 25
C has a value of 27
D has a value of 30.

Those values are static.

And I want to know how many As or Bs, etc., I need to get as close as possible to the value of X. X can be any value the user informs, for example 135.
Then I want the application to calculate the best combination of A, B, C and D (not necessarily all of them) to get as close as possible to X.
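
For what it’s worth, this is the classic change-making / closest-sum search. A brute-force sketch (shown in Go; the same recursion translates directly to Java, and dynamic programming scales better for large X):

package main

import "fmt"

// closest tries every count of each value and keeps the sum nearest to target.
func closest(values []int, target int) int {
    best := 0
    var walk func(i, sum int)
    walk = func(i, sum int) {
        if abs(target-sum) < abs(target-best) {
            best = sum
        }
        if i == len(values) || sum >= target {
            return // adding more can only move further from target
        }
        walk(i+1, sum)         // take no more of values[i]
        walk(i, sum+values[i]) // take one more of values[i]
    }
    walk(0, 0)
    return best
}

func abs(n int) int {
    if n < 0 {
        return -n
    }
    return n
}

func main() {
    fmt.Println(closest([]int{20, 25, 27, 30}, 135)) // prints 135 (e.g. 5 × 27)
}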

Can I use an emulator for malware testing?

I know that virtualization is different from emulation; I was wondering whether an emulator is isolated from its host, so that any destruction inside it will not affect the host, as would be the case with a virtual machine. I also understand that there can be exploits which can be used to escape the emulation, but I just want to know if it is isolated in general.

Idiomatic Golang Unit Testing

Currently I have some code which is structured like this:

type Service struct {
    // some dependencies
}

func (s *Service) FindStuff(ctx context.Context) { // this signature cannot be changed
    // some logic...
    isNew := s.isNewUser(ctx)
    if isNew {
        // call new flow
    } else {
        // call old flow
    }
}

func (s *Service) isNewUser(ctx context.Context) bool {
    value := apiGet("some-endpoint") // package-level call, not an injectable dependency
    // some logic... (condA/condB/condC are placeholders for conditions elided here)
    for {
        for {
            if condA(value) {
                return true
            }
            if condB(value) {
                return false
            }
            if condC(value) {
                return false
            }
        }
    }
}
  • The existing unit tests call FindStuff, which calls isNewUser; the call apiGet("some-endpoint") is mocked.
  • The isNewUser method is used by multiple methods on the Service type.

In the Java world you could create a new class which has a method isNewUser, then pass a mock object of that class as a dependency to Service and mock the call to isNewUser to return either true or false, allowing everything to be tested in isolation.

What is the most idiomatic Go way of testing this?
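
For comparison, the closest Go analogue of the Java approach above is a small interface plus a hand-written fake in the test (a minimal sketch; userChecker and fakeChecker are names invented for illustration):

package service

import (
    "context"
    "testing"
)

// userChecker abstracts the "is this a new user?" decision so tests can
// substitute a canned answer.
type userChecker interface {
    IsNewUser(ctx context.Context) bool
}

// Service depends on the interface; production wiring injects the real
// implementation, which performs the apiGet call.
type Service struct {
    checker userChecker
}

func (s *Service) FindStuff(ctx context.Context) {
    if s.checker.IsNewUser(ctx) {
        // call new flow
    } else {
        // call old flow
    }
}

// In the test file, a tiny fake stands in for the real checker.
type fakeChecker struct{ isNew bool }

func (f fakeChecker) IsNewUser(context.Context) bool { return f.isNew }

func TestFindStuffNewFlow(t *testing.T) {
    s := &Service{checker: fakeChecker{isNew: true}}
    s.FindStuff(context.Background())
    // assert on new-flow behaviour here
}

When only one method needs faking, a function-typed field (checker func(context.Context) bool) is an even lighter-weight alternative to an interface.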