Sources of participants for user research and feedback

I’m re-evaluating the sources our UX team uses to recruit participants for user research. These participants are mainly invited to test prototypes and take part in interviews.

Besides the following, I’m curious if there are any other potential sources.

  • End users who are current customers; they joined by signing up for the user engagement program.
  • New employees, since they are still somewhat unbiased
  • Employees from other departments since they are not too close to the product team

microservices – Where to place an in-memory cache to handle repetitive bursts of database queries from several downstream sources, all within a span of a few milliseconds

I’m working on a Java service that runs on Google Cloud Platform and uses a MySQL database via Cloud SQL. The database stores simple relationships between users, the accounts they belong to, and groupings of accounts. Since this is an “accounts” service, it naturally has many downstreams. Downstream service A may, for example, hit several other services B, C, and D, which in turn might call services E and F; but because so much is tied to accounts (checking permissions, getting user preferences, sending emails), every service from A to F ends up hitting my service with identical, repetitive calls. In other words, a single call to some endpoint might result in 10 queries to get a user’s accounts, even though that information obviously doesn’t change over a few milliseconds.

So where is it appropriate to place a cache?

  1. Should downstream service owners be responsible for implementing a cache? I don’t think so – why should they have to know the details of my service’s data, like what can be cached and for how long?

  2. Should I put an in-memory cache in my service, like Guava’s CacheLoader (com.google.common.cache), in front of my DAO (see the sketch after this list)? But does this really provide anything over MySQL’s own caching? (Admittedly I don’t know much about how databases cache, but I’m sure that they do.)

  3. Should I put an in-memory cache in the Java client? We use gRPC, so we have generated clients that all those services A, B, C, D, E, and F already use. Putting a cache in the client means they can skip making outgoing calls, but only if that service has made the same call before and the data has a long enough TTL to be useful, e.g. an account’s group is permanent. So, yeah, that doesn’t help at all with the “bursts,” not to mention that the caches would live in instances in different zones. (I haven’t customized a generated gRPC client yet, but I assume there’s a way.)
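
For concreteness, here’s roughly what I have in mind for #2 – a minimal sketch using Guava’s LoadingCache in front of a hypothetical DAO (AccountsDao, Account, and the 5-second TTL are placeholders, not measured choices):

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class CachedAccountsReader {
    private final AccountsDao dao;

    // Keyed by user ID; the TTL is tiny on purpose – just long enough to
    // absorb a burst, so staleness is bounded to a few seconds.
    private final LoadingCache<String, List<Account>> accountsByUser =
            CacheBuilder.newBuilder()
                    .maximumSize(10_000)
                    .expireAfterWrite(5, TimeUnit.SECONDS)
                    .build(new CacheLoader<String, List<Account>>() {
                        @Override
                        public List<Account> load(String userId) {
                            // One database hit per key per TTL window.
                            return dao.findAccountsByUser(userId);
                        }
                    });

    public CachedAccountsReader(AccountsDao dao) {
        this.dao = dao;
    }

    public List<Account> getAccounts(String userId) {
        return accountsByUser.getUnchecked(userId);
    }
}

One property that seems relevant to the “burst” case: if several requests for the same key arrive concurrently, a LoadingCache runs load() only once and the other threads wait for that result, so a burst of identical lookups collapses into a single query.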

I’m leaning toward #2 but my understanding of databases is weak, and I don’t know how to collect the data I need to justify the effort. I feel like what I need to know is: How often do “bursts” of identical queries occur, how are these bursts processed by MySQL (esp. given caching), and what’s the bottom-line effect on downstream performance as a result, if any at all?
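
The closest I’ve found for actually measuring the duplication is MySQL’s performance_schema, which (if I read the docs correctly) normalizes statements and aggregates them by digest, so a disproportionately high execution count on the accounts lookup would at least confirm the repetition:

-- Requires performance_schema (enabled by default since MySQL 5.6).
-- Identical statements that differ only in literal values share a digest.
SELECT DIGEST_TEXT,
       COUNT_STAR            AS executions,
       SUM_TIMER_WAIT / 1e12 AS total_seconds  -- timers are in picoseconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY COUNT_STAR DESC
LIMIT 10;

It wouldn’t show me the bursts’ timing, though, only totals.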

I feel experience may answer this question better than finding those metrics myself.

Asking myself, “Why do I want to do this, given no evidence of any bottleneck?” Well, (1) it just seems wrong that there are so many duplicate queries, (2) it adds a lot of noise to our logs, and (3) I don’t want to wait until we scale to find out that it’s a deep issue.

audio – How to patch sources in ffmpeg?

I’m sure this has been covered before, but it’s not making sense to me.


If I want to capture the far right of 3 screens (3 wide, 1 high) at 30fps, all of which are 1920×1080, I do this:

ffmpeg \
    -f x11grab -r 30 -s 1920x1080 -i :0.0+3840,0 \
    test.mkv

(based loosely on this)

That works, and produces a silent video. So far, so good.


Now I want to add a mono soundtrack to it, taken from channel 3 of a 32-channel USB interface, so, as a starting point, I do this:

ffmpeg \
    -f alsa -ac 32 -i plughw:CARD=XUSB,DEV=0 \
    -f x11grab -r 30 -s 1920x1080 -i :0.0+3840,0 \
    test.mkv

(based loosely on this)

I imagine that would give me a video file with 32 uncompressed audio tracks. Once I see that working, I could add one more line to the command to filter out just the channel I want, and then another line or two to compress the audio and video. But as it is, it still gives me a silent video, and a bunch of “ALSA buffer xruns” messages in the terminal while it’s running.
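
For reference, this is the kind of thing I was imagining for the later channel-extraction step (untested; I’m assuming ffmpeg’s pan filter is the right tool here, and that channel 3 is index c2 since channels are zero-indexed):

ffmpeg \
    -f alsa -ac 32 -i plughw:CARD=XUSB,DEV=0 \
    -f x11grab -r 30 -s 1920x1080 -i :0.0+3840,0 \
    -filter_complex '[0:a]pan=mono|c0=c2[aud]' \
    -map '[aud]' -map 1:v \
    test.mkv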

I can’t re-patch the hardware to channel 1 (cropped screenshot shown) because channels 1&2 are a stereo pair for a different, simultaneous use, and that receiving app only cares about 1&2. So the broadcast must go there, and I need to pick channel 3 or higher to be the mono soundtrack of the additional recorded video.

(cropped screenshot of the channel patching)

I can’t use the broadcast app to record this, because the broadcast needs to be different from the recording, and that app only does one stream. If it wasn’t already tied up, then I could use it for this recording (with the audio patched to 1&2), and it would be dead simple.

But since all the components of the recording already exist, I figured I could just add some lines to the startup script to pull it all together behind the scenes. When the event is done, “Oh look! There’s the recording too! And it’s different from the broadcast, just like we wanted.”

I can’t imagine that no one has documented this particular use as a working example, albeit with possibly different numbers, but I can’t seem to find it.

My specific case is a meeting with some remote participants, with the broadcast feeding the remote people without looping their feeds back to them, and the recording needs to include everyone.
But I can see a nearly identical configuration used for gaming or software demonstrations, etc.


Recording audio alone does work, using arecord:

arecord \
    --device=plughw:CARD=XUSB,DEV=0 --channels=32 --file-type=wav --format=S32_LE --rate=48000 \
    test.wav

That gives me a 32-track wav file, all of which is correct according to Audacity.
(That’s the only format this interface supports – it just is what it is.)

So that gives me a little bit of reassurance that it can work somehow. I just can’t seem to find a decent example to take channel 3 or higher as the mono soundtrack to a separate video source.

cryptography – Methods to Prove Data Authenticity from Potentially Compromised Sources?

I’ve been thinking about this problem for some time, and I wanted to ask if there are any known methods, or research papers, about how to prove the “authenticity” or correctness of data originating from a potentially compromised source (remote server, process, etc.). Specifically, what I’ve been imagining is this: say you have service A and service B, and B sources data from A but is worried that A has been compromised, such that even if data is signed by A, B can’t trust that it was generated by code written by A’s developers. Is it possible for B to prove to itself that data from A is authentic – that it was indeed generated by the expected code and not injected or generated by an attacker who has compromised A?

One solution I’ve been thinking about is using a sort of distributed ledger or blockchain, so that multiple nodes compute the same data. Doing so raises the bar: an attacker would have to compromise N% of the services producing the needed data. It also naturally provides replication, and I can use an appropriate consensus protocol, but of course it introduces overhead and efficiency concerns, and I would need to think hard about side effects being performed more than once.
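
To make the acceptance rule concrete, here is a toy sketch of the quorum idea (not a real consensus protocol; SignedResult, the key distribution, and the signature algorithm are all placeholders):

import java.security.PublicKey;
import java.security.Signature;
import java.util.*;

final class QuorumCheck {

    record SignedResult(byte[] payload, byte[] signature, PublicKey signer) {}

    // Accept a payload only if at least `quorum` distinct replicas produced
    // a byte-identical payload with a valid signature over it.
    static Optional<byte[]> accept(List<SignedResult> results, int quorum) throws Exception {
        Map<String, Set<PublicKey>> votes = new HashMap<>();
        for (SignedResult r : results) {
            Signature sig = Signature.getInstance("SHA256withECDSA");
            sig.initVerify(r.signer());
            sig.update(r.payload());
            if (!sig.verify(r.signature())) continue; // drop forged or corrupted results
            String key = Base64.getEncoder().encodeToString(r.payload());
            votes.computeIfAbsent(key, k -> new HashSet<>()).add(r.signer());
        }
        return votes.entrySet().stream()
                .filter(e -> e.getValue().size() >= quorum)
                .map(e -> Base64.getDecoder().decode(e.getKey()))
                .findFirst();
    }
}

A compromised minority can then sign whatever it likes and still lose the vote, which is the “compromise N% of the services” property I’m after.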

If there is only one node capable of generating the data, such as a sensor node, and it is compromised, I’d imagine all hope is lost – but I also wouldn’t be surprised if there is some clever crypto scheme that attempts to solve this problem as well.

I hope it’s clear what the question is. Thank you.

RED HOT! PERSONALIZED ASTROLOGY OFFER – WORKS ON ALL TRAFFIC SOURCES – WEEKLY SUBSCRIPTION

Hello everyone,

We’re throwing a party and all of you are invited to join!

About 2 months ago we launched the first serious astrology offer on Clickbank and it’s going up like a rocket
(screenshot: payout graph)
Source: https://cbengine.com/id/codestiny-graphs

We’re now #12 in the category and we’re just getting started

Why would you care and why are our payouts constantly increasing (see screenshot above)?

Astrology is a huge market that’s growing rapidly

(chart: annual astrology market spending)

Yes, the market almost doubles every year and that’s people who spend money on this


We target ALL signs with a personalized reading

And yes, we provide custom links so you too can target individual signs and send your traffic directly to a reading customized for that sign – you can imagine what that does to your EPC and conversion rate!

This offer is clean as a whistle and works on all traffic sources – no need to worry about your Facebook account when promoting us!

Guys and gals, over 90% of Americans know their sign – that’s how big this market is
And it’s a GLOBAL MARKET
But now to the most important question…

HOW MUCH $$$ CAN YOU MAKE?

We pay 75% on everything, with higher rates for true super affiliates (not for wannabes)

a) Our front-end is priced very low at $7. BUT: This is a weekly subscription and yes, you get paid your 75% on the recurring! On average our clients pay for 3-4 weeks

b) We have 3 finely tuned upsells at $17, $27 and $47 that convert well, and our average order value is currently $15.26 (and growing)

c) We also have 2 order bumps on our order page which convert well without affecting our upsell take rate

Higher conversion rates

a) Unlike other merchants we don’t just drop our product on the market and ignore it.
We are actively driving traffic to the offer ourselves. Why would you care?

Because we have money on the line and we make damn sure to get the highest conversion possible. Check our site and subscribe to our updates on Clickbank and you’ll constantly see us running A/B split tests to improve our sales message.

b) We are one of the few merchants who don’t force visitors to give their emails, milking your leads while you suffer from a lower conversion rate. In fact, we have reduced the data input from the customer to a minimum, because we all know how sensitive people are about their privacy these days.

c) We’re using custom technology to auto-play video with sound on all devices for our initial presentation. No click = higher conversions

Product & Support

a) The highest conversions are useless if buyers get a low-quality product. That won’t happen with our offer. The astrological forecast the customer receives is done by professionals – value that shows in a low refund rate of only 5%, despite our aggressive billing.

b) We care for our clients and you, our affiliates. You have a question, we answer. You want to know what promotional methods work? We’ll help you out as best as we can. We know that your success is our success so let’s make some money together!

And here’s where to find us on Clickbank + our contact details

(screenshot: our Clickbank listing and contact details)

PS: We have this offer available in Spanish and Brazilian Portuguese as well!!


Unit Testing a class that requests data from multiple sources

Context

I’m working on a project that pulls data from AWS using the various AWS SDKs for .NET. This specific example deals with the AWSSDK.IdentityManagement SDK.

The goal is to query information from IAmazonIdentityManagementService and map it to a model that is useful to the business domain I’m working in.

I’ve been tasked with writing unit tests for the IamService class.

Problem

With the unit-test setups being so verbose, I can’t help but think the method I’m testing (GetIamSummaryAsync) must be constructed poorly.

I’ve googled around for things like “design patterns for mapping multiple data sources to single objects”, but the only advice I’ve seen is to use the Adapter or Proxy patterns, and I’m not sure how to apply them to this scenario.

Question

  • Is there a better way I could construct my IamService class to make it easier (more succinct) to test?
  • If the Adapter or Proxy patterns are appropriate for this type of scenario, how would they be applied?
public class IamService : IIamService
{
    private readonly IAmazonIdentityManagementService _iamClient;

    public IamService(IAmazonIdentityManagementService iamClient)
    {
        _iamClient = iamClient;
    }

    public async Task<IamSummaryModel> GetIamSummaryAsync()
    {
        var getAccountSummaryResponse           = await _iamClient.GetAccountSummaryAsync();
        var listCustomerManagedPoliciesResponse = await _iamClient.ListPoliciesAsync();
        var listGroupsResponse                  = await _iamClient.ListGroupsAsync();
        var listInstanceProfilesResponse        = await _iamClient.ListInstanceProfilesAsync();
        var listRolesResponse                   = await _iamClient.ListRolesAsync();
        var listServerCertificatesResponse      = await _iamClient.ListServerCertificatesAsync();
        var listUsersResponse                   = await _iamClient.ListUsersAsync();

        IamSummaryModel iamSummary = new IamSummaryModel();

        iamSummary.CustomerManagedPolicies.Count = listCustomerManagedPoliciesResponse.Policies.Count;
        iamSummary.CustomerManagedPolicies.DefaultQuota = getAccountSummaryResponse.SummaryMap["PoliciesQuota"];

        iamSummary.Groups.Count = listGroupsResponse.Groups.Count;
        iamSummary.Groups.DefaultQuota = getAccountSummaryResponse.SummaryMap["GroupsQuota"];

        iamSummary.InstanceProfiles.Count = listInstanceProfilesResponse.InstanceProfiles.Count;
        iamSummary.InstanceProfiles.DefaultQuota = getAccountSummaryResponse.SummaryMap["InstanceProfilesQuota"];

        iamSummary.Roles.Count = listRolesResponse.Roles.Count;
        iamSummary.Roles.DefaultQuota = getAccountSummaryResponse.SummaryMap["RolesQuota"];

        iamSummary.ServerCertificates.Count = listServerCertificatesResponse.ServerCertificateMetadataList.Count;
        iamSummary.ServerCertificates.DefaultQuota = getAccountSummaryResponse.SummaryMap["ServerCertificatesQuota"];

        iamSummary.Users.Count = listUsersResponse.Users.Count;
        iamSummary.Users.DefaultQuota = getAccountSummaryResponse.SummaryMap["UsersQuota"];

        return iamSummary;
    }
}

Where the class IamSummaryModel is defined as:

public sealed class IamSummaryModel
{
    public ResourceSummaryModel CustomerManagedPolicies { get; set; } = new ResourceSummaryModel();
    public ResourceSummaryModel Groups { get; set; } = new ResourceSummaryModel();
    public ResourceSummaryModel InstanceProfiles { get; set; } = new ResourceSummaryModel();
    public ResourceSummaryModel Roles { get; set; } = new ResourceSummaryModel();
    public ResourceSummaryModel ServerCertificates { get; set; } = new ResourceSummaryModel();
    public ResourceSummaryModel Users { get; set; } = new ResourceSummaryModel();
}

public sealed class ResourceSummaryModel
{
    public int Count { get; set; }
    public int DefaultQuota { get; set; }
}

The problem I’m facing is that my unit tests turn into a mass of code in the Arrange section. I have to mock every call I make to each AWS SDK client method.

Example Unit Test

[Fact]
public async Task GetIamSummaryAsync_CustomerManagerPolicies_MapToModel()
{
    // Arrange
    var iamClientStub = new Mock<IAmazonIdentityManagementService>();
    
    iamClientStub.Setup(iam => iam.ListPoliciesAsync(It.IsAny<CancellationToken>()))
        .Returns(Task.FromResult(
            new ListPoliciesResponse()
            {
                Policies = new List<ManagedPolicy>()
                {
                    new ManagedPolicy(),
                    new ManagedPolicy()
                }
            }
        ));

    // Lots of other mocks, one for each dependency
    
    var sut = new IamService(iamClientStub.Object);

    // Act
    var actual = await sut.GetIamSummaryAsync();

    // Assert
    Assert.Equal(2, actual.CustomerManagedPolicies.Count);
}
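
So far the only mitigation I’ve come up with is a hypothetical helper that pre-stubs every SDK call GetIamSummaryAsync depends on with an empty response, so each test only overrides the one call it cares about – but that feels like treating the symptom rather than fixing the design:

private static Mock<IAmazonIdentityManagementService> CreateDefaultIamClientStub()
{
    var stub = new Mock<IAmazonIdentityManagementService>();

    // GetIamSummaryAsync reads these quota keys unconditionally,
    // so the default summary map must contain them.
    var summaryMap = new Dictionary<string, int>
    {
        ["PoliciesQuota"] = 0,
        ["GroupsQuota"] = 0,
        ["InstanceProfilesQuota"] = 0,
        ["RolesQuota"] = 0,
        ["ServerCertificatesQuota"] = 0,
        ["UsersQuota"] = 0
    };

    stub.Setup(iam => iam.GetAccountSummaryAsync(It.IsAny<CancellationToken>()))
        .ReturnsAsync(new GetAccountSummaryResponse { SummaryMap = summaryMap });
    stub.Setup(iam => iam.ListPoliciesAsync(It.IsAny<CancellationToken>()))
        .ReturnsAsync(new ListPoliciesResponse());
    stub.Setup(iam => iam.ListGroupsAsync(It.IsAny<CancellationToken>()))
        .ReturnsAsync(new ListGroupsResponse());
    stub.Setup(iam => iam.ListInstanceProfilesAsync(It.IsAny<CancellationToken>()))
        .ReturnsAsync(new ListInstanceProfilesResponse());
    stub.Setup(iam => iam.ListRolesAsync(It.IsAny<CancellationToken>()))
        .ReturnsAsync(new ListRolesResponse());
    stub.Setup(iam => iam.ListServerCertificatesAsync(It.IsAny<CancellationToken>()))
        .ReturnsAsync(new ListServerCertificatesResponse());
    stub.Setup(iam => iam.ListUsersAsync(It.IsAny<CancellationToken>()))
        .ReturnsAsync(new ListUsersResponse());

    return stub;
}

Each test then starts with var iamClientStub = CreateDefaultIamClientStub(); and re-setups only the method under test, which shrinks the Arrange section but doesn’t address the underlying shape of the class.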

pulseaudio – Ubuntu Studio Controls: How to add/edit/remove pulse bridge capture sources?

I have a Scarlett 18i20 USB audio interface. It has 8 analog physical inputs on the device.

In Ubuntu Studio Controls, the device is called system, and it displays 20 capture sources. In the Pulse Bridging tab, these sources are available by way of a drop-down box. The drop-down items are grouped into pairs. (i.e.: 1 and 2, 3 and 4, 5 and 6, and so on):

(screenshot of the bridge capture sources drop-down)

USC does not allow me to pick a single source for an input bridge. It seems to assume that all capture sources come in stereo left/right pairs, but this is not the case: each source corresponds to a physical input connector on the device.

In some cases I might need a single capture source (i.e.: a single monophonic mic) patched to left/right input channels, but in other cases I may need numerous sources all pointing to one input bridge (i.e.: multiple mics).

I can edit the connections to my liking in a Carla project file, but I cannot configure a series of preconfigured bridges directly from USC. I have no choice but to select no connection in the drop-down box, which means my audio capture does not function without loading a project file.

Can these drop-down items be edited? If so, where/how?

How did USC decide to pair these sources? Is there a configuration file, or is it somehow dynamically generated?

opengl – GLSL: How can I optimize this lighting (fragment) shader? Basic 2D game, 30+ light sources cause significant frame loss

Switching to deferred shading would be the best solution with this many lights (https://en.wikipedia.org/wiki/Deferred_shading).

“I am using a deferred rendering process”

vec4 pixel = texture2D(LastPass, gl_TexCoord[0].xy);

That is not proper deferred rendering.

You’re supposed to accumulate all the light values by drawing additively into a lighting buffer (letting the GPU’s memory/cache subsystem do the accumulation), and when that is complete, combine the diffuse and lighting buffers in one final pass.

And draw each light as a quad/triangle over the lighting buffer, covering only its visible radius.
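
As a sketch of the blend state involved (plain OpenGL; lightingFbo is an assumed framebuffer object with the lighting texture attached):

/* Accumulate all lights additively into the lighting buffer. */
glBindFramebuffer(GL_FRAMEBUFFER, lightingFbo);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);       /* start from black: no light       */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);        /* dst = dst + src: lights just add */
/* ... draw one small quad per light, sized to its radius, running the
   per-light attenuation shader ... */
glDisable(GL_BLEND);
/* Final pass: sample the diffuse and lighting buffers and combine them. */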

But without re-engineering the whole pipeline, the first thing to do would be to remove some of the divisions and use pre-calculated reciprocals.

Transform:

attenuation = (attenuation - light.falloff) / (1 - light.falloff);

Into:

attenuation = (attenuation - light.falloff) * light.one_minus_falloff_inverse;

By pre-calculating one_minus_falloff_inverse as 1.0f/(1 - light.falloff) on the CPU.

Similarly, turn d/light.radius into d*light.radius_inverse.
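
For instance, on the CPU side (field names chosen to match the shader code above):

typedef struct {
    float falloff;
    float one_minus_falloff_inverse;  /* 1.0f / (1.0f - falloff) */
    float radius;
    float radius_inverse;             /* 1.0f / radius           */
} Light;

void precompute_light(Light *l) {
    /* Done once per light on the CPU, instead of per fragment on the GPU. */
    l->one_minus_falloff_inverse = 1.0f / (1.0f - l->falloff);
    l->radius_inverse            = 1.0f / l->radius;
}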

And this is a red herring:

    //This line runs VERY slow when there are many light sources.
    finalPixel += (diffusepixel * ((vec4(light.color, 0.4) * attenuation)));

Commenting out this line makes everything faster because the shader compiler then eliminates the entire loop and removes one texture lookup: once that line is gone, nothing in the calculation contributes to the output.

See “dead code elimination” (https://en.wikipedia.org/wiki/Dead_code), an optimization performed by compilers.