script – What are the downsides to enabling potentially suboptimal or unused opcodes in a future soft fork?

It appears to me that there are various ways to build covenants and vaults with opcodes and sighash flags that are not yet enabled in Bitcoin (e.g. OP_CHECKTEMPLATEVERIFY, SIGHASH_ANYPREVOUT, OP_CAT).

Assuming these are considered for the next soft fork in Bitcoin (not a certainty), what downsides are there to just enabling all of them and seeing what people build with them?

Obviously taking up a reserved OP_NOP with a potentially suboptimal opcode is one downside. And in the worst case, a botched opcode could mean that a UTXO might not be able to be spent (or could impose unacceptable verification costs on full nodes). Are there any other potential downsides?

I suppose this is partly answered by whatever motivated Satoshi to disable a lot of opcodes in 2010, which I’m not clear on. The motivation appears to have been security, but again I’m not clear on exactly what attacks were possible with which opcodes, or how severe those attacks were.

monetization – Creating an online game with a max $20 investment to potentially win much more. Want as little regulation as possible

Creating an online game where the maximum entry is $20, which is pooled among the players; over a series of stages, players are eliminated until one person is left, who takes most of the pot. The T&C of the game will state that the most the creators (us) are liable for is your initial investment. I guess it’s analogous to a poker tournament.

I want as little regulation as possible. Of course the game will be above-board and fair, but I want the government out of it as much as possible.

Is there a particular jurisdiction/country that would be best for incorporating such a business and hosting these types of games?

Which remote desktop protocol to choose for a potentially malicious server (linux)

I am trying to “outsource” potentially dangerous applications such as web browsing to a separate Linux machine that sits in its own network segment and is isolated from our internal network by a rigorous firewall; in effect, I am trying to build a “remote-controlled browser”. Since I am in the early planning phase, I wonder which remote protocol is the best choice for accessing such a machine. I have to deal with a potentially malicious server and I want to protect the client (Windows or Linux) which accesses it.

Which remote control protocol would you recommend for a small attack surface? At the moment I can think of:

  • RDP
  • VNC
  • SPICE (from the Proxmox hypervisor)
  • NX (NoMachine)
  • X2Go
  • Xpra via HTML5

It is clear that the more lightweight a protocol is, the more suitable it is. However, I would prefer to also be able to stream video and audio over it (which might rule out some protocols).

blockchain – Can you potentially mine for anything?

This is more of a general “blockchain” question and not specific to Bitcoin, so if I’m off topic here I would appreciate it if you could direct me to a more relevant Stack Exchange site.

On PoW-based blockchains, participants mine for unique hashes with a specific number of leading zeros etc. (there’s probably more to it but it’s irrelevant to my question).
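For reference, my mental model of that leading-zeros check is something like this toy sketch (written in Java only because it’s compact; I know real Bitcoin mining compares the hash against a full 256-bit difficulty target rather than literally counting zero bytes):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ToyPow {
    // Count how many leading zero bytes the SHA-256 hash of the input has.
    static int leadingZeroBytes(byte[] data) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(data);
        int zeros = 0;
        while (zeros < hash.length && hash[zeros] == 0) {
            zeros++;
        }
        return zeros;
    }

    public static void main(String[] args) throws Exception {
        String blockData = "some block contents"; // stand-in for a real block header
        int difficulty = 2;                       // required number of leading zero bytes
        long nonce = 0;
        // "mining" = trying nonces until the hash meets the difficulty requirement
        while (leadingZeroBytes((blockData + nonce).getBytes(StandardCharsets.UTF_8)) < difficulty) {
            nonce++;
        }
        System.out.println("Found nonce: " + nonce);
    }
}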

But I’m wondering whether a blockchain could ask participants to mine for something else. Think of it as a game, a challenge. Let’s say a string of characters (bytes, a file whatever..) “hidden” inside the blockchain. Mining for that string should (as in: that’s the challenge) be as hard as mining for these unique hashes.

My questions are the following:

  1. Is it possible?
  2. How/Where would you hide it (think of a “custom”/theoretical blockchain, not necessarily any of the existing ones)?
  3. How would you allow users to mine for it without giving so much away that it becomes easy for them to hack their way around the mining?

When it comes to mining for unique hashes there is nothing to hide, because you’re trying to generate a random sequence of characters that doesn’t already “exist” somewhere. But in my hypothesis, the “hash” necessarily has to pre-exist “somewhere”, yet the user shouldn’t have access to it, only “tools”/hints to mine for it.
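To make that a bit more concrete, what I picture is publishing only a hash “commitment” of the hidden string, so nobody can read the string itself but anyone can check a candidate guess against it (a toy sketch under that assumption; the hints/“tools” part is exactly what I don’t know how to design):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class CommitmentCheck {
    // The chain would publish only commitment = SHA-256(secret); the secret itself is never revealed.
    // "Mining" would mean searching candidate strings (guided by whatever hints are released)
    // until one hashes to the published commitment.
    static boolean isSolution(String candidate, String publishedCommitmentHex) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(candidate.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(hash).equals(publishedCommitmentHex);
    }
}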

Thank you in advance for your help 🙂

I’m not a blockchain expert (I am a front-end Js developer) so I’m looking for more “high-level” answers 🙂

dnd 5e – When hiding, how close can you get to enemies before they can potentially detect you?

Creatures are assumed to automatically notice other creatures that are within the range of their sight or hearing, unless they make Stealth checks

The DMG offers a limited amount of guidance about how to determine whether or not creatures detect each other. In the exploration section of chapter 8 (“Running the game”), it states:

Noticing Other Creatures

While exploring, characters might encounter other creatures. An important question in such a situation is who notices whom.

(…)

If neither side is being stealthy, creatures automatically notice each other once they are within sight or hearing range of one another. Otherwise, compare the Dexterity (Stealth) check results of the creatures in the group that is hiding with the passive Wisdom (Perception) scores of the other group, as explained in the Player’s Handbook.

Unless a creature actually attempts to be stealthy and makes a stealth check, it is assumed to be automatically noticed by any other creature once it gets within the range of their sight or hearing. Of course, this just raises the question of what the range of vision or hearing is.

Unfortunately, the range of hearing is left entirely to the DM’s discretion, but the range of vision is quite clearly defined. The DMG, in the same section, also states:

When traveling outdoors, characters can see about 2 miles in any direction on a clear day, or until the point where trees, hills, or other obstructions block their view. Rain normally cuts maximum visibility down to 1 mile, and fog can cut it down to between 100 and 300 feet.

And we also have the PHB’s rules about vision and light:

In a lightly obscured area, such as dim light, patchy fog, or moderate foliage, creatures have disadvantage on Wisdom (Perception) checks that rely on sight.

A heavily obscured area—such as darkness, opaque fog, or dense foliage—blocks vision entirely. A creature effectively suffers from the blinded condition when trying to see something in that area.

The presence or absence of light in an environment creates three
categories of illumination: bright light, dim light, and darkness.

Bright light lets most creatures see normally. (…)

Dim light, also called shadows, creates a lightly obscured area. (…)

Darkness creates a heavily obscured area.

Here we see that a creature cannot see anything that is heavily obscured with respect to it, so it will not automatically notice creatures in darkness until it is close enough to hear them or they come within range of a light source or of its darkvision. It’s also obvious that creatures who have full cover from one another cannot see each other, but they can see each other if anything less than full cover is involved.

Altogether, if two creatures are close enough to meaningfully interact, they are certainly close enough to see each other and will notice each other automatically, unless there are obstructions between them sufficient to grant full cover or some other impediment to visibility which makes one or the other heavily obscured. Without such circumstances, the creature must make a stealth check if it doesn’t want to be noticed.

In the circumstances of your post – assuming that there were no special conditions of lighting or fog or other obstructions that would have made someone 20 feet further away actually impossible to see – the enemies should, by RAW, have noticed the unstealthy characters approaching as soon as they had line of sight, even if they were distracted.

Of course, it is important to note that these rules assume that stealth checks are opposed by a creature’s passive perception, and if circumstances would grant them disadvantage on a perception check – for instance, if you judge they are distracted by something – that disadvantage translates to a -5 penalty to their passive perception score. So, even characters who aren’t particularly skilled at stealth still have a good shot at sneaking up on distracted, unaware enemies.
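To put rough numbers on that (ability scores invented purely for illustration): a guard with Wisdom 12 and no proficiency in Perception has a passive Wisdom (Perception) score of 10 + 1 = 11. If you rule that the guard is distracted, disadvantage drops that to 11 - 5 = 6, which even a character with a +0 Dexterity (Stealth) modifier will beat more often than not.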

windows – Are there advantages to using a hardware token instead of a password on a potentially compromised system?

TLDR: Is there a security benefit to regularly accessing the admin account with a hardware token rather than with a well-protected password?


Long story: I’m both a developer and the system admin of our small network. Thus, on my PC, I usually do “regular work” which does not require admin credentials. A few times a week, I need to do “sysadmin stuff” (install updates, reconfigure servers, manage VMs, etc.) and start processes or remote desktop sessions with domain admin credentials.

We have a lot of “the usual” protections in place to protect our system (and my PC/account in particular) from being compromised. Nevertheless, no system is 100% secure and I want to reduce the amount of damage an attacker can do when compromising my PC and/or my account (yes, we already have full off-site backups). Once my PC is compromised, an attacker can just wait for the next time I use the admin password and elevate their privileges to system-wide domain admin access.

Thus, I had the idea of enabling hardware-token (FIDO2?) access to our domain admin account and (only) using that instead of the password. (I’d print the password on a piece of paper, put it in a sealed envelope, store it in the office for emergencies when I’m unavailable, and never use it again.)

That way, if my account is compromised, the attacker has to somehow exploit the administrative action that I am currently performing right away, instead of being able to copy the password and use it at their leisure. At first glance, this seems harder to pull off, but is it really? I only want to go through the hassle of enabling hardware authentication if it provides a tangible security benefit.


To clarify the threat model: I am concerned about hackers on the other side of the world having a “lucky day” (i.e., me starting a malicious file which somehow managed to get through all our filters), getting into our network and then seeing how far they can elevate their privileges before doing their usual ransomware stuff (yes, as mentioned above, we do have backups). I am not concerned about targeted and/or physical attacks by state actors (fortunately, we are not important enough, and neither are our customers).

java – Designing around potentially multiple RESTful API calls to a downstream service

To set up the problem, let’s imagine we have a downstream service that we need to call for some information. We set up an API endpoint and call another method which holds our business logic and makes the HTTP request. Based on certain parameters sent to this method, we may have to make several calls to the same endpoint, depending on what it returns. The method is currently set up like so:

public HttpEntity<String> getInfo(/* parameters */) {

    // set up HTTP headers etc.
    ResponseEntity<String> response = restTemplate.exchange(/* stuff here */);

    // based on the parameters given to this method, we will know whether or not we need to make additional calls
    // if we DO need to make additional calls, we will also need to inspect the response for information
    // this is difficult, because as you see below, this method doesn't start to process the response until after error checking

    // do all error checking, i.e. checking for no data and other HTTP errors

    OurDataClass dataClass = objectMapper.readValue(response.getBody(), OurDataClass.class);
    // do things to dataClass
    return new HttpEntity<String>(/* dataClass and headers */);
}

Given the current structure of this method, I don’t know how to work it into something that’s more extensible and maintainable. My first instinct was to just encapsulate the restTemplate and take care of additional calls there, but given that I need to inspect the response contents of each call, something that is not done until the end of the current method, it would seem like we’re doing a lot of double work. On the other hand, working the solution into the method without any encapsulation would make it even more difficult to maintain down the road (it’s already a bit of a mess with error checking).
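To make my first instinct concrete, this is roughly what I mean by encapsulating the call (just a sketch; DownstreamClient, InfoService, RequestParams, needsAnotherCall and nextRequest are placeholder names, not real code from our project):

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.http.*;
import org.springframework.web.client.RestTemplate;
import java.util.ArrayList;
import java.util.List;

// Owns exactly one round trip: build the request, call exchange(),
// do the error checking, and deserialize the body.
class DownstreamClient {
    private final RestTemplate restTemplate;
    private final ObjectMapper objectMapper;

    DownstreamClient(RestTemplate restTemplate, ObjectMapper objectMapper) {
        this.restTemplate = restTemplate;
        this.objectMapper = objectMapper;
    }

    OurDataClass fetch(RequestParams params) {
        ResponseEntity<String> response = restTemplate.exchange(
                params.toUri(), HttpMethod.GET, new HttpEntity<>(params.toHeaders()), String.class);
        // error checking (no data, HTTP error statuses) lives here, next to the call it guards
        try {
            return objectMapper.readValue(response.getBody(), OurDataClass.class);
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Could not parse downstream response", e);
        }
    }
}

// The business-logic layer then only decides *whether* to call again,
// based on the parameters and the already-deserialized previous response.
class InfoService {
    private final DownstreamClient client;

    InfoService(DownstreamClient client) {
        this.client = client;
    }

    List<OurDataClass> getInfo(RequestParams params) {
        List<OurDataClass> results = new ArrayList<>();
        OurDataClass current = client.fetch(params);
        results.add(current);
        while (needsAnotherCall(params, current)) {      // placeholder predicate
            params = nextRequest(params, current);       // placeholder: derive the follow-up request
            current = client.fetch(params);
            results.add(current);
        }
        return results;
    }

    private boolean needsAnotherCall(RequestParams params, OurDataClass previous) { /* ... */ return false; }
    private RequestParams nextRequest(RequestParams params, OurDataClass previous) { /* ... */ return params; }
}

The idea is that the single round trip, its error checking and its deserialization live in one place, so the looping logic only decides whether another call is needed instead of re-inspecting raw responses.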

database – What type of database is good for storing records where each record potentially has hundreds of fields, and the fields are usually `null` values?

When I was in college, the only type of database we studied were relational databases using SQL.

However, I now have an application, where if you tried to use a relational-database, more than 90% of the cell-values would be null and tables would have thousands of columns.

Clearly, a relational database is the wrong choice.

My question is, what type of database is better suited to my application?

If you already know what SQL is, feel free to skip to the next section break.

In a relational database, everything is stored in spreadsheet-like tables.

In an example of a relational database, a table recording data for apartment rentals might have the following columns:

  • Floor Area Lower Bound (e.g. 450 square feet)
  • Floor Area Upper Bound (e.g. 850 square feet)
  • Monthly Rent (e.g. $750.00)
  • Is the electric bill paid by the landlord? (Boolean)
  • Is the water bill paid by the landlord? (Boolean)
  • Street Address of the apartment (e.g. 123 somewhere lane)
  • etc…

I was thinking about creating a database for job applicants and job classifieds.

Traditionally, job classifieds are stored as Unicode strings.
Computers have difficulty parsing and interpreting English.
Humans end up reading and sorting the job classifieds “by hand.”

Suppose that a prospective job-applicant has no security clearance.
It would be nice if the computer could delete all rows of the search results containing jobs for which a security clearance is required. This would save people time reading classifieds for jobs they are not qualified for.

The question is whether job J has at least one mandatory/minimum qualification Q such that a given human being, Sarah, does not have qualification Q.

We are often working with 3-valued logic. In a generalization of the mathematical “law of the excluded middle”, one of the following 3 statements is always the case:

  • Jane has a commercial driver’s license.
  • Jane does NOT have a commercial driver’s license.
  • It is unknown whether Jane has a commercial driver’s license or not.

We want a database where:

  • Roles/Positions have qualifications.
  • Actors/Job-applicants have qualifications.

If a job requires at least 2 years of Java programming, and Ian has 8 years of Java programming, then we choose NOT to delete that job from the search results we show to Ian.

I would prefer that bots filter and prune the search-space as much as possible.

A bot could run a search query, such as “furniture mover”, using a traditional, nothing-fancy search engine. After that, the bot could identify which job-qualification would split the search results most nearly in half. A 40%-60% split is better than a 1%-99% split.

Maybe a commercial driver's license is a suitable job-qualification. After identifying an attribute to prune on, the computer can ask the human something like, “Do you have a commercial driver’s license?” The answer might cut the search space in half.

Every time the end user is asked a question about their qualifications, the computer stores the answer in a database.
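To illustrate the “split the results most nearly in half” step, the attribute-picking logic could be as simple as the following sketch (all data structures here are invented for illustration, not tied to any particular database):

import java.util.*;

class QuestionPicker {
    // For each candidate qualification, count how many of the remaining jobs require it,
    // and pick the qualification whose yes/no split is closest to 50/50.
    static String bestQualificationToAskAbout(List<Set<String>> requiredQualsPerJob,
                                              Set<String> candidateQuals) {
        String best = null;
        double bestDistanceFromHalf = Double.MAX_VALUE;
        for (String qual : candidateQuals) {
            long jobsRequiringIt = requiredQualsPerJob.stream()
                    .filter(quals -> quals.contains(qual))
                    .count();
            double fraction = (double) jobsRequiringIt / requiredQualsPerJob.size();
            double distanceFromHalf = Math.abs(fraction - 0.5); // a 40%-60% split beats a 1%-99% split
            if (distanceFromHalf < bestDistanceFromHalf) {
                bestDistanceFromHalf = distanceFromHalf;
                best = qual;
            }
        }
        return best;
    }
}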

A SQL table for job-applicants would have more than 2,000 columns (i.e. job qualifications). Examples of column headers are shown below:

  • Number of years of Network domain experience (float)
  • Number of years of experience operating fork-lifts (float)
  • Are you a licensed plumber? (Boolean)
  • Do you have a Ph.D in psychology? (Boolean)
  • Are you licensed in the United States as a professional counselor (LPC)? (Boolean)
  • Number of years of experience writing computer code for front ends (float)
  • Do you have a CDL (commercial driver’s license)?

I am not willing to record, for every human being under the sun, whether that person has experience operating a fork-lift, cooking Chinese food, or writing computer programs in JavaScript.

Perhaps we can “tag” each job-applicant, as sketched below:

  • Some users are tagged “commercial driver's license = yes”.
  • Some users are tagged “commercial driver's license = no”.
  • Some users do not have a tag for “commercial driver's license” at all.
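A minimal sketch of what I mean by sparse, three-valued tags (nothing here is tied to a particular database product; the class and field names are just for illustration):

import java.util.*;

class Applicant {
    // Only qualifications we have actually asked about are stored;
    // the absence of a key means "unknown", which is the common case.
    private final Map<String, Boolean> tags = new HashMap<>();

    void recordAnswer(String qualification, boolean hasIt) {
        tags.put(qualification, hasIt);
    }

    // Three-valued lookup: TRUE, FALSE, or empty ("unknown").
    Optional<Boolean> has(String qualification) {
        return Optional.ofNullable(tags.get(qualification));
    }
}

My impression is that a document store or a key-value/EAV-style layout maps naturally onto this shape, because a missing key costs nothing, unlike a NULL cell in a 2,000-column table.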

What kind of database best supports what I am trying to do?

The website we are currently using, stackexchange.com, has a maximum of something like 8 tags per question.

I would like to be able to support at least a dozen, if not a couple hundred, tags for each stage-role (job) or single stage-actor (job-applicant).

architecture – How to anticipate a software future where ARM (potentially) replaces x86 in servers and PCs?

Quite simple: write portable code in a high-level, multi-platform language, without making assumptions about endianness, integer sizes, or the CPU architecture. Just check whether your favourite language supports the new platform.

It’s not a new situation: 30 years ago the CPU landscape was very diverse. There were x86, M68K, PowerPC, SPARC and several others. You had to trust your OS, your compilers and your libraries. It’s far more challenging to keep a consistent multi-platform user interface across many systems than to support multiple processors (provided you kept portability in mind).

One challenge is communication between processes that may run on different CPU architectures. Binary data makes a lot of assumptions about the CPU and requires some extra caution. But in reality, this challenge already exists today if you do multi-language development.
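For example, when serializing binary data for another process, pin the byte order in the format instead of assuming the host CPU’s. A minimal Java sketch (the two fields are invented for illustration; Java largely hides host endianness from you, but the principle is the same when the other end is native code):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WireFormat {
    // Encode a (hypothetical) message as a 4-byte id plus an 8-byte timestamp,
    // always big-endian ("network byte order"), regardless of the host CPU.
    static byte[] encode(int id, long timestamp) {
        ByteBuffer buf = ByteBuffer.allocate(12).order(ByteOrder.BIG_ENDIAN);
        buf.putInt(id);
        buf.putLong(timestamp);
        return buf.array();
    }

    static void decode(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN);
        System.out.println("id=" + buf.getInt() + " timestamp=" + buf.getLong());
    }

    public static void main(String[] args) {
        decode(encode(42, System.currentTimeMillis()));
    }
}

The same idea applies to on-disk formats and network protocols: declare the representation explicitly instead of dumping in-memory structures.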

Another challenge will be packaging and distributing multi-target code. While this may affect the toolchain, the build, testing (automated tests are needed for each targeted platform) and the distribution process, it will not fundamentally change the way software is developed. The best preparation here is probably to familiarize yourself with multi-target open source projects.

domain driven design – How can we update potentially many Read Model Projections based on an Event that is not within the native context?

I have a Shop which sells Items for a given Price.

Let’s say we had a Shop that sold three Items: Potions, Buckets, and Shields. Each one has a different Price. Additionally, we’ll say that the Potion Item was incorrectly named “Potionn” (something that will be fixed later).

The User Interface would present the following to the user:

Item Name          Price
Potionn            $100
Bucket             $10
Shield             $40

The Read Model Projection to represent this Shop could look something like

{
    "shop_id": "1",
    "items": (
        {
            "item_id": "potion",
            "item_name": "Potionn",// misspelled name
            "price": {
                "currency": "dollars",
                "amount": 100
            }
        },
        {
            "item_id": "bucket",
            "item_name": "Bucket",
            "price": {
                "currency": "dollars",
                "amount": 10
            }
        },
        {
            "item_id": "shield",
            "item_name": "Shield",
            "price": {
                "currency": "dollars",
                "amount": 40
            }
        }
    ]
}

The Event Stream that fed this Projection could look something like

Item Created ("potion")
Item Created ("bucket")
Item Created ("shield")
Item Named ("potion", "Potionn")// misspelled name
Item Named ("bucket", "Bucket")
Item Named ("shield", "Shield")
Shop Created (1)
Item Added to Shop (1, "potion", (100, "dollars"))
Item Added to Shop (1, "bucket", (10, "dollars"))
Item Added to Shop (1, "shield", (40, "dollars"))

The question is about what happens when an Item Name is changed after it’s been added to a Shop.

Item Named ("potion", "Potion")

Here, we change the name to correct an issue (the misspelling). The Projection subsequently needs to be changed, to provide the correct name to the user. If there was only one Shop that sold Potions, we would only need to update one Projection. But, if there are hundreds or thousands of Shops that sell Potions, then all of those will need to be updated in response to the corrected Item Named Event.

Some potential solutions I’ve come up with are:

  1. Just iterate through every Shop, and update the name if it has a “potion” Item. Problems include having to know all of the Shops that exist (keeping a separate list of Shop Ids), and checking all Shops, even if they don’t include a Potion Item.
  2. Maintain a separate index of all Shops that contain a given Item (see the sketch after this list). The base projection used by the User Interface is keyed on Shop Id, and contains a list of Items in that Shop. This separate index would be keyed on Item Id, and contain a list of Shops that contain that Item. This would reduce the amount of searching needed, only going through Shops that actually had the Potion Item, instead of all Shops that exist. The problem, though, is having to maintain two separate models: one actually used by the user interface, and another that only exists to make reacting to certain events easier; in this case, reducing the number of Shop Projections that need to be found and updated.
  3. Maintain an internal list of Item Names, and Join this list with a slimmed down Projection of Shop Items when queried. When an Item Named Event is received, this internal list is updated, and when a Shop Items Projection is requested, the Name is injected. The problems with this include maintaining two models (shop items and item names), changing from a straight query->response model, to query->join->response model, and duplication of the Item Name list, as other systems will probably maintain their own Item Name lists if they need that information. This problem is further worsened if, in addition to Item Name, we need a Sprite/Image/Picture, Description, or anything else.
  4. Have a separate, out-of-context/domain Item Name list, and use its Domain Service to query the correct Item Name. This Projection wouldn’t have to maintain the list, and the list wouldn’t have to be duplicated by other systems, but a query then involves querying something else too. Problems with this include querying a separate Read Model when the Shop Items Projection is queried, moving to a query->join->response model like above, and the feeling that it breaks down the CQRS boundary a bit.
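As a rough sketch of option 2 (all names here are invented for illustration), the projector would keep a second, internal read model mapping Item Id to the set of Shop Ids that currently sell it, so an Item Named Event only touches the Shops that are actually affected:

import java.util.*;

class ShopItemsProjector {
    // shop_id -> (item_id -> current display name); this backs the projection the UI reads
    private final Map<String, Map<String, String>> itemNamesByShop = new HashMap<>();
    // item_id -> shop_ids; internal index used only to react to Item Named Events efficiently
    private final Map<String, Set<String>> shopsByItem = new HashMap<>();

    void onItemAddedToShop(String shopId, String itemId, String currentName) {
        itemNamesByShop.computeIfAbsent(shopId, id -> new HashMap<>()).put(itemId, currentName);
        shopsByItem.computeIfAbsent(itemId, id -> new HashSet<>()).add(shopId);
    }

    void onItemNamed(String itemId, String newName) {
        // only the Shops that actually sell this Item are updated
        for (String shopId : shopsByItem.getOrDefault(itemId, Set.of())) {
            itemNamesByShop.get(shopId).put(itemId, newName);
        }
    }
}

The trade-off is exactly the one described above: the index makes the update cheap, at the cost of a second model to keep in sync.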

Core aspects that influence these solutions are:

  1. Is it a design smell to maintain multiple separate Read Models, even for one use-case?
  2. I noticed there is a kind of separation of concerns in the Read Model itself. One side is the querying of the Read Model (query->response vs query->join->response), and the other side is the updating of the Read Model (iterating through all Shops, even ones that won’t be updated, vs iterating only through the ones we know we’re going to update).
  3. Cross context/domain querying seems to go against the grain of CQRS. It feels awkward to try to query something from another domain/context. It feels like this projection should have absolutely all the information it needs for this use-case. If it is missing something that it needs, it seems like an indicator that there are events that it’s not listening to that it should be listening to.