Best Practice: Remove uuid or default_config_hash from Custom Content Type definitions?

When I export custom content type definitions using the UI, many of the resulting YAML files contain a uuid and a default_config_hash, like:

uuid: b69d8e54-076c-4a15-a396-19f49369fa68
default_config_hash: 3aSvUp4PtrivflrkaLdNRL1USkZLsS7NdQroSiRX9mA

Now, when I’m using this as a basis for my custom module, is it recommended to remove those lines?
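If you do decide to strip them, it can be automated. A minimal sketch in Python (a plain line filter, assuming the keys sit at the top level as the export writes them; the surrounding keys in the sample are invented):

```python
import re

def strip_site_specific_keys(yaml_text: str) -> str:
    """Drop top-level uuid and default_config_hash lines from exported YAML."""
    kept = [line for line in yaml_text.splitlines()
            if not re.match(r"^(uuid|default_config_hash):", line)]
    return "\n".join(kept) + "\n"

# Hypothetical export: only uuid/default_config_hash are from the question.
exported = """uuid: b69d8e54-076c-4a15-a396-19f49369fa68
langcode: en
status: true
default_config_hash: 3aSvUp4PtrivflrkaLdNRL1USkZLsS7NdQroSiRX9mA
name: Article
"""
cleaned = strip_site_specific_keys(exported)
print(cleaned)
```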

authentication – Best practice for authenticating resend email requests

I have a Node.js/Express 4/JWT user authentication service using Passport.js, with Sequelize and MySQL for the database.

In my service, upon signing up/resetting a password, the user will be redirected to a page telling them to

  • click the link in the email that was just sent, or
  • click a button to resend the email (if they did not get it)

In the database I have a dynamic_urls table for activate/reset links. The table caps resend attempts at 5 (incremented with every resend, obviously) before the user is directed to contact the admin. The URL itself is generated via JWT, with a payload of the user’s id and password along with their account creation date, and a private key of a long “secret” string. It is then stored in the database until it is clicked or expires.
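To make the shape of such a token concrete, here is a hedged sketch (in Python rather than Node, with a hand-rolled HMAC signature standing in for a JWT library; the secret and function names are invented). It also illustrates the alternative raised below: putting only a random jti and an expiry in the payload, so no user information leaks into the URL:

```python
import base64, hashlib, hmac, json, time, uuid

SECRET = b"a-long-secret-string"  # hypothetical signing key

def make_resend_token(user_id: int, ttl_seconds: int = 3600) -> str:
    """Sign a compact token whose payload carries only a random jti and an
    expiry; the jti would be stored in dynamic_urls next to user_id, so the
    URL itself leaks nothing about the user."""
    payload = {
        "jti": str(uuid.uuid4()),            # random, looked up server-side
        "exp": int(time.time()) + ttl_seconds,
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify(token: str):
    """Return the payload if the signature checks out and it hasn't expired."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    padded = body + "=" * (-len(body) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    return payload if payload["exp"] > time.time() else None

token = make_resend_token(42)
```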

I have a few questions for best info sec practices with JWT, namely:

  1. Should I even care about giving the resend button a random URL?
    I assume that I must, because the resend request has to correspond to the correct dynamic URL, which requires user information, which must not be leaked.

And if so:

  1. What should I use in the payload when creating the dynamic URL of the resend button?
    I assume I shouldn’t use the same or a similar payload to the one I used to make the email link. A UUID maybe? (But then I’d presumably have to stash the UUID in the database as well, no?)
  2. Should I renew the resend URL with each resend attempt?
    (E.g. each resend lands the user on a new page with a new unique resend link.)

Thank you in advance!

database – Storing static data idea — is it best practice?

If the data truly does not change, changes only over very long time frames, or must be extremely fast to access, then there is wiggle room to store it directly in the application.

If you are going to do that however it makes much more sense to leverage the type system of the language you are using to provide a strong name, strong behaviour, and immutability guarantees. Particularly with a strong name, a code editor can trivially locate usages of the object.
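As a hedged sketch of what “leveraging the type system” might look like in Python (the domain here is invented): a frozen dataclass gives a strong name, immutability, and a single definition that an editor can find every usage of.

```python
from dataclasses import dataclass
from enum import Enum

class Currency(Enum):
    USD = "USD"
    EUR = "EUR"

@dataclass(frozen=True)
class TaxBand:
    """Static reference data compiled into the application."""
    name: str
    rate: float
    currency: Currency

# One strongly named, immutable definition instead of a loose JSON file.
STANDARD_BANDS = (
    TaxBand("basic", 0.20, Currency.USD),
    TaxBand("higher", 0.40, Currency.USD),
)
```

Because the data is typed and frozen, a typo in a field name or an accidental mutation fails immediately rather than at some later read.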

But if none of those apply, you are probably edging toward laziness, or a bad design choice. In that case, push it back into the database and stop being lazy. If it worries you, measure it: collect the data and see what is what. Maybe you really are in the first category and this makes sense; just don’t presume so.

This probably sounds trite, but a JSON data file isn’t compiled.

  • If it were corrupted, you wouldn’t know until it was used.
  • And depending on your configuration system, the developer configs probably are not used in production, which adds another layer of moving parts that can break down.
  • And being an external dependency, it is probably mocked out in unit tests, or replaced with a safe short list instead of the real production file.

This is a large source of risk and errors. At least with a database or compiled source, these structural issues can be detected early and fixed. Also, being part of the stock data available to the program, such data probably participates in a few unit/integration tests.

There are some systems where a JSON data file would be on a par, but those systems provide at least two guarantees: first, they prove that the file can be loaded, and second, that once loaded the data is in the right shape.
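Those two guarantees can be as small as a single load-and-validate function exercised by a test (the schema below is made up for illustration):

```python
import json

# Hypothetical static data file contents.
RAW = '[{"code": "US", "name": "United States"}, {"code": "DE", "name": "Germany"}]'

def load_countries(text: str) -> list:
    """First guarantee: the file can be loaded at all."""
    data = json.loads(text)
    # Second guarantee: once loaded, the data is in the right shape.
    assert isinstance(data, list)
    for row in data:
        assert set(row) == {"code", "name"}, f"unexpected keys: {row}"
    return data

countries = load_countries(RAW)
```

Running this at startup (or in CI) catches a corrupted or reshaped file before any feature code ever touches it.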

java – Which API building practice is better?

I’m working on an ERP product in which the backend logic is to be exposed as APIs. Right now I have around 80 tables.

Proposal 1: Creating CRUD APIs for all tables, with data manipulation handled in the front end.

Proposal 2: Creating CRUD APIs plus a few buffer APIs for data manipulation (business logic, or joining multiple tables) that send the final JSON to the front end.

Front End: Vue.js (Most likely)

Which proposal is better? Or if there is a better solution, I would love to hear it.
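To make the difference between the two proposals concrete, here is a hedged sketch with two invented tables: under proposal 1 the front end fetches orders and customers separately and joins them itself; under proposal 2 a buffer endpoint does the join server-side and ships the final JSON.

```python
# In-memory stand-ins for two of the ~80 tables.
ORDERS = [{"id": 1, "customer_id": 7, "total": 120.0}]
CUSTOMERS = [{"id": 7, "name": "Acme"}]

def crud_orders():
    """Proposal 1: raw table access; joining is the front end's problem."""
    return ORDERS

def buffer_orders_with_customers():
    """Proposal 2: the server joins the tables and returns the final shape."""
    by_id = {c["id"]: c for c in CUSTOMERS}
    return [{**o, "customer_name": by_id[o["customer_id"]]["name"]}
            for o in ORDERS]
```

The trade-off in miniature: proposal 1 means more round trips and duplicated business logic in Vue.js; proposal 2 keeps the logic in one place at the cost of more, purpose-built endpoints.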

gui design – Best practice for dealing with configuration conflicts?

I have a problem with applying configuration to two groups of devices. The groups can contain the same device, which will result in conflicts. Any ideas would be much appreciated.

Option 1: Allow the user to save over existing conflicts – provide confirmation.
This causes constant dialogs and a sort of whack-a-mole scenario.

Option 2: Allow the user to save over existing conflicts – provide a warning only.
The device simply takes the last config applied.

Option 3: Prevent crossover of groups.
This is my preference, but it is not the current setup in code, and there is no time to change it.
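Whichever option is chosen, the first step is the same: detect the overlap before saving, so the UI knows whether to confirm, warn, or refuse. A minimal sketch (group and device names are invented):

```python
def conflicting_devices(group_a: set, group_b: set) -> set:
    """Devices present in both groups would receive both configs."""
    return group_a & group_b

office = {"switch-1", "ap-2", "ap-3"}
lab = {"ap-3", "router-9"}
overlap = conflicting_devices(office, lab)
# Options 1/2 would surface this set in a dialog or warning;
# option 3 would refuse the group assignment that created it.
```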


version control – Best practice for organizing build products of dependencies and project code in your repo source tree?

I’ve checked quite a few related questions on source tree organization, but couldn’t find the answer for my exact need:

For a project I’m working on, my source tree is organized this way:

  • build: all build scripts and resources required by continuous integration
  • src: all first-party source code and IDE projects of our team
  • test: all the code and data required for automated tests
  • thirdparty: all external dependencies
    • _original_: all downloaded open-source library archives
    • libthis: unzipped open-source lib with possible custom changes
    • libthat: …
    • ….

So far I’ve been building our first-party products right in the src folder inside each IDE project (such as Visual Studio and Xcode), and building the third-party products in their own working-copy folders.


However, this reveals several drawbacks, e.g.:

  • In order to accommodate the variety of dependency locations, the lib search paths of the first-party IDE projects become messy
  • It’s hard to track the output products through the file system


So I’d love to centralize all the build products including dependencies and our first-party products, so that

  • the build products don’t mess up the repo SCM tidiness
  • all the targets and intermediates are easy to rebuild, archive, or purge
  • it’s easy to track down to the sub source tree of the products from file system

Current Ideas

I’ve tried creating another folder, e.g. _prebuilt_, under the thirdparty folder, so that it looks like this:

  • thirdparty
    • _original_
    • _prebuilt_: holding build products from all thirdparty libs for all platforms
      • platform1
      • platform2
      • platform3
      • ….
    • libthis
    • libthat

One complaint I have about this scheme: mixing derivatives with working copies (lib…) and archives (_original_) forces me to make the derivative and archive folders stand out by naming them with ugly markers, in this case underscores (_).

Another idea is to use a single universal folder right at the root of the repo and have all the build products of dependencies and project products sit there in a jumble. But that sounds messy and would make it hard to trace products back to their sources.

Either way, some post-build scripts must be put in place to move artifacts out of their original working copies.
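Such a post-build step can be a short script. A hedged sketch (the folder names follow the layouts above; the out/ root and function name are assumptions for illustration) that moves products into a central per-platform tree mirroring the source layout, so they stay traceable and easy to purge:

```python
import shutil
import tempfile
from pathlib import Path

def collect_artifacts(repo_root: Path, products: list, platform: str) -> Path:
    """Move build products out of their working copies into out/<platform>/,
    mirroring their position in the source tree."""
    dest_root = repo_root / "out" / platform
    for product in products:
        rel = product.relative_to(repo_root)
        dest = dest_root / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(product), str(dest))
    return dest_root

# Demo with a throwaway tree standing in for the real repo.
root = Path(tempfile.mkdtemp())
artifact = root / "thirdparty" / "libthis" / "libthis.a"
artifact.parent.mkdir(parents=True)
artifact.write_bytes(b"\x00")
out_dir = collect_artifacts(root, [artifact], "platform1")
```

The mirrored path (out/platform1/thirdparty/libthis/…) is what keeps products traceable back to their sub source tree from the file system.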


In general, what would be the best practice to organize the build products?

I’d love to achieve at least the goals listed above.

authentication – What is the suggested best practice for changing a user’s email address?

I recently jumped onto the hype train for an unnamed e-mail service and am currently updating all my accounts on various websites to get most of my (future) data off googlemail.

During this adventure I came across a couple of user flows for changing your e-mail address, which I would like to share (amounts like “many” or “a few” are purely subjective; I did not count):

1. No questions asked

The e-mail address is simply changed without any confirmation mail, second password check, or spell check (two input fields). The e-mail address is the main login method for this account, which holds some sensitive data. Nothing stops a person with malicious intent from taking over my account by changing the e-mail address and, after that, my password.

2. Confirmation of new email

This feels like the method used by most platforms: you receive a confirmation email at the new address you provide. This ensures you typed the e-mail correctly, but it will not stop anyone from changing the main login method.

3. Confirmation through old address

Very few platforms send an email to the old address to check whether I am the actual owner of the account. If I click the link in the mail or enter a number they send me, the address is changed.

4. Confirmation through old and new address

Just once I had to confirm with my old address that I am the owner of the account, and I got another email to the new address to check that it does indeed exist.
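Method 4 is essentially a small state machine: the change commits only once both a token mailed to the old address and one mailed to the new address come back. A hedged sketch (storage, mailing, and the class itself are simplified inventions):

```python
import secrets

class EmailChange:
    """Pending address change that commits only after both confirmations."""

    def __init__(self, old_email: str, new_email: str):
        self.old_email, self.new_email = old_email, new_email
        self.old_token = secrets.token_urlsafe(32)  # mailed to old address
        self.new_token = secrets.token_urlsafe(32)  # mailed to new address
        self.old_ok = self.new_ok = False

    def confirm(self, token: str) -> bool:
        """Record one confirmation; return True once the change may commit."""
        if secrets.compare_digest(token, self.old_token):
            self.old_ok = True
        elif secrets.compare_digest(token, self.new_token):
            self.new_ok = True
        return self.old_ok and self.new_ok

change = EmailChange("me@old.example", "me@new.example")
```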

Looking back at it, this feels like the usual UX vs. security conflict. While method 1 provides the most comfortable flow, I see the most issues with it, as already pointed out.
Having to confirm both the old address and the new one is a bit of a hassle, but of the methods listed it is the best way to keep your users’ accounts in their own hands.

Are there other common methods I am not aware of and what is generally considered best practice?

rest – Is it a good practice to have an endpoint URL with a parameter accepting different types of values according to an indicator in the HTTP header?

Assume a resource URL in the context of REST API:

/sites/<site id or site code>/buildings/<building id or building code>

The values of the two path parameters, <site id or site code> and <building id or building code>, can, as the names indicate, be either an id or a code. Implicitly this means:

For instance, if there is a building with 1 as building id and rake as building code, located in the site with 5 as site id and SF as site code, then the following endpoint URLs should all retrieve the same result:

  • /sites/5/buildings/1
  • /sites/5/buildings/rake
  • /sites/SF/buildings/1
  • /sites/SF/buildings/rake

In order to reduce the ambiguity, there is a hint in the HTTP header, e.g. path-parameter-type with a value of CODE or ID, indicating whether the given path parameter values are codes or IDs.

Even so, the implementation of such a resource endpoint contains lots of if conditions due to the ambiguity. From the end user’s perspective, however, this seems handy.

My question is whether such an endpoint design is a good practice or a typical bad practice, despite the fact that there is a type indicator in the HTTP header.

rest – Is it a good practice to have an endpoint URL with a parameter accepting different types of values?

To add to what Ewan has said: HTTP parameters are already “stringly” typed, so it can be impossible to parse them correctly. You should aim for precision in expressing your intent. You will already have a lot of other factors to deal with; you don’t want design ambiguity to be one of them.

Ewan gives the counter-example of having both siteId: 1 and siteCode: “1”. That is bad, since you will eventually run into it, and it only becomes more complex over time. Avoiding it will make your life easier. And even if you don’t think you will have that problem, you must still have logic to determine the intent of the client, which is backwards – the client should determine their intent and express it unambiguously to you by selecting an endpoint.

A single client will almost always pick one endpoint, so removing as many edge cases from that endpoint as possible will simplify usage for your clients as well. A client would rarely want to pass a siteId and siteCode to the same endpoint.

Function overloading in normal code works because of the type encoding of parameters; when that type information does not exist, you need another way of expressing it.

Documenting two separate endpoints will also be easier for your users. They will usually already know what sort of data they have access to and can pass in to you. A clearly separated pair of endpoints with exact specifications lets them find the one they are looking for and use it. Each will be clearly documented and straightforward to use, instead of forcing users to understand the type differentiation between the two.
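In code terms, the separation this answer argues for might look like the following sketch (hypothetical lookup tables stand in for the real data layer):

```python
SITES_BY_ID = {5: "site-5"}
SITES_BY_CODE = {"SF": "site-5"}

# One ambiguous endpoint: the server must guess the client's intent.
def get_site(id_or_code: str):
    if id_or_code.isdigit():   # heuristic -- breaks as soon as a code is "1"
        return SITES_BY_ID.get(int(id_or_code))
    return SITES_BY_CODE.get(id_or_code)

# Two precise endpoints: the client expresses intent by choosing one.
def get_site_by_id(site_id: int):
    return SITES_BY_ID.get(site_id)

def get_site_by_code(code: str):
    return SITES_BY_CODE.get(code)
```

The precise pair needs no header hint and no guessing; the ambiguous version works only until ids and codes start overlapping.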