patterns and practices – What does an Application Security Risk Matrix look like?

I am looking into the “Secure Development Lifecycle” (SDLC) and I found a resource that says one should create an “Application Security Risk Matrix”. I guess that matrix has one row for each threat (e.g. SQL injection), and the columns might be something like severity / likelihood / mitigation / comments. But I don’t really know, and I would also appreciate more examples of threats.

Is there a template or a reputable example of an Application Security Risk Matrix? Is it the same as a “Threat Matrix” (example)?

What I found

INFORMATION SECURITY RISK ANALYSIS – A MATRIX-BASED APPROACH: a lot of different matrices, which seem to weigh already-found issues.

ICT risk matrix / Managing Cybersecurity Risks Using a Risk Matrix: more like a chart. I found a lot of those via image search, but most of them were on private blogs.
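To make my guess concrete, here is a minimal sketch of what I imagine such a matrix might contain. The column names, scales, and scoring are purely my assumption, not any standard:

```python
# A minimal, assumed shape for such a matrix: one row per threat,
# scored by likelihood and impact (scales and columns are my guess).
risk_matrix = [
    # (threat, likelihood 1-5, impact 1-5, mitigation)
    ("SQL injection",         4, 5, "parameterized queries"),
    ("Cross-site scripting",  4, 4, "output encoding"),
    ("Broken authentication", 3, 5, "MFA, session hardening"),
]

# A common convention is to rank rows by risk score = likelihood * impact.
for threat, likelihood, impact, mitigation in sorted(
        risk_matrix, key=lambda row: row[1] * row[2], reverse=True):
    print(f"{threat}: score {likelihood * impact}, mitigation: {mitigation}")
```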

patching – Best Practices / Standards / Tools for an OEM Vulnerability CERT?

For an OEM selling high-volume, globally connected consumer electronics products, I am reviewing best practices for setting up a dedicated corporate network-security community emergency response team (CERT) that identifies security vulnerabilities and implements priority fixes, either targeted at cloud infrastructure or delivered to customers via over-the-air (OTA) software updates that patch client software.

Typical OTA processes for software improvements and bug patching on these kinds of high-volume, globally connected consumer electronics products lack the dedicated focus and delivery speed necessary for an acceptable critical-vulnerability response; hence my effort to investigate and identify a better strategy.

My background is in software planning and software development, historically focused on scientific computing and user-interface/front-end design. I have no formal education, accreditation, or domain knowledge in InfoSec, so I would like input from domain experts on the details of setting up such a workstream.

Naively I would assume the following needs:

  1. a product manager to collate reports and create tickets, plus an on-demand scrum master to break the work into tickets
  2. fast access to cross-functional planning team, including access to decision maker(s) with sufficient authority to make potentially risky business decisions to rapidly remediate pressing issues in both cloud and/or network software.
  3. coding / testing / validation resources on both cloud infrastructure and client software teams on-call to prioritize over ordinary work
  4. sufficient local expertise to diagnose both well known and novel threat vectors
  5. community bug reporting to solicit vulnerability finding & disclosure, possibly with some level of financial reward for reporting.
  6. intelligence gathering & on demand briefing to identify/ticket non-submitted security issues.
  7. appropriate legal knowledge to harmonize with regional regulatory requirements in markets where product is sold.
  8. a documentation process for tracking incidents, planning to reduce them (dashboards with metrics / version controlled ticket system containing open and closed issues)
  9. API considerations in the end product’s OTA system to handle security patches differently, and perhaps to force-install (or obtain pre-consent) when a very time-critical and severe issue is encountered.
  10. Some level of public disclosure of addressed issues. (MAYBE)

That said, beyond those general layman’s assumptions, I lack knowledge of the more InfoSec field-specific best practices in terms of:

  • Tools (bug reporting, ticketing (i), metrics)
  • Training (conferences / books / online courses / online documentation)
  • Certification (not sure if applicable)
  • Standards applicable to InfoSec CERT for security patching
  • Other unexpected / critical considerations

(i) For ticketing, I am knowledgeable from a software development / software planning perspective on preferred tools (e.g. JIRA), but I’m not sure whether they are considered ideal in this space for internal CERT ticketing at the tracking/planning level (my naive assumption is that the same tools possibly apply, but I’m unsure).

To make this request less open-ended: the desire is to identify the needs and topical requirements sufficient to implement a slim CERT appropriate for supporting a high-volume connected product, not necessarily a highly resourced CERT like one a platform provider might build. Thank you!

workflows – What are the best practices for starting/developing new features?

Do you develop the core/skeleton first and fast, and then focus on the details? Or do you focus on the details from the get-go? What do you think is best?

I would normally do the former: develop the new service, test it in a perfect environment, and then focus on details/texts/bugs. But I find that I sometimes miss things I didn’t account for or simply forgot to fix.

Looking for some feedback, and to see how others approach this.

sql server – Best practices for providing PHI/PII data to users in organization

I am looking for best practices for sharing data from a SQL Server database that contains PHI/PII with individuals who may not view the PHI/PII. In short, we maintain a SQL Server database that contains 30+ columns of PHI/PII. We need to provide datasets to certain individuals who cannot access the PHI/PII columns but can access the other fields to conduct different types of analyses.

Current structure: the database is 100 GB and is updated 4 times per day. All data resides in Azure SQL Server. The tables need to maintain metadata, so blob storage is not ideal (unless someone has an idea for maintaining metadata in blob storage). The data will be accessed through Power BI or Azure Databricks.

Several options come to mind:

  1. Create a DB role and deny access to the PHI/PII columns
  2. Create a new SQL Server database and an ETL that copies the non-PHI/PII data from one database to the new one
  3. Create a new schema, create views of the tables that do not contain the PII/PHI columns, and restrict users’ access to this schema only
  4. Build a script or leverage Azure Data Factory to copy data from the database to Azure Table storage.

Security is the number-one priority. Any advice is much appreciated.
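For example, option 3 might look something like the sketch below (Python here just to make the idea concrete; every table and column name is made up for illustration):

```python
# Sketch of option 3: generate a view exposing only the non-PHI/PII columns.
# All table and column names here are hypothetical.
ALL_COLUMNS = ["patient_id", "visit_date", "diagnosis_code", "cost", "ssn", "dob"]
PII_COLUMNS = {"ssn", "dob"}   # the columns to withhold

def masked_view_sql(table: str, schema: str = "deid") -> str:
    """Build a CREATE VIEW statement that omits the PII columns."""
    safe_cols = ", ".join(c for c in ALL_COLUMNS if c not in PII_COLUMNS)
    return f"CREATE VIEW {schema}.{table}_safe AS SELECT {safe_cols} FROM dbo.{table};"

print(masked_view_sql("encounters"))
```

Analysts would then be granted SELECT only on the `deid` schema, never on `dbo`.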

Which practices should I use when generating SMS codes for auth on my project?

Am I using a good algorithm? Maybe I need to use something better?

Which generator is being used by Math.random()?

Have a look at the footnote on that Mozilla page:

Math.random() does not provide cryptographically secure random numbers. Do not use them for anything related to security

Will it increase security if I check the codes sent in the last {$n} minutes in the DB and regenerate when the new code matches a previous one (the “same code sent twice” case), so the user always gets random codes like 5941-2862-1873-3855-2987 and never 1023-1023-2525-2525-3733-3733? I understand that the chance is low, but anyway…

No. You shouldn’t try to make numbers “more random” by avoiding repetitions. It’s a property of random numbers that there is a chance the next one will be the same as the previous one, and that’s OK. You would actually weaken them by discarding those $n last codes.
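Concretely, the codes should come from a CSPRNG rather than Math.random(). A minimal sketch using Python’s standard secrets module (your platform will have an equivalent):

```python
import secrets

def sms_code(digits: int = 6) -> str:
    """Generate a numeric SMS code from the OS CSPRNG."""
    # zfill keeps leading zeros, so the code is always `digits` characters long.
    return str(secrets.randbelow(10 ** digits)).zfill(digits)

print(sms_code())  # e.g. "049213"; occasional repeats across sends are expected and fine
```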

I would actually try to implement HOTP/TOTP for the SMS codes. You don’t really need to (a random number would do), but that way you could easily migrate users from SMS authentication to local-app authentication with no changes to the verifier code.
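To show there is not much code involved, here is a minimal HOTP sketch per RFC 4226 (HMAC-SHA1 over a counter with dynamic truncation), checked against the RFC’s published test vector:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# First test vector from RFC 4226, Appendix D (secret "12345678901234567890"):
print(hotp(b"12345678901234567890", 0))  # 755224
```

TOTP is the same construction with the counter derived from the current time.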

rest – B2B authentication best practices

Regardless of the type of application, having only one set of credentials is certainly bad practice. For starters, since it’s shared, it’s more likely to be treated less carefully; e.g. written down in places where people are likely to see it, and thus could more easily fall into the hands of an attacker. Once the password is compromised, it’s also a bigger deal to change it, since you need to notify everyone.

It also greatly reduces your ability to audit user activity; without a unique user ID, you can’t track who is doing what.

If the company already uses a central authentication service (e.g. Active Directory/LDAP) or single sign-on (SSO), it would be ideal to rely on that for authentication instead, with the added benefit that all the user information is already there, as well as group/permission information.
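To make the audit point concrete, a rough sketch of per-client credentials plus an attributable log (all names here are hypothetical, and in production you would store only a hash of the key):

```python
import secrets
from datetime import datetime, timezone

# Hypothetical sketch: issue each B2B client its own credential so activity
# is attributable, instead of one shared secret for everyone.
credentials = {}   # client_id -> API key (store only a hash in practice)
audit_log = []     # (timestamp, client_id, action)

def issue_key(client_id: str) -> str:
    key = secrets.token_urlsafe(32)
    credentials[client_id] = key
    return key

def record(client_id: str, action: str) -> None:
    # The unique client_id is what makes this log meaningful for auditing.
    audit_log.append((datetime.now(timezone.utc).isoformat(), client_id, action))

issue_key("acme-corp")
record("acme-corp", "GET /orders")
```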

user behavior – Best practices for long data entry forms

I’d like to know whether there are best practices for this kind of form.

There is a redesign project for a long data-entry form, not only for aesthetics and UX; some fields are also going to be deleted.
The users are employees; they already know the form and what they have to type in each input.

The form has 2 steps (two separate pages) and is designed in 3 columns; most of the fields are text inputs and dropdowns (some will be changed to radio buttons or checkboxes to match the information).

I added this image as an example; it’s not the actual product.


Given that the users already have muscle memory for this task and the goal is to aid their job, should the redesign be subtle (keep the columns and make small changes), or would a bigger one be better in the long run?

Any other suggestion is appreciated!

programming practices – counting identifiers and operators as a code size metric

I’m looking for a code metric to monitor and track over time the size of several projects and their components.

Also, I would like to use it to:

  • evaluate size reduction after refactoring
  • compare the size/length of two implementations of the same specification, even across languages.

I know there are cyclomatic complexity and the ABC metric for complexity, but in addition to those I want a separate metric for the length/size/volume/extent of some code, regardless of its complexity.

Being aware of the advantages and disadvantages of SLOC, I wouldn’t use it for these purposes, mainly because I’m trying to measure code that is in different styles or languages.

For example this method body has 3 SLOC:

  public static String threeLines(String arg1) {
    String var1 = arg1 + " is";
    String var2 = var1 + " something";
    return var2;
  }

So does this one:

  public String otherThreeLines(String arg1) {
    IntStream stream1 = Arrays.stream(arg1.split(";")).sequential().map(s -> s.replaceAll("\\(element", "")).map(s2 -> s2.replaceAll("\\)", "")).mapToInt(Integer::parseInt);
    double var1 = stream1.mapToDouble(Double::new).map(d -> d / 2).sum();
    return String.valueOf(var1);
  }

Clearly, the second one is “bigger” or “longer”; it has more to read and think about, so I would like it to have a higher value in the metric.

The aim is not to evaluate whether some piece of code is good or bad based on this metric; it’s just for statistical analysis.

It would also be nice if it were simple to implement, without needing to fully parse the file’s language.

So, I’m thinking of counting identifiers, keywords, and operators.
For example, this fragment

String var2 = var1 + " something";

could be analyzed as (String) (var2) (=) (var1) (+) (" something"); and have a score of 6

And this fragment from the second method:

double var1 = stream1.mapToDouble(Double::new).map(d -> d / 2).sum();

could be analyzed as (double) (var1) (=) (stream1).(mapToDouble)((Double)::(new)).(map)((d) (->) (d) (/) (2)).(sum()); and receive a score of 14

So the size/length of the second one should be roughly 2× that of the first one.
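In Python terms, just to make the idea testable, the counter could be a simple regex scan. The token classes below are my own choice, tuned so the two fragments above score 6 and 14:

```python
import re

# Token classes that count toward the score (my own choice, tuned to match
# the worked examples): string literals, '->', identifiers/keywords,
# numbers, and common operators. Punctuation such as '.', ',', ';',
# parentheses and '::' is deliberately not counted.
TOKEN = re.compile(r'''
    "(?:\\.|[^"\\])*"      # string literal, counted as a single token
  | ->                     # lambda arrow
  | [A-Za-z_]\w*           # identifier or keyword
  | \d+(?:\.\d+)?          # numeric literal
  | [=+\-*/%<>!&|^]=?      # assignment / arithmetic / comparison operators
''', re.VERBOSE)

def size_score(code: str) -> int:
    """Count the tokens that contribute to the size metric."""
    return len(TOKEN.findall(code))

print(size_score('String var2 = var1 + " something";'))                                     # 6
print(size_score('double var1 = stream1.mapToDouble(Double::new).map(d -> d / 2).sum();'))  # 14
```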

Are there any known code metrics that would show similar results?

design – Temporary features – Good practices

The key issue you seem to be describing is a lack of modularity. In other words, your system must be altered at a fundamental level since there are no mechanisms to add those features as a module.

There are different levels of modularity, and what is most appropriate depends on what kind of application you are building. Each of these represents a different type of modularity:

  • Plugins: popularized in desktop applications, plug-ins extend the base product with new features. It could be an editing mode, or a way to process pictures, etc.
  • Extensions: extensions integrate more pervasively, but have a similar impact. An extension can add new tables, as well as code that works with those tables. Extensions can be either server-side or client-side.
  • Microservices: encapsulate a set of functionality on the server side. A microservice is intended to be fully encapsulated and deployed as an independent unit.

These are not the only ways of extending your application. The key takeaway here is that you have to design for modularity. When you have temporary features, you need to be able to add support for the feature for the time it’s necessary, and then remove that capability when it is no longer needed.

So, inside your module you have to decide how to store data:

  • Don’t extend existing tables. Either add a new table with a 1:1 mapping of records, or track that information outside of your database
  • Plan how the user interface gets the new fields, etc.

The bottom line is that it takes longer to build modular code. There’s more to plan and think about. However, if the infrastructure that enables modular code is in place, it does become easier to add your temporary features and remove them when they are no longer necessary.
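As a sketch of the plugin idea (all names hypothetical), a minimal registry where a temporary feature can be added and later removed without touching the core:

```python
from typing import Callable, Dict

# Hypothetical plugin-style registry: a temporary feature registers itself
# and can later be removed without touching the core code paths.
class FeatureRegistry:
    def __init__(self) -> None:
        self._features: Dict[str, Callable[[str], str]] = {}

    def add(self, name: str, handler: Callable[[str], str]) -> None:
        self._features[name] = handler

    def remove(self, name: str) -> None:
        # Removal is one call; the core never referenced the feature directly.
        self._features.pop(name, None)

    def run(self, name: str, payload: str) -> str:
        handler = self._features.get(name)
        return handler(payload) if handler else payload

registry = FeatureRegistry()
registry.add("holiday_banner", lambda page: page + " [holiday banner]")
print(registry.run("holiday_banner", "home"))   # feature active
registry.remove("holiday_banner")
print(registry.run("holiday_banner", "home"))   # feature cleanly gone
```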

database – Erlang: Seeking advice on best practices with an ETS table manager

I wrote a somewhat “simple” module to retain ETS tables should the owner crash. I think it’s small enough for review, yet big enough to contain enough mistakes. Honestly, I’m a hobbyist programmer aiming for production code. I will take any criticism that gets me closer.

More logging? Use of spec()? Anything.

Thank you.


-module(etsmgr_sup).   %% module name assumed; the original header was lost
-behaviour(supervisor).

-define(MASTER_TABLE, etsmgr_master).

-export([start_link/1, init/1]).
-export([spawn_init/1]).   %% spawn/3 below requires spawn_init/1 to be exported

%% ====================================================================
%% API/Exposed functions
%% ====================================================================

spawn_init(SupRef) ->
    register(ets_fallback, self()),
    monitor(process, SupRef),
    loop().

%% ====================================================================
%% Internal functions
%% ====================================================================

loop() ->
    receive
        {give_away, {?MASTER_TABLE, Pid}} ->
            try
                {registered_name, ets_manager} = erlang:process_info(Pid, registered_name),
                ets:give_away(?MASTER_TABLE, Pid, [])
            catch
                error:{badmatch, _} ->
                    logger:error("Illegal process (~p) attempting ETS Manager table ownership.", [Pid]),
                    {error, badmatch};
                error:badarg ->
                    gen_server:cast(ets_manager, initialize);
                Type:Reason ->
                    logger:error("Unhandled catch -> ~p : ~p", [Type, Reason]),
                    {Type, Reason}
            end,
            loop();
        {'DOWN', _, _, _, _} ->
            case proplists:get_value(owner, ets:info(?MASTER_TABLE)) =:= self() of
                true  -> ets:delete(?MASTER_TABLE);
                false -> continue
            end;
        _ ->
            loop()
    end.

%% ====================================================================
%% Behavioural functions
%% ====================================================================

start_link([]) ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    Pid = spawn(?MODULE, spawn_init, [self()]),

    {ok, {#{}, [

        % === ETS Manager: gen_server to not lose table data
        #{  id => ets_manager,
            start => {ets_manager, start_link, [Pid]}}
    ]}}.



-module(ets_manager).
-behaviour(gen_server).

-define(MASTER_TABLE, etsmgr_master).

-export([start_link/1, init/1, handle_call/3, handle_cast/2,
         handle_info/2, terminate/2, code_change/3]).

%% ====================================================================
%% API functions
%% ====================================================================
-export([request_table/1, create_table/2, create_table/3, update_pid/3]).

request_table(TableId) ->
    gen_server:call(?MODULE, {tbl_request, TableId}).

create_table(TableId, TblOpts) ->
    create_table(TableId, TblOpts, []).

create_table(TableId, TblOpts, HeirData) ->
    gen_server:call(?MODULE, {tbl_create, TableId, TblOpts, HeirData}).

update_pid(TableId, Pid, HeirData) ->
    case process_info(Pid, registered_name) of
        {registered_name, ?MODULE} ->
            ets:setopts(TableId, {heir, Pid, HeirData});
        _ ->
            {error, eperm}
    end.

%% ====================================================================
%% Behavioural functions
%% ====================================================================
start_link(Pid) when is_pid(Pid) -> gen_server:start_link({local, ?MODULE}, ?MODULE, Pid, []).

%% ====================================================================
%% Framework Functions
%% ====================================================================

%% init/1
%% ====================================================================
init(FallbackPID) ->
    FallbackPID ! {give_away,{?MASTER_TABLE, self()}},
    {ok, FallbackPID}.

%% handle_call/3
%% ====================================================================
handle_call({tbl_request, TableId}, {Pid, _}, FallbackPID) ->
    Me = self(),

    case ets:lookup(?MASTER_TABLE, TableId) of
        [{TableId, Me, Requestor, HeirData}] ->
            %% Verify the caller is the table's original requestor.
            case process_info(Pid, registered_name) of
                {registered_name, Requestor} ->
                    ets:give_away(TableId, Pid, HeirData),
                    {reply, {ok, TableId}, FallbackPID};
                _ ->
                    {reply, {error, eaccess}, FallbackPID}
            end;
        [] ->
            {reply, {error, einval}, FallbackPID};
        [{TableId, _, _, _}] ->
            {reply, {error, ebusy}, FallbackPID}
    end;

handle_call({tbl_create, TableId, TblOpts, HeirData}, {Pid, _}, FallbackPID) ->
    Opts = proplists:delete(heir, proplists:delete(named_table, TblOpts)),

    Requestor =
        case process_info(Pid, registered_name) of
            {registered_name, Module} -> Module;
            _ -> []
        end,
    Reply =
        try
            ets:new(TableId, [named_table | [{heir, self(), HeirData} | Opts]]),
            ets:insert(?MASTER_TABLE, {TableId, Pid, Requestor, HeirData}),
            ets:give_away(TableId, Pid, HeirData)
        catch
            _:_ ->
                case ets:info(TableId) of
                    undefined -> continue;
                    _ -> ets:delete(TableId)
                end,
                ets:delete(?MASTER_TABLE, TableId),
                {error, ecanceled}
        end,

    {reply, Reply, FallbackPID}.

%% handle_info/2
%% ====================================================================
handle_info({'ETS-TRANSFER', ?MASTER_TABLE, _, _}, FallbackPID) ->
    [{?MODULE, OldPid}] = ets:lookup(?MASTER_TABLE, ?MODULE),
    ets:foldl(fun xfer_state/2, OldPid, ?MASTER_TABLE),
    {noreply, FallbackPID};

handle_info({'ETS-TRANSFER', TableId, _, _}, FallbackPID) ->
    ets:update_element(?MASTER_TABLE, TableId, {2, self()}),
    {noreply, FallbackPID}.

%% handle_cast/2
%% ====================================================================
handle_cast(initialize, FallbackPID) ->
    ?MASTER_TABLE = ets:new(?MASTER_TABLE, [named_table, set, private, {heir, FallbackPID, []}]),
    ets:insert(?MASTER_TABLE, {?MODULE, self()}),
    {noreply, FallbackPID}.

%% Placeholders
%% ====================================================================
terminate(_, _) ->
    ok.

code_change(_, FallbackPID, _) ->
    {ok, FallbackPID}.

%% ====================================================================
%% Internal Functions
%% ====================================================================

xfer_state({TableId, OldPid, _, _}, OldPid) ->
    ets:delete(?MASTER_TABLE, TableId), OldPid;
xfer_state({TableId, Pid, _, HeirData}, OldPid) ->
    Pid ! {'ETS-NEWMANAGER', self(), TableId, HeirData}, OldPid;
xfer_state({?MODULE, OldPid}, OldPid) ->
    ets:insert(?MASTER_TABLE, {?MODULE, self()}), OldPid.