logging – PHP: need a log-to-console solution that can be deployed offline to an air-gapped system

Just like the title says. I CANNOT use Composer/Packagist/whatever, because the system is air-gapped and does not have access to the internet, nor will it ever. Also, this is not a duplicate of this question.

I need a solution that can log PHP messages/data to the console WITHOUT using echo/print or anything else that writes to php://stdout, which disrupts JSON exchanges with an internal server. There used to be a Monolog implementation that did this, but I haven't been able to locate it, and the current Monolog deployment is all via Composer. This must be a STATIC install using basic include and require.
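
For context, something along these lines is what I'm after: an untested minimal sketch that writes to php://stderr so nothing touches the stdout stream (the class and file names here are mine):

<?php
// stderr_logger.php -- include/require this file directly; no Composer needed.
class StderrLogger
{
    private $stream;

    public function __construct()
    {
        // php://stderr leaves php://stdout free for the JSON exchange.
        $this->stream = fopen('php://stderr', 'w');
    }

    public function log($level, $message, array $context = array())
    {
        fwrite($this->stream, sprintf(
            "[%s] %s: %s%s\n",
            date('c'),
            strtoupper($level),
            $message,
            $context ? ' ' . json_encode($context) : ''
        ));
    }
}

// usage:
// require 'stderr_logger.php';
// $log = new StderrLogger();
// $log->log('debug', 'request received', array('bytes' => 512));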

If there is such a beast, please reply. I'm not interested in opinions on the operational constraints, so please do not post them; I'm only interested in a solution that meets the criteria.

If this is the wrong exchange for this question, kindly point me to the correct one. Thank you.

oauth2 – API keys or Client Credentials flow? Good practice to control application access to a deployed web component

Company A developed a widget (Web Component) that is deployed at several clients/partners.

Only clients/partners must be authorized to use the widget.
There is no need to distinguish between individual end users (the clients' users), because only the application, i.e. the widget itself, must be authenticated/authorized.

I thought about using a dedicated API key per client, stored on a reverse proxy (a mini-backend) hosted on the client's infrastructure.
This way, the widget would target the reverse proxy, which would supply the hidden API key when talking to Company A's backend.

Pros of this solution:
No front-end development is required on the client's infrastructure.
Cons:
If the API key is stolen (an extreme case), it has no expiration by default, so anyone could use it at any time, unless additional checks on domain and IP/DNS are carried out.

Alternatively, what about the Client Credentials flow of OAuth2? It would consist of server-to-server communication between Company A's backend AND the client's backend to obtain a token that allows the client/partner to request a business token, which expires in the short run.
Thus, the widget would be passed the business token in order to use Company A's backend features at any time before expiration.
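
For illustration, the service-to-service part would be a standard client credentials token request from the client's backend; the endpoint, credentials, and scope below are hypothetical:

curl -X POST https://auth.company-a.example/oauth/token \
  -d grant_type=client_credentials \
  -d client_id=partner-123 \
  -d client_secret=partner-secret \
  -d scope=widget.api
# response: {"access_token":"...","token_type":"Bearer","expires_in":600}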

Pros of this solution:
The token can expire, so a stolen token has less damage potential than a stolen API key that never expires.
Cons:
Backend development is required on the client's side to handle the Client Credentials flow (service to service).
Front-end development is required on the client's infrastructure to provide the business token to the widget.

What would you suggest?

workflow – What permissions are required to invoke a custom WCF web service deployed to the ISAPI folder (exposed through /_vti_bin/)?

Related / follow-up to this question.

Because a site’s email settings (SMTP server, “from” address, etc.) are not available through any client-side APIs, I created a mini WCF web service that is deployed to the ISAPI folder. It takes a site’s URL as a query parameter, uses that to open SPSite and SPWeb objects in server-side code, retrieves the email settings, and returns them in a simple JSON payload.

It works perfectly fine when I test it from Postman.

But the main reason I need it is that I need to use the HttpSend web request action inside a Visual Studio declarative custom action (.xaml file) in order to get those settings and pass them on to a custom code activity run by Workflow Manager. (I’ve gotten CSOM code to run inside the custom code activity, but again, I can’t get those settings from any client APIs.)

I’m used to the general rule that a workflow runs with the permissions of the person who initiated it, but when I test my workflow and it gets to that step, I get a 401 UNAUTHORIZED. And not only that: as one of the first lines in my web service method, I log to ULS that the web service was invoked. I can see those log entries from the times I invoked the web service from Postman, but I don’t see them from when I tried it with the workflow. That tells me it isn’t even choking on the part where I call SPSite site = new SPSite(siteURL); it’s not getting that far, because the initial log entry isn’t there.

So… what permissions do I need to set up to enable a workflow to invoke a custom WCF web service at <site>/_vti_bin/path/to/service.svc?

I’m no expert at WCF, so I haven’t set up anything that I can see around authentication/authorization there. Do I need to do something explicit there? (Why would I, if it works for me from Postman as it is currently set up?)

Do I need to set up workflows to run with elevated permissions using the whole app permission model? If so, what would the minimum permission level need to be just to get the WCF service to run? I’d rather not give workflows full control over the site, and I have no problem using SPSecurity.RunWithElevatedPrivileges inside the web service to retrieve the values I need, as long as I can get it invoked in the first place.
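
For reference, the elevation I have in mind inside the service is just a thin wrapper like this sketch (the actual reading of the email settings is elided):

SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite site = new SPSite(siteURL))
    using (SPWeb web = site.OpenWeb())
    {
        // read the site's outbound email settings here and
        // add them to the JSON payload returned by the service
    }
});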

Azure bot service issue: Test in web chat not available for deployed bot

I deployed my bot, which works fine locally, to the Azure portal, and the deployment succeeded according to the message from the Deployment Center. But when I try to test the bot in Web Chat, the chat window does not show up; there is only an error message: ‘Something went wrong, please contact the site administrator’. What may be the reason for this error, and how can I fix it?

Thanks in advance!

How to pass a spring.config.additional-location context parameter to a WAR deployed in WebSphere 8.5?

I have a Spring Boot webapp.war that was developed for and deployed to Tomcat 8.5, where it is configured using a Tomcat context file like so:

<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <Parameter name="spring.config.additional-location"
               value="/path/to/additional-location-application.yml" />
</Context>

I’d like to deploy this on an existing WebSphere 8.5 server; however, I am having trouble understanding how to provide this optional context parameter to the application.

There is a similar question here: https://stackoverflow.com/questions/19968783/how-to-define-context-parameters-in-the-websphere-config-instead-of-the-web-xml

Ideally, however, I don’t want to modify the original web.xml in the WAR file. It’s also possible to provide this parameter in the JVM options, like -Dspring.config.additional-location=/path/to/additional-location-application.yml; the problem with doing it that way is that it would affect all the Spring Boot applications deployed on the same server.

Notes:
Suggestion to use environment parameters instead:
https://ibm.software.websphere.application-server.narkive.com/Rcr0kBuU/how-to-define-context-parameters-with-was
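
For completeness, Spring Boot’s relaxed binding should also accept the property as an environment variable, so if WebSphere can set one for the server (again, per server rather than per application) it would look like this:

SPRING_CONFIG_ADDITIONAL_LOCATION=/path/to/additional-location-application.yml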

javascript – problem with localStorage when deployed to the server

I’m having a lot of problems using localStorage.
It works perfectly on my PC, but when I deploy to the server, it no longer saves anything in the browser. Here is the code that writes to localStorage:

// note: "(" and ")" must be escaped in the selector, and the backslash
// itself must be escaped inside the string, hence the double backslashes
const inputs = {
  eMail: document.querySelector("#mauticform\\(email\\)"),
  loanOptions: document.querySelector("#mauticform\\(loan_type\\)")
}
const button = document.querySelector('#email-form > a')
const form = document.querySelector('#email-form')
// \s matches whitespace; /s/g would only replace the letter "s"
const replaceSpaces = str => str.replace(/\s/g, '%20')
    
button.addEventListener('click', (e) => {
  e.preventDefault()
  if(inputs.loanOptions.value == "") {
    alert('You need to choose a loan type')
    return
  }
  localStorage.setItem('email', replaceSpaces(inputs.eMail.value))
  localStorage.setItem('loan_type', replaceSpaces(inputs.loanOptions.value))
  window.open('myNextUrlHere(i cant show it)')
  form.submit()
})

And here is the code that receives and uses it:

const typeformDiv = document.querySelector('.typeform-widget')
typeformDiv
  .setAttribute('data-url', `anotherUrlHere.com#email=${localStorage.getItem('email')}&loan_type=${localStorage.getItem('loan_type')}`)
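
A quick sanity check like this sketch, run in the deployed page’s browser console, should at least show whether storage is being blocked outright (some browsers throw a SecurityError when storage is disabled, e.g. for third-party iframes):

try {
  localStorage.setItem('probe', '1')
  console.log('localStorage works:', localStorage.getItem('probe'))
} catch (err) {
  console.error('localStorage blocked:', err)
}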

My issue is: how can I make it work on the server?

python – In my Flask website deployed on Heroku, if statements are not working

@app.route('/new', methods=('GET', 'POST'))
def new():
    if request.method == 'POST':
        # module-level variables shared across requests
        global T_1_1
        global T_1_2
        global T_1_3
        global T_1_4
        global T_1_5
        global T_1_6
        global T_1_7

        # session and request.form are dict-like: index with [], not ()
        session['buttonclicked'] += 1

        T_1_1 = request.form['T_1_1']
        T_1_2 = request.form['T_1_2']
        T_1_3 = request.form['T_1_3']
        T_1_4 = request.form['T_1_4']
        T_1_5 = request.form['T_1_5']
        T_1_6 = request.form['T_1_6']
        T_1_7 = request.form['T_1_7']

    t1 = T_1_1 + T_1_2 + T_1_3 + T_1_4 + T_1_5 + T_1_6 + T_1_7
    concepts3 = "abcd"
    if int(t1) < 70:
        concepts3 = ",Kinematics-Graphs,Centre of Mass"
    if int(t1) < 63:
        concepts3 = concepts3 + ",Moment of Inertia,One Dimensional Motion,Newton's Laws"
    if int(t1) < 52:
        concepts3 = concepts3 + ",Work Done and Power,Impulse, Explosions and Collisions"
    if int(t1) < 42:
        concepts3 = concepts3 + ",Projectile Motion,Pure Rolling or Rolling without Slipping,Angular Momentum and its Conservation"
    if int(t1) < 37:
        concepts3 = concepts3 + ",Problems of Circular Motion,Friction"
    if int(t1) < 33:
        concepts3 = concepts3 + ",Linear Momentum, Mechanical Energy and Their Conservation"

    session['t1'] = t1
    session['concepts3'] = concepts3
    return redirect(url_for("page_decider"))

The above is my code. When the function is called, the if statements are not working: concepts3 only ever contains the first value assigned to it, and nothing is added by the if blocks. I have deployed the app on Heroku. When I run it locally it works properly, but on Heroku it does not, and I don't understand why. Please help me solve the problem.
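
One thing that may matter here: request.form values are strings, so the + in t1 = T_1_1 + ... concatenates rather than adds. A hypothetical illustration of the difference:

# request.form values are strings, so + concatenates them:
'10' + '9' + '8'                   # -> '1098'; int('1098') is never < 70
# converting first gives the arithmetic sum the thresholds expect:
int('10') + int('9') + int('8')    # -> 27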

linux – Docker Swarm service not deployed to worker nodes

I’m doing some fiddling with Docker Swarm to create a load balancer for a Minecraft server.

I created a service that uses a mount of type bind for the data, but when I create the service, it is only available on the manager and not on the worker node.

I tried re-joining the worker node to the swarm, but that did nothing; I also removed any images the worker node had downloaded, just in case they were old. Nothing I have done has helped, but if I run a different service like nginx, it does get deployed on the worker nodes.

This is what I am running:

docker service create --name minecraft -p 19132:19132/udp --mount type=bind,src=/opt/minecraft,dst=/opt/minecraft repo/images:minecraft
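
To see where the scheduler is placing the tasks and what errors the workers report, I believe the per-task state can be inspected like this (standard Docker CLI):

docker service ps --no-trunc minecraft   # per-task node, state, and error message
docker node ls                           # check the workers are Ready and Active
# note: with type=bind, the src path /opt/minecraft must already
# exist on any node a task is scheduled to, or the task fails there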

Any idea why this is not working? I remember doing this a couple of months ago and it worked just fine, but now that I am returning to this experiment, it’s not working.

sharepoint online – Deployed web part from Azure not showing in subsites

I am deploying a web part from Azure. After my pipelines are completed, the web part is deployed successfully to the store, and I can access it from the main site. But once I try to add it on a subsite, it is not visible, so I cannot add it. Any ideas how I can fix that?

Here is my release pipeline structure:

  1. Use Node 10.x
  2. npm install -g @pnp/office365-cli
  3. o365 login https://$(tenant).sharepoint.com/$(catalogsite) --authType password --userName $(username) --password $(password)
  4. o365 spo app add -p "$(System.DefaultWorkingDirectory)/$(company)/drop/$(webpart_name).sppkg" --overwrite
  5. o365 spo app deploy --name project-details.sppkg --appCatalogUrl https://$(tenant).sharepoint.com/$(catalogsite)

When I run my build pipeline and upload the file manually, it works and shows up everywhere, but not when I automate the process. During a manual upload I get a popup asking if I agree to deploy this web part to SharePoint Online. Maybe this is where it is failing.
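
If that popup is the consent dialog that also asks to make the solution available to all sites, the automated equivalent may be the skipFeatureDeployment flag on the deploy step (assuming the package was built with skipFeatureDeployment support):

o365 spo app deploy --name project-details.sppkg --appCatalogUrl https://$(tenant).sharepoint.com/$(catalogsite) --skipFeatureDeployment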

Any help is appreciated!