macbook pro – How to make and receive calls on an MBP via Bluetooth?

I used to use the HandsFree 2 app. It worked perfectly and even recorded calls, which was incredibly useful. But in Catalina it crashes, and it seems to be discontinued.

I have been scouring for an alternative and have tried a couple (PhoneCall comes to mind), but none of them worked. I'm using an Android phone with modern Bluetooth support and an MBP 16″ with Catalina 10.15.3.

What options do I have?

development – SystemUpdate() inside my remote event receiver raises "Access denied. You do not have permission to perform this action or access this resource."

I have the following code inside my remote event receiver (which runs on ItemAdded):

using (ClientContext context = TokenHelper.CreateRemoteEventReceiverClientContext(properties))
{
    currentItem["OrderAssignToApprover2"] = new FieldUserValue() { LookupId = spUser.Id };
    currentItem.SystemUpdate();
    context.ExecuteQuery(); // CSOM only sends the pending changes to the server here
}

Now if a non-admin user adds an item, the remote event receiver raises this error on the SystemUpdate() call:

Access denied. You do not have permission to perform this action or access this resource.

But if an admin user adds an item, the remote event receiver works fine. It also works if I change the remote event receiver to run using app-only permissions, as follows:

using (ClientContext context = Helpers.GetAppOnlyContext(properties.ItemEventProperties.WebUrl))
{
    currentItem["OrderAssignToApprover2"] = new FieldUserValue() { LookupId = spUser.Id };
    currentItem.SystemUpdate();
    context.ExecuteQuery(); // CSOM only sends the pending changes to the server here
}

So can I assume that SystemUpdate (unlike Update) requires the user to have Full Control on the site? If that is the case, is there a way to allow non-admin users to execute SystemUpdate?
Thanks

scalability – Trying to understand NIC receive / send operations per second estimations for non-abstract system design

I am working through resources related to non-abstract large-scale system design, specifically the Google system design exercise in this video.

In the example solution presented, a calculation is made for the number of write requests a write server can pass through to an underlying storage subsystem. Essentially, the write service receives 4MB images from users and, for each one, calls the storage system's write operation in parallel. We assume that the storage subsystem scales infinitely. The write service hardware has a NIC capable of 1GB/s operation. We assume that the server has enough CPU and cache/memory to fully saturate the link up and down.

The example video tried to estimate the total number of write operations that a single server can achieve per second.

They state that:

  • It takes 4ms to receive the file from the user (4MB / 1000 MB/s).
  • It takes 4ms to send the file to the storage back end (4MB / 1000 MB/s).
  • Therefore it takes 8ms to 'save' the file.
  • Therefore a single server instance can process 125 writes / second.

But this feels a bit wrong to me. If the server hardware is a standard NIC connected to a standard switch, then the connection is full duplex, so the bandwidth up and down is not shared. Wouldn't the write rate then be roughly 250 / second?
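The two estimates can be checked with back-of-the-envelope arithmetic, using the same assumptions as the video (1 GB/s ≈ 1000 MB/s link, 4 MB images):

```python
IMAGE_MB = 4
LINK_MB_PER_S = 1000  # ~1 GB/s, as assumed in the video

recv_ms = IMAGE_MB / LINK_MB_PER_S * 1000  # 4 ms to receive from the user
send_ms = IMAGE_MB / LINK_MB_PER_S * 1000  # 4 ms to forward to storage

# If receive and send share the link (the video's implicit model),
# the two transfers serialize:
shared_writes_per_s = 1000 / (recv_ms + send_ms)   # 125.0

# If the link is full duplex, receive and send can overlap (pipelined),
# and the bottleneck is the slower direction alone:
duplex_writes_per_s = 1000 / max(recv_ms, send_ms)  # 250.0

print(shared_writes_per_s, duplex_writes_per_s)
```

The full-duplex figure only holds if transfers are pipelined, i.e. the server is already receiving the next image while forwarding the current one.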

python 3.x – Vue.js doesn't receive Socket.io event from Flask

I created a simple Flask app:

from flask import Flask, render_template, jsonify
from flask_cors import CORS
import socketio

app = Flask(__name__)
app.config["SECRET_KEY"] = 'very-secret'
CORS(app)

sio = socketio.Server(cors_allowed_origins='*', logger=True, async_mode=None)
app.wsgi_app = socketio.WSGIApp(sio, app.wsgi_app)

@sio.event
def connect(sid, environ):
    print('connect ', sid)
    sio.emit('hello_world', {'data': 'A'})

@app.route('/api/ping', methods=['GET'])
def ping():
    sio.emit('hello_world', {'data': 'B'})

    return jsonify({
        'ok': True,
        'data': {
            'message': "Alive!"
        }
    })


if __name__ == '__main__':
    app.debug = True
    app.run(threaded=True)

and then I created a simple Vue app from the vue-cli starter, the only difference being main.js, where I added a connection to the socket server:

import Vue from 'vue'
import App from './App.vue'

import VueSocketIO from "vue-socket.io"

Vue.config.productionTip = false

Vue.use(
    new VueSocketIO({
        debug: true,
        connection: "http://localhost:5000"
    })
)

new Vue({
  render: h => h(App)  
}).$mount('#app')

and a HelloWorld.vue component where I changed the script part:

export default {
    name: 'HelloWorld',
    props: {
        msg: String
    },
    sockets: {
        connect () {
            console.log("Connected")
        },
        hello_world (data) {
            console.log(data)
        }
    },
    created () {
        this.$socket.emit("hello_world", {"data": "test!"})
    }
}

When I start the Flask app and then the Vue app, I see the following log in Flask:

$ python app.py
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 386-990-364
connect  dcf04629bd8245239789ae7c59fce915
emitting event "hello_world" to all (/)
127.0.0.1 - - [20/May/2020 19:15:22] "GET /socket.io/?EIO=3&transport=polling&t=N8p5ICr HTTP/1.1" 200 -
received event "hello_world" from dcf04629bd8245239789ae7c59fce915 (/)
127.0.0.1 - - [20/May/2020 19:15:22] "POST /socket.io/?EIO=3&transport=polling&t=N8p5IE0&sid=dcf04629bd8245239789ae7c59fce915 HTTP/1.1" 200 -

This shows that Vue connected and that Flask received the message emitted by Vue, but my problem is that Vue doesn't receive anything from the server, even though the server emitted hello_world. Neither the connect nor the hello_world function inside the sockets block of the HelloWorld component runs. What am I missing here?

Receive 10,000 plr articles on credit, finance, insurance, mortgage for $2

Receive 10,000 plr articles on credit, finance, insurance, mortgage

Receive 10,000+ High Quality PLR articles on Credit Repair Articles, Sample Dispute Letters to improve credit score, Finance Insurance Mortgage Loan Credit within 24 hours!

Each article is ~300–1000 words long.

Below are just some of the niches covered by these PLRs:

Credit

Credit Repair

Sample Credit

Fix Dispute letter

Currency

Trading

Debt

Debt Consolidation

Fundraising

Insurance

Investing

Leasing

Loans

Mortgage

Mutual Funds

Personal Finance

Stock Market

Taxes and many more … !

You are free to modify, edit and use these articles as per your requirements. Bundled with loads of quality information, this pack will help create valuable content for your website, blogs, social media, etc.

All articles have high keyword density and are SEO-optimized.


JavaScript – All HTTPS requests to the API receive RequestError: connect ETIMEDOUT, but HTTP requests work fine

I am trying to set up a web app that sends data over HTTPS instead of HTTP. I have a current, valid SSL certificate from Let's Encrypt. I use Node to set up a regular HTTP server on port 8080 and an HTTPS server on port 8081. The Node server runs on an Amazon Linux 2 instance that also runs Apache.

const express = require("express");
const mongoose = require("mongoose");
const cors = require("cors");
const https = require("https");
const http = require("http");
const fs = require("fs");

const app = express();
app.use(cors());
app.use(express.json());

//Connect to MongoDB database
const db = require("./config/keys").mongoURI;
mongoose
  .connect(db, { useNewUrlParser: true })
  .then(() => console.log("MongoDB Connected"))
  .catch((err) => console.log(err));

//Link to routing files
app.use("/", require("./routes/index"));
app.use("/users", require("./routes/users"));

const httpServer = http.createServer(app);
const httpsServer = https.createServer(
  {
    key: fs.readFileSync("/etc/letsencrypt/live/trunk.pw/privkey.pem", "utf8"),
    cert: fs.readFileSync("/etc/letsencrypt/live/trunk.pw/cert.pem", "utf8"),
    ca: fs.readFileSync("/etc/letsencrypt/live/trunk.pw/chain.pem", "utf8"),
  },
  app
);

httpsServer.listen(8081, () => console.log("Server started on port 8081"));
httpServer.listen(8080, () => console.log("Server started on port 8080"));

When I make a request to the server like GET http://servername:8080/users/login, I get exactly what I expect from the GET /login route

router.get("/login", (req, res) => {
  console.log("Something");
  res.json({ message: "This is the login page!" });
});

I get "This is the login page!"

When I try to make a request to the HTTPS server like GET https://servername:8081/users/login, this error is always displayed:

Please check your networking connectivity and your time out in 0ms according to your configuration 'rest-client.timeoutinmilliseconds'. Details: RequestError: connect ETIMEDOUT 3.21.190.112:8081.

I have no idea why this doesn't work. Does anyone have any suggestions I could try?
Thank you so much!
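ETIMEDOUT happens before the TLS handshake even starts, so one thing worth ruling out is whether port 8081 is reachable at the TCP level at all. A small probe (the host and ports mirror the question; nothing TLS-specific is tested):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP handshake to host:port completes in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False

# Example: tcp_reachable("3.21.190.112", 8080) vs tcp_reachable("3.21.190.112", 8081)
```

If 8080 connects but 8081 times out, the problem sits below TLS — typically a firewall rule or an AWS security group that only opens 8080, rather than anything in the Node/certificate setup.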

Send and receive data from a Kubernetes job

I'm working on a containerized API where, through a front-end service, the user can interact in two steps (one asynchronous and one synchronous) with the output generated by the first step.

I wasn't sure whether to post this here or on SO, but since it's more about design than implementation, I decided to post it here. Let me know if you think it's more appropriate for SO.

The flow looks like this:

  1. The user has requested to run an asynchronous job, for which they have been given a unique identifier. The job creates a model that the user wants to test.

  2. They send a POST request to a front-end service with the unique identifier and the user-defined data (JSON) with which they want to test the model.

  3. The front-end service starts a Kubernetes job.

  4. The init container requests the model.

  5. The main container loads the model.

  6. The front-end service somehow sends a compute request to the main container using the user-supplied JSON.

  7. The answer is forwarded to the user.

  8. The pod stays up for a while, so that similar requests for the same model don't have to boot up new pods every time.

  9. After a while, the pod is shut down and the job ends.
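The init-container/main-container split and the "shut down after a while" behavior in the list above map directly onto fields the Job API already exposes. A minimal manifest sketch (the image names, volume layout, and deadline values are made-up placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: model-runner-unique-id     # one Job per model request (placeholder name)
spec:
  activeDeadlineSeconds: 600       # kill the pod after a while
  ttlSecondsAfterFinished: 60      # clean up the finished Job object
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: fetch-model        # the init container retrieves the model
          image: example/model-fetcher:latest
          volumeMounts:
            - name: model
              mountPath: /model
      containers:
        - name: serve-model        # the main container loads and serves it
          image: example/model-server:latest
          volumeMounts:
            - name: model
              mountPath: /model
      volumes:
        - name: model
          emptyDir: {}             # shared scratch space between the two containers
```

This does not solve the request-routing question by itself; it only covers the lifetime side of the flow.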

I'm having trouble with step 6 (and more broadly with step 8). As far as I know, pods created from a job cannot be fronted by a Service. And even if that is possible, multiple requests for different models can occur simultaneously, so the service must be able to dynamically distinguish the pods.

The first iteration of this project had the back-end container load new models dynamically. However, after review, it was decided that this was not desirable. To load a new model, the container must now be restarted so that the init container retrieves the correct data.

My first thought was to have the back-end job send a request to retrieve the data, but that leads to several problems:
1. The front-end service must store the JSON request in a database, even though it is read only once, because the back-end request can be routed to a different front-end pod.
2. How would the job know to request new data? (Step 8)
3. How are the results sent to the user?

The second thought was to skip steps 8 and 9, let the job run to completion, have the front end poll the job status, and read the logs when finished. At least that's how it works in the Job documentation. However, this would mean the job logs have to be reserved for output, which seems like bad design.

However, we can build on that: instead of writing to the logs, write to the database. This shares problem 1 of my first idea, in the sense that the database holds data that is read only once, but so far it seems to be the only viable solution.

What are your thoughts? Is this the right approach, or do you have a completely different way to achieve this behavior?

PHP – Does Gmail Need TLS to Receive Email?

I am currently working on sending emails in PHP directly via stream_socket_client().

So far it works pretty well: I can send HTML emails with a DKIM signature that passes all tests on https://www.mail-tester.com.

Emails are received by my own mail server as well as by Yahoo and AOL, without delivery or spam problems.

However, when I send to Gmail, not only does the email not arrive, but no delivery report is sent back. This is despite no errors being reported during the SMTP exchange.

If I send the exact same test email to Gmail via mail(), it arrives with no problem. I compared the source of both emails and they differ only slightly in the DKIM signature: mine has no "x" header, and the list of signed headers is different. I don't think that's the problem.

The only difference I can see between what I do and mail() is that I'm not using TLS yet. This seems like a strong candidate, but from what I've read, Gmail does not require TLS to receive email.

Am I still missing something?

Below is the SMTP exchange:

SMTP Connection:

HOST: 220 mx.google.com ESMTP f128si20492958wme.149 - gsmtp

EHLO: 250-mx.google.com at your service, (89.200.136.185)

FROM: 250-SIZE 157286400

RCPT: 250-8BITMIME

DATA: 250-ENHANCEDSTATUSCODES

DATA WRITE: 250-PIPELINING

QUIT: 250-CHUNKING
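One quick sanity check on a log like the one above: the responses appear to be paired with the wrong commands (which is what pipelined reads can look like), but the 250- lines taken together are the server's EHLO capability list, and it can be parsed to see whether STARTTLS was offered at all. A small sketch:

```python
def ehlo_capabilities(response: str) -> set:
    """Extract capability keywords from a multi-line 250 EHLO response."""
    caps = set()
    for line in response.splitlines():
        # Capability lines start with "250-" (continuation) or "250 " (last line)
        if line.startswith("250-") or line.startswith("250 "):
            caps.add(line[4:].strip().split()[0].upper())
    return caps

# The 250- lines visible in the exchange above, reassembled:
seen = ehlo_capabilities(
    "250-mx.google.com at your service\n"
    "250-SIZE 157286400\n"
    "250-8BITMIME\n"
    "250-ENHANCEDSTATUSCODES\n"
    "250-PIPELINING\n"
    "250-CHUNKING\n"
)
print("STARTTLS" in seen)  # → False for the lines shown above
```

If STARTTLS never shows up in the capability list you receive, that at least confirms the session ran in plaintext, which keeps the TLS theory alive even though Gmail accepted the message without an SMTP error.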