vpn – Bypass openvpn for incoming connections

I have a PC at home which I can access over SSH (my router forwards that port). It works fine except when the PC is connected to the VPN, in which case all traffic (including replies to incoming connections) goes out through the VPN. When the VPN is up and an SSH client tries to connect, my PC sees an incoming connection from the SSH client’s public IP address and replies to that address via the VPN interface. So an SSH client connecting to my PC at IP address 1.2.3.4 (my router’s public IP address) receives the response from IP address 5.6.7.8 (my VPN’s public IP address) and ignores it.

Is there a way to do either of the following?

  1. configure my router (Ubiquiti EdgeRouter X) to masquerade, so that incoming connections appear to come from its local IP (as happens with hairpinning, when a client on the LAN connects to the PC via its public IP address), or
  2. configure my PC to send replies to incoming connections via the normal LAN interface rather than the tunnel interface (see the sketch below)
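For option 2, here is the kind of source-based policy routing I have in mind (a sketch only; eth0, 192.168.1.1, and 192.168.1.10 are placeholders for my LAN interface, LAN gateway, and the PC’s LAN address):

# Create a dedicated routing table for replies that should stay on the LAN
echo "200 lan-replies" | sudo tee -a /etc/iproute2/rt_tables

# That table's default route points at the LAN gateway, not the tunnel
sudo ip route add default via 192.168.1.1 dev eth0 table lan-replies

# Packets sourced from the PC's LAN address use that table, so replies to
# connections that arrived on eth0 also leave via eth0
sudo ip rule add from 192.168.1.10 table lan-replies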

P.S. This is a duplicate of openvpn routing setup for incoming connections on client, but that question has not been solved either.

system error – Hotspot message “Maximum mobile hotspot connections reached” spewing

When I turn on the hotspot, this message:

“Settings
Maximum mobile hotspot connections reached
Please disconnect one device to enable the new d…”

appears in bursts of three or four, sporadically, at intervals of a few seconds to a couple of minutes. I can’t click it; it simply disappears.
The commonly recommended procedure for clearing low-level glitches (resetting network connections, doing a hard reset by pressing and holding power until the phone reboots, then resetting network connections again) doesn’t seem to do much. The connection does work. I’d ignore the message if I could turn its sound off (it uses the texting sound). There’s only one W10 PC connecting to it.
It’s a Moto G7 running Android 10, fully updated. I keep hoping they will fix this stuff in an update.
I’ve looked for ways to show what’s connected; most Google hits are worthless – whatever versions they cover don’t apply. Some sites say there’s no limit on hotspot connections; others say there is one, but that it’s a limit on wifi connections. Whatever.
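For reference, one approach I’ve seen suggested (a sketch; it assumes USB debugging is enabled and adb is installed on a PC, and I can’t vouch for it on every Android 10 build):

adb shell ip neigh           # neighbour (ARP) table; hotspot clients show up by IP and MAC
adb shell cat /proc/net/arp  # older fallback; often restricted on Android 10 and later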
I have no problems tethering internet via USB, but that’s not the point.
Thanks.

linux – How can I debug an intermittent server crash that rejects SSH connections, causes I/O errors, and renders existing sessions useless?

I’ve been running a Debian-based home server for ~10 years now and recently decided to replace it with an HP EliteDesk 705 65W G4 Desktop Mini PC, but the new machine keeps crashing.

The machine will run fine for a few hours, then suddenly begins:

  1. Rejecting SSH connections immediately
  2. Returning “Command not found” for any command run in existing SSH sessions (e.g., ls)
  3. Giving I/O errors in response to commands like docker stats
  4. Not showing any display output to connected monitors

I typically run a few home services in Docker containers and initially thought an oddity of my config might be causing the crashes, so I decided to pick a random existing GitHub repo with a few containers and run it from scratch. I chose this HTPC download box repo, which seems to have a few linuxserver.io containers and should be a reasonable lower bound for the workload my services would put on the machine.

Steps I follow to reproduce the crash:

  1. Install headless Debian (netinstall image); configure the OS by following the steps below:
  2. Set hostname: test
  3. Set domain: example.com
  4. Add a new user, add the user to sudoers, and set up SSH so that only key-based logins are allowed and only nick can log in (including adding my desktop’s public key to ~/.ssh/authorized_keys):
adduser nick
usermod -aG sudo nick
sudo nano /etc/ssh/sshd_config; the specific settings you want are:
PermitRootLogin no
Protocol 2
MaxAuthTries 3
PubkeyAuthentication yes
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
UsePAM no
AllowUsers nick
  5. Restart ssh: sudo service sshd restart (see the sanity check after this list)
  6. Install the necessary services: sudo apt-get install docker.io docker-compose git
  7. Add your user to the docker group: sudo usermod -aG docker nick
  8. Generate a new SSH key and add it to your GitHub account: ssh-keygen -t ed25519, then copy the public key to GH
  9. Set your global git vars: git config --global user.name 'Nick'; git config --global user.email nick@example.com
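(Aside: a config as restrictive as the one in step 4 can lock you out, so a sanity check worth running before the restart in step 5:)

sudo sshd -t   # checks sshd_config syntax; silent with exit status 0 when the file is OK
# Keep the current session open and test a fresh key-based login before closing it.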

Wait 2 days; verify that no crash occurs.

  10. Run the following commands:
cd /opt
sudo mkdir htpc-download-box
sudo chown -R nick:nick htpc-download-box
git clone git@github.com:sebgl/htpc-download-box.git
cd htpc-download-box
docker-compose up -d

(Note: I do no configuration whatsoever of the containers in the docker-compose file; I just start them running and then confirm I can access them via browser. I use the exact .env.example as the .env for the project.)

Wait a few hours and observe that the server has crashed: I am unable to log in via SSH, along with the other issues stated above. Interestingly, I can still view the web UI of some containers (e.g., sonarr), but when trying to browse the filesystem via that web UI I am unable to see any folders, and manually typing the path indicates that the user has no permission to view that folder.

Since I observe crashes when running either my actual suite of services or the example repo detailed here, I must conclude it’s an issue with the machine itself. I have tested the NVMe drive with smartmontools, and both the short and long tests report no errors.

I am not familiar enough with Linux to know how to proceed from here (maybe give it another 10 years!). What logs can I examine to determine what might be causing the crash? Should I set up additional logging of some sort to try to ascertain the cause?
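In case it helps frame answers, here is what I understand to be the usual starting point (assuming systemd/journald, the Debian default):

# Make the journal persistent so logs survive the reboot after a crash
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald

# After the next crash and a reboot, inspect the previous boot
journalctl -b -1 -p err    # error-level messages from the previous boot
journalctl -b -1 -k        # kernel messages from the previous boot
dmesg --level=err,warn     # kernel warnings/errors from the current boot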

All of the issues are so general (I/O errors, SSH refusal, etc.) that googling for the past week has gotten me nowhere. I was sure the clean reinstall with a new repo would not crash, and that I could then incrementally add my actual Docker containers until a crash occurred, finding the problematic container by trial and error; but I am now at a complete loss for how to proceed.

Route connections to individual PostgreSQL databases

I have a technical problem. There are PostgreSQL databases “a1”, “a2”, … on server “A”, and databases “b1”, “b2”, … on server “B”. Database names do not overlap: if there is a database “xyz” on server “A”, then there is no “xyz” on server “B”, and vice versa.

I need to set up a server “X” which acts as a gateway: it listens for PostgreSQL connections and, depending on which database a connection requests, routes that connection to either server “A” or server “B”. Server “X” knows whether any given database lives on server “A” or “B”.

I looked into pgbouncer. It allows per-database mapping, but its pass-through mode is not straightforward.
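For illustration, roughly the mapping I mean on server “X” (a sketch; serverA, serverB, and the auth settings are placeholders):

[databases]
a1 = host=serverA port=5432 dbname=a1
a2 = host=serverA port=5432 dbname=a2
b1 = host=serverB port=5432 dbname=b1
b2 = host=serverB port=5432 dbname=b2

[pgbouncer]
listen_addr = *
listen_port = 5432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session

The catch is that every database has to be listed explicitly (or matched by a fallback entry), and pgbouncer speaks the PostgreSQL protocol itself rather than passing the client’s stream through untouched.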

I wonder if there is a simpler existing daemon/agent/proxy that allows per-database connection routing.

rt.representation theory – Spin networks as functionals on the moduli space of connections modulo gauge transformations on a graph

I have just read a big part of John Baez’s nice article Spin network states in Gauge theory. The definitions are quite clear in that article. However, there is a part which, in my humble opinion, is not explained explicitly there. Say you have a spin network corresponding to a compact connected Lie group $G$ (the gauge group): a finite oriented graph with each edge labeled by an irreducible representation of $G$, and each vertex $v$ labeled by an intertwining operator (for the action of $G$) from the tensor product of the irreducible representations labeling the incoming edges at $v$ to the tensor product of the representations labeling the outgoing edges at $v$.

What I would like to understand, though, is how to regard a spin network as an element of $\mathrm{L}^2(\mathcal{A}/\mathcal{G})$, where $\mathcal{A}$ denotes the space of connections on the graph (here regarded as parallel transport maps associated to each edge; please see the article for more detail) and $\mathcal{G}$ is the group of gauge transformations acting on $\mathcal{A}$.

Given a spin network and a “connection” $A$ on the graph, how do we “evaluate” the spin network on the connection $A$, or rather on the equivalence class of $A$ under the group $\mathcal{G}$ of gauge transformations? The description in that article is via identifications and the Peter-Weyl theorem. Could someone perhaps spell it out for me in more concrete terms?
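To make the question concrete, here is my tentative reading (my own notation, not a quote from the article): for each edge $e$ with source $s(e)$ and target $t(e)$, let $h_e(A) \in G$ be the parallel transport that $A$ assigns to $e$, and let $\rho_e$ be the irreducible representation labeling $e$. The evaluation should then be the full contraction

$$\Psi(A) \;=\; \mathrm{contr}\!\left(\bigotimes_{e} \rho_e\big(h_e(A)\big) \otimes \bigotimes_{v} \iota_v\right),$$

where at each vertex $v$ the input slots of $\iota_v$ are paired with the $t(e)$-end indices of $\rho_e(h_e(A))$ for the incoming edges $e$, and the output slots with the $s(e)$-end indices for the outgoing edges. Since a gauge transformation $g$ acts by $h_e \mapsto g(t(e))\, h_e\, g(s(e))^{-1}$ and the $\iota_v$ intertwine the $G$-actions, the group elements cancel at each vertex and $\Psi$ descends to $\mathcal{A}/\mathcal{G}$. Is this the right way to spell it out, and is this function indeed the element of $\mathrm{L}^2(\mathcal{A}/\mathcal{G})$ the article refers to?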

How do people study analysis? Specifically, how can I build connections between different topics faster, or before I do the exercises?

I would really appreciate any suggestions on how to study analysis, because I always panic in analysis exams. I panic whenever I find that the exercises in the back of the book are very different from what appears on the exam, and they always are. Sometimes even understanding a theorem is not easy for me. Even after I finally understand the theorems, I am still unable to quickly make connections between the theorem I just learned and the ones I learned two months ago, unless I have seen exercises about those connections before the exam, which has actually never happened.

I usually just sucked it up by myself in undergrad, but now that I’m in a master’s program, which is significantly shorter than any undergrad program, I am very concerned about what this indicates and where it will leave me. Analysis always makes me feel inferior, and the study strategies I have been using are not working at all. No matter how hard I try or how much time I spend on it, I’m just not getting ahead of the class. I always feel there are gaps in my knowledge that keep me from excelling in an analysis class, but I don’t know exactly how to fill them. I would LOVE to find a way to master analysis, but I just don’t know how, even after going over the proofs from class, making sure I understand the theorems, and so on. Next week we are starting a new chapter that does not build on the previous material I am uncomfortable with, so this is literally my last chance to turn my situation around in this class. Any suggestions for studying analysis, please?

debian – Frozen/dropped TCP connections in AWS

We have a number of AWS EC2 instances within the same AZ that transmit large amounts of network traffic to each other. In a small fraction of connections, when a client on host A connects to a server on host B and sends a large amount of data (e.g. 20 GB) from A to B at a high rate, the TCP connection freezes or times out. I’ve investigated this and the symptoms aren’t always the same, but typically, on an affected connection, the sender (host A) at some point stops receiving the ACKs that the receiver (host B) is sending. So at first all ACKs pass through, and then they get blocked in the middle of the connection. VPC Flow Logs also show that some packets returning from host B (the receiver) to host A (the sender) are rejected.

This happens on a number of EC2 instances (typically r5a.xlarge) that run Debian Linux 10 with Linux kernel 5.3.9 and the ENA AWS network driver that’s shipped as part of Debian’s kernel. They run Docker 18.09.1, installed via the docker.io Debian Buster package. Interestingly, I’m not able to reproduce the issue on Amazon Linux 2 (with Docker installed).

I’ve been able to reproduce it by letting the following simple experiment run in a loop for some time:

# Host B (server receiving data)
docker run -it --rm -p 20098:20098 debian:buster bash
apt-get update && apt-get -y install netcat-openbsd
while true; do date; nc -l -p 20098 | dd of=/dev/null bs=1M; done

# Host A (client sending data)
docker run -it --rm debian:buster bash
apt-get update && apt-get -y install netcat-openbsd
while sleep 1; do date; dd if=/dev/zero bs=1M count=20480 | nc -q 1 <server> 20098; done

The vast majority of the time the experiment succeeds in sending 20 GB over the wire, but every once in a while (sometimes within minutes, sometimes within a few hours or even days) the transfer gets stuck or is cut short by an unexpected disconnect/timeout. On some hosts I can reproduce the problem much more easily than on others. The hosts where I can reproduce it more quickly tend to have more Docker containers and network activity, but I’m not sure yet whether there’s a causal relationship there. I was also able to reproduce the issue when running the above netcat experiment directly on the host rather than within a Docker container, although it does seem a lot harder to reproduce that way. This happens on hosts within the same VPC, AZ, and even subnet, so we can rule out cross-region/cross-AZ/cross-subnet connectivity issues as a cause.

Here’s example tcpdump output that shows network activity when this happens. I’m skipping many successfully transmitted and ACK’ed TCP packets within this same connection. This information was captured with tcpdump -i eth0 -p -G 600 -s 80 -w ... host ... and port 20098. This is captured on the host’s network interface, not inside the Docker network, so network address translations have already been applied.

Tcpdump output on host A (172.20.3.188, the sending client):

08:00:03.615061 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322435576:322444525, ack 1, win 491, options [nop,nop,TS val 4223113896 ecr 683441101], length 8949
08:00:03.615064 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322444525:322453474, ack 1, win 491, options [nop,nop,TS val 4223113896 ecr 683441101], length 8949
08:00:03.615066 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223113896 ecr 683441101], length 8949
08:00:03.615069 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322462423:322471372, ack 1, win 491, options [nop,nop,TS val 4223113896 ecr 683441101], length 8949
08:00:03.615071 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322471372:322480321, ack 1, win 491, options [nop,nop,TS val 4223113896 ecr 683441101], length 8949
08:00:03.615073 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322480321:322489270, ack 1, win 491, options [nop,nop,TS val 4223113896 ecr 683441101], length 8949
08:00:03.615076 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322489270:322498219, ack 1, win 491, options [nop,nop,TS val 4223113896 ecr 683441101], length 8949
08:00:03.615140 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322435576, win 256, options [nop,nop,TS val 683441101 ecr 4223113896], length 0
08:00:03.615178 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322453474, win 117, options [nop,nop,TS val 683441101 ecr 4223113896], length 0
08:00:03.824740 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223114105 ecr 683441101], length 8949
08:00:04.256748 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223114537 ecr 683441101], length 8949
08:00:05.084733 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223115365 ecr 683441101], length 8949
08:00:06.748724 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223117029 ecr 683441101], length 8949
08:00:10.108720 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223120389 ecr 683441101], length 8949
08:00:16.764722 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223127045 ecr 683441101], length 8949
08:00:30.076723 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223140357 ecr 683441101], length 8949
08:00:57.724718 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223168005 ecr 683441101], length 8949
08:01:50.972736 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223221253 ecr 683441101], length 8949
08:03:37.468722 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223327749 ecr 683441101], length 8949
08:05:38.304715 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223448585 ecr 683441101], length 8949
08:07:39.132913 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223569414 ecr 683441101], length 8949

Tcpdump output on host B (172.20.3.89, the receiving server):

08:00:03.615206 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322435576:322453474, ack 1, win 491, options [nop,nop,TS val 4223113896 ecr 683441101], length 17898
08:00:03.615225 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322498219, ack 1, win 491, options [nop,nop,TS val 4223113896 ecr 683441101], length 44745
08:00:03.615228 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322453474, win 117, options [nop,nop,TS val 683441101 ecr 4223113896], length 0
08:00:03.615256 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 0, options [nop,nop,TS val 683441101 ecr 4223113896], length 0
08:00:03.615908 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 1642, options [nop,nop,TS val 683441102 ecr 4223113896], length 0
08:00:03.616389 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 3373, options [nop,nop,TS val 683441102 ecr 4223113896], length 0
08:00:03.618742 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 6862, options [nop,nop,TS val 683441105 ecr 4223113896], length 0
08:00:03.621737 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 13913, options [nop,nop,TS val 683441108 ecr 4223113896], length 0
08:00:03.824879 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223114105 ecr 683441101], length 8949
08:00:03.824905 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683441311 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:00:04.256895 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223114537 ecr 683441101], length 8949
08:00:04.256929 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683441743 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:00:05.084873 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223115365 ecr 683441101], length 8949
08:00:05.084908 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683442571 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:00:06.748872 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223117029 ecr 683441101], length 8949
08:00:06.748901 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683444235 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:00:10.108863 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223120389 ecr 683441101], length 8949
08:00:10.108889 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683447595 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:00:16.764877 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223127045 ecr 683441101], length 8949
08:00:16.764905 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683454251 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:00:30.076864 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223140357 ecr 683441101], length 8949
08:00:30.076881 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683467563 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:00:57.724863 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223168005 ecr 683441101], length 8949
08:00:57.724877 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683495211 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:01:50.972908 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223221253 ecr 683441101], length 8949
08:01:50.972922 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683548459 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:03:37.468882 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223327749 ecr 683441101], length 8949
08:03:37.468902 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683654955 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:05:38.304895 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223448585 ecr 683441101], length 8949
08:05:38.304942 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683775791 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0
08:07:39.133073 IP 172.20.3.188.35506 > 172.20.3.89.20098: Flags [.], seq 322453474:322462423, ack 1, win 491, options [nop,nop,TS val 4223569414 ecr 683441101], length 8949
08:07:39.133092 IP 172.20.3.89.20098 > 172.20.3.188.35506: Flags [.], ack 322498219, win 24576, options [nop,nop,TS val 683896619 ecr 4223113896,nop,nop,sack 1 {322453474:322462423}], length 0

Notice how host A stops receiving packets from host B after it receives the 08:00:03.615178 ... ack 322453474 packet.

Here is the output of VPC Flow Logs during a failed connection (captured at a different time than the tcpdump output above):

VPC Flow Log output

Given that Amazon Linux 2 doesn’t seem to exhibit this problem, I’ve tried to bring the network stack on Debian more closely in line with Amazon Linux by doing the following on the Debian instances:

  • Apply some of the network sysctl settings from Amazon Linux to Debian (see the diff sketch after this list)
  • Upgrade the Linux kernel to 5.8.10
  • Upgrade the ena driver to 2.2.11
  • Upgrade Docker to 19.03.13
  • Explicitly allow ingress and egress traffic to/from ephemeral ports (32768-65535) to/from all IPs within our VPC in the security group that these hosts use
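For the first bullet, a simple way to enumerate the candidate settings (a sketch; run on one host of each distribution, file names arbitrary):

sudo sysctl -a 2>/dev/null | grep '^net\.' | sort > net-sysctls-debian.txt
# Run the same on an Amazon Linux 2 host, copy the file across, then:
diff net-sysctls-debian.txt net-sysctls-amzn2.txt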

None of these seem to resolve the issue I’m having. What could possibly cause these dropped/rejected packets?

short connections – How long does it take to transfer from Terminal 3 to Terminal 1 at SFO?

Officially the “Minimum Connection Time” for United Domestic -> United International at SFO is 45 minutes, and as you’re above this, yours is still considered a “legal” connection.

However, all this really means is that if you miss your connection the airline is responsible for finding you a seat on a later SFO-LHR flight. It doesn’t mean that you’ll make your connection, nor that the airline will cover any additional expenses such as hotels during the delay (they might, but it depends on the cause of the delay).

As both of your flights are on United you do NOT need to re-clear security at SFO, nor do you need to collect your bags as they will be checked all the way through from LAS to LHR.

Presuming that your LAS-SFO flight has a flight number below 2000, your inbound flight will arrive in Terminal 3 and your outbound flight will leave from International Terminal G. These two terminals are connected by an air-side walkway, and depending on which gates you land at and depart from, the walk will be in the 10-15 minute range for an average person.

If your inbound flight has a flight number above 6000 then it will arrive in Terminal 1. There is an air-side bus from Terminal 1 to Terminal G, but depending on how busy the bus is it could take 20+ minutes to get between the two terminals and to your gate.

Your outbound flight will start boarding around 45 minutes before departure, and technically you need to be at the gate 30 minutes before departure time, although realistically 10-15 minutes is normally the minimum – any later and you risk losing your seat.

So basically, if everything goes right and your inbound flight is on time then you’ll have no trouble making your connection. You probably won’t have enough spare time for anything other than a very quick bathroom break, but it’s doable.

However, if your inbound flight is delayed even a little – which is not uncommon on LAS-SFO flights, especially morning flights – then you will likely miss your connecting flight. Depending on the time of year there are only 1 or 2 direct SFO-LHR flights per day on United, so the impact of missing your flight is very high – most likely a delay of up to 24 hours!

If it were me, I would be changing to an earlier LAS-SFO flight. United is normally very happy to do this when there has been a schedule change. In this case, as the ticket was booked via an agency and as a codeshare, they may not be willing to assist, but I would start with United, presuming you can conveniently call them. Otherwise talk to EBookers and ask them to arrange the change. In cases like this it’s normally best to know exactly what you want before calling (e.g., you want flight UAxxx from LAS-SFO rather than your existing UAyyyy).

performance – Multi-tenant reduce number of MongoDB connections

Our multi-tenant app has about 4000 databases (one per tenant), and we are having performance problems because of too many connections. Creating MongoDB connections for each database is a heavy operation.

We tried the following approaches but still could not reduce the number of connections:

  1. Sharding, which only partitions the data sets and does not reduce the number of connections
  2. Increasing the connection pool size, which is already applied
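For context, the pattern we are trying to reach is a single shared pool serving all tenant databases, something like this minimal sketch (PyMongo; the hostname and pool size are placeholders):

from pymongo import MongoClient

# One client owns one connection pool, shared by every tenant database
client = MongoClient("mongodb://db-host:27017", maxPoolSize=100)

def tenant_db(name):
    # Database handles are cheap; they borrow connections from the shared pool
    return client[name]

doc_a = tenant_db("tenant_0001").orders.find_one()
doc_b = tenant_db("tenant_0002").orders.find_one()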

How can we reduce the number of MongoDB connections? Is there a way to split the databases into groups of 1000 each?