domain name system – FreeIPA: External DNS requests (google etc.) fail for clients on new subnet

I’m trying to rebuild my home network to make use of FreeIPA to manage some Linux clients. This has all gone well on my main network, with all clients able to resolve both internal DNS and external names (google etc.). All clients on that network can SSH (with sudo) using a user I created in FreeIPA.

The issue comes when I try to connect my Wifi network to the FreeIPA server. Clients on the Wifi network can only resolve internal DNS; external requests (google etc.) are ignored. This works fine on my main network.

So from a host on my main network:

[root@kvm ~]# dig @auth.brocas.home monitoring.brocas.home +short
[root@kvm ~]# dig +short

But on my Wifi network, no external DNS requests are resolved:

[manjaro-i3 ~]# dig @auth.brocas.home monitoring.brocas.home +short
[manjaro-i3 ~]# dig @auth.brocas.home +short
[manjaro-i3 ~]#

Does anyone know why this might be?
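A common cause of exactly this split (internal names resolve, external recursion fails) is that BIND on the FreeIPA server only allows recursive queries from networks listed in its ACL, so clients on the new subnet get answers for the authoritative zones but nothing for external names. A minimal named.conf-style sketch, with placeholder subnets (the exact file to edit and the real ranges depend on your FreeIPA version and addressing plan):

```
// Placeholder subnets: adjust to your actual main LAN and Wifi ranges.
acl "trusted" {
    localhost;
    192.168.1.0/24;   // main network (assumed)
    192.168.2.0/24;   // Wifi network (assumed)
};

options {
    // Allow recursion and cached answers only for the trusted networks.
    allow-recursion { "trusted"; };
    allow-query-cache { "trusted"; };
};
```

It is also worth confirming that a forwarder is configured and reachable from the server, and that any firewall between the Wifi subnet and the server permits UDP/TCP port 53.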

Thanks in advance.

distributed filesystems – What’s the simplest way of “resetting” multiple linux clients to restore the default state of a fresh install?

We have 6 Linux clients (Mint) which are used for training. Courses might run anywhere from a few days to multiple weeks.

They want a “one button press” (i.e. very quick and easy) way of resetting all the clients back to their fresh-install state when a course is over, and it should be done remotely from a Linux server they’re all connected to.
What would be the easiest way to achieve this? Do you have any advice?
Would you save a fresh Mint image on the server and distribute it to all clients each time, and if so, what would be the easiest way to do that?
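One low-tech approach, assuming the clients are hardware-identical and can be booted into a rescue/PXE environment: keep a compressed disk image of the fresh install on the server and stream it to each client over SSH. The hostnames, image path, and target disk below are placeholders, and this sketch only prints the commands (a dry run) rather than executing them:

```shell
#!/bin/sh
# build_restore_cmd HOST IMAGE -> prints the pipeline that would re-image HOST.
# Assumptions (placeholders): clients reachable over SSH as root, image was
# captured with dd from an identical machine, target disk is /dev/sda, and
# the client is booted from rescue media while the restore runs.
build_restore_cmd() {
    host=$1
    image=$2
    printf 'gzip -dc %s | ssh root@%s "dd of=/dev/sda bs=4M conv=fsync"\n' "$image" "$host"
}

# Dry run over all six training clients: print what would be executed.
for h in client1 client2 client3 client4 client5 client6; do
    build_restore_cmd "$h" /srv/images/mint-fresh.img.gz
done
```

Tools like Clonezilla SE or FOG automate exactly this workflow (PXE boot plus unicast/multicast imaging) and may be closer to the “one button” requirement than a hand-rolled script.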
Thank you very much

port forward only for vpn clients on draytek vpn router

I have a VPN server set up on a Vigor 2865. I now want to allow one external IP address, or anyone connected to this VPN, access to a website on an internal server. The website uses a public DNS name that resolves to my public IP address.

If I port forward 80/443 to the internal server then anyone can access the site. This works fine.

If I add a firewall rule to only allow access from the external IP address then this also works fine.

I thought I could just change the firewall rule to allow the public IP of the VPN/router to give access to VPN clients, but this does not work. When I check my IP while connected to the VPN it does change, but the firewall does not let this IP address through despite being configured to do so.

If I set the firewall to use my home IP address then it works, but I do not want to have to configure every employee’s home IP address to grant access.

How do I configure the firewall / port forwarding to allow vpn clients only?
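A likely reason the rule never matches: once a client is connected to the VPN, its traffic toward the internal server carries the client’s tunnel-assigned private address as the source, not the router’s public IP (the public IP is only what external websites see). So the firewall rule should match the VPN address pool as the source. Expressed as a hypothetical Linux iptables rule (all addresses are placeholders; the Draytek GUI equivalent would use the same VPN-subnet source object):

```
# Placeholders: 192.168.100.0/24 = VPN client address pool,
# 192.168.1.10 = internal web server.
# Match the VPN pool as the SOURCE, not the router's public IP.
iptables -A FORWARD -s 192.168.100.0/24 -d 192.168.1.10 \
         -p tcp -m multiport --dports 80,443 -j ACCEPT
```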

Let your clients be heard with Cancellation Center For WHMCS 1.1.0! | Proxies-free

Although it’s hardly a new concept, surveys are still counted among the most effective means to look deeper beneath the surface of customer experience. You can, and most likely do, make use of them to collect customer feedback at various stages of your relationship. But have you ever given real thought to the insight offered by customers who no longer want to use your products? After all, this particular angle on your business is too instructive to ignore.

Our Cancellation Center For WHMCS has been designed exclusively for the purpose of putting together questionnaires that dig into the motives underlying cancellation requests submitted by your clients. The newly announced 1.1.0 version of the module comes with a set of new utilities that can help you better shape the content of your surveys.

  • New “Scale” question type – allow your clients to grade their answers on a scale from 0 to 10.
  • Faster configuration of relations – assign questions directly to product groups.
  • Multi-language support – create more than one version of a question, each in a different language.

Here’s one tip that always helps: the more feedback you compile, the better you will understand how to tailor your product base!

Reach deeper into the minds of your clients with Cancellation Center For WHMCS 1.1.0!

Need Custom Software Development For Your Business?

Especially for you, we will adapt an application and its design to your needs, create a new module, or even build a completely new system from scratch!


distributed systems – How do clients observe different replicas and stale data even if the replicas include the same set of updates?

I am studying the gossip architecture.
This is the gossip architecture:
[diagram: gossip architecture]

The gossip architecture provides 2 guarantees:

1. Each client gets consistent service over time (even if clients use different RMs, the returned data reflects at least the updates the client has already seen).

2. Relaxed consistency between replicas: all RMs eventually receive all updates and apply them with an ordering guarantee.

But here is the confusion.

It also says:

Two clients may observe different replicas even though the replicas include the same set of updates, and a client may observe stale data.

How can 2 clients observe different replicas when the replicas include the same set of updates and ordering is guaranteed?

My intuition is that, because consistency is relaxed, at any given moment the replicas don’t all hold the same set of updates.

And maybe that’s why clients observe stale data. Am I correct?
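Your intuition is right, and it can be made concrete with a toy sketch (illustrative code, not a real gossip implementation): “eventually receive all updates” still leaves a window between an update arriving at one replica and the next gossip round, and a client that reads the lagging replica in that window sees stale data:

```python
# Toy model of gossip lag: an update reaches replica A immediately but only
# reaches replica B after a gossip round, so a client reading B in between
# observes stale data. All names are illustrative.

class Replica:
    def __init__(self):
        self.updates = []          # updates applied so far, in order

    def apply(self, u):
        self.updates.append(u)

    def gossip_to(self, other):
        # Push any updates the other replica is missing, preserving order.
        for u in self.updates:
            if u not in other.updates:
                other.apply(u)

a, b = Replica(), Replica()
a.apply("set x=1")            # the update hits A first

stale = list(b.updates)       # client 2 reads B before the gossip round: []
fresh = list(a.updates)       # client 1 reads A: ["set x=1"]

a.gossip_to(b)                # eventually B catches up

print(stale == fresh)              # False: the two clients saw different data
print(a.updates == b.updates)      # True: replicas converge after gossip
```

So “same set of updates” is an eventual property; at any single instant two replicas may hold different prefixes of that set, which is exactly when one client sees stale data.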

linux – How can I configure WireGuard running in a Docker container on a VPS to allow communication between its connected clients?

I have a VPS running WireGuard as a server in a Docker container, where I’ve added the devices I intend to use as peers.

I have a home server running WireGuard as a client in a Docker container using host network mode. IP forwarding is enabled on both machines.

When I connect with my laptop to the WireGuard host on the VPS, I’m unable to access my home server.

Am I approaching this wrong, or is there a simpler/better way to achieve this?
One of the reasons I want to configure it this way is so that I can set up a reverse proxy to one of the services running on the home server over the tunnel.
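For hub-and-spoke routing through the VPS, two things commonly go missing: each client’s AllowedIPs must cover the whole tunnel subnet (not just the server’s /32), and the server container must both forward and permit wg0-to-wg0 traffic. A sketch with placeholder keys, hostnames, and a hypothetical 10.8.0.0/24 tunnel subnet:

```
# VPS server: wg0.conf (placeholder keys and addresses)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
# allow peers to reach each other through the server
PostUp   = iptables -A FORWARD -i wg0 -o wg0 -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -o wg0 -j ACCEPT

[Peer]  # laptop
PublicKey = <laptop-public-key>
AllowedIPs = 10.8.0.2/32

[Peer]  # home server
PublicKey = <home-server-public-key>
AllowedIPs = 10.8.0.3/32

# On each client (laptop / home server): route the whole tunnel
# subnet via the VPS, not just the server's own address.
[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.8.0.0/24
```

The server container also needs net.ipv4.ip_forward=1 applied inside its network namespace and the NET_ADMIN capability.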

Here are some fun questions for Clients and Freelancers

Freelancer Questions
1. Tell us about yourself.
2. What niches do you write in?
3. How did you get into copywriting?
4. What challenges do you face as a copywriter?
5. What kept you going while facing adversity?
6. Who is your favorite client to work with?
7. Would your client mind being interviewed?

Client Questions
1. How did you find your freelancer?
2. What made you choose that freelancer?
3. How did your freelancer solve your problem?
4. Have you referred your freelancer to other clients?
5. How often do you use…


architecture – Exposing redis to external clients

We are building a system that runs in our cloud and needs information from our clients’ networks that must not be exposed openly. We have concluded that the only way this can work is if our clients install an agent in their network that gathers the required information and pushes it to our system over the internet.

This agent application cannot communicate with our server using a regular REST API or web service, because we have use cases that require us to send specific requests to specific agent instances (a request to resync data from client 1 should only go to client 1’s agent). We looked into gRPC streams, keeping a stream open with every agent and routing those requests over it, but since our backend has multiple instances, it would be hard to identify and reuse a single stream across different instances.

Something that does work for us is using Redis streams: create a stream per agent, and our backend only has to write to the specific agent’s stream (it knows the stream name, since it is the agent/client identifier) from whichever of our instances handles the request.

The concern though, is the “over the internet” part.

Redis has had TLS support since version 6, allows us to block certain commands (FLUSHALL for example), and supports defining users with access to only specific commands and keys/streams…

I understand that exposing Redis over the internet, or to any network other than the one serving Redis itself, has always been discouraged (AWS ElastiCache only allows access from within the same VPC/subnet, for example). However, everything I find about Redis security says something along the lines of:

Do not publicly expose the Redis server.
Since Redis has no default authentication and does not support encryption, all data is stored in cleartext. An attacker can use the FLUSHALL command to delete all key-value data sets…

That type of statement holds true for older versions of Redis that provided neither TLS nor access control. But since both are supported as of version 6, am I crazy to go forward with this?

There is also a concern about rate limiting, which is probably not addressed by Redis itself, but for which we can likely find a solution with Kubernetes or network configuration in the cloud.

So, I know this is an unusual architecture and I am surely not the only one to have thought of it, but I am looking for your opinions: given the features Redis now has, is exposing Redis streams over the internet for my use case that much of a security risk / bad idea overall?
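Since version 6 is a prerequisite anyway, the attack surface can be narrowed considerably in redis.conf: TLS-only transport, mutual certificate authentication, and a per-agent ACL user locked to its own stream. A sketch with placeholder names, paths, and passwords:

```
# TLS only: disable the plaintext port, require client certificates.
port 0
tls-port 6379
tls-cert-file /etc/redis/tls/redis.crt
tls-key-file /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt
tls-auth-clients yes

# Per-agent user (Redis >= 6 ACLs): start from no permissions (-@all),
# then allow only this agent's stream and the stream commands it needs.
user agent-client1 on >change-me -@all ~stream:client1 +xadd +xread +xack

# Optionally make destructive commands unreachable entirely.
rename-command FLUSHALL ""
rename-command FLUSHDB ""
```

With tls-auth-clients enabled, an attacker without a client certificate never reaches the AUTH stage at all, which addresses most of the historical “never expose Redis” advice; rate limiting would still have to live in front of it, as you note.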

vpn – Providing cloned virtual environment to multiple clients

I have a virtual environment created with VMware ESXi, consisting of an IPFire firewall and an internal network with some VMs.

What would be the best way to clone this environment on demand, so that each client could have access to a VPN that leads to copies of the same VMs on their own private network? (The client would only have an .ovpn profile (or other) and connect to the VMs.)

I wouldn’t mind changing any of the software.

I have tried using something like Pritunl, but I can’t segment a /24, so I can only make it work for about 255 clients.
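The ~255-client ceiling comes from handing every client addresses out of a single /24; one workaround is to carve a larger private block into per-client /24s (or smaller) and give each cloned environment its own range. A sketch using Python’s standard ipaddress module (the 10.64.0.0/16 block is an arbitrary placeholder):

```python
# Carve per-client subnets out of a larger private block so each cloned
# environment gets its own isolated range. The block choice is illustrative.
import ipaddress

block = ipaddress.ip_network("10.64.0.0/16")           # room for 256 x /24
client_subnets = list(block.subnets(new_prefix=24))    # one /24 per client

print(len(client_subnets))    # 256 client networks
print(client_subnets[0])      # 10.64.0.0/24
print(client_subnets[1])      # 10.64.1.0/24
```

Each cloned environment would then be attached to its own subnet, with the VPN server routing each client’s profile only into that client’s range.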