fsm – A finite state machine where some states shouldn’t be transitioned to even if their transition conditions are satisfied

I’ve built a finite state machine for my player. Currently I have only one, locomotion, but I plan on adding more state machines. At the moment the state relationships look something like this:
[Image: state diagram of the locomotion state machine]

The blue-colored states are ability states. I don’t want those states to trigger while their ability is inactive, even if the condition input.JumpActivated is satisfied.

As there are only two such states at the moment, having the condition be abilityManager.isActive(abilityId) && input.JumpActivated instead of just input.JumpActivated is fine, but it seems… manual. As the game grows and more abilities are added, there is bound to be a case where I forget to add the check, and then an ability will be active even when it’s not supposed to be. I would like to disallow transitioning to those states outright, but so far none of the solutions I’ve come up with have looked elegant to me.

This is what I’ve tried:

  1. Instead of transitions having one condition, they allow multiple conditions, and the base state adds a condition that, if the next state is of the AbilityState type, that ability must be active.
  2. Implemented state enabling/disabling and had the AbilityManager (a component external to the FSM) enable/disable states. Disabled states are not transitioned to.
  3. Implemented state interceptors, which can be subscribed to; if any of them returns true, the state is not transitioned to.

All of those solutions work, but something feels off. I think all of them hide the fact that these states have special logic tied to them in some obscure class/method, away from the actual class the logic belongs to.
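One direction I’m considering, which would keep that logic inside the state classes themselves, is to let each state own its entry guard and have the FSM perform a single generic check before every transition. A minimal sketch (the PlayerContext, canEnter, and tryTransition names are mine, made up for illustration):

// Each state owns its entry guard; the FSM only ever calls canEnter().
interface PlayerContext {
  abilityManager: { isActive(abilityId: string): boolean };
  input: { jumpActivated: boolean };
}

abstract class State {
  // By default any state may be entered; subclasses override as needed.
  canEnter(ctx: PlayerContext): boolean {
    return true;
  }
}

abstract class AbilityState extends State {
  constructor(protected readonly abilityId: string) {
    super();
  }
  // The ability check lives with the ability states themselves,
  // so a new ability state can never forget it.
  override canEnter(ctx: PlayerContext): boolean {
    return ctx.abilityManager.isActive(this.abilityId);
  }
}

class StateMachine {
  private current: State | null = null;
  // The FSM asks the target state instead of baking the check
  // into every transition condition.
  tryTransition(next: State, ctx: PlayerContext): boolean {
    if (!next.canEnter(ctx)) return false;
    this.current = next;
    return true;
  }
}

With this shape a new ability state inherits the check automatically, but maybe there’s a more idiomatic pattern I’m missing.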

How do you, experienced game developers, handle such cases?

database – Why shouldn’t I be using SQLite instead of the INI format in all cases?

I think the problem is your definition of application settings.

I think most people would designate your “partial and full downloads” as application state rather than application settings; settings are commonly a small set of configuration information which remains generally static.

As to whether databases are better than flat files: well, obviously they are, but at a cost, such as the size on disk/in memory of the database engine, the non-human-readability of the data, etc.

INI files are better for small amounts of simple data, such as app settings, so they won’t be deprecated in favour of a database, even if in your scenario a database might be a better choice.
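To illustrate the distinction: typical application settings are a handful of static, human-editable values, which an INI file expresses naturally (a made-up example):

; static, human-readable application settings
[ui]
theme = dark
language = en

[network]
max_connections = 8

Application state such as partial and full downloads, on the other hand, changes constantly and grows with use, which is where a transactional store like SQLite earns its overhead.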

vpn – Using iptables to set up a kill switch for openvpn: DNS requests are blocked but they shouldn’t be

I bought a subscription to a VPN service and I am using the openvpn 2.5.1 client to connect to it. I am using Ubuntu 20.10.

I now want to emulate the “kill switch” feature of most proprietary VPN clients.

That is, I want to block any connection that is not tunneled through the VPN. Put otherwise, if the VPN connection drops for some reason (e.g. server unreachable), I want all internet connections to be blocked.

To achieve this result, I am following this tutorial.

I have come up with the following iptables rules:

*filter

# Drop all packets
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP

# Allow incoming packets only for related and established connections
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Allow loopback and tunnel interface
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -o tun0 -p icmp -j ACCEPT

# Allow local LAN
-A OUTPUT -d 192.168.1.0/24 -j ACCEPT

# Allow VPN's DNS servers
# NordVPN's DNS server addresses are 103.86.96.100 and 103.86.99.100
-A OUTPUT -d <DNS_SERVER_1> -j ACCEPT
-A OUTPUT -d <DNS_SERVER_2> -j ACCEPT

# Allow the VPN itself (both protocol/port and interface)
# We use TCP/443
#-A OUTPUT -p udp -m udp --dport 1194 -j ACCEPT
#-A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A OUTPUT -p tcp --dport 443 -j ACCEPT
-A OUTPUT -o tun0 -j ACCEPT

COMMIT

and I am importing it with sudo iptables-restore < ./vpn_iptables_killswitch_rules.ipv4.

After the import I am able to connect to the VPN successfully. That is, the openvpn client establishes the connection successfully.

However, I am unable to resolve domain names into IP addresses. In fact, ping google.com returns a temporary failure in name resolution, while traceroute 8.8.8.8 works without problems.

This should not happen, since I have whitelisted the DNS servers in my rules.
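For completeness, my understanding is that a stricter version of those two whitelist rules would also match the protocol and port, something like this (same placeholder addresses, untested):

-A OUTPUT -d <DNS_SERVER_1> -p udp --dport 53 -j ACCEPT
-A OUTPUT -d <DNS_SERVER_1> -p tcp --dport 53 -j ACCEPT
-A OUTPUT -d <DNS_SERVER_2> -p udp --dport 53 -j ACCEPT
-A OUTPUT -d <DNS_SERVER_2> -p tcp --dport 53 -j ACCEPT

But the broader rules above should already accept any traffic to those addresses.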

An nmcli connection show <SSID> shows that the connection is using the DNS servers provided by my VPN provider and is ignoring the DNS servers provided by DHCP.

What am I doing wrong here?

numerical integration – Shouldn’t NIntegrate return a number whose precision is PrecisionGoal, not WorkingPrecision?

I know that Mathematica has great built-in precision tracking, so when you do calculations with arbitrary-precision numbers, Mathematica keeps track of the precision of the result. Given this careful attention to numerical error and precision tracking, I am surprised that, say,

InputForm[NIntegrate[E^(-x^2), {x, 0, Infinity}, PrecisionGoal -> 20, WorkingPrecision -> 100]]

returns a number with precision 100, not 20. I know Mathematica is using precision-100 numbers in its numerical calculations for NIntegrate, but the function is built to return a number whose actual precision is at least 20. In the spirit of useful precision tracking, wouldn’t it make more sense for NIntegrate to return a number with a precision of PrecisionGoal, not WorkingPrecision?
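Of course, I can lower the reported precision myself after the fact; a trivial post-processing step like:

result = NIntegrate[E^(-x^2), {x, 0, Infinity}, PrecisionGoal -> 20, WorkingPrecision -> 100];
SetPrecision[result, 20] (* reduce the reported precision to the PrecisionGoal *)

But the question is whether NIntegrate should do this itself.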


This question is more about numerical coding philosophy than about how NIntegrate works. But it is important, as Wolfram presumably makes these decisions with use cases in mind, so I want to know if I’m missing something.

Navigating an AI that shouldn’t take the shortest path but the scenic route instead (e.g. fish)

I’m working on a sidescrolling game with an underwater "fish-like" AI that has some goals (interacting at various locations) and things to avoid (the player).

  1. I started off with a navmesh and simple straight-line navigation, but since algorithms like A* always take the shortest path, what ends up happening just doesn’t look good.

[Image: straight-line navigation]

  2. To solve this, I thought I could add some random area penalties to the navigation nodes (currently a grid). I generated these penalties using Perlin noise, which did curve the path, but the effect is very minor because, again, A* does not like to deviate from a straight line even with big penalties. If you make the penalties too big, they act like obstacles, which isn’t very usable either. (A sketch of this cost setup follows below the list.)

[Image: noise penalties]

  3. So the question is: how do I achieve something like this?

[Image: the dream]
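For reference, here is roughly what my step-2 cost function looks like (a simplified, self-contained sketch; noise2 is a cheap hash-based stand-in for the Perlin noise I actually use):

// A* step cost inflated by a noise penalty, as described in step 2.
// noise2 returns a pseudo-random value in [0, 1) for a grid position.
function noise2(x: number, y: number): number {
  const s = Math.sin(x * 12.9898 + y * 78.233) * 43758.5453;
  return s - Math.floor(s);
}

interface GridNode {
  x: number;
  y: number;
}

// Bigger values curve the path more, but make it too big and the
// noisy cells start acting like obstacles.
const PENALTY_SCALE = 4;

function stepCost(from: GridNode, to: GridNode): number {
  const base = Math.hypot(to.x - from.x, to.y - from.y);
  const penalty = noise2(to.x * 0.1, to.y * 0.1) * PENALTY_SCALE;
  return base + penalty;
}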

Thanks

importrange – Google Sheets custom function runs when it shouldn’t

I have a Google spreadsheet where I used an importrange function to get addresses from another sheet. Then I added this to the script editor to convert those addresses to coordinates using a custom function:


/**
 * Returns latitude and longitude values for the given address using the Google Maps Geocoder.
 *
 * @param {string} address - The address to get the latitude and longitude for.
 * @customfunction
 */
function GEOCODE_GOOGLE(address) {
    if (address.map) {
        // A range was passed in: geocode each cell recursively.
        return address.map(GEOCODE_GOOGLE)
    } else {
        var r = Maps.newGeocoder().geocode(address)
        // Return the first result as "lat, lng".
        for (var i = 0; i < r.results.length; i++) {
            var res = r.results[i]
            return res.geometry.location.lat + ", " + res.geometry.location.lng
        }
    }
}

When I put the function =geocode_google(L3) into a cell, where L3 is the full address joined together by a formula from the street number and street name pulled in by importrange, it spits out the coordinates in one cell, which I then split into two separate columns with another formula. I then autofilled that formula down to the next approximately 350 addresses. The problem is that when I first did this it worked great, but now it says the service has been invoked too many times in a day, for every single cell containing that custom function except maybe 3 or 4. I know there’s a limit of 1,000 calls per day to the Google geocoder service, but the original spreadsheet only changes maybe one address every 3 days, and I don’t see why all the cells containing the custom function would come up as being invoked too many times.

Is it because the importrange function refreshes every 30 minutes, even though the values of the cells don’t change, and that triggers the custom function, causing it to run every 30 minutes and hit that 1,000-per-day maximum?

Is there any way to limit the custom function to run only when the value of the full address cell changes, or any other workaround?
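One workaround I’m considering is caching the geocoder results, so that when importrange triggers a recalculation the stored value is reused instead of calling the service again. An untested sketch using Apps Script’s CacheService (which, as far as I know, caps an entry’s lifetime at six hours):

// Untested sketch: wrap the geocoder call in a cache lookup so periodic
// recalculations reuse stored results instead of hitting the quota.
function GEOCODE_CACHED(address) {
    if (address.map) {
        return address.map(GEOCODE_CACHED)
    }
    var cache = CacheService.getScriptCache()
    var cached = cache.get(address)
    if (cached) {
        return cached
    }
    var r = Maps.newGeocoder().geocode(address)
    if (r.results.length > 0) {
        var loc = r.results[0].geometry.location
        var value = loc.lat + ", " + loc.lng
        cache.put(address, value, 21600) // keep for 6 hours (the maximum)
        return value
    }
    return ""
}

That wouldn’t eliminate the calls entirely, since cache entries expire, but it should keep the daily count far below the quota. Would that be a reasonable approach?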

I also tried this code:

function myFunction() {
    var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Geo 4')

    var range = sheet.getDataRange();
    var cells = range.getValues();

    var latitudes = [];
    var longitudes = [];

    for (var i = 0; i < cells.length; i++) {
        var address = cells[i][11];

        if (address == "") {
            latitudes.push([""]);
            longitudes.push([""]);
        } else {
            var geocoder = Maps.newGeocoder().geocode(address);
            var res = geocoder.results[0];
            if (res) {
                latitudes.push([res.geometry.location.lat]);
                longitudes.push([res.geometry.location.lng]);
            } else {
                latitudes.push([0]);
                longitudes.push([0]);
            }
        }
    }

    // Write the results out in one go after the loop.
    sheet.getRange('M1').offset(0, 0, latitudes.length).setValues(latitudes)
    sheet.getRange('N1').offset(0, 0, longitudes.length).setValues(longitudes);
}

But it wasn’t recognizing some of the addresses, so I don’t run it anymore; I prefer the custom function because I can see if a specific address isn’t being found, instead of the code just not running.

Is there any other workaround to just import addresses and get the latitude and longitude as the addresses change? Thank you!

identity – Should or shouldn’t I show the serial number, MAC address and other product IDs when I sell a device online?

I want to sell online some of my electronic stuff that I don’t need anymore, such as my ASUS Wi-Fi router, and I’m wondering, when I upload photos of it, should I leave its serial number, MAC address and PIN code written on the back of the device visible, or should I photoshop them out? I checked other people’s listings and many of them include explicit photos of their devices’ serial numbers, MAC addresses, etc. Why?

Furthermore, some potential buyers don’t want to buy your product if the ID numbers are blurred out. Why is that? Why do people need to see those numbers on products they don’t own yet, and do they actually need to? Is it safe for me to publish such data? Theoretically, someone could go into their ASUS (or other brand) account and register a product with my serial number, if I haven’t registered it myself, right?

permissions – Why shouldn’t I give everyone sudo?

My department used to be small. As is common with small groups, system security wasn’t really enforced; in short, everyone used the same login account and had root access.
As the department has grown (to about 50 people and still growing) and we’ve moved into more of a production role, we’ve had to adjust our habits.

We recently implemented network logins. With this change, no one has root access (or sudo) anymore.
The amount of pushback I have received is tremendous (as expected), since people can no longer just sudo their problems away. However, sudo is dangerous, and most of our users are electrical engineers who aren’t Unix-savvy. Even after explaining why we will not be getting sudo back, there is still tremendous pushback.

This is where I turn to Stack Exchange. I have searched Google for why not everyone should have root access, and surprisingly, this information is hard to come by. I have tried making a list myself, but it would be good to have another online source backing up what I’m saying; I thought there would be a plethora of articles on this!

Here are some of the reasons they want sudo access back, and how I addressed their concerns:

  1. Can’t edit other users’ files in a shared directory
    • Don’t use shared directories to edit your code. Everyone should be editing in their own home directories and pushing to git. Let git do its job and merge stuff.
    • If for whatever reason you need to use shared directories, have the directory owner give the group write permissions
  2. While using a shared directory, if one person compiles, the others can no longer compile as they’d have to remove the first binary
  3. If one person pushes to git with no group write permission, the others cannot push to git
    • For now, set your umask (I told them how to do this; see the snippet after this list). Soon, IT will be taking over ownership of all git remotes and will set up shared permissions properly.
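For reference, the setup I pointed them at looks roughly like this (the group name and repository path are placeholders):

# In each user's ~/.bashrc: newly created files get group write
# permission (directories 775, files 664).
umask 002

# When creating a shared bare repository, mark it group-writable so
# pushes from any member of the group keep working:
git init --bare --shared=group /srv/git/project.git
chgrp -R devteam /srv/git/project.git   # 'devteam' is a placeholder group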

Here are some other reasons I have come up with for why sudo access would not be good:

  • We can’t accidentally delete customer data forever (this has happened)
  • We can’t sudo reboot and kick off everyone who’s remotely logged in (someone kicked me off a few times and I lost my work)
  • It restricts what we can run (in a good way!). One time someone installed Debian GUI libraries they found online, but we run CentOS.

Can anyone point me to an article stating why multiple users having sudo/root access is a bad idea?

operating systems – Is there any reason I shouldn’t use a Linux host for a Linux guest VM?

I’m a developer, and my typical environment is a Linux guest OS running inside VirtualBox on a Windows host. Most software companies don’t allow developers to install Linux bare-metal on their machines, so the Windows host/Linux guest solution is a good compromise.

My new workplace is different: they’ll let me use any kind of setup I like. I’d still like to use VirtualBox, as I can simply import my existing VM and save the hassle of setting up my entire development environment from scratch.

I have 2 questions:

  1. Is there any reason I should not use a Linux host/Linux guest arrangement?
  2. Which Linux OS would you recommend as the host?

Thanks :]