time complexity – Most efficient algorithm to output the optimal choice of party invitees?

I am trying to design an efficient algorithm that takes as an input a list of n people and a list of pairs who know each other (as an adjacency list) and outputs the maximum number of invites given the following constraints:

  • Each person should have at least p other people they know at the party
  • Each person should have at least p other people they DON'T know at the party

My attempt

I believe I have found a solution in $O(n(n + m))$ time. Here is the pseudocode:

1.) Loop through the adjacency list, counting how many invited attendees each attendee knows. If any attendee fails the criteria (num < p OR num > current_total − 1 − p), label them as a “0” in an array representing which invitees still qualify. This takes $O(n + m)$ time.

2.) Repeat the above loop until a pass finishes with no changes being made to the invitee array. Since every pass that changes anything removes at least one invitee, there are at most $O(n)$ passes.
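Sketched in Python (the names and input encoding are mine, not from the question), the two steps above look like:

```python
from collections import defaultdict

def max_invitees(n, edges, p):
    """Iterative pruning: repeatedly drop anyone who knows fewer than p
    or more than (total - 1 - p) of the remaining invitees.
    Illustrative sketch of the approach described above."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    invited = set(range(n))
    changed = True
    while changed:                       # at most n passes
        changed = False
        total = len(invited)
        for person in list(invited):
            known = sum(1 for other in adj[person] if other in invited)
            # needs >= p known and >= p unknown among the other total-1 guests
            if known < p or total - 1 - known < p:
                invited.discard(person)
                changed = True
    return invited
```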

My question

Is there a way to beat this time complexity? If so, do I need to use a specialized data structure? Or perhaps delete nodes from the adjacency list as they fail the criteria along the way?

regular languages – Efficient algorithm to find a rejecting input of an NFA

I cannot think of a PTIME algorithm to find a rejecting input of an NFA. While it is possible to efficiently find a rejecting input for a DFA, converting an NFA to a DFA is too expensive.

The algorithm should have the following behavior: given an NFA $A = (Q, \Sigma, \Delta, q_0, F)$, if $L(A) = \Sigma^*$, then return the error value $Err$, otherwise return any $w \in \Sigma^* \setminus L(A)$.

Ideally, $w$ should be as short as possible.

Is this possible?

What I tried:

I thought of using a modified version of Thompson’s algorithm. In Thompson’s algorithm, the current input character is used to determine the next set of states from the current set of states. Thompson’s algorithm will reject an input string if (1) the next set of states is empty or (2) if the string ended and the current set of states does not contain a final state.

Let $N_T: \mathcal{P}(Q) \times \Sigma \to \mathcal{P}(Q)$ be the function from Thompson’s algorithm that, given the current set of states and an input character, determines the next set of states.

By repeatedly using all characters $c in Sigma$ as input characters to create the next set of states, the algorithm can approximate simulating all input strings. The function determining the next set of states is defined as:

$$N: \mathcal{P}(Q) \to \mathcal{P}(Q); \quad N(S) = \bigcup_{c \in \Sigma} N_T(S, c)$$

The current set of states for a given iteration $k$ is defined as: $$S_k = \begin{cases} \{q_0\}, & \text{if } k = 0 \\ N(S_{k-1}), & \text{otherwise} \end{cases}$$

In each iteration $k$, the following conditions (corresponding to the rejection conditions of Thompson’s algorithm) are checked:

  1. If $\exists c \in \Sigma$ such that $N_T(S_k, c) = \emptyset$, then all $w \in \Sigma^k c$ will be rejected by $A$. Return any such $w$.
  2. If $S_k \cap F = \emptyset$, then all $w \in \Sigma^k$ will be rejected by $A$. Return any such $w$.

Lastly, if an already seen set of states is seen again, the algorithm returns $Err$. This ensures that the algorithm terminates.
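A minimal Python sketch of this procedure (my own encoding, assuming an NFA without ε-transitions):

```python
def find_rejecting_word(sigma, delta, q0, finals):
    """Approximate search described above: walk S_0 = {q0}, S_k = N(S_{k-1}).
    delta: dict mapping (state, char) -> set of successor states.
    Returns a rejected word, or None for the inconclusive 'Err' case."""
    def n_t(S, c):                               # N_T(S, c)
        return frozenset(q for s in S for q in delta.get((s, c), ()))

    S, k, seen = frozenset([q0]), 0, set()
    while S not in seen:
        seen.add(S)
        if not (S & finals):                     # condition 2
            return sigma[0] * k                  # any w in Sigma^k
        for c in sigma:                          # condition 1
            if not n_t(S, c):
                return sigma[0] * k + c          # any w in Sigma^k c
        S = frozenset(q for c in sigma for q in n_t(S, c))   # N(S)
        k += 1
    return None                                  # 'Err': set of states repeated
```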

However, there are 2 problems with this algorithm:

  1. It’s only an approximation. There are NFAs for which this algorithm returns $Err$ even though the NFA doesn’t accept all inputs.
  2. It terminates after at most $2^{|Q|}$ steps, which isn’t PTIME. However, this problem can be mitigated by allowing only $|Q|$ iterations, since the shortest rejecting input should have at most $|Q|$ characters.

array – Can my javascript code for pulling data from an API and working with it in Google Sheets be more efficient?

I’m trying to build out some DeFi position tracking pages in Google Sheets. I want to pull from the Zapper API (or other APIs) to track trades, positions, gains, losses, fees…etc. This is my first crack at tracking a position on SushiSwap.

I tried to make re-usable functions and organize the code so it’s easy to understand/work with.

I’d love some insight on better ways to write the functions and/or organize the code. I think this could be a big project and I want to make sure I’m using a good system to write code and organize things. Basically, am I on the right track? Where could I do better?

// function to grab the API data and parse it.
// returns an object
function pullAndParseAPISushi(walletAddress, networkName) {
  var apiKEY = "96e0cc51-a62e-42ca-acee-910ea7d2a241";  // API key from Zapper
  var url = "https://api.zapper.fi/v1/staked-balance/masterchef?addresses%5B%5D=" +
    walletAddress + "&network=" + networkName + "&api_key=" + apiKEY;  // assembles the API URL with the wallet address and network name
  var response = UrlFetchApp.fetch(url); // pulls data from the API
  var theparsedJSONdata = JSON.parse(response); // parses the JSON response from the API

  return theparsedJSONdata;  // returns the parsed object
}

// access level 2 of the JSON data - it gets at the array of values after the wallet address
/** this is the key line that breaks open the data.  Need this for next level data */
/** first level is the wallet address, second level is the big array, third level is the reward tokens...etc. */
function createTheArray(parsedDataFromJson, walletAddress) {

  var levelTwo = parsedDataFromJson[walletAddress][0]; // 'breaks through' the first array

  var lastArrayReturn = [];  /** creates and loads an array with all the various pairs */
  for (const key in levelTwo) {
    //console.log(key + " -> " + levelTwo[key])
    var tempArray = [];
    tempArray.push(key, levelTwo[key]);  // stores each pair in a single array
    lastArrayReturn.push(tempArray); // loads smaller arrays into larger, single array; 2D array
  }

  // lastArrayReturn is an array of arrays for level 2 data (i.e. the data after the wallet address in the JSON object)
  return lastArrayReturn;
}

// transposes a given array
function transposeArray(theArray) {
  var result = new Array(theArray[0].length);
  for (var i = 0; i < result.length; i++) {
    result[i] = new Array(theArray.length);
    for (var j = 0; j < result[i].length; j++) {
      result[i][j] = theArray[j][i];
    }
  }
  console.log("the transposed array is: " + result);
  return result;
}

// putting the array from the API parse into the Sushi Data Pull sheet
function placeSushiData(anArray) {

  let theSushiArray = transposeArray(anArray);  // call a function to transpose the array
  let ss = SpreadsheetApp.getActiveSpreadsheet();           // get active spreadsheet
  let targetSheet = ss.getSheetByName('Sushi Data Pull');   // the tab where the data is going
  let targetRange = targetSheet.getRange(1, 4, theSushiArray.length, theSushiArray[0].length);  // set the target range of cells
  let targetDateRange = targetSheet.getRange(2, 3, 1, 1);  // range for the timestamp; 2nd row, 3rd column

  targetRange.setValues(theSushiArray); // sets cells in the target range to the values in the array
  targetDateRange.setValue(new Date()); // puts time stamp in the date column
}

// function to run the data pull and placement
function runTheProgram() {
  var walletAddress = "0x00000000000000000000000000";
  var networkName = "polygon";
  var dataArray = pullAndParseAPISushi(walletAddress, networkName);
  var adjustedArray = createTheArray(dataArray, walletAddress);
  placeSushiData(adjustedArray);
}

An IEEE half-float implementation in python similar to array.array, is there any way I could make this more efficient?

I’ve written this class to wrap a collection of bytes and interpret them as 16-bit floats. It’s supposed to work like memoryview(buf).cast('f') or array.array('f', buf). I’m trying to avoid converting back and forth between values as much as possible. CPython does not currently support using the format code 'e' as the format argument to array.array.

Is there anything else I can add or take away?

import struct
from collections.abc import Sequence

class Float16Array(Sequence):
    """Takes a bytes or bytearray object and interprets it as an array of
    16-bit IEEE half-floats.

    Behaves a bit like if you could create an array.array('e', (1, 2, 3.7))
    """
    def __init__(self, buf):
        self.hbuf = memoryview(buf).cast('H')

    @staticmethod
    def _to_h(v):
        "convert float to an unsigned 16 bit integer representation"
        return struct.unpack('H', struct.pack('e', v))[0]

    @staticmethod
    def _to_v(h):
        "convert 16-bit integer back to regular float"
        return struct.unpack('e', struct.pack('H', h))[0]

    def __len__(self):
        return len(self.hbuf)

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.hbuf == other.hbuf
        if isinstance(other, Sequence):
            if len(self) != len(other):
                return False
            for hval, oval in zip(self.hbuf, other):
                try:
                    if hval != self._to_h(oval):
                        return False
                except struct.error:
                    return False
            return True
        return NotImplemented

    def __getitem__(self, key):
        if isinstance(key, slice):
            return self.__class__(self.hbuf[key].cast('B'))
        item = self.hbuf[key]
        return self._to_v(item)

    def __contains__(self, value):
        try:
            return self._to_h(value) in self.hbuf
        except struct.error:
            return False

    def __reversed__(self):
        for item in reversed(self.hbuf):
            yield self._to_v(item)

    def index(self, value, start=0, stop=None):
        buf = self.hbuf[start:stop]
        try:
            buf_val = self._to_h(value)
        except struct.error:
            raise TypeError('value must be float or int') from None
        for i, v in enumerate(buf):
            if v == buf_val:
                return i
        raise ValueError

    def count(self, value):
        try:
            buf_val = self._to_h(value)
        except struct.error:
            raise TypeError('value must be float or int') from None
        return sum(1 for v in self.hbuf if v == buf_val)

    def __repr__(self):
        contents = ', '.join('{:.2f}'.format(v).rstrip('0') for v in self)
        return self.__class__.__name__ + '(' + contents + ')'

if __name__ == '__main__':
    my_array = Float16Array(struct.pack('eeee', 0.1, 0.1, 72.0, 3.141))
    assert 0.1 in my_array
    assert my_array.count(72) == 1
    assert my_array.count(0.1)
    assert my_array == (0.1, 0.1, 72.0, 3.141)
    assert my_array[0:-1] == Float16Array(struct.pack('eee', 0.1, 0.1, 72.0))

Efficient way of iterating over satisfiability instances

I am working on a problem that involves searching through solutions to a boolean equation, to see if any have a specific property. As the computations are not particularly fast, I would like to minimize the number I run. For this, the SelectFirst function is helpful, but not fast enough.

The only way I found to get all the solutions to a boolean equation is:

SatisfiabilityInstances[expression, vars, SatisfiabilityCount[expression]]

This works, but will often generate more instances than are needed. Sometimes the first instance works; other times it takes thousands. While SelectFirst can speed up some of the computation, far more solutions are still generated than are needed most of the time. Is there a way to get the SatisfiabilityInstances one at a time, so that I only generate as many as I need? Or at least something better than what I have now?
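For comparison, the “one at a time” behavior you are after is what a lazy generator gives you. A brute-force Python illustration of the idea (not Mathematica, and a real SAT solver enumerates far more cleverly):

```python
from itertools import product

def sat_instances(expr, nvars):
    """Lazily yield satisfying assignments one at a time.
    expr: a function from a tuple of bools to bool. Brute force,
    purely to illustrate on-demand enumeration."""
    for assignment in product([False, True], repeat=nvars):
        if expr(assignment):
            yield assignment

# stop as soon as one solution has the property we care about
first = next(a for a in sat_instances(lambda a: a[0] or a[1], 2) if a[1])
```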

Efficient algorithm to aggregate a heightmap to a lower resolution

I have a raw height map which consists of cells of the following structure:

type Cell =
    { Coordinate: GeoCoordinate // contains Latitude and Longitude of the coordinate
      Elevation: int16 }

I generate this height map from real world data.

Now I want to aggregate the height map to a lower resolution, say from a cell grid length of 300 meters to 10 kilometers, i.e. averaging the elevations. Of course, I can apply a brute-force algorithm, e.g. beginning from the center cell and aggregating it into a “bigger cell grid”, memorizing which cells have already been considered, and so forth. But maybe this is not the best way of doing that. Are there more efficient algorithms for aggregating such a height map?
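Assuming the cells lie on a regular 2-D grid (a guess about the data layout), block averaging over non-overlapping tiles is a common approach; a pure-Python sketch:

```python
def downsample(grid, factor):
    """Average non-overlapping factor-by-factor blocks of a 2-D grid
    (list of rows). Assumes dimensions divide evenly. Illustrative only."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for bi in range(rows // factor):
        out_row = []
        for bj in range(cols // factor):
            # gather every elevation inside this block and average it
            block = [grid[bi * factor + i][bj * factor + j]
                     for i in range(factor) for j in range(factor)]
            out_row.append(sum(block) / len(block))
        out.append(out_row)
    return out
```

Each source cell is read exactly once, so this is linear in the number of cells; with NumPy the same thing is a reshape followed by a mean over the block axes.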

Efficient way in MariaDB/MySQL to manage user access from multiple IPv4 and IPv6 subnets

In our MariaDB setup I would like to treat users connecting from two different subnets as “local” users, i.e. grant them the same permissions as if they connected from localhost. Since many users will have the same permissions, I wonder if there is a clever/efficient way to achieve that.

As far as I understand it, I would have to create five MySQL users for every “real” user and copy all required permissions to all of them:

  • user1@localhost
  • user1@(IPv4 subnet1)
  • user1@(IPv4 subnet2)
  • user1@(IPv6 subnet1)
  • user1@(IPv6 subnet2)
  • userN@localhost
  • userN@(IPv4 subnet1)
  • userN@(IPv4 subnet2)
  • userN@(IPv6 subnet1)
  • userN@(IPv6 subnet2)

When I need to change the permissions of a “real” user, I need to change them for all the corresponding MySQL users.

This seems like a lot of users and a lot of work. Is there an easier way to do this? Or is there a common technique to automate this setup? (Preferably without writing our own automation script.)

We are using the adminer web interface (https://www.adminer.org/) but it doesn’t seem to have a tool to do this.

I’m aware of the possibility of limiting access with a firewall, but unfortunately this does not work in our case, because a few databases need access from a wider range of remote addresses than the majority of DBs, which should be limited to the “local” subnets.

Any hint how to do this in an elegant way is greatly appreciated!

hash tables – The Most efficient algorithm for a program that returns true if the two strings only differ by one character

We have a list of strings, all of the same length, over the alphabet {a, b, c} (for instance: aabcc).
The program gets an input string and returns true if some string in the list differs from the input by exactly one character.

A = {cacab cbabc}
cbbaa : False
cbaac : True

My first thought was to solve it with brute force, comparing the input against every string in the list, which is O(n·m) in the worst case for n strings of length m. Is there any way to solve it with a hash table or another data structure?
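One hash-table approach is the wildcard trick: index every word under each “one position blanked out” pattern, then look the query up the same way. A sketch (names are mine, not from the question):

```python
from collections import defaultdict

def build_index(words):
    """Map each one-position-wildcard pattern to the words producing it."""
    index = defaultdict(set)
    for w in words:
        for i in range(len(w)):
            index[w[:i] + '*' + w[i + 1:]].add(w)
    return index

def differs_by_one(index, query):
    """True if some indexed word differs from query in exactly one position."""
    for i in range(len(query)):
        pattern = query[:i] + '*' + query[i + 1:]
        # a match other than query itself agrees everywhere except position i
        if index.get(pattern, set()) - {query}:
            return True
    return False
```

Building the index costs O(n·m²) once; each query then costs O(m²) instead of scanning all n strings.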

sql server – Why database has char data type when varchar is more efficient?

I understand the basic difference between the CHAR and VARCHAR data types: CHAR takes up a fixed length, whereas VARCHAR takes up space based on the content being stored.

But if VARCHAR is so efficient at managing space dynamically based on the content being stored, why do databases support the CHAR data type at all? Couldn’t every CHAR(n) field be a VARCHAR(n) field?
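One classic reason is addressing: with fixed-width fields, a record’s offset can be computed arithmetically instead of storing and chasing per-value lengths. A toy Python illustration of that property (not how any particular engine actually lays out rows):

```python
WIDTH = 10  # a CHAR(10)-style field: always exactly 10 bytes

def read_fixed(buf, i):
    """Jump straight to record i by arithmetic -- no length bookkeeping."""
    return buf[i * WIDTH:(i + 1) * WIDTH].rstrip(b' ')

# three fixed-width records packed back to back, padded with spaces
rows = b''.join(name.ljust(WIDTH) for name in [b'alice', b'bob', b'carol'])
```

Fixed-width values also avoid the fragmentation that in-place updates of variable-length values can cause, which is why short, uniformly sized columns are often declared CHAR.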