design – Architecture issue re best IPC method for multiple file descriptors in one process

This is a question about the architecture of an application that uses POSIX IPC to communicate between threads. The application uses multiple threads (clients) to send data to a single receiving (server) thread. Each thread is pinned to a separate core via its affinity mask. The threads are all within a single process – no process boundaries to cross. The most important factors are performance and reliability.

Currently I use a named pipe (FIFO) to communicate between the multiple writers and the single reader. The writers all use the same file descriptor and the reader reads from a single pipe.

However, the data must be processed in core (thread) order: core 0 first, then core 1, then core 2, and so on. With only a single pipe, the application must reorder the incoming messages into core order, which adds extra processing overhead. The messages are accumulated in a memory buffer maintained on the server side.

A better architecture from the standpoint of the reader (server) would be to use a separate pipe/socket/shared memory region (or other IPC channel) for each client. The server would read from the client file descriptors in core order, processing each record as it comes in, then move on to the next core, in round-robin fashion. That way the server never has to sort the records into core order, which is expensive; it receives them one at a time, processes each immediately upon receipt, then reads from the next core in sequence. There is no expense of a memory buffer and no overhead of reorganizing records as they arrive.
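
To make the proposed flow concrete, here is a minimal sketch of the round-robin service loop, using anonymous pipes purely as a placeholder channel; NUM_CORES, REC_SIZE and the record format are invented for the example, not taken from my application:

    /* One anonymous pipe per core; the server reads the channels
     * strictly in core order, so records never need reordering. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUM_CORES 4
    #define REC_SIZE  64
    #define NUM_RECS  3             /* records each client sends here */

    static int chan[NUM_CORES][2];  /* [c][0] = read end, [c][1] = write end */

    static void *client(void *arg) {
        long core = (long)arg;
        char rec[REC_SIZE];
        for (int i = 0; i < NUM_RECS; i++) {
            snprintf(rec, sizeof rec, "core %ld, record %d", core, i);
            write(chan[core][1], rec, REC_SIZE);  /* <= PIPE_BUF: atomic */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[NUM_CORES];
        char rec[REC_SIZE];

        for (long c = 0; c < NUM_CORES; c++) {
            pipe(chan[c]);
            pthread_create(&t[c], NULL, client, (void *)c);
        }
        /* Round-robin loop: block on core 0, then core 1, ... so the
         * records are processed in core order with no sorting buffer. */
        for (int i = 0; i < NUM_RECS; i++)
            for (int c = 0; c < NUM_CORES; c++)
                if (read(chan[c][0], rec, REC_SIZE) == REC_SIZE)
                    printf("processed: %s\n", rec);

        for (int c = 0; c < NUM_CORES; c++)
            pthread_join(t[c], NULL);
        return 0;
    }

(Compile with -pthread.)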

My question is: given the requirement described above, which of the POSIX IPC methods would be the best and most performant solution here? I plan to scale to as many as 64 cores, so I would need up to 63 file descriptors on the client side. I don't need bidirectional communication.

The lowest system overhead would (I think) be an anonymous pipe. The server side could simply loop through an array of file descriptors to read the data. However, I'm not clear on whether an anonymous pipe can be used between threads of a single process, because “It is not very useful for a single process to use a pipe to talk to itself. In typical use, a process creates a pipe just before it forks one or more child processes.” https://www.gnu.org/software/libc/manual/html_node/Creating-a-Pipe.html#Creating-a-Pipe
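
For what it's worth, a quick test suggests an anonymous pipe does work between threads of one process, since threads share the file descriptor table; the manual's caveat seems aimed at a single flow of control with nobody on the other end. A tiny self-contained check (all names invented):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int fds[2];              /* fds[0] = read end, fds[1] = write end */

    static void *writer(void *arg) {
        (void)arg;
        const char msg[] = "hello from a sibling thread";
        write(fds[1], msg, sizeof msg);
        return NULL;
    }

    int main(void) {
        char buf[64];
        pthread_t t;

        pipe(fds);
        pthread_create(&t, NULL, writer, NULL);
        if (read(fds[0], buf, sizeof buf) > 0)  /* blocks until writer runs */
            printf("%s\n", buf);
        pthread_join(t, NULL);
        return 0;
    }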

I currently use named pipes, which do work with threads in a single process, and which should work with multiple file descriptors.

I have also used UNIX domain datagram sockets, with a single socket. My impression is that multiple sockets may carry more system overhead than I need here, but they may also be the most performant option.
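
For reference, here is roughly what the per-client datagram variant would look like. socketpair() creates a connected pair without naming anything in the filesystem, and each datagram preserves record boundaries; everything below is illustrative:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];                  /* sv[0]: server end, sv[1]: client end */
        char buf[64];

        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) == -1)
            return 1;
        /* A client thread would own sv[1]; each send() is one record. */
        send(sv[1], "record 0", 9, 0);
        send(sv[1], "record 1", 9, 0);

        /* Each recv() returns exactly one datagram, never a partial record. */
        for (int i = 0; i < 2; i++)
            if (recv(sv[0], buf, sizeof buf, 0) > 0)
                printf("got: %s\n", buf);

        close(sv[0]);
        close(sv[1]);
        return 0;
    }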

Finally, I have considered POSIX shared memory, where each client thread would have its own shared memory object. Shared memory is often described as the fastest IPC mechanism (https://www.softprayog.in/programming/interprocess-communication-using-posix-shared-memory-in-linux).

But with shared memory, there is the problem of synchronization. While the other IPC methods are basically queues where the data can be read one record at a time, shared memory requires a synchronization object like a semaphore or spinlock. As the man pages say, “Typically, processes must synchronize their access to a shared memory object, using, for example, POSIX semaphores.” (https://www.man7.org/linux/man-pages/man7/shm_overview.7.html.)
My concern is that this extra synchronization overhead may cancel out the advantage of shared memory in this situation.
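
To illustrate the synchronization I am worried about: within a single process no shm_open() is even needed – the threads already share memory – so what I would really be paying for is the synchronization itself. A single-producer/single-consumer ring guarded by two unnamed semaphores is one common shape (sizes and names below are made up):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define SLOTS 8

    typedef struct {
        int   buf[SLOTS];
        int   head, tail;
        sem_t free_slots;           /* counts empty slots  */
        sem_t used_slots;           /* counts filled slots */
    } ring_t;

    static ring_t ring;

    static void *producer(void *arg) {
        (void)arg;
        for (int i = 0; i < 20; i++) {
            sem_wait(&ring.free_slots);         /* block while ring is full */
            ring.buf[ring.head] = i;
            ring.head = (ring.head + 1) % SLOTS;
            sem_post(&ring.used_slots);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        sem_init(&ring.free_slots, 0, SLOTS);   /* pshared=0: same process */
        sem_init(&ring.used_slots, 0, 0);
        pthread_create(&t, NULL, producer, NULL);

        for (int i = 0; i < 20; i++) {
            sem_wait(&ring.used_slots);         /* block while ring is empty */
            printf("consumed %d\n", ring.buf[ring.tail]);
            ring.tail = (ring.tail + 1) % SLOTS;
            sem_post(&ring.free_slots);
        }
        pthread_join(t, NULL);
        return 0;
    }

Every record costs a sem_wait/sem_post pair on each side, which is exactly the overhead I am unsure about relative to a plain read() on a pipe.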

Moreover, despite being billed as the fastest method, I am concerned about possible cache contention with shared memory. “(M)any CPUs need fast access to memory and will likely cache memory, which has two complications (access time and data coherence).” https://en.wikipedia.org/wiki/Shared_memory.

I could test each of these solutions, but before choosing one it would be helpful to hear others' opinions on which IPC method is best suited to the one-channel-per-client design described above.

Multithreading – Porting a SysV IPC multi-process architecture to Windows

I want to port a Unix-based server application with a multi-process architecture to Windows.

The app uses System V IPC shared memory and semaphores. It keeps its main message queue in shared memory, with semaphores protecting against simultaneous access, and uses separate processes for enqueueing, dequeueing, the various message-type handlers, and so on. A single binary initializes the shared memory and forks itself to run each of these processes.

I have successfully run the app on Windows under Cygwin, which supports System V IPC. One option is to ship the app for Windows on Cygwin.

The architecture of the Unix version provides the following desirable properties:

  • If a process crashes (e.g. the handler for a certain message type), the master can catch this and fork a fresh instance of that process, while messages of other types continue to be processed (a minimal sketch follows this list). This maximizes server availability, in contrast to a single-process multi-threaded server, where a crash takes down everything currently running and a separate supervisor process is needed to restart the server.
  • A process can be upgraded during operation: the binary can be recompiled and executed, and the new master takes over management of the existing processes, which it can upgrade by terminating them and restarting them one by one. In this way, new functionality can be added to the server without losing uptime.
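
To make the first property concrete, here is a minimal sketch of the master's restart loop as I understand it; run_worker() is a stub standing in for the real per-message-type handler loop:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NUM_WORKERS 3

    static void run_worker(int id) {    /* stub for a real handler loop */
        (void)id;
        for (;;)
            pause();
    }

    static pid_t spawn(int id) {
        pid_t pid = fork();
        if (pid == 0) {                 /* child: become worker `id` */
            run_worker(id);
            _exit(0);
        }
        return pid;
    }

    int main(void) {
        pid_t worker[NUM_WORKERS];
        for (int i = 0; i < NUM_WORKERS; i++)
            worker[i] = spawn(i);

        for (;;) {
            int status;
            pid_t dead = wait(&status); /* blocks until any child exits */
            for (int i = 0; i < NUM_WORKERS; i++)
                if (worker[i] == dead) {
                    fprintf(stderr, "worker %d died, restarting\n", i);
                    worker[i] = spawn(i);   /* others were never disturbed */
                }
        }
    }

(Kill one of the worker PIDs from another shell to watch the master replace it while the rest keep running.)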

My question is: what architecture should I use if I port this app to native Windows (i.e. not running it under Cygwin)?

And if the answer is to stay multi-process, which Windows IPC facilities are best for the job? E.g. use CreateFileMapping to create the shared memory and keep the queue in it?
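
For example, I imagine the Win32 shape would be roughly the following, with CreateFileMapping plus a named mutex standing in for shmget() plus semop(); the object names and sizes are invented for the sketch:

    #include <windows.h>

    #define SHM_SIZE 4096

    int main(void) {
        /* INVALID_HANDLE_VALUE => mapping backed by the system paging file. */
        HANDLE map = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, SHM_SIZE,
                                        "Local\\demo_queue_shm");
        HANDLE mtx = CreateMutexA(NULL, FALSE, "Local\\demo_queue_mtx");
        if (!map || !mtx)
            return 1;

        /* Every process that opens the same names sees the same memory. */
        char *mem = MapViewOfFile(map, FILE_MAP_ALL_ACCESS, 0, 0, SHM_SIZE);

        WaitForSingleObject(mtx, INFINITE); /* enter critical section  */
        mem[0] = 42;                        /* e.g. touch queue header */
        ReleaseMutex(mtx);                  /* leave critical section  */

        UnmapViewOfFile(mem);
        CloseHandle(mtx);
        CloseHandle(map);
        return 0;
    }

Each cooperating process would be started with CreateProcess and would open the same named objects; restart-on-crash could then be done by the master waiting on the child process handles.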

I am concerned that switching to a single-process, multi-threaded architecture would lose both of the properties above. And if the plan is to keep a multi-process architecture on Windows, there is no fork(), so a single binary cannot fork itself into the various processes; I would have to redesign the build system and startup code so that each process is launched as its own binary (or as the same binary started with different arguments).

Operating systems – Android app IPC while navigating between activities

When navigating between activities in an Android app by pressing the Back button, the current activity is paused, the previous one is restarted and resumed, and the former is then stopped. How does the Activity Manager service arrange for these lifecycle callbacks to be posted, in order, on the main thread's looper?
How does the main thread communicate with the Activity Manager service?

I want to know where this actually happens in the source code. I tried examining ActivityThread, ActivityManager, etc., but I couldn't work out where in this scenario the communication from the main thread to the Activity Manager service takes place.

Do the customs authorities throw away some food / plants on arrival when flying between Mataveri Airport (IPC) and Santiago International Airport (SCL)?

Do customs officials throw away some types of food / plants on arrival when flying between Mataveri International Airport (IPC) on Easter Island in Chile and Santiago International Airport (SCL)?

Since both airports are in the same country, I would tend to think no; but since Easter Island is quite far from the rest of Chile, it could also be yes. I couldn't easily find the information online.

Is the WiFi at Mataveri International Airport (IPC) fast and robust enough to make video calls for 1 hour?

Is the WiFi at Mataveri International Airport (IPC) on Easter Island in Chile fast and robust enough to make a video call for 1 hour?

https://www.sleepinginairports.net/guides/easter-island-mataveri-airport-guide.htm#wifi mentions:

Free WiFi is available at Mataveri Airport. Connect to the Entel network.

but it says nothing about the speed or the robustness of the connection.

Does Apple allow the use of sockets for IPC between iOS apps?

I have two apps (they are not from the same developer and are not in the same app group), and I use a socket for the two apps to communicate with each other (one app binds to a local port and the other initiates a connection to it). This works well at the moment, but I want to know whether Apple will allow the app into the App Store when I submit it. Many thanks.
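
For context, the mechanism in question is roughly the following (this only illustrates the mechanism, not App Store policy; the port number is a made-up example):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* "Server" app side: bind a loopback-only TCP port and wait. */
    int open_listener(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7777);                    /* example port */
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* 127.0.0.1 only */
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 1);
        return accept(fd, NULL, NULL);  /* sketch: listener fd not reused */
    }

    /* "Client" app side: connect to the same loopback port. */
    int open_connection(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7777);
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        connect(fd, (struct sockaddr *)&addr, sizeof addr);
        return fd;
    }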