architecture – What layer do third party API request/response models go in and what do you call them?

I have a RESTful API service that has three layers: Application/Domain, Infrastructure, and Presentation.

Application/Domain contains my interfaces and models. I currently have three different types of models:

  1. DTO – These are the models my controllers return to the client as well as the models that are passed around all the layers of my application.
  2. POCO – Domain model that holds an instance of the corresponding DTO; it has business rules/validations.
  3. Entities – Persistence models that mirror database objects.

Now, if my RESTful API makes a request to another API service, and that request has both a body and a response, I would want to create models for each, right? What would this kind of model be called, and which layer would I put it in?

I guess it would have to go in the Application/Domain layer because that’s where the interface for my third party API client would also be. But what would I call these models? Are they entities? Are they DTOs?

Hmmm…I guess they could be considered entities. An entity is a model that represents the data in the database and I guess the third party API could also be considered a database of sorts…

I was initially calling the services that connect to databases 'repositories' and the services that connect to other APIs 'ApiClients'. But they really are just facade services, so I guess they are the same?
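One convention I've seen is to treat them as request/response models owned by the third-party client's contract (still DTOs in spirit, sometimes called 'external models'), kept right next to the client interface in the Application/Domain layer. A minimal sketch of that shape (Python for brevity; all names are hypothetical):

    from dataclasses import dataclass
    from typing import Protocol

    # Hypothetical third-party operation; the point is only the shape/placement.
    @dataclass
    class CreateShipmentRequest:        # the "request model" for the outbound call
        order_id: str
        address: str

    @dataclass
    class CreateShipmentResponse:       # the "response model" for the outbound call
        tracking_number: str

    class ShippingApiClient(Protocol):  # the client interface lives in Application/Domain too
        def create_shipment(self, request: CreateShipmentRequest) -> CreateShipmentResponse:
            ...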

What do you guys think?

architecture – Azure Event Grid API layer beneath an HTTP API layer

I am part of a project/team that is building a new web app in Azure for the first time, having previously built and developed a traditional three-tier ASP.NET web app over a number of years.

We have an external architect/consultant helping with the transition to Azure, and they are proposing an architecture that is proving to be somewhat controversial. In very simple terms the server architecture is basically:

   HTTP/REST API (HTTPTriggers) –> Event Grid –> Back-end Microservices (EventTriggers)

I.e., there is an Event Grid abstraction layer between the externally facing API and the back-end 'domain' microservices.

If we take the example of a simple HTTP GET of a data record: the HTTPTrigger C# function sends a 'command' event onto the grid and waits for an acknowledge (ACK) event before sending the HTTP response back to the caller.

The abstraction layer isn't super controversial per se, although some have questioned the need for it. There are some benefits, I think, such as not having to manage lots of microservice URL endpoints: if we add a new back-end microservice, or split/merge existing back-end services, the REST API layer can (in principle) be oblivious to these changes at the back-end. There may also be benefits in terms of redundancy and scaling (although one could argue that Azure Functions and the use of Cosmos DB have those aspects covered without the extra abstraction layer).

The real source of concern is that (as I understand it) the HTTPTrigger function has no way of subscribing to Event Grid events during its short lifetime (which could/should be sub-second for most API calls) in order to receive the ACK event. As such, this function sits in a polling loop, using 'await Task.Delay()' in each iteration so as not to block the executing thread or use excessive CPU. We also talked about backing off the polling frequency over time to get a good balance between low latency for fast ACKs and minimising the number of polls for slower ACKs.

The polling loop then checks some appropriate data store, such as a Redis cache entry or a row in an Azure Table. Separately, an Azure Function with an EventTrigger has the sole purpose of handling ACK events and updating that data store. As such, the response data from the back-end microservice is conveyed via that storage, which seems a bit odd for a simple GET request. This use of storage, combined with polling, will add cost and latency, and I think the controversy is largely due to not seeing a clear benefit to counter those costs.
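To illustrate the mechanics being proposed, here is a rough sketch of the poll-with-backoff idea (Python for brevity; the real functions are C#, and check_store is a stand-in for reading the Redis entry / Table row that the ACK-handling EventTrigger function writes):

    import asyncio

    async def wait_for_ack(check_store, correlation_id, initial_delay=0.02,
                           max_delay=0.5, timeout=10.0):
        delay, elapsed = initial_delay, 0.0
        while elapsed < timeout:
            response = await check_store(correlation_id)
            if response is not None:        # the ACK (and response payload) has landed
                return response
            await asyncio.sleep(delay)      # yield instead of blocking the thread
            elapsed += delay
            delay = min(delay * 2, max_delay)   # back off: fewer polls for slow ACKs
        raise TimeoutError(f"no ACK for {correlation_id} within {timeout}s")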

One thing I was wondering about is how this pattern would work if the Azure Functions were geo-distributed (e.g. if a customer wanted the back-end distributed over two or more data centres); could we configure Azure such that the storage used for the event responses was always local to where the Azure Functions were running? I don't think so, because there are two Azure Functions, the HTTPTrigger and the EventTrigger, and I'm not sure there is any way of ensuring that the EventTrigger function will run in the same locality as the original HTTPTrigger function; they are two completely independent functions. As such, the state store would need to be geo-distributed in that scenario, which sounds a little crazy to me: an HTTP response being transmitted via the data replication/synchronisation mechanism of Azure Tables or Redis.

Thoughts?

Thanks for reading!

architecture – Should selected person be part of my application layer? (MVP pattern)

Consider the following GUI screen:
[screenshot: a PersonListView beside an EditPersonView]

When the user selects a person from PersonListView, EditPersonView should show the person's first name and last name and allow the user to edit them. So I end up with the following UML class diagram, where each Java package represents the layer of the class.

[UML class diagram: classes grouped into one Java package per layer]

My question is about SelectedPerson and whether it is an MVP "model" class. Should it be part of my application layer? Isn't it a presentation concern? The reason I added it there is so that the two presenters can observe it. When the user selects an element from the list widget, SelectedPerson gets updated, and EditPersonViewPresenter refreshes the two fields and the "Apply" button's enabled state.

Class Persons is another model, responsible for person CRUD operations, and it makes perfect sense for it to be part of the application layer (or should it be in the domain?). It is also responsible for notifying its observers that a person has had a CRUD operation performed on it. So, when "Apply" is pressed, PersonListViewPresenter, as an observer of Persons, is able to refresh the list and show the new first/last name.

Long story short, the two presenters communicate through Persons and SelectedPerson models. Assume this is a “correct” approach.
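To make that communication concrete, here is a rough sketch of such an observable model (Python for brevity; the actual project is Java, and everything except the SelectedPerson name is illustrative):

    class SelectedPerson:
        """Observable application-layer model holding the currently selected person."""

        def __init__(self):
            self._person = None
            self._observers = []    # callbacks registered by the presenters

        def add_observer(self, callback):
            self._observers.append(callback)

        def set(self, person):
            self._person = person
            for notify in self._observers:
                notify(person)      # e.g. EditPersonViewPresenter refreshes its fields

        def get(self):
            return self._person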

Now, the devil comes.

Selection is available only when there are no unsaved changes in the EditPersonView. If the person in record state is named “Jackie Chan” and user edits to “Hackie Chan”, the selection is disabled until the user clicks apply or restores the fields to “Jackie” and “Chan”.

How can PersonListViewPresenter know whether there are unsaved changes in EditPersonView?

According to the approach taken so far, a new model, EditPersonState, must be added to the application layer, following the same pattern: EditPersonViewPresenter updates the EditPersonState model, and PersonListViewPresenter observes it and operates accordingly (disables selection in the list). But if I have X forms, and multiple presenters are interested in them, my application layer will end up with X such models that exist only to synchronize presenters. Should it be that way?

On the other hand, the use case is, for example, "User can delete the selected person", so having a SelectedPerson model in the application layer could allow me to put the Delete operation (of CRUD) there and make the use case(s) more "visible".

As an alternative solution, I thought I could keep this state in the view. The view would be treated as "the view", keeping a hierarchy of views and child views. Each presenter depends on "the view" (say MainView) and then observes whichever (child) views interest it. So the SelectedPerson model exists neither in the application layer nor in the presenter layer; it exists in the child view. The Persons model remains (in the application layer) to do the CRUD. UML class diagram of this approach:

[UML class diagram of the alternative, view-hierarchy approach]

But in this approach there is no one-to-one relationship between a view and a presenter. Not that it is a law, but maybe one day, if I have 50 different views, I won't be able to tell which presenter touches which (portion of a) view.

What am I missing?

Finally, one last thought. Martin Fowler, on the same page, states the following:

Session State is data that the user is currently working on. Session state is seen by the user as somewhat temporary; they usually have the ability to save or discard their work.

So, based on that statement, the person selection is session state(?). According to Craig Larman in Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development, there is the following figure:

[figure from Larman's Applying UML and Patterns]

If Larman's session state is the same thing as Fowler's session state, then my current approach, with SelectedPerson in the application layer, agrees with them.

linux networking – How can I set up a layer 3 bridge using Proxy ARP such that http requests can be made to the inside/proxied host’s IP successfully?

Currently I am using a Raspberry Pi to bridge an Ethernet-connected printer to wireless internet. I have used DNAT successfully to give the printer internet access, manually forwarding the printer's port 80 to the Pi's wlan0 interface port 80, along with the other ports needed to reach the printer from outside hosts. I've also been able to use Proxy ARP so that the printer's static IP address is visible on the network, with the Pi responding to ARP broadcasts on the printer's behalf and proxying ARP requests for the printer. What I would like to do is combine the functionality of the DNAT approach with the IP separation provided by Proxy ARP.

The problem is that I cannot figure out how to seamlessly accomplish the needed forwarding/spoofing on the Pi so that, instead of directing requests at the Pi's port 80, outside hosts can make requests to the printer's IP directly, even if it's on a different subnet, say 10.1.2.254:80, to access the HTTP page.

Is it possible to accomplish this routing in tandem with Proxy ARP? Are there other approaches better suited to this arrangement, or could IP aliases alongside DNAT create the illusion that the printer's IP and active ports are also present on the network/another network?
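For concreteness, here is a sketch of the pieces I understand such a setup would combine (Python wrapping the usual Linux tools; the interface names are assumptions, it must run as root, and it presumes outside hosts will actually ARP for, or have a route to, the printer's address):

    import subprocess

    PRINTER_IP = "10.1.2.254"   # printer's static IP, from above
    LAN_IF = "wlan0"            # assumed: Pi interface facing the outside hosts
    PRINTER_IF = "eth0"         # assumed: interface the printer is plugged into

    def sh(*args):
        """Run a command, echoing it first, and fail loudly on errors."""
        print("+", " ".join(args))
        subprocess.run(args, check=True)

    sh("sysctl", "-w", "net.ipv4.ip_forward=1")                  # route between interfaces
    sh("sysctl", "-w", f"net.ipv4.conf.{LAN_IF}.proxy_arp=1")    # answer ARP for the printer
    sh("ip", "route", "replace", f"{PRINTER_IP}/32", "dev", PRINTER_IF)  # host route to printer

If routing like this works, traffic addressed to 10.1.2.254:80 would reach the printer itself, and the per-port DNAT to the Pi's own wlan0 address might no longer be needed.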

blockchain – Did Bitcoin founders know Bitcoin is not scalable at Base Layer?

Bitcoin takes around 45 minutes per transaction and consumes a lot of energy (the equivalent of around 1 million Visa transactions per Bitcoin transaction: https://digiconomist.net/bitcoin-energy-consumption/).

This is more of a history question: did Bitcoin's founders (Satoshi Nakamoto) know that Bitcoin would not be scalable?

Did they create only a base layer on purpose, expecting other people to create second, derivative layers like the Lightning Network?

Currently Bitcoin is not economical without the key Lightning Network, and certain provinces in China and other countries are attempting to shut down nodes. The Lightning Network took 10 years after Bitcoin's creation to be released, and environmental politicians are starting to complain about power consumption. I'm just wondering whether the founders had foresight about this, and whether they noted future goals for how Bitcoin should work. I did not see it in their whitepaper: https://bitcoin.org/bitcoin.pdf

python – pytorch LSTM model with unequal hidden layer sizes

I have tuned an LSTM model in Keras as follows, but I don't know how to write that code in PyTorch. I put my PyTorch code here, but I don't think it is right, because it does not give the right answer. However much I searched, I could not find a sample PyTorch code for more than one LSTM layer with unequal hidden layer sizes. My input shape is (None, (60, 10)) and my output shape is (None, 15). Please show a similar example of my Keras model in PyTorch. Thanks.

my_Keras_model:

from tensorflow import keras
from tensorflow.keras import layers

model_input = keras.Input(shape=(60, 10))
x_1 = layers.LSTM(160,return_sequences=True)(model_input)
x_1 = layers.LSTM(190)(x_1)
x_1 = layers.Dense(200)(x_1)
x_1 = layers.Dense(15)(x_1)
model = keras.models.Model(model_input, x_1)

my_pytorch_model:

import torch
import torch.nn as nn

input_dim = 10
hidden_dim_1 = 160
hidden_dim_2 = 190
hidden_dim_3 = 200
num_layers = 1
output_dim = 15

class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim_1, hidden_dim_2, hidden_dim_3 ,num_layers, output_dim):
        super(LSTM, self).__init__()
        self.hidden_dim_1 = hidden_dim_1
        self.hidden_dim_2 = hidden_dim_2
        self.hidden_dim_3 = hidden_dim_3
        self.num_layers = num_layers
        
        self.lstm_1 = nn.LSTM(input_dim, hidden_dim_1, num_layers, batch_first=True)
        self.lstm_2 = nn.LSTM(hidden_dim_1, hidden_dim_2, num_layers, batch_first=True)
        self.fc_1 = nn.Linear(hidden_dim_2, hidden_dim_3)
        self.fc_out = nn.Linear(hidden_dim_3, output_dim)

    def forward(self, x):
        input_X = x
        # Zero initial hidden/cell states for a batch (chunk) size of 1.
        h_1 = torch.zeros(self.num_layers, 1, self.hidden_dim_1).requires_grad_()
        c_1 = torch.zeros(self.num_layers, 1, self.hidden_dim_1).requires_grad_()
        h_2 = torch.zeros(self.num_layers, 1, self.hidden_dim_2).requires_grad_()
        c_2 = torch.zeros(self.num_layers, 1, self.hidden_dim_2).requires_grad_()
        out_put = ()

        # Process the batch one sample at a time (chunks of size 1 along dim 0).
        for input_t in input_X.chunk(input_X.size(0)):
            out_lstm_1, (h_1, c_1) = self.lstm_1(input_t, (h_1.detach(), c_1.detach()))
            out_lstm_2, (h_2, c_2) = self.lstm_2(out_lstm_1, (h_2.detach(), c_2.detach()))
            out_Dense_1 = self.fc_1(out_lstm_2[:, -1, :])   # last timestep only
            out_Dense_out = self.fc_out(out_Dense_1)
            out_put += (out_Dense_out,)                     # collect per-sample outputs
        out_put = torch.stack(out_put, 0).squeeze(1)        # (batch, 15)
        return out_put
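For comparison, the Keras stack above maps fairly directly onto PyTorch if you let nn.LSTM default to zero initial states and feed the whole batch at once; a minimal sketch (my translation, not verified against the trained Keras weights):

class KerasLikeLSTM(nn.Module):
    """Sketch of LSTM(160, return_sequences=True) -> LSTM(190) -> Dense(200) -> Dense(15)."""
    def __init__(self):
        super().__init__()
        self.lstm_1 = nn.LSTM(10, 160, batch_first=True)   # input: (batch, 60, 10)
        self.lstm_2 = nn.LSTM(160, 190, batch_first=True)
        self.fc_1 = nn.Linear(190, 200)
        self.fc_out = nn.Linear(200, 15)

    def forward(self, x):
        out, _ = self.lstm_1(x)          # (batch, 60, 160), like return_sequences=True
        out, _ = self.lstm_2(out)        # (batch, 60, 190)
        out = self.fc_1(out[:, -1, :])   # Keras LSTM without return_sequences keeps the last step
        return self.fc_out(out)          # (batch, 15)

model = KerasLikeLSTM()
print(model(torch.randn(4, 60, 10)).shape)   # torch.Size([4, 15])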

tiled – Phaser 3 – Show items from object layer – image not loading

Trying to create collectible objects from an object layer I've created with Tiled; however, I don't know the proper way to map the objects to my image file, which contains several images.

What would be the proper way of showing the image from the object layer created by Tiled?

So far I have come up with this routine, which finds the sprite, but it seems the image itself is not loaded, as seen in the screenshot below.

   private pickupsGroup!: Physics.Arcade.StaticGroup
   
   ...
   // Build static physics bodies from the 'Pickups' object layer.
   this.pickupsGroup = this.physics.add.staticGroup();
   const pickupsGameObjects = this.map.createFromObjects('Pickups', { });
   pickupsGameObjects.forEach((object) => {
     const sprite = object as Phaser.GameObjects.Sprite;
     sprite.setDepth(9);             // render above the tile layers
     this.pickupsGroup.add(sprite);
   });

   ...

[screenshot: the pickup sprite in-game, rendered without its texture]

[screenshot: the map in Tiled; propsA.png consists of several images, bottom right corner]

The pot object can be found in the map file outdoors1.json:

...
 {
         "draworder":"topdown",
         "id":13,
         "name":"Pickups",
         "objects":(
                {
                 "gid":4775,
                 "height":32,
                 "id":8,
                 "name":"pot",
                 "rotation":0,
                 "type":"pickup",
                 "visible":true,
                 "width":32,
                 "x":396.666666666667,
                 "y":564
                 }],
         "opacity":1,
         "type":"objectgroup",
         "visible":true,
         "x":0,
         "y":0
        }, 
...
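From what I can tell from the Phaser 3.50+ docs, createFromObjects accepts a config that both filters the objects (by name, gid, or id) and tells Phaser which texture key and frame to use, which is the mapping my empty { } above leaves out. A sketch of what I mean, assuming propsA.png is loaded as a 32x32 spritesheet under a hypothetical key 'props' (the path and frame index are guesses):

   // In preload(): load the multi-image file as a spritesheet (32x32 matches
   // the pot object's width/height in outdoors1.json).
   this.load.spritesheet('props', 'assets/propsA.png', { frameWidth: 32, frameHeight: 32 });

   // In create(): map the matched objects to a texture key + frame index.
   const pickupsGameObjects = this.map.createFromObjects('Pickups', {
     name: 'pot',     // or gid: 4775, matching the Tiled JSON above
     key: 'props',    // texture key assumed above
     frame: 0,        // frame index of the pot inside propsA.png (assumption)
   });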

Docker network layer resolving container name to wrong IP address

In a simplified example, I have 3 Docker containers located on 2 Docker networks:

Container_A : Connected to Network_1 and Network_2
Container_B : Connected to Network_1
Container_C : Connected to Network_2

When running ping Container_B from inside Container_A, the Docker network layer resolves the IP address of Container_C instead of Container_B.

If I kill Container_C, name resolution goes back to the expected behaviour, but as soon as Container_C comes back online, the network layer starts resolving the wrong IP addresses again.
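To reproduce, something like this sketch with the Docker SDK for Python sets up the same topology (the alpine image and sleep command are just placeholders):

    import docker

    client = docker.from_env()
    net1 = client.networks.create("Network_1", driver="bridge")
    net2 = client.networks.create("Network_2", driver="bridge")

    a = client.containers.run("alpine", "sleep 3600", name="Container_A",
                              network="Network_1", detach=True)
    b = client.containers.run("alpine", "sleep 3600", name="Container_B",
                              network="Network_1", detach=True)
    c = client.containers.run("alpine", "sleep 3600", name="Container_C",
                              network="Network_2", detach=True)
    net2.connect(a)   # Container_A sits on both networks

    # From A, ping B by name, then compare with B's actual address on Network_1.
    print(a.exec_run("ping -c 1 Container_B").output.decode())
    b.reload()
    print(b.attrs["NetworkSettings"]["Networks"]["Network_1"]["IPAddress"])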

This has caused me a great headache and I have no idea how to fix this. Thanks for any advice.