c# – Better way to keep a list of items from an HTTP request?

I have a simple Web API where each request's “item” is stored in a list, so the list keeps growing across requests rather than being recreated per request. I have achieved this via dependency injection, but I want to know if there is a better way to do it?

[ApiController]
[Route("[controller]")]
public class MyController : ControllerBase
{
    private readonly List<string> _items;

    public MyController(List<string> items)
    {
        _items = items;
    }

    [HttpPost]
    public ActionResult GetList([FromBody] CustomRequestObject request)
    {
        _items.Add(request.Item);

        return Ok(new CustomResponseObject { Items = _items });
    }
}

public class CustomRequestObject
{
    public string Item { get; set; }
}

public class CustomResponseObject
{
    public IList<string> Items { get; set; }
}

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        services.AddSingleton<List<string>>(); //items stored after each request
    }

    //rest of startup methods left for simplicity
}

sharepoint server – Site on https starts working only after refreshing page on http page

I have a SharePoint server named sp, which I can access at http://sp.

I’ve:

  1. added this server to a domain, so I could access it at http://sp.domain.local;
  2. created and installed a certificate on this server, so I could access it at https://sp.domain.local and https://sp;
  3. allowed access to it from the Internet at https://external.domain.com.

Issue:

When I try to access the server via http://sp.domain.local, https://sp.domain.local, or https://external.domain.com after some time (for example, a day later), it lets me log in, but it does not display the Documents list content, and search does not work.

When I refresh the http://sp page, the Documents list content becomes visible and search starts working again, over both http and https and under the sp.domain.local and external.domain.com names.

Question:

Which settings on the SP or IIS side could be the cause of this behavior?

apache http server – Docker: httpd starts before volume is mounted?

I have a simple Docker image with apache2 installed (with a2enmod cgid), whose CMD is:

CMD apachectl -D FOREGROUND

I run container with:

docker run --name app1 -p 8080:80 -v "C:\store\app1\www":/app1/www -d app1:1.3

The problem I have is: if the container is stopped and then restarted (via the Docker Desktop GUI), Apache goes into a state where it returns 503 for everything until it is restarted with apachectl restart.

I have no idea why, but I suspect it is related to the volume not finishing mounting before the CMD is executed.

Is there something basic I am not understanding about when -v completes relative to when CMD runs? The apache2 log just says this for every request, even after the volume appears to be mounted OK:

[cgid:error] No such file or directory: [client 127.0.0.1:59282] AH02833: ScriptSock /var/run/apache2/cgisock.9 does not exist: /bcon/www/index.cgi

I have resorted to using this CMD, but I feel like I am misunderstanding something important about Docker:

sleep 5 && apachectl -D FOREGROUND
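Note that the AH02833 error points at /var/run/apache2 (the cgid socket directory) being missing rather than at the bind mount itself, so a `mkdir -p /var/run/apache2` in the CMD may be the actual fix. If the mount really is the problem, an explicit wait beats a fixed sleep. Here is a minimal sketch of a "wait until the path is visible" helper an entrypoint script could run before starting Apache (hypothetical; it assumes Python is available in the image):

```python
# A sketch of a "wait until the path exists" entrypoint helper
# (hypothetical; assumes Python is installed in the image).
import os
import time

def wait_for_path(path, timeout=30.0, interval=0.5):
    """Poll until `path` exists or `timeout` seconds elapse.

    Returns True as soon as the path appears, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# The CMD could then become something like:
#   python3 wait_for_path.py /app1/www && apachectl -D FOREGROUND
```

This replaces a blind `sleep 5` with a bounded poll, so the container starts as soon as the mount is actually there.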

http request – Redirect non-www to www in settings.php for Drupal 8

I want to redirect all non-www requests to www in the function below.

if (!array_key_exists('HTTPS', $_SERVER) && PHP_SAPI !== 'cli') {
  if (substr($_SERVER['HTTP_HOST'], 0, 4) !== 'www.') {
    $new_url = 'www.' . $_SERVER['HTTP_HOST'];
  } else {
    $new_url = $_SERVER['HTTP_HOST'];
  }
  $new_url .= $_SERVER['REQUEST_URI'];

  header('HTTP/1.1 301 Moved Permanently');
  header('Location: https://' . $new_url);
  exit();
}
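The intended logic can be sketched in Python for clarity (a hypothetical helper, not a Drupal API):

```python
# The same redirect logic as the settings.php snippet, sketched in Python.
def redirect_target(host, request_uri):
    """Prepend 'www.' when it is missing, then force https."""
    if not host.startswith("www."):
        host = "www." + host
    return "https://" + host + request_uri
```

Note that hosts that already start with www. are still redirected to https, which is intentional here, since the outer condition only fires on plain-http requests.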

Getting an HTML page using a GET HTTP request and sockets in C

I'm trying to print the HTML page https://pastebin.com/raw/7y7MWssc

Here is my code:

#include <stdio.h>
#include <string.h>
#include <winsock2.h>

int main(void) {
    WSADATA WSA;
    WSAStartup(MAKEWORD(2, 2), &WSA);

    char Request[] = "GET /raw/7y7MWssc HTTP/1.1\r\n\r\n";
    char Response[2000];

    SOCKET Socket = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in Server;
    struct hostent *H = gethostbyname("pastebin.com");

    Server.sin_addr.s_addr = *((int *)H->h_addr);
    Server.sin_family     = AF_INET;
    Server.sin_port       = htons(80);

    connect(Socket, (struct sockaddr *)&Server, sizeof(Server));
    send(Socket, Request, strlen(Request), 0);
    int Rs = recv(Socket, Response, sizeof(Response) - 1, 0);
    Response[Rs] = 0;
    printf("%s\n", Response);

    closesocket(Socket);
    WSACleanup();
    return 0;
}

But I keep getting 400 Bad Request as the response. When the request is “GET /raw/7y7MWssc HTTP/1.1\r\nHost: pastebin.com\r\n\r\n” instead, I get 301 Moved Permanently with Location: https://pastebin.com/raw/7y7MWssc
Thanks for help
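For reference, the request that avoids the 400 can be sketched in Python; the key points are the literal CRLF pairs and the Host header that HTTP/1.1 requires (Connection: close is an addition so the server ends the response):

```python
# The request line and headers the C program should send, with explicit
# CRLF pairs; HTTP/1.1 requires a Host header.
def build_request(path, host):
    return (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(path, host).encode("ascii")

request = build_request("/raw/7y7MWssc", "pastebin.com")
```

The 301 is then expected: it redirects to the https URL, which a plain socket on port 80 cannot follow; actually fetching the page would need a TLS connection on port 443.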

HTTP polling vs WebSocket for very small payloads that don’t change often

In our team we are currently discussing which technology makes more sense for an upcoming feature – HTTP polling vs WebSocket.

To give some context:

  • We are developing a TV streaming application (server and mobile clients).
  • We currently have about 60 TV channels.
  • On some channels we need to prohibit the user from seeking within/across ads.
  • For this, the mobile client needs to know where ads are in the current program – let’s say that a given program contains at most 10 ads and the client only needs to know the start and end timestamp of each ad, no additional info.
  • The information about where the ads occur in a stream is pushed to one of our services directly by the TV companies. The information may arrive “just in time”, so we need to update the clients quite frequently to make sure they have current data.
  • We already have a WebSocket connection in place to track viewer numbers for each channel, i.e. the WebSocket service already knows which user is watching which channel.

So, I think this leaves us with two main solutions:

1.) The client “polls” the server every few seconds to ask for an updated list of ads for the current channel. As the data is minimal (60 channels × 10 ads for the current program × 2 timestamps per ad), it should be easy to keep in memory, maybe using memcached or the like. Since the data will only change every few minutes in practice, if we use HTTP caching (ETags), most responses would actually be empty.

2.) The service receiving the ads from the TV companies pushes them to the WebSocket service which in turn forwards them to all users that are currently watching the affected channel.
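Option 1 (polling with ETags) can be sketched as follows; the names are hypothetical, but the idea is that a matching ETag turns most polls into empty 304 responses:

```python
# Sketch of ETag-based conditional polling for the ad list of one channel.
import hashlib
import json

def make_etag(ads):
    """Derive a strong ETag from the current ad list."""
    payload = json.dumps(ads, sort_keys=True).encode()
    return '"%s"' % hashlib.sha256(payload).hexdigest()[:16]

def poll_response(ads, if_none_match=None):
    """Return (status, body) for a poll; 304 with empty body if unchanged."""
    etag = make_etag(ads)
    if if_none_match == etag:
        return 304, b""
    return 200, json.dumps(ads).encode()

ads = [{"start": 120, "end": 150}, {"start": 900, "end": 930}]
status, body = poll_response(ads)            # first poll: full body
status2, _ = poll_response(ads, make_etag(ads))  # repeat poll: empty 304
```

Since the ad list changes only every few minutes, nearly all polls would take the 304 path, which keeps both bandwidth and server CPU per request very small.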

I am one of the client developers, and as such I have a strong preference for HTTP polling. It fits into the app architecture a lot more easily, parsing the responses is trivial, and error handling methods are already in place. To use the WebSocket, I would have to change more app code; parsing all the different kinds of messages coming out of a single WebSocket is messier, and there is practically no error handling.

I also would have thought that putting the current data into memcached or the like would be trivial, and preferable to pushing data around between services, but I am told the server load would be too much to handle. At the moment we rarely have more than about 10,000 concurrent users, but let's be optimistic and hope for 100,000 within a year.

Given all this, which solution makes more sense?

I hope I have described the situation in enough detail to allow for a proper evaluation!

Thanks a lot for any advice!

http – Is the Host: header required over SSL?

Is the Host: header required over SSL even if the request is not HTTP/1.1?

So, if a client connects over SSL, and sends the following request:

GET / HTTP/1.0
  1. Should the web server throw a bad request due to the missing Host: header?
  2. Should the web server respond with an HTTP/1.0 200 OK response?
    (the index.html file always exists, so a request to / would never lead to a 403/404)
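As far as the HTTP specification goes, the answer depends only on the protocol version, not on SSL/TLS. The rule from RFC 7230 §5.4 can be sketched as a small decision function (hypothetical name):

```python
# Sketch of RFC 7230 section 5.4: Host is mandatory for HTTP/1.1 requests
# only; whether the connection uses TLS does not enter into it.
def host_header_status(http_version, has_host):
    if http_version == "HTTP/1.1" and not has_host:
        return 400  # the server MUST reject the request
    return 200      # well-formed in this respect; normal processing applies
```

So for the HTTP/1.0 request above, case 2 applies: the server should serve / normally, although individual servers may still be configured more strictly.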

c# – Record status of http call to show in the UI

What I want to do is record the status of an HTTP call, so that I can give the user some feedback in the UI, rather than either succeeding silently or showing a grim error.

What I have done is create a Result class. For brevity I will show the simple class, but I also have a generic version, so it can also carry data.

The Result class:

public class Result
{
    public bool IsSuccess { get; set; }

    public string Error { get; set; }

    public HttpStatusCode HttpStatusCode { get; set; }

    public Result() { }

    public Result(bool isSuccess)
    {
        IsSuccess = isSuccess;
    }

    public Result(string errors, bool isSuccess)
    {
        IsSuccess = isSuccess;
        Error = errors;
    }

    public Result(string errors, bool isSuccess, HttpStatusCode statusCode)
    {
        Error = errors;
        IsSuccess = isSuccess;
        HttpStatusCode = statusCode;
    }
}

When I make a call to the API I am using, I use the Result class, as below:

public async Task<Result> UpdateColour(GrapeColour grapeColour)
{
    var body = new StringContent(JsonConvert.SerializeObject(grapeColour), Encoding.UTF8, "application/json");
    var client = _httpClient.CreateClient(ApiNames.WineApi);

    var response = await client.PutAsync(_grapeColourUrl, body).ConfigureAwait(false);
    if (response.IsSuccessStatusCode)
    {
        return new Result(true);
    }
    else
    {
        return await HttpResponseHandler.HandleError(response).ConfigureAwait(false);
    }
}

My HttpResponseHandler does some processing, but fundamentally it sets IsSuccess to false, supplies an error message, and sets the status code.

This all lives in my Domain layer.

I then pass this over to the web side of my project; here is the controller method:

[HttpPost]
public async Task<IActionResult> EditColour(Result<EditableGrapeColourViewModel> model)
{
    if (!ModelState.IsValid)
    {
        return View(new Result<EditableGrapeColourViewModel>(model.Data));
    }

    var domainGrapeColour = _grapeMapper.Map<Domain.Grape.GrapeColour>(model.Data);

    var saveResult = await _grapeService.SaveColour(domainGrapeColour, SaveType.Update).ConfigureAwait(false);
    if (saveResult.IsSuccess)
    {
        return RedirectToAction("EditColour", "Grape", new { id = model.Data.Id, IsSuccess = true });
    }

    var viewModel = new Result<EditableGrapeColourViewModel>(saveResult.IsSuccess, saveResult.Error, model.Data);

    return View(viewModel);
}

So the model I use for the view model and the view has to be Result<T>, so I can access IsSuccess and the error message to show the user.

I use AutoMapper to map from my domain object to an object in the web project, but I use the domain Result directly.

So my question is twofold: is what I am doing a decent solution for what I want to achieve? And if it is, should I be using the domain Result object the way I am?

http – Confidentially separating web content under a single IP address

I am interested in exposing web content to strangers on the internet (Reddit, Discord, …).
I want this content to reside on the same web server as my personal web content.

(None of this content yet exists, and I’m not set on a specific web server or programming language.)

VPS (with single IP Address and two domains)

  • Content for strangers on the internet (1 domain)
  • Job specific services, portfolio, private family services (other domain)

My main worry is that people in real life will discover my Reddit/Discord business.
Equally worrying is the idea that strangers on Reddit/Discord could “dox” me.

I could just order an additional IP address from the VPS provider, but first I'd like to explore more cost-efficient methods.

For any ideas or input I’d be very grateful.
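The standard zero-cost approach is name-based virtual hosting: both domains share the one IP, and the server routes requests by the Host header (or SNI for TLS). A sketch in nginx syntax, with hypothetical domain names and paths, since no web server has been chosen yet:

```nginx
# Two server blocks on one IP; selection is by server_name (Host/SNI).
server {
    listen 443 ssl;
    server_name public.example.com;        # content for strangers
    ssl_certificate     /etc/ssl/public.pem;
    ssl_certificate_key /etc/ssl/public.key;
    root /srv/public;
}

server {
    listen 443 ssl;
    server_name family.example.net;        # personal / family services
    ssl_certificate     /etc/ssl/family.pem;
    ssl_certificate_key /etc/ssl/family.key;
    root /srv/family;
}
```

Be aware that a shared IP can still link the two identities indirectly, for example through reverse DNS lookups or certificate transparency logs, so use separate certificates per domain rather than one certificate listing both names.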

http – How to disable the use of verb tunneling using such headers or query parameters in .NET?

I’m failing a security scan that says my .NET application allows verb tunneling, and the recommendation is to disable it. The application needs to accept the PUT and DELETE verbs as well as GET and POST. The scan sends these headers to an endpoint that accepts POSTs:

X-HTTP-METHOD: PUT
X-HTTP-Method-Override: PUT
X-METHOD-OVERRIDE: PUT

I’ve done a lot of research and am having a hard time finding a way to “disable” verb tunneling. It seems these override methods have to be explicitly enabled, not the other way around.

For example, in .NET it is the HttpMethodOverrideExtensions middleware (UseHttpMethodOverride) that enables these types of headers.

Am I correct in responding that the application does not allow verb tunneling by default, since the methods that would enable it are not in the application's code base?
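In ASP.NET Core, override headers only take effect if UseHttpMethodOverride is registered in the pipeline; absent that middleware, the headers are inert and the request is processed as a plain POST. The middleware's essential logic can be sketched like this (Python, hypothetical names, not the actual .NET implementation):

```python
# Sketch of method-override middleware behavior: the override header is
# honored only when the middleware is explicitly enabled, and conventionally
# only on POST requests.
def effective_method(method, headers, override_enabled=False):
    if override_enabled and method == "POST":
        override = headers.get("X-HTTP-Method-Override")
        if override:
            return override.upper()
    return method

# Without the middleware, the scanner's header changes nothing:
m = effective_method("POST", {"X-HTTP-Method-Override": "PUT"})  # -> "POST"
```

So if nothing in the code base calls UseHttpMethodOverride (or implements an equivalent filter), the scan finding is likely a false positive worth contesting; you can demonstrate it by showing that a POST with the override header still hits the POST action.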