SeedVPS – VPS Hosting in Europe ─ Netherlands | KVM SSD | KVM Storage | 10Gbps NICs | From €9 /mo | Proxies-free


Plans starting from €9 EUR /mo

Check out our plans here:


Linux VPS | Windows VPS | SSD VPS


Our server nodes

  • HP Generation 9 Servers
  • Dual Intel E5 CPUs
  • Pure SSD Storage
  • Enterprise Disks / Datacenter SSDs
  • Hardware RAID10 Storage
  • Dual 10 Gbps NICs


All plans include

  • Instant Setup
  • 7-Day Money-Back Guarantee
  • KVM Virtualization / OVZ Virtualization
  • 1Gbps Guaranteed Uplink
  • Free and Unlimited Inbound Traffic
  • 99.9% Uptime Guaranteed


50+ INTERNATIONAL Payment methods: PayPal, Payza, Skrill, Credit/Debit Cards, iDEAL, Sofort Banking, Bank Transfer, Bitcoin, Ethereum and more.

Looking Glass:

Status Page:

SeedVPS is an established company, operating since 2013.

Visit our website

Contact us [email protected]

mysql – Does large text datatypes reserves storage on disk?

I’m designing a DB for an app. One table has a MEDIUMTEXT column which, according to the MySQL docs, can store up to 16 MB on disk.

So my question is: if I store 1 MB of text in this column, will it still reserve a 16 MB block on disk for that particular record?

Also, when reading all the columns of many rows at once (say 100 rows), what impact will such large columns have on memory and CPU usage?
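For context on the first question: InnoDB stores TEXT types as variable-length values, i.e. the actual bytes plus a small length prefix (3 bytes for MEDIUMTEXT), rather than reserving the 16 MB column maximum per row. A rough model of the cost, ignoring page and off-page overflow overhead:

```python
def mediumtext_on_disk_bytes(payload_bytes: int) -> int:
    # MEDIUMTEXT is variable-length: a 3-byte length prefix plus the data itself.
    length_prefix = 3
    return length_prefix + payload_bytes

# A 1 MB value costs roughly 1 MB on disk, not the 16 MB column maximum.
print(mediumtext_on_disk_bytes(1 * 1024 * 1024))  # 1048579
```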

kubernetes – Cannot mount CIFS storage on k8s cluster

I have to mount CIFS storage. I am trying to use a flexvolume (fstab/cifs), but I have no idea what I’m doing wrong.

Using microk8s v1.18

root@master:~/yamls# cat pod.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: fstab/cifs
data:
  username: 'xxxxxxxxxxx='
  password: 'xxxxxxxxxxxxxxxxxxxxxx=='
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: test
      mountPath: /data
  volumes:
  - name: test
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//srv/storage"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"


root@master:~/yamls# kubectl apply -f pod.yaml 
pod/busybox configured
The Secret "cifs-secret" is invalid: type: Invalid value: "fstab/cifs": field is immutable

On changing the type of the secret to Opaque, I get this:

  Type     Reason       Age                   From                                      Message
  ----     ------       ----                  ----                                      -------
  Normal   Scheduled    <unknown>             default-scheduler                         Successfully assigned default/busybox to
  Warning  FailedMount  17m (x23 over 48m)    kubelet, master  MountVolume.SetUp failed for volume "test" : Couldn't get secret default/cifs-secret err: Cannot get secret of type fstab/cifs

What secret type do I have to use with the CIFS driver? Why is this so hard? Has the API changed? And why does the API version change from release to release; is versioning intended to provide compatibility?

And, for the future, what can you suggest for NFS mounting? Also, which practices do you use to provide snapshots of the mounts (or any other backup system)?

Storage RDP | Admin Encoding RDP | 8 Users MAX | NL Location | Proxies-free

Why Choose Server Trafficweb?

  • We Provide 24/7 Real Support
  • Affordable Prices For High Quality Hardware
  • Windows Server 2012 R2
  • No Setup Cost.
  • Instant Setup
  • 99.9 % Network Uptime
  • Locations :- Netherlands

Pre Installed Application :

  • Internet Download Manager
  • WinRar
  • Chrome
  • Firefox
  • Media Player Classic
  • VLC
  • Utorrent
  • Total Commander
  • And More….

Netherlands RDP Servers :-
Our Goal :-
To provide the best available service to our clients, through the best support, the cheapest prices for RDP slots, and the best servers available on the market.
Our Guarantee
24 hour Money Back Guarantee*
Anytime refund in the form of Credits*

Not Allowed :-
Mining | Cracking | Downloading from Public Torrent Trackers | Child Porn | Sick Porn | Spamming | More than 3 Torrents at a time. Check with us if you are unsure about an activity before buying, since we do not provide refunds.

Payment method :-

Proof Support :-

Speed Testing :-
Request a demo for speed testing. We also have a 24-hour money-back guarantee.

24/7/365 Full Support :-

Skype :- live:servertrafficweb24
Website :-
Email :- [email protected]

Stay Connected :-
Facebook | Twitter | Instagram

javascript – How to get the downloadURL from Firebase Storage to save it in the Realtime Database

Hi, I want to upload a file to Firebase Storage and get the downloadURL to save in the Realtime Database.
This is my code; I can’t retrieve the downloadURL into the poster_book key to save it.
The other references are saved, but the upload is not performed and the link is not saved either.

$("#send_poster").on('click', function () {

        var user = firebase.auth().currentUser;
        var displayName = user.displayName;
        var photoURL = user.photoURL;
        var pname = $("#poster-name").val();
        var pauthor = $("#poster-author").val();
        var pdescription = $("#poster-description").val();
        var pcover = $("#poster-book")[0].files[0]; // the selected File, not the jQuery object
        // NOTE: the storage path was truncated in the original; 'posters/' + pname is an assumption
        var storageRef = firebase.storage().ref('posters/' + pname);
        var uploadTask = storageRef.put(pcover);
        // put() is asynchronous: wait for the upload, then request the download URL
        uploadTask.then(function (snapshot) {
            return snapshot.ref.getDownloadURL();
        }).then(function (downloadURL) {
            firebase.database().ref('posters/' + pname).set({
                poster_name: pname,
                poster_author: pauthor,
                poster_description: pdescription,
                poster_username: displayName,
                poster_book: downloadURL // the downloadURL only exists inside this callback
            });
        });
});

python – Optimization for data storage

I’d like your advice on the design of my application.

I use websockets to receive new data and the requests module to retrieve older data.
Then I use pyqtgraph with PyQt5 to display the data, tables, etc.

Some data I don’t keep in memory; I just display it on screen without any way to interact with it. Other data I keep in memory and use for processing.

I would like to know whether I should use dictionaries to store and process the data, create a SQL database, or use pandas.
There will be a lot of inserting, extracting, deleting, and a lot of calculations.

Potentially, when there are big moves, I could have thousands of messages per second to process, which I would have to add to my database, process, and then display on screen or do whatever I want with them.

If you have any questions, don’t hesitate.

Example of connection:

import websockets
import asyncio
import json

async def capture_data():
    subscriptions = (sub for sub in ("quote", "trade", "instrument"))
    uri = "wss://" + ",".join(subscriptions)  # host/path elided in the original

    async with websockets.connect(uri) as websocket:
        while True:
            data = await websocket.recv()
            message = json.loads(data)  # the feed sends JSON text frames
            # process/store `message` here

asyncio.get_event_loop().run_until_complete(capture_data())
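For the throughput concern above (thousands of messages per second), one common pattern is a bounded in-memory buffer that is flushed to the database in batches instead of one write per message. A minimal sketch, assuming messages are plain dicts; the names and sizes are illustrative:

```python
from collections import deque

BUFFER = deque(maxlen=10_000)  # bounded: drops the oldest entries under burst load

def on_message(message: dict) -> None:
    # Called from the websocket loop; appending to a deque is O(1).
    BUFFER.append(message)

def flush_batch(batch_size: int = 1000) -> list:
    # Pop up to batch_size messages for a single bulk insert or DataFrame append.
    batch = []
    while BUFFER and len(batch) < batch_size:
        batch.append(BUFFER.popleft())
    return batch
```

Whether the flushed batches then go into SQL, pandas, or plain dicts depends on the access patterns; the batching is what keeps the per-message overhead low in any of those backends.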


Photos taking up space in General > iPhone Storage, but I don’t have any photos and iCloud Photos is disabled

I recently backed up my photos to Google Photos, and I usually clear everything from my photo library because it takes up space. I make sure to empty the Recently Deleted album as well, but I noticed that Photos still takes up space in iPhone Storage under General.

I’ve tried rebooting and resetting all settings, but it doesn’t seem to fix the problem. I disabled and deleted all of my photos in iCloud, so I don’t see how that could be the cause of this problem. I found a similar post but I’ve tried everything suggested except for restoring the iPhone.

So before I go ahead and restore my phone, I just wanted to make sure if there’s anything else I could try since that post is pretty old.


storage – Looking for architecture recommendations/critique: locally hosted neo4j db + django application on apache server

We would like to develop an application for analyzing ribosomes, which will require some local data, yield a lot of new data, and be quite computationally demanding. Because the types of biological data we work with are highly unstructured and often poorly integrated, I decided to use the Neo4j graph database in production.

I am planning to host it on our university’s rack, which should also make all the compute necessary for parsing and clustering readily and locally available. One issue I see ahead is that some of the crystallographic files we use as the basis for our computations are quite large (on the order of 100 MB) and come in multiple formats (e.g. .pdb, .cif). The closest thing Neo4j has to this is its geospatial 3D Point type; beyond that, it does not support this sort of storage.

Hence, it seems the optimal approach is to commit all the string/integer non-structural data to the database and store the .cif files on the server’s filesystem. Then, when the user queries the front end, the Django app parses the large local file, pulls the necessary nomenclature/identifiers from Neo4j, and responds with the result.

I have, however, read a few “database vs. file storage” discussions and articles, and decided to ask: what pitfalls are there in something like this, and is there a better solution/architecture that I’m not seeing?
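The split described above can be sketched roughly as follows. The root directory, identifier, and function names are all hypothetical; in the real app, the identifier lookup would go through Neo4j and the response would come from a Django view:

```python
from pathlib import Path

CIF_ROOT = Path("/srv/structures")  # hypothetical filesystem root for the large files

def cif_path_for(structure_id: str) -> Path:
    # The database stores only metadata and the identifier;
    # the large .cif file itself lives on the server's filesystem.
    return CIF_ROOT / f"{structure_id}.cif"

def handle_query(structure_id: str) -> dict:
    # In the real app: pull nomenclature/identifiers from Neo4j,
    # then parse the local file and build the response.
    return {"id": structure_id, "file": str(cif_path_for(structure_id))}
```

One pitfall worth planning for with this layout is keeping the database and filesystem consistent: a Neo4j node whose file was deleted (or a file with no node) has to be detected and handled explicitly, since no transaction spans both stores.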