virtualization – Should a unified bare metal application suite outperform multiple VMs on the same hardware?

Assuming one single-socket Xeon D server, 8 cores, 16 threads, two 10G SFP+ interfaces, 64 GB of RAM, two SSDs…

An application bundle that includes nginx, MariaDB, some app logic, and maybe Redis. The OS would be FreeBSD or Fedora Server.

Does it confer any benefit to spin up several redundant VMs, each containing the full bundle, vs. just running one instance of the bundle on bare metal with no virtualization?

Would you predict any difference in networking performance or throughput? Does sharing the NICs among several VMs boost capacity at all, for example via SR-IOV hardware virtualization or something similar? Can a NIC handle more connections if it's virtualized?
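To make the NIC-sharing idea concrete, what I have in mind is something like carving SR-IOV virtual functions out of each 10G port and attaching one VF to each VM. A rough sketch on Linux, assuming the NIC supports SR-IOV; the interface name is just a placeholder:

echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs   # as root: split one physical port into 4 virtual functions
lspci | grep "Virtual Function"                        # each VF shows up as its own PCI device
# each VF can then be handed to a VM as a PCI hostdev, bypassing the host's software bridge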

Is it better to just have one instance of the app exploit and control the hardware directly?

(Assume same OS as host and guest in the virtualization case. I guess using KVM with Fedora.)

Thanks.

virtualization – Google cloud VPC alias range

I have created two VMs, each with two NICs (nic0 and nic1). nic0 uses the default range; for nic1 I created a VPC subnet with 192.168.1.0/24 as the primary range and 192.168.2.0/24 as the alias IP range. On vm-1, nic1's primary IP is 192.168.1.10 and its secondary (alias) IP is 192.168.2.10/29. Inside vm-1 I created a virtual router and gave it the IP 192.168.2.11/29; I can ping that IP from the VM, and from the virtual router I can also ping 192.168.1.10 and 192.168.2.10. I then created vm-2 with the same configuration; its nic1 IP is 192.168.1.20. From vm-2 I can ping 192.168.1.10 and 192.168.2.10, but I cannot ping 192.168.2.11, which is the virtual appliance IP inside vm-1. A packet capture shows that traffic from vm-2 to 192.168.2.11 does reach vm-1's eth1, and vm-1 sends a ping reply, but the reply never reaches vm-2's eth1.
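For reference, this is roughly how such a setup looks with gcloud; it is only a sketch of the addressing, not the exact commands I ran, and the zone, subnet name, and secondary-range name are placeholders:

gcloud compute instances create vm-1 \
    --zone us-central1-a \
    --network-interface subnet=default \
    --network-interface subnet=my-vpc-subnet,private-network-ip=192.168.1.10,aliases=alias-range:192.168.2.10/29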

virtualization – Proxmox 6.2-4 GPU Passthrough to Windows 10 Guest: Code 43

Host

Proxmox 6.2-4 on an HP Compaq 8200 Elite Convertible Minitower w/ i7-2600 and 16GB RAM.

Integrated graphics is enabled, though there doesn’t seem to be an option to choose between the iGPU and dGPU when booting in the BIOS, so it always uses the latter until Proxmox is loaded. Requisite virtualization options are enabled (VT-d, etc.).

Configuration

I followed this guide for configuring Proxmox and the VM, but here are the important bits:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off video=efifb:off"

~# ls /etc/modprobe.d/:
blacklist.conf iommu_unsafe_interrupts.conf kvm.conf pve-blacklist.conf vfio.conf (with the same content as specified in the guide)
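These follow the usual VFIO passthrough pattern; a representative sketch of their contents is below (the PCI IDs are placeholders, the real ones come from lspci -nn for the GPU and its HDMI audio function, and the guide's exact content may differ):

# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1c82,10de:0fb9 disable_vga=1
# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
# /etc/modprobe.d/iommu_unsafe_interrupts.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1
# /etc/modprobe.d/pve-blacklist.conf
blacklist nouveau
blacklist nvidia
blacklist nvidiafb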

~# lspci -nnk

~# cat /etc/pve/qemu-server/100.conf

Guest

Windows 10 1909 (18363.592) (Unactivated)

Task Manager correctly reports the i7-2600 as the CPU, but it also shows that it is running in a VM (Virtual machine: Yes).

I used 352.84-desktop-win10-64bit-international-whql.exe as the driver, directly downloaded from Nvidia’s website.

GPU reports Code 43 in Device Manager, and there is no output on the attached monitor.

Please do not hesitate to ask for any more needed information. Any help is greatly appreciated!

Poll Results: Preferred Virtualization Engine for Cheap VPS Hosting

On May 15th we asked our users to tell us which virtualization engine they most preferred as the basis of their cheap VPS hosting plans.  After 9 days of voting and 327 unique votes, the answer is in, and the community has decidedly voted in favor of KVM virtualization.

While voting will remain open, we grabbed the results as of May 24th (after 9 days of voting and 327 votes).

Recently we’ve published a number of guides about different virtualization technologies, including KVM, Xen and OpenVZ. If you are not familiar with the differences you can start by reading a broad description of the three and then continue to learn more about KVM Virtualization followed by OpenVZ Virtualization.

Check out the results below:

[Chart: poll results, preferred virtualization engine for cheap VPS hosting]

While it is not surprising that KVM Virtualization was the preferred hypervisor engine according to our users, what is surprising is the extent to which it led the other options, including OpenVZ and Xen. Both VMware and Hyper-V tend to sit more on the enterprise side of the game, but even so, they both beat Xen. While the whole "low end box" era of cheap VPS hosting started with OpenVZ, it is clear that the transition to the dedicated resources and enhanced security/neighbor protections offered by KVM is well under way. A few years ago a 2GB RAM OpenVZ VPS was very exciting, even though its RAM would likely have been oversold many times over. Today, there is no shortage of great deals on inexpensive KVM-based VPS options, many of which now offer RAM allocations in excess of 2GB for less than $10/month.

If you are shopping for a cheap KVM based VPS, check out the latest offers on LowEndBox.

There are plenty of great OpenVZ VPS offers still available, too, should you prefer it.

Thanks to everyone who took a moment to vote and share their opinion on our Preferred Virtualization Engine Poll!

Related posts:

Poll: OpenVZ VPS, KVM VPS, Xen VPS or Other?

Poll: What do you use your VPS for?

Jon Biloh

I’m Jon Biloh and I own LowEndBox and LowEndTalk. I’ve spent my nearly 20-year career in IT building companies, and now I’m excited to focus on building and enhancing the community at LowEndBox and LowEndTalk.

virtualization – Is Windows Sandbox a viable alternative to conventional VM solutions considering its design?

The idea of having a fast, disposable VM in the palm of my hand appeals to me very much. It makes adding an extra layer of security to anything I want to do so easy: just launch the sandbox application in a matter of seconds and you're done. Of course, that is assuming the VM actually does the job it's supposed to do…

A little disclaimer beforehand: I've read the article Beware the perils of Windows Sandbox at Magnitude8, which describes how Windows Sandbox comes with NAT pre-enabled, so any malware running in the guest would still get direct access to your intranet, which is already a big problem. But for the purpose of this question, let us consider just the host-guest scenarios.
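(For what it's worth, that particular issue can at least be mitigated with a sandbox configuration file; a minimal .wsb sketch that disables networking in the guest entirely looks like this, and double-clicking the .wsb file launches the Sandbox with that configuration:)

<Configuration>
  <Networking>Disable</Networking>
</Configuration>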

Windows Sandbox claims to "achieve a combination of security, density, and performance that isn't available in traditional VMs" by taking a different approach to memory and disk management. If I understand things correctly, everything that can in theory be safely shared between the host and the guest gets shared. According to the official documentation, the Sandbox shares both the host's immutable system files and its physical memory pages.

Despite that, Microsoft seems to remain confident that their solution is secure, as implied by one of the bullet points in the Sandbox overview:

Secure: Uses hardware-based virtualization for kernel isolation. It relies on the Microsoft hypervisor to run a separate kernel that isolates Windows Sandbox from the host.

This obviously raises a lot of questions, because at first glance all this resource sharing should greatly increase the attack surface, leaving more room for exploits to be found. Also, even the most sophisticated technology that changes only the implementation and not the design ultimately makes discovering an exploit more time- and resource-consuming, not less possible, doesn't it?

So, my question is

Would you consider Windows Sandbox to be a viable alternative to conventional VM solutions in terms of security, or do the shortcuts used to achieve the performance undermine the VM's core principles too much? Or am I just misunderstanding the technology, and everything the Sandbox does is in fact technically safe?

An extra question: Does the situation change when we’re talking about a web-based attack, such as opening a malicious site in a browser from within the Sandbox, or does it come down to the same situation as running an infected executable? (disregarding the extra layer of sandboxing done in the browser itself)

virtualization – Is it safer to run an executable file (untrusted, may contain a virus) in VirtualBox?

I have a host system (Windows 10) with a premium antivirus, and I am running Windows 7 in VirtualBox with no antivirus. I have installed Sandboxie in the guest and want to run executable files that may contain viruses. Is it safe to run those applications, and what would the effect be if a virus slipped out? I know this is a dumb question even after knowing about Sandboxie, but I would like to understand the consequences.

Also, I would like to know how the host machine is affected if I run that exe file in VirtualBox without Sandboxie.

Since there are many viruses and trojans that trigger silently even without the exe file being run, I just want to understand what damage a virus could do. (P.S. I still have the antivirus enabled on the host machine.)

kvm virtualization – How do I remove the default storage pool from a libvirt hypervisor so that, even after libvirtd restarts, there is NO storage pool?

I want to remove the default storage pool from my virt-manager AND NOT HAVE IT COME BACK BY ITSELF, EVER. I can destroy it and undefine it all I want, but when I restart libvirtd (for me that's "sudo systemctl restart libvirtd" in an Arch Linux terminal window) and restart virt-manager, the default storage pool is back, just like Frankenstein.
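The sequence I'm running is essentially this (a sketch; the pool is the stock one named "default"):

sudo virsh pool-destroy default
sudo virsh pool-undefine default
sudo systemctl restart libvirtd
# after restarting virt-manager, the "default" pool is defined and active again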

I don't want a storage pool of any kind. I simply want to move from the dual-boot setup I have now (Arch Linux and Windows) to running the two OSes simultaneously. I intend to provision two physical disk partitions on the host as disks for the guest, and I can do this via the XML that defines the domain.
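That part is the easy bit; a block-device disk entry in the domain XML looks roughly like this (the partition path is a placeholder):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sda5'/>
  <target dev='vda' bus='virtio'/>
</disk>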

Or am I required to have a storage pool no matter what?

kvm virtualization – QEMU vmvga + Windows – Which driver should be used?

I am trying to find a driver for Windows 7/8/10 to use the display driver "vmvga" with QEMU + KVM.

This driver is available by default on Linux, and VMsvga2 is available for OS X. But where can I find the right driver for Windows with QEMU 2.11.1 (as included in Ubuntu 18.04)?

If I specify vmvga in libvirt now, Windows falls back to the standard display adapter. The driver is not in the virtio-win ISO, nor can I find it anywhere on the web.
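For reference, the video section of my domain XML is set like this (the vram value is just an example):

<video>
  <model type='vmvga' vram='16384' heads='1'/>
</video>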

Dedicated Virtualization Web hosting conversation

Hello, I'm starting a new small company selling Linux VPSes. I need to know a few things; can someone help me out?
Server specifications: SYS Game 1
Intel i7-4790K
4c / 8t
4 GHz
16 GB DDR3 1333 MHz
1x120GB SSD

I'm going to use Virtualizor.

How many VPSes can I create on this dedicated server without overselling?
Each VPS would have 2 GB RAM and 1 CPU (capped at 50%), running Ubuntu.
And what else should I know so as not to fail?
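A rough back-of-the-envelope check against those specs (assuming roughly 2 GB of RAM kept back for the host OS and Virtualizor itself):

16 GB RAM - ~2 GB for the host ≈ 14 GB usable → about 7 VPSes at 2 GB each without overselling RAM
8 threads, each VPS capped at 50% of one CPU → CPU is unlikely to be the first bottleneck
120 GB SSD shared by ~7 guests → roughly 15 GB of disk per VPS, before the host's own usage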