Maximum CPU core usage on an ESXi server
I am using vSphere 5.5.0. I have one ESXi host with 2 sockets, 8 cores per socket, with hyperthreading, i.e. 16 physical cores in total. I have 6 virtual machines on this ESXi host with 2 vCPUs each, so I am left with 4 unallocated cores. One of my virtual machines is using a lot of CPU, so I want to increase its vCPU count. How many more vCPUs can I assign to this machine? Can I assign all 4 remaining cores to it, making it a 6-vCPU machine? Is that possible, or can I only assign 2 more? If I assign all 4 remaining cores to it, will there be any issue or side effect?
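For what it's worth: vCPUs can be overcommitted beyond physical cores (the scheduler time-slices them), so 6 vCPUs on a 16-core host that already has 12 vCPUs allocated is allowed; the usual advice is simply not to give a single VM more vCPUs than one host has physical cores. If you go ahead, a minimal pyVmomi sketch of the reconfiguration might look like this; the vCenter address, credentials, and VM name are placeholders, not from the original post:

```python
# Hedged sketch: raise a VM's vCPU count to 6 via pyVmomi.
# The VM must be powered off unless CPU hot-add is enabled on it.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'busy-vm')  # hypothetical VM name
view.Destroy()

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(numCPUs=6))
Disconnect(si)
```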
EVC Issue
Hi All,
Since migrating from 4.1 to 5.1 we have been having issues enabling EVC. This worked in the previous 4.1 environment, but now when trying to enable EVC we get the following error:
The host cannot be admitted to the cluster's current Enhanced vMotion Compatibility mode. Powered-on or suspended virtual machines on the host may be using CPU features hidden by that mode. (We get this message for each of the hosts that we have upgraded.)
Is there any way to identify what is preventing EVC from being enabled?
Thanks
Alex
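Not an authoritative answer, but that error usually means some powered-on or suspended VMs have picked up CPU features that the requested EVC baseline would hide; those VMs typically need a full power cycle (not just a guest reboot) before the host can be admitted. One way to narrow down the culprits might be to dump each powered-on VM's CPU feature requirements, which the 5.1 API exposes. A rough pyVmomi sketch, with connection details assumed:

```python
# Hedged sketch: list the CPU feature requirements of powered-on VMs.
# VMs whose requirements fall outside the target EVC baseline are the
# likely blockers and need a power cycle once EVC is enabled.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        feats = vm.runtime.featureRequirement or []
        print(vm.name, sorted(f.key for f in feats))
view.Destroy()
```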
vMotion Issues - Changing the network and host of a virtual machine during vMotion is not supported on the "Source" host 'xx.xx.xx.xx'.
Hi,
I'm new to the VMware environment, so I have a small problem to ask about.
I just set up multiple VLANs on my two-host cluster, and it runs well. Now both hosts can run VMs on multiple VLANs.
The problem is when I want to do a vMotion from host 1 to host 2. After I choose a port group under 'Destination Network', I get an error notification like this:
Changing the network and host of a virtual machine during vMotion is not supported on the "Source" host 'xx.xx.xx.xx'.
FYI, I use standard vSwitches on both hosts, since I have an Enterprise license.
How can I fix this issue? Or can someone suggest the additional configuration needed to fix it?
Best Regards,
Azlan Syah
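As far as I know, changing both the host and the network in a single vMotion requires vSphere 6.0 or later; on earlier versions the usual workaround is to give both hosts identically named port groups (same VLAN), so the destination network never has to change. A rough pyVmomi sketch for adding a matching port group on host 2; every name and the VLAN ID are assumptions:

```python
# Hedged sketch: create a port group on host 2 that matches host 1's
# port group name and VLAN, so vMotion needs no network change.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host2 = next(h for h in view.view if h.name == 'esxi-host2.example.com')
view.Destroy()

spec = vim.host.PortGroup.Specification(
    name='VLAN100-Servers',   # must match the port group name on host 1
    vlanId=100,               # assumed VLAN
    vswitchName='vSwitch0',   # assumed vSwitch
    policy=vim.host.NetworkPolicy())
host2.configManager.networkSystem.AddPortGroup(portgrp=spec)
```

With matching port groups on both sides, the wizard should no longer need a 'Destination Network' change at all.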
vMotion Error - The source detected that the destination failed to resume.
For the last few days I have been noticing an error stating: "The source detected that the destination failed to resume. An error occurred restoring the virtual machine state during migration. Failed to receive migration."
We have DRS set up, and this is happening with a few VMs intermittently. A manual vMotion also gives the same error.
ESXi 6.0 Build 3620759
Any ideas?
vMotion Gets Auto-Enabled at Random on Management Port Group - Is there a reason?
Hi All,
I've run into a very strange issue and I cannot find a reason or resolution.
I'm currently running vCenter 6.0 U2 and I have 7 ESXi hosts with mixed versions, 6.0 U1 & 6.0 U2.
These hosts have 2 dedicated vMotion port groups tied to 2 NICs, configured as active/active.
The management port group is configured only for management traffic. So it looks like this:
vmotion 1 - (vmotion only)
vmotion 2 - (vmotion only)
management - (management only)
Here's the strange issue. Sometimes, maybe once per week, I notice that the management network has automatically enabled vMotion. This occurs on different hosts at different times, and I can't figure out a pattern. I find out because the logs indicate vMotion failures coming from the management IP address. I resolve it by unchecking vMotion on the management port group, and everything is happy again.
Would there be any reason this would happen, aside from someone actually checking "vMotion" on the management network? I can say with 100% certainty that this is happening automatically and nobody is making changes.
Thanks!
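No idea on the root cause (host profiles or some automation re-applying configuration would be my first suspects), but a periodic audit could at least catch it quickly. A small pyVmomi sketch that reports which vmk adapters have vMotion enabled on every host; connection details are assumed:

```python
# Hedged sketch: report the vmk adapters tagged for vMotion on each host,
# so an unexpected management vmk shows up immediately.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    cfg = host.configManager.virtualNicManager.QueryNetConfig('vmotion')
    print(host.name, 'vMotion-enabled vmks:', list(cfg.selectedVnic or []))
view.Destroy()
```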
I want to ask about vMotion EVC.
Hi all,
Is it possible to enable EVC on a cluster while it is online, i.e. with the VMs powered on?
Any answers appreciated.
EVC - ESXi Hosts with Different CPU Speeds
Hello Experts,
I am trying to find out whether there is any relation between performance problems with virtual machines and having ESXi hosts in an EVC-enabled cluster that run processors with different clock speeds.
Many thanks for your inputs.
Cheers
Hi, I have recently been encountering storage vMotion failures with a VM that has many disks, some of them more than 2 TB in size. Does anyone have an answer?
2016-03-22T23:40:20.689Z cpu28:44126813)WARNING: XVMotion: 2501: Only 67/1152 blocks available. Timed out waiting for more blocks.
2016-03-22T23:40:20.794Z cpu7:44028510)WARNING: XVMotion: 2501: Only 73/894 blocks available. Timed out waiting for more blocks.
2016-03-23T01:01:12.024Z cpu16:44072396)WARNING: SVM: 3745: scsi2:3 Failed to allocate blocks for read IO: Timeout
2016-03-23T01:01:12.024Z cpu16:44072396)WARNING: SVM: 4079: scsi2:3 Failed to issue async read IO: Timeout
2016-03-23T01:01:12.029Z cpu8:16211908)WARNING: SVM: 6165: IO error (status: bad0021, transientStatus: Success) encountered while mirroring guest IO to the destination disk.
2016-03-23T01:01:12.029Z cpu8:16211908)WARNING: SVM: 2389: SVMDisconnectNode: Some guest IOs failed for device 9c663946-ffffffffffffffff-svmmirror during mirroring: I/O error
vMotion without shared storage - undocumented limitations
Like a lot of people, we have successfully used Enhanced vMotion (vMotion without shared storage) in multiple vBlock and VSPEX hardware refreshes. Our customer has come to expect the ability to move workloads around with no shared-storage requirement, and we have done many hundreds of these migrations. It's the greatest.
I've just realized a limitation, though, that I cannot find in the documentation.
Requirements and Limitations for vMotion Without Shared Storage
One cannot select a datastore cluster as the destination; you have to choose a datastore. If you go through the wizard choosing "Select compute resource first", then when you get to the storage selection you are presented with datastores only, no datastore clusters. This happens even with SDRS enabled in fully automated mode. If you go through the wizard choosing "Select storage resource first", you are immediately presented with the message "Datastore clusters cannot be selected when migrating storage and compute resources for powered-on virtual machines. Select a datastore."
We've just never questioned this, but I'm curious as to why this limitation exists.
Anyway, if you know any more about this or have any comments, feel free to share.
ESXi 6.0 Update 2
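One workaround, until someone explains the limitation: pick the destination datastore out of the datastore cluster yourself (for example, the member with the most free space) and pass it to the relocate call directly, which the API happily accepts. A rough pyVmomi sketch; all of the names are assumptions:

```python
# Hedged sketch: choose the datastore-cluster member with the most free
# space and use it as the destination for a shared-nothing vMotion.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

pod = find(vim.StoragePod, 'DatastoreCluster01')           # assumed names
vm = find(vim.VirtualMachine, 'workload-vm')
dest_host = find(vim.HostSystem, 'esxi-host2.example.com')

best_ds = max(pod.childEntity, key=lambda ds: ds.summary.freeSpace)
spec = vim.vm.RelocateSpec(datastore=best_ds, host=dest_host,
                           pool=dest_host.parent.resourcePool)
vm.RelocateVM_Task(spec=spec)
```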
How to force vmk1 to use a specific vmnic?
Hi,
In my ESXi host, the vMotion port (vmk1) is currently using vmnic6. I would like to force it to use vmnic7, because we are having problems on vmnic6. Any idea how to do that without rebooting the host?
Thanks,
Ganesh
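Assuming vmk1 sits on a standard vSwitch port group, you can override that port group's NIC teaming order (vmnic7 active, vmnic6 standby) and it takes effect immediately, no reboot needed. A rough pyVmomi sketch connecting straight to the host; the port group and vSwitch names are assumptions:

```python
# Hedged sketch: make vmnic7 the active uplink (and vmnic6 standby) for
# the port group that backs vmk1. Takes effect without a reboot.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='esxi-host.example.com', user='root',
                  pwd='secret', sslContext=ssl._create_unverified_context())
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=['vmnic7'], standbyNic=['vmnic6']))
spec = vim.host.PortGroup.Specification(
    name='vMotion',           # assumed name of vmk1's port group
    vlanId=0,
    vswitchName='vSwitch1',   # assumed vSwitch
    policy=vim.host.NetworkPolicy(nicTeaming=teaming))
host.configManager.networkSystem.UpdatePortGroup(pgName='vMotion', portgrp=spec)
```

From the ESXi shell, something like `esxcli network vswitch standard portgroup policy failover set -p vMotion -a vmnic7 -s vmnic6` should do the same, if I remember the flags correctly.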
Migration of vCenter from local datastore to vSAN
Hi,
A simple question (hopefully).
I'm looking to migrate a vCenter Server (5.5) VM from a local datastore to vSAN. I have licences installed for vMotion/Storage vMotion on the cluster hosts. As the easiest way to migrate the vCenter VM is via itself, I'm just looking for confirmation that I can do this without any outage and without impact on vCenter.
Any advice appreciated.
Thanks.
Can I deploy 2 VDP servers in the same vCenter?
Hi,
Can I deploy 2 vSphere Data Protection servers in the same vCenter?
Best Regards,
damrongpol
EVC Compatibility between E5-2600 v0 and E5-2600 v3
Hi All,
We are live-migrating VMs from one cluster to separate standalone hosts.
We are having issues with CPU compatibility.
The legacy hosts have E5-2600 v0 processors and the new hosts have E5-2600 v3. Please let me know which baseline I should use to successfully migrate a VM from our legacy cluster to the new hosts.
Our plan is to create a test cluster, enable EVC with a baseline common to the old legacy hosts, add the new host, and test-migrate a VM.
Regards
Karthik
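If I have the generations right, the E5-2600 v0 parts are Sandy Bridge and the v3 parts are Haswell, so the test cluster's EVC baseline would have to be the older generation (Sandy Bridge); a newer CPU can always enter a lower baseline. You can confirm what each host supports by reading its maximum EVC mode from the API. A short pyVmomi sketch, connection details assumed:

```python
# Hedged sketch: print each host's maximum supported EVC mode; the cluster
# baseline must not exceed the lowest mode reported.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    print(host.name, host.summary.maxEVCModeKey)  # e.g. 'intel-sandybridge'
view.Destroy()
```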
xVC vMotion using RelocateVM_Task and same datastore
Backstory: I have two discrete vCenter servers; we'll call them vcA and vcB. I set up a third server, vcC, in the same SSO domain as vcA, now with linked mode! I would like to live-migrate guests from vcB to vcC. (All vCenters are 6.0 U2 on Windows, and all hosts are ESXi 6.0.)
There is a script written by virtuallyGhetto to do such a thing. I have a four-node cluster that I'd like to start with. I moved all the guests off the first host, disconnected it from vcB, and landed it on vcC. The host has all the same networks (I exported / imported the Virtual Distributed Switch configs) and I left the same datastores intact. I rewrote the script to match up the distributed port group and the individual datastore IDs per disk from the source to the destination vCenter. When I ran the script against a test VM, it migrated successfully.
When I tried another VM migration, it failed, complaining about not enough space on the target datastore. I checked, and the datastore for this VM was 70% utilized; the datastore for my test VM was only 25% utilized. Eventually, after trying a few things, I increased the size of the datastore so that it had >50% free space, and the migration succeeded.
I watched the datastore during the vMotion and didn't see any sign that files were actually copied. The VM is ~5 TB in size, but the migration took 1 min 16 seconds, and our network isn't that fast. At this point I'm thinking that RelocateVM_Task does pre-checks on target datastore size without taking into account the possibility that a storage vMotion will not be required.
Have I missed an option on the VirtualMachineRelocateSpec? Or on the VirtualMachineRelocateSpecDiskLocator? Someone on another forum suggested this was a bug which may be fixed in 6.0 U3.
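I don't know of a documented flag on VirtualMachineRelocateSpec that skips the free-space pre-check, but for reference here is roughly what the cross-vCenter call looks like in pyVmomi when each disk should stay on its equivalent datastore. Every name, UUID, and thumbprint below is a placeholder, and the per-disk mapping is collapsed to a single datastore for brevity:

```python
# Hedged sketch: cross-vCenter vMotion (vcB -> vcC) via RelocateVM_Task,
# pinning each disk to a datastore on the destination side.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()

def find(si, vimtype, name):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

src = SmartConnect(host='vcB.example.com', user='administrator@vsphere.local',
                   pwd='secret', sslContext=ctx)
dst = SmartConnect(host='vcC.example.com', user='administrator@vsphere.local',
                   pwd='secret', sslContext=ctx)

vm = find(src, vim.VirtualMachine, 'test-vm')
dest_host = find(dst, vim.HostSystem, 'esxi-host1.example.com')
dest_ds = find(dst, vim.Datastore, 'datastore-01')  # same name on both vCenters
dest_pool = dest_host.parent.resourcePool

service = vim.ServiceLocator(
    url='https://vcC.example.com',
    instanceUuid=dst.content.about.instanceUuid,
    sslThumbprint='AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD',
    credential=vim.ServiceLocator.NamePassword(
        username='administrator@vsphere.local', password='secret'))

disks = [vim.vm.RelocateSpec.DiskLocator(diskId=dev.key, datastore=dest_ds)
         for dev in vm.config.hardware.device
         if isinstance(dev, vim.vm.device.VirtualDisk)]

spec = vim.vm.RelocateSpec(service=service, datastore=dest_ds,
                           host=dest_host, pool=dest_pool, disk=disks)
vm.RelocateVM_Task(spec=spec)
```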
Force use of a specific vMotion adapter (mix of 10 Gb and 1 Gb)
Hello, I have 2 physical NICs for vMotion; the first is 10 Gb, the second is 1 Gb. I don't want to put them together in one vSwitch as active/standby or active/unused, because of the well-known issue (only 8 VMs can move at once).
Can I use two vSwitches (one connected to the 10 Gb NIC and the second to the 1 Gb NIC) for vMotion and force the system to use the 10 Gb one unless it is down? Or use both of them?
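If you end up with one vMotion vmk per vSwitch, the host's VirtualNicManager lets you control which vmk carries vMotion, so a small script could keep vMotion on the 10 Gb vmk and only fail over to the 1 Gb one when the fast link is down. A minimal pyVmomi sketch; the vmk names are assumptions:

```python
# Hedged sketch: tag only the 10 Gb vmk for vMotion and untag the 1 Gb one.
# A monitoring script could swap the two calls when the 10 Gb link fails.
import ssl
from pyVim.connect import SmartConnect

si = SmartConnect(host='esxi-host.example.com', user='root',
                  pwd='secret', sslContext=ssl._create_unverified_context())
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
vnm = host.configManager.virtualNicManager

vnm.SelectVnicForNicType(nicType='vmotion', device='vmk1')    # vmk on the 10 Gb vSwitch
vnm.DeselectVnicForNicType(nicType='vmotion', device='vmk2')  # vmk on the 1 Gb vSwitch
```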
vMotion woes: vmk_preemptive_fail
Hello community! I've just started to have issues with a host in my cluster. There are 3 nodes, and 1 node will not allow a VM to vMotion off the server, nor will it let a VM vMotion onto it. The vMotion network is fine: I've tested DNS, vmkping, and ping, all of which were successful. I also restarted the management agents. I've taken a look at esxtop and don't see anything that jumps out at me.
When I attempt a vMotion, whether "high priority" or "standard", it starts at 3 percent, moves to 9 percent, and shortly thereafter I get the following message:
Migrate virtual machine VMViewTrans1.lldc.us19.local
A general system error occurred: Failed to initialize migration at source. Error 0xbad00a4. VMotion failed to start due to lack of cpu or memory resources.
Looking up 0xbad00a4 with vmkerrcode reports "vmk_preemptive_fail".
I have plenty of CPU and memory shown on each server, and they are balanced as far as the cluster view shows, with all 3 connected, each having 98 GB of RAM, and the CPUs barely working; I'd say maybe 2% at most on each. When I power off a machine, I can vMotion it to another host and vice versa, but not while powered on. When I try to vMotion a powered-on machine back as a test, I get the messages below. Please remember, this is only 1 host having the problem; I can vMotion fine between the other two. Both servers are identical, and vMotion worked fine before. The resource pools have never changed and I have no reservations whatsoever.
Failed to initialize migration at source. Error 195887268. VMotion failed to start due to lack of cpu or memory resources.
Failed to allocate migration heap.
Failed to create a migrate heap of size 30879984: Not found
Failed to reserve a migration memory slice. This host has reached its concurrent migration limit. Please wait for another migration to complete before retrying.
The vMotion failed because the destination host did not receive data from the source host on the vMotion network. Please check your vMotion network settings and physical network configuration and ensure they are correct.
Thanks in advance for any advice on how to resolve this.
How to install vMotion
How can I set up and enable vMotion in ESXi 6.5? I have just installed ESXi and cannot figure out vMotion. Any help is appreciated.
Thank you,
Sol
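vMotion isn't something you install; it's a service you enable on a VMkernel adapter, and actually migrating powered-on VMs additionally requires the hosts to be managed by a vCenter Server with vMotion-capable licensing. A rough pyVmomi sketch that adds a vmk on an existing port group and tags it for vMotion; the port group name and addresses are assumptions:

```python
# Hedged sketch: create a VMkernel adapter and enable the vMotion service
# on it (ESXi 6.5). Port group, IP, and netmask are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='esxi-host.example.com', user='root',
                  pwd='secret', sslContext=ssl._create_unverified_context())
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

nic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress='192.168.50.10',
                         subnetMask='255.255.255.0'))
vmk = host.configManager.networkSystem.AddVirtualNic(portgroup='vMotion',
                                                     nic=nic_spec)
host.configManager.virtualNicManager.SelectVnicForNicType(nicType='vmotion',
                                                          device=vmk)
```

The equivalent in the vSphere client is the vMotion service checkbox when adding or editing a VMkernel adapter.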
vMotion with different Mgmt network
Hi All,
I have 3 ESXi hosts in a cluster:
2 ESXi hosts (ESXi-A & ESXi-B) with the management IP 192.168.0.1, and
1 ESXi host (ESXi-C) with the management IP 172.0.0.1.
But my vMotion network is configured on a different vSwitch, and all 3 ESXi hosts have vMotion IP addresses in the same 172.0.3.x range.
Now, can I do a vMotion between ESXi-C and ESXi-A or ESXi-B, given that they have different management networks?
Script or vCO workflow to create contiguous space on datastore cluster
Hey guys, first time asking a question here; I hope someone can point me in the right direction.
Currently, when a request comes in through vRA for a large VM, we have to move VMs around on a datastore cluster to consolidate the free space for the new VM. I would really love for this to be somewhat automated, where I could punch in the space I need and it would move VMs around the DS cluster to make that space.
It's possible I'm missing something obvious, but I have no idea. I toyed with the idea of putting a DS in maintenance mode, letting it move VMs, and then taking it out again, but that would cause a lot of VMs to move that might not need to.
Any ideas are appreciated! Thanks all.
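I'm not aware of a built-in "make N GB of contiguous space" operation, so one starting point could be a greedy script: pick the pod member that is already closest to fitting the request, then storage-vMotion its smallest VMs to the emptiest other members until it fits. A rough pyVmomi sketch; the names and the naive placement logic are assumptions, not production-ready logic (SDRS's StorageResourceManager.RecommendDatastores may also be worth a look for the placement side):

```python
# Hedged sketch: greedily free `NEEDED` bytes on one member of a datastore
# cluster by relocating its smallest VMs to the emptiest other member.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

NEEDED = 500 * 1024**3  # example: 500 GB requested through vRA

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == 'DatastoreCluster01')  # assumed
view.Destroy()

# Target the member already closest to having enough free space.
members = sorted(pod.childEntity, key=lambda d: d.summary.freeSpace, reverse=True)
target, others = members[0], members[1:]

freed = target.summary.freeSpace
for vm in sorted(target.vm, key=lambda v: v.summary.storage.committed):
    if freed >= NEEDED:
        break
    dest = max(others, key=lambda d: d.summary.freeSpace)
    size = vm.summary.storage.committed
    if dest.summary.freeSpace <= size:
        continue  # nowhere big enough to put this VM
    vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=dest))
    freed += size  # optimistic; in real use, wait on the task and re-read stats
```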
Percentage of Entitled Resources Delivered Very Low
I have a cluster of HP BL685c G6 hosts running ESXi 5.1 with Enterprise Plus licenses, connected to a vCenter Server 5.1 environment.
DRS is enabled and set to fully automated.
The cluster runs close to 100 VMs. Nobody is complaining that performance is bad, but in my daily checks of the environment I constantly notice that the resource distribution chart shows seemingly unhealthy CPU resource delivery: yellows all over the place.
In addition, vCOps shows an alarming risk assessment of the cluster, stating that I'm over-provisioned by dozens of VMs!
Yet when I investigate each host individually in vCOps, they tell a much different story, where some are so underutilized as to allow for over 100 more VMs per host!
What could be causing this and what might help?
For comparison's sake, I also have another cluster, with HP BL685c G7 hosts, similarly licensed, in the same vCenter Server environment, running over 100 VMs, and every one of them is green and being supplied with nearly 100% of its requested resources. DRS is set exactly the same.
I would think that if my hosts were not up to the task of supplying the necessary CPU resources, the VM objects in this chart would be much larger, showing that they are consuming more resources than the host can handle; but as you can see from the chart, they are all very small, and total utilization per host is far below even 25%.
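"Percentage of entitled resources delivered" keys off contention (CPU ready/co-stop) rather than raw utilization, so low host utilization and yellow delivery numbers can coexist; it might be worth pulling cpu.ready.summation per VM and converting it to a percentage. A rough pyVmomi sketch of that real-time query, connection details assumed:

```python
# Hedged sketch: sample real-time cpu.ready.summation for powered-on VMs
# and print it as a per-vCPU percentage of the 20-second sample interval.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
pm = content.perfManager

# Look up the counter id for cpu.ready.summation.
ready_id = next(c.key for c in pm.perfCounter
                if c.groupInfo.key == 'cpu' and c.nameInfo.key == 'ready'
                and c.rollupType == 'summation')

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        continue
    spec = vim.PerformanceManager.QuerySpec(
        entity=vm, maxSample=1, intervalId=20,
        metricId=[vim.PerformanceManager.MetricId(counterId=ready_id,
                                                  instance='')])
    result = pm.QueryPerf(querySpec=[spec])
    if result and result[0].value:
        ready_ms = result[0].value[0].value[0]
        pct = ready_ms / (20000.0 * vm.config.hardware.numCPU) * 100
        print('%-40s CPU ready %.1f%%' % (vm.name, pct))
view.Destroy()
```

Sustained ready above roughly 5% per vCPU is the usual rule-of-thumb threshold where those yellow delivery numbers start to mean something.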