VMware Communities: vMotion & Resource Management

ESXi Resource Management


 

Hi, I recently performed an evaluation of vSphere 5.5 and I have a query about the meaning of some ESXi parameters.

 

 

I can see from the resource management guide that there are three parameters relating to CPU resource allocation: shares, limit and reservation. When viewing the configuration of a virtual machine via vCenter, I can see four parameters: CPU Shares, Max CPU, Min CPU and Min CPU Limit. CPU Shares and Max CPU correspond to shares and limit; however, it is not clear what Min CPU and Min CPU Limit correspond to. Presumably one of them corresponds to the reservation, but what does the other mean?


Disable vMotion for a single VM


Hi,

 

I'm trying to ascertain whether there is a way of preventing a manual vMotion of a particular VM. The reason for this is that the design I am currently working on has a number of Lync 2013 servers, and vMotion is not supported by Microsoft for Lync 2013. I can already disable DRS for the VM to prevent DRS migrations, and I could create a complicated permission set to block migration of the VM, but I am wondering whether there is an advanced setting that would specifically prevent a powered-on virtual machine from being migrated to an alternate host.

 

Any help much appreciated.

 

Thanks

 

Andy

vMotion Issue With Certain Host


We have a 4-host cluster and only one host will not vMotion with the others. I can vmkping from each host to the one in question just fine. There are no other adapters that have vMotion enabled, with the exception of the teamed one on that host. I am a bit stumped about one piece of troubleshooting, though: when I tried to disconnect vmNIC6 (vmnic3 and vmnic6 are teamed) from the switch, it still showed the NIC as UP; however, when I disconnect vmnic6 it shows DOWN. Is there a valid reason for that? I checked all the switches and they all check out; the ports are set up correctly, with the correct VLAN too. Anyhow, I can't figure out why it won't vMotion. Below is the error I receive. Any help is greatly appreciated. Thanks

 

The vMotion migrations failed because the ESX hosts were not able to connect over the vMotion network. Check the vMotion network settings and physical network configuration.

vMotion migration [167790456:1443201287021031] failed to create a connection with remote host <10.0.71.130>: The ESX hosts failed to connect over the VMotion network

Migration [167790456:1443201287021031] failed to connect to remote host <10.0.71.130> from host <10.0.71.120>: Host is down

The vMotion failed because the destination host did not receive data from the source host on the vMotion network. Please check your vMotion network settings and physical network configuration and ensure they are correct.
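As a starting point (a hedged PowerCLI sketch, not a confirmed fix), it can help to list every vMotion-enabled vmkernel port in the cluster and compare IP, netmask and MTU side by side; "Prod" is a placeholder cluster name:

    # List the vMotion vmkernel ports on each host with their IP settings and MTU.
    foreach ($esx in Get-Cluster -Name "Prod" | Get-VMHost) {
        Get-VMHostNetworkAdapter -VMHost $esx -VMKernel |
            Where-Object { $_.VMotionEnabled } |
            Select-Object @{N='Host';E={$esx.Name}}, Name, IP, SubnetMask, Mtu, PortGroupName
    }

A mismatched netmask or MTU on the problem host is a common cause of "hosts were not able to connect over the vMotion network" even when small vmkping packets succeed.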

DRS automation level monitoring.


Hi all,

 

For maintenance reasons we, the VMware admin team, sometimes set the DRS automation level to "Manual".

As we are human, we may forget to set it back to "Fully Automated".

 

I know how to check the DRS automation level using PowerCLI, but I would like to have an alarm in vCenter so that this DRS setting is monitored for compliance with our standards.
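A minimal PowerCLI sketch of the kind of check mentioned above, which could be run on a schedule as a stop-gap until a proper alarm is found (the expected level is an assumption to adjust):

    # Report any DRS-enabled cluster whose automation level is not the expected one.
    $expected = "FullyAutomated"
    Get-Cluster |
        Where-Object { $_.DrsEnabled -and $_.DrsAutomationLevel -ne $expected } |
        Select-Object Name, DrsEnabled, DrsAutomationLevel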

 

I have had a look around in the documentation and forums, but I can't find anything.

Am I missing something? Maybe I should think of another way to satisfy my need, which is to be informed in the vCenter interface when DRS is not set to fully automated.

 

thanks for your replies / hints.

 

regards

Gildas

Host vMotion stuck at 65% for 14 hours


Hello All

I am facing an issue in my environment.

Host vMotion gets stuck at 65%; this issue is only observed on the VMs that are configured in SRM.

The vMotion has been stuck for the past 14 hours.

 

I am unable to understand and resolve the problem.

Virtual machine memory: vCenter graphs or Windows performance charts for showing accurate memory utilisation?


Hi,

 

In my organisation I'm looking at scaling down some virtual machines due to a high cluster memory utilisation.

 

We use SCOM with the Veeam Management Pack for VMware, which allows running reports on oversized and undersized VMs. After running this, certain VMs I would expect to be reported as oversized are not showing up; in fact, only 3 VMs are reported as oversized.

 

When I go into the vCenter performance charts for certain VMs I see active memory usage of under 50%, but within the Windows performance charts I see memory utilisation of around 90%. Because of this, telling a system admin that I want to take memory away from their VMs is not welcomed, as they just see high memory utilisation within their Windows OS.

 

Is there a tool or an accurate way of determining this? I've looked at esxtop, but it doesn't really show me much.
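For what it's worth, here is a hedged PowerCLI sketch of pulling active vs. consumed memory for a single VM over the last day; "APP01" is a placeholder name, and availability of these metrics in historical intervals depends on the vCenter statistics level:

    # Compare average active and consumed memory (reported in KB by vCenter) for one VM.
    $vm = Get-VM -Name "APP01"
    Get-Stat -Entity $vm -Stat "mem.active.average","mem.consumed.average" -Start (Get-Date).AddDays(-1) |
        Group-Object MetricId |
        ForEach-Object {
            [pscustomobject]@{
                Metric = $_.Name
                AvgMB  = [math]::Round((($_.Group | Measure-Object -Property Value -Average).Average) / 1KB, 1)
            }
        }

Active memory is the hypervisor's estimate of recently touched pages, while the in-guest counters report committed/allocated memory, which is why the two views diverge.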

 

I am using vCenter 5.5 Update 3 and ESXi 5.5 Update 3.

 

Any help or guidance on this will be appreciated.

 

Thanks

Mike.

Is vMotion working between HP DL380 G7 and Gen9?


Hi All,

 

I would like to find out whether vMotion works between an HP DL380 G7 and an HP DL380 Gen9.

Can someone help me?

 

vMotion between these two servers:

 

HP DL380 G7:

Intel Xeon E5649

Embedded 4x1GbE Network Adapter

+NC364T PCI Express QuadPort Gigabit

 

HP DL380 Gen9:

Intel Xeon E5-2650v3

Embedded 4x1GbE Network Adapter

+HP FlexFabric 10Gb 2-port 533FLR-T 10GBASE-T

 

Can someone help me?

I would be very thankful

 

Ralf

Successful Migration through vSphere GUI but not through PowerCLI


I am having some trouble performing a migration using PowerCLI. I am getting the error: "Unable to access the virtual machine configuration: Unable to access file [datastore1 (3)] jcarroll-vni-3501-server-move/jcarroll-vni-3501-server-move.vmx". I am able to successfully migrate various VMs through the vSphere Web Client; I only receive this error when attempting the following PowerCLI move command:

PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-VM -Name jcarroll-vni-3501-server-move | Move-VM -Destination x.x.x.x

Again, this issue does not occur when migrating this VM via the Web Client. I have checked the datastore, and the file is present and named correctly. I'm not sure what the issue could be.
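One thing worth trying (an assumption, not a confirmed fix) is to resolve the destination host and the current datastore into objects instead of passing a bare IP string to -Destination:

    # "x.x.x.x" is a placeholder; use the name the destination host is registered with in vCenter.
    $vm   = Get-VM -Name "jcarroll-vni-3501-server-move"
    $dest = Get-VMHost -Name "x.x.x.x"
    Move-VM -VM $vm -Destination $dest -Datastore (Get-Datastore -Name "datastore1 (3)")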

Any info/help would be greatly appreciated.

Thanks.


Multi-NIC vs. LACP for vMotion


Hi

Thanks for reading.

 

I am using vSphere 6.

My switches are Nexus (2 x 10 Gb/s ports for vMotion), and I don't know which option is better:

 

Multi-NIC vMotion, or LACP (a LAG in active mode)?

 

My other question is about getting the best vMotion migration speed: what is the best MTU (jumbo frames)?
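On the MTU question, a hedged PowerCLI sketch of enabling jumbo frames end to end on a standard vSwitch and its vMotion vmkernel port; "esx01", "vSwitch1" and "vmk1" are placeholders, and the physical Nexus ports must be set to a matching MTU for this to help:

    # Raise the MTU on the vSwitch first, then on the vMotion vmkernel adapter.
    Get-VirtualSwitch -VMHost "esx01" -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
    Get-VMHostNetworkAdapter -VMHost "esx01" -VMKernel -Name "vmk1" | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false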

 

 

 

 

Thanks.

How to migrate VMs from one data center to another


I have two VMware environments at two different data centers in different states. I want to migrate some of the VMs from one data center to the other. I've heard you can Storage vMotion across data centers with the VMware bridge option, or fail over with vSphere Replication. What is the safest and best way to migrate VMs across data centers? Do I need to buy some kind of VMware option?

Migration warning - not supported devices


Hello,

 

I successfully moved a couple of VMs from one host to another (vCenter with 2 ESXi 6 hosts), but on one VM I get the following errors:

 

"Device "Hard disk 1' uses a controller that is not supported. This is a general limitation of the virtual machine's compatibility with the ESXi version of the selected host."

"Virtual ethernet card 'Network adapter 1' is not supported.  This is not a limitation of the host in general, but of the virtual machine's configured guest OS on the selected host"

 

I get the options to continue or abort, but I don't want to proceed until I know for sure that the VM will not crash in any way.

 

 

The hosts are identical (hardware + ESXi). The VM is a custom Linux (Other 3.x Linux) with VMware Tools (guest managed). The virtual hardware version is 10 (1 HDD, 1 VMXNET3 NIC, basic stuff).

 

Google hasn't really helped me so far. I found similar cases, but they were all about boot problems and VM conversion. The VM runs fine as it is, and I'm not able to experiment with it, as it is in a production environment.

 

Any advice?

What is the effect of assigning a static MAC address during live migration?


Hi Guys!


I have learned that the MAC address of a VM changes after migration only if the VM is restarted. But the definition of live migration says that it is "the kind of migration which happens without turning off the running VM or application."

The new MAC is allocated from the MAC pool at the destination.

I want to know whether the MAC address stays unchanged after migration if we use a static MAC address for the VM.


And why is it necessary to assign a static MAC to a VM running a Linux distribution?
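For reference, a hedged PowerCLI sketch of pinning a static MAC on a VM's network adapter; the VM name, adapter name and address are placeholders, and a manually assigned MAC must fall in VMware's static range (00:50:56:00:00:00 to 00:50:56:3F:FF:FF):

    # Typically done while the VM is powered off; the address is then stored in the VM's configuration.
    $vm = Get-VM -Name "linux-vm01"
    Get-NetworkAdapter -VM $vm -Name "Network adapter 1" |
        Set-NetworkAdapter -MacAddress "00:50:56:3f:12:34" -Confirm:$false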


Thanks in adv !!!


K.Visalini

VMware virtual machine migration types in vSphere 6.0


In VMware virtualization, we can perform multiple types of migrations of virtual machines.

 

Virtual machine migration means moving a virtual machine from one host, datastore, or vCenter Server to another. We perform these activities for workload balancing, or as part of preparations to avoid downtime during activities like server maintenance and site-level switchover.


Below are the types of migrations we can perform:


Cold: 

Migration of a powered-off virtual machine from its current host, datastore, or both to a new host or datastore. Virtual machines can be migrated across vCenter Server instances with this type.


Suspended: 


Migration of a suspended virtual machine from its current host, datastore, or both to a new host or datastore. Virtual machines can be migrated across vCenter Server instances with this type.


vMotion: 


Migration of a powered-on virtual machine from its current host to a new host without downtime and with zero data or connectivity loss.
Only the CPU and memory state on the ESXi host is migrated; the virtual machine files on the datastore are not moved.

 

Virtual machines can be migrated across vCenter Server instances with this type.

 



Storage vMotion: 


Migration of a powered-on virtual machine's files from the current datastore to a new datastore. With this type the ESXi host is not changed; only the file location moves from the current datastore to the new one.

 

Virtual machines can be migrated across vCenter Server instances with this type.

 


 


Shared-nothing vSphere vMotion:
 

Migration of a powered-on virtual machine from its current host and datastore to a new host and a new datastore; no shared datastore is required, so local datastores can be used.

 

 


 

Virtual machines can be migrated across vCenter Server instances with this type.

 

This migration type is often termed vMotion + Storage vMotion due to its nature.

 

 


 

 

NOTE: For vMotion, shared-nothing vMotion and suspended-state migration, CPU compatibility is a must. The CPU vendor and processor family should match; factors like the number of cores and clock speed do not matter here.
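For illustration, a hedged PowerCLI sketch of the migration types described above (all object names are placeholders):

    # vMotion: change only the host; the files stay on the shared datastore.
    $vm = Get-VM -Name "web01"
    Move-VM -VM $vm -Destination (Get-VMHost -Name "esx02.lab.local")

    # Storage vMotion: change only the datastore; the VM stays on its current host.
    Move-VM -VM $vm -Datastore (Get-Datastore -Name "DS-NEW")

    # Shared-nothing vMotion: change host and datastore in one operation.
    Move-VM -VM $vm -Destination (Get-VMHost -Name "esx02.lab.local") -Datastore (Get-Datastore -Name "DS-NEW")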


Why can I snapshot this?


Hey,

 

I have this virtual machine

 

- Hosted on esxi 5.5 update 2 patch 5
- Hardware version 8
- One RDM attached in physical mode

 

 

The snapshot option isn't grayed out like I expected it to be, and I'm able to successfully migrate the virtual machine without its memory.
Why can I migrate this?

 


Dan.

A general system error occurred: A fatal internal error occurred. See the virtual machine's log for more details. Failed to copy one or more disks


We are attempting to storage vMotion a drive between datastores and it keeps failing with the below event details.

 

 

 

----begin event details----

event 6/30/2016 12:36:40 PM, Cannot migrate chilhqdt02 (P1)(New Paperless) from host8., ch1nim01-t1-07 to host8.,ch1nim01-t1-07 in CH1

 

Description:

 

Failed to migrate the virtual machine for reasons described in the event message

 

Possible causes:

 

Cause: The virtual machine did not migrate. This condition can occur if vMotion IPs are not configured, the source and destination hosts are not accessible, and so on.

 

Action: Check the reason in the event message to find the cause of the failure. Ensure that the vMotion IPs are configured on source and destination hosts, the hosts are accessible, and so on.

--end event details---

 

 

So far we've tried these actions to no avail:

 

1. Upgraded the source datastore from VMFS3 to VMFS5

2. Attempted the storage vMotion several times including overnight when I/O would be very low for that VM and source /destination datastores.

3. Ensured vMotion is configured properly on the host

4. Ensured we have no issues with our network traffic

5. We've been able to successfully storage vMotion several other drives connected to same VM to the same destination datastore

6. vMotioned the VM to another host and tried storage vMotion again.

 

I also tried the storage vMotion again while watching the progress in hostd.log; greps for "fail", "storage", "VMotion" and "vMotion" yielded the output below. I wasn't able to find a concrete root cause, but I did find a line pointing to checking the virtual machine logs; however, downloading that file fails from both the thick client and the web client. Has anyone run across this scenario and is willing to share your solution? I realize several things could be the true root cause.

 

2016-06-30T14:48:03.317Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] DeleteFeatureCompat failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.317Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.317Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.318Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.346Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.349Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.353Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.356Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.357Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.357Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.359Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.367Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.369Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.370Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.375Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.385Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.386Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-c4-eb-28 user=vpxuser] NIC: is present failed: vim.fault.GenericVmConfigFault

2016-06-30T14:48:03.584Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.585Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.586Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.587Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.589Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.590Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.591Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.592Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.594Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.595Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.596Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.597Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.599Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:03.600Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).21B-c4-eb-28 user=vpxuser] PopulateCache failed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:12.684Z [6A180B70 info 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-12 user=vtlog state changed from failure to none

2016-06-30T14:48:13.157Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.158Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.160Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.161Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.162Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.163Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.164Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.165Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.166Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.167Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.169Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.170Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.171Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.172Z [6A180B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:13.188Z [6F8C1B70 info 'Hbrsvc'] Replicator: ReconfigListener failed to look up VM (id=28)

2016-06-30T14:48:19.508Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.509Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.510Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.511Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.512Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.514Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.515Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.516Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.517Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.518Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.519Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.521Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.522Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:19.523Z [6A380B70 warning 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).ed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.374Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.375Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.376Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.378Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.379Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.380Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.381Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.382Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.383Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.384Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.386Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.387Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.388Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.389Z [FFF875B0 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.455Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.456Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.458Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.459Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.460Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.461Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.462Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.463Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.465Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.466Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.467Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.468Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.469Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T14:48:33.471Z [6F680B70 warning 'Vmsvc.vm:/vmfs/volumes/4eec6ef5-6501dba0-7d60-0025b500003d/chilhqdt02/chilhqdt02.vmx' opID=A2B5CBBC-0000021B-c4-eb-b userfailed: _diskAccess : false, _storageAccessible : true

2016-06-30T15:10:53.914Z [66DB8B70 verbose 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).b:: hostlog state changed from emigrating to failure

2016-06-30T15:54:21.440Z [694C6B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-43-11-12 user=vpxuser] VMotionInit: hostlog state changed from failure to none

2016-06-30T16:14:21.388Z [6F280B70 verbose 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).b:: hostlog state changed from emigrating to failure

2016-06-30T17:14:32.240Z [66DB8B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-2c-5-64 user=vpxuser] VMotionInit: hostlog state changed from failure to none

2016-06-30T17:36:39.276Z [66F40B70 verbose 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).b:: hostlog state changed from emigrating to failure

2016-06-30T20:29:23.714Z [66DB8B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-9f-37-28 user=vpxuser] VMotionInit: hostlog state changed from failure to none

2016-06-30T20:53:00.389Z [696C1B70 verbose 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).b:: hostlog state changed from emigrating to failure

2016-06-30T21:08:16.511Z [66DB8B70 info 'Vmsvc.vm:/vmfs/volumes/576f5f18-eda38b00-fb57-0025b52a0cef/chilhqdt02 (P1)(New Paperless)/chilhqdt02 (P1)(New Paperless).vmx-78-b7-f1 user=vpxuser] VMotionInit: hostlog state changed from failure to none

/var/log #


Storage vMotion on ESXi 5.0 without vSphere Server


I have installed an evaluation copy of ESXi 5.0 and want to migrate VMs that are currently on the local datastore to iSCSI datastores. I cannot see the "Migrate" line item when I right-click on a VM running from the local datastore. The evaluation license enables vMotion for 60 days, and I configured a 1 Gbps port on the ESXi host for vMotion. Is vMotion available only when ESXi hosts are managed by the vSphere server? I do not have a vSphere server in my environment and manage the ESXi server directly from the vSphere Client.

 

Thanks.

Limiting the number of concurrent storage vMotions with vCenter 6.0


I have about 100 VMs to migrate to another datastore, but I would like to minimise the performance impact while doing so by limiting the number of concurrent Storage vMotions to one, so that I can queue them all up and have them go one at a time (the default limit is 8).

 

I found the guides below about how to achieve this, but they were written and tested only with ESXi 4, 5 and 5.1:

 

Limiting the number of Storage vMotions - frankdenneman.nl

Limiting the number of concurrent vMotions - frankdenneman.nl

 

The vSphere 6 documentation seems to mirror what's in Frank's last post:

vSphere 6.0 Documentation Center

 

So I tried adding the config key to my vCenter 6 and ESXi 6 environment like so:

  <vpxd>
    <ResourceManager>
      <maxCostPerEsx41Ds>20</maxCostPerEsx41Ds>
    </ResourceManager>
  </vpxd>


Then I restarted the vCenter virtual appliance, but I was still able to Storage vMotion 3 VMs simultaneously to the same datastore...


I assume the key has changed in version 6? Any ideas?

Reservation vs dedicated resources


In regards to using reservations, I have a question that I'm hoping to get your input to.

Some quick background information:

Scenario 1:

A standalone host, without overcommitment on CPU, manually keeping the pCPU:vCPU ratio at or below 1:1.

 

Scenario 2:

An overcommitted DRS cluster (CPU-wise), with some VMs having reservations.

 

As I understand from https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.ResourceAllocationInfo.html, in Scenario 2, if I reserve CPU for a given VM and the VM does not actively use the entire reservation, other VMs can make use of those resources.

 

And now to my question: if the VM with the reservation increases its demand, will the resources be "given back" to the VM instantly, or will there be a tiny delay in accessing them?

The reason for my question is that VoIP applications (such as Microsoft Skype for Business) tend to be very sensitive to CPU contention, and the vendor recommends not overcommitting CPU in clusters containing these VMs.

If I compare the 2 scenarios, will the response time for access to CPU resources be the same, or will Scenario 1 perform better?
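For context, a hedged PowerCLI sketch of inspecting and setting a CPU reservation on a latency-sensitive VM such as a Skype for Business server; "SFB01" and 2000 MHz are placeholder values:

    # Show the current CPU allocation settings, then set a reservation.
    Get-VM -Name "SFB01" | Get-VMResourceConfiguration |
        Select-Object VM, CpuReservationMhz, CpuLimitMhz, NumCpuShares
    Get-VM -Name "SFB01" | Get-VMResourceConfiguration |
        Set-VMResourceConfiguration -CpuReservationMhz 2000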

vMotion Compatibility


If I have two servers from different manufacturers (e.g. Dell and HP) in the same DRS cluster, both with Intel E5600-series processors, are there issues with vMotion compatibility?
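A quick, hedged PowerCLI sketch for comparing CPU details (and the maximum EVC mode each host supports) across the cluster before relying on vMotion compatibility; "Prod" is a placeholder cluster name:

    # The server vendor does not matter for vMotion; CPU family/features and EVC do.
    Get-Cluster -Name "Prod" | Get-VMHost |
        Select-Object Name, Manufacturer, Model, ProcessorType, MaxEVCMode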

vMotion error with one server


We have 4 servers running in a cluster, and we can use vMotion to move VMs except for one server. All ESXi servers are running 6.0.0-3825889. We also have another machine running vCenter Server 6.0.0-3634788 on Windows. Everything is dual-stack IPv4 and IPv6. When we try to move a VM to or from that one server, I get the following error.

 

Network addresses 'XXX.XXX.98.196' and 'XXXX::98:197' are from different address families.

Unable to prepare migration.

 

Can anyone help?
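A hedged PowerCLI sketch for spotting the mismatch: list the vMotion-enabled vmkernel ports on every host with both their IPv4 and IPv6 addresses, to see which host is offering a different address family than the others; "Prod" is a placeholder cluster name:

    # IPv6 addresses are read from the vSphere API object behind each vmkernel adapter.
    foreach ($esx in Get-Cluster -Name "Prod" | Get-VMHost) {
        Get-VMHostNetworkAdapter -VMHost $esx -VMKernel |
            Where-Object { $_.VMotionEnabled } |
            ForEach-Object {
                [pscustomobject]@{
                    Host = $esx.Name
                    Vmk  = $_.Name
                    IPv4 = $_.IP
                    IPv6 = ($_.ExtensionData.Spec.Ip.IpV6Config.IpV6Address.IpAddress -join ', ')
                }
            }
    }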
