Channel: VMware Communities : All Content - Best Practices

Followed steps to download macOS for VMware on Windows; it crashed while the Apple logo was forming, before entering macOS.


I followed the steps to download and run macOS in VMware on Windows, but it crashed while the Apple logo was forming, before it entered macOS.

I need advice or help with the programs involved.


I am getting this error: "Error while opening the virtual machine: A file access error occurred on the host or guest operating system."


My log file:

 

2019-01-31T11:11:16.592-06:00| vmx| I125: Log for VMware Player pid=16188 version=14.1.5 build=build-10950780 option=Release

2019-01-31T11:11:16.592-06:00| vmx| I125: The process is 64-bit.

2019-01-31T11:11:16.592-06:00| vmx| I125: Host codepage=windows-1252 encoding=windows-1252

2019-01-31T11:11:16.592-06:00| vmx| I125: Host is Windows 10, 64-bit  (Build 17134)

2019-01-31T11:11:16.560-06:00| vmx| I125: VTHREAD initialize main thread 1 "vmx" host id 16192

2019-01-31T11:11:16.560-06:00| vmx| I125: LOCALE windows-1252 -> NULL User=409 System=409

2019-01-31T11:11:16.560-06:00| vmx| I125: Msg_SetLocaleEx: HostLocale=windows-1252 UserLocale=NULL

2019-01-31T11:11:16.560-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.560-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware\VMware Workstation): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.560-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.560-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware\VMware Workstation): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.560-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.560-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware\VMware Workstation): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.560-06:00| vmx| I125: DictionaryLoad: Cannot open file "C:\Users\TARUN MATHUR\AppData\Roaming\VMware\config.ini": The system cannot find the file specified.

2019-01-31T11:11:16.560-06:00| vmx| I125: Msg_Reset:

2019-01-31T11:11:16.560-06:00| vmx| I125: [msg.dictionary.load.openFailed] Cannot open file "C:\Users\TARUN MATHUR\AppData\Roaming\VMware\config.ini": The system cannot find the file specified.

2019-01-31T11:11:16.560-06:00| vmx| I125: ----------------------------------------

2019-01-31T11:11:16.560-06:00| vmx| I125: ConfigDB: Failed to load C:\Users\TARUN MATHUR\AppData\Roaming\VMware\config.ini

2019-01-31T11:11:16.560-06:00| vmx| I125: OBJLIB-LIB: Objlib initialized.

2019-01-31T11:11:16.560-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.560-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware\VMware Player): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.560-06:00| vmx| I125: DictionaryLoad: Cannot open file "C:\ProgramData\VMware\VMware Player\config.ini": The system cannot find the file specified.

2019-01-31T11:11:16.560-06:00| vmx| I125: [msg.dictionary.load.openFailed] Cannot open file "C:\ProgramData\VMware\VMware Player\config.ini": The system cannot find the file specified.

2019-01-31T11:11:16.560-06:00| vmx| I125: PREF Optional preferences file not found at C:\ProgramData\VMware\VMware Player\config.ini. Using default values.

2019-01-31T11:11:16.576-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.576-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware\VMware Player): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.576-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.576-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware\VMware Player): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.576-06:00| vmx| I125: DictionaryLoad: Cannot open file "C:\ProgramData\VMware\VMware Player\settings.ini": The system cannot find the file specified.

2019-01-31T11:11:16.576-06:00| vmx| I125: [msg.dictionary.load.openFailed] Cannot open file "C:\ProgramData\VMware\VMware Player\settings.ini": The system cannot find the file specified.

2019-01-31T11:11:16.576-06:00| vmx| I125: PREF Optional preferences file not found at C:\ProgramData\VMware\VMware Player\settings.ini. Using default values.

2019-01-31T11:11:16.576-06:00| vmx| I125: DictionaryLoad: Cannot open file "C:\ProgramData\VMware\VMware Player\config.ini": The system cannot find the file specified.

2019-01-31T11:11:16.576-06:00| vmx| I125: [msg.dictionary.load.openFailed] Cannot open file "C:\ProgramData\VMware\VMware Player\config.ini": The system cannot find the file specified.

2019-01-31T11:11:16.576-06:00| vmx| I125: PREF Optional preferences file not found at C:\ProgramData\VMware\VMware Player\config.ini. Using default values.

2019-01-31T11:11:16.576-06:00| vmx| I125: DictionaryLoad: Cannot open file "C:\Users\TARUN MATHUR\AppData\Roaming\VMware\config.ini": The system cannot find the file specified.

2019-01-31T11:11:16.576-06:00| vmx| I125: [msg.dictionary.load.openFailed] Cannot open file "C:\Users\TARUN MATHUR\AppData\Roaming\VMware\config.ini": The system cannot find the file specified.

2019-01-31T11:11:16.576-06:00| vmx| I125: PREF Optional preferences file not found at C:\Users\TARUN MATHUR\AppData\Roaming\VMware\config.ini. Using default values.

2019-01-31T11:11:16.576-06:00| vmx| I125: UUID: SMBIOS UUID is reported as '44 45 4c 4c 34 00 10 31-80 59 b5 c0 4f 36 48 32'.

2019-01-31T11:11:16.576-06:00| vmx| I125: FILE: FileLockDynaLink: Further process validation tools are: available

2019-01-31T11:11:16.592-06:00| vmx| I125: lib/ssl: OpenSSL using FIPS_drbg for RAND

2019-01-31T11:11:16.592-06:00| vmx| I125: lib/ssl: protocol list tls1.2

2019-01-31T11:11:16.592-06:00| vmx| I125: lib/ssl: protocol list tls1.2 (openssl flags 0x17000000)

2019-01-31T11:11:16.592-06:00| vmx| I125: lib/ssl: cipher list !aNULL:kECDH+AESGCM:ECDH+AESGCM:RSA+AESGCM:kECDH+AES:ECDH+AES:RSA+AES

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\ProgramData\VMware): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1\AppData): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1\AppData\Local): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1\AppData\Local\Temp): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1\AppData\Local\Temp\vmware-TARUN MATHUR): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.592-06:00| vmx| I125: Hostname=DESKTOP-HSNNRAV

2019-01-31T11:11:16.607-06:00| vmx| I125: IP=fe80::896d:aea6:c579:de6d%18

2019-01-31T11:11:16.607-06:00| vmx| I125: IP=fe80::a9a9:e093:7a97:1ecc%16

2019-01-31T11:11:16.607-06:00| vmx| I125: IP=fe80::e546:31fc:d53d:dc61%9

2019-01-31T11:11:16.607-06:00| vmx| I125: IP=2605:6000:151f:c496:74ee:c90c:eb6b:539b

2019-01-31T11:11:16.607-06:00| vmx| I125: IP=2605:6000:151f:c496::8a1

2019-01-31T11:11:16.607-06:00| vmx| I125: IP=2605:6000:151f:c496:e546:31fc:d53d:dc61

2019-01-31T11:11:16.607-06:00| vmx| I125: IP=192.168.239.1

2019-01-31T11:11:16.607-06:00| vmx| I125: IP=192.168.233.1

2019-01-31T11:11:16.607-06:00| vmx| I125: IP=192.168.1.7

2019-01-31T11:11:16.638-06:00| vmx| I125: System uptime 63479299 us

2019-01-31T11:11:16.638-06:00| vmx| I125: Command line: "C:\Program Files (x86)\VMware\VMware Player\x64\vmware-vmx.exe" "-T" "querytoken" "-ssnapshot.numRollingTiers=0" "-sRemoteDisplay.vnc.enabled=FALSE" "-s" "vmx.stdio.keep=TRUE" "-#" "product=4;name=VMware Player;version=14.1.5;buildnumber=10950780;licensename=VMware Player;licenseversion=14.0+;" "-@" "pipe=\\.\pipe\vmxa2d30ce36c862572;msgs=ui" "C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\cloudera-training-capspark-student-rev_cdh5.4.3a.vmx"

2019-01-31T11:11:16.638-06:00| vmx| I125: Msg_SetLocaleEx: HostLocale=windows-1252 UserLocale=NULL

2019-01-31T11:11:16.670-06:00| vmx| I125: WQPoolAllocPoll : pollIx = 1, signalHandle = 696

2019-01-31T11:11:16.670-06:00| vmx| I125: WQPoolAllocPoll : pollIx = 2, signalHandle = 772

2019-01-31T11:11:16.685-06:00| vmx| I125: VigorTransport listening on fd 784

2019-01-31T11:11:16.685-06:00| vmx| I125: Vigor_Init 1

2019-01-31T11:11:16.685-06:00| vmx| I125: Connecting 'ui' to pipe '\\.\pipe\vmxa2d30ce36c862572' with user '(null)'

2019-01-31T11:11:16.685-06:00| vmx| I125: VMXVmdb: Local connection timeout: 60000 ms.

2019-01-31T11:11:16.763-06:00| vmx| I125: VmdbAddConnection: cnxPath=/db/connection/#1/, cnxIx=1

2019-01-31T11:11:16.763-06:00| vmx| I125: Vix: [16192 mainDispatch.c:490]: VMAutomation: Initializing VMAutomation.

2019-01-31T11:11:16.763-06:00| vmx| I125: Vix: [16192 mainDispatch.c:746]: VMAutomationOpenListenerSocket() listening

2019-01-31T11:11:16.763-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=1 additionalError=0

2019-01-31T11:11:16.763-06:00| vmx| I125: Transitioned vmx/execState/val to poweredOff

2019-01-31T11:11:16.763-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=1, newAppState=1873, success=1 additionalError=0

2019-01-31T11:11:16.763-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=2, newAppState=1877, success=1 additionalError=0

2019-01-31T11:11:16.763-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=3, newAppState=1881, success=1 additionalError=0

2019-01-31T11:11:16.779-06:00| vmx| I125: IOPL_VBSRunning: VBS is set to 0

2019-01-31T11:11:16.795-06:00| vmx| I125: IOCTL_VMX86_SET_MEMORY_PARAMS already set

2019-01-31T11:11:16.795-06:00| vmx| I125: FeatureCompat: No EVC masks.

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID vendor: GenuineIntel

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID family: 0x6 model: 0x8e stepping: 0x9

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID codename: Kabylake-U/Y QS

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID name: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000000, 0: 0x00000016 0x756e6547 0x6c65746e 0x49656e69

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000001, 0: 0x000806e9 0x00100800 0x7ffafbbf 0xbfebfbff

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000002, 0: 0x76036301 0x00f0b5ff 0x00000000 0x00c30000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000003, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000004, 0: 0x1c004121 0x01c0003f 0x0000003f 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000004, 1: 0x1c004122 0x01c0003f 0x0000003f 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000004, 2: 0x1c004143 0x00c0003f 0x000003ff 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000004, 3: 0x1c03c163 0x02c0003f 0x00000fff 0x00000006

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000004, 4: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000005, 0: 0x00000040 0x00000040 0x00000003 0x11142120

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000006, 0: 0x000027f7 0x00000002 0x00000009 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000007, 0: 0x00000000 0x029c67af 0x00000000 0x9c000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000008, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000009, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000a, 0: 0x07300404 0x00000000 0x00000000 0x00000603

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000b, 0: 0x00000001 0x00000002 0x00000100 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000b, 1: 0x00000004 0x00000004 0x00000201 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000b, 2: 0x00000000 0x00000000 0x00000002 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000c, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 0: 0x0000001f 0x00000440 0x00000440 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 1: 0x0000000f 0x00000440 0x00000100 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 2: 0x00000100 0x00000240 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 3: 0x00000040 0x000003c0 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 4: 0x00000040 0x00000400 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 5: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 6: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 7: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 8: 0x00000080 0x00000000 0x00000001 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 9: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, a: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, b: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, c: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, d: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, e: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, f: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 10: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 11: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 12: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 13: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 14: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 15: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 16: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 17: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 18: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 19: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 1a: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 1b: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 1c: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 1d: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 1e: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 1f: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 20: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 21: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 22: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 23: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 24: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 25: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 26: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 27: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 28: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 29: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 2a: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 2b: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 2c: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 2d: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 2e: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 2f: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 30: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 31: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 32: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 33: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 34: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 35: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 36: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 37: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 38: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 39: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 3a: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 3b: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 3c: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 3d: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 3e: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000d, 3f: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000e, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000f, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 0000000f, 1: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000010, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000011, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000012, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000012, 1: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000012, 2: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000012, 3: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000013, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000014, 0: 0x00000001 0x0000000f 0x00000007 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000014, 1: 0x02490002 0x003f3fff 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000015, 0: 0x00000002 0x000000e2 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 00000016, 0: 0x00000a8c 0x00000c1c 0x00000064 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 80000000, 0: 0x80000008 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 80000001, 0: 0x00000000 0x00000000 0x00000121 0x2c100800

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 80000002, 0: 0x65746e49 0x2952286c 0x726f4320 0x4d542865

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 80000003, 0: 0x35692029 0x3032372d 0x43205530 0x40205550

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 80000004, 0: 0x352e3220 0x7a484730 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 80000005, 0: 0x00000000 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 80000006, 0: 0x00000000 0x00000000 0x01006040 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 80000007, 0: 0x00000000 0x00000000 0x00000000 0x00000100

2019-01-31T11:11:16.795-06:00| vmx| I125: hostCPUID level 80000008, 0: 0x00003027 0x00000000 0x00000000 0x00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID differences from hostCPUID.

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[1] level 00000001, 0: 0x000806e9 0x01100800 0x7ffafbbf 0xbfebfbff

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[1] level 0000000b, 0: 0x00000001 0x00000002 0x00000100 0x00000001

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[1] level 0000000b, 1: 0x00000004 0x00000004 0x00000201 0x00000001

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[1] level 0000000b, 2: 0x00000000 0x00000000 0x00000002 0x00000001

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[2] level 00000001, 0: 0x000806e9 0x02100800 0x7ffafbbf 0xbfebfbff

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[2] level 0000000b, 0: 0x00000001 0x00000002 0x00000100 0x00000002

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[2] level 0000000b, 1: 0x00000004 0x00000004 0x00000201 0x00000002

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[2] level 0000000b, 2: 0x00000000 0x00000000 0x00000002 0x00000002

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[3] level 00000001, 0: 0x000806e9 0x03100800 0x7ffafbbf 0xbfebfbff

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[3] level 0000000b, 0: 0x00000001 0x00000002 0x00000100 0x00000003

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[3] level 0000000b, 1: 0x00000004 0x00000004 0x00000201 0x00000003

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID[3] level 0000000b, 2: 0x00000000 0x00000000 0x00000002 0x00000003

2019-01-31T11:11:16.795-06:00| vmx| I125: CPUID Maximum Physical Address Bits supported across all CPUs: 39

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR       0x3a =                0x5

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x480 =   0xda040000000004

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x481 =       0x7f00000016

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x482 = 0xfff9fffe0401e172

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x483 =  0x1ffffff00036dff

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x484 =    0x3ffff000011ff

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x485 =         0x7004c1e7

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x486 =         0x80000021

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x487 =         0xffffffff

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x488 =             0x2000

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x489 =           0x3727ff

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x48a =               0x2e

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x48b =   0x5fbcff00000000

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x48c =      0xf0106734141

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x48d =       0x7f00000016

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x48e = 0xfff9fffe04006172

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x48f =  0x1ffffff00036dfb

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x490 =    0x3ffff000011fb

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR      0x491 =                0x1

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR 0xc0010114 =                  0

2019-01-31T11:11:16.795-06:00| vmx| I125: Common: MSR       0xce =    0x4043df1011b00

2019-01-31T11:11:16.795-06:00| vmx| I125: VMMon_GetkHzEstimate: Calculated 2712004 kHz

2019-01-31T11:11:16.810-06:00| vmx| I125: HOSTINFO: Host supports constant rate TSC.

2019-01-31T11:11:16.810-06:00| vmx| I125: TSC kHz estimates: vmmon 2712004, remembered 0, osReported 2701000. Using 2712004 kHz.

2019-01-31T11:11:16.810-06:00| vmx| I125: TSC first measured delta 223

2019-01-31T11:11:16.810-06:00| vmx| I125: TSC min delta 160

2019-01-31T11:11:16.810-06:00| vmx| I125: PTSC: RefClockToPTSC 0 @ 2648441Hz -> 0 @ 2712004000Hz

2019-01-31T11:11:16.810-06:00| vmx| I125: PTSC: RefClockToPTSC ((x * 2147483977) >> 21) + -143302651330

2019-01-31T11:11:16.810-06:00| vmx| I125: PTSC: tscOffset -171317645046

2019-01-31T11:11:16.810-06:00| vmx| I125: PTSC: using TSC

2019-01-31T11:11:16.810-06:00| vmx| I125: PTSC: hardware TSCs are synchronized.

2019-01-31T11:11:16.810-06:00| vmx| I125: PTSC: hardware TSCs may have been adjusted by the host.

2019-01-31T11:11:16.810-06:00| vmx| I125: PTSC: current PTSC=29732923954

2019-01-31T11:11:16.810-06:00| vmx| I125: WQPoolAllocPoll : pollIx = 3, signalHandle = 948

2019-01-31T11:11:16.873-06:00| vmx| I125: ConfigCheck: No rules file found. Checks are disabled.

2019-01-31T11:11:16.873-06:00| vmx| I125: changing directory to C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\.

2019-01-31T11:11:16.873-06:00| vmx| I125: Config file: C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\cloudera-training-capspark-student-rev_cdh5.4.3a.vmx

2019-01-31T11:11:16.873-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=1, newAppState=1873, success=1 additionalError=0

2019-01-31T11:11:16.873-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=2, newAppState=1878, success=1 additionalError=0

2019-01-31T11:11:16.873-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.873-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.873-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1\AppData): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.873-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1\AppData\Local): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.873-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1\AppData\Local\Temp): Cannot create a file when that file already exists (183)

2019-01-31T11:11:16.873-06:00| vmx| I125: FILE: FileCreateDirectoryRetry: Non-retriable error encountered (C:\Users\TARUNM~1\AppData\Local\Temp\vmware-TARUN MATHUR): Cannot create a file when that file already exists (183)

2019-01-31T11:11:19.130-06:00| vmx| I125: FILE: FileDeletionRetry: Non-retriable error encountered (C:\Users\TARUN MATHUR\AppData\Local\Temp\vmware-TARUN MATHUR\vmware-vmx-16188.log): The process cannot access the file because it is being used by another process (32)

2019-01-31T11:11:19.161-06:00| vmx| A100: ConfigDB: Setting ide1:0.fileName = "auto detect"

2019-01-31T11:11:19.161-06:00| vmx| I125: VMXVmdbCbVmVmxExecState: Exec state change requested to state poweredOn without reset, soft, softOptionTimeout: 0.

2019-01-31T11:11:19.161-06:00| vmx| I125: Tools: sending 'OS_PowerOn' (state = 3) state change request

2019-01-31T11:11:19.161-06:00| vmx| I125: Tools: Delaying state change request to state 3.

2019-01-31T11:11:19.161-06:00| vmx| W115: PowerOn

2019-01-31T11:11:19.161-06:00| vmx| I125: VMX_PowerOn: VMX build 10950780, UI build 10950780

2019-01-31T11:11:19.161-06:00| vmx| I125: HostWin32: WIN32 NUMA node 0, CPU mask 0x000000000000000f

2019-01-31T11:11:19.192-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1871, success=1 additionalError=0

2019-01-31T11:11:19.192-06:00| vmx| I125: HOST Windows version 10.0, build 17134, platform 2, ""

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- GLOBAL SETTINGS C:\ProgramData\VMware\VMware Player\settings.ini

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- NON PERSISTENT (null)

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- USER PREFERENCES C:\Users\TARUN MATHUR\AppData\Roaming\VMware\preferences.ini

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT pref.keyboardAndMouse.vmHotKey.enabled = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT pref.keyboardAndMouse.vmHotKey.count = "0"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT pref.vmplayer.firstRunDismissedVersion = "14.1.2"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT   pref.lastUpdateCheckSec = "1548903511"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT pref.updatesVersionIgnore.numItems = "1"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT pref.updatesVersionIgnore0.key = "paid"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT pref.updatesVersionIgnore0.value = "b3daec82-64a5-428b-aa29-476fd69fe5e7"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      pref.mruVM0.filename = "C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\cloudera-training-capspark-student-rev_cdh5.4.3a.vmx"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT   pref.mruVM0.displayName = "cloudera-training-capspark-student-rev_cdh5.4.3a"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         pref.mruVM0.index = "0"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT pref.vmplayer.deviceBarToplevel = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- USER DEFAULTS C:\Users\TARUN MATHUR\AppData\Roaming\VMware\config.ini

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- HOST DEFAULTS C:\ProgramData\VMware\VMware Player\config.ini

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- SITE DEFAULTS C:\ProgramData\VMware\VMware Player\config.ini

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- NONPERSISTENT

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  snapshot.numRollingTiers = "0"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT RemoteDisplay.vnc.enabled = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT            vmx.stdio.keep = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT             gui.available = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- COMMAND LINE

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  snapshot.numRollingTiers = "0"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT RemoteDisplay.vnc.enabled = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT            vmx.stdio.keep = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT             gui.available = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- RECORDING

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  snapshot.numRollingTiers = "0"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT RemoteDisplay.vnc.enabled = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT            vmx.stdio.keep = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT             gui.available = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- CONFIGURATION C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\cloudera-training-capspark-student-rev_cdh5.4.3a.vmx

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT                annotation = "Cloudera Academic Program Spark Course (CDH 5.4.3) Student VM, Revision A"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT            bios.bootorder = "hdd,CDROM"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        checkpoint.vmstate = ""

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT             cleanshutdown = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT            config.version = "8"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      cpuid.corespersocket = "1"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT               displayname = "cloudera-training-capspark-student-rev_cdh5.4.3a"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        ehci.pcislotnumber = "-1"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT              ehci.present = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT     ethernet0.addresstype = "generated"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         ethernet0.bsdname = "en0"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  ethernet0.connectiontype = "nat"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT     ethernet0.displayname = "Ethernet"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT ethernet0.linkstatepropagation.enable = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT   ethernet0.pcislotnumber = "33"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         ethernet0.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      ethernet0.virtualdev = "e1000"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT   ethernet0.wakeonpcktrcv = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        extendedconfigfile = "cloudera-training-capspark-student-rev_cdh5.4.3a.vmxf"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT           floppy0.present = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT                   guestos = "centos-64"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT   gui.fullscreenatpoweron = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT     gui.viewmodeatpoweron = "windowed"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        hgfs.linkrootshare = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         hgfs.maprootshare = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         ide1:0.autodetect = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         ide1:0.devicetype = "cdrom-raw"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT           ide1:0.filename = "auto detect"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT            ide1:0.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT     ide1:0.startconnected = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT isolation.tools.hgfs.disable = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT                   memsize = "3072"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT    monitor.phys_bits_used = "40"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT            msg.autoanswer = "true"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT                  numvcpus = "1"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT                     nvram = "cloudera-training-capspark-student-rev_cdh5.4.3a.nvram"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  pcibridge0.pcislotnumber = "17"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        pcibridge0.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      pcibridge4.functions = "8"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  pcibridge4.pcislotnumber = "21"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        pcibridge4.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT     pcibridge4.virtualdev = "pcieRootPort"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      pcibridge5.functions = "8"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  pcibridge5.pcislotnumber = "22"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        pcibridge5.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT     pcibridge5.virtualdev = "pcieRootPort"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      pcibridge6.functions = "8"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  pcibridge6.pcislotnumber = "23"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        pcibridge6.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT     pcibridge6.virtualdev = "pcieRootPort"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      pcibridge7.functions = "8"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  pcibridge7.pcislotnumber = "24"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        pcibridge7.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT     pcibridge7.virtualdev = "pcieRootPort"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT          policy.vm.mvmtid = ""

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT        powertype.poweroff = "soft"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         powertype.poweron = "soft"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT           powertype.reset = "soft"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         powertype.suspend = "soft"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT   proxyapps.publishtohost = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT remotedisplay.vnc.enabled = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT    remotedisplay.vnc.port = "5987"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT           replay.filename = ""

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT          replay.supported = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT       scsi0.pcislotnumber = "16"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT             scsi0.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT          scsi0.virtualdev = "lsilogic"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT          scsi0:0.filename = "centos-6.6-x86_64-base-disk-cl2.vmdk"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT           scsi0:0.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT              scsi0:0.redo = ""

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT              softpoweroff = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT          sound.buffertime = "200"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT             sound.present = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      sound.startconnected = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT       tools.remindinstall = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT            tools.synctime = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      tools.upgrade.policy = "manual"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         usb.pcislotnumber = "-1"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT               usb.present = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT               uuid.action = "create"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT                 uuid.bios = "56 4d 79 d5 28 7a 4c 17-c2 3d 38 0b fe 39 b6 7e"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT             uuid.location = "56 4d 79 d5 28 7a 4c 17-c2 3d 38 0b fe 39 b6 7e"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT                   vc.uuid = ""

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT virtualhw.productcompatibility = "hosted"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT         virtualhw.version = "8"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT                  vmci0.id = "1861462629"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT       vmci0.pcislotnumber = "35"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT             vmci0.present = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT  vmotion.checkpointfbsize = "33554432"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT           migrate.hostlog = "./cloudera-training-capspark-student-rev_cdh5.4.3a-c3d101b6.hlog"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT ethernet0.generatedAddress = "00:0c:29:39:b6:7e"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT ethernet0.generatedAddressOffset = "0"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT      numa.autosize.cookie = "10001"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT numa.autosize.vcpu.maxPerVirtualNode = "1"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT          unity.wasCapable = "TRUE"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- USER DEFAULTS C:\Users\TARUN MATHUR\AppData\Roaming\VMware\config.ini

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- HOST DEFAULTS C:\ProgramData\VMware\VMware Workstation\config.ini

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.simplifiedUI = "no"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.autoSoftwareUpdateEnabled = "yes"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.autoSoftwareUpdateEnabled.epoch = "19567"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.componentDownloadEnabled = "yes"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.dataCollectionEnabled = "yes"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.dataCollectionEnabled.epoch = "19567"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- SITE DEFAULTS C:\ProgramData\VMware\VMware Workstation\config.ini

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.simplifiedUI = "no"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.autoSoftwareUpdateEnabled = "yes"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.autoSoftwareUpdateEnabled.epoch = "19567"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.componentDownloadEnabled = "yes"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.dataCollectionEnabled = "yes"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT installerDefaults.dataCollectionEnabled.epoch = "19567"

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT --- GLOBAL SETTINGS C:\ProgramData\VMware\VMware Workstation\settings.ini

2019-01-31T11:11:19.192-06:00| vmx| I125: DICT          printers.enabled = "FALSE"

2019-01-31T11:11:19.192-06:00| vmx| I125: ToolsISO: open of C:\Program Files (x86)\VMware\VMware Player\isoimages_manifest.txt.sig failed: Could not find the file

2019-01-31T11:11:19.192-06:00| vmx| I125: ToolsISO: Unable to read signature file 'C:\Program Files (x86)\VMware\VMware Player\isoimages_manifest.txt.sig', ignoring.

2019-01-31T11:11:19.192-06:00| vmx| I125: ToolsISO: Selected Tools ISO 'linux.iso' for 'centos-64' guest.

2019-01-31T11:11:19.208-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=1, newAppState=1873, success=1 additionalError=0

2019-01-31T11:11:19.224-06:00| vmx| I125: Host VT-x Capabilities:

2019-01-31T11:11:19.224-06:00| vmx| I125: Basic VMX Information (0x00da040000000004)

2019-01-31T11:11:19.224-06:00| vmx| I125:   VMCS revision ID                   4

2019-01-31T11:11:19.224-06:00| vmx| I125:   VMCS region length              1024

2019-01-31T11:11:19.224-06:00| vmx| I125:   VMX physical-address width   natural

2019-01-31T11:11:19.224-06:00| vmx| I125:   SMM dual-monitor mode            yes

2019-01-31T11:11:19.224-06:00| vmx| I125:   Advanced INS/OUTS info           yes

2019-01-31T11:11:19.224-06:00| vmx| I125:   True VMX MSRs                    yes

2019-01-31T11:11:19.224-06:00| vmx| I125:   VMCS memory type                  WB

2019-01-31T11:11:19.224-06:00| vmx| I125: True Pin-Based VM-Execution Controls (0x0000007f00000016)

2019-01-31T11:11:19.224-06:00| vmx| I125:   External-interrupt exiting               {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   NMI exiting                              {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Virtual NMIs                             {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Activate VMX-preemption timer            {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Process posted interrupts                { 0 }

2019-01-31T11:11:19.224-06:00| vmx| I125: True Primary Processor-Based VM-Execution Controls (0xfff9fffe04006172)

2019-01-31T11:11:19.224-06:00| vmx| I125:   Interrupt-window exiting                 {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Use TSC offsetting                       {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   HLT exiting                              {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   INVLPG exiting                           {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   MWAIT exiting                            {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   RDPMC exiting                            {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   RDTSC exiting                            {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   CR3-load exiting                         {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   CR3-store exiting                        {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   CR8-load exiting                         {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   CR8-store exiting                        {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Use TPR shadow                           {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   NMI-window exiting                       {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   MOV-DR exiting                           {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Unconditional I/O exiting                {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Use I/O bitmaps                          {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Monitor trap flag                        {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Use MSR bitmaps                          {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   MONITOR exiting                          {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   PAUSE exiting                            {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Activate secondary controls              {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125: Secondary Processor-Based VM-Execution Controls (0x005fbcff00000000)

2019-01-31T11:11:19.224-06:00| vmx| I125:   Virtualize APIC accesses                 {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Enable EPT                               {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Descriptor-table exiting                 {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Enable RDTSCP                            {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Virtualize x2APIC mode                   {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Enable VPID                              {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   WBINVD exiting                           {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Unrestricted guest                       {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   APIC-register virtualization             { 0 }

2019-01-31T11:11:19.224-06:00| vmx| I125:   Virtual-interrupt delivery               { 0 }

2019-01-31T11:11:19.224-06:00| vmx| I125:   PAUSE-loop exiting                       {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   RDRAND exiting                           {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Enable INVPCID                           {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Enable VM Functions                      {0,1}

2019-01-31T11:11:19.224-06:00| vmx| I125:   Use VMCS shadowing                       { 0 }

2019-01-31T11:11:19.239-06:00| vmx| I125:   Enable ENCLS/ENCLU                       {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   RDSEED exiting                           {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Enable PML                               {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   EPT-violation #VE                        {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Enable XSAVES/XRSTORS                    {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Mode-based execute control for EPT       {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Use TSC scaling                          { 0 }

2019-01-31T11:11:19.239-06:00| vmx| I125: True VM-Exit Controls (0x01ffffff00036dfb)

2019-01-31T11:11:19.239-06:00| vmx| I125:   Save debug controls                      {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Host address-space size                  {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Load IA32_PERF_GLOBAL_CTRL               {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Acknowledge interrupt on exit            {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Save IA32_PAT                            {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Load IA32_PAT                            {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Save IA32_EFER                           {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Load IA32_EFER                           {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Save VMX-preemption timer                {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125: True VM-Entry Controls (0x0003ffff000011fb)

2019-01-31T11:11:19.239-06:00| vmx| I125:   Load debug controls                      {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   IA-32e mode guest                        {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Entry to SMM                             {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Deactivate dual-monitor mode             {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Load IA32_PERF_GLOBAL_CTRL               {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Load IA32_PAT                            {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125:   Load IA32_EFER                           {0,1}

2019-01-31T11:11:19.239-06:00| vmx| I125: VPID and EPT Capabilities (0x00000f0106734141)

2019-01-31T11:11:19.239-06:00| vmx| I125:   R=0/W=0/X=1                      yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Page-walk length 3               yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   EPT memory type WB               yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   2MB super-page                   yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   1GB super-page                   yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   INVEPT support                   yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Access & Dirty Bits              yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Advanced VM exit information for EPT violations   yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Type 1 INVEPT                    yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Type 2 INVEPT                    yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   INVVPID support                  yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Type 0 INVVPID                   yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Type 1 INVVPID                   yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Type 2 INVVPID                   yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Type 3 INVVPID                   yes

2019-01-31T11:11:19.239-06:00| vmx| I125: Miscellaneous VMX Data (0x000000007004c1e7)

2019-01-31T11:11:19.239-06:00| vmx| I125:   TSC to preemption timer ratio      7

2019-01-31T11:11:19.239-06:00| vmx| I125:   VM-Exit saves EFER.LMA           yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Activity State HLT               yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Activity State shutdown          yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Activity State wait-for-SIPI     yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   CR3 targets supported              4

2019-01-31T11:11:19.239-06:00| vmx| I125:   Maximum MSR list size            512

2019-01-31T11:11:19.239-06:00| vmx| I125:   Allow all VMWRITEs               yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   Allow zero instruction length    yes

2019-01-31T11:11:19.239-06:00| vmx| I125:   MSEG revision ID                   0

2019-01-31T11:11:19.239-06:00| vmx| I125:   Processor trace in VMX           yes

2019-01-31T11:11:19.239-06:00| vmx| I125: VMX-Fixed Bits in CR0 (0x0000000080000021/0x00000000ffffffff)

2019-01-31T11:11:19.239-06:00| vmx| I125:   Fixed to 0        0xffffffff00000000

2019-01-31T11:11:19.239-06:00| vmx| I125:   Fixed to 1        0x0000000080000021

2019-01-31T11:11:19.239-06:00| vmx| I125:   Variable          0x000000007fffffde

2019-01-31T11:11:19.239-06:00| vmx| I125: VMX-Fixed Bits in CR4 (0x0000000000002000/0x00000000003727ff)

2019-01-31T11:11:19.239-06:00| vmx| I125:   Fixed to 0        0xffffffffffc8d800

2019-01-31T11:11:19.239-06:00| vmx| I125:   Fixed to 1        0x0000000000002000

2019-01-31T11:11:19.239-06:00| vmx| I125:   Variable          0x00000000003707ff

2019-01-31T11:11:19.239-06:00| vmx| I125: VMCS Enumeration (0x000000000000002e)

2019-01-31T11:11:19.239-06:00| vmx| I125:   Highest index                   0x17

2019-01-31T11:11:19.239-06:00| vmx| I125: VM Functions (0x0000000000000001)

2019-01-31T11:11:19.239-06:00| vmx| I125:   Function  0 (EPTP-switching) supported.

2019-01-31T11:11:19.239-06:00| vmx| I125: hostCpuFeatures = 0x34fd

2019-01-31T11:11:19.239-06:00| vmx| I125: hostNumGenPerfCounters = 4

2019-01-31T11:11:19.239-06:00| vmx| I125: hostNumFixedPerfCounters = 3

2019-01-31T11:11:19.239-06:00| vmx| I125: hostPerfCtrArch = 4

2019-01-31T11:11:19.243-06:00| vmx| I125: OvhdMem_PowerOn: initial admission: paged   205914 nonpaged    35939 anonymous     4357

2019-01-31T11:11:19.243-06:00| vmx| I125: VMMEM: Initial Reservation: 961MB (MainMem=3072MB)

2019-01-31T11:11:19.243-06:00| vmx| I125: MemSched_PowerOn: balloon minGuestSize 104857 (80% of min required size 131072)

2019-01-31T11:11:19.243-06:00| vmx| I125: MemSched: reserved mem (in MB) min 128 max 6187 recommended 6187

2019-01-31T11:11:19.243-06:00| vmx| I125: MemSched: pg 205914 np 35939 anon 4357 mem 786432

2019-01-31T11:11:19.570-06:00| vmx| I125: MemSched: numvm 1 locked pages: num 0 max 1575680

2019-01-31T11:11:19.570-06:00| vmx| I125: MemSched: locked Page Limit: host 1784163 config 1583872 dynam 1660769

2019-01-31T11:11:19.570-06:00| vmx| I125: MemSched: minmempct 50 minalloc 0 admitted 1

2019-01-31T11:11:19.570-06:00| vmx| I125: LICENSE: Running unlicensed VMX (VMware Player)

2019-01-31T11:11:19.570-06:00| WinNotifyThread| I125: VTHREAD start thread 2 "WinNotifyThread" host id 14592

2019-01-31T11:11:19.570-06:00| WinNotifyThread| I125: WinNotify thread is alive

2019-01-31T11:11:19.570-06:00| vthread-3| I125: VTHREAD start thread 3 "vthread-3" host id 14624

2019-01-31T11:11:19.570-06:00| vmx| I125: PolicyVMXFindPolicyKey: policy file does not exist.

2019-01-31T11:11:19.570-06:00| vmx| I125: PolicyVMXFindPolicyKey: policy file does not exist.

2019-01-31T11:11:19.570-06:00| vmx| I125: ToolsISO: open of C:\Program Files (x86)\VMware\VMware Player\isoimages_manifest.txt.sig failed: Could not find the file

2019-01-31T11:11:19.570-06:00| vmx| I125: ToolsISO: Unable to read signature file 'C:\Program Files (x86)\VMware\VMware Player\isoimages_manifest.txt.sig', ignoring.

2019-01-31T11:11:19.570-06:00| vmx| I125: ToolsISO: Selected Tools ISO 'linux.iso' for 'centos-64' guest.

2019-01-31T11:11:19.570-06:00| vmx| I125: Host IPI vectors: 0x2f 0. Monitor IPI vector: 0, HV IPI vector: 0

2019-01-31T11:11:19.570-06:00| vmx| I125: Monitor_PowerOn: HostedVSMP skew tracking is disabled

2019-01-31T11:11:19.570-06:00| vmx| I125: Monitor64_PowerOn()

2019-01-31T11:11:19.570-06:00| vmx| I125: Loaded crosspage: .crosspage.  Size = 4096.

2019-01-31T11:11:19.570-06:00| vmx| I125: vmm64-modules: [vmm.vmm64, mmu-hwmmu.vmm64, vprobe-none.vmm64, hv-vt.vmm64, gphys-ept.vmm64, callstack-none.vmm64, vmce-none.vmm64, vvtd-none.vmm64, gi-none.vmm64, e1000Shared=0x0, {UseUnwind}=0x0, numVCPUsAsAddr=0x1, {SharedAreaReservations}=0xec0, {rodataSize}=0x1e9a2, {textAddr}=0xfffffffffc000000, {textSize}=0x7ee83, <MonSrcFile>]

2019-01-31T11:11:19.570-06:00| vmx| I125: vmm64-vcpus:   1

2019-01-31T11:11:19.601-06:00| vmx| I125: KHZEstimate 2712004

2019-01-31T11:11:19.601-06:00| vmx| I125: MHZEstimate 2712

2019-01-31T11:11:19.601-06:00| vmx| I125: NumVCPUs 1

2019-01-31T11:11:19.601-06:00| vmx| I125: MonTimer: host does not have high resolution timers.

2019-01-31T11:11:19.601-06:00| vmx| I125: UUID: location-UUID is 56 4d 0f 94 48 8c ec 9e-57 d5 a9 19 44 32 40 d0

2019-01-31T11:11:19.601-06:00| vmx| I125: UUID: location-UUID is 56 4d 79 d5 28 7a 4c 17-c2 3d 38 0b fe 39 b6 7e

2019-01-31T11:11:19.601-06:00| vmx| I125: UUID: location-UUID is 56 4d 79 d5 28 7a 4c 17-c2 3d 38 0b fe 39 b6 7e

2019-01-31T11:11:19.601-06:00| vmx| I125: AIOGNRC: numThreads=18 ide=0, scsi=1, passthru=1

2019-01-31T11:11:19.601-06:00| vmx| I125: WORKER: Creating new group with numThreads=18 (18)

2019-01-31T11:11:19.632-06:00| vmx| I125: WORKER: Creating new group with numThreads=1 (19)

2019-01-31T11:11:19.632-06:00| vmx| I125: MainMem: CPT Host WZ=0 PF=3072 D=0

2019-01-31T11:11:19.632-06:00| vmx| I125: MainMem: CPT PLS=1 PLR=1 BS=1 BlkP=32 Mult=4 W=50

2019-01-31T11:11:19.632-06:00| vmx| I125: UUID: location-UUID is 56 4d 79 d5 28 7a 4c 17-c2 3d 38 0b fe 39 b6 7e

2019-01-31T11:11:19.632-06:00| vmx| I125: MainMem: Opened paging file, 'C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\564d79d5-287a-4c17-c23d-380bfe39b67e.vmem'.

2019-01-31T11:11:19.632-06:00| vmx| I125: MStat: Creating Stat vm.uptime

2019-01-31T11:11:19.632-06:00| vmx| I125: MStat: Creating Stat vm.suspendTime

2019-01-31T11:11:19.632-06:00| vmx| I125: MStat: Creating Stat vm.powerOnTimeStamp

2019-01-31T11:11:19.632-06:00| vmx| I125: VMXAIOMGR: Using: simple=Compl

2019-01-31T11:11:19.648-06:00| vmx| I125: WORKER: Creating new group with numThreads=1 (20)

2019-01-31T11:11:19.648-06:00| aioCompletion| I125: VTHREAD start thread 4 "aioCompletion" host id 9176

2019-01-31T11:11:19.663-06:00| vmx| I125: WORKER: Creating new group with numThreads=16 (36)

2019-01-31T11:11:19.663-06:00| vmx| I125: MigrateBusMemPrealloc: BusMem preallocation begins.

2019-01-31T11:11:19.663-06:00| vmx| I125: MigrateBusMemPrealloc: BusMem preallocation completes.

2019-01-31T11:11:19.663-06:00| vmx| I125: TimeTracker host to guest rate conversion 37496726233 @ 2712004000Hz -> 0 @ 2712004000Hz

2019-01-31T11:11:19.663-06:00| vmx| I125: TimeTracker host to guest rate conversion ((x * 2147483648) >> 31) + -37496726233

2019-01-31T11:11:19.663-06:00| vmx| I125: Disabling TSC scaling since host does not support it.

2019-01-31T11:11:19.663-06:00| vmx| I125: TSC offsetting enabled.

2019-01-31T11:11:19.663-06:00| vmx| I125: timeTracker.globalProgressMaxAllowanceMS: 2000

2019-01-31T11:11:19.663-06:00| vmx| I125: timeTracker.globalProgressToAllowanceNS: 1000

2019-01-31T11:11:19.663-06:00| vmx| A100: ConfigDB: Setting scsi0:0.redo = ""

2019-01-31T11:11:19.663-06:00| vmx| I125: DISK: OPEN scsi0:0 'C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk' persistent R[]

2019-01-31T11:11:19.695-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150633 @19286144 is pointed to by multiple GTEs

2019-01-31T11:11:19.695-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150635 @19286400 is pointed to by multiple GTEs

2019-01-31T11:11:19.695-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150636 @19286528 is pointed to by multiple GTEs

2019-01-31T11:11:19.695-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150637 @19286656 is pointed to by multiple GTEs

2019-01-31T11:11:19.710-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150634 @19286272 is pointed to by multiple GTEs

2019-01-31T11:11:19.726-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[5][111] = 19286144 / 19286144

2019-01-31T11:11:19.726-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[5][192] = 19286400 / 19286400

2019-01-31T11:11:19.726-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[5][193] = 19286528 / 19286528

2019-01-31T11:11:19.726-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[5][194] = 19286656 / 19286656

2019-01-31T11:11:19.726-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][209] = 19286144 / 19286144

2019-01-31T11:11:19.726-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][210] = 19286272 / 19286272

2019-01-31T11:11:19.726-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][211] = 19286400 / 19286400

2019-01-31T11:11:19.726-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][212] = 19286528 / 19286528

2019-01-31T11:11:19.726-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][213] = 19286656 / 19286656

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][367] = 19860864 / 19860864

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][367] = 0

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][368] = 19860992 / 19860992

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][368] = 0

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][369] = 19861120 / 19861120

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][369] = 0

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][370] = 19861248 / 19861248

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][370] = 0

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][371] = 19861376 / 19861376

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][371] = 0

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][372] = 19861504 / 19861504

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][372] = 0

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][373] = 19861632 / 19861632

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][373] = 0

2019-01-31T11:11:19.741-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[328][209] = 19286272 / 19286272

2019-01-31T11:11:19.757-06:00| vmx| I125: DISKLIB-SPARSE: "C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk" : failed to open (14): Disk needs repair.

2019-01-31T11:11:19.757-06:00| vmx| I125: DISKLIB-LINK  : "C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk" : failed to open (The specified virtual disk needs repair). 

2019-01-31T11:11:19.757-06:00| vmx| I125: DISKLIB-CHAIN : "C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk" : failed to open (The specified virtual disk needs repair).

2019-01-31T11:11:19.757-06:00| vmx| I125: DISKLIB-LIB   : Failed to open 'C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk' with flags 0xa The specified virtual disk needs repair (14).

2019-01-31T11:11:19.773-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150633 @19286144 is pointed to by multiple GTEs

2019-01-31T11:11:19.773-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150635 @19286400 is pointed to by multiple GTEs

2019-01-31T11:11:19.773-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150636 @19286528 is pointed to by multiple GTEs

2019-01-31T11:11:19.773-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150637 @19286656 is pointed to by multiple GTEs

2019-01-31T11:11:19.788-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Grain #150634 @19286272 is pointed to by multiple GTEs

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[5][111] = 19286144 / 19286144

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[5][192] = 19286400 / 19286400

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[5][193] = 19286528 / 19286528

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[5][194] = 19286656 / 19286656

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][209] = 19286144 / 19286144

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][210] = 19286272 / 19286272

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][211] = 19286400 / 19286400

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][212] = 19286528 / 19286528

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[18][213] = 19286656 / 19286656

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][367] = 19860864 / 19860864

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][367] = 0

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][368] = 19860992 / 19860992

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][368] = 0

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][369] = 19861120 / 19861120

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][369] = 0

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][370] = 19861248 / 19861248

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][370] = 0

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][371] = 19861376 / 19861376

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][371] = 0

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][372] = 19861504 / 19861504

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][372] = 0

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (EE): GT[46][373] = 19861632 / 19861632

2019-01-31T11:11:19.804-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] Resolving      GT[46][373] = 0

2019-01-31T11:11:19.830-06:00| vmx| I125: DISKLIB-SPARSECHK: [C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk] GT Error (DD): GT[328][209] = 19286272 / 19286272

2019-01-31T11:11:19.842-06:00| vmx| I125: This is bug 1683.

2019-01-31T11:11:19.842-06:00| vmx| I125: DISK: Opening disks took 166 ms.

2019-01-31T11:11:19.842-06:00| vmx| I125: Module 'Disk' power on failed.

2019-01-31T11:11:19.842-06:00| vmx| I125: VMX_PowerOn: ModuleTable_PowerOn = 0

2019-01-31T11:11:19.850-06:00| vmx| I125: AIOWIN32C: asyncOps=0 syncOps=166 bufSize=0Kb fixedOps=0 sgOps=0 sgOn=0

2019-01-31T11:11:19.850-06:00| aioCompletion| I125: AIO thread processed 0 completions

2019-01-31T11:11:19.854-06:00| vmx| I125: Vix: [16192 mainDispatch.c:1175]: VMAutomationPowerOff: Powering off.

2019-01-31T11:11:19.854-06:00| vmx| I125: Policy_SavePolicyFile: invalid arguments to function.

2019-01-31T11:11:19.854-06:00| vmx| I125: PolicyVMX_Exit: Could not write out policies: 15.

2019-01-31T11:11:19.854-06:00| vmx| I125: WORKER: asyncOps=1 maxActiveOps=1 maxPending=0 maxCompleted=0

2019-01-31T11:11:19.854-06:00| WinNotifyThread| I125: WinNotify thread exiting

2019-01-31T11:11:19.870-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=1, newAppState=1873, success=1 additionalError=0

2019-01-31T11:11:19.870-06:00| vmx| I125: Msg_Post: Error

2019-01-31T11:11:19.870-06:00| vmx| I125: [msg.disk.unrepairable] The disk 'C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk' has one or more internal errors that cannot be fixed. Restore from a backup copy of this disk.

2019-01-31T11:11:19.870-06:00| vmx| I125: [msg.disklib.NEEDSREPAIR] The specified virtual disk needs repair

2019-01-31T11:11:19.870-06:00| vmx| I125: [msg.disk.noBackEnd] Cannot open the disk 'C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk' or one of the snapshot disks it depends on.

2019-01-31T11:11:19.870-06:00| vmx| I125: [msg.moduletable.powerOnFailed] Module 'Disk' power on failed.

2019-01-31T11:11:19.870-06:00| vmx| I125: [msg.vmx.poweron.failed] Failed to start the virtual machine.

2019-01-31T11:11:19.870-06:00| vmx| I125: ----------------------------------------

2019-01-31T11:11:19.870-06:00| vmx| I125: MsgIsAnswered: Using builtin default 'OK' as the answer for 'msg.vmx.poweron.failed'

2019-01-31T11:11:19.870-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=1 additionalError=0

2019-01-31T11:11:19.870-06:00| vmx| I125: Transitioned vmx/execState/val to poweredOff

2019-01-31T11:11:19.870-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=0 additionalError=0

2019-01-31T11:11:19.870-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4273]: Error VIX_E_FAIL in VMAutomation_ReportPowerOpFinished(): Unknown error

2019-01-31T11:11:19.870-06:00| vmx| I125: Vix: [16192 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=1 additionalError=0

2019-01-31T11:11:19.870-06:00| vmx| I125: Transitioned vmx/execState/val to poweredOff

2019-01-31T11:11:19.870-06:00| vmx| I125: WQPoolFreePoll : pollIx = 3, signalHandle = 948

2019-01-31T11:11:19.870-06:00| vmx| I125: Vix: [16192 mainDispatch.c:834]: VMAutomation_LateShutdown()

2019-01-31T11:11:19.870-06:00| vmx| I125: Vix: [16192 mainDispatch.c:783]: VMAutomationCloseListenerSocket. Closing listener socket.

2019-01-31T11:11:19.870-06:00| vmx| I125: Flushing VMX VMDB connections

2019-01-31T11:11:19.870-06:00| vmx| I125: VmdbDbRemoveCnx: Removing Cnx from Db for '/db/connection/#1/'

2019-01-31T11:11:19.870-06:00| vmx| I125: VmdbCnxDisconnect: Disconnect: closed pipe for pub cnx '/db/connection/#1/' (0)

2019-01-31T11:11:19.870-06:00| vmx| I125: VigorTransport_ServerDestroy: server destroyed.

2019-01-31T11:11:19.870-06:00| vmx| I125: WQPoolFreePoll : pollIx = 2, signalHandle = 772

2019-01-31T11:11:19.870-06:00| vmx| I125: WQPoolFreePoll : pollIx = 1, signalHandle = 696

2019-01-31T11:11:19.870-06:00| vmx| I125: VMX exit (0).

2019-01-31T11:11:19.870-06:00| vmx| I125: OBJLIB-LIB: ObjLib cleanup done.

2019-01-31T11:11:19.870-06:00| vmx| I125: AIOMGR-S : stat o=2 r=12 w=0 i=0 br=111616 bw=0
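For reference, a minimal sketch of a repair attempt with vmware-vdiskmanager, run with the VM powered off (the tool ships with Workstation Pro; a Player-only install may not include it, in which case the only remaining option is the one the log itself suggests, restoring the .vmdk from a backup):

"C:\Program Files (x86)\VMware\VMware Player\vmware-vdiskmanager.exe" -R "C:\Users\TARUN MATHUR\Desktop\Cloudera-Training-CAPSpark-Student-VM-cdh5.4.3a-vmware\centos-6.6-x86_64-base-disk-cl2.vmdk"

The -R switch checks a sparse virtual disk for consistency and tries to repair any errors it finds; if it reports the disk as unrepairable, restore from a backup copy.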

Firefox is not working in my Cloudera VM

Firefox is not starting in my Cloudera VM

$
0
0

 

How to kill Firefox process in Linux
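A hung Firefox inside the CentOS guest can be terminated from a terminal, for example:

pkill firefox              # send SIGTERM to every process named firefox
pkill -9 firefox           # force-kill if it ignores SIGTERM
# equivalent alternative:
kill -9 $(pidof firefox)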

Fixed HDD size

$
0
0

Hi, I have a question about a VMware virtual machine setting.

 

I didn't set up this virtual machine myself, so I'm not sure why this happened.

 

I really need to increase the HDD size of the virtual machine. However, I can't increase or decrease the size.

 

I can't even edit the number or use the up/down arrows.

 

This is the configuration screen, and the problem is in the red box.
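The size field is typically greyed out while the VM is powered on, has snapshots, or uses a disk shared with a linked clone. If none of those apply and this is Workstation/Player, a sketch of expanding the disk from the command line instead (the path and target size below are placeholders):

vmware-vdiskmanager -x 60GB "/path/to/your-vm-disk.vmdk"    # run with the VM powered off and no snapshots present
# Afterwards, grow the partition and filesystem inside the guest OS (e.g. with GParted).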

Running a VirtualBox VM on ESXi

$
0
0

For an inter-organizational project, one of our users needs to run a VirtualBox VM.

 

Since he can't run it on his laptop because Hyper-V is enabled, he asked if he could run it inside a VM on our vSphere platform.

We could run the VM directly in vSphere, but it doesn't need outside network access, only VM-to-host connectivity (a host-only network), and it has a fixed IP address, so running it there would mean giving it network access and changing the IP address, which I'd rather not do.

But running VirtualBox inside an ESXi VM requires enabling "Expose hardware assisted virtualization to the guest OS" for that VM.

 

If we enable this setting on that VM, are there any implications for the rest of our environment?

We will do it in a non-production cluster for now but I'd rather know if there are risks or known issues.

 

Thanks,

iscsi port binding

$
0
0

So I have my distributed switch set up with two iSCSI VMkernel ports, each with a single unique uplink per host.  My question is: for MPIO, do I bind the VMkernel ports to the QLogic 57810 storage adapters, bind them only to the software iSCSI adapter, or both?  I have been confused by the documentation that I have read.

 

Thank you

 

Jeff
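For reference, when the binding is done against the software iSCSI adapter, a sketch of what it looks like from the ESXi CLI (the adapter and vmk names below are placeholders; list yours first):

esxcli iscsi adapter list                           # find the software iSCSI adapter, e.g. vmhba64
esxcli network ip interface list                    # find the iSCSI VMkernel ports, e.g. vmk1 and vmk2
esxcli iscsi networkportal add -A vmhba64 -n vmk1   # bind the first VMkernel port
esxcli iscsi networkportal add -A vmhba64 -n vmk2   # bind the second VMkernel port
esxcli iscsi networkportal list -A vmhba64          # verify both bindings

Whether the dependent hardware adapters (the QLogic 57810) or the software adapter should own the bindings depends on which adapter actually carries your iSCSI traffic; the commands above only illustrate the software-adapter case.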

vCenter 6.5 running on windows to VCSA 6.5 appliance migration

$
0
0

Hi Experts,

 

I have a vCenter Server 6.5 which is hosted on the Windows platform, and there is no Update Manager on it.

I am planning to install Update Manager on it, but I read that Update Manager is built into the VCSA 6.5 appliance. If I migrate vCenter 6.5 to the VCSA 6.5 appliance, will Update Manager be configured automatically on the VCSA appliance? Please advise.

 

Arjun EK


Advice on hosting CyberPatriot Images in the cloud

$
0
0

Hello - I am a mentor to our local school / county's AFA JROTC Cyber Patriot program. (AFA CyberPatriot Website)

 

This program involves use of Windows/Ubuntu VM images in which cadets try to find & fix security vulnerabilities.

 

One of the biggest challenges is the availability of these images on which the cadets can practice (via a GUI).

 

We can create these images, but I would like to make them available in the cloud so any of our students can log in and work on an image, and when they are done the image reverts back.

 

From what little I know, it sounds like a bare-metal server is best, but beyond that I'm a bit clueless.

 

I don't mind doing the self-education, but frankly I could use some direction.

 

Thank you in advance!

- JR -

HCX - Remove from Disaster Recovery Status - (no link between on prem and cloud)

$
0
0

Our link between on prem and our remote site is down and apparently will be down for a while.

 

We have a number of VMs that were added to Disaster Recovery and obviously they are all in Error in the HCX View.

 

I need to expand some of the VMs' disks, but I'm unable to because of their current Protected VM status.

 

Is there a manual/CLI way for me to break the link and get back full control of the VMs?

 

Thanks

VMware Backup with NAKIVO Backup & Replication

$
0
0

If you have virtualized infrastructure in your company, you must remember to periodically back up your virtual machines to safeguard the data and achieve compliance. However, choosing a particular backup solution is not always easy and straightforward. While there are some free backup solutions available, they usually come with a catch. This can be in the form of limited functionality, unreliable backups, or other similar shortcomings. For enterprise-level backup solutions, negative features are often manifested in high prices and “bulkiness” as you may not always have an IT team and environment that can handle such software. That’s why here at NAKIVO, our aim is to deliver the best of both worlds – a cost-effective, reliable, and lightweight solution suitable for any environment.

To illustrate our point, we are going to showcase how quick and easy it is to back up your VMs with NAKIVO Backup & Replication, using VMware as an example.

Steps

  1. Install NAKIVO Backup & Replication and log in

 

The solution takes only up to 2 minutes to install.  Once done, log in using your credentials.

 

  2. Add your vCenter or ESXi host to the inventory

 

Click the configuration tab and select Inventory. Click Add New and choose VMware vCenter or ESXi host. You’ll be prompted to enter the Display Name, Hostname or IP, Username, Password, and Web services port. Click Add at the bottom to proceed.

step 1.png

 

  3. Create VMware VM Backup Job and choose VMs to back up

From the main UI of NAKIVO Backup & Replication, click Create and choose VMware vSphere Backup Job. Choose the VMs that you want to back up. You can either select separate VMs or the whole container (e.g., a host) at once.  You can also choose to back up your VMs according to specific policies. These policies can automatically include VMs into the job based on specific criteria, such as the name or size of the VM. If you create a policy, then the new VMs matching the specific policy rules are going to be added to the job automatically; at the same time, non-matching VMs are going to be excluded.

 

step 2.png

   

  4. Select repositories

 

At the second step, select a repository where the backups will be stored. You can also exclude disks that don't need to be protected from the backup.

step 4.png

  5. Configure scheduling

Next, either set up scheduling for the job or select the “Do not schedule, run on demand” option to make sure this job can only be activated manually. You have a wide range of options to select from, so you can be rather flexible with your scheduling if needed. You can also access the in-built dashboard calendar, which can help you better track your jobs.

step 5.png

  6. Select the retention policy

At the next step, select your preferred retention policy for your backups. You can keep up to 1000 recovery points and rotate them on a daily, weekly, monthly, or yearly basis.

step 6.png     

  7. Configure the job options

At the last step, specify a name for your job and enable additional options, such as app-aware mode, screenshot verification, encryption, log truncation, execution of pre- and post-job scripts, etc. If you have limited bandwidth, you may also want to consider enabling bandwidth throttling.

step 7.png

When you're done, click Finish & Run. Once the job has started, you can track its progress in the Activities tab.

 

References

VMware Recovery with Nakivo Backup & Replication

$
0
0

While successfully backing up your VM is crucial for ensuring the security of your data, it is only half of what you need: the recovery of said VM is just as important. Your backups should be consistently recoverable in any situation and at will. Moreover, the speed of recovery is also something to consider as your time is invaluable and can be better spent elsewhere, rather than on long manual recovery processes.

NAKIVO Backup & Replication was built with time, efficiency, and reliability in mind. When you use our solution to protect your business data, you can expect your VM backups to be reliable and your data to be recoverable both fully and granularly. Here is an example of how simple it is to recover your VMs in NAKIVO Backup & Replication:

Steps (4 total)

 

  1. Start VM recovery

 

From the main UI, click Recover and select VM recovery from backup.

     1.png

  

  2. Select the backup to recover

 

At the first step of the recovery process, select the backup and desired recovery point. Click Next to proceed to the next step.

   2.png

 

  3. Select the destination

 

Select the Target Container and the Datastore where you want to recover your VM data. You also need to choose the Network that your VM is going to be connected to. If necessary, you can recover individual disks to different datastores within a selected target container.

    3.png

 

  4. Configure VM recovery options

 

At the final step, you can specify the Job Name and configure various additional options, such as Network Acceleration, Recovery Mode, and Encryption. You can also enable the execution of pre- and post- job scripts and configure various data transfer options which speed up VM recovery. Enable the Send Job Reports to option to get a full report on the recovery process. Note, however, that for this option to be available, you need to have configured the e-mail settings in the Configuration Tab beforehand.

 

4.png

   

Conclusion

The VM recovery in NAKIVO Backup & Replication can be done in a few simple steps. The recovery process for individual files and objects (MS SQL, Exchange, and Active Directory) is just as easy and intuitive. Moreover, there is a Flash VM Boot option available, which allows you to recover entire VMs near-instantly.

 

Flash VM Boot with Nakivo Backup & Replication

$
0
0

Standard VM recovery can be a time-consuming process, and sometimes you just need specific files and objects restored now without having to wait for a full recovery to complete. For this purpose, NAKIVO Backup & Replication provides the Flash VM Boot functionality.

Flash VM Boot allows you to boot the VMs directly from compressed and deduplicated backups. This feature works out of the box, requiring no special setup or preparations. After the VM has been booted, you can migrate it to production for permanent recovery, get immediate access to certain files or folders, or test new system updates. Changes made in the VM do not modify the backup data.

Here is an example of how to use Flash VM Boot in NAKIVO Backup & Replication:

Steps (5 total)

 

1. Start Flash VM Boot

 

From the main UI, click Recover and select Flash VM boot.

1p.png

 

2. Choose the Source

 

At the first step, choose the Backups you would like to recover, then select the Recovery Point. By default, NAKIVO Backup & Replication always selects the latest recovery point.

 

2p.png

 

3. Choose the Destination

 

Select the Container and the Datastore where the changes to the VM will be stored. You can also choose to connect the recovered machine to a specific network or a Temporary Isolated Network, or forgo this step altogether. If necessary, you can also store the changes to different disks on different datastores.

 

3p.png

 

4. Configure Scheduling

 

NAKIVO Backup & Replication provides a variety of scheduling options. You can bring out a calendar dashboard for a bird’s-eye view of all of your scheduled jobs. Disabling scheduling and running the job on-demand is also an option.

4p.png

 

5. Configure the Job Options

 

At the last step, you can give a Name to the job and enable Screenshot Verification to make sure the recovery is successful. You can also enable the generation of new VM MAC addresses for the recovered VMs, select whether to power on the machines after the recovery, and run Pre- and Post-Job Scripts if necessary. It is also possible to use Proxy Transporter for data routing purposes. Click Finish & Run once you’re done configuring the options to create and run the job.

 

5p.png

 

 

 

Site Recovery with Nakivo Backup & Replication

$
0
0

Let’s face it—sometimes performing a standard VM backup just isn’t enough. If your production site is down, whether due to disaster or human error, it is paramount that you quickly resume business processes, lest you face the risk of losing money and customers. For this reason, NAKIVO Backup & Replication offers a comprehensive disaster recovery solution called Site Recovery.

With Site Recovery, you can create recovery workflows and perform scheduled non-disruptive disaster recovery testing. Each workflow combines certain actions and conditions that can be executed in a single click, edited, and tested at will.

Let’s create a simple workflow using Site Recovery to demonstrate how quickly you can restore your production environment after a disaster:

Steps (11 total)

 

 

  1. Create a site recovery workflow

From the main UI of NAKIVO Backup & Replication, click Create and select Site recovery job.

1st.png

 

  2. Start adding actions to the workflow

On the left, you can see the full list of actions available to you. For the purposes of this demonstration, let’s say you have a VMware environment that you want to recover with this particular workflow. Let’s start by stopping the existing jobs, in order to free up resources and increase the reliability of the site recovery process. Choose Stop jobs from the list.

 

2st.png

 

  3. Choose the jobs to stop

Select the jobs you wish to be stopped from the list on the left. Make sure to note the Action options below. The first option, Run this action in, allows the action to run only in production mode, testing mode, or in both. You can also configure the Waiting behavior of the solution for this action. NAKIVO Backup & Replication can wait for this action to finish, or proceed with the next action in the list immediately. The last option is Error handling. If the set action fails, the solution can be set to either proceed with the Site Recovery workflow or fail the entire job automatically.

 

 

3st.png

 

  4. Select VMs to fail over

At the next step, you must select the Failover VMware VMs action. This allows you to transfer workloads to your VM replicas at the DR location so as to promptly resume business operations. On the first screen, you can choose the replicas you wish to be used for failover and select the recovery point. Keep in mind that these replicas need to be created beforehand. Once done, configure the Action options and make sure Power off source VMs option is enabled to avoid any errors.

 

4st.png

    

  5. Wait a few minutes

Select the Wait action from the list to give your VMs enough time to boot properly.

 

5st.png

 

  6. Restart the jobs

After a successful failover, you can restart the jobs which were previously stopped. Add Run jobs action and reselect the jobs you want to start again. Click Next to proceed with the workflow configuration.

 

6st.png    

 

  7. Enable network mapping

Since your workflow includes the Failover action, the solution will allow you to configure network mapping and re-IP options to further automate the process. Network mapping helps connect VM replicas to the right network after failover, and re-IP assigns appropriate IP addresses to them.

To enable Network Mapping, specify the source and target networks you wish to be used, or choose a rule that has already been created.

 

7st.png

 

  8. Configure Re-IP

As is the case with network mapping, in order to enable Re-IP, you need to specify the old and new IP addresses for your VMs, or choose one of the rules which was created beforehand.

8st.png

 

  9. Schedule the workflow testing

Site Recovery workflows can be tested on a schedule. You have a wide variety of options to choose from in terms of scheduling, such as having multiple schedules and using the calendar dashboard. You can also choose to disable scheduling altogether and run tests on demand.

 

9st.png

 

  10. Finish the workflow creation

At the final step of the workflow creation, you can enter a Job name and set the required recovery time objective (RTO) for testing purposes. Click Finish once you’re done.

 

10st.png 

 

  11. Run or test your site recovery workflow

Once your site recovery workflow has been created, you can run it either in test or production mode from the main UI of NAKIVO Backup & Replication by clicking Run Job, and choosing the appropriate option. One thing to note is that if you run Site Recovery in production, you will be prompted to choose the Failover type. This can either be a Planned Failover or an Emergency Failover. If you select the former, the solution is going to make one final snapshot before switching workloads to replicas. You should select this option if you still have time to prepare, for example, before an anticipated blackout. Choosing the latter option allows you to switch workloads immediately during failover. Additionally, you can always edit your created site recovery workflow if necessary.

 

11st.png

Conclusion

Site Recovery is a simple and effective disaster recovery functionality integrated into NAKIVO Backup & Replication. With Site Recovery, you can create a workflow of almost any level of complexity, containing a set of actions for any disaster or emergency scenario that suits your needs best.

In this short tutorial we have shown you only a fraction of what Site Recovery is capable of. You can test the full scope of its possibilities by downloading our full-featured Free Trial.

Reference:

NAKIVO Backup & Replication full-featured Free Trial

App in VMware not connected to outside OS

$
0
0

Hi,

I have Oracle Apps installed in a virtual machine, and I am using Windows 7 on my physical machine.

It was working well on both sides, but suddenly the connection was lost.

The app is working fine in the VM but not in the outside OS.

What needs to be checked to fix this? Do I need to look into the IP address?

Please help


vCenter Upgrade

$
0
0

How can I find out when my vCenter 6.7 was upgraded?

Thanks

Nimesh
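If this is the vCenter Server Appliance, one place the patch/upgrade history can usually be read is the appliance shell over SSH, for example (a sketch; exact command availability depends on the VCSA build):

software-packages list --history     # lists installed patches/updates together with their install dates
# The logs under /var/log/vmware/applmgmt/ also carry timestamps from past update runs.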

6.7 U3 VMotion CPU compatibility

$
0
0

I have two HPE Gen9 servers

DL360 Gen 9 with Intel Xeon E5-2600-v3 Series

DL380 Gen9 with Intel Xeon E5-2600-v4 Series

According to the compatibility guide: VMware Compatibility Guide - cpu

the v4 CPU shows that it supports the Haswell Generation, but the v3 (Haswell) doesn't show that it supports the Broadwell Generation?

 

Can someone clarify whether vMotion will work across these two CPU series? Thanks

    

CPU Series Detail

Intel Xeon E5-2600-v4 Series:
  Max Cores per Socket: 24
  Max Threads: 48
  Enhanced vMotion Capability Modes: Intel® Merom, Penryn, Nehalem, Westmere, Sandy-Bridge, Ivy-Bridge, Haswell, Broadwell Generations
  CPUID Info: 6.4F
  CPUIDs: 0x000406F0
  Code Name: Broadwell-EP
  Launch Date: 3/28/2016
  Supports SMP-FT: Yes
  Capable of Legacy FT: No
  Legacy FT Compatible Set: N/A

Intel Xeon E5-2600-v3 Series:
  Max Cores per Socket: 18
  Max Threads: 36
  Enhanced vMotion Capability Modes: Intel® Merom, Penryn, Nehalem, Westmere, Sandy-Bridge, Ivy-Bridge, Haswell Generations
  CPUID Info: 6.3F
  CPUIDs: 0x000306F0
  Code Name: Haswell-EP
  Launch Date: 9/6/2014
  Supports SMP-FT: Yes
  Capable of Legacy FT: Yes
  Legacy FT Compatible Set: Intel® Haswell Generation

Nested vSphere under QEMU/KVM unable to run VMs or Manage hosts

$
0
0

Dear community,

 

I am having trouble setting up a nested vSphere 6.7 environment under QEMU/KVM.

 

I created my first vESXi VM, where I mount an NFS share and deploy my vCenter to it. The vCenter deployment on the first vESXi host works fine, and I am able to add the first ESXi host under the nested vCenter (vCenter runs on that very host). I use vmxnet3 for the first vESXi host. If I switch to e1000 for the first vESXi host, the vCenter deployment fails.


I created another vESXi VM, but I am unable to add this vESXi host under the vCenter. The task gets stuck at 80%, and a packet capture shows a lot of re-transmissions. If I switch to e1000 for the second vESXi host, I am still not able to add it under the vCenter unless I disable LRO/LSO on the vCenter VM.

 

This lets me manage the second vESXi host under the vCenter. I am able to create a vDS, port groups, etc., but I am unable to run any VM on the second vESXi host, probably because of e1000 performance. I see the following log when I try to deploy a VM on the second vESXi host. The same error in the first vESXi host's log made me try vmxnet3 to make the vCenter deployment work, but with vmxnet3 I can't manage the host under the vCenter.

 

[0x4180250e84e2]HelperQueueFunc@vmkernel#nover+0x30f stack: 0x43097b0e4768, 0x43097b0e4758, 0x43097b0

2019-03-20T16:14:54.386Z cpu3:2097552)0x451a0889bfe0:[0x4180253081f2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0

2019-03-20T16:14:54.774Z cpu0:2100411)<4>e1000: vmnic0: e1000_phy_read_status: Error reading PHY register

2019-03-20T16:14:54.774Z cpu0:2100411)<6>e1000: vmnic0: e1000_watchdog_task: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None

2019-03-20T16:15:04.332Z cpu3:2097552)IntrCookie: 2949: 0xa took 1000000us to sync

2019-03-20T16:15:04.405Z cpu3:2097208)NetqueueBal: 5030: vmnic0: device Up notification, reset logical space needed

2019-03-20T16:15:04.405Z cpu3:2097208)NetPort: 1580: disabled port 0x2000002

2019-03-20T16:15:04.405Z cpu2:2097714)NetSched: 654: vmnic0-0-tx: worldID = 2097714 exits

2019-03-20T16:15:04.405Z cpu3:2097208)Uplink: 11680: enabled port 0x2000002 with mac 0c:67:93:0d:6f:00

 

Any help would be very much appreciated.
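Since disabling LRO/LSO on the vCenter VM already helped, a sketch of disabling LRO host-wide on the nested vESXi hosts, a workaround often mentioned for nested labs on non-ESXi hypervisors (verify the option names against your build first):

# On each nested vESXi host:
esxcli system settings advanced list | grep -i lro            # confirm the LRO option names on this build
esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0
esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
# Reboot the host afterwards so the change takes full effect.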

not_working_nested_virtualization_esxi5.5_u3

$
0
0

Hello!

Please help me understand why nested virtualization on ESXi 5.5, running on a Dell PowerEdge R830 server, does not work. I did the following:

- added to the file /etc/VMware/config:

vhv.enabled = “TRUE”

hypervisor.cpuid.v0 = "FALSE"

 

- in the virtual machine's settings, added a parameter under Options/General: hypervisor.cpuid.v0 with value FALSE.

 

- in the virtual machine's settings, changed the CPU/MMU Virtualization parameter to "Use Intel VT-x/AMD-V for instruction set virtualization and Intel EPT/AMD RVI for MMU virtualization"

 

The result: it still does not work.
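One detail worth checking: on ESXi 5.1 and later the per-VM key is vhv.enable (not vhv.enabled), set in the VM's .vmx with straight quotes, while the host-wide vhv.allow in /etc/vmware/config only applies to ESXi 5.0. A sketch of setting it from the ESXi shell, with a placeholder datastore path and VM ID:

# Run on the ESXi host with the nested VM powered off.
vim-cmd vmsvc/getallvms                                                                # find the VM ID and its .vmx path
echo 'vhv.enable = "TRUE"'           >> /vmfs/volumes/datastore1/nested/nested.vmx     # placeholder path
echo 'hypervisor.cpuid.v0 = "FALSE"' >> /vmfs/volumes/datastore1/nested/nested.vmx
vim-cmd vmsvc/reload 1                                                                 # reload the edited .vmx (use your VM ID)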

 

 

Nested FT in vSphere 6.7

$
0
0

Hi All

 

When I try to enable FT in my vSphere 6.7 nested lab, I receive the following error: "Virtual Machine Fault Tolerance state changed. vCenter disabled Fault Tolerance because the Secondary VM could not be powered on"

 

The secondary machine was created but is still disabled. Is it possible to test this feature in this environment?

 

Kind Regards.

 

Valter Junior

VMware SDDC components on Google cloud platform

$
0
0

We are thinking of migrating the following VMware SDDC components/VMs to Google Cloud Platform (GCP). We want to know whether hosting these VMware appliances/VMs in GCP is a compatible approach.

 

- vRealize Automation Center  Appliances 7.3.1

- DEM workers

- Model Manager Data/ WEB

- vSphere Proxy agents

- vRealize Manager server

- The Iaas database server (SQL 2012)

- vRealize Log Insight manager

- vCloud usage meter

- vRealize Operations Manager  6.6.1 & its remote collectors

- vRealize Hyperic manager

- VMware NSX Manager 6.3.7

- Platform service controller

- vRealize Orchestrator 7.3.1

- Site Recovery manager 6.5

 

If the answer to the compatibility question is 'Yes' then how do we approach this?

If the answer is 'No', then what are the high-level reasons?

No connection from VM to outside and vice versa despite being connected to the same vSwitch

$
0
0

Hello All,

 

I have a small home lab on my laptop and I have a few problems with it. I'd like to fix them to learn the product.

 

ESXi 6.7 is installed in VMware Workstation for Linux (my distribution is Kubuntu 18.04 with no firewall), and the virtual machines are inside ESXi.

I've configured networking, and I can ping outside from ESXi. I can reach my LAN and get to the internet.

On the other hand, the virtual machines can ping the ESXi server but can't reach the LAN or the internet.

 

In conclusion:

 

Reach Virtual Machines <- ESXi -> Reach Physical Computers & Internet

 

Virtual Machines  ----------> Can't reach Physical Computers & Internet

From Physical Computers & Internet ----------> Can't ping Virtual Machines

 

Virtual machines can ping each other.

 

I've read a lot and tried several things, but nothing works. :S

 

All my machines are in the same network: 192.168.0.0/24

 

Also, from my understanding, all my VMs and the VMkernel are attached to the same vSwitch.

 

Does anybody know how to fix this problem?

Sorry for my poor English

 

Regards,

 

--

Galois
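One classic cause with bridged networking on a Linux Workstation host is that the nested VMs' traffic requires the bridge to run in promiscuous mode, which in turn requires write access to the vmnet device. A sketch of the usual check and fix (the device name is assumed to be /dev/vmnet0 for the bridged network; lab use only):

ls -l /dev/vmnet0                    # check the current owner and permissions
sudo chmod a+rw /dev/vmnet0          # allow promiscuous mode for non-root users
# Then power-cycle the ESXi VM (or disconnect/reconnect its NIC) so Workstation re-opens the device.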


ESX servers - OFFICE Relocation

$
0
0

Hi All,

 

We need to relocate our office to a new place. We have 3 ESX server nodes running multiple virtual machines for different purposes.

I am looking for the best approach to this relocation in terms of downtime and effort.

 

Can you throw some light on this?

A question about nested virtualization.

$
0
0

Hello,

Does nested virtualization in VMware work without any problems? Any experiences?

For example, I install a Windows OS in VMware ESXi or Workstation, and inside this OS I install a tool like VirtualBox and install another OS.

 

Thanks.

Install ESXi 6.5 or 6.7 on Hyper-v Windows SRV 2016 VM

$
0
0

Hello,

 

I am trying to install ESXi 6.5 and later on a Hyper-V host. I have installed ESXi 6.0 (an old version), and it works fine after doing the following:

1. Enable nested virtualization on VM

2. Using Legacy Net Adapter

3. adding net tulip vib to ESXi image iso

 

When I try to update ESXi to 6.5 or install a fresh copy, it fails because no network adapter is found.

 

Has anybody installed ESXi 6.5 or later on a Hyper-V VM? Any suggestions?

virtualization

$
0
0

hello all,

I was trying to install ESXi 6.7 in an ESXi 6.7 VM, and I keep getting the error shown in the attached picture, so I was wondering if anyone could help me out. (Capture 1.JPG)

virtualization

$
0
0

Hello all,

I am trying to configure FCoE storage on ESXi 6.7. Can anyone guide me through the steps?
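As a starting point, once the FCoE-capable vmnic is uplinked to a vSwitch and on the right VLAN, the software FCoE side can be driven from esxcli; a sketch with a placeholder vmnic name:

esxcli fcoe nic list                       # list vmnics with FCoE capability
esxcli fcoe nic discover -n vmnic2         # activate FCoE on that vmnic (placeholder name)
esxcli fcoe adapter list                   # a new FCoE adapter (vmhbaXX) should now appear
esxcli storage core adapter rescan --all   # rescan so the LUNs presented over FCoE show up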

Nested Virtualization Setup

$
0
0

I am looking to set up nested virtualization. I'm pretty new to nested virtualization and vSphere's networking.

 

I was reading that enabling 'Promiscuous Mode' may have an impact on the rest of the environment. Is there a way to set up a nested host without enabling promiscuous mode? Or could I isolate it in the same environment by adding it to a new port group on a new vSwitch with promiscuous mode enabled, and would that prevent performance issues?

 

Help/Suggestions always appreciated

Best practices VM physical socket / virtual socket

$
0
0

Hi,

 

I have a question about vCPU performance for a VM.

I have an ESXi with 1 physical socket (8 cores + HT) = 16 pCPU

 

My VM (Windows 2008 R2) with 2 virtual sockets (x 2 cores) = 4 vCPU

 

Will the CPU performance of my VM suffer?

 

 

 

 


 


Issue setting 'options kvm ignore_msrs=1' for nested 64-bit VMs

$
0
0

Hi,

 

I am having issues running 64-bit VMs on a nested ESXi 6.7 host running on KVM (Ubuntu 16.04.6 EVE-NG).

 

I believe the fix is to enable 'options kvm ignore_msrs=1' on the main Ubuntu machine.

 

I have tried adding 'options kvm ignore_msrs=1' in various places, including:

 

/etc/modprobe.d/kvm.conf

/etc/modprobe.d/kvm-intel.conf

/etc/modprobe.d/qemu-system-x86.conf

 

But after a reboot I still see:

 

root@eve-ng:~# cat /sys/module/kvm/parameters/ignore_msrs

N

 

I can set it to 'Y' with:

 

root@eve-ng:~# echo 1 > /sys/module/kvm/parameters/ignore_msrs

 

root@eve-ng:~# cat /sys/module/kvm/parameters/ignore_msrs

Y

 

But it doesn't survive a reboot.

 

Any suggestions? Thanks

 

P.S. I am running KVM on Ubuntu 16.04.6 LTS.
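One thing that commonly bites here is that the kvm module is loaded from the initramfs, so a file under /etc/modprobe.d/ only takes effect once the initramfs has been rebuilt. A sketch for Ubuntu 16.04:

echo 'options kvm ignore_msrs=1' | sudo tee /etc/modprobe.d/kvm.conf
sudo update-initramfs -u                       # bake the new module option into the initramfs
sudo reboot
cat /sys/module/kvm/parameters/ignore_msrs     # expect: Y
# Alternative without a reboot (only while no KVM guests are running):
# sudo modprobe -r kvm_intel kvm && sudo modprobe kvm ignore_msrs=1 && sudo modprobe kvm_intel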

in an Azure standard offering the setup of ESXi fails with no network adapter found

$
0
0

I'm tinkering around with a nested virtualization setup, ESXi on Azure, and need help.

 

The ESXi setup starts; however, it fails with "No Network Adapter".

 

For affordable home lab purposes, I didn't find any IPMI/BMC/iDRAC/iLO/Kubernetes-managed bare-metal-as-a-service offerings. The Azure VM size used is a Standard_E4s_v3 offering. The automated provisioning scripts so far are documented at GitHub - dcasota/vesxi-on-azure-scripts. The network card presented is a Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function] [15b3:1004].

 

In reference to https://www.mellanox.com/page/products_dyn?product_family=29&mtag=vmware_driver (click on "View the list of the latest VMware driver version for Mellanox products"), the NIC type ConnectX-3 is not officially supported on any VMware ESXi release. From an OEM perspective, https://docs.microsoft.com/en-us/azure/virtual-network/create-vm-accelerated-networking-powershell describes the use of the ConnectX-3 Ethernet adapter for an Azure Windows Server VM. Hence, it could be possible to get/make NIC functionality for an Azure ESXi VM as well.

 

Using the Azure Standard_E4s_v3 offering, the no network adapter issue occurs with

- native ESXi image, latest 6.0, 6.5, 6.7

- customized ESXi 6.0 image using

      - older Mellanox driver MEL-mlnx-3.15.5.5-offline_bundle-4038025.zip and MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip

- customized ESXi 6.5 image using

      - a recipe description of first removing the software packages net-e1000e, net-mlx4-en, net-mlx4-core, nmlx4-core, nmlx4-en, nmlx4-rdma and

        afterwards adding the Mellanox ConnectX-3 offline bundle 3.16.11.10 and the net-tulip driver.

        syslinux.cfg and boot.cfg were modified to use the advanced settings iovDisableIR=TRUE ignoreHeadless=TRUE noIOMMU noipmiEnabled

        See the attached setup output log: the Mellanox network adapter wasn't loaded (follow-up issues: lvmdriver failed to load and nfs41client failed to load).

 

Any suggestions for further study?

Mount USB device automatically in VMware workstation

$
0
0

I have a server that hosts VMware Workstation Pro 15. It has about 6 virtual machines on it. On 5 of those I have removed USB. The 6th has it enabled and has a USB cable attached to an external device. Every time I turn that virtual machine off and on again, I have to go to its console while it is booting and click the device in the bottom right corner to enable it during boot. Very annoying. Is there a way to automate mounting a USB-connected device on a specific virtual machine (not just the one currently in the foreground)?

 

Thanks.

 

JR         
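For reference, Workstation supports per-VM auto-connect entries in the .vmx file, added while the VM is powered off; the exact value syntax is worth double-checking against the documentation for Workstation 15, and the vendor/product IDs below are placeholders (read the real ones with lsusb or Device Manager):

# Lines added to the VM's .vmx file (sketch):
usb.present = "TRUE"
usb.autoConnect.device0 = "vid:0x0781 pid:0x5567"

With such an entry, the matching USB device should attach to that specific VM at power-on instead of having to be connected manually from the console.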

Nested LAB ESXI: License

vsphere home lab hardware--CPU discussion

$
0
0

Hi all ITers,

 

I am looking for new Intel platform hardware to run some VMware home labs at home.

 

Currently, my home workstation hardware configuration is an Intel Xeon E3-1230v3 with 32 GB of DDR memory (non-ECC). Because of a motherboard limit, I can only run 32 GB of memory, so I decided to upgrade my workstation.

 

My question is this: I have checked the Intel desktop CPU specifications on Intel's website, and I found that the new 9th-gen Intel i7 CPUs all lack Hyper-Threading. At the same time, because of budget, I don't want to purchase an i9 CPU, so what concerns me is the missing Hyper-Threading feature.

 

In my home lab environment I will run about 8-12 VMs, so will the lack of Hyper-Threading limit my lab environment build-out?

hyper-v crashing

$
0
0

Hello, I'm attempting to run Windows Server 2019 with Hyper-V on ESXi 6.7 with the latest build, but I'm having an issue with Hyper-V crashing after every 28-29 minutes of running.

This host supports Intel VT-x, but the Intel VT-x implementation is incompatible with VMware ESX.

$
0
0

I cannot power on the VM because of this problem:

 

This host supports Intel VT-x, but the Intel VT-x implementation is incompatible with VMware ESX.


Oracle and VMware (What is the best practice for a single-instance Oracle installation on a virtual machine?)

$
0
0

Hi

 

I have installed a single-instance Oracle database (not Oracle RAC, just one node) on VMware, but I want to know what VMware recommends for running a production Oracle instance on a VM in a cluster. Some questions, such as:

 

Is there any issue due to vMotion for that VM?

Is it recommended to pin this machine to a specific host?

.

.

.

I have read some documents about this but could not find a specific answer.
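
On the host-pinning question: rather than disabling DRS for the VM, a common approach is a DRS VM-Host "should run on" rule. A minimal PowerCLI sketch, assuming a DRS-enabled cluster; the cluster, host and VM names are placeholders of mine:

  $cluster   = Get-Cluster -Name "Prod-Cluster"
  # Group containing the Oracle VM(s) and group containing the preferred host(s)
  $vmGroup   = New-DrsClusterGroup -Name "Oracle-VMs"   -Cluster $cluster -VM (Get-VM "oradb01")
  $hostGroup = New-DrsClusterGroup -Name "Oracle-Hosts" -Cluster $cluster -VMHost (Get-VMHost "esx01.lab.local")
  # "ShouldRunOn" keeps the VM on the preferred host but still lets HA restart it elsewhere
  New-DrsVMHostRule -Name "Oracle-ShouldRunOn" -Cluster $cluster -VMGroup $vmGroup -VMHostGroup $hostGroup -Type ShouldRunOn

A "must run on" rule is stricter and is typically only used when licensing requires hard host affinity.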

Nested Virtualization

$
0
0

Hi Guys,

Is there any security issue for my infrastructure if I provide nested virtualization for my customers' VMs?

Please advise me in this regard.

Resource allocation reports: vCloud/vCenter

$
0
0

Hello the community,

 

Can somebody help me figure out why the resource availability reported in vCloud can differ from the availability shown in vCenter?

For example, in the vCloud resource pool I see 38 GHz of CPU used out of 40 GHz.

Whereas in the provider vCenter we have 38 GHz used out of 498 GHz.

NB: The memory availability is the same in both vCenter and vCloud. No mismatch there.

 

Thanks for your kind support.

Nested ESXi lab, mac learning enabled, slow vMotion

$
0
0

Hello,

I've built a lab of 3 nested ESXi 6.7 hosts out of an HP Z420 with 128 GB of RAM and an E5-2640.
I'm not particularly interested in tuning my lab for high performance; my objective is to maintain my sysadmin skills and learn new ones.

However, after vMotioning my VCSA from one host to another, I noticed that the vMotion speed was not close to what I would expect. The speed is not terrible, but I'm wondering why I'm not achieving more than 3.5 Gbit/s.

As far as I understand, the data moved around in a nested vMotion job resides in the RAM of my physical host, so there should be no bottleneck at that level.

My nested ESXi host VMs are configured with the standard VMXNET3 adapter. Within ESXi, the vmnics show 10 Gbit/s, full duplex. The vMotion VMkernel port is attached to a dvSwitch with MAC learning enabled, and the underlying network carrying the ESXi VMs is also attached to a dvSwitch with MAC learning enabled.
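
For what it's worth, a quick sanity check worth running from the source host's ESXi shell is a don't-fragment vmkping across the vMotion VMkernel interface, to rule out an MTU mismatch along the nested path (vmk1 and the target IP below are placeholders):

  # 1472-byte payload = standard 1500 MTU; 8972-byte payload = jumbo 9000 MTU, if configured
  vmkping -I vmk1 -d -s 1472 192.168.10.12
  vmkping -I vmk1 -d -s 8972 192.168.10.12

Part of the gap is also likely the double software switching every packet goes through (the nested vSwitch plus the outer dvSwitch), so a single nested vMotion stream rarely saturates 10 Gbit/s.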

Why am I not getting close to the max speed of 10 Gbit/s when I vMotion?

Nested vDS with LLDP

$
0
0

I am trying to build a lab environment and I am using a nested setup.  One of the requirements is to have a vDS which can send LLDP packets.

 

I have enabled LLDP on the nested vDS; however, the adapters on the nested host state "Link Layer Discovery Protocol is not available on this physical network adapter". I have tried both the VMXNET3 and E1000E adapters on the nested host without success.

 

Does anyone know if what I am trying is even possible?

 

Thanks,

 

Ruairi

Failed to run KVM/QEMU under VMware Fusion

$
0
0

I am running a Fedora 32 VM in VMware Fusion Professional Version 11.5.3.

 

I then configured a VM in Boxes (a front end for KVM/QEMU) in Fedora and it failed to start. Initially it said hardware virtualization was not found, so I went into the VMware Fusion settings and enabled nesting (VT-x/EPT inside this virtual machine).

 

But now it fails to start. In the log file I see:

 

2020-05-03 01:47:53.464+0000: Domain id=2 is tainted: host-cpu

char device redirected to /dev/pts/1 (label charserial0)

2020-05-03T01:47:53.667149Z qemu-system-x86_64: error: failed to set MSR 0x48f to 0x7fefff00036dfb

qemu-system-x86_64: /builddir/build/BUILD/qemu-4.2.0/target/i386/kvm.c:2947: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.

2020-05-03 01:47:54.154+0000: shutting down, reason=failed

 

Can anyone help?

 

I can run ESXi 7.0 under VMware Fusion with no problem, so I assume KVM/QEMU should also work.
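
For what it's worth, the failing write is to MSR 0x48f, one of the VMX capability MSRs that QEMU only programs when it is exposing nested VMX to its own guest. If the Boxes VM does not itself need to run a hypervisor, one workaround worth trying (an assumption on my part, not a confirmed fix under Fusion) is to disable nested VMX inside the Fedora guest so QEMU stops touching those MSRs:

  # Inside the Fedora 32 VM (the KVM host); reload kvm_intel with nesting disabled
  echo "options kvm_intel nested=0" | sudo tee /etc/modprobe.d/kvm_intel.conf
  sudo modprobe -r kvm_intel
  sudo modprobe kvm_intel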

 

Thanks!

VM Migration with ORACLE RAC local shared disks

$
0
0

VM migration: how can we move the Oracle RAC VMs with local shared disks to new hardware? If we rebuild the RAC as new VMs, how can we make sure they have the same configuration (hostname, IPs, cluster name, services, etc.)? Please advise.
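
If the current nodes share their VMDKs via the multi-writer flag (the usual setup for RAC on VMware), that is one of the settings worth capturing from the existing .vmx files before any move or rebuild. A sketch of what such an entry looks like; the controller slot and datastore path are placeholders, not values from your environment:

  scsi1:0.present  = "TRUE"
  scsi1:0.fileName = "/vmfs/volumes/local-ds/rac-shared/rac_data01.vmdk"
  scsi1:0.sharing  = "multi-writer"

The shared disks also need to stay eager-zeroed thick on the new hardware, and the hostnames, IPs and cluster name carry over only if you move/copy the existing VMs rather than rebuild the guests from scratch.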


Dell PowerEdge 6525 with ESX 7.0.0 attempting Nested ESX results in HARDWARE_VIRTUALIZATION warning

$
0
0

Hi,

 

I recently installed a Dell 6525 (dual AMD EPYC 7302 16-core CPUs, 128 GB RAM and 8 TB storage) and successfully installed ESXi 6.7.0 U3a, ESXi 7.0.0 and the Dell-customised ESXi 7.0.0 without any problems.

Normal VMs operate quickly; however, when I create a nested ESXi VM using ESXi 6.7.0 U3a, ESXi 7.0.0 or the Dell-customised ESXi 7.0.0, I get the following error message:

 

<HARDWARE_VIRTUALIZATION WARNING: Hardware Virtualization is not a feature of the CPU, or is not enabled in the BIOS>

 

Can I edit this nested ESXi VM to add the feature to the CPU, or does VMware need to issue a patch?

 

Doh! Editing the VM's CPU settings and choosing to expose hardware-assisted virtualization to the guest OS resolved the warning.
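
For reference, that checkbox corresponds to a single advanced setting on the nested ESXi VM (my assumption is that the UI option maps to this .vmx key; it must be set while the VM is powered off):

  vhv.enable = "TRUE"

On ESXi it can typically be added under Edit Settings > VM Options > Advanced > Configuration Parameters, or by editing the .vmx directly.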

Tool for Capacity Planning

$
0
0

Hi

 

Our company is now planning to move to a VMware virtualized environment. I have read the articles about VMware Capacity Planner, but it is available only through partners.

Is there any third-party tool available that does the same job?

 

Regards

 

Mansoor

ESXi 7.0 nested on an ESXi 6.0 VM

$
0
0

On my physical ESXi 6.0 host I've created a hardware version 10 VM with 2 CPUs, 16 GB RAM and 4 SCSI HDDs.

Mounting an ESXi 6.7 ISO on it, I'm able to install ESXi on one of the 4 HDDs.

Mounting an ESXi 7.0 ISO on it, I'm not able to install ESXi because it does not see any HDD.

Why ?

Proxmox nested on ESXi 7

$
0
0

Hi to all

I'm trying to test Proxmox on ESXi 7.

I got it installed and it works, but all networking on Proxmox is isolated and cannot reach any network resources.

Promiscuous mode is enabled on the vSwitch.
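
For what it's worth, nested hypervisors usually also need Forged Transmits and MAC Address Changes set to Accept, not just Promiscuous Mode, because the Proxmox guests transmit with MAC addresses the outer vSwitch did not assign. A minimal PowerCLI sketch for a standard vSwitch port group; the host and port group names are placeholders of mine:

  Get-VMHost "esx01.lab.local" |
      Get-VirtualPortGroup -Name "Proxmox-PG" |
      Get-SecurityPolicy |
      Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true -MacChanges $true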

 

Any idea ?

Thx

Copying a virtual machine

$
0
0

Hi,

I need help copying or moving the virtual machine files to an external hard disk for backup, because we have a closed network and cannot install or run any backup software.

Or

Alternatively, is there any backup solution for the VM files that does not require installing backup software?
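
Assuming this is ESXi (the post does not say), one software-free option is to enable SSH on the host and copy each VM's folder off the datastore with scp while the VM is powered off. A minimal sketch; the host name, datastore and VM folder are placeholders:

  # Run from the machine the external disk is attached to
  scp -r root@esx01.lab.local:/vmfs/volumes/datastore1/MyVM/ /mnt/usbdisk/MyVM-backup/

Note that a flat .vmdk copied this way transfers its full provisioned size even if the disk is thin-provisioned, so plan the external disk capacity accordingly.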

 

Thanks.

Virtualized Intel VT-x/EPT is not supported on this platform

$
0
0

I currently have one of Dell's most powerful notebooks, a Latitude E7300 with an Intel i7-8665U processor, 32 GB of memory and a 512 GB SSD.

 

I bought the latest VMware Workstation 16 to run on this machine with Windows 10 Pro. I need to install Hyper-V on a Windows Server 2019 virtual machine using nested virtualization.

 

All BIOS virtualization options are already enabled. I checked the option "Virtualize Intel VT-x/EPT or AMD-V/RVI" in the VM's processor settings, but when trying to power on the virtual machine I get the following message:

 

"Virtualized Intel VT-x/EPT is not supported on this plataform. Continue without virtualized Intel VT-xEPT?

 

I can't believe this machine doesn't support this.
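
One cause worth ruling out first (an assumption on my part, since the post does not mention it): if the Windows hypervisor is active on the host (Hyper-V, WSL 2, Virtual Machine Platform, or memory integrity/Credential Guard), Workstation 16 runs on top of the Windows Hypervisor Platform and cannot expose VT-x/EPT to its guests. A quick check and the switch that turns the Windows hypervisor off (reboot required), from an elevated command prompt:

  REM Look for "A hypervisor has been detected" in the Hyper-V requirements section
  systeminfo | findstr /i "hypervisor"
  REM Disable the Windows hypervisor to test the theory (re-enable later with "auto")
  bcdedit /set hypervisorlaunchtype off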

vmxnet3 causing issues for nested ESXi 6.7/7.0.1 running inside QEMU/KVM

$
0
0

Hi Nested Community,

 

I have been using nested ESXi 6.7 under QEMU/KVM on an Ubuntu 18 host for a while now, and it has been working perfectly when the QEMU "e1000" network device is selected. I use the default NAT network that libvirt sets up, where it adds the guests (my ESXi hosts) to a virbr0 device automatically. The nested ESXi hosts are able to talk to each other and reach the outside world.

 

However, now that e1000 has been removed starting with ESXi 7, I tried using vmxnet3, which comes with Ubuntu 18 already compiled and available as a QEMU network device. The nested ESXi hosts boot just fine with this, and I can log in to the ESXi web UI and SSH to them from my Ubuntu 18 host. The weird thing is they fail/time out at certain tasks, for example:

 

  • Adding a host to vCenter hangs at 80% and never completes, with the error "A general system error occurred: Unable to push signed certificate to host". I am able to add the host that the VCSA runs on, but no other hosts.
  • When deploying a new OVA from a URL via the vCenter GUI, it asks me to verify the SSL thumbprint, then hangs and fails with the error "Unable to retrieve manifest or certificate file."

 

On ESXi 6.7, I simply stop the ESXi hosts, switch back to e1000, then everything works as expected. The problem is e1000 is not supported in ESXi 7, so I am out of luck running nested virt with this version.

 

Has anyone else come across this issue before?

I tried coming up with a few workarounds, but QEMU is limited in what network cards it can emulate. Please let me know if you have any ideas!
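
One more thing that may be worth trying before giving up on vmxnet3 (an assumption on my part; I have not verified that ESXi 7's ne1000 driver claims this device): QEMU/libvirt can also emulate the Intel e1000e, which requires a Q35 machine type. A hypothetical interface stanza for the ESXi guest's libvirt domain XML:

  <interface type='network'>
    <source network='default'/>
    <model type='e1000e'/>
  </interface>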

 

Thanks!





