Monday, November 25, 2013

9 Clicks to Upgrade vCenter appliance from 5.1 to 5.5

No need to uninstall and reinstall: just a few clicks to upgrade, and only a couple of clicks to update.


Step 0: Deploy a new vCenter Server Appliance (vCSA 5.5) on the same network as your existing vCenter appliance and power on the VM once it is deployed. Make sure it has an IP address you can connect to, and while deploying DO NOT assign a DNS name to the VM. Take a snapshot of your existing vCenter appliance, or simply clone it.
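
If you prefer to script the pre-upgrade snapshot instead of taking it in the client, a minimal pyVmomi sketch such as the one below works; the vCenter address, credentials and the VM name "vCSA-51" are placeholders for your environment.

    # Minimal pyVmomi sketch: snapshot the existing vCSA VM before the upgrade.
    # The vCenter/ESXi address, credentials and the VM name "vCSA-51" are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # the appliance ships with a self-signed certificate
    si = SmartConnect(host='vcenter51.lab.local', user='root', pwd='vmware', sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
        vcsa = next(vm for vm in view.view if vm.name == 'vCSA-51')   # the existing 5.1 appliance VM
        WaitForTask(vcsa.CreateSnapshot_Task(name='pre-5.5-upgrade',
                                             description='Before vCSA 5.1 to 5.5 upgrade',
                                             memory=False, quiesce=False))
    finally:
        Disconnect(si)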

Step 1: Connect to the newly deployed appliance over the admin UI at https://vCSA_IP_Addr:5480, accept the EULA, choose the option to upgrade from the previous version, and click Next.
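
If you want to confirm that the admin UI (VAMI) on port 5480 is answering before you begin, a quick check like the sketch below is enough; the two IP addresses are placeholders for the new and the existing appliance, and certificate verification is disabled because the appliances use self-signed certificates.

    # Quick reachability check of the appliance admin UI (VAMI) on port 5480.
    # The IP addresses are placeholders; verify=False because of the self-signed certificates.
    import requests
    requests.packages.urllib3.disable_warnings()

    for appliance in ('192.168.1.50', '192.168.1.51'):    # new vCSA 5.5, existing vCSA 5.1
        r = requests.get('https://%s:5480/' % appliance, verify=False, timeout=10)
        print('%s -> HTTP %s' % (appliance, r.status_code))   # expect 200 once the VAMI is up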

Step 2: From the new vCSA, copy the upgrade key (in the screenshot below it starts with BEGIN 648). Now connect to your existing vCenter appliance over the admin UI at https://existing_vCSA_ip_addr:5480, go to the Upgrade tab, paste the copied key, click Import key, and stop vCenter.
New vCSA appliance:


Existing vCSA appliance: Once the import is successful, it generates another key, as shown below (BEGIN 460). Copy this key, paste it into your newly deployed vCSA, and click Next.


Step 3: Select the checkbox to replace the certificates and click Next, as shown in the screenshot below.


Step 4: Set your SSO administrator password. Here your existing admin@System-Domain account changes to administrator@vsphere.local with a new password.


Step 5: Confirm the hosts that need to be checked by the pre-upgrade checker and click Next.


Step 6: Review the results shown after scanning the hosts with the pre-upgrade script:



Step 7: Confirm via the checkbox that you have a copy or a snapshot of the existing vCenter appliance, and click Start.

Step 8: Once the upgrade completes successfully, it shuts down your source vCSA and the newly deployed vCSA becomes your production vCenter appliance. Make sure your vCenter Server is running.

In a few minutes your vCenter is upgraded to 5.5.
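
If you want to double-check the result from a script, the short pyVmomi sketch below prints what the upgraded vCenter reports about itself; the address and password are placeholders, and the login is the administrator@vsphere.local account set in Step 4.

    # Minimal pyVmomi sketch: confirm the upgraded appliance now reports vCenter 5.5.
    # The address and password are placeholders for your environment.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='192.168.1.50', user='administrator@vsphere.local',
                      pwd='YourSSOPassword', sslContext=ctx)
    print(si.content.about.fullName)   # e.g. "VMware vCenter Server 5.5.0 build-..."
    print(si.content.about.version)    # expect a 5.5.x version string
    Disconnect(si)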


Monday, June 17, 2013

Presenting LUNs from a NetApp 8 Cluster to VMware ESXi & Configuring Array-Based Replication in VMware SRM



How to present a LUN from a NetApp cluster to VMware ESXi

This is a continuation of my previous post. In the video below we will see how to present a LUN from a replicated volume to ESXi.

 Step 1: Create a LUN for ESXi on the NetApp cluster
 Step 2: Create an initiator group to which the LUN will be mapped
 Step 3: Choose the volume from which the LUN will be created
 Step 4: Configure the iSCSI initiator on the ESXi server with the target IP address (a scripted version of this step and Step 6 is sketched after this list)
 Step 5: Once the LUN is detected, create a VMFS volume on the newly presented LUN
 Step 6: Configure the same on the rest of the ESXi servers; there is no need to create the VMFS volume again, it will appear once you rescan the iSCSI adapter
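
For anyone who would rather script Steps 4 and 6 than click through every host, here is a minimal pyVmomi sketch; it assumes the software iSCSI adapter is already enabled on the host, and the host name, credentials and the iSCSI LIF address are placeholders.

    # Minimal pyVmomi sketch for Steps 4 and 6: point the ESXi software iSCSI initiator
    # at the NetApp iSCSI LIF and rescan. The host name, credentials and LIF address are
    # placeholders, and the software iSCSI adapter is assumed to be enabled already.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='esxi01.lab.local', user='root', pwd='vmware', sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]                         # connected directly to a single ESXi host
        ss = host.configManager.storageSystem

        # Find the software iSCSI HBA (e.g. vmhba33) and add the LIF as a send target.
        hba = next(h for h in ss.storageDeviceInfo.hostBusAdapter
                   if isinstance(h, vim.host.InternetScsiHba))
        target = vim.host.InternetScsiHba.SendTarget(address='192.168.10.21', port=3260)
        ss.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])

        ss.RescanAllHba()                           # discover the newly mapped LUN
        ss.RescanVmfs()                             # pick up a VMFS datastore created on another host
    finally:
        Disconnect(si)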

Now we can proceed with the array configuration in SRM. Before we start, you have to download the appropriate SRA from vmware.com and install it on the SRM server so that it can communicate with the array. It is a simple configuration and all the steps are captured in the video.

                              Watch at 720p or above to see the text more clearly


Saturday, June 15, 2013

How to set up a NetApp 8.1.x Cluster with Replication in 25 mins




Prerequisites to set up a NetApp 8.1.2 cluster:

Hypervisor: ESX, VMware Workstation, VMware Player, VMware Fusion

VMware Player is free; you can download it from here

NetApp Simulator: You can download it from netapp.com; before the download you need to register and create an account.

Management Software: You can download OnCommand System Manager to configure and manage your NetApp simulators.

Compute Resources: 

Per simulator: 2 vCPU, 1.7 GB RAM and 260 GB of disk. We need two simulators, so the total compute requirement is 4 vCPU, 3.4 GB of memory and roughly 520 GB of disk space.

IP addresses: we need at least 15 IP addresses for the complete setup. The interfaces to be created are listed below (a small tally sketch follows the list):

Cluster 1:
  Cluster management
  Node management
  vServer1 data LIF
  vServer1 management LIF
  vServer1 CIFS & NFS LIF
  vServer1 iSCSI LIF 1
  vServer1 iSCSI LIF 2

Cluster 2:
  Cluster management
  Node management
  vServer1 data LIF
  vServer1 management LIF
  vServer1 CIFS & NFS LIF
  vServer1 iSCSI LIF 1
  vServer1 iSCSI LIF 2

Inter-cluster communication
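
The tally below is just a small planning aid that enumerates the interfaces listed above and confirms the address count; the interface names are labels used in this post, not ONTAP object names.

    # Small IP-planning aid: count the interfaces listed above per cluster.
    # The names are labels for this post, not ONTAP object names.
    PER_CLUSTER = [
        'Cluster management',
        'Node management',
        'vServer1 data LIF',
        'vServer1 management LIF',
        'vServer1 CIFS & NFS LIF',
        'vServer1 iSCSI LIF 1',
        'vServer1 iSCSI LIF 2',
    ]

    plan = {
        'Cluster 1': list(PER_CLUSTER),
        'Cluster 2': list(PER_CLUSTER),
        'Inter-cluster': ['Inter-cluster communication'],
    }

    total = sum(len(lifs) for lifs in plan.values())
    print('IP addresses required: %d' % total)    # 7 + 7 + 1 = 15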

You can set up the NetApp 8.1.2 cluster on ESXi, VMware Workstation, VMware Player or VMware Fusion.


Watch at 720p or above to see the text more clearly




                              NetApp 8.1.2 Cluster Setup Part 2

                    Watch at 720p or above to see the text more clearly




VMware vCloud Connector 2.5 is GA and available to download



What's vCloud Connector?
  • vCloud Connector (vCC) is a key differentiator in vCloud Hybrid Service (vCHS) as well as a core component of the vCloud Suite.
  • vCC helps customers realize the hybrid cloud vision by providing them with a single pane of glass to view, operate and copy VMs/vApps/templates across vSphere/vCD, vCHS & vCloud Service Providers.
What's new in vCC 2.5?
  • Offline Data Transfer (ODT) for vCHS
Allows customers to ship a large number of VMs/vApps/templates (multiple TBs) from on-prem vSphere/vCD to vCHS via an external storage device. The customer uses vCC to export the VMs/vApps/templates to the device and ships the device to vCHS. The vCHS Ops team then uploads them from the device into the customer's vCHS account, and the customer can start using them in their vCHS environment.
Note: This feature only supports vCHS as a destination.
  • UDP Data Transfer (UDT)
For customers who can leverage the UDP protocol instead of HTTP, this feature significantly improves the transfer performance of vCC and reduces the amount of time it takes to move VMs/vApps/templates between vSphere/vCD, vCHS & vCloud SPs.
  • Path Optimization
Uses streaming between the vCC Nodes to dramatically improve the transfer performance of vCC. 
Supported platforms:
vSphere 5.1, 5.0, 4.x
vCloud Director 5.1, 1.5
vCHS

vCloud Connector Core:
    • Available to all vSphere, vSphere with Operations Management & vCloud Director customers as a free download.
    • Includes all the new features such as ODT, UDT & Path Optimization.
    • Datacenter Extension & Content Sync are NOT included.
    • Available via the vSphere download page in the "Drivers & Tools" tab. Click here

vCloud Connector with Datacenter Extension & Content Sync:
    • Available to all vCloud Suite & vCHS customers.
    • Includes all vCC Core features plus Datacenter Extension & Content Sync.
    • Activated with a valid vCloud Suite license key or a vCC key provided with the vCloud Hybrid Service. Click here

Wednesday, June 12, 2013

Want to go to VMworld Europe in Barcelona?



Join cloudcredibility and complete the tasks assigned to you and your team; the points you earn can be redeemed for nice goodies. To see the entire list of goodies, click HERE.

The grand prize is a trip to VMworld Europe in Barcelona.





Different hypervisor designs in Type 1 hypervisors

In Type 1 VMMs/hypervisors, a.k.a. bare-metal hypervisors, there are two categories of hypervisor design.


a. Microkernelized hypervisor design
Ex: Microsoft Hyper-V
b. Monolithic hypervisor design
Ex: VMware vSphere

Microkernelized hypervisor:

Device drivers do not need to be hypervisor-aware, and they run in the controlling layer. As a result, a wide range of hardware can run this kind of hypervisor and there is less overhead on the hypervisor itself.
At the same time, this design needs an operating system to be installed to initialize the hypervisor layer, and any attack on or fault in that controlling-layer operating system can affect the whole hypervisor and bring down all the virtual machines.

Monolithic Hypervisor Design:

In this design the device drivers run at the same layer as the VMM/hypervisor, so the hardware and I/O devices have to be hypervisor-aware; in other words, device drivers must be developed specifically for the hypervisor, which means only a certain set of certified hardware can run this kind of hypervisor.
No operating system is required to bootstrap this hypervisor, which makes it stable, and no security patches are needed for components running in a "controlling layer."

Now that we have understood Type 1 hypervisors, what is a Type 2 hypervisor?

A Type 2 hypervisor runs as software on top of an existing operating system.

Ex: VMware Workstation, VMware Fusion etc.



Wednesday, May 8, 2013

Limits and Throughputs of vCloud Networking and Security Components

Details of Edge instances used in performance metrics comparison


Edge (Compact): 1 vCPU, 256 MB memory, 320 MB disk
Edge (Large): 2 vCPU, 1 GB memory, 320 MB disk
Edge (X-Large): 2 vCPU, 8 GB memory, 4.4 GB disk

 Tested Limits

The following table provides information on the tested soft limits per vCloud Networking and Security Manager:
Note: These soft limits can be exceeded on a per feature basis depending on the resources and the set of features in use.
Number of Edge HA appliances: 2,000 Compact/Large Edges or 1,000 X-Large Edges
Number of clusters: 8
Number of hosts with Edge in use: 256 (8 clusters * 32 hosts)
Number of hosts in inventory: 400
Number of virtual machines: 15,000 total, 5,000 powered on
Number of networks: 5,000 VXLANs
Number of firewall rules: 100,000
Number of firewall object groups: 130,000
Number of DHCP static bindings: 25,000
Number of DHCP pools: 10,000
Number of static routes: 100,000
Number of load balancer pools: 3,000
Number of load balancer virtual servers: 3,000
Number of members in load balancer pools: 30,000
The following table provides information on the tested soft limits per vCloud Networking and Security Edge:
Number of interfaces: 10
Number of firewall rules: 2,000
Number of NAT rules: 2,000
Number of DHCP static bindings: 25
Number of DHCP pools: 10
Number of static routes: 100
Number of load balancer pools: 3 (hard limit: 64)
Number of load balancer virtual servers: 3 (hard limit: 64)
Number of members per load balancer pool: 10 (hard limit: 32)
Concurrent IPsec VPN tunnels: 64
Concurrent SSL VPN tunnels: 25 (Compact), 100 (Large)

 Firewall and VPN Performance Comparison


Firewall performance: 3 Gbps (Compact), 9.7 Gbps (Large)
Concurrent sessions: 64,000 (Compact), 1,000,000 (Large)
New sessions per second: 8,000 (Compact), 50,000 (Large)
IPsec VPN throughput (with hardware acceleration via AES-NI): 0.9 Gbps (Compact), 2 Gbps (Large)

 Load Balancer Performance Comparison


Load balancer throughput, L7 proxy mode: 2.2 Gbps (Large), 3 Gbps (X-Large)
Load balancer connections per second, L7 proxy mode: 46,000 (Large), 50,000 (X-Large)
Load balancer concurrent connections, L7 proxy mode: 8,000 (Large), 60,000 (X-Large)
Load balancer throughput, L4 mode: 6 Gbps (Large), 6 Gbps (X-Large)
Load balancer connections per second, L4 mode: 50,000 (Large), 50,000 (X-Large)
Load balancer concurrent connections, L4 mode: 600,000 (Large), 1,000,000 (X-Large)