Category Archives: Cloud

HCX on IBM Cloud – Intro

In a typical hybrid cloud implementation, CIOs have the following concerns:
  1. I don’t have a compatible vSphere version
  2. My network architecture is different from the cloud service provider’s network
  3. My applications have many complex dependencies and interact with various other entities in my DCs, such as storage and DB solutions, DMZs, security solutions, and platform applications
  4. I have created my own governance and controls, and I am concerned whether I will be able to do the same in a cloud environment

Is there a way to get past these complexities and find an easier path? These are exactly the concerns HCX takes care of. Below are some of the most important use cases of HCX.

  1. Extending to the cloud: when you want to extend your application, perhaps because you need additional storage or compute. This can be a short-term requirement or a long-term one, for example when a hardware refresh is due.
  2. Disaster recovery: typically the customer wants the shortest possible RTO and RPO for the DR site. This is easily achieved if you have a VMware virtualized infrastructure on premises.
  3. Modernize the on-premises DC: if you are planning to move an entire DC to the cloud, HCX can help you migrate hundreds of apps with just a single reboot and no change to the OS or applications.

 

IBM Cloud provides the foundation to extend on-premises workloads seamlessly and with minimal effort. HCX enables this by extending the on-premises network into the cloud through an optimized software-defined WAN. VMware workloads can be moved from on-premises to IBM Cloud without any modifications and without any downtime.
HCX establishes a secure, WAN-optimized hybrid interconnect that sits between the cloud and the on-premises environment.
Below is the architecture of the implementation:
Hybridity Services: this has two sides, a source side and a target side. The target site is the IBM Cloud DC and the source site is the on-premises deployment.
Product: VMware HCX

Cloud Management: a centralized platform for managing the entire software-defined data center.
Product: vCenter

Virtualization Admin: responsible for maintaining the cloud services and environment.

Here is a step-by-step explanation of what needs to be done to use HCX:
Step 1: VMware HCX is deployed in the cloud instance in an IBM Cloud DC
Step 2:  HCX is deployed on-premises
Step 3:  Virtualization administrator uses the HCX user interface in vCenter to establish the network connection
Step 4: The two HCX deployments establish a software defined WAN connection to extend the on-premises layer 2 network to the cloud instance.
Step 5: Virtualization administrator uses the standard vSphere user interface to initiate migration of on-premises workload to the target site in the cloud.
Step 6: HCX uses WAN optimization to efficiently migrate the workload to the target site, maintaining the current workload IP configuration
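For readers who would rather script step 5 than click through vCenter, here is a minimal, hypothetical sketch of driving a migration through the HCX manager's REST interface. HCX does expose a REST API, but the endpoint path, header, and payload fields below are illustrative placeholders rather than the documented contract, so treat this as the shape of the workflow, not copy-paste code.

```python
# Illustrative sketch only: the endpoint path, header, and payload fields below
# are hypothetical placeholders, not the documented HCX API contract.
import requests

HCX_MANAGER = "https://hcx-manager.onprem.example.com"   # on-premises HCX appliance (placeholder)

def start_migration(session_token: str, vm_id: str, target_site_id: str) -> dict:
    """Request a low-downtime migration of one VM to the paired cloud-side HCX site."""
    payload = {
        "vm": vm_id,                          # on-premises VM to move
        "destinationSite": target_site_id,    # IBM Cloud HCX site paired in steps 3-4
        "migrationType": "vMotion",           # keep the current IP configuration
    }
    resp = requests.post(
        f"{HCX_MANAGER}/hybridity/api/migrations",     # placeholder path
        json=payload,
        headers={"x-hm-authorization": session_token}, # token from a prior login call (omitted)
        verify=False,                                  # lab setup; use proper certificates in production
    )
    resp.raise_for_status()
    return resp.json()
```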
If you want to read more about the solution architecture, click here.

Solution components and Reference Architecture

IBM Cloud provides automated deployment of the VMware solution components. The offerings in the solution portfolio consist of:
  1. VMware Cloud Foundation (VCF): vSphere ESXi, Platform Services Controller, vCenter Server Appliance, SDDC Manager, NSX, and vSAN. VCF consists of:
    a. 4-node base cluster
    b. 2 bare metal server size options
    c. Upgradable memory options
    d. Up to 27 additional nodes
  2. VMware vCenter Server: vSphere ESXi, Platform Services Controller (PSC), vCenter Server Appliance, NSX, and optionally vSAN. It consists of:
    a. 2-node base cluster
    b. 3 bare metal server size options
    c. Upgradable memory options
    d. Upgradable NSX options
The figure below shows the solution components of the VMware solution on IBM Cloud.
The following describes the architecture of the solution deployment:
Management: vCenter, a centralized platform for managing the entire software-defined data center.

Workloads: x86-based applications that need to be migrated to the cloud.

Software-Defined Networking: provides a network overlay virtualizing the physical network to deliver a large number of customer-defined networks (VXLAN), intelligent network routing, and micro-segmentation for enhanced firewall capabilities.
Product: VMware NSX


Compute: enables many virtualized Linux and Windows servers to run concurrently on the same physical bare metal server, providing high levels of server utilization and capacity.
Product: VMware vSphere

Software-Defined Storage: local storage of the physical hosts is aggregated into a high-performance, highly available software-defined SAN.
Product: VMware vSAN

Bare Metal Servers: IBM Cloud bare metal servers provide a dedicated, single-tenant basis for deploying the client's private infrastructure. Clients can locate their deployment in any of dozens of IBM Cloud locations around the globe.
Product: IBM Cloud

To learn more about the solution architecture, click here

 

This blog post is the second in a 10-part series on VMware on IBM Cloud.

 


 

VMware on IBM Cloud – An Introduction

Hybrid cloud is a combination of on-premises and public cloud services intended to work in unison to deliver value. The following benefits can be derived from it:

  1. You can expand your DC on demand, with minimal effort and expense
  2. You can consolidate DCs that are located in different geographic zones
  3. You can create test environments and DR sites

VMware has a huge virtualization market share. The value IBM Cloud brings to customers includes:

  1. No new skill set required
  2. One single portal to manage
  3. You can bring your own license
  4. Monthly subscription for VMware software

To learn more about IBM Cloud for VMware, see this video.

Here is the complete stack of the IBM Cloud for VMware solution:

Not just this: depending on the use case, you can also use the platform extensions and solutions below:

Hybrid Cloud Services (HCX): you can establish a software-defined WAN between the on-premises DC and an IBM Cloud DC. Using this service you can move VMware workloads to and from the cloud with zero downtime and no modifications.

Zerto Disaster Recovery: you can use Zerto Virtual Replication technology to achieve near-zero RTO and RPO.

Secure Virtualization: use HyTrust with Intel TXT technology for secure virtualization, to simplify regulatory compliance and help guarantee data sovereignty.

NSX Edge Services Gateway: provides connectivity between the virtualized software-defined network, its workloads, and external networks.

vRealize Automation: adds automation and orchestration tooling for automated provisioning of applications, ensuring users have the tooling and services they need.

If you have any queries or need help getting started with VMware on IBM Cloud, drop me a message or reach out to me over here.

 

Spectre and Meltdown

Google's Project Zero security team recently revealed security flaws affecting virtually all modern processors. The vulnerabilities are based on speculative execution, which CPUs use to optimize system performance. As a consequence, nearly all cloud service providers, including AWS, Azure, IBM, and Google, had to update their systems to protect against the possible exploits. These vulnerabilities are Spectre and Meltdown. You can find detailed information on both of them here.

So what are Meltdown and Spectre?

Understanding CPU architecture will help set the background. There are two common CPU architectures, 32-bit and 64-bit, which essentially determine how much data the processor can handle and how large a memory address space it can use.

32-bit architecture: the CPU can address only 4 GB of memory. There are two kinds of data the CPU handles: kernel data and user data. In the common 32-bit layout, kernel space takes 1 GB of the address range (typically at the top of it), while user space occupies the remaining 3 GB. In theory, visibility from user memory into kernel memory is restricted.

64-bit architecture: the range of addresses the CPU can use increased, and the 4 GB restriction was removed. There is one more important change: the kernel now uses KASLR, Kernel Address Space Layout Randomisation. KASLR places kernel data at a randomized address, which makes it difficult to identify where the kernel data is stored. This was done to mitigate the vulnerabilities that arise when an attacker can find out where kernel data lives.

The processor has a Translation Lookaside Buffer (TLB) that caches the address translations used when switching between user space and kernel space. Kernel-space TLB entries are not flushed on every switch, because repopulating the TLB is a time-consuming process. So long as information about kernel space does not find its way into user space, an attacker cannot infer the kernel's location. Unfortunately, such leaks do occur, either from software errors or from the hardware itself.

Another important concept to understand is speculative execution, which means some tasks are performed even before it is determined whether they need to be done at all. If the speculation turns out to be correct and the tasks were needed, the work is kept; otherwise the results are ignored or discarded. It's like carrying an umbrella or raincoat on the speculation that it may rain today.

It’s discovered that user space instructions can be used to retrieve kernel memory due to processors’ use of “speculative execution” that will attempt to guess what code will be executed in the next few cycles and “pre-execute” it in an attempt to increase performance. At times, this may mean that multiple code segments are pre-executed at the same time until the correct one is needed. The other segments are then discarded. Attackers may take the advantage of this speculative execution, insert their malicious code and retrieve sensitive information.

Meltdown:
Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system.

Spectre:
Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre.

The following are affected by Spectre and Meltdown:

  1. Servers
  2. Desktops
  3. Mobile
  4. IoT Devices
  5. Browsers

An attacker can steal, among other things:

  1. Passwords from password managers and browsers
  2. Personal photos
  3. Emails
  4. Instant messages
  5. Business-critical documents

Which systems are affected by Meltdown?

Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether ARM and AMD processors are also affected by Meltdown.

Which systems are affected by Spectre?

Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors.

Which cloud providers are affected by Meltdown?

Cloud providers which use Intel CPUs and Xen PV as virtualization without having patches applied. Furthermore, cloud providers without real hardware virtualization, relying on containers that share one kernel, such as Docker, LXC, or OpenVZ are affected.

What is the difference between Meltdown and Spectre?

Meltdown breaks the mechanism that keeps applications from accessing arbitrary system memory; consequently, applications can access system memory. Spectre tricks other applications into accessing arbitrary locations in their own memory. Both attacks use side channels to obtain information from the accessed memory location. For a more technical discussion, refer to the Meltdown and Spectre papers.


Choosing the right MS SQL editions and implementation

I have frequently come across situations where clients are confused about which MS SQL edition to select and whether they should go on-premises or to the cloud. And given that there are other IaaS players too, selecting where to host a SQL Server in the cloud becomes even more difficult.

 

Editions of MS SQL Server 2016

Microsoft recently launched the MS SQL Server 2016 editions. They are available in four flavors: Express, Standard, Enterprise, and Developer.

Here is a brief summary of what these editions are meant for:

[Screenshot: summary of what each SQL Server 2016 edition is meant for]

 

Although there is a lot of information available on Microsoft's website, I have filtered out some of the most important features of the different 2016 editions, which should be considered before finalizing which edition is right for you.

[Screenshot: feature comparison of SQL Server 2016 editions]

* Basic HA – restricted to two-node, single-database failover with a non-readable secondary database. Basic HA ensures data availability: your data is not lost, thanks to a fast, two-node, non-readable synchronous replica.

** Advanced HA – Always On availability groups: multi-database failover with readable secondaries.

On-premises deployment

There are four costs associated with an on-premises setup:

  1. Infrastructure cost
  2. Hardware
  3. License cost
  4. Personnel

I would like to focus only on the licensing cost. There are two types of licenses available for the SQL Server Standard edition:

  1. Server + CAL license: the MS SQL server license costs $931, plus $209 per client access license (CAL), which is either user-based or device-based.
  2. Core-based license: $3,717 per core, sold in 2-core packs. With this license type there is no restriction on the number of users or devices that can access the server. A quick break-even sketch between the two models follows.
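Using the prices quoted above, here is a rough break-even sketch between the two Standard-edition licensing models. The only assumption I am adding is a four-core minimum for core licensing; verify that, and whether your quote is per core or per two-core pack, against Microsoft's current licensing terms.

```python
# Rough break-even between Server+CAL and core-based licensing for SQL Server
# Standard, using the prices quoted above. The 4-core minimum is my assumption;
# verify it, and the per-core vs per-2-core-pack pricing, with Microsoft.

SERVER_LICENSE = 931     # USD, server license in the Server+CAL model
CAL_PRICE = 209          # USD per user or device CAL
CORE_PRICE = 3717        # USD per core (as quoted above), sold in 2-core packs

def server_cal_cost(num_cals: int) -> int:
    return SERVER_LICENSE + CAL_PRICE * num_cals

def core_based_cost(num_cores: int) -> int:
    licensed = max(num_cores, 4)          # assumed minimum of 4 core licenses
    licensed += licensed % 2              # round up to whole 2-core packs
    return CORE_PRICE * licensed

if __name__ == "__main__":
    cores = 4
    for users in (10, 25, 50, 75, 100):
        cal, core = server_cal_cost(users), core_based_cost(cores)
        cheaper = "Server+CAL" if cal < core else "core-based"
        print(f"{users:>3} users, {cores} cores: CAL ${cal:,} vs core ${core:,} -> {cheaper}")
```

With these numbers, Server+CAL stays cheaper until somewhere around 65-70 CALs for a four-core server.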

On-cloud deployment

The benefit of spinning up SQL Server in the cloud is that it is fast and easy, and you also have the option of getting fully managed SQL instances.

Comparison of costs on Azure, AWS, and SoftLayer

[Screenshot: cost comparison chart for Azure, AWS, and SoftLayer]


The tables below show the configurations that were considered.


For AWS and IBM SoftLayer:
          Cores   RAM (GB)   HDD (GB)
CONF 1      2        8         100
CONF 2      4       16         200
CONF 3      8       32         400
CONF 4     16       64         800

For Azure:
          Cores   RAM (GB)   HDD (GB)
CONF 1      2        7         100
CONF 2      4       14         200
CONF 3      8       28         400
CONF 4     16       56         800

The graph clearly shows that SoftLayer is the cheapest compared to both AWS and Azure. The following are added advantages with IBM SoftLayer:

  1. An included outbound data transfer allowance of 250 GB with virtual instances and 500 GB with bare metal instances
  2. There are no inter-DC data transfer charges
  3. The instances are not bundled, so you have the flexibility of increasing or decreasing cores, RAM, and HDD independently, which is not the case with Azure

Comparing the cost of on-cloud vs. on-premises is a little tricky. You need to take the following into account:

  1. When your server hardware refresh is due: this matters because, assuming you have recently invested in hardware and the next refresh is due only after three years, you will incur only the MS SQL license cost. In that case, staying on-premises will usually make more sense.
  2. Number of users in the organization: assuming you have only 20-25 users and there is a lot of uncertainty about whether that number will grow or shrink, going to the cloud will usually make more sense. You just purchase the server license and then take CALs from your cloud service provider at a minimal monthly cost. (A toy comparison sketch follows this list.)
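To show how these two factors interact, here is a toy three-year comparison. The cloud monthly price and the hardware cost are hypothetical placeholders I have invented for illustration; substitute real quotes before drawing any conclusion.

```python
# Toy 3-year TCO comparison reflecting the two considerations above. The cloud
# monthly price and the hardware cost are invented placeholders, not real quotes.

YEARS = 3
SERVER_LICENSE, CAL_PRICE = 931, 209      # SQL Server Standard prices quoted earlier
CLOUD_MONTHLY = 750                       # USD/month, hypothetical cloud SQL instance
HARDWARE_COST = 10_000                    # USD, hypothetical server refresh

def on_prem_3yr(users: int, refresh_due: bool) -> int:
    cost = SERVER_LICENSE + CAL_PRICE * users     # license cost always applies
    if refresh_due:
        cost += HARDWARE_COST                     # hardware only if a refresh is due
    return cost

def cloud_3yr() -> int:
    return CLOUD_MONTHLY * 12 * YEARS             # licensing assumed bundled in the monthly price

for users, refresh in ((25, False), (25, True), (100, True)):
    print(f"{users} users, refresh due={refresh}: "
          f"on-prem ${on_prem_3yr(users, refresh):,} vs cloud ${cloud_3yr():,}")
```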

In case you need to know more about the implementation and pricing on SoftLayer, or want to do a TCO analysis for your implementation, you can reach out to me on this link or drop a comment here.

 

Shopping cart for Hadoop as a Service

Hadoop is an open-source framework that different vendors take, customize, add their own products on top of, and bring to market as new products with different features and functionality.
I don't know how far this analogy holds, but it's like the Android OS: different vendors take the same Android core, customize it, build their own functionality on top of it, and create an altogether different product.
Different Hadoop distributions typically come with different sets of tools, support, optimizations, and additional features. The challenge, then, is deciding which Hadoop service suits our requirements and serves the organization's purpose.
You can see a list of Hadoop distributions here. Forrester, in a recent report, has done a market analysis and rated the different Hadoop-on-cloud vendors.
[Screenshot: Forrester ratings of Hadoop-on-cloud vendors]
Here is a list of the top Hadoop distributions, the value they add, and my thoughts on what would work for which use case:

  1. Cloudera Distribution of Apache Hadoop (CDH): Cloudera was the first commercial Hadoop startup. CDH offers the core open-source distribution along with a number of frameworks, including Cloudera Search, Impala, Cloudera Navigator, and Cloudera Manager.
  2. Pivotal HD: includes a number of Pivotal software products such as HAWQ (SQL engine), GemFire, XD (analytics), Big Data Extensions, and USS storage abstraction. Pivotal supports building one physical platform for multiple virtual clusters, as well as PaaS using Hadoop and RabbitMQ.
  3. IBM InfoSphere BigInsights: includes visualization and exploration, advanced analytics, security, and administration. No other vendor gives you the flexibility of working on a bare metal machine, but that comes at the price of scalability: a bare metal machine can't be scaled up or down on the fly. IBM's other products (BigQuality, BigIntegrate, and IBM InfoSphere Big Match) can be seamlessly integrated for mature enterprise operations.
  4. Amazon Elastic MapReduce (EMR): comes with EMRFS, which lets EMR connect to S3 and use it as a storage layer. The fact that S3 is the market leader in object storage, and that many enterprises already use S3 for their big data storage, makes it an obvious choice. But AWS EMR works with AWS data stores only, and I really doubt it can be integrated with other storage options (a minimal launch sketch appears at the end of this post).
  5. Azure HDInsight: uses the Hortonworks Data Platform (HDP) distribution, tailored for the Azure cloud. Enterprise architects can use C#, Java, and .NET to create, configure, monitor, and submit Hadoop jobs.
  6. Google Cloud Dataproc: has built-in integration with Google Cloud services such as BigQuery and Bigtable. Unlike other vendors, Google bills you by the minute.

Looking at the features and functionality, it's easy to get confused by the plethora of options available right now, and each vendor is trying hard to get a bigger piece of this pie.
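As a small, hedged illustration of the EMR-plus-EMRFS pattern from point 4 above, here is a minimal boto3 sketch that launches a transient cluster logging to S3. The bucket name, instance types, and release label are placeholders, and the default EMR roles are assumed to already exist in the account.

```python
# Minimal sketch of launching an EMR cluster that reads/writes S3 via EMRFS.
# Bucket name, instance types, release label, and roles are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="hadoop-as-a-service-demo",
    ReleaseLabel="emr-5.12.0",                    # pick a current release label
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    LogUri="s3://my-logs-bucket/emr/",            # placeholder bucket
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,     # terminate when the steps finish
    },
    JobFlowRole="EMR_EC2_DefaultRole",            # assumed to exist in the account
    ServiceRole="EMR_DefaultRole",
)
print("Cluster ID:", response["JobFlowId"])
```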

 

4 Disaster Recovery Strategies you must know

There are typically four scenarios possible when designing a DR solution:

1. Backup and restore, or cold DR: data is backed up to a data center in another region and restored when required. The following storage options are typically available to perform the backup and restore:

  • Object Storage
  • Block Storage
  • File Storage or a NAS

Another thing that really matters when you take this approach is how you transfer the data from the on-premises data center to the cloud provider. The options are:

  • Using the internet
  • Shipping the media directly to the cloud vendor
  • Using an application that can transfer data at higher speed, such as IBM Aspera
  • A direct line between your DC and the cloud provider's DC

2. Pilot light for quick recovery, or warm DR: a minimal version of the environment is always running in the cloud. The idea is that your data is kept ready by replicating it to the cloud, and in case of a disaster the network is configured so that traffic simply routes to the active site when the other one goes down.
The network can be configured in two ways:

  1. Using IP addresses
  2. Using load balancing

The prerequisite for this type of setup is a two-tiered architecture, i.e. the app server and DB server are separate servers. You replicate only the DB server and keep the installation scripts (along with images of the production server) ready for the app server, while the core components are always mirrored. A minimal failover sketch follows.
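To make the routing idea concrete, here is a minimal sketch of pilot-light failover logic: poll the primary site's health endpoint and, after a few consecutive failures, repoint traffic at the standby. The URLs are placeholders, and promote_standby() is a hypothetical stand-in for whichever DNS or load-balancer API you actually use.

```python
# Minimal sketch of pilot-light failover: poll the primary site and, if it stops
# responding, point traffic at the warm standby in the cloud. The DNS/load
# balancer call (promote_standby) is a hypothetical placeholder.
import time
import requests

PRIMARY_HEALTH_URL = "https://app.onprem.example.com/health"   # placeholder
STANDBY_SITE = "standby.cloud.example.com"                      # placeholder

def primary_is_healthy(timeout: float = 3.0) -> bool:
    try:
        return requests.get(PRIMARY_HEALTH_URL, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

def promote_standby(site: str) -> None:
    # Placeholder: call your DNS provider or load balancer here to repoint the
    # public record at the standby site, then run the app-server install scripts.
    print(f"Failing over: routing traffic to {site}")

if __name__ == "__main__":
    failures = 0
    while True:
        failures = 0 if primary_is_healthy() else failures + 1
        if failures >= 3:                      # require consecutive failures
            promote_standby(STANDBY_SITE)
            break
        time.sleep(30)
```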

3. Warm standby, or hot DR: a scaled-down version of the environment is always running in the cloud. The app server in the cloud is also connected to the on-premises DB server and vice versa, and both DB servers are always running. In this setup you end up paying a little more for DR, because both the app servers and the DB servers are running.

4. Multi-site solution, or active/active:
Both the cloud and on-premises environments are always active. App servers and DB servers are active and share the workload, and the data in both DB servers is mirrored.

The table below shows the gist of the above four DR strategies:

In case you need any assistance in setting up a DR solution, drop me a direct message or just ping me here.

How to create a video application using IBM SoftLayer

Typically, a video service needs the following capabilities, and IBM provides an application for each:

  1. Something that can upload files faster – Aspera
  2. Something that empowers users to access videos from any device – Clearleap
  3. Something that provides live streaming capabilities – UStream
  4. Something that enables on-demand ingest and distribution – Clearleap

These IBM applications serve the above purposes:

[Image: IBM video services portfolio]

It's also very important that the core infrastructure components are planned wisely. These typically include:

  1. Compute: ideally a VM, which ensures faster scalability
  2. Storage: ideally object storage (a small upload sketch follows this list)
  3. Network bandwidth
  4. CDNs: a strong content delivery network with a greater number of PoPs
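As promised above, here is a small, hedged sketch of pushing a video master into object storage before CDN distribution, assuming an S3-compatible endpoint; the endpoint URL, credentials, and bucket name are placeholders.

```python
# Sketch: upload a video master to S3-compatible object storage before CDN
# distribution. Endpoint URL, credentials, and bucket name are placeholders;
# this assumes your object storage exposes an S3-compatible API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-object-storage.cloud",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                           # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

BUCKET = "video-masters"                                      # placeholder bucket

def upload_master(local_path: str, object_key: str) -> None:
    """Store the source video; the CDN or streaming service pulls from here."""
    s3.upload_file(local_path, BUCKET, object_key)
    print(f"Uploaded {local_path} as {object_key}")

upload_master("episode-01.mp4", "originals/episode-01.mp4")
```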

If you need more information about any of these services, just click here to reach out to me.

 

Future of Healthcare

With the emergence of cloud as the backbone, big data analytics as the heart, and cognitive computing as the brain, we can safely say that the healthcare industry is transforming like never before, and at an unprecedented pace. Think of the time when it was extremely difficult for patients to find the right doctor, diagnosis took ages, and doctors were burdened with the uphill task of identifying the right treatment.

Here are five ways in which healthcare is going to change in the future:

1. Integrated healthcare ecosystem: it's important to understand that, like any other industry, healthcare too has a complete ecosystem of value providers. For example:

  • Pharmacists for medicines
  • Ambulance services
  • Path labs
  • Hygiene and housekeeping services
  • Insurance providers
  • Clinical instrument manufacturers
  • Regulators

Technology has, to a great extent, enabled this entire ecosystem to come together into an integrated healthcare ecosystem. The result is improved patient services, cost optimization, and new emerging business models.

2. Ubiquitous and personalized healthcare: digitization of patient health records enables doctors to preserve a patient's medical history and provide health services with far more personalization and precision. This trend will continue, and over time, as this digitized data is analyzed using sophisticated big data analytics technologies and made available almost everywhere, anytime, it will even become possible to predict ailments and prescribe medicines accordingly.

3. Convergence of healthcare and mobility with IoT: healthcare providers will place mobile diagnostic devices in patients' homes, link them to cloud platforms, and monitor them continually. The explosion of wearable devices and the amount of data they produce will empower doctors to make patients aware of their medical conditions in real time.

Imagine getting an alert from your hospital or doctor the next time your lifestyle needs to change, just like the alert you get for overusing your internet plan.

4. Insight-driven and targeted healthcare: insights based on analytics integrated with mobile devices or smart sensors will be used to improve clinical outcomes. The next step for hospitals, then, is to develop the capability to tap into this enormous and complex data.

5. Healthcare access for rural and poor populations: with the advent of video streaming and expanded high-speed fibre coverage, it will be possible to receive treatment remotely and virtually. Doctors can see patients on their screens over high-speed internet in places that quality healthcare can otherwise barely reach.

Like any other profession and industry, healthcare too is looking at a big leap forward. The only deciding factor, then, is how willing and prepared you are to embrace this change.

How to cancel a SoftLayer VM

The SoftLayer SLA says that you have to cancel a device at least 24 hours prior to the next billing cycle to avoid being billed for the next month. Follow the steps below to cancel a device.

Step 1: Log in to https://control.softlayer.com with your username and password.

Step 2: Go to Devices -> Device List

[Screenshot: Devices > Device List menu]

 

Step 3: You will see all the devices you have purchased. Go to the device you want to cancel and, on the right-hand side of the device name, click Actions.

Step 4: Click on Cancel Device, as shown below.

[Screenshot: Cancel Device action]

A cancellation ticket will be raised and your device will be cancelled within 24 hours. Please make sure you back up your data first: once the device is deleted, you will not be able to recover your data in any way.
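If you prefer to script the cancellation rather than use the portal, the SoftLayer Python SDK can raise the same cancellation ticket. This is a minimal sketch assuming the SoftLayer package is installed and valid API credentials are supplied; verify the manager method names against the current SDK documentation before relying on it.

```python
# Minimal sketch of cancelling a virtual server through the SoftLayer Python SDK
# instead of the portal. Assumes the `SoftLayer` package is installed and your
# username/API key are valid; verify method names against the current SDK docs.
import SoftLayer

client = SoftLayer.create_client_from_env(
    username="SL_USERNAME",      # placeholder credentials
    api_key="SL_API_KEY",
)
vs_manager = SoftLayer.VSManager(client)

# Find the instance by hostname, then raise the cancellation ticket for it.
for instance in vs_manager.list_instances(hostname="my-vm-to-cancel"):
    print("Cancelling", instance["id"], instance.get("fullyQualifiedDomainName"))
    vs_manager.cancel_instance(instance["id"])   # back up your data first!
```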