What is serverless computing?
Google Cloud Functions
IBM Cloud Functions
Apache OpenWhisk
What are some of the top use cases of serverless?
- I don’t have a compatible vSphere version
- My network architecture is different from the Cloud Service Provider’s network
- My applications have many complex dependencies, and they interact with various other entities in my DCs such as storage/DB solutions, DMZs, security solutions and platform applications
- They have created their own governance and controls, and they are concerned whether they will be able to do the same in a cloud environment.
Is there a way they can get past these complexities and find an easy way out? These are exactly the concerns that HCX can take care of. Below are some of its most important use cases.
- Extending to the Cloud: when you want to extend your application, perhaps because you need additional storage or compute. This can be a short-term requirement, or even a long-term one if a hardware refresh is due.
- Disaster Recovery: typically, customers want the shortest possible RTO and RPO for their on-premises DR site. This is easily achieved if you have a VMware virtualised infrastructure.
- Modernize on-premises DC: if you are planning to move your entire DC to the cloud, HCX can help you migrate hundreds of apps with just a single reboot and no changes to the OS or applications.
- VMware Cloud Foundation (VCF): vSphere ESXi, Platform Services Controller, vCenter Server Appliance, SDDC Manager, NSX and vSAN. VCF consists of:
a. 4-node base cluster
b. 2 bare metal server size options
c. upgradable memory option
d. up to 27 additional nodes
- VMware vCenter Server: vSphere ESXi, Platform Services Controller (PSC), vCenter Server Appliance, NSX and optionally vSAN. It consists of:
a. 2-node base cluster
b. 3 bare metal server size options
c. upgradable memory options
d. upgradable NSX options
To learn more about the solution architecture, click here
This blog post is the second in a 10-part series on VMware on IBM Cloud.
Hybrid cloud is a combination of on-premises and public cloud services intended to work in unison to deliver value. The following values can be derived:
- You can expand your DC on demand, with minimal effort and expense
- You can consolidate your DCs which are located in different geo zones
- You can create test environments and DR sites
VMware has a huge virtualization market share. The value IBM Cloud brings to customers includes:
- No new skill set required
- One single portal to manage
- You can bring your own license
- Monthly subscription for VMware software
To learn more about IBM Cloud for VMware, see this video.
Here is the complete stack of the IBM Cloud for VMware solution:
Beyond this, depending on the use case, you can use the platform extensions and solutions below:
Hybrid Cloud Services or HCX: you can establish a software-defined WAN between your on-premises DC and an IBM Cloud DC. Using this service, you can move VMware workloads to and from the cloud with zero downtime and no modifications.
Zerto Disaster Recovery: you can use Zerto Virtual Replication technology to provide near-zero RTO and RPO.
Secure Virtualization: use Intel TXT with HyTrust technology for secure virtualization, simplified regulatory compliance and guaranteed data sovereignty.
NSX Edge Services Gateway: provides connectivity between the virtualized software-defined network, its workloads and external networks.
vRealize Automation: adds automation and orchestration tooling for automated provisioning of applications, ensuring users have the tooling and services they need.
If you have any queries or need help getting started with VMware on IBM Cloud, then drop me a message or reach out to me over here.
The Google security blog recently revealed a security flaw in nearly all modern processors. The vulnerability is based on speculative execution, which CPUs use to optimize performance. As a consequence, nearly all cloud service providers, including AWS, Azure, IBM and Google, had to update their systems to protect against possible exploits. These vulnerabilities are Spectre and Meltdown. You can find detailed information on both of them here.
So what are Meltdown and Spectre ?
Understanding CPU architecture will help set the background. There are two CPU architectures, 32-bit and 64-bit, which refers to the width of the data and memory addresses the processor can handle.
32-bit architecture: the CPU can address at most 4 GB of memory (2^32 bytes). The CPU handles two types of data: 1. kernel data and 2. user data. In a 32-bit architecture, kernel data takes 1 GB and is mapped at a fixed, predictable location in the address space, while user data occupies the remaining 3 GB. Theoretically, visibility from user memory into kernel memory is restricted.
64-bit architecture: the range of addresses the CPU can use increased, and the 4 GB restriction was removed. There is one more important change: the kernel now has KASLR (Kernel Address Space Layout Randomisation). KASLR places the kernel at a randomised address, making it difficult to identify where kernel data is stored. This was done to mitigate attacks that become possible when an attacker can locate kernel data.
The processor has a Translation Lookaside Buffer (TLB) which is used when switching between user space and kernel space. Kernel-space entries of the TLB are not flushed, because repopulating the TLB is a time-consuming process. As long as memory leaks from kernel space do not find their way into user space, an attacker cannot infer the kernel’s location. Unfortunately, such leaks do occur, either from software errors or the hardware itself.
Another important concept to understand is speculative execution: some work is performed even before it is determined whether it needs to be done. If the speculation turns out to be correct, the work is kept; otherwise the results are discarded. It’s like carrying an umbrella or raincoat on the speculation that it may rain today.
It has been discovered that user-space instructions can be used to retrieve kernel memory, because processors use speculative execution to guess what code will be executed in the next few cycles and “pre-execute” it in an attempt to increase performance. At times, this means multiple code segments are pre-executed at the same time until the correct one is needed; the other segments are then discarded. Attackers can take advantage of this speculative execution, using carefully crafted code to retrieve sensitive information.
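The bounds-check pattern that this attack abuses is easiest to see in code. The sketch below is purely illustrative: all names are hypothetical, and real Spectre attacks target compiled C code and CPU caches; Python itself is not speculatively executed this way.

```python
# Illustrative sketch of the bounds-check pattern Spectre v1 abuses.
# All names here are hypothetical; real attacks target compiled C code.

array1 = [11, 22, 33, 44]       # data the program is allowed to read
secret = "kernel memory"        # stands in for memory the program must not read

def victim_read(x):
    if x < len(array1):         # the bounds check the CPU may speculate past
        return array1[x]        # on real hardware this load can be
                                # pre-executed even for out-of-bounds x,
                                # leaving a measurable trace in the cache
    return None                 # architecturally, bad indexes are rejected

print(victim_read(2))           # -> 33 (normal, in-bounds use)
print(victim_read(100))         # -> None: the program sees nothing, but on
                                # real hardware a cache-timing side channel
                                # could recover what was fetched speculatively
```

The key point: the program never *architecturally* returns out-of-bounds data, yet the speculative load still happens, and its side effects on the cache can be measured.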
Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system.
Spectre: Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre.
Devices affected by Spectre and Meltdown include desktops, laptops, smartphones, cloud servers and IoT devices. Using these vulnerabilities, an attacker can steal:
- Passwords from password managers and browsers
- Personal photos
- Instant messages
- Business-critical documents
Which systems are affected by Meltdown?
Desktop, laptop and cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). The researchers successfully tested Meltdown on Intel processor generations released as early as 2011, and have so far verified it only on Intel processors. At the moment, it is unclear whether ARM and AMD processors are also affected by Meltdown.
Which systems are affected by Spectre?
Almost every system is affected by Spectre: desktops, laptops and cloud servers, as well as smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, the researchers have verified Spectre on Intel, AMD and ARM processors.
Which cloud providers are affected by Meltdown?
Cloud providers which use Intel CPUs and Xen PV virtualization without patches applied are affected. Furthermore, cloud providers without real hardware virtualization, relying on containers that share one kernel, such as Docker, LXC or OpenVZ, are affected.
What is the difference between Meltdown and Spectre?
Meltdown breaks the mechanism that keeps applications from accessing arbitrary system memory; consequently, applications can access system memory. Spectre tricks other applications into accessing arbitrary locations in their own memory. Both attacks use side channels to obtain the information from the accessed memory location. For a more technical discussion, refer to the papers.
I have frequently come across clients who are confused about which MS SQL edition to select and whether to deploy on-premise or in the cloud. And given that there are several other IaaS players too, choosing where to host a SQL Server in the cloud becomes even more difficult.
Editions of MS SQL 2016
Microsoft recently launched the MS SQL 2016 editions. They are available in 4 flavors: Express, Standard, Enterprise and Developer.
Here is a brief summary of what these editions are meant for:
Although there is a lot of information available on Microsoft’s website, I have filtered out some of the most important features of different 2016 editions which should be considered before finalizing which edition is right for you.
* Basic HA – restricted to 2-node, single-database failover with a non-readable secondary DB. Basic HA ensures data availability: your data is not lost, thanks to a fast, two-node, non-readable synchronous replica.
** Advanced HA – Always On availability groups: multi-database failover with readable secondaries.
There are 4 costs associated with an on-premise set up.
- Infra cost
- Licenses cost
I would like to focus only on licensing cost. There are 2 types of licenses available for the SQL Standard edition:
- Server + CAL license: the MS SQL Server license costs $931, plus $209 per Client Access License (CAL), which is either user-based or device-based.
- Core-based license: $3,717 per core, sold in 2-core packs. There is no restriction on the number of users or devices that can access the server under this type of license.
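A quick break-even sketch makes the choice concrete. This uses the prices quoted above (reading the core price as per-core, sold in 2-core packs) and a hypothetical 4-core server; actual Microsoft pricing varies by agreement and reseller:

```python
# Break-even between Server+CAL and core-based licensing for SQL Standard,
# using the list prices quoted above. The 4-core server size is a
# hypothetical example.
SERVER = 931        # Server license, USD
CAL = 209           # per Client Access License (user- or device-based)
PER_CORE = 3717     # quoted per core, sold in 2-core packs

def server_cal_cost(users):
    return SERVER + CAL * users

def core_cost(cores):
    packs = -(-cores // 2)          # round up to whole 2-core packs
    return PER_CORE * 2 * packs

print(core_cost(4))                 # -> 14868 for a 4-core server

users = 0
while server_cal_cost(users) <= core_cost(4):
    users += 1
print(users)                        # -> 67: beyond ~66 CALs, core-based
                                    # licensing becomes the cheaper option
```

The same arithmetic scales to larger servers: every extra 2-core pack raises the break-even user count, so small, well-known user populations favour Server + CAL.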
On Cloud deployment:
The benefit of spinning up a SQL Server in the cloud is that it’s fast and easy, and you also have the option of fully managed SQL instances.
Comparison of costs on Azure, AWS and Softlayer
The tables below show the configurations which were considered.
(Table: configurations for AWS and IBM SoftLayer)
The graph clearly shows that SoftLayer is the cheapest compared to both AWS and Azure. The following are added advantages of IBM SoftLayer:
- A data download allowance of 250 GB with virtual instances and 500 GB with bare metal instances
- No inter-DC charges
- The instances are not bundled, so you have the flexibility to increase or decrease cores, RAM and HDD independently, which is not the case with Azure
Comparing the cost of on-cloud vs on-premise is a little tricky. You need to take the following into account:
- When is your server hardware refresh due: this is important because if you have recently invested in hardware and the next refresh is due only after 3 years, you will incur only the MS SQL license cost. In this case, most of the time, staying on-premise will make more sense.
- Number of users in the organization: if you have only 20-25 users and there is a lot of uncertainty about whether that number will grow or shrink, then most of the time going on-cloud will make sense. You just purchase the server license and take CALs from your cloud service provider at a minimal monthly cost.
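The rent-vs-buy decision for CALs is again simple arithmetic. In this sketch, the $209 purchase price comes from the licensing figures above, while the monthly rental rate is a hypothetical placeholder; substitute your provider's actual price:

```python
# Buying CALs upfront vs renting them monthly from a cloud provider.
# The $209 purchase price is quoted in this post; CAL_MONTHLY is a
# HYPOTHETICAL placeholder rate - check your provider's price list.
CAL_PURCHASE = 209
CAL_MONTHLY = 10        # hypothetical per-user monthly charge, USD

def months_to_break_even():
    """First month in which renting has cost as much as buying."""
    months = 0
    while CAL_MONTHLY * months < CAL_PURCHASE:
        months += 1
    return months

print(months_to_break_even())   # -> 21 at the hypothetical $10/month rate
```

If your planning horizon is shorter than the break-even point, or the user count is volatile, monthly CALs win; beyond it, buying is cheaper.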
If you want to know more about implementation and pricing on SoftLayer, or want to do a TCO for your implementation, you can reach out to me on this link or drop a comment here.
- Cloudera Distribution of Apache Hadoop (CDH): Cloudera was the first commercial Hadoop startup. It offers the core open-source distribution along with a number of frameworks, including Cloudera Search, Impala, Cloudera Navigator and Cloudera Manager.
- Pivotal HD: includes a number of Pivotal software products such as HAWQ (SQL engine), GemFire, XD (analytics), Big Data Extensions and USS storage abstraction. Pivotal supports building one physical platform to host multiple virtual clusters, as well as PaaS using Hadoop and RabbitMQ.
- IBM InfoSphere BigInsights: includes visualization and exploration, advanced analytics, security and administration. No other vendor gives you the flexibility of working on a bare metal machine, but that comes at the price of scalability: a bare metal machine can’t be scaled up or down on the fly. IBM’s other products, BigQuality, BigIntegrate and IBM InfoSphere Big Match, can be seamlessly integrated for mature enterprise operations.
- Amazon Elastic MapReduce: comes with EMRFS, which allows EMR to connect to S3 and use it as a storage layer. The fact that S3 is the market leader in object storage, and that many enterprises already use S3 for their big data storage, makes it an obvious choice. However, AWS EMR works with AWS data stores only, and I doubt it can be integrated with other storage options.
- Azure HDInsight: uses the HDP (Hortonworks Data Platform) distribution, which is designed for the Azure cloud. Enterprise architects can use C#, Java and .NET to create, configure, monitor and submit Hadoop jobs.
- Google Cloud Dataproc: has built-in integration with Google Cloud services like BigQuery and Bigtable. Unlike other vendors, Google bills you by the minute.
Typically, there are 4 possible scenarios when designing a DR solution:
1. Back up and restore, or Cold DR: data is backed up to a data center in another region and restored when required. The following storage options are typically available for backup and restore:
- Object Storage
- Block Storage
- File Storage or a NAS
A couple of other things that really matter with this approach concern how you transfer the data from the on-premises data center to the cloud provider. The options are:
- Using the internet
- Transferring the media directly to the cloud vendor
- Using an application that can transfer data at higher speeds, such as IBM Aspera
- A direct line between your DC and the cloud provider’s DC
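Whether the internet is viable for seeding the DR copy comes down to simple arithmetic: data size divided by usable bandwidth. A quick sketch (the data sizes, link speeds and efficiency factor are illustrative assumptions, not recommendations):

```python
# Rough transfer-time estimate for seeding a DR copy over a network link.
def transfer_days(data_tb, link_mbps, efficiency=0.7):
    """efficiency is an assumed factor for protocol overhead/contention."""
    bits = data_tb * 8 * 10**12              # TB -> bits (decimal units)
    seconds = bits / (link_mbps * 10**6 * efficiency)
    return seconds / 86400

# 50 TB over a 100 Mbps internet link:
print(round(transfer_days(50, 100), 1))      # -> 66.1 days
# The same 50 TB over a 1 Gbps direct line:
print(round(transfer_days(50, 1000), 1))     # -> 6.6 days
```

When the estimate runs into months, shipping media or using a transfer accelerator such as Aspera becomes the practical choice.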
2. Pilot Light for quick recovery, or Warm DR: a minimal version of the environment is always running in the cloud. The idea is that your data is kept ready in the cloud through replication, and in case of a disaster, your network is configured to route to the active site when the other one goes down.
Configuring the network is possible in two ways:
- Using IP addresses
- Using Load balancing
The prerequisite for this type of setup is a two-tiered architecture, i.e. the app server and DB server are two separate servers. You replicate only the DB server, and keep the installation scripts for the app server ready (along with images of the production server) while the core components are always mirroring.
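In a pilot-light design the routing decision itself can be very simple: probe the primary site, and return the standby address when the probe fails. A minimal sketch (the hostnames and ports are hypothetical placeholders; real setups would usually do this with DNS failover or a load balancer health check rather than application code):

```python
# Minimal health-check based routing for a pilot-light DR setup.
# Hostnames and ports below are hypothetical placeholders.
import socket

PRIMARY = ("onprem-app.example.com", 443)
STANDBY = ("cloud-dr-app.example.com", 443)

def is_up(host, port, timeout=2.0):
    """TCP connect probe - the simplest possible health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def active_endpoint():
    """Route to the primary site; fail over to the DR site if it is down."""
    return PRIMARY if is_up(*PRIMARY) else STANDBY
```

In production you would also add hysteresis (require several failed probes before failing over) to avoid flapping between sites on a single dropped connection.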
3. Warm Standby or Hot DR: a scaled-down version of the environment is always running in the cloud. The app server in the cloud is also connected to the on-premises DB server and vice versa, and both DB servers are always running. In this setup you end up paying a little more for DR, because both the app servers and the DB servers are always running.
4. Multi site solution or Active/Active:
Both the cloud and on-premises environments are always active. App servers and DB servers are active and share the workload, and the data in both DB servers is mirrored.
The table below shows the gist of the above 4 DR strategies:
In case you need any assistance in setting up the DR solution, drop me a direct message or just ping me here
Typically, IBM provides the following applications for video services:
- Something that can upload files faster – Aspera
- Something that empowers users to access videos from any device – Clearleap
- Something that provides live streaming capabilities – UStream
- Something that enables on-demand ingest and distribution – Clearleap
Beyond these applications, the following infrastructure components serve these purposes:
- Compute – ideally a VM, which ensures faster scalability
- Storage – ideally an object storage
- Network bandwidth
- CDN – a strong content delivery network with a large number of PoPs
If you need more information about any of these services, just click here to reach out to me.
With the emergence of cloud as the backbone, big data analytics as the heart and cognitive computing as the brain, we can conveniently say that the healthcare industry is transforming like never before, and at an unprecedented pace. Think about the time when it was extremely difficult for patients to find the right doctor, diagnosis took ages and doctors were burdened with the uphill task of identifying the right treatment.
Here are 5 ways in which healthcare is going to transform in the future:
1. Integrated healthcare ecosystem: It’s important to understand that like any other industry, there is a complete ecosystem of value providers in healthcare too. For example :
- Pharmacists for medicines
- Ambulance services
- Path labs
- Hygiene and housekeeping services
- Insurance providers
- Clinical instrument manufacturers
Technology, to a great extent, has brought this entire ecosystem together to provide integrated healthcare. The result is improved patient services, cost optimization and new emerging business models.
2. Ubiquitous and personalized healthcare: digitization of patient health records enables doctors to preserve a patient’s medical history and provide health services with far more personalization and precision. This trend will continue, and over time, as this digitized data is analyzed with sophisticated big data analytics technologies and made available ubiquitously, almost anywhere and anytime, ailments can even be predicted and medicines prescribed accordingly.
3. Convergence of healthcare and mobility with IoT: healthcare providers will place mobile diagnostic devices in patients’ homes, link them to cloud platforms and monitor them continually. The explosion of wearable devices and the amount of data they produce will empower doctors to make patients aware of their medical conditions in real time.
Imagine getting an alert from your hospital or doctor, just like the one you get for internet overuse, the next time your lifestyle needs to change.
4. Actionable-insight-driven and targeted healthcare: insights based on analytics integrated with mobile devices or smart sensors will be used to improve clinical outcomes. The next step for hospitals is to develop the capability to tap into this enormous and complex data.
5. Healthcare access for rural and poor people: with the advent of video streaming and expanded high-speed fibre coverage, it will be possible to receive treatment remotely and virtually. Doctors can see patients on their screens over high-speed internet in places where quality healthcare is almost impossible to reach.
Like any other profession and industry, healthcare too is poised for a big leap forward. The only deciding factor is: how willing and prepared are you to embrace this change?