What is serverless computing?
Google Cloud Functions
IBM Cloud Functions
Apache OpenWhisk
What are some of the top use cases of serverless?
Is there a way to get away from these complexities and find an easy way out? These are the concerns that HCX can easily take care of. Below are some of the most important use cases of HCX.
To learn more about the solution architecture, click here.
This blog post is the second in a 10-part series on VMware on IBM Cloud.
Hybrid cloud is a combination of on-premises and public cloud services intended to work in unison to deliver value. The following values can be derived:
VMware has a huge share of the virtualization market. The values IBM Cloud brings to customers are:
To learn more about IBM Cloud for VMware, see this video.
Here is the complete stack of the IBM Cloud for VMware solution:
Beyond this, depending on the use case, you can use the platform extensions and solutions below:
Hybrid Cloud Services (HCX): You can establish a software-defined WAN between an on-premises data center and an IBM Cloud data center. Using this service you can move VMware workloads to and from the cloud with zero downtime and no modifications.
Zerto Disaster Recovery: You can use Zerto Virtual Replication technology to provide near-zero RTO and RPO.
Secure Virtualization: Use HyTrust with Intel TXT technology for secure virtualization, to simplify regulatory compliance and help guarantee data sovereignty.
NSX Edge Services Gateway: Provides connectivity between the virtualized software-defined network, its workloads, and external networks.
vRealize Automation: Add automation and orchestration tooling for automated provisioning of applications, ensuring users have the tooling and services they need.
If you have any queries or need help getting started with VMware on IBM Cloud, drop me a message or reach out to me here.
Google's security blog recently revealed a security flaw affecting nearly all modern processors. The vulnerability is based on speculative execution, which CPUs use to optimize performance. As a consequence, nearly all cloud service providers, including AWS, Azure, IBM, and Google, had to update their systems to protect against the possible exploits. These vulnerabilities are Spectre and Meltdown. You can find detailed information on both of them here.
So what are Meltdown and Spectre?
Understanding CPU architecture will help set the background. There are two common CPU architectures, 32-bit and 64-bit, which essentially determine how much data a processor can handle and how much memory it can address.
32-bit architecture: The CPU can address at most 4 GB of memory. There are two types of data the CPU handles: 1. kernel data and 2. user data. In the classic 32-bit layout, kernel data occupies 1 GB at a fixed, well-known location of the address space (the top 1 GB, in the classic Linux 3G/1G split), while user data occupies the remaining 3 GB. In theory, visibility from user memory into kernel memory is restricted.
64-bit architecture: The range of addresses the CPU can use increased, and the 4 GB restriction was removed. There is one more important change: the kernel now has something called KASLR, Kernel Address Space Layout Randomization. KASLR places the kernel at a random address within a reserved region, which makes it difficult to identify where kernel data is stored. This was done to mitigate vulnerabilities that arise when an attacker can find out where kernel data lives.
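As a rough sketch of the idea behind KASLR, the kernel base can be modeled as one randomly chosen, aligned slot inside a reserved window. The window, size, and alignment below are made up for illustration and are not Linux's real parameters:

```python
import random

# Hypothetical reserved region and alignment -- illustrative values only,
# not the actual Linux KASLR parameters.
WINDOW_START = 0xFFFF_FFFF_8000_0000   # assumed start of the kernel window
WINDOW_SIZE  = 1 << 30                 # assumed 1 GiB window
ALIGN        = 1 << 21                 # assumed 2 MiB alignment

def pick_kernel_base(rng=random):
    # At each "boot", pick one aligned slot in the window at random,
    # so the kernel never sits at a predictable address.
    slots = WINDOW_SIZE // ALIGN       # number of possible positions (512 here)
    return WINDOW_START + rng.randrange(slots) * ALIGN

base = pick_kernel_base()
print(hex(base))  # differs on (almost) every run, like a fresh boot
```

An attacker who previously hard-coded a kernel address now has to guess among all the slots, which is the whole point of the randomization.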
The processor has a Translation Lookaside Buffer (TLB), a cache of virtual-to-physical address translations consulted when switching between user space and kernel space. Kernel-space TLB entries are not flushed on each switch, because repopulating the TLB is time-consuming. So long as memory leaks from kernel space do not find their way into user space, an attacker cannot infer the kernel's location. Unfortunately, such leaks do occur, either from software errors or from the hardware itself.
Another important concept to understand is speculative execution, which means some tasks are performed even before it is determined whether they need to be done at all. If the speculation turns out to be correct, fine; otherwise the results are ignored or discarded. It's like carrying an umbrella or raincoat on the speculation that it may rain today.
It has been discovered that user-space instructions can be used to retrieve kernel memory, due to processors' use of speculative execution: the CPU attempts to guess what code will be executed in the next few cycles and "pre-execute" it to increase performance. At times this means multiple code segments are pre-executed at the same time until the correct one is needed; the other segments are then discarded. Attackers can take advantage of this speculative execution, crafting code whose discarded speculative work still leaks sensitive information through side effects.
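A toy model can illustrate how a speculatively executed, later-discarded load still leaves a recoverable trace. Everything below is illustrative: the `ToyCPU` class, the addresses, and a Python set standing in for the CPU cache are all made up; real attacks measure cache access timing on actual hardware.

```python
# Toy model of a Spectre-style leak: speculation leaves a footprint in a
# simulated "cache" even though the architectural result is discarded.

class ToyCPU:
    def __init__(self, memory):
        self.memory = memory          # flat list of bytes standing in for RAM
        self.cache = set()            # addresses that have been touched

    def load(self, addr):
        self.cache.add(addr)          # every load leaves a cache footprint
        return self.memory[addr]

    def bounds_checked_read(self, base, length, index, probe_base):
        # Model of speculation: the loads below run even when the index is
        # out of bounds, and only the *result* is discarded afterwards.
        value = self.load(base + index)     # speculative load (may be secret)
        self.load(probe_base + value)       # value-dependent load: the trace
        if index < length:
            return value                    # in bounds: result is committed
        return None                         # out of bounds: result squashed

# Memory layout: a public 4-byte array at 0..3, a "secret" byte right after.
memory = [10, 20, 30, 40, 7] + [0] * 300
cpu = ToyCPU(memory)
PROBE = 10  # hypothetical base address of the attacker's probe array

# Out-of-bounds index 4 points at the secret; the result is squashed...
assert cpu.bounds_checked_read(0, 4, 4, PROBE) is None
# ...but the attacker recovers the secret by checking which probe address
# is now cached (the timing side channel, modeled as a set lookup).
leaked = next(a - PROBE for a in cpu.cache if a >= PROBE)
print(leaked)  # prints 7, the secret byte
```

The key point the model captures is that the squashed read changes cache state, and cache state is observable.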
Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system.
Spectre: Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre.
Which devices are affected by Spectre and Meltdown? Here is everything that is vulnerable:
Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether ARM and AMD processors are also affected by Meltdown.
Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors.
Cloud providers that use Intel CPUs and Xen PV virtualization without patches applied are affected. Furthermore, cloud providers without real hardware virtualization, relying on containers that share one kernel, such as Docker, LXC, or OpenVZ, are affected.
Meltdown breaks the mechanism that keeps applications from accessing arbitrary system memory. Consequently, applications can access system memory. Spectre tricks other applications into accessing arbitrary locations in their memory. Both attacks use side channels to obtain the information from the accessed memory location. For a more technical discussion, refer to the papers.
I have frequently come across situations where clients are confused about which MS SQL edition to select and whether they should go on-premises or to the cloud. Given that there are other IaaS players too, selecting where to host a SQL Server in the cloud becomes even more difficult.
Editions of MS SQL Server 2016
Microsoft recently launched the MS SQL Server 2016 editions. They are available in 4 flavors: Express, Standard, Enterprise, and Developer.
Here is a brief summary of what these editions are meant for:
Although there is a lot of information available on Microsoft's website, I have filtered out some of the most important features of the different 2016 editions that should be considered before finalizing which edition is right for you.
* Basic HA – restricted to 2-node, single-database failover with a non-readable secondary database. Basic HA ensures data availability: your data is not lost, thanks to a fast, two-node, non-readable synchronous replica.
** Advanced HA – Always On availability groups, multi-database failover with readable secondaries.
There are 4 costs associated with an on-premises setup.
I would like to focus only on the licensing cost. There are 2 types of licenses available for the SQL Server Standard edition.
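As a sketch of how the two Standard-edition licensing models (per-core vs. Server + CAL) compare, you can estimate a break-even user count as below. The prices are placeholders, not Microsoft list prices; substitute the actual quotes you receive:

```python
# Rough break-even sketch: per-core licensing vs. Server + CAL for
# SQL Server Standard. All prices are placeholder assumptions.

PER_CORE_PRICE = 3700   # placeholder price per core license
SERVER_PRICE   = 900    # placeholder price per server license
CAL_PRICE      = 200    # placeholder price per client access license

def per_core_cost(cores):
    # Per-core licensing requires a minimum of 4 core licenses per processor.
    return max(cores, 4) * PER_CORE_PRICE

def server_cal_cost(users):
    # Server + CAL: one server license plus one CAL per accessing user.
    return SERVER_PRICE + users * CAL_PRICE

def break_even_users(cores):
    # User count at which Server + CAL reaches the per-core price.
    return (per_core_cost(cores) - SERVER_PRICE) // CAL_PRICE

cores = 4
print(f"per-core cost for {cores} cores: {per_core_cost(cores)}")   # 14800
print(f"Server + CAL for 25 users:      {server_cal_cost(25)}")     # 5900
print(f"break-even around {break_even_users(cores)} users")         # 69
```

The shape of the result is the useful part: with few, known users Server + CAL tends to win, while per-core wins for large or unknown user populations (for example, internet-facing workloads).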
On Cloud deployment:
The benefit of spinning up a SQL Server in the cloud is that it's fast and easy, and you also have the option of fully managed SQL instances.
Comparison of costs on Azure, AWS, and SoftLayer
The tables below show the configurations that were considered.
For AWS and IBM SoftLayer:
The graph clearly shows that SoftLayer is the cheapest compared to both AWS and Azure. The following are the added advantages of IBM SoftLayer:
Comparing the cost of on-cloud vs. on-premises is a little tricky. You need to take the following into account:
In case you want to know more about the implementation and pricing on SoftLayer, or want to do a TCO analysis for your implementation, you can reach out to me via this link or drop a comment here.
Typically, there are 4 possible scenarios when designing a DR solution:
1. Backup and restore, or Cold DR: In this scenario, data is backed up to a data center in another region and restored when required. We typically have the following storage options available to perform the backup and restore:
Another thing that really matters when you take this approach is how you transfer the data from the on-premises data center to the cloud provider. The following are the options:
2. Pilot light for quick recovery, or Warm DR: In this scenario, a minimal version of the environment is always running in the cloud. The idea is that your data is kept ready in the cloud through replication, and in case of a disaster the network is configured so that it simply routes traffic to the active site when the other goes down.
Configuring a network is possible in two ways:
The prerequisite for this type of setup is a two-tiered architecture, i.e. the app server and the DB server are separate servers. You replicate only the DB server and keep installation scripts (along with images of the production server) ready for the app server, while the core components are always mirroring.
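The routing decision behind a pilot-light setup can be sketched as below. The endpoints and the health map are hypothetical; in practice this decision is made by DNS failover or a global load balancer rather than hand-rolled code:

```python
# Minimal sketch of "route to the surviving site" logic for pilot-light DR.

PRIMARY = "onprem.example.com"     # hypothetical on-premises site
STANDBY = "dr.cloud.example.com"   # hypothetical pilot-light site in the cloud

def pick_active_site(health):
    """health maps site name -> bool (is the site responding?)."""
    if health.get(PRIMARY):
        return PRIMARY             # normal operation: traffic stays on-prem
    if health.get(STANDBY):
        return STANDBY             # disaster: fail over to the cloud site
    raise RuntimeError("no healthy site available")

# Normal operation: primary is healthy, so traffic stays there.
assert pick_active_site({PRIMARY: True, STANDBY: True}) == PRIMARY
# Primary down: traffic routes to the DR site.
assert pick_active_site({PRIMARY: False, STANDBY: True}) == STANDBY
print("failover routing checks passed")
```

The same decision logic applies whether the switch is implemented via DNS health checks or via routing rules on the WAN.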
3. Warm standby, or Hot DR: A scaled-down version of the environment is always running in the cloud. The app server in the cloud is also connected to the on-premises DB server and vice versa, and both DB servers are always running. In this setup you end up paying a little more for DR, because both the app servers and DB servers are running.
4. Multi-site solution, or Active/Active: Both the cloud and on-premises environments are always active. The app servers and DB servers are all active and share the workload, and the data in both DB servers is mirrored.
The table below gives the gist of the 4 DR strategies above:

|DR strategy|What runs in the cloud|Relative cost|Recovery speed|
|---|---|---|---|
|Backup and restore (Cold DR)|Nothing; only backups are stored|Lowest|Slowest|
|Pilot light (Warm DR)|Minimal environment (replicated DB, scripted app)|Low|Quick|
|Warm standby (Hot DR)|Scaled-down environment, always running|Higher|Fast|
|Multi-site (Active/Active)|Full environment sharing the workload|Highest|Near-instant|
In case you need any assistance setting up a DR solution, drop me a direct message or just ping me here.
Typically, IBM provides the following applications for video services:
IBM provides the following applications that can serve the above purposes:
If you need more information about any of these services, just click here to reach out to me.
With the emergence of cloud as the backbone, big data analytics as the heart, and cognitive computing as the brain, we can confidently say that the healthcare industry is transforming like never before, and at an unprecedented pace. Think about the time when it was extremely difficult for patients to find the right doctor, diagnosis took ages, and doctors were burdened with the uphill task of identifying the right treatment.
Here are 5 ways in which healthcare is going to change in the future:
1. Integrated healthcare ecosystem: It's important to understand that, like any other industry, healthcare has a complete ecosystem of value providers too. For example:
Technology has, to a great extent, enabled this entire ecosystem to come together into an integrated healthcare ecosystem. The result is improved patient services, cost optimization, and new emerging business models.
2. Ubiquitous and personalized healthcare: Digitization of patient health records enables doctors to preserve a patient's medical history and provide health services with far more personalization and precision. This trend will continue, and over time, as this digitized data is analyzed using sophisticated big data analytics technologies and made available almost anywhere, anytime, ailments can even be predicted and medicines prescribed accordingly.
3. Convergence of healthcare and mobility with IoT: Healthcare providers will place mobile diagnostic devices in patients' homes, link them to cloud platforms, and monitor them continually. The explosion of wearable devices and the amount of data they produce will empower doctors to alert patients to their medical conditions in real time.
Imagine getting an alert from your hospital or doctor, just like the one you get for internet overusage, the next time your lifestyle needs to change.
4. Actionable-insight-driven and targeted healthcare: Insights based on analytics integrated with mobile devices or smart sensors will be used to improve clinical outcomes. The next step for hospitals, then, is to develop the capability to tap into this enormous and complex data.
5. Healthcare access for rural and poor populations: With the advent of video streaming and expanded high-speed fiber coverage, it will be possible to provide treatment remotely and virtually. Doctors can see patients on screen using high-speed internet in places that quality healthcare can otherwise hardly reach.
Like any other profession and industry, healthcare too is looking at a big leap forward. The only deciding factor, then, is: how willing and prepared are you to embrace this change?