lifeLinux: Linux Tips, Hacks, Tutorials, Ebooks

All About Linux!



Bitglass: Cloud suppliers shouldn’t give data access to governments 30 Oct 2016 8:03 AM

Cloud computing has become more popular than ever in recent years, and organizations have plenty to consider when migrating to the cloud. According to experts, however, the biggest concern for businesses is who else might have access to their data.

Image by Computerworld

According to a recent survey by cloud security company Bitglass, potential government access to encrypted data is a contentious issue.

According to the survey’s findings, more than a third (35%) of IT security professionals said they believe cloud app suppliers should be forced to give governments access to encrypted data, while more than half (55%) of respondents were opposed. The survey also found more opposition in the United States, where roughly two-thirds (64%) of respondents opposed government cooperation, compared with only 42% of respondents in the Europe, Middle East and Africa (EMEA) region.

According to Bitglass CEO Nat Kausik, while hotly contested issues such as government intervention remain open, several years of experience with major public cloud applications has shown that cloud apps can be more secure than their on-premises counterparts. One of the main open questions, he said, is whether organizations can put the policies and controls in place to use cloud computing securely.

In addition, the survey pointed out that the vast majority of businesses have experienced some kind of cloud security incident: 47% of these incidents involved access from unauthorized devices, and nearly 60% involved unwanted external sharing. There is also a lack of cloud visibility: fewer than 49% of businesses know even the basics, such as where and when sensitive data is being downloaded from the cloud.

Another finding is that demand for Cloud Access Security Brokers (CASBs) is trending upward. According to the survey, 60% of businesses said they had deployed, or planned to deploy, a CASB, with data leakage prevention regarded as the most important capability. Few organizations have taken action to mitigate shadow IT threats, either: the survey showed that 62% of organizations rely on written policies rather than technical controls.



Top 3 common issues when using free cloud storage 22 Oct 2016 5:22 PM

Many people assume that free cloud storage means risk-free storage. In fact, free services come with the same risks and concerns as paid ones. Listed below are the three most common issues you have to deal with when using free cloud storage:

Image by Network World

The first common issue is privacy, because the vast majority of big cloud companies share user details. Keep in mind, though, that security and privacy are not the same thing.

According to experts, security is top-notch at almost all mainstream cloud firms. They still, however, keep an eye on user content and share it with third parties.

Security, in other words, is about whether or not you’re hackable; privacy means what you’re uploading can’t be snooped on by the cloud company.

Viewing and sharing files is in fact the norm among cloud storage services: all major vendors, including Box, iCloud, OneDrive and Dropbox, can do it.

Nevertheless, there are cloud storage services, such as Tresorit, Sync.com and SpiderOak, that take privacy protection very seriously and don’t retain information about their clients.

The second issue to pay attention to is weak encryption options. Encryption is something every user should understand, because it’s an important part of cloud storage, paid or free.

According to experts, the native encryption options available in free storage services are typically feeble or non-existent.

A direct route around this problem is to encrypt your data yourself before uploading it to the free cloud storage service of your choice.
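As a rough illustration of that approach, here is a minimal Python sketch that encrypts a file locally before it ever touches a provider. It assumes the third-party cryptography package, and the file names are placeholders; any comparable tool (GPG, 7-Zip with AES, and so on) works just as well.

```python
# Minimal sketch: encrypt a file locally before uploading it to any
# free cloud storage provider. Assumes "pip install cryptography";
# the file names below are placeholders for illustration only.
from cryptography.fernet import Fernet

def encrypt_file(src_path: str, dst_path: str, key: bytes) -> None:
    """Read src_path, encrypt its contents, and write them to dst_path."""
    fernet = Fernet(key)
    with open(src_path, "rb") as src:
        ciphertext = fernet.encrypt(src.read())
    with open(dst_path, "wb") as dst:
        dst.write(ciphertext)

if __name__ == "__main__":
    # Generate a key once and keep it somewhere safe, NOT in the same
    # cloud account that stores the encrypted files.
    key = Fernet.generate_key()
    encrypt_file("report.pdf", "report.pdf.enc", key)
    # Upload report.pdf.enc with the provider's normal client; only
    # someone holding the key can decrypt it.
```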

Even if you don’t plan to encrypt anything yourself, it’s still important to understand, at least at a surface level, what encryption is and how much it matters to your business.

Another issue is customer support. Almost all cloud and backup services offer users phone, chat, or ticket support to help them deal with any issues that may occur.

If you are using free cloud storage, however, you can’t call a support representative, because free tiers generally don’t come with customer support. That doesn’t mean help isn’t available from other users or community forums.

Don’t expect to be treated like a paying client, because you aren’t one. That said, almost all mainstream cloud options are well made and work smoothly.



How to align hybrid cloud with business goals? 5 Oct 2016 12:19 AM

According to experts, a powerful hybrid cloud system gives your data center and your business both elasticity and agility, and it lets you build better go-to-market strategies while still supporting advanced IT initiatives. One of the biggest questions for businesses today is how to align a hybrid cloud strategy with company goals. Here are three useful tips:

Image by blueapache

1. Partner with a vendor who has the ability to support your IT and business goals

It is reported that Azure Stack, Microsoft’s much-anticipated hybrid cloud product, has recently run into a few issues: the architecture is not as flexible as developers were expecting and will be somewhat more limited. According to a recent post from GeekWire, an American technology news site covering startups and established technology firms, many customers said that simplicity and development speed are the two factors that matter most, trumping the ability to customize at the infrastructure level.

If you’re looking for deeper infrastructure customization, experts say, partner with a vendor capable of supporting it. Likewise, if you are standardized on specific hardware models, some hybrid cloud providers can specifically support your use cases.

2. Make sure that management remains elastic across both on-premises and public systems.

According to experts, you must be able to control systems that span cloud and on-premises resources. This is especially important for IT managers who manage desktops and other workloads being delivered to users. Keep in mind that we’re working with more data, more mobility, richer experiences, and more cloud services.

All of this translates into requirements for proactive, cloud-ready management. You must plan out and manage hybrid cloud systems, and sometimes that requires granular control over the underlying server systems. If that’s the case, it’s important to work with vendors who can support those types of services and, above all, can extend management between your data center and their cloud. Keep in mind that lost resources are not cheap, and poor user experiences lead to low productivity.

3. Build data center technologies around scale, predictability, and efficiency.

According to experts, new types of converged and hyper-converged systems help businesses integrate better with virtualization and cloud resources, and they let teams within a company leverage hybrid cloud services for their specific needs. By creating a powerful underlying data center ecosystem, you enable an architecture that can scale seamlessly between on-premises and cloud data points.

These converged infrastructure technologies help you aggregate critical resource points and achieve greater multi-tenancy, user experience optimization, security, and infrastructure control.



How the cloud benefits the medical industry? 27 Sep 2016 12:03 AM

Cloud computing undeniably benefits organizations and enterprises all around the world, but some people may not know that the cloud is also making advances in the medical industry.

Image by cloudspaceusa

According to experts, cloud computing is becoming one of the most important tools for healthcare professionals everywhere. Listed below are three ways the cloud benefits the medical industry:

1. Cloud computing helps the healthcare industry gain greater reach.

It’s not easy to give doctors on the scene the information they need, or to get the right professionals to the places they need to be in the event of a disaster. Being able to consult with one another, keep each other updated on a disaster victim’s condition, or send requests for manpower and additional resources can be the difference between a life saved and a life lost.

With the help of cloud computing, an on-site doctor with little surgical experience can, for example, receive real-time guidance from an expert while performing field surgery, with the medical equipment on hand transmitting real-time information from one source to the next to make sure the best work is done.

2. Cloud computing can bring better collaboration.

Collaboration is vital in the medical industry, which makes cloud computing a natural companion in the field. The cloud lets professionals store and access data remotely, so healthcare workers anywhere in the world can immediately reach patient data and apply the necessary care without delay. On top of this, up-to-the-second updates on patient conditions and healthcare developments, remote conferencing, and similar capabilities give doctors the ability to save those precious life-saving minutes.

3. Cloud computing can help in improving medical research.

Much in the way big data helps doctors treat their patients more effectively, cloud computing speeds up the research process by making it easy to store and share data. Able to gather outside data from a wide variety of fields, analysts can use the cloud to pool that data and condense it into better results, giving medical professionals a clearer, more advanced picture of the subjects they’re researching. These are the sort of advances that cure diseases and improve the care being given.



How to fully reap the benefits from the cloud? 28 Aug 2016 7:54 PM

Moving to the cloud is now one of the most important steps a business can take. According to experts, the key is knowing how to fully reap its benefits while minimizing the security risk it can bring to your organization. Here are three tips:

Image by toptensoft

# Tip 1: Make use of multi-factor authentication

Many cloud operators give clients the option to add extra layers of security to the log-in process. Rather than relying on a single password, require the vendor to send a unique PIN code to your phone. This matters because well-documented data breaches have included password data, and common threats are still a growing trend: users keep clicking on bogus links in emails and giving away passwords and data inadvertently. That extra layer of security at log-in hinders hackers from breaking in.
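Many providers implement this second factor as a time-based one-time password (TOTP) rather than an SMS PIN. The sketch below, which assumes the third-party pyotp package, shows the basic idea; it illustrates the concept only and is not the enrolment flow of any particular cloud vendor.

```python
# Minimal TOTP sketch (assumes "pip install pyotp"). A service stores a
# per-user secret; the user's phone app generates a rotating 6-digit
# code from the same secret, and login succeeds only if the submitted
# code matches.
import pyotp

# Done once, at enrolment time; the secret is shared with the user's
# authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def login(password_ok: bool, submitted_code: str) -> bool:
    """Both the password and the current one-time code must be valid."""
    return password_ok and totp.verify(submitted_code)

print(totp.now())                # code the phone would display right now
print(login(True, totp.now()))   # True only while that code is valid
```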

# Tip 2: Give your endpoints the best protection

The cloud means your data effectively follows you around. One of its biggest benefits is that you can access what you need at any time and from any place, but that is also a problem: even though you might be secure in the cloud, there is no guarantee you won’t be lax in your security “on the ground”, which leaves your data vulnerable through the network connection you use. Make sure your mobile devices and laptops have security software in place. There are good free solutions on the market, but be wary of accessing cloud data in public places or over public networks, and protect yourself with a Virtual Private Network (VPN) to prevent anyone from snooping.

# Tip 3: Encrypt your data

Before signing up to a cloud service, make sure the data you upload and download will be encrypted in transit and at rest by the cloud vendor; otherwise, it can be stolen while it’s on the move from point to point. If you are using file storage, also consider encrypting your data before uploading it to the cloud. Adding that extra layer of protection to your confidential information is well worth the effort.
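As a small illustration of “in transit and at rest”, the hedged sketch below uploads a file over HTTPS and asks the provider to encrypt it server-side. It assumes the boto3 package and an S3-compatible service; the bucket and object names are placeholders, and other vendors expose the same idea through their own APIs.

```python
# Sketch: upload a file over TLS (in transit) and ask the provider to
# encrypt it at rest. Assumes "pip install boto3" and an S3-compatible
# service; bucket and object names are placeholders.
import boto3

s3 = boto3.client("s3")  # talks to HTTPS endpoints by default

with open("backup.tar.gz", "rb") as data:
    s3.put_object(
        Bucket="example-backups",        # placeholder bucket
        Key="2016/backup.tar.gz",        # placeholder object key
        Body=data,
        ServerSideEncryption="AES256",   # encrypt at rest on the server
    )
```

For the strongest guarantee you can combine this with client-side encryption, as described earlier, so the provider never sees plaintext at all.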



Why virtualisation isn’t enough in cloud computing 25 Aug 2016 8:48 PM

While virtualisation is generally recognised as an important step in the move to cloud computing, since it enables efficient use of the underlying hardware and allows for true scalability, to be truly valuable it needs to understand the workloads that run on it and offer clear visibility of both the virtual and physical worlds.

On its own, virtualisation does not create sufficient visibility into the multiple applications and services running at any one time. For this reason a primitive automation system could cause a number of errors, such as spinning up another virtual machine to offset the load on enterprise applications that are presumed to be overloaded.

That’s the argument presented by Karthikeyan Subramaniam in his InfoWorld article last year, and his viewpoint is supported by experts at converged cloud vendor VCE.

“I agree absolutely because server virtualisation has created an unprecedented shift and transformation in the way datacentres are provisioned and managed”, affirms Archie Hendryx – VCE’s Principal vArchitect. He adds that, “server virtualisation has brought with it a new layer of abstraction and consequently a new challenge to monitor and optimise applications.”

Hendryx has also experienced first hand how customers address this challenge “as a converged architecture enables customers to quickly embark on a virtualisation journey that mitigates risks and ensures that they increase their P to V ratio compared to standard deployments.”

In his view there’s a need to develop new ways of monitoring that provide end users with more visibility into the complexities of their applications, their interdependencies and how they correlate with the virtualised infrastructure. “Our customers are now looking at how they can bring an end-to-end monitoring solution to their virtualised infrastructure and applications to their environments”, he says. In his experience this is because customers want their applications to have the same benefits of orchestration, automation, resource distribution and reclamation that they obtained with their hypervisor.

Make abstraction work

Clive Longbottom, Client Services Director at analyst firm Quocirca, also agrees, because virtualisation just applies a layer between the hardware and the software. “Far more is required to make that abstraction do something to add value to the overall platform, because elasticity is required to allow the resources to be shared”, he explains. This entails having some virtual machine mobility in order to have the capacity to deal with resource constraints by starting up new images or moving them to new areas of the virtual estate. He therefore adds that predictive intelligence is necessary to ensure that virtual machines aren’t started up in a way that would cause the total collapse of the virtual environment.
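To make that “predictive intelligence” point concrete, the toy sketch below performs the kind of admission check Longbottom describes: a new virtual machine is only placed on a host that can absorb it with headroom to spare. The host names, capacities and threshold are made-up illustrative values, not part of any real scheduler.

```python
# Toy admission check: refuse to start a new VM if doing so would push
# any host past a safety threshold. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_free: float   # vCPUs currently unused
    mem_free: float   # GiB currently unused

def place_vm(hosts, cpu_needed, mem_needed, headroom=0.2):
    """Return a host that can take the VM while keeping `headroom`
    of its free resources in reserve, or None if no host qualifies."""
    for host in hosts:
        if (host.cpu_free * (1 - headroom) >= cpu_needed and
                host.mem_free * (1 - headroom) >= mem_needed):
            return host
    return None  # starting the VM anywhere would risk overload

hosts = [Host("esx-01", cpu_free=4, mem_free=12),
         Host("esx-02", cpu_free=16, mem_free=64)]
target = place_vm(hosts, cpu_needed=8, mem_needed=32)
print(target.name if target else "do not start the VM")
```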

“All of this needs tying into the capabilities of the facility, such as power distribution, cooling etc. which are just as important to a virtual platform as the server, storage and networking kit – and this requires some form of DCIM tooling to be present”, suggests Longbottom.

He adds that a hypervisor doesn’t lend visibility into what services are running inside a virtual machine because the virtual machine sits on top of the stack that sits on top of the hypervisor. This means that some proper systems and application management tools are required that enable an understanding of the virtual and physical worlds to ensure that everything works smoothly together.

Virtual and physical correlations

Hendryx adds: “By having a hypervisor you would have several operating system (OS) instances and applications. So for visibility you would need to correlate what is occurring on the virtual machine and the underlying physical server, with what is happening with the numerous applications.” He therefore believes that the challenge is to try to understand the behaviour of an underlying hypervisor that has several applications running simultaneously on it. For example, if a memory issue were to arise relating to an operating system of a virtual machine, it would be possible to find that the application either has no memory left, or it might be constrained, yet the hypervisor might still present metrics that there is sufficient memory available.

Hendryx says these situations are quite common: “This is because the memory metrics – from a hypervisor perspective – are not reflective of the application as the hypervisor has no visibility into how its virtual machines are using their allocated memory.” The problem is that the hypervisor has no knowledge of whether the memory it allocated to a virtual machine is being used for cache, paging or pooled memory. All it actually knows is that it has made provision for memory, and this is why errors can often occur.
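A hedged sketch of the gap Hendryx describes: the guest’s own view of memory (here obtained via the psutil package) can be compared with whatever the hypervisor believes it has allocated. The hypervisor-side figure is a hard-coded placeholder, since reading it depends entirely on the virtualisation platform’s own API.

```python
# Sketch: compare what the guest OS reports about its own memory with
# the amount the hypervisor has allocated. psutil gives the in-guest
# view; the hypervisor figure here is a placeholder value.
import psutil

allocated_by_hypervisor_gib = 16.0          # placeholder value
vm = psutil.virtual_memory()
used_in_guest_gib = (vm.total - vm.available) / 2**30

print(f"hypervisor allocation : {allocated_by_hypervisor_gib:.1f} GiB")
print(f"guest actually using  : {used_in_guest_gib:.1f} GiB")
# A large gap between these two numbers is exactly the blind spot the
# article describes: the hypervisor cannot tell cache, paging and
# pooled memory apart from memory the application really needs.
```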

Complexities

This lack of inherent visibility and correlation between the hypervisor, the operating system and the applications that run them could cause another virtual machine to spin up. “This disparity occurs because setting up a complex group of applications is far more complicated than setting up a virtual machine”, says Hendryx. There is no point in cloning a virtual machine with an encapsulated virtual machine either; this approach just won’t work, and that’s because it will fail to address what he describes as “the complexity of multi-tiered applications and their dynamically changing workloads.”

It’s therefore essential to have application monitoring in place that correlates application interdependencies with the metrics the hypervisor is constantly collecting.

“The other error that commonly occurs is caused when the process associated with provisioning is flawed and not addressed”, he comments. When this occurs the automation of that process will remain unsound to the extent that further issues may arise. He adds that automation from a virtual machine level will fail to allocate its resources adequately to the key applications and this will have a negative impact on response times and throughput – leading to poor performance.

Error prevention

Software-defined networks (SDNs) can prevent these situations from occurring. Nigel Moulton, VCE’s chief technology officer for the EMEA region, explains: “SDN, when combined with a software-defined datacentre (SDDC), provides a vision for a very high degree of automation within a datacentre and the networks that connect applications running on it to the users. Well written applications that adhere to standards being refined for these environments will go a long way to ensuring an error free environment.” Errors may still occur in some situations, so he rightly says that it’s imperative to have the ability to quickly troubleshoot and fix them.

“In this sense a wholly virtualised environment where the hypervisor provides most if not all of the functions such as a firewall, load-balancing, routing and switching does not lend itself to the optimum resolution of a problem, as the hypervisor may provide too much abstraction from the hardware for low latency and highly complex functions”, he comments. For this reason he advises that there should be a blend of hardware control and visibility, which is tightly woven into a virtualised environment because it would be more effective from a troubleshooting perspective.

Longbottom agrees that SDN can help. He adds that it requires some “software-defined computing and storage to bring it all together under the same SDDC.” Yet he says they amount to abstractions that are situated away from the physical side of it all, and so there is a need for something that deals with the physical side. This shouldn’t involve a separate system because both the physical and logical aspects have to work and be managed together. “Only through this can a virtualised platform deliver on its promise”, he says.

Possible solutions

According to Hendryx, VCE has ensured customers have visibility within a virtualised and converged cloud environment by deploying VMware’s vCenter Operations Manager to monitor the Vblock’s resource utilisation. He adds that “VMware’s Hyperic and Infrastructure Navigator has provided them with the visibility of virtual machine to application mapping as well as application performance monitoring, to give them the necessary correlation between applications, operating system, virtual machine and server…” It also offers them the visibility that has been so lacking.

Nigel Moulton and Archie Hendryx concluded with their top five tips for virtualisation best practice within a converged infrastructure:

1. If it’s successful and repeatable, then it’s worth standardising and automating because automation will enable you to make successful processes repeatable.

2.  Orchestrate it because even when a converged infrastructure is deployed there will still need to be changes that require rolling out; such as operating system updates, capacity changes, security events, load-balancing or application completions. These will all need to be placed in a certain order and you can automate the orchestration process.

3.  Simplify the front end by recognising that virtualisation has transformed your environment into a resource pool that end users should be able to request and provision for themselves and be consequently charged for. This may involve eliminating manual processes in favour of automated workflows, and simplification will enable a business to recognise the benefits of virtualisation.

4.  Manage and monitor: You can’t manage and monitor what you can’t see. For this reason VCE customers have an API that provides visibility and context to all of the individual components within a Vblock. They benefit from integration with VMware’s vCenter and vCenter Operations Manager and VCE’s API called Vision IOS. From these VCE’s customers gain visibility and the ability to immediately discover, identify and validate all of the components and firmware levels within the converged infrastructure as well as monitor its end-to-end health. This helps to eliminate any bottlenecks that might otherwise occur by allowing overly provisioned resources to be reclaimed.

5.  Create a software-defined datacentre: Utilise virtualisation to have a software-defined approach to your infrastructure that automatically responds to the needs of the application because it is ultimately managed and configured to support an application that in turn entails ‘compute’, storage, networking as well as databases and their interdependencies.

By following these tips and by talking with vendors such as VCE you will be able to increase the visibility that was once not available to your organisation and prevent disastrous errors from occurring.

So it’s clear: virtualisation is not all that you need. Other tools, such as the ones mentioned in this article, are required to ensure smooth server and datacentre operation and performance. A hypervisor on its own will keep you in the dark, so to mitigate risk it’s worth considering an SDN and SDDC approach to catch errors before they run riot.

 

Source: http://www.cloudcomputing-news.net



4 reasons why cloud load testing is easy to adopt 18 Aug 2016 7:15 AM

Cloud load testing can bring businesses many advantages, and in recent years it has become an increasingly discussed and important topic. In this post, we look at four main reasons why cloud load testing is easy for businesses to adopt.

Image by securityintelligence

1. There is no infrastructure to maintain. With cloud load testing you leverage the cloud to host your load generators and engines, so you don’t have to maintain a separate infrastructure for this capability. The providers you use are in charge of building and maintaining these core infrastructure components and take responsibility for delivering them to you as a service, which in many cases cuts capital and operational expenses significantly.

2. Cloud load testing supplements your existing load and performance testing capabilities. For instance, you may typically run a 1,000-virtual-user test, but for an upcoming event you want to make sure you can handle and tune for a 5,000-virtual-user scenario. Instead of buying licences for an additional 4,000 virtual users, you can use cloud load testing to drive the extra traffic to the app under test for only the period you need it (a toy sketch of the virtual-user idea follows this list).

3. Cloud load tests can run concurrently with your existing load tests. In the past, scheduling and shared-resource models increased complexity and reduced the availability of load and performance-testing resources. With cloud load testing, capacity is available on demand and can be triggered by anyone with the access to do so. Together with lifecycle virtualization, this has given teams and businesses the opportunity to introduce continuous testing scenarios at a greatly reduced cost.

4. Integration has been broadly adopted over the years and now reaches varied stakeholders. As a result, there has been a flurry of integrations that help businesses easily schedule, execute, and report on cloud load testing, which can now be triggered from any number of places: quality reporting systems, build systems, IDEs, load and performance tools, and more.
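To picture what a “virtual user” is in point 2 above, here is a deliberately tiny sketch using Python’s asyncio and the aiohttp package: each coroutine plays one user hitting the application under test. The URL and user counts are placeholders; a real cloud load-testing service runs the same idea from hosted load generators at far larger scale, with scheduling and reporting layered on top.

```python
# Toy illustration of "virtual users": each coroutine plays one user
# hitting the application under test. Assumes "pip install aiohttp";
# the URL and user count are placeholders.
import asyncio
import aiohttp

TARGET = "https://app.example.com/health"   # placeholder URL

async def virtual_user(session, requests_per_user=10):
    ok = 0
    for _ in range(requests_per_user):
        async with session.get(TARGET) as resp:
            ok += resp.status == 200
    return ok

async def run_load_test(users=1000):
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(virtual_user(session) for _ in range(users)))
    print(f"{sum(results)} successful requests from {users} virtual users")

if __name__ == "__main__":
    asyncio.run(run_load_test(users=1000))
```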



What to consider before moving apps to the public cloud? 15 Aug 2016 9:31 PM

More and more organizations are moving their existing apps to the public cloud because of the tremendous benefits it can bring. It isn’t always easy, however, to know which apps should move. Listed below are three key factors to take into account before migrating your apps to the public cloud.

Image by enterprisersproject

1. You need to consider redundancy.

Apps refactored for the cloud take the availability of the underlying infrastructure into account; cloud-native design assumes that many different components of the infrastructure will fail. Before moving an app to the cloud, focus on its requirements. If the app needs five nines (99.999%) of uptime and you’ve built a redundant infrastructure to support that requirement, think carefully before migrating it to the cloud.
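As a quick sanity check on what those availability figures mean in practice, the snippet below converts an uptime percentage into the downtime it allows per year.

```python
# Convert an availability target into the downtime it permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime_min:7.1f} minutes of downtime per year")
# 99.999% ("five nines") leaves roughly 5.3 minutes of downtime a year,
# which is why such applications need a carefully designed redundant
# infrastructure before they are moved anywhere.
```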

2. You should focus on virtualized infrastructure.

One of the biggest questions businesses have before moving existing apps is how a workload gets to the cloud in the first place. Google, for instance, allows the import of VirtualBox images as well as RAW disk images, and third parties are available to assist with migration. Another option is to start from a pre-built OS image, then re-install the app from binaries and replicate the data set.

Either way, the move is essentially a virtual-to-virtual (V2V) migration, so from a pure technology standpoint an app that is already running on virtual infrastructure is a good candidate for the cloud; the physical act of migrating it is a relatively minor consideration.

3. You should pay attention to licensing

Licensing is going through a similar transition in cloud infrastructures. Some specialty software or operating systems may need additional licensing for the cloud, while some products may not offer a cloud option at all. The database layer is a good place to start looking: discuss the database engine you are leveraging and find out whether it is licensed for your target cloud vendor. You can also choose to use the database service offered by your cloud vendor instead.



Hybrid cloud and software defined data centres 14 Jun 2016 9:18 AM

Amazon Web Services (AWS) is the fastest growing enterprise company that the world has ever seen – a testament to the fact that huge numbers of businesses are moving their data to the cloud not only to save on costs but to be able to analyse their data more effectively.  Netflix, Airbnb, the CIA and other high-profile organisations now run large portions of their businesses on AWS. Yet with lingering concerns about a ‘big bang’ move to cloud, many businesses are adopting a hybrid cloud approach to data storage.

Hybrid cloud is the middle ground of private storage with lower infrastructure overheads plus superfast, low latency connection to the public cloud.  Companies increasingly have their own mix of cloud storage systems reflective of their legacy IT infrastructure, current budgets and current and future operational requirements. As a result, many CIOs are having to think about how data is moving back and forth between various on-premise systems and cloud environments. This can be challenging when the data is transactional and the data set changes frequently.  In order to move this data, without interruption, active-active replication technology like WANdisco Fusion is required so the data can be moved and yet still fully utilised with business still operating as usual.

Software defined data centres (SDDC) – with virtualisation, automation, disaster recovery and applications and operations management – are making it easier for businesses to build, operate and manage a hybrid cloud infrastructure. Such a system enables businesses to move assets wherever they need to whilst maintaining security and availability.

As a result, according to an IBM study, “Growing up Hybrid: Accelerating digital transformation”, many organisations that currently leverage hybrid cloud and use it to manage their IT environment in “an integrated, comprehensive fashion for high visibility and control” say they have already gained a competitive advantage from it. In many cases software defined data centres with hybrid cloud are accelerating the digital transformation of organisations as well as making it easier for companies to use cognitive computing such as predictive intelligence and machine learning.

Although hybrid cloud is rapidly becoming the option of choice by reducing IT costs and increasing efficiency, this shift is creating new concerns as CIOs must ensure a seamless technology experience regardless of where the company’s IT infrastructure resides. Whilst businesses are increasingly comfortable transitioning business-critical computing processes and data between different environments, it is vital that on-premise infrastructure, private clouds and public clouds are monitored, as changes can occur in any of these segments without notice.

In January this year HSBC saw a failure in its servers that left UK online banking customers unable to log in to their accounts for 9 hours. It took more than a day to identify the cause of the issue, and as a result customers vented their anger on social media and the case made national headlines. Failures that cannot be quickly identified have the potential to cause huge financial losses as well as significant reputational damage. Businesses must have real-time visibility into a hybrid cloud environment so they can head off or respond to issues in real time.
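A minimal sketch of what “real-time visibility” can look like at its simplest: poll health endpoints on both sides of the hybrid estate and flag failures immediately. It assumes the requests package; the URLs are placeholders, and a production setup would rely on proper monitoring and alerting tooling rather than a loop like this.

```python
# Minimal hybrid-estate health check: poll endpoints in both the
# private and public parts of the infrastructure and report failures
# as soon as they happen. Assumes "pip install requests"; URLs are
# placeholders.
import time
import requests

ENDPOINTS = {
    "on-prem banking API": "https://internal.example.com/health",
    "public cloud frontend": "https://app.example.com/health",
}

def check_once():
    for name, url in ENDPOINTS.items():
        try:
            up = requests.get(url, timeout=5).status_code == 200
        except requests.RequestException:
            up = False
        print(f"{name}: {'OK' if up else 'DOWN - investigate now'}")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(30)   # re-check every 30 seconds
```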

With companies needing to maintain legacy IT infrastructure indefinitely, a software defined data centre supporting a hybrid cloud can give a business the best of both worlds: the cost effectiveness and elastic expansion and contraction of public cloud computing, and the security of a private cloud. If you want to future-proof your business and remain at the cutting edge of innovation in your sector, hybrid cloud and software defined data centres are what you need to ensure you can access public cloud resources, test new capabilities quickly and get to market faster without huge upfront costs.



Top cloud Infrastructure-as-a-Service vendors 13 Jun 2016 10:56 PM

Choosing the right cloud Infrastructure-as-a-Service (IaaS) provider is growing increasingly difficult in a highly competitive market. Here are the top IaaS players identified by Gartner and their recommended use cases.

Gartner identifies 9 major IaaS players

The cloud Infrastructure-as-a-Service (IaaS) space is undergoing major upheaval this year, according to research firm Gartner. In its 2015 Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, Gartner analysts found service providers were shifting strategies as the ecosystem consolidates around market leaders.

Leaders

IaaS vendors had to offer a service suitable for strategic adoption and boast an ambitious roadmap to make the cut for Gartner’s Leaders quadrant. These vendors, according to Gartner, offer services that serve a broad range of use cases. They may not necessarily be the best providers for a specific need, and they may not serve some use cases at all, but they have a track record of successful delivery, significant market share and lots of customers that can provide references.

Amazon Web Services (AWS)

AWS is the 800-pound gorilla of the IaaS market, with more than 10 times more cloud IaaS compute capacity in use than the aggregate total of the other 14 providers Gartner studied.

Gartner recommends AWS for all use cases that run well in a virtualized environment, but notes that some — particularly highly secure applications, strictly compliant applications or complex enterprise applications (such as SAP business applications) — may require special attention to architecture.

Microsoft

Microsoft, with its Azure Infrastructure Services, is the only other IaaS vendor to make the Leaders quadrant. It holds second place in market share, with more than twice as much cloud IaaS compute capacity as the other providers Gartner studied (excluding AWS).

Gartner recommends Azure for general business applications and development environments for Microsoft-centric organizations, as well as cloud-native applications and batch computing.

Visionaries

Vendors in Gartner’s Visionaries quadrant distinguished themselves with an ambitious vision of the future and significant investments in the development of unique technologies. They might be new entrants to the market or veteran providers in the process of reinventing their business. These vendors may have a significant number of customers but don’t yet serve a broad range of use cases well compared with their Leader quadrant counterparts.

