Cloud

What is cloud?

 

Cloud computing is, in effect, a type of 'shared computing', comparable to grid computing, where a group of devices work together to handle a particular process. Originally used for high performance computing, cloud computing has since become very much part of the public domain and cloud computing solutions are now sought after by individuals and businesses alike.

There are three main service models for cloud solutions - IaaS, PaaS & SaaS:

IaaS, or Infrastructure as a Service, provides physical or virtual devices as a service. The cloud user is responsible for maintaining everything that runs on that infrastructure, including the operating system and software.

PaaS, or Platform as a Service, is where cloud providers deliver a fully managed platform, meaning that developers avoid additional expenditure such as the purchase of software or hardware, as the provider fully manages the underlying solution.

SaaS, or Software as a Service, has steadily taken off over the past few years. A commonly known example of SaaS is the Google Apps platform - the client sees only a single access point to the cloud, while the cloud provider hosts the software across a number of virtual devices.

 

Why use cloud?

There are a number of benefits to using a Cloud computing solution - including cost savings, improved communication and perhaps most importantly, scalability.

Cost Savings

Cloud computing solutions can save companies a large amount of money, as they largely remove the initial investment needed to purchase hardware. Data centres can host your cloud solution on their own hardware, so you may not need to find that extra capital after all!

Communications - Real Time

Cloud solutions offer an excellent opportunity to communicate in 'real-time' with anyone, wherever they are in the world. One of the simplest examples of the benefits of this is that it allows for files to be edited at the same time without the risk of overwriting or data corruption.

Scalability

One of the major benefits of cloud solutions is that they are scalable - many cloud services are available on a 'Pay As You Go' basis. This means you only pay for what you are actually using, and if you require additional resources, these can be added quickly and easily. This is particularly ideal for small to medium sized companies, as it means they never need to pay more than necessary.

 

What is public cloud?

A public cloud offers virtualised computing resources such as servers, storage, applications, networking and more. These resources are managed by a third-party provider such as Coreix and are delivered over the Internet. Public cloud offerings require no capital investment from the business and offer flexible pricing options with different service level agreements. In this instance, technical teams no longer need to worry about hardware procurement, installation or managing on-premise equipment. The flexibility of public cloud solutions enables businesses to be more agile, which in turn often accelerates innovation.

What is a private cloud?

A private cloud delivers the same type of elastic virtualised resources as public clouds; however, services are dedicated for use by a single business. 

What is a hybrid cloud?

Hybrid cloud solutions use both public and private infrastructure, as well as technology that enables IT teams to easily orchestrate workloads across all clouds. Hybrid clouds provide a common set of tools for all public, private and edge cloud platforms, significantly simplifying the task of managing cloud resources.

Colocation

How much bandwidth is necessary for my needs?

Colocation bandwidth

This purely depends on what your equipment is being used for. Examples of server usage that may require large amounts of bandwidth are database driven servers (such as ecommerce sites), web servers and upload sites. Any server that is being utilised for a site with heavy traffic is likely to need large amounts of bandwidth. You are unlikely to need large amounts of bandwidth if you are running a basic website from your server or you are using your server as a backup device, as backups can be scheduled so as not to exceed your bandwidth allowance.
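As a rough back-of-the-envelope sketch (the function name and figures here are illustrative, not provider pricing), you can estimate how much data a committed bandwidth rate could transfer in a month:

```python
def monthly_transfer_gb(mbps: float, utilisation: float = 1.0) -> float:
    """Approximate data transferred in a 30-day month at a sustained rate.

    mbps: committed bandwidth in megabits per second
    utilisation: fraction of the time the link runs at that rate
    """
    seconds = 30 * 24 * 60 * 60           # seconds in a 30-day month
    megabits = mbps * seconds * utilisation
    return megabits / 8 / 1000            # megabits -> megabytes -> gigabytes

# A 10 Mbps link running flat out for a whole month:
print(round(monthly_transfer_gb(10)))    # 3240 (GB)
```

Comparing a figure like this against your expected monthly traffic is a quick way to sanity-check whether a given bandwidth commitment is enough.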

What amperage do I need for my solution?

Colocation power

The amperage required to run your server solution is dependent on the size and processing intensity of your equipment. For example, a colocation quarter rack is likely to need 2-4 amps of power, whereas a full colocation cabinet will need between 8 and 32 amps. If the servers in your rack are high powered, with large hard drives and top of the range CPUs, you will need a higher amperage for your colocation rack than if you have low-end servers in your rack. To ensure optimal performance, you should ensure you stay well within your power usage limits.
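As a sketch of the arithmetic involved (assuming a typical UK 230 V feed - check your own facility's supply voltage), the current drawn follows from I = P / V:

```python
def amps_required(total_watts: float, volts: float = 230.0) -> float:
    """Current drawn by a rack: I = P / V (230 V assumed for a UK feed)."""
    return total_watts / volts

# Ten servers drawing roughly 350 W each on a 230 V feed:
print(round(amps_required(10 * 350), 1))  # 15.2 (amps)
```

A result like 15.2 A would comfortably exceed a quarter rack's typical 2-4 A allocation, which is the kind of check this calculation is useful for.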

Do I need a full, half, quarter or multi rack solution?

Colocation size

Full Rack Colocation

Businesses needing full rack colocation are likely to require hosting for a number of servers with dedicated connections and high bandwidth requirements & are looking for full access to their own individual rack. This includes large e-commerce businesses that rely on a large amount of database processing power and large multinational companies that host their sites across global markets.

Half Rack Colocation (22U)

Half rack colocation is often used as a disaster recovery solution for larger companies, as the majority of companies take the jump straight from a quarter rack to a full rack. This half rack will be located in an alternative data centre facility to the full rack solution, so if the worst should happen, all data can be backed up and available offsite. Small to mid-sized businesses also occasionally employ half rack colocation solutions, although full rack colocation offers a better opportunity to expand than half rack solutions.

Quarter Rack Colocation (11U)

Perfect as an entry level colocation solution, combining significantly lower costs than full rack colocation with upgradable bandwidth and power where necessary to ensure a flexible solution. Quarter racks are often lockable, ensuring extra security for your equipment. Quarter racks do have their limitations and are unlikely to be suitable for larger businesses that require an increased level of processing power or bandwidth - if you are consistently breaching your limits, quarter rack colocation is no longer effective for your requirements. Quarter rack colocation is ideal for smaller businesses that are looking to host their equipment in a secure facility that adheres to strict SLAs for uptime and other key variables.

 

Dedicated servers

How can a dual power supply affect the reliability of the equipment?

Dual power supply


A dual power supply improves the reliability of your dedicated server as it ensures there is a failsafe if the worst does happen and there is an issue with one of the power feeds.

 

What are cold and hot-swap hard drives?

Hot swap drive


Hot swap hard drives can be removed and replaced while the server is still running. In the case of hard drive corruption or failure, the faulty drive can be swapped for a replacement without having to take the machine offline, heavily reducing the effect this has on the uptime of the equipment. You will need a RAID array if you are looking to employ hot swapping for your server solution, as the array rebuilds the replacement drive from the remaining disks.

Cold swap hard drives require the equipment to be powered off before the drive can be replaced. This means more downtime for your server, so this option offers significantly less resilience than a hot swap hard drive.

 

Should I consider RAID for my server solution?

RAID


RAID, or Redundant Array of Independent Disks, is effectively a number of hard drives working together to optimise your server for performance, for reliability, or for both.

There are seven commonly used RAID levels, listed below:

RAID 0

RAID 0 is, in a nutshell, purely about performance. Through a process called striping, data is broken up and written across multiple drives, meaning that large amounts of data can be processed in the minimum possible time. The issue with RAID 0 is that if one hard drive fails, all your data is lost due to the way it is split across the drives - therefore making this more of an Array than a Redundant Array. If you employ RAID 0, be prepared to lose your data. You require a minimum of two hard drives for RAID 0, although having more than this is common practice for those purely interested in maximum performance.
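The striping idea can be sketched in a few lines (a toy illustration, not a real RAID implementation - real controllers work at the block level):

```python
def stripe(data: bytes, drives: int, chunk: int = 4) -> list[list[bytes]]:
    """RAID 0 striping: split data into chunks and write them round-robin
    across the drives, so reads and writes happen in parallel."""
    disks = [[] for _ in range(drives)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % drives].append(data[i:i + chunk])
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", drives=2)
print(disks)  # [[b'ABCD', b'IJKL'], [b'EFGH', b'MNOP']]
```

The output makes the fragility obvious: lose either drive and every other chunk of the original data is gone.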

RAID 1

RAID 1 is perhaps the most common RAID array - it mirrors data across multiple disks, meaning you have an up-to-date copy in case one drive fails. This does somewhat impact performance, as the server will be writing to multiple drives at once. However, for many companies with mission critical applications and data, redundancy is more important than performance, making RAID 1 a common option for those looking at dedicated servers. A minimum of two hard drives is required for RAID 1, with each drive holding a full copy of the data.

RAID 10 (1+0)

RAID 10 offers the benefits of both RAID 0 and RAID 1, but without the lack of redundancy found in RAID 0 or the lack of performance found in RAID 1. This means RAID 10 offers a cocktail of performance and reliability that is perfect for the majority of companies looking at a data centre dedicated server. It does this by striping across multiple mirrored drives, meaning that you have increased performance from striping as well as redundancy in case a hard disk fails. You will require a minimum of four hard drives for RAID 10 (two mirrored pairs that are striped together).

RAID 2

RAID 2 is similar to RAID 0 in that it uses data striping, however it stripes at the bit level and also employs additional error correction that allows corrupt data to be recovered. The issue with RAID 2 is that it is heavily equipment intensive and there are realistically cheaper options that can provide a similar or better service.

RAID 3

Similar to RAID 0 in the sense that it stripes data across multiple drives, but with an additional dedicated drive for parity. As all the drives perform as one unit, you can only read or write one operation at a time, meaning RAID 3 is realistically only useful for workloads dealing with single large processes/files. You'll need at least three hard drives for RAID 3 (two data drives and one parity drive).

RAID 4

RAID 4 is similar to RAID 3 but works with larger chunks of data, meaning the drives don't need to operate as one unit and you can therefore run more than one read operation at any one time. However, as you only have a single parity disk, you can still only have one write operation in progress at any one time.

RAID 5

RAID 5 is commonly used in a NAS (network attached storage) configuration for file sharing and media streaming. RAID 5 stripes data as with most other RAID levels, however it spreads the parity across all hard drives, meaning that if a hard drive fails, the data from that disk can be rebuilt from the parity information held on the others. You'll need at least three hard drives for RAID 5, although configurations generally feature at least four.
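The parity trick behind RAID 5 is byte-wise XOR: the parity block is the XOR of the data blocks, so any single missing block can be rebuilt from the rest. A minimal sketch (two data blocks plus parity, rather than a full striped array):

```python
def xor_parity(*blocks: bytes) -> bytes:
    """Compute a parity block as the byte-wise XOR of the given blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2 = b"hello world!", b"raid5 parity"
parity = xor_parity(d1, d2)

# If the drive holding d1 fails, its contents are rebuilt from the survivors,
# because XOR is its own inverse: parity ^ d2 == d1.
rebuilt = xor_parity(parity, d2)
assert rebuilt == d1
```

This is also why RAID 5 only survives a single drive failure: with two blocks missing, the XOR equation no longer has a unique solution (RAID 6 adds a second, independent parity calculation for exactly that reason).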

RAID 6

Similar to RAID 5, but with a second set of parity data spread across the drives, meaning a minimum of four hard drives is required and the array can survive two simultaneous drive failures.

You've probably already made your mind up about which RAID array makes the most sense for your business - it's important to consider performance but when it comes to the crunch, redundancy is often more important for a business. It's all very well having great performance but if you lose your mission critical data, then this will likely cost your business substantially more. Utilised correctly, RAID arrays are excellent at providing both additional performance and redundancy for your servers, meaning your data not only flows better but is also safer at the same time.

 

How much RAM do I need for my server solution?

RAM performance


RAM, or Random Access Memory, is effectively your server's fast working memory. In its most basic form, the more RAM your server holds, the more data and applications it can keep in memory at once, meaning you can host more complex solutions on your server.

There are a wide range of factors that can affect the amount of RAM that you will require on your server. Some of these include:

Operating System

Different operating systems require differing levels of RAM to run smoothly. A Windows operating system, for example, will generally be more RAM intensive than an equivalent Linux operating system.

Databases

Dynamic sites, such as those that rely heavily on databases and more server-intensive programming languages, will require an increased volume of RAM. The usage and size of your databases will be the determining factor in the RAM you require.

Server Applications

Firewalls, email clients & live monitoring software, amongst other server applications, will all add to the load of the server and thus the server will require more RAM.

Control Panel

If you have a control panel, such as cPanel/WHM, Plesk or DirectAdmin, this can add further strain onto your server - meaning you may be required to increase your RAM capacity.

Traffic & Usage

Finally, and most obviously, the more traffic that lands on your site, the more memory you will need to keep your server flowing freely. Additionally, if you are running an intensive application server or the server is being heavily used, then a further increase in RAM may be needed.

 

Managed servers

What is RapidBalance?

RapidBalance

RapidBalance is effectively load balancing - this controls inbound traffic distribution and pools resources across multiple servers, meaning that if your server fails or there is unexpected network congestion, you don't have to worry about downtime affecting your live website or data processing abilities. This makes load balancing a necessity for any web site where it is difficult to predict the number of requests that will be present at any one point, as the load balancing will ensure the server network is able to flex as needed.

Load balancing also helps to minimise response times and maximise throughput, which leads to increased performance across your server network.
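As an illustration of the general idea (a generic round-robin sketch, not how RapidBalance itself is implemented):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests evenly across a pool of backend servers."""

    def __init__(self, backends):
        self.pool = cycle(backends)  # endless rotation over the backends

    def route(self, request):
        backend = next(self.pool)
        return backend, request  # a real balancer would forward the request here

lb = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])
print([lb.route(f"req{i}")[0] for i in range(5)])
# ['srv-a', 'srv-b', 'srv-c', 'srv-a', 'srv-b']
```

Production balancers layer health checks on top of this, removing a failed backend from the rotation - which is what delivers the "server fails, no downtime" behaviour described above.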

 

What is RapidCluster?

RapidCluster

RapidCluster is the name for the clustering solutions offered by Coreix. Server clustering is where a group of independent servers are linked to create a high availability solution. If one of the servers experiences a fault, the workload can be redistributed to one or more devices across the network, ensuring processing stability. Clustering offers an excellent level of scalability, as the number of servers that can be added to the configuration is effectively limitless, meaning that new servers can be added to the network without causing any issues or downtime.

 

What is RapidHA?

RapidHA

RapidHA, or Rapid High Availability, combines various technologies to ensure the highest possible equipment availability and the lowest possible downtime. Technologies include clustering, firewalls, load balancing, data protection, data replication & hardware redundancy.

 

What is RapidStorage?

RapidStorage

RapidStorage is a NAS service that allows you to protect your data through a customised backup solution. This ensures that you can fully backup your data through a network attached storage device and protects your data against any unexpected disk failures or outages.

 

What is RapidSync?

RapidSync

RapidSync is a fully managed backup solution that allows you to set backups to run at regular intervals, meaning that you can just get things set up and leave them to work their magic. All your data is secured through a number of methods: all data transferred to the backup device is encrypted with a secure algorithm, and access to the backup server is restricted to a flexibly defined set of IP addresses. This ensures that your data won't fall into the wrong hands - and if someone does try to hack the system, all your data will be completely unintelligible.

 

Backup and Recovery

What is disaster recovery and why do I need it?

Disaster recovery

The scope of 'disaster recovery' is wide but pertains mainly to the policies and processes surrounding the recovery or continued uptime of a data centre's infrastructure and equipment. It is important for a data centre to have a fully operational disaster recovery plan so as to protect your data in the case of a natural or man-made catastrophe.

It is also possible to actively backup your data, helping you to ensure that if the worst does happen and the data centre suffers a catastrophic outage, you won't need to worry about your data being lost.

Always check with your data centre provider what their disaster recovery plan is. If your data centre isn't prepared for the worst, they won't be able to deal with situations when they do occur - which isn't good for your business.

 

 

Data Centre

Why are A & B power feeds important?

A & B power feeds add an extra level of redundancy to your equipment: if one power feed fails, the other can take up the slack. Some data centres also have redundant power supplies in place - normally in the form of extra generators which are not connected to the main power grid and will come into operation if a fault affects the primary supply - perfect for fault-intolerant and mission critical infrastructures.

What is data centre tiering and why should I consider a tier III data centre?

The Uptime data centre tier standards are a standardised methodology used to determine the availability of a facility. In a nutshell, this makes it easy to establish whether a particular data centre is suitable for the needs of your business, as data centres are clearly staged in tiers.

Tier 1

A tier 1 data centre can be seen as the least reliable tier: capacity components are non-redundant and there is only a single, non-redundant distribution path. As such, if a major power outage or disaster occurs, your equipment is more likely to go offline, as there are no backup systems in place to kick in if any issues do occur.

Pros

  • Tier 1 data centres normally provide the cheapest service offering

Cons

  • No redundancy, meaning a considerably lower uptime guarantee than tiers 2, 3 & 4
  • Unplanned outages extremely likely to cause disruption, potentially major
  • Planned outages have the potential to cause issues including equipment downtime. Planned maintenance will often take place at night, which may not be suitable for clients who run their business 24/7 or attract non-UK customers
  • Maintenance and repair work on the facility will require the entire facility to shut down, causing potentially lengthy downtime


Appropriate for:

  • Companies with a passive web marketing presence
  • Small internet based companies with no customer support or e-commerce facilities on-site


Tier 2

Tier 2 data centres are considerably more reliable than tier 1 data centres, although they can be subject to problems with uptime. To achieve tier 2, the facility has to meet all the criteria for a tier 1 data centre, as well as ensuring that all capacity components are fully redundant.

Pros

  • Planned service outages can be performed in a way that does not affect equipment uptime
  • Normally cheaper than tier 3 or 4 facilities


Cons

  • A significantly lower uptime guarantee compared to tier 3 facilities
  • Unplanned outages likely to cause disruption, potentially major
  • Maintenance and repair work on the facility will require the entire facility to shut down, causing potentially lengthy downtime


Appropriate for:

  • Internet based companies who can cope with occasional downtime and will incur no penalties for this
  • Companies that do not run 24/7, allowing time for issues to be resolved
  • Higher intensity data driven servers, such as modelling and imaging applications


Tier 3

Tier 3 data centres are commonly seen as the most cost effective solution for the vast majority of medium to large businesses, with availability topping 99.98%, ensuring minimal downtime. To put this figure in perspective, it means your equipment should see a maximum of around two hours of downtime per year. Tier 3 data centres have to meet all of the requirements of tiers 1 & 2, as well as ensuring all equipment is dual-powered and has multiple uplinks. Some facilities also offer fully fault-resistant equipment, although to achieve tier 4, all equipment including HVAC, servers, storage, chillers and uplinks must be fully fault-resistant. Such a facility is generally marketed as tier 3+.

Pros

  • Significantly cheaper than tier 4 facilities
  • Tier 3 facilities offer the most cost-effective solution for the majority of businesses
  • Planned outages will not cause disruption to equipment
  • Unplanned outages unlikely to cause disruption to equipment
  • All equipment required to have dual power inputs, ensuring that if one input fails, the other picks up the slack
  • All maintenance, unless major, can be performed without impact to equipment


Cons

  • Not all equipment fully fault-resistant


Appropriate for:

  • Companies with a worldwide business presence
  • Companies that require 24/7 operational hours
  • Organisations that require consistent uptime due to financial penalty issues
  • E-commerce and companies running full online operations
  • Call centres
  • VOIP companies
  • Companies with heavy database driven websites
  • Companies that require a constant web presence


Tier 4

A tier 4 data centre is generally considered the most expensive option for businesses. Tier 4 data centres adhere to all the requirements of tiers 1, 2 & 3, as well as ensuring that all equipment is fully fault-resistant. This is achieved by creating physical copies of all essential equipment, otherwise known as N+N.

Pros

  • Tier 4 data centres offer the highest availability, at over 99.99%
  • Planned & unplanned outages should not cause any disruption to equipment
  • All equipment must be dual-powered
  • All maintenance, unless major, can be performed without impact to equipment
  • All equipment fault-resistant, reducing the likelihood of any lengthy outages


Cons

  • Cost prohibitive for many companies and significantly more expensive than most tier 3 data centres
  • Minimal (under 0.02%) improvement in availability compared to tier 3 facilities


Appropriate for:

  • Large, multinational companies
  • Major worldwide organisations

Why is uptime so essential to my business?

Uptime (the continuous, unhindered running of equipment in a data centre facility) is essential to many businesses that require their website or systems to be available 24 hours a day, 7 days a week, 365 days a year. Different tiers of data centre have different requirements regarding their level of uptime - these are listed below:

  • Tier 1 - 99.671% guaranteed uptime
  • Tier 2 - 99.741% guaranteed uptime
  • Tier 3 - 99.982% guaranteed uptime
  • Tier 4 - 99.995% guaranteed uptime


*note: this is a guaranteed figure, meaning that providers and facilities residing in certain tiers may actually provide a significantly higher uptime than the listed figures.
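These percentages translate directly into a maximum annual downtime figure, which is often the easier number to reason about:

```python
def downtime_hours_per_year(uptime_percent: float) -> float:
    """Maximum annual downtime implied by an uptime guarantee."""
    return (100 - uptime_percent) / 100 * 365 * 24

# The four tier guarantees, converted to hours of downtime per year:
for tier, pct in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    print(f"Tier {tier}: up to {downtime_hours_per_year(pct):.1f} hours/year")
```

This puts the tier 3 vs tier 4 trade-off in concrete terms: roughly 1.6 hours versus 0.4 hours of permitted downtime per year, against a substantial difference in cost.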

 

Why is round the clock on-site technical support important for server solutions?

24x7x365 on-site technical support is important for server solutions, particularly for companies with mission critical systems, because without on-site technical support, any issues with your equipment cannot be resolved until:

  • Your staff come in to start their shifts in the morning (if the issue happens outside of working hours)
  • You or an employee of your company visit the site to fix the issue
  • The third party technical support service you have hired arrives on-site to fix the issue, which is an expensive or lengthy process - or both

Having round the clock on-site technicians means that if something goes wrong with your server, it can be fixed quickly so that your equipment either receives the minimal possible downtime or, where possible, no downtime at all. When it comes to the majority of technical issues, the faster they are resolved, the better - and on-site support means an instant response.

 

What is Smart Hands and why might I need it?

Smart Hands is a technical support system where technical staff who are located on-site are able to either proactively or reactively fix any issues that may arise with your colocation equipment. This means that you don't need to travel in to fix the equipment yourself or hire an outside agency to deal with the issue for you - which will save you time and money, as well as ensuring that your equipment has the minimum downtime possible.

Proactive Smart Hands

Proactive Smart Hands is where the technical team actively monitors your solution, and if an issue is identified, they will attempt to fix it without delay. This means that you can ensure the minimum possible downtime if any issue does occur - and all issues will be resolved without you having to notify the datacentre host that a problem is occurring. You will be notified of any issues that occur and the measures put in place to combat the problem. This proactive approach is ideal for companies with mission critical systems that require the maximum possible uptime.

Reactive Smart Hands

Reactive Smart Hands is where the technical team will fix any issues with your equipment, but you have to notify them of the issue or problem you wish to be resolved, as they will not be monitoring your solution. They can then begin resolving any issues - thus making this option 'reactive' and not 'proactive'.

 

Green data centres

What is carbon offsetting?

Carbon offsetting is where a company compensates for its carbon & greenhouse gas emissions by investing in projects that provide renewable energy, improve energy efficiency, destroy harmful pollutants or support forestry.

What is PUE & L.E.E.D?

PUE, or Power Usage Effectiveness, is the standard measure of how efficiently a data centre uses its power. The PUE figure of a data centre is the ratio of the total power used by the facility (cooling, heating, lighting, IT equipment etc) divided by the power used by the IT equipment located in the facility.
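For example (with illustrative figures), a facility drawing 1500 kW in total to support 1000 kW of IT load has a PUE of 1.5:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is the ideal)."""
    return total_facility_kw / it_equipment_kw

print(pue(1500, 1000))  # 1.5
```

The closer the figure is to 1.0, the less power the facility spends on overheads such as cooling and lighting relative to the equipment it hosts.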

L.E.E.D, or Leadership in Energy and Environmental Design, is a rating system specifically aimed at the design, construction and operation of high performance buildings - in this case, data centres. L.E.E.D focuses on a number of categories, such as sustainability, energy usage, water efficiency and the environmental impact of the resources used in construction.

 

Network

Do I need dark fibre services for my solution?

Dark fibre services are ideal for companies requiring exceptionally high amounts of bandwidth with low latency. Dark fibre services involve leasing unlit fibre optic cables from a network service provider - a dark fibre network is a privately operated optical fibre network that can be used for private networking or internet infrastructure networking. When linked into a metro ring, dark fibre services provide cost effective, high performance solutions for businesses with heavy bandwidth usage.

... and what is a wavelength?

A wavelength is simply light at a particular frequency or colour - these are transmitted through dark fibre systems, and as dozens of different wavelengths can be carried per fibre optic core, you can transfer large amounts of data at any one time.

 

What are the advantages of a leased line?

A leased line is a dedicated point-to-point connection, in this case to a carrier network, which provides continuous, uncontended data upload/download at vastly superior speeds to an ADSL line. Whilst ADSL is heavily limited, leased lines offer the opportunity to reach extremely high data speeds - perfect for bandwidth intensive networks. Leased lines are also uncontended - contended services such as ADSL can slow down at peak usage times, whereas with a leased line you have a dedicated connection that only you (or those you assign through your network) have access to.

 

What is metro ethernet?

Metro Ethernet is a metropolitan area network that is commonly used to connect two (or more) data centres. This allows for high volume cross-connectivity and increased network performance. Metro Ethernet can also support high speed, latency-sensitive applications such as Voice over IP (VoIP).

 

What is network peering?

Network peering is the term used for the relationship between two or more ISPs (Internet Service Providers) who link together in order to transfer data in a more efficient manner than through the standard internet backbone. Using network peering will improve data response times, as the data effectively has to travel through fewer nodes before it reaches its destination.

 

What are network providers and why is it important to not rely on a single provider?

Network providers are, quite simply, businesses or organisations that provide direct access to the Internet backbone - most commonly telecommunications companies and ISPs (Internet service providers). By not relying on a single provider, it is possible to offer a far higher uptime guarantee, as it severely reduces the likelihood of any single network fault affecting service.

 

What is redundancy?

Redundancy is the process involved in ensuring that there are contingencies in place if there is an issue with any equipment or facilities within a data centre. Further information on how redundancy can be measured and what it involves can be found on the data centre tier guide page.

What are transit services?

Transit services utilise network peering to guarantee access to the internet by routing connection through a number of tier 1, tier 2 and peering providers. This ensures that you have consistent access to the internet backbone at all times as you are not reliant on one particular ISP at any one time. With transit services, you are normally required to commit to a minimum level of bandwidth usage each month and costs are generally established 'per megabit per second per month'.

 

Server security

What is DoS & DDoS mitigation?

DoS & DDoS mitigation is effectively the implementation of systems that prevent or severely impede the impact of a DoS (Denial of Service) or DDoS (Distributed Denial of Service) attack on a server. These attacks are specifically aimed at forcing the server to restart, or at consuming enough of its resources to heavily impact the level of service it can provide. DoS & DDoS attacks can be carried out for any number of malicious reasons, so it is important to ensure your data centre has the best possible mitigation processes in place.
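One common building block of such mitigation is rate limiting, for example a token bucket that serves a normal burst of traffic and then throttles. A simplified sketch (real mitigation operates at the network edge and uses far more signals than a single per-client counter):

```python
class TokenBucket:
    """Per-client rate limiter: one building block of DoS/DDoS mitigation.

    Tokens refill at `rate` per second up to `capacity`; each request spends one.
    """

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the time elapsed since the previous request
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or challenge the request

bucket = TokenBucket(rate=5, capacity=10)
# 15 requests arriving at the same instant: the burst of 10 is served,
# the remaining 5 are throttled.
print(sum(bucket.allow(now=0.0) for _ in range(15)))  # 10
```

Legitimate clients rarely exceed the bucket, while a flood from a single source is cut off quickly; defeating *distributed* attacks additionally requires upstream filtering and traffic scrubbing across many sources.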

 

Why should I add a firewall to my server solution?

A firewall is an essential addition for many servers - it protects the data in a private network from external users who should not be granted access to this data. A firewall proxy server acts as an intermediary between two end systems, adding an extra level of security on top. There are various screening methods available for those with a firewall added to their server, from blocking certain IPs to allowing remote access using either a username/password or a secure VPN based system. On a basic level, a software firewall will allow you to block unused ports on your server, ensuring disallowed port access does not occur.

What is NSI Gold accredited physical security?

The NSI, or National Security Inspectorate, is a specialist certification body that inspects the companies that provide security services to businesses. 'Gold' is the highest standard available - all security companies in this bracket must achieve business excellence through ISO 9001 and demonstrate long-term performance and reliability.

What is server hardening?

Server hardening sadly doesn't mean that selected servers are made of diamond - it refers to the process of enhancing server security through measures such as firewalls, data encryption, closing unused network ports, maintaining backups & a number of other key security essentials. Remember that with a large array of potentially sensitive data on your servers, it is important to ensure this data is as safe and secure as possible.

 

Technical standards

What is BS 7858?

BS 7858 is a security screening standard - the standard for the security screening of staff who will be involved in the safety of goods, property or people. This is particularly relevant to data centre service providers, as the BS 7858 standard covers data security and confidential record keeping, meaning employees are vetted to ensure they can provide the level of security required to work with highly sensitive information.

What is ISO 27001?

Being ISO 27001 compliant is essential for any company providing data centre services - it is the standard for an Information Security Management System (ISMS). Data security is absolutely essential for businesses, and therefore working with a data centre service provider holding ISO 27001 certification should be a top priority for your business. An ISO 27001 certified company has to adhere to a number of stringent data security controls, meaning you can be confident your data will remain secure and protected against hacking, malicious attacks and accidental data loss.

 

What is ISO 9001?

ISO 9001 is a standard based on the effective operation of management systems within an organisation. An ISO 9001 compliant company must keep accurate records, perform regular audits, ensure the products & services provided are of a consistently high quality, and regularly review & resolve any potential issues on both a proactive and reactive basis. In a nutshell, this means that ISO 9001 compliant companies should, in theory, provide a consistently high quality service to their clients.

 

Virtualisation

What is a hypervisor?

A hypervisor is the software layer that creates and runs virtual machines, sharing a physical server's CPU, memory, storage and networking between them. Type 1 (bare metal) hypervisors run directly on the hardware, while type 2 hypervisors run on top of a host operating system.

Why use a hypervisor?

A hypervisor allows multiple isolated virtual machines to run on a single physical server, improving hardware utilisation, simplifying provisioning and making it straightforward to move workloads between hosts.
