2014 SMi Oil and Gas Telecommunications Conference
Written by Paul Downe on April 11th, 2014
ISN sponsored last month’s SMi Oil and Gas Telecommunications conference in London. I found it enlightening to hear about innovations in the sector and what new options are becoming available to support exploration.
There were a number of sub-themes under the over-arching banner of telecommunications, such as “high throughput” satellite, Ka band, 4G vs Wi-Fi, microwave and even a talk on the Internet of Things (which I am sure you will be fully versed in following my last post, so I won’t cover it again here). I have tried to capture a brief summary of a few of the more interesting presentations in this post. So, let’s break it down…
Increased demand for satellite bandwidth
New “high throughput” satellites (Gb/s range) make satellite communications more cost-effective for an increasing range of customer applications. This increases demand for bandwidth, and as demand increases more satellites are launched to keep up (a great business to be in). In general, we see capacity and performance increase by one order of magnitude every 8 years. This trend looks set to continue, if not accelerate. Even the more challenging areas of the globe are being addressed, such as the Arctic, where only low-bandwidth mobile satellite systems are available today. Due to customer demand, evaluation work is underway for highly elliptical orbit (HEO) satellites to provide 24/7 coverage to complement geostationary satellites.
There are some really telling consumption statistics here:
- Did you know there has been a six-fold increase in satellite bandwidth requirements for Internet content and voice usage on vessels, aircraft and vehicles between 2008 and 2013?
- Internet logins on the MTN (African telecom) network more than doubled to almost 33 million per year, while voice communications increased approximately 50 per cent.
- One customer consumed a record 2.2 TB of satellite bandwidth in one month in 2010 – this jumped to 4.5 TB in 2012.
- Another customer, with a mega-yacht, reported average monthly vessel consumption of 50 GB in 2012; this almost doubled to 80–90 GB in 2013.
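It is worth doing the arithmetic on those figures. The short sketch below (using only the numbers quoted above) converts them into annual growth rates:

```python
# Back-of-envelope growth rates for the satellite bandwidth figures above.
def annual_growth_factor(start, end, years):
    """Compound annual growth factor between two measurements."""
    return (end / start) ** (1.0 / years)

# One customer: 2.2 TB/month in 2010 rising to 4.5 TB/month in 2012.
customer = annual_growth_factor(2.2, 4.5, 2)

# The sector-wide trend quoted above: one order of magnitude every 8 years.
sector = annual_growth_factor(1.0, 10.0, 8)

print(f"customer: {customer:.2f}x per year")  # roughly 1.43x, i.e. ~43% a year
print(f"sector:   {sector:.2f}x per year")    # roughly 1.33x, i.e. ~33% a year
```

In other words, the individual consumption figures quoted are growing faster than the long-run capacity trend, which is exactly why more satellites keep getting launched.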
Ka band satellite services are coming
The new Ka band services coming on line certainly help the demand for higher bandwidths, albeit with certain limitations.
Firstly let’s look at the good:
- These satellites use smaller beams, which offer greater frequency re-use
- They use higher frequencies that allow them to carry more bandwidth.
- Access to orbital slots and frequencies is much easier. In fact, it is possible to get up to 40 times the bandwidth for the same cost as traditional satellite services
- The terminal equipment is already available and inexpensive and there is minimal interference between satellite systems
So what’s the down side then?
- Well, smaller beams often mean restricted coverage and thus the bandwidth isn’t always where you need it
- The higher frequencies suffer greater signal attenuation, so yes, the service is affected by rain
- Ka services will often be mixed in with consumer systems with no guarantee of QoS or availability
So, Ka band will drive the cost of satellite bandwidth down, and we should expect changes in structures, control, systems and pricing over time. However, bandwidth may not be the overwhelming consideration when you have to offset it against availability and quality of service. What is certain is that as more of these services are implemented, commoditisation will open up possibilities for service enhancements. Let’s see how it develops.
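To see why rain matters so much at Ka band, here is a rough sketch using the standard power-law form for rain-specific attenuation. The k and alpha coefficients below are illustrative Ka-band ballpark values, not figures from the conference; a real link budget would take them from ITU-R tables for the exact frequency and polarisation.

```python
# Illustrative rain-fade estimate for a Ka-band link, using the standard
# power-law form for specific attenuation: gamma = k * R**alpha (dB/km).
# k and alpha here are rough ~30 GHz values chosen for illustration only.

def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.19, alpha=1.02):
    """Total excess attenuation (dB) over the rain-affected slant path."""
    return k * (rain_rate_mm_h ** alpha) * path_km

# Light rain vs a heavy downpour over a 5 km effective slant path:
light = rain_attenuation_db(5, 5)    # ~5 mm/h drizzle
heavy = rain_attenuation_db(50, 5)   # ~50 mm/h downpour

print(f"light rain: {light:.1f} dB, heavy rain: {heavy:.1f} dB")
```

A few dB of fade margin copes with light rain, but a heavy downpour can wipe out 50 dB, which is why Ka services trade raw bandwidth against availability.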
4G/LTE and WiMAX technology
With 4G/LTE mobile technology becoming ubiquitous in all kinds of devices, deploying 4G solutions in unlicensed spectrum will have real benefits for oil & gas applications: for example, video and audio for remote platform communication, and growing telemetry data volumes. Deploying in licence-free spectrum provides the flexibility to support a range of services with no service charge, and 4G mobile technology, based on LTE/WiMAX, offers low-latency, high-capacity wireless services.
So why use 4G rather than Wi-Fi?
Wi-Fi offers good data rates, but ad-hoc bandwidth sharing means throughput falls quickly when multiple users or devices communicate at the same time. Wi-Fi lets each user transmit on a number of OFDM (orthogonal frequency-division multiplexing) subcarriers simultaneously, which allows more subcarriers to be used than in a narrowband system. But because Wi-Fi access is uncoordinated, more than one user might talk at the same time, so there is no latency guarantee.
4G technology, on the other hand, gets its advantage from coordinated, efficient access to the spectrum. LTE and WiMAX can allocate OFDM subcarriers to any user depending on need, and each user’s transmissions are centrally coordinated, meaning all of the spectrum and time can be used.
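The difference between uncoordinated and scheduled access can be illustrated with a toy model. This is an assumption-laden sketch, not a simulation of real Wi-Fi or LTE: contention is modelled slotted-ALOHA-style, scheduling as simple round-robin.

```python
# Toy comparison of uncoordinated (Wi-Fi-style contention) and coordinated
# (LTE/WiMAX-style scheduled) channel access, to show why scheduling can
# give a latency guarantee while contention cannot.

def scheduled_worst_wait(users):
    """Round-robin: every user is guaranteed a slot within one full cycle."""
    return users - 1  # worst-case wait, in slots

def contention_mean_wait(users, p=None):
    """Expected slots until a collision-free transmission when each of
    `users` stations transmits independently with probability p per slot."""
    p = p if p is not None else 1.0 / users     # the throughput-optimal rate
    success = p * (1 - p) ** (users - 1)        # P(exactly this user succeeds)
    return 1.0 / success - 1                    # mean wait of a geometric trial

for n in (2, 10, 50):
    print(f"{n:2d} users: scheduled worst case {scheduled_worst_wait(n):2d} slots, "
          f"contention mean {contention_mean_wait(n):6.1f} slots")
```

Even with every station behaving optimally, the contention model's *average* wait soon exceeds the scheduler's *worst-case* wait, and the contention tail is unbounded, which is the "no latency guarantee" point above.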
Lots of interesting information and crystal ball gazing at the conference. The general take away should be that bandwidth demand is forcing new technologies to be developed using all kinds of terrestrial and non-terrestrial solutions. Some of these solutions are already here and some are still under development, but they are coming. I believe that these technologies will continue to be developed and deployed over the next few years and become a standard in the upstream oil and gas industry.
Paul Downe – CTO
The Internet of Things
Written by Paul Downe on April 8th, 2014
What on earth is the Internet of Things?
I am often asked, “What’s the next big thing in the IT industry?” Recently, my answer has been “The Internet of Things (IoT)”. The response is usually the same: “What on earth is that?”
Well, the short answer is that it’s a coined term that refers to any uniquely identifiable objects and their virtual representations in an Internet-like structure. So, pretty much anything! For example:
- a person with a heart monitor implant
- a farm animal with a biochip transponder
- a vehicle that has built-in sensors to warn when tyre pressure is low
…or any other natural or man-made object that can be assigned an IP address and provided with the ability to transfer data over a network.
So far, the Internet of Things has been most closely associated with machine-to-machine (M2M) communication in manufacturing, power, and oil & gas utilities. Products built with M2M communication capabilities are often referred to as being ‘smart’, e.g. smart label, smart meter, and smart grid sensor. However, it’s growing incredibly fast: according to Gartner there will be nearly 26 billion devices on the Internet of Things by 2020, and ABI Research says more than 30 billion devices will be wirelessly connected to it by 2020. So we need to plan for this influx of sensory data.
I say it’s the next big thing, but really it’s already here. Let’s look at a couple of real-world examples:
The virtual power station
Let me try to set the scene for you. Imagine a small town in America with several thousand houses, all fitted with bi-directional smart meters. These meters report power usage in real time and open up their control systems to external adjustment within pre-defined tolerances. The town’s power is provided by one physical oil-fired power station. The virtual power station is a number of servers in a data centre with network access to the houses, the physical power station and the Internet.
The virtual power station receives power demand telemetry from the smart meters and power generating capacity figures from the oil-fired power station. This is augmented with other decision-making information such as weather forecasts, social and sporting events, futures markets (the oil price), etc.
The virtual power station then automatically orchestrates power delivery and manages the whole power ecosystem. It uses predictive analytics to plan for peaks in demand in advance: for example, a hot sunny day when use of air-conditioners will peak, or a popular sports match when you would expect a peak in TV power consumption for its duration. The bi-directional control system allows the virtual power station to make minor adjustments to power-consuming devices in the home, i.e. turn down the heating by 1 degree in 10% of houses, or switch off 20% of swimming pool filter pumps for a short time. These small changes go unnoticed by the consumer but have a large impact on overall power demand.
So, in short, the whole town becomes a large sensor area network providing telemetry to a decision-making engine, which then automatically manages energy production, adjusts power demand and buys oil for itself in advance at a good price on the open market.
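The orchestration loop described above can be sketched in a few lines. The device names, per-home savings and capacity figures below are entirely hypothetical; this is only an illustration of the "small adjustments across many homes" idea:

```python
# Minimal sketch of demand-response orchestration: shave a predicted peak
# by making small adjustments across many homes, in priority order,
# until demand fits within generating capacity. All numbers hypothetical.

def shave_peak(predicted_demand_kw, capacity_kw, adjustments):
    """Apply adjustments (name, per-home saving in kW, homes available)
    in order until predicted demand fits within generating capacity."""
    applied = []
    for name, saving_kw, homes in adjustments:
        if predicted_demand_kw <= capacity_kw:
            break
        predicted_demand_kw -= saving_kw * homes
        applied.append(name)
    return predicted_demand_kw, applied

adjustments = [
    ("pause 20% of pool filter pumps",    1.1, 400),   # ~1.1 kW per pump
    ("lower heating 1 degree in 10% of homes", 0.4, 500),
    ("dim smart streetlights",            0.05, 2000),
]

demand, actions = shave_peak(predicted_demand_kw=9200, capacity_kw=8800,
                             adjustments=adjustments)
print(demand, actions)
```

Note that in this example pausing a fraction of the pool pumps alone is enough to bring the peak under capacity, so the heating and lighting adjustments are never invoked; consumers barely notice any single action.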
Remote heart pacemaker monitoring
Another really good example is based in Canada. Let me set the scene before the IoT solution.
A large number of patients fitted with pacemakers live in largely inaccessible parts of Canada. These patients need to be visited by a doctor at least once a year to check that the pacemaker is performing correctly. That typically means a doctor flying in on a small plane, taking up lots of valuable time and money, often only to find that the patient is perfectly fine. The net result is a large transport cost, with the doctors being only around 10% efficient.
So what do they do now? The IoT solution was to send each patient a device that they place next to their bed. This device monitors their pacemaker while they sleep and sends the telemetry back to data analytics engines in the cloud. The results of the analytics are sent to the doctors, allowing them to schedule trips only to the patients with unusual readings. The net result is that costs have come down dramatically and the doctors are about 90% efficient.
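The screening step lends itself to a very simple sketch. The thresholds, field names and patient IDs below are hypothetical; real pacemaker telemetry is far richer and the analytics far more sophisticated:

```python
# Sketch of the overnight screening step: flag only patients whose readings
# fall outside expected bounds, so doctors visit only those who need it.
# Thresholds and record layout are hypothetical illustrations.

def screen(readings, low_bpm=50, high_bpm=100, min_battery_v=2.6):
    """Return the patient IDs whose overnight telemetry looks unusual."""
    flagged = []
    for patient_id, avg_bpm, battery_v in readings:
        if not (low_bpm <= avg_bpm <= high_bpm) or battery_v < min_battery_v:
            flagged.append(patient_id)
    return flagged

overnight = [
    ("P-001", 72, 2.9),   # normal: no visit needed
    ("P-002", 44, 2.8),   # unusually low rate: flag
    ("P-003", 68, 2.4),   # battery below threshold: flag
]
print(screen(overnight))  # only P-002 and P-003 get a visit
```

The value is in the inversion: instead of flying to every patient, the doctors fly only to the flagged ones, which is where the 10% to 90% efficiency jump comes from.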
So what’s going to drive the IoT forward?
In my opinion it will be the growth and proliferation of sensor area networks and machine-to-machine (M2M) networks, along with emerging technologies such as Low Power Wide Area Networks (LPWAN). Think of LPWANs as large-footprint (30–50 miles), low-bandwidth Wi-Fi-like zones. These LPWAN areas are the ideal transport medium for any number of IoT sensors deployed across large areas.
What are typical sensor area networks?
Well just for starters how about these:
- Home Area Network – applications such as power metering, interactive control, home CCTV & monitoring, smart fridges
- Personal Area Network – wearable technology such as calorie monitors, pedometers, blood pressure monitors, Google Glass
- Vehicle Area Network – real-time engine monitoring, satellite navigation, emergency response detection, tyre pressure monitoring
- SCADA control systems in Oil & Gas refineries and Water treatment plants
All of these new sensor area networks are going to produce massive amounts of data that will need to be captured, collated, stored and analysed. End users and service providers alike will seek to provide extra value and services based on this data. We are going to see a large increase in “Big Data” analysis services, hosting and storage, IoT management solutions, IoT sensor deployment and IoT cloud services.
This is the future and a massive opportunity for everyone.
Paul Downe – CTO
ISN welcomes new chairman
Written by David Greenwood on April 1st, 2014
ISN’s management team has been strengthened by the appointment of Tom Smith as non-executive chairman
Tom brings enormous experience in growing companies like ISN to provide better services to meet the demands of the oil and gas industry. His technical background, business acumen and knowledge of the oil industry will be invaluable for our future growth.
Tom studied Marine Electronics & Telecommunications at Aberdeen Technical and Southampton College of Technology. He spent several years in the Merchant Navy as a Radio & Electronics Officer, working mainly for Canadian Pacific. Around 1975 he was asked to manage the electronic and telecoms element of the work scope for the conversion of a tanker intended to off-load oil from the recently installed ‘Montrose Alpha’ production platform in the North Sea. This was the first real introduction of the emerging offshore O&G industry to his home town, and it pointed the way ahead for his career.
Tom worked on the off-loading tanker for almost two years and, aged 26, left Canadian Pacific to join Plessey EAE. He led the onshore/offshore team managing the telecoms for Shell’s Brent Field. Recognising new opportunities in this sector, Tom then joined a small group of people to form Nessco Limited, initially a three-man operation offering telecoms engineering services to the O&G industry in the North Sea.
Over the first few formative years, Tom realised he needed to acquire a wider set of managerial skills and embarked on a programme of self-development culminating in the Institute of Directors’ Diploma in Company Direction. Through this period the company was expanding but certainly not in a linear manner. There were many ups and downs along the way particularly as Tom was driving the company to grow and diversify, with a strong emphasis on exports.
The next phase of growth included building custom-built premises in Aberdeen, opening offices in Baku and Houston, and acquiring a telecoms business in East Kilbride. The company went through a period of solid growth, underpinning the activity with a solid foundation of processes and systems including HSEQ, IT, Finance, HR, etc. The core revenue streams were a dynamic mix of opex- and capex-related revenues, aiming for a good balance of recurring, visible revenues and lumpier, higher-value project-related contracts. In 2003 the chance arose to acquire a major competitor and, advised by KPMG, the company acquired Invsat Ltd from Inmarsat plc. (Invsat was originally Plessey EAE, the company Tom started work with in the mid-seventies.) The integration process was focused and rigorous; a loss-making business was turned around in a short space of time, creating an enterprise with a significant footprint, critical mass and a global profile.
As the prospect of an ‘exit’ began to emerge, Tom consummated a very long courtship with Maven Capital Partners (formerly Aberdeen Asset PE) as the first phase. In July 2012 Tom finally exited the business, selling Nessco Group Holdings Ltd to RigNet Inc, a Nasdaq-listed company based in Houston.
As well as his involvement with ISN Solutions, Tom now acts as Energy Sector Advisor to PwC and works with extreme sports centre, Transition Extreme. We look forward to working with Tom to help take our business to the next level.
David Greenwood – CEO
Systems Engineer required for oil and gas focused infrastructure provider
Written by David Ellison on April 1st, 2014
ISN Solutions is a London based IT consultancy and service provider specialising in the upstream oil & gas industry. Our business is built on unrivalled international field experience, technical ability and customer service. Excellent staff are the key component of how we deliver results. If you are passionate about your work and want to help towards our company’s and our clients’ success, we want to hear from you.
We are looking for an experienced systems engineer to join our engineering team based in Notting Hill, West London. Strong candidates will have opportunities to gain unique experience, grow their careers and take on additional responsibility. Occasional travel may be required (recent trips included Singapore, Nigeria, Turkey and Kenya). ISN encourages its engineers to train for exams and gain vendor certifications to increase their skills.
Summary of tasks and responsibilities
- Provide technical support for our clients’ and our own internal systems
- Take responsibility for client sites, the front and back office systems they use, carrying out scheduled tasks and system admin
- Act as a point of escalation for the service desk team
- Meet SLAs for escalated incidents/requests
- Take ownership of complex technical tasks
- Plan and execute technical work according to company standards and processes
- Produce quality documentation including network diagrams
- Act as a technical resource for the project delivery team
- Mentor and support junior engineers
Skills and experience
- Excellent communication skills and good self-awareness to respond appropriately to different clients
- Excels in high pressure environment, ability to effectively prioritise is a must
- Ability to focus on doing work that ‘makes a difference’
- Excellent team worker and ability to draw on a wide range of sources
- Flexible and organised approach to work with a can do attitude, self-motivated
- Possess excellent time management and attention to detail
- Translate client needs into technical requirements
- Can explain complex technical solutions to non-technical audiences
- Willing to travel to client sites primarily in London, occasionally abroad
- Willing to work out of hours when necessary
- 5 years’ exposure to MS Server, Exchange and desktop products
- Good understanding of storage solutions such as SAN and NAS
- Good understanding of networking principles and concepts
- Proven technical ability across common product suites
- Certifications are highly desirable in Windows Server, Exchange, VMWare, Citrix, NetApp
- Exposure to the following would be advantageous: Cisco, Riverbed, Juniper, AVAYA, FlexPod, WebSense, ConnectWise, N-Able, ITIL, BOSIET
- Hours: 37.5 hours a week Monday to Friday
- Shifts between 8:00am and 6:00pm
- Must be flexible to work out of hours when required
- Salary: up to £38k + benefits + paid overtime
Email your CV to firstname.lastname@example.org with “Systems Engineer” in the subject line
Direct applicants only, strictly no agencies
ISN Solutions secures £4.6m investment from Maven
Written by David Ellison on March 13th, 2014
ISN is pleased to announce a £4.6m investment from Maven Capital Partners. The deal acknowledges the significant progress seen in ISN’s business over the last five years and recognises the opportunity for further growth in ICT provision to the upstream oil and gas sector.
ISN has predominantly served clients who are based in London and operate overseas. However, Maven’s knowledge of the global oil and gas sector, and its extensive contact base, will provide added value and a competitive edge for ISN as it looks to expand into Aberdeen, where Maven has a long track record of oil and gas investment.
David Greenwood, Managing Director at ISN commented “It gives me enormous pleasure to be given the opportunity to enter into a long lasting partnership with Maven Capital Partners. Our association to date has been an extremely positive one and we are certain that together we have all the ingredients to achieve our growth ambitions. This is a significant milestone in the evolution of ISN and we very much look forward to beginning this new chapter in our development.”
Stella Panu, Partner at Maven, said: “Maven Capital Partners is delighted to announce the completion of a strategic investment in London based ISN Solutions Ltd, a specialist IT services provider to the oil and gas sector. The ISN Solutions management led by David Greenwood has been hugely successful in growing the business over the past five years, and we look forward to working with the team to help them meet and exceed their ambitious plans for the future.”
The investment accelerates the ambitious plans ahead for ISN and heralds an exciting next chapter in the future of the business.
For further information please call Stella Panu at Maven on +44 20 3102 2751
Cloud Expo Europe and Data Centre World
Written by Paul Downe on March 10th, 2014
So, off I went to the 2014 Cloud Expo Europe show expecting it to be much like last year. I was wrong; a record breaking 5,000 people flocked to ExCeL with 500 exhibitors and hundreds of the world’s leading industry voices from the worlds of cloud and data centre technology were on show to us mere mortals.
I attended a number of the keynote speeches across both days. Some of the more interesting ones were by former Netflix CTO Adrian Cockcroft, and Nebula CEO Chris Kemp, formerly CTO at NASA. But all had useful insights into the future of cloud technologies and services.
Some of the salient points from these speeches were:
- The massive marketing engine has moved away from spouting about convergence of infrastructure; it is seen as old news. Basically, stop talking about convergence: it’s happened! Everything has converged already.
- No one is interested in commodity services alone, so differentiate yourself with enhanced services rather than bells and whistles on the hardware
- Orchestration and automation are everything – it’s all about the workflow
- With the continuing growth of “Everything as a Service” solutions, customers are struggling to manage and integrate the multiple cloud solutions
Some interesting comments from various cloud surveys during the conference suggested that the level of adoption and rate of change of cloud services is increasing, and that nearly 50 per cent of businesses are planning to make big changes to their cloud(s) to accommodate planned growth in the next 12 months (no surprise there). More interestingly, a new survey found that more than half (53%) of UK businesses using cloud providers to run all or part of their IT infrastructure do not feel that a single provider is capable of meeting their requirements, and only a quarter of organisations feel that their cloud provider really understands their business.
This has given rise to a whole new breed of provider: the “cloud integrator”, a provider that can manage multi-provider services and platforms. With increasing SaaS, PaaS and DaaS services being consumed from different providers, it will become more beneficial for companies to work with these new cloud integrators rather than a single cloud provider.
So what are the key takeaways for managed service providers in the upstream oil & gas industry like ISN? Well, in my opinion, we need to look at these trends and understand the feedback from customers. We must make sure we can provide the forward thinking, vision, solutions and services that fully align with our customers’ business needs.
In general the upstream industry is a slow adopter of full cloud services, mainly due to very large data sets and heavy security requirements. However, this will change as new technologies and offerings propagate through our industry (typically in hybrid cloud environments). We need to be prepared to work in these mixed cloud architectures as designer, solution manager and service provider.
The cloud based solutions which truly meet the business needs and offer tangible benefits to the oil & gas sector, e.g. remote geoscience workstations and technology agnostic Desktop as a Service (DaaS – Any desktop on any device) will be the catalysts for further cloud adoption in the industry. These are exactly the future developments we at ISN are working on.
Get Microsoft Lync working with Cisco Call Manager
Written by Michael Papalabrou on March 2nd, 2014
Microsoft Lync enables users to connect in new ways and to stay connected, regardless of their physical location
Next generation UC platform
Lync has many communication and collaboration features, all integrated in one easy-to-use and well-designed client. Its tight integration with the Office suite makes it a business hub for connecting with your colleagues and collaborating (federating) with other organisations. It is therefore no surprise that Lync quickly reached second place worldwide among unified communications platforms. With its fully-featured, cloud-based Office 365 variant, it can easily transition a business to the next generation of cloud-based computing. Microsoft have quoted growth of 25% for Lync in Q4 2013.
However, a very common problem for IT managers and budget owners is the protection of their past investments. Lync can replace traditional phone systems and bring new features, but as businesses have already invested large amounts in standard IP telephony, replacing them with Lync is a hard decision. This might be the reason why statistics show the vast majority of Lync users are not implementing voice.
The option of deploying Lync independently is not very attractive: users are confused by the choice between their desk phone and the software client, and they do not easily switch to USB or Bluetooth headsets. The result is that they keep using their phones as before, and the hoped-for collaboration benefits are not realised.
Powerful collaboration features
But now, there is a solution to all these problems. In Lync Server 2013, support for remote call control scenarios enables users to control their private branch exchange (PBX) phones by using Lync 2013 on their desktop computers. Lync can be integrated with mature IP telephony platforms like Cisco Unified Communications Manager and its Business Edition variant. The integration feature allows Lync to control traditional IP desk phones, while maintaining all its powerful collaboration features. With Lync’s Active Directory integration, a contact in every Office application will have a multimodal communication panel that will include all contact details and phone numbers. Dialling a number could not be easier; just one click and the desk phone will dial the appropriate number. The conversation can be continued any time by just picking up the handset.
The users now can benefit from this approach in many ways:
- They can have accurate presence information for their colleagues; when someone is talking on the phone, presence will be automatically set to “in a call”. Users can now be sure that they’re not interrupting an important phone call when their colleagues are shown as being available.
- They receive missed calls and voice messages as attachments in their email inbox
- They can instantly call someone from their computer screen instead of manually dialling the number – a huge productivity improvement for busy people
For businesses, this integration brings investment protection for existing phone systems and it gets people to accept the idea of using the Lync client as a phone. It also solves known immaturity issues of Lync as a traditional IP PBX replacement including complexity in applying quality of service in the network, feature gaps and more.
Cisco telephone systems seem to be a de facto standard within the oil industry; integration with Lync has been something of a black art in the past with little help coming from either Microsoft or Cisco. Lync is as revolutionary today, in our opinion, as email was in the nineties. If you are interested in unlocking its capabilities and building upon your existing phone system, please call us on +44 20 7313 8300 and ask to speak to a sales consultant.
CTO Blog – Remote Geoscience
Written by Paul Downe on February 18th, 2014
Centralised processing power and data
One of the newest and most interesting IT concepts in the oil and gas industry at the moment is the possibility of running geological and geophysical (G&G) applications remotely. There are a number of really good reasons you would want to do this, e.g. data consolidation and centralisation; security; collaboration; reduced data duplication; and version control.
The current way of working is via very powerful, graphics-intensive local workstations with data storage locally or very close by (on-premise). If you are a large company, you are likely to have multiple instances of these workstations in different office locations around the world each with copies of the same data.
Wouldn’t it be great to have all of this data and graphics-intensive processing power in a centralised data centre that could be accessed from any location and also from any device? Well, that is the goal of this architecture. (Yes, we are talking about a cloud-based solution!)
ISN has put together a test solution with its partners Cisco and Nvidia and run a proof of concept (PoC) to test the capabilities and performance of remote G&G on real customer data to see if the hype is justified.
So, what will the PoC consist of?
For our purposes, we have chosen a Cisco C240 M3 server and the Nvidia K2 GPU to serve as the data centre host for the solution. We will run Citrix XenDesktop with HDX to enable the remote desktop access. The Cisco C240 M3 Rack Server was our first choice for the PoC because it is designed for both performance and expandability over a wide range of application workloads. It is also part of the Cisco UCS solution; this means it can be subsumed into a FlexPod type solution. All good news if you have seen my comments on FlexPod.
We chose the Nvidia K2 because Citrix and Nvidia have been collaborating since 2008 on graphics hardware acceleration for virtualised 3D applications. The latest offering from their collaboration is the Nvidia VGX K2 card, which can be used with XenDesktop HDX 3D Pro. It also provides a new option for XenApp 6.5 HDX 3D, where each GPU serves a multi-user Windows Server 2008 R2 virtual machine. The VGX K2 offers outstanding rendering performance with two workstation-class Kepler GPUs on each board, each with 1536 CUDA cores and 4 GB of video RAM. In my opinion this will provide more than enough GPU horsepower for the G&G applications.
Best end-user experience
To give the end user the best possible experience, we intend to optimise the PoC WAN capacity with a Citrix CloudBridge. This will reduce bandwidth consumption per desktop by up to 80% and application traffic by up to 95%. We can leverage the unique reporting and QoS capabilities of CloudBridge to understand application performance and manage the bandwidth accordingly.
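As a rough illustration of what those reduction figures mean in practice (the per-desktop bandwidth and desktop count below are hypothetical example numbers, not CloudBridge specifications or PoC measurements):

```python
# Back-of-envelope WAN sizing for a remote site behind an optimiser.
# The per-desktop figure and desktop count are hypothetical examples.

def effective_bandwidth(raw_mbps, reduction_pct):
    """Bandwidth that actually crosses the WAN after optimisation."""
    return raw_mbps * (1 - reduction_pct / 100.0)

desktops = 20
raw_per_desktop_mbps = 2.0                      # hypothetical unoptimised figure
raw_total = desktops * raw_per_desktop_mbps     # 40 Mb/s unoptimised
optimised = effective_bandwidth(raw_total, 80)  # "up to 80%" reduction

print(f"{raw_total:.0f} Mb/s -> {optimised:.0f} Mb/s over the WAN")
```

For a bandwidth-constrained satellite or microwave link to a remote site, shrinking the desktop traffic by that ratio is often the difference between the remote-workstation model being viable or not.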
Over the next few weeks we will be building the solution and mapping out the testing criteria and schedule. I will report back in this blog to update you all on how the PoC is progressing.
CTO Blog – Flexible data centre architecture
Written by Paul Downe on February 11th, 2014
Ever wondered what the best approach to a flexible data centre architecture might be?
In my opinion, the answer could well be FlexPod. Having spent over 25 years in the IT industry, I have seen a lot of changes and innovations: some good, some bad and many mediocre. FlexPod from NetApp and Cisco is one of the good ones.
Put simply, it’s an efficient, scalable, reference architecture that combines industry leading hardware from NetApp and Cisco. The architecture is endorsed and supported by hypervisor and software vendors such as VMware, Microsoft and Citrix. These vendors have collaborated to produce a standardised, pre-tested, pre-validated, modular solution that hits the ground running.
From a technical perspective
The FlexPod stack consists of NetApp FAS storage, Cisco Unified Computing System (UCS) servers and Cisco Nexus network switches, and either VMware or Microsoft hypervisor technology. This standardised approach holds significant benefit in reducing risk and takes the guesswork out of architecting new environments, making the delivery of scalable and flexible platforms easy.
The key concept of FlexPod is to transform the physical infrastructure into dynamic pools of data centre resources, creating a shared virtualised infrastructure that is both flexible and efficient whilst still retaining the control and security of a dedicated environment.
From a business perspective
Deploying FlexPod architecture enables businesses to become more agile, reducing the time it takes to deliver new products and services to market compared with traditional IT delivery models. The FlexPod architecture also requires less technology, which simplifies your data centre: the support team spend less time looking after technology and more time delivering new projects and enabling innovation.
One of the big issues I have experienced in the past when using technology from multiple vendors is support while diagnosing faults or trying to remediate problems. You end up with a bun fight between storage, server, networking and software vendors all saying, “Our bit works fine, it must be something else!” This is not the case with FlexPod. NetApp and Cisco have established a cooperative support model: you choose which vendor to call based on your initial assessment of the problem’s origin, and multi-vendor engineers then respond to resolve the issue. The cooperative support model includes an ecosystem of software partners such as VMware, Microsoft, Citrix, Red Hat, Oracle, and SAP, among others.
In conclusion, deploying FlexPod will simplify your IT environment, giving you predictable performance, on-demand scalability and industry leading data protection. It allows you to meet changing business demands by delivering rapid, repeatable, cost-effective and consistent IT services.
ISN welcomes new CTO, Paul Downe
Written by David Ellison on September 23rd, 2013
“I am really excited to be able to bring my experience to ISN, supporting their journey to develop new and exciting solutions for the upstream oil & gas industry.”
During more than 25 years in the IT industry, Paul has directed some cutting-edge, enterprise-class solutions and projects. Most recently he has been a principal consultant specialising in leading large IT transformation designs for enterprise-level organisations, helping them do more with less and develop their cloud strategy.
Before joining ISN, Paul was Global Solutions Architect at Dell Services & Solutions, where he initially ran pre-sales engagements for large ($100M+) transformational opportunities, including with large telco, logistics and legal companies. He then moved into a practice development role, which involved harvesting and creating new intellectual property (IP) for the group. Paul spearheaded the use of standardised, reusable processes and artefacts for both large and medium-sized transformational projects.
Prior to Dell, Paul spent five years at Comdisco building data centres and work-area recovery facilities for disaster recovery purposes, and was responsible for helping customers develop, test and improve their business continuity and disaster recovery plans. On multiple occasions those plans were tested by real invocations of the service, triggered by incidents such as power failures, equipment failures, denied access and acts of terrorism.
Paul is looking forward to bringing the enterprise experience gained at these companies – as well as at the BBC, BT, Intel, Fujitsu and Siemens – to bear on IT challenges within the energy sector, helping ISN’s clients develop scalable, enterprise-class infrastructure to support exploration and production globally.