May 11, 2015 / berniespang

Who would have thought Storage would become so exciting again?

I recently dined with a new business partner and we shared our respective histories to get to know each other better. I noted that I am relatively new to the Storage world, having spent my early career in the System z (a.k.a. mainframe) hardware lab, and almost 18 years in IBM Software.  He, on the other hand, had spent most of his career building or selling Storage systems. He said that he is surprised that after many years of Storage being “relatively boring,” it has again become a dynamic and exciting part of the industry.

I was struck by how his words were similar to ones I had heard from a colleague a few years back when referring to data management software. At that time I was still fairly new to that part of the IBM software business. When I moved to the Information Management team, my friends asked me: “Why move to such a boring part of the business – database software is pretty much what it is going to be?” They did not see that the “Smarter Planet” transformation was beginning to drive exciting innovations, beyond the relational database, needed to bring about a new generation of Big Data Analytics.

When I joined the Storage team last year, I heard a similar question from my friends, and this time I was prepared. I told them that this part of the business is ripe for transformative innovations. As the world generates more data every day from new systems and devices, and adopts new ways to analyze it for greater insights and business value, it needs a significant leap forward in how that data is securely stored and accessed – faster, easier and at lower cost.  As my new friend Eric Herzog is fond of saying, “our clients are experiencing oceans of data… and surf’s up, dude!”

As I prepare for the Edge2015 conference, reviewing the IBM Storage portfolio news and the many compelling stories our clients will be sharing with each other this week in Las Vegas, I am excited to find myself again in the right place at the right time. Several innovations are driving fundamental shifts in how our leading edge clients are skillfully surfing their growing data oceans:

  • Flash storage has become cost effective for “tier 1” storage, and is on an accelerated price-performance curve that spinning disk will not be able to match
  • Storage capabilities are being deployed as software that supports a variety of physical servers and storage devices, offering unprecedented flexibility and cost savings potential
  • Object Storage is emerging as a much easier and more cost-effective way to handle the growing volume of unstructured data in files, images, videos, etc.
  • Cloud based storage services, while still emerging for broad commercial use, have the potential to offer new levels of simplicity and cost efficiency
  • Innovations in venerable Tape storage continue to push the boundaries of density and cost efficiency, in both on-premises infrastructure and as a foundation for Cloud storage services

IBM is helping our clients take advantage of these shifts to improve costs and accelerate business growth. In February, we launched Spectrum Storage, a new portfolio of Software Defined Storage capabilities. This week we are announcing an exciting addition, a new feature of Spectrum Control called Storage Insights. Available first as a service on the IBM Cloud, Storage Insights can be deployed in under 30 minutes and provides analytics to optimize on-premises storage infrastructures, ease capacity planning, reclaim under-utilized storage and lower the cost of storage by up to 50 percent per GB by optimizing data placement.

Another example is a new active archive Cloud service we are introducing as a Technology Preview and will be demonstrating at Edge.  This service, which has entered a pilot phase with our service partner Iron Mountain and a number of design sponsor clients, will enable users to store large amounts of data as Objects via OpenStack Swift, and easily retrieve it on demand, at the lowest cost possible.
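
For readers who have not worked with OpenStack Swift, the interaction model is straightforward: data is written and read as named objects inside containers over an HTTP API, which is what makes a large archive easy to retrieve on demand. Below is a minimal sketch using the open source python-swiftclient library; the endpoint, credentials, container and object names are hypothetical, and the actual service’s authentication details may differ.

```python
from swiftclient.client import Connection

# Hypothetical auth endpoint and credentials for a Swift-compatible archive service.
conn = Connection(
    authurl="https://archive.example.com/auth/v1.0",
    user="archive_user",
    key="secret",
)

# Archive a large file as an object in a container.
conn.put_container("cold-archive")
with open("transactions-2005.csv", "rb") as f:
    conn.put_object("cold-archive", "transactions/2005.csv",
                    contents=f, content_type="text/csv")

# Later, retrieve it on demand for analysis.
headers, body = conn.get_object("cold-archive", "transactions/2005.csv")
print(headers.get("content-length"), "bytes retrieved")
```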

Archived data is a huge treasure trove of untapped insights. Consider the decade’s worth of stored transaction and customer data that holds unseen trends that could help predict future opportunities.  Previous-generation data archives may have been cost-efficient ways to hold onto data, but today our clients need long-term storage with the ability to easily retrieve that data at low cost to unlock all of its business value.  Innovations such as Spectrum Scale and Spectrum Archive software, which provide policy-driven, automatic placement and movement of data among storage tiers, can be deployed on premises, in a Cloud or across Hybrid Cloud environments to provide new-generation data protection, retention and lifecycle management solutions.

These are just some examples of what attendees will hear about at Edge2015 this week. I hope you are joining us in person, but if not, I hope you are able to follow it online and read what the attendees have to say.  Too bad the event is not at Mandalay Bay; I would love to see Eric @zoginstor riding a Spectrum Storage surfboard in the wave pool!

March 4, 2015 / berniespang

Two pieces of advice: Don’t let Cluster Sprawl happen to you… and, Don’t drown in the Data Ocean, ride the wave

Another exciting conference week in Las Vegas has come to an end.   (Unless you count the flight home… then it still has a few more hours to go.)   This time it was the first IBM InterConnect conference, combining three previous events into a mega-event focused on innovations for the Cloud era.  From what I saw and heard from others it was a terrific event – kudos to all the teams involved.

The conference was a great opportunity for me to speak with clients, partners, prospects and IBMers about Spectrum Storage and Platform Computing.  It also turned out to be a great opportunity to brainstorm a bit with our new marketing VP, Eric Herzog, about how to communicate the value of this Software Defined Infrastructure portfolio in a way that is clear and concise.

Ideas crystallized and were tested live with press and analysts, clients and sellers.   My top two are condensed in the title of this post.

IBM Software Defined Infrastructure provides a unique set of capabilities that help our clients:

  • Avoid cluster sprawl by using all compute and storage resources as a single pool that is efficiently shared among a broad set of large scale, high performance applications and analytics
  • Safely ride the wave in an ocean of data by deploying highly efficient and deeply integrated storage solutions, with unprecedented deployment flexibility: as software, as a service, as a system – on-premises, in the Cloud and across hybrid environments

What the heck is Cluster Sprawl and who cares?

Remember the days when folks put each new application on its own physical server with its own storage?  Remember how large those inefficient server and storage “farms” grew?  And how much money was spent on underutilized resources?  And how much more money was spent on evolving them to a virtualized compute and storage environment where a physical resource could be shared among many apps?   (Following the well-proven lead of IBM z Systems, a.k.a. the mainframe.)

Well it is starting to happen again.  New generation apps and analytics are increasingly looking like traditional high performance / supercomputing workloads that rely on compute and storage clusters to handle large volumes of data at high speed through parallel processing.   As each new scale-out application appears, a new cluster appears to run it.  With the expected result: a growing number of underutilized clusters that are costing their owners more than they should.   What’s worse is that the apps and analytics are often running slower than they could if unused resources in other clusters could temporarily pitch in to help.

One of our clients, a global financial services provider, has prevented cluster sprawl by implementing a software defined infrastructure with IBM Platform Computing and Spectrum Storage – reducing costs and increasing performance of some workloads by 100x.  (That is not a typo: 100 times, not 100 percent, which would be only 2x.)

In a world where faster business processes and deeper business insights are increasingly run on platforms such as Hadoop, Spark, Cassandra as well as traditional scale-out databases and data warehouses, maybe the better follow-on question to “What the heck is cluster sprawl?” should be:  “Is there any organization that can afford to not care?”

When did data pools overflow data lakes and become a Data Ocean? 

I will save this answer for my next blog entry.   If you can’t wait, watch Eric Herzog, Live from Interconnect 2015.

February 17, 2015 / berniespang

IBM advances Software Defined Storage leadership with new IBM Spectrum Storage

I have been fortunate in my career to have been involved in a number of significant technology transformations that have impacted both IBM and the clients we serve.  I was on the team that worked with Sun Microsystems and across IBM to establish Java as a “write once, run anywhere” application platform; on the team that worked with Microsoft to launch the early Web Services standards and Service Oriented Architecture; and on the IBM team that launched the Eclipse open source community and led the transformation to an open application development environment.  It is exciting to be part of a new generation of information technology that enables IBM clients around the world to better serve their customers, patients, and citizens.

My good fortune continues as I am now part of the IBM team leading a transformation to Software Defined Storage as we launch IBM Spectrum Storage.

We live in a world that is increasingly data driven.  The growth of Mobile and Social apps used by people all around the world; growth of meters, sensors and cameras on practically everything capturing data to be analyzed in near real time; and growth of laws and regulations requiring long term data retention, are all factors driving an explosive growth of stored data.  Traditional storage solutions were designed for a different set of applications and usage patterns.   They are too rigid and inefficient to address today’s needs cost effectively.

A more agile storage environment is required to cost effectively handle a broad scope of data whose business value may change over short periods of time.  The insatiable “need for speed” must be answered with breakthroughs that do much better than throwing more inefficient systems at the problem.  That is why there is growing market buzz about the difference Software Defined Storage can make.

IBM Spectrum Storage is a comprehensive set of storage intelligence that is delivered with unmatched flexibility – as software, as Cloud services or pre-integrated in systems and appliances.  These capabilities can be used to optimize storage of files, objects and data on storage-rich servers and hundreds of storage systems from IBM and other companies.  They can be used on premises, in the cloud, and across hybrid cloud environments to optimize both performance and cost.

While IBM Spectrum Storage is new and includes a number of new innovations such as cloud-based storage analytics, it is also based on a proven set of technologies that include more than 700 IBM patented innovations used by thousands of clients around the world.  This combination of innovation and proven reliability is a compelling value for solutions that manage and protect the data that are among an organization’s most valuable assets.

Now that we are launched, and I am back on the blogging bandwagon, I look forward to sharing stories about how clients are redefining data economics with IBM Spectrum Storage.

January 24, 2015 / berniespang

Time to rejoin the blogging community

I have to admit – I had hit a period of burnout.  I had been leading IBM Database Software & Systems Marketing & Strategy for some time, and it was time for a change.

A little over a year ago I moved to a new role – leading Strategy for Software Defined Environments in IBM Systems & Technology Group.  It was a wild first year, with a lot of changes – including picking up business line responsibilities for Elastic Storage, our scale-out software defined file & object storage offering – known to most as General Parallel File System, and our software defined computing portfolio, Platform Computing software.  It was a busy year and I had a lot to learn.

We are now at a very exciting point in IBM, having worked through a challenging transformation year in 2014.   I am now business line VP for Software Defined Infrastructure in the new IBM Systems team, and am fired up about our new organization, our product portfolio and the plans we have for 2015.  As such, I figure it is time for me to get back to posting regularly.

That’s enough about me and the change of topic for this blog.  As a tease for the next post… I will have a lot to talk about next month as we head into the IBM InterConnect conference.    I invite you to follow the link and register to join us in Vegas at the end of February.  Travel safe.


April 28, 2013 / berniespang

BLU Acceleration marks the beginning of a new generation for big data analytics

I would imagine you are thinking that headline is a pretty bold statement.    And when I tell you that BLU Acceleration is an exciting capability being introduced in the new DB2 this quarter, you may think it bolder still.

If you have not read any of my past blogs, you may be asking “what does database software have to do with Big Data?”   The most important thing to remember is that meeting today’s “big data” challenges requires different types of systems that use different technologies for managing and analyzing different data in different ways.  This is why the world now has a diverse set of NoSQL systems that have been added to the traditional SQL database systems.    And this is why IBM has added new systems (e.g., for Stream and Hadoop processing) as well as new NoSQL capabilities added to SQL systems (e.g., XML and RDF Graph databases added to DB2, and TimeSeries and Spatial database capabilities in Informix).

In a recent discussion with an industry analyst, I was surprised to learn that he considers in-memory, columnar management of a SQL relational database to also be NoSQL.    He revised my definition of NoSQL to be: Not Only traditional row-based relational data management via SQL.  And so with the introduction of BLU Acceleration in the new DB2, it becomes a NoSQL data system for another reason.  BLU Acceleration is dramatically easier and faster for analytics on terabytes of data.  For many organizations, this enables cost-effective analytics of more data and for more users.
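
To make “dramatically easier” a bit more concrete: with BLU Acceleration, the columnar, in-memory behavior is selected simply by how a table is organized, with no indexes or aggregates for the application team to design.  Here is a small, hedged sketch using the ibm_db Python driver; the connection string, schema and table are invented for illustration, and the ORGANIZE BY COLUMN clause reflects the new DB2 DDL as I understand it.

```python
import ibm_db

# Hypothetical connection details for a DB2 database with BLU Acceleration.
conn = ibm_db.connect(
    "DATABASE=SALESDB;HOSTNAME=db2host.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=analyst;PWD=secret", "", "")

# A column-organized table: BLU handles compression, in-memory columnar
# processing and data skipping automatically.
ibm_db.exec_immediate(conn, """
    CREATE TABLE daily_sales (
        sale_date   DATE,
        store_id    INTEGER,
        product_id  INTEGER,
        revenue     DECIMAL(12,2)
    ) ORGANIZE BY COLUMN
""")

# Ordinary SQL analytics run against it unchanged.
stmt = ibm_db.exec_immediate(conn, """
    SELECT store_id, SUM(revenue) AS total_revenue
    FROM daily_sales
    GROUP BY store_id
""")
row = ibm_db.fetch_tuple(stmt)
```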

In his blog, consultant and IBM Champion Dave Beulke called BLU Acceleration – Best yet for Big Data!  He asserts that there are cases where Hadoop systems are being used or considered for analyzing data, where using BLU Acceleration will be a simpler and lower-cost solution.   (Note: neither he nor I am asserting this is true for all Hadoop use cases.  The point is – no one technology, including Hadoop, is the best answer for all needs.)

I invite you to learn more by joining our web broadcast on April 30.   You can also attend or follow the International DB2 User Group conference, which will be held in Orlando, Florida this week.

Speaking of User Groups, my thanks to the International Informix User Group team that hosted their conference this past week in San Diego.  It was great meeting with members of this community and seeing both new and familiar faces among the attendees.  There was a lot of positive feedback about the enhanced capabilities in the new Informix 12, including extending the use of Dynamic In-memory technology (shared with BLU Acceleration) to TimeSeries data – simplifying and accelerating operational analysis and reporting of growing smart meter and sensor data.

For more Big Data stories and to add your thoughts, I encourage you to join the conversation at the Big Data Hub.

February 9, 2013 / berniespang

The Continuing Role of the Database in the New Era of Big Data

Big data is all about scaling the use of data beyond the norms of the current era of information technology.

You could reasonably argue that the first big data era began more than a half-century ago. On May 25, 1961, President John F. Kennedy gave a speech to the U.S. Congress in which he declared the goal of landing a man on the moon, and returning him safely to Earth.  The amount of data generated and managed throughout the program quickly outgrew data systems of the time.  A brand new “Information Management System” (IMS) was created by IBM and other members of the Apollo team to tackle this new big data challenge.

Now, fast forward more than 50 years and we have ushered in a new era of big data, ignited by the global “Internet of things,” mobile, social and cloud computing, and instrumented systems of all kinds.  Now every transaction, tweet or meter reading has potential value to enhance or destroy a customer relationship; to drive a new business opportunity; or to catch a bad guy.   New types of data systems are needed to handle more data and more types of data, faster and more cost effectively than systems that were state of the art just a few years ago.

The key to making big data work for business is using systems that are designed for workload optimized performance and simplicity.  In some cases that means completely new systems to handle challenges like analyzing data in motion, or spreading complex work among a large number of distributed systems.  In other cases, new capabilities are added to proven systems such as IBM DB2 and Informix, to provide a new mix of production grade capabilities – e.g., for both SQL and NoSQL databases.

Solving today’s big data challenges often requires combining the structured, optimized approach of traditional database systems with the less structured, exploratory approach of new systems.   In fact, modern versions of technology created decades ago may be the best choice for new enterprise challenges; ones that also benefit from their time-proven stability, maturity, and manageability.

So what’s the role of a relational data system in this big data era? 

Some IT professionals may take relational and pre-relational database technologies for granted, but they remain the trusty workhorse in most data centers.  These proven platforms continue to handle the growing volume of data and faster transactions from applications that conduct business every second of every day.   They also enable deep analysis of that data to help organizations make better decisions with the speed needed to affect business operations as they execute.

Organizations leading the pack in big data ingenuity are the ones using the best combination of systems – traditional or new – for each need.  For many organizations building complex systems, running global banking networks, or delivering millions of packages around the world every day, that includes using the modern descendant of the data system that played a small role in a giant leap for mankind.

Look for more thoughts about Big Data at the speed of business from me and other followers of database technology in the coming weeks.

And if you’re interested in IBM’s next Big Data event, go to this link for details: http://ibm.co/BigDataEvent

November 11, 2012 / berniespang

Information on Demand 2012

Information on Demand 2012 was another great week this year, with a record number of attendees – over 12,000 IBM clients, partners, analysts, reporters and IBMers from around the world.

For those of you who did not join us last month, here is a summary of announcements made at the event.  Also, the folks at Wikibon have assembled a nice set of videos and articles you should check out.  Actually, those of you who were there would also find these summaries valuable.

A few interviews to highlight, given the subject of this blog:

For me it was a particularly exciting year as it also marked the end of “launch month” for our new PureData System.  But the real excitement of this event is the in-person interaction with clients, partners, IBM Information Champions, and analysts.  In a job dominated by conference calls and video chats, having the opportunity to participate in less formal conversations is a welcome change.  It is particularly interesting to listen to exchanges among different clients about the challenges they face, and how they are using IBM technologies to meet them.

Speaking of clients and IBM technology… the InfoSphere, Data Management, and System z product demo rooms and hands-on lab sessions were packed all week.   This conference continues to be a nice mix of technical details and strategic discussions about the application of technology to improve business results.  I spoke to several clients who each had a large group at the conference made up of business and IT leaders as well as architects, developers and data professionals.

On a final note, Barenaked Ladies and OneRepublic both put on great shows – a welcome mid-week break from the very full days of business and technical talk.

I hope we see all of you next year…  November 3-7, 2013

November 2, 2012 / berniespang

Introducing the new PureData System

For those who may have noticed, I should explain my long absence from this blog.   For the better part of this year my team and I have been “heads down” preparing for and executing the introduction of the new IBM PureData System.  Not having much time to spare was only part of my excuse.  The real reason was a lack of energy and inspiration to write even one more piece beyond what was needed for the launch and for the IOD 2012 Conference last week.

Now that both events are behind us, it is time for me to get back on track….

IBM PureData System

PureData System is the newest member of the IBM PureSystems family of expert integrated systems I wrote about in April.  It is offered in three models that deliver optimized performance for transactional, analytic and reporting, and operational analytic workloads.   As an expert integrated system, each PureData System model integrates software, hardware and built-in expertise that simplify the entire system life cycle – from procurement through retirement.

PureData System provides an efficient, high-performance and high-scale data platform – delivering the data services needed for different types of transactional and analytic application workloads.   Providing this value across different application types requires software and hardware that are designed, integrated and tuned specifically for each type.  Typically, organizations spend their valuable time and resources to design systems of general purpose components and then procure, integrate, configure, tune, manage and maintain each system for its specific use.  PureData System dramatically reduces the time, cost and risk of deploying and maintaining these systems.

  • PureData for Transactions:  integrates DB2 pureScale to deliver highly available, high-throughput transaction database clusters that easily scale without the need to tune the application or database.  This PureData System is available in three size configurations and can be used to consolidate more than 100 database servers.
  • PureData for Analytics:  is powered by Netezza technology and is the newly enhanced replacement for the Netezza 1000 (formerly known as TwinFin).  It is optimized for simplicity and performance for analytics and reporting data warehouses.  This new model delivers 20x concurrency and throughput for tactical queries compared to the previous generation of Netezza technology, and offers the industry’s richest library of in-database analytics functions.
  • PureData for Operational Analytics:  integrates InfoSphere Warehouse software for operational data warehousing that can support continuous data ingest and more than 1,000 concurrent operational queries, while balancing resources for predictable analytics performance.   It also delivers DB2’s adaptive compression, which clients have used to achieve up to 10x storage space savings.  This PureData System model is a new generation that replaces the Smart Analytics System 7700.

Note:  See my previous blog regarding different types of data warehouse workloads.

DB2 Analytics Accelerator and the new zEnterprise Analytics System

And if that were not enough, we have also integrated the power and simplicity of Netezza technology with the reliability and security of System z to deliver cost-efficient, high-performance analytics and operational analytics on data managed by DB2 for z/OS.   System z clients now have the opportunity to greatly simplify and reduce the cost of analyzing their most critical business data.

  • DB2 Analytics Accelerator:  The same Netezza technology that powers the PureData System for Analytics, also powers the newly enhanced DB2 Analytics Accelerator which integrates with DB2 for z/OS for high performance analytics – without modifying applications or the database.  The new High-performance Storage Saver capability reduces demand on System z storage space without sacrificing performance.
  • zEnterprise Analytics System:   combines the new zEnterprise EC12 and DB2 Analytics Accelerator for a hybrid system that merges capabilities optimized for different workloads in a single, highly reliable, and secure system.  The zEnterprise Analytics System 9700 and 9710 models have now replaced the Smart Analytics System 9700 and 9710.

That’s a good (re-)start… I will save my IOD 2012 recap for next week to make sure I get back on my weekly pace.

PS.  My thoughts and prayers are with all those still suffering the effects of Sandy.

June 8, 2012 / berniespang

Clients chose IBM System z for analytics over Teradata and Oracle Exadata

Here is another question where conventional wisdom about “the right answer” has been proven wrong:  can IBM System z be the best solution for data warehousing and analytics?   For many of my early days in the database software and systems business, the debate raged about the performance and price-performance implications of using System z for analytics workloads.  Recent client stories I’ve heard tell me that the advances delivered in DB2 10 for z/OS, and the Netezza-powered DB2 Analytics Accelerator, have firmly answered the question.

For those that have not heard of DB2 Analytics Accelerator, it is a Netezza data warehouse appliance that integrates directly with DB2 for z/OS such that deep analytics queries are routed to it without any need to alter the application.  Transactional and operational queries are handled by DB2 as usual, and all data remains under the industry’s highest level of security and availability.

Also, you should know the Smart Analytics System models 9700 and 9710 are integrated offerings that include Cognos BI, InfoSphere Warehouse and DB2 for z/OS software on a zEnterprise 196 or 114, respectively.

If you are finding it hard to believe this is a real change in the game, consider the following client examples from our Banking Industry team:

European Bank Group adds IBM DB2 Analytics Accelerator to System z over Exadata
This banking consortium has IT teams that are friendly to Oracle technology, and had invested in an Exadata system last year.   They were considering moving BI workloads to the Exadata system, but the IBM team demonstrated the benefits of a BI infrastructure based on IBM System z with the DB2 Analytics Accelerator.  The client chose the IBM solution.

Federal tax authority chooses IBM Smart Analytics System 9700 after DB2 10 for z/OS blows away Oracle in a performance benchmark
A benchmark between Oracle Database and DB2 for z/OS was the first step in this decision process: DB2 proved to have 10 times better performance in the benchmark.  In addition to superior performance, other decision factors for choosing IBM Smart Analytics System over Oracle included:

  • An end-to-end solution, including comprehensive data warehousing and business intelligence software
  • Reliable hardware
  • In-depth services that will support deployment and operation of the new platform

IBM System z selected over Teradata at one of the world’s oldest banks
This bank needed an integrated data warehousing solution for corporate, financial, and marketing information across the bank to reduce costs, improve revenue and drive better profitability.   Factors in choosing IBM System z over Teradata included:

  • Significant savings in hardware, software, operating and people costs
  • Faster time to value with a reduction in the time required to deploy Business Intelligence solutions
  • Industry leading scalability, reliability, availability and security
  • Simplified and faster access to the transactional and operational data on System z

North American Bank moves off Teradata in Favor of IBM Smart Analytics System
Teradata was the warehousing standard at this bank and its team had a misconception that IBM System z was not leading-edge technology or the most cost effective solution.   Fortunately, the team also had open minds and a desire to find the data warehousing and analytics solution that delivered the best value for their business.   The result: a transition from Teradata to an IBM Smart Analytics System powered by System z.

Never say never
Now don’t get me wrong.  I am not saying that System z is the best analytics system choice for all clients in all situations.  I am saying that you should not assume it isn’t the best choice for you and your situation.  Make business decisions based on the reality of today’s facts, not based on outdated misconceptions.

June 1, 2012 / berniespang

Talking Big Data & Warehouse Architectures at the CIO Forum and Executive IT Summit

I am starting this post at 31,000 ft on the way home from Atlanta, where I spoke at another CIO Forum and Executive IT Summit.  I also spoke at one in Seattle earlier this year.  These are well-run events that are specifically for CIOs and senior IT executives.  I really enjoy hearing the exchanges among these peers who are facing many similar challenges, regardless of what their companies do.  Exchanges are often very productive, with cards exchanged for follow-on actions or continuation of the discussion.

Full disclosure: IBM sponsors this event series, which will be held in 16 cities across the US and Canada in 2012.  The keynote discussion is about Big Data challenges and the new technologies for tackling them.    We also do a follow-on session on either Evolving Data Warehouse Architectures or Information Integration and Governance.    Today I was talking about the evolving architectures… many of the points I’ve shared in my earlier posts.

Feedback has generally been positive, but even more so today, I thought.  I had discussions with IT leaders from higher education, marketing services, financial services, health care, and IT consulting services.  They all were talking about the need to evolve their environments to gain more insights from more types of information, and deliver it to business users faster.  It made me feel good about the focus we currently have with our clients.

But it also made me think that we are not too far from having to figure out the next chapter in the story.  The only way to be among those leading the way forward is to always be scouting ahead to see what is over the horizon.  The great thing about the computing business – at least over my career so far – is that we are never done inventing the future.

Note:  Another disclosure… I am actually finishing and posting this a week later.   The holiday weekend with family and friends was too good; the need to connect and post this flew right out of my mind.