May 11, 2015 / berniespang

Who would have thought Storage would become so exciting again?

I recently dined with a new business partner and we shared our respective histories to get to know each other better. I noted that I am relatively new to the Storage world, having spent my early career in the System z (a.k.a. mainframe) hardware lab, and almost 18 years in IBM Software.  He, on the other hand, had spent most of his career building or selling Storage systems. He said that he is surprised that after many years of Storage being “relatively boring,” it has again become a dynamic and exciting part of the industry.

I was struck by how his words were similar to ones I had heard from a colleague a few years back when referring to data management software. At that time I was still fairly new to that part of the IBM software business. When I moved to the Information Management team, my friends asked me: “Why move to such a boring part of the business – database software is pretty much what it is going to be.” They did not see that the “Smarter Planet” transformation was beginning to drive exciting innovations, beyond the relational database, needed to bring about a new generation of Big Data Analytics.

When I joined the Storage team last year, I heard a similar question from my friends, and this time I was prepared. I told them that this part of the business is ripe for transformative innovations. As the world generates more data every day from new systems and devices, and adopts new ways to analyze it for greater insights and business value, it needs a significant leap forward in how that data is securely stored and accessed – faster, easier and at lower cost. As my new friend Eric Herzog is fond of saying, “our clients are experiencing oceans of data… and surf’s up, dude!”

As I prepare for the Edge2015 conference, reviewing the IBM Storage portfolio news and the many compelling stories our clients will be sharing with each other this week in Las Vegas, I am excited to find myself again in the right place at the right time. Several innovations are driving fundamental shifts in how our leading edge clients are skillfully surfing their growing data oceans:

  • Flash storage has become cost effective for “tier 1” storage, and is on an accelerated price-performance curve that spinning disk will not be able to match
  • Storage capabilities are being deployed as software that supports a variety of physical servers and storage devices, offering unprecedented flexibility and cost savings potential
  • Object Storage is emerging as a much easier and cost effective way to handle the growing volume of unstructured data in files, images, videos, etc.
  • Cloud based storage services, while still emerging for broad commercial use, have the potential to offer new levels of simplicity and cost efficiency
  • Innovations in venerable Tape storage continue to push the boundaries of density and cost efficiency, both in on-premises infrastructure and as a foundation for Cloud storage services

IBM is helping our clients take advantage of these shifts to improve costs and accelerate business growth. In February, we launched Spectrum Storage, a new portfolio of Software Defined Storage capabilities. This week we are announcing an exciting addition, a new feature of Spectrum Control called Storage Insights. Available first as a service on the IBM Cloud, Storage Insights can be deployed in under 30 minutes and provides analytics to optimize on-premises storage infrastructures, ease capacity planning, reclaim under-utilized storage and lower the cost of storage by up to 50 percent per GB by optimizing data placement.
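To give a feel for the kind of reclamation analysis described above, here is a minimal sketch in Python. It is purely illustrative – the data shapes, thresholds and names are my own invention, not the actual Storage Insights analytics:

```python
# Illustrative sketch only: flag volumes whose utilization suggests
# reclaimable capacity. Data shapes and thresholds are hypothetical,
# not the actual IBM Storage Insights analytics.

from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    allocated_gb: float
    used_gb: float
    days_since_last_io: int

def reclaim_candidates(volumes, min_idle_days=30, max_util=0.10):
    """Return volumes that look idle and mostly empty."""
    return [
        v for v in volumes
        if v.days_since_last_io >= min_idle_days
        and (v.used_gb / v.allocated_gb) <= max_util
    ]

volumes = [
    Volume("vol-app1", allocated_gb=500, used_gb=20, days_since_last_io=90),
    Volume("vol-db01", allocated_gb=1000, used_gb=850, days_since_last_io=1),
]
for v in reclaim_candidates(volumes):
    print(f"{v.name}: ~{v.allocated_gb - v.used_gb:.0f} GB reclaimable")
```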

Another example is a new active archive Cloud service we are introducing as a Technology Preview and will be demonstrating at Edge.  This service, which has entered a pilot phase with our service partner Iron Mountain and a number of design sponsor clients, will enable users to store large amounts of data as Objects via OpenStack Swift, and easily retrieve it on demand, at the lowest cost possible.
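For readers less familiar with object storage, here is a minimal sketch of storing and retrieving an object with the standard python-swiftclient library. The endpoint, credentials and container name are placeholders, and the archive service itself may expose different specifics:

```python
# Minimal OpenStack Swift example using python-swiftclient.
# The endpoint, credentials and names below are placeholders.

from swiftclient.client import Connection

conn = Connection(
    authurl="https://swift.example.com/auth/v1.0",  # placeholder endpoint
    user="account:user",
    key="secret-key",
)

# Store a file as an object in an "archive" container.
conn.put_container("archive")
with open("transactions-2005.csv", "rb") as f:
    conn.put_object("archive", "transactions-2005.csv",
                    contents=f, content_type="text/csv")

# Retrieve it on demand later.
headers, body = conn.get_object("archive", "transactions-2005.csv")
print(f"retrieved {len(body)} bytes")
```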

Archived data is a huge treasure trove of untapped insights. Consider the decades’ worth of stored transaction and customer data that hold unseen trends that could help predict future opportunities. Previous-generation data archives may have been cost-efficient ways to hold onto data, but today our clients need long-term storage with the ability to easily retrieve that data at low cost to unlock all of its business value. Innovations such as Spectrum Scale and Spectrum Archive software, which provide policy-driven, automatic placement and movement of data among storage tiers, can be deployed on premises, in a Cloud or across Hybrid Cloud environments to provide new-generation data protection, retention and lifecycle management solutions.
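To make the policy-driven tiering idea concrete, here is a toy sketch. It is not the Spectrum Scale policy language, which expresses such rules declaratively; it simply illustrates the kind of rule an ILM policy encodes, with tier names and thresholds invented for the example:

```python
# Toy illustration of policy-driven tiering, not the actual
# Spectrum Scale/Spectrum Archive policy engine. Tier names and
# thresholds are invented for the example.

import time

HOT, WARM, ARCHIVE = "flash", "disk", "tape"
DAY = 86400  # seconds

def choose_tier(last_access_epoch, now=None):
    """Place data by age since last access: recent on flash,
    aging on disk, cold on tape (or a cloud archive tier)."""
    now = now if now is not None else time.time()
    idle_days = (now - last_access_epoch) / DAY
    if idle_days < 7:
        return HOT
    if idle_days < 90:
        return WARM
    return ARCHIVE

# A file last touched 200 days ago belongs on the archive tier.
print(choose_tier(time.time() - 200 * DAY))  # -> "tape"
```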

These are just some examples of what attendees will hear about at Edge2015 this week. I hope you are joining us in person, but if not, I hope you are able to follow it online and read what the attendees have to say. Too bad the event is not at Mandalay Bay – I would love to see Eric @zoginstor riding a Spectrum Storage surf board in the wave pool!

March 4, 2015 / berniespang

Two pieces of advice: Don’t let Cluster Sprawl happen to you… and, Don’t drown in the Data Ocean, ride the wave

Another exciting conference week in Las Vegas has come to an end. (Unless you count the flight home… then it still has a few more hours to go.) This time it was the first IBM InterConnect conference, combining three previous events into a mega event focused on innovations for the Cloud era. From what I saw and heard from others, it was a terrific event – kudos to all the teams involved.

The conference was a great opportunity for me to speak with clients, partners, prospects and IBMers about Spectrum Storage and Platform Computing. It also turned out to be a great opportunity to brainstorm a bit with our new marketing VP, Eric Herzog, about how to communicate the value of this Software Defined Infrastructure portfolio in a way that is clear and concise.

Ideas crystallized and were tested live with press and analysts, clients and sellers. My top two are condensed in the title of this post.

IBM Software Defined Infrastructure provides a unique set of capabilities that help our clients:

  • Avoid cluster sprawl by using all compute and storage resources as a single pool that is efficiently shared among a broad set of large scale, high performance applications and analytics
  • Safely ride the wave in an ocean of data by deploying highly efficient and deeply integrated storage solutions, with unprecedented deployment flexibility: as software, as a service, as a system – on-premises, in the Cloud and across hybrid environments

What the heck is Cluster Sprawl and who cares?

Remember the days when folks put each new application on its own physical server with its own storage? Remember how large those inefficient server and storage “farms” grew? And how much money was spent on underutilized resources? And how much more money was spent on evolving them to a virtualized compute and storage environment where a physical resource could be shared among many apps? (Following the well-proven lead of IBM z Systems, a.k.a. the mainframe.)

Well, it is starting to happen again. New generation apps and analytics increasingly look like traditional high performance / supercomputing workloads that rely on compute and storage clusters to handle large volumes of data at high speed through parallel processing. As each new scale-out application appears, a new cluster appears to run it. With the expected result: a growing number of underutilized clusters that are costing their owners more than they should. What’s worse is that the apps and analytics are often running slower than they could if unused resources in other clusters could temporarily pitch in to help.
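To see why pooling beats sprawl, consider a toy model – the workload demands and cluster sizes below are invented for illustration. With dedicated clusters, each application is capped at its own nodes even while its neighbors sit idle; a shared pool puts the same hardware where demand actually is:

```python
# Toy model of cluster sprawl vs. a shared resource pool.
# Workload demands and cluster sizes are invented for illustration.

demands = {"risk-analytics": 120, "fraud-detection": 20, "reporting": 10}
dedicated = {"risk-analytics": 64, "fraud-detection": 64, "reporting": 64}

# Dedicated clusters: each app is capped at its own 64 nodes.
served_dedicated = {app: min(d, dedicated[app]) for app, d in demands.items()}

# Shared pool: the same 192 nodes, granted where demand actually is.
pool = sum(dedicated.values())
served_shared = {}
for app, d in sorted(demands.items(), key=lambda kv: -kv[1]):
    grant = min(d, pool)
    served_shared[app] = grant
    pool -= grant

print(served_dedicated)  # risk-analytics starves at 64 while 98 nodes sit idle
print(served_shared)     # all 150 nodes of demand fit in the shared pool
```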

One of our clients, a global financial services provider, has prevented cluster sprawl by implementing a software defined infrastructure with IBM Platform Computing and Spectrum Storage – reducing costs and increasing the performance of some workloads by 100x. (That is not a typo: 100 times, not 100 percent, which would equal only 2x.)

In a world where faster business processes and deeper business insights increasingly run on platforms such as Hadoop, Spark and Cassandra, as well as traditional scale-out databases and data warehouses, maybe the better follow-on question to “What the heck is cluster sprawl?” should be: “Is there any organization that can afford to not care?”

When did data pools overflow data lakes and become a Data Ocean? 

I will save this answer for my next blog entry.   If you can’t wait, watch Eric Herzog, Live from Interconnect 2015.

February 17, 2015 / berniespang

IBM advances Software Defined Storage leadership with new IBM Spectrum Storage

I have been fortunate in my career to have been involved in a number of significant technology transformations that have impacted both IBM and the clients we serve.  I was on the team that worked with Sun Microsystems and across IBM to establish Java as a “write once, run anywhere” application platform; on the team that worked with Microsoft to launch the early Web Services standards and Service Oriented Architecture; and on the IBM team that launched the Eclipse open source community and led the transformation to an open application development environment.  It is exciting to be part of new generation of information technology that enables IBM clients around the world to better serve their customers, patients, and citizens.

My good fortune continues as I am now part of the IBM team leading a transformation to Software Defined Storage as we launch IBM Spectrum Storage.

We live in a world that is increasingly data driven. The growth of Mobile and Social apps used by people all around the world; the growth of meters, sensors and cameras on practically everything, capturing data to be analyzed in near real time; and the growth of laws and regulations requiring long term data retention are all factors driving an explosive growth of stored data. Traditional storage solutions were designed for a different set of applications and usage patterns. They are too rigid and inefficient to address today’s needs cost effectively.

A more agile storage environment is required to cost effectively handle a broad scope of data whose business value may change over short periods of time.  The insatiable “need for speed” must be answered with breakthroughs that do much better than throwing more inefficient systems at the problem.  That is why there is growing market buzz about the difference Software Defined Storage can make.

IBM Spectrum Storage is a comprehensive set of storage intelligence that is delivered with unmatched flexibility – as software, as Cloud services or pre-integrated in systems and appliances. These capabilities can be used to optimize storage of files, objects and data on storage-rich servers and hundreds of storage systems from IBM and other companies. They can be used on premises, in the cloud, and across hybrid cloud environments to optimize both performance and cost.

While IBM Spectrum Storage is new and includes a number of new innovations, such as cloud-based storage analytics, it is also based on a proven set of technologies that include more than 700 IBM patented innovations used by thousands of clients around the world. This combination of innovation and proven reliability is a compelling value for solutions that manage and protect the data that are among an organization’s most valuable assets.

Now that we are launched, and I am back on the blogging bandwagon, I look forward to sharing stories about how clients are redefining data economics with IBM Spectrum Storage.

January 24, 2015 / berniespang

Time to rejoin the blogging community

I have to admit – I had hit a period of burnout. I had been leading IBM Database Software & Systems Marketing & Strategy for some time, and it was time for a change.

A little over a year ago I moved to a new role – leading Strategy for Software Defined Environments in IBM Systems & Technology Group.  It was a wild first year, with a lot of changes – including picking up business line responsibilities for Elastic Storage, our scale-out software defined file & object storage offering – known to most as General Parallel File System, and our software defined computing portfolio, Platform Computing software.  It was a busy year and I had a lot to learn.

We are now at a very exciting point in IBM, having worked through a challenging transformation year in 2014. I am now business line VP for Software Defined Infrastructure in the new IBM Systems team, and am fired up about our new organization, our product portfolio and the plans we have for 2015. As such, I figure it is time for me to get back to posting regularly.

That’s enough about me and the transition of topics for this blog. As a tease for the next post: I will have a lot to talk about next month as we head into the IBM InterConnect conference. I invite you to follow the link and register to join us in Vegas at the end of February. Travel safe.

 

April 28, 2013 / berniespang

BLU Acceleration marks the beginning of a new generation for big data analytics

I would imagine you are thinking that headline is a pretty bold statement.    And when I tell you that BLU Acceleration is an exciting capability being introduced in the new DB2 this quarter, you may think it bolder still.

If you have not read any of my past blogs, you may be asking “what does database software have to do with Big Data?” The most important thing to remember is that meeting today’s “big data” challenges requires different types of systems that use different technologies for managing and analyzing different data in different ways. This is why the world now has a diverse set of NoSQL systems that have been added to the traditional SQL database systems. And this is why IBM has added new systems (e.g., for Stream and Hadoop processing) as well as new NoSQL capabilities to SQL systems (e.g., XML and RDF Graph database additions to DB2, and TimeSeries and Spatial database capabilities in Informix).

In a recent discussion with an industry analyst, I was surprised to learn that he considers in-memory, columnar management of a SQL relational database to also be NoSQL. He revised my definition of NoSQL to be: Not Only traditional row-based relational data management via SQL. And so with the introduction of BLU Acceleration in the new DB2, it becomes a NoSQL data system for another reason. BLU Acceleration is dramatically easier and faster for analytics on terabytes of data. For many organizations, this enables cost effective analytics of more data and for more users.
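For those curious why an in-memory columnar layout is so much faster for analytics, here is a simplified sketch of the general technique (not DB2 BLU’s actual implementation): an aggregate over one column only needs to touch that column’s values, instead of walking every field of every row.

```python
# Simplified illustration of row vs. columnar layout for analytics.
# This sketches the general technique, not DB2 BLU's implementation.

# Row layout: each record is stored (and scanned) as a whole.
rows = [
    {"id": 1, "region": "EU", "amount": 120.0},
    {"id": 2, "region": "US", "amount": 80.0},
    {"id": 3, "region": "EU", "amount": 45.5},
]

# Column layout: each column is a contiguous array.
columns = {
    "id":     [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [120.0, 80.0, 45.5],
}

# SELECT SUM(amount): the row scan touches every field of every row...
total_rowwise = sum(r["amount"] for r in rows)

# ...while the columnar scan reads only the one array it needs,
# which is cache-friendly and amenable to compression and vectorization.
total_columnar = sum(columns["amount"])

assert total_rowwise == total_columnar == 245.5
```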

In his blog, consultant and IBM Champion Dave Beulke called BLU Acceleration the “Best yet for Big Data!” He asserts that there are cases where Hadoop systems are being used or considered for analyzing data, where using BLU Acceleration will be a simpler and lower cost solution. (Note: neither he nor I am asserting this is true for all Hadoop use cases. The point is: no one technology, including Hadoop, is the best answer for all needs.)

I invite you to learn more by joining our web broadcast on April 30. You can also attend or follow the International DB2 User Group conference, which is being held in Orlando, Florida, this week.

Speaking of user groups, my thanks to the International Informix User Group team that hosted their conference this past week in San Diego. It was great meeting with members of this community and seeing both new and familiar faces among the attendees. I heard a lot of positive feedback about the enhanced capabilities in the new Informix 12, including extending the use of Dynamic In-memory technology (shared with BLU Acceleration) to TimeSeries data – simplifying and accelerating operational analysis and reporting of growing smart meter and sensor data.

For more Big Data stories and to add your thoughts, I encourage you to join the conversation at the Big Data Hub.

February 9, 2013 / berniespang

The Continuing Role of the Database in the New Era of Big Data

Big data is all about scaling the use of data beyond the norms of the current era of information technology.

You could reasonably argue that the first big data era began more than a half-century ago. On May 25, 1961, President John F. Kennedy gave a speech to the U.S. Congress in which he declared the goal of landing a man on the moon, and returning him safely to Earth.  The amount of data generated and managed throughout the program quickly outgrew data systems of the time.  A brand new “Information Management System” (IMS) was created by IBM and other members of the Apollo team to tackle this new big data challenge.

Now fast forward more than 50 years and we have ushered in a new era of big data, ignited by the global “Internet of things,” mobile, social and cloud computing, and instrumented systems of all kinds. Today every transaction, tweet or meter reading has potential value: to enhance or destroy a customer relationship, to drive a new business opportunity, or to catch a bad guy. New types of data systems are needed to handle more data and more types of data, faster and more cost effectively than systems that were state of the art just a few years ago.

The key to making big data work for business is using systems that are designed for workload optimized performance and simplicity.  In some cases that means completely new systems to handle challenges like analyzing data in motion, or spreading complex work among a large number of distributed systems.  In other cases, new capabilities are added to proven systems such as IBM DB2 and Informix, to provide a new mix of production grade capabilities – e.g., for both SQL and NoSQL databases.

Solving today’s big data challenges often requires combining the structured, optimized approach of traditional database systems with the less structured, exploratory approach of new systems.   In fact, modern versions of technology created decades ago may be the best choice for new enterprise challenges; ones that also benefit from their time-proven stability, maturity, and manageability.

So what’s the role of a relational data system in this big data era? 

Some IT professionals may take relational and pre-relational database technologies for granted, but they remain the trusty workhorse in most data centers.  These proven platforms continue to handle the growing volume of data and faster transactions from applications that conduct business every second of every day.   They also enable deep analysis of that data to help organizations make better decisions with the speed needed to affect business operations as they execute.

Organizations leading the pack in big data ingenuity are the ones using the best combination of systems – traditional or new – for each need. For many organizations building complex systems, running global banking networks, or delivering millions of packages around the world every day, that includes using the modern descendant of the data system that played a small role in a giant leap for mankind.

Look for more thoughts about Big Data at the speed of business from me and other followers of database technology in the coming weeks.

And if you’re interested in IBM’s next Big Data event, details are at http://ibm.co/BigDataEvent

November 11, 2012 / berniespang

Information on Demand 2012

Information on Demand 2012 was another great week, with a record number of attendees – more than 12,000 IBM clients, partners, analysts, reporters and IBMers from around the world.

For those of you who did not join us last month, here is a summary of the announcements made at the event. The folks at Wikibon have also assembled a nice set of videos and articles you should check out. Actually, those of you who were there would find these summaries valuable as well.

A few interviews to highlight, given the subject of this blog:

For me it was a particularly exciting year, as it also marked the end of “launch month” for our new PureData System. But the real excitement of this event is the in-person interaction with clients, partners, IBM Information Champions, and analysts. In a job dominated by conference calls and video chats, having the opportunity to participate in less formal conversations is a welcome change. It is particularly interesting to listen to exchanges among different clients about the challenges they face, and how they are using IBM technologies to meet them.

Speaking of clients and IBM technology… the InfoSphere, Data Management, and System z product demo rooms and hands-on lab sessions were packed all week. This conference continues to be a nice mix of technical details and strategic discussions about the application of technology to improve business results. I spoke to several clients who each had a large group at the conference made up of business and IT leaders as well as architects, developers and data professionals.

On a final note, Barenaked Ladies and OneRepublic both put on great shows – a great mid-week break from the very full days of business and technical talk.

I hope we see all of you next year…  November 3-7, 2013
