Archive for: ‘June 2015’

How to get the most out of your PureData System for Analytics using Hadoop as a cost-efficient extension

23 June 2015 Posted by Ralf Götz

Today’s requirements for collecting huge amounts of data are different from those of several years ago, when only relational databases satisfied the need for a system of record. Now, new data formats need to be acquired, stored and processed in a convenient and flexible way. Customers need to integrate different systems and platforms to unify data access and acquisition without losing control or security.

The logical data warehouse

More and more, relational databases and Hadoop platforms together form the core of a Logical Data Warehouse, in which each system handles the workload it can handle best. We call this using “fit for purpose” stores.

An analytical data warehouse appliance such as PureData System for Analytics is often at the core of this Logical Data Warehouse, and it is efficient in many ways. It can host and process several terabytes of valuable, high-quality data, enabling lightning-fast analytics at scale. And it has been possible (with some effort) to move bulk data between Hadoop and relational databases using Sqoop – an open source component of Hadoop. But there was no way to query both systems using SQL – a huge disadvantage.

Two options for combining a relational database and Hadoop

Why move bulk data between different systems or run cross-system analytical queries? Well, there are several use cases for this scenario, but I will highlight only two of them, based on a typical business scenario in analytics.

The task: an analyst needs to find out how the stock level of the company’s products will develop throughout the year. This stock level is updated very frequently and produces lots of data in the current data warehouse system implemented on PureData System for Analytics. Therefore, the data cannot be kept in the system for more than a year (hot data). A report on this hot data indicates that the stock level is much too high and needs to be adjusted to keep stock costs low. This would normally trigger immediate sales activities (e.g. a marketing and/or sales campaign with lower prices).

“We need a report that can analyze all stock levels for all products for the last 10+ years!”

Yet a historical report analyzing all stock levels for all products over the last 10+ years would have indicated that the stock level at this time of year is a good thing, because high season is approaching. The company would therefore be able to sell most of its products and meet the market trend. But how can the company provide such a report with so much data?

The company would have two options to satisfy its needs:

  1. Replace the existing analytical data warehouse appliance with a newer and bigger one (this costs some dollars and has been covered in another blog post), or
  2. Use an existing Hadoop cluster as a cheap storage and processing extension for the data warehouse appliance (note that a new, yet-to-be-implemented Hadoop cluster would probably cost more than a bigger PureData box as measured by Total Cost of Ownership).

Option 2 would require a mature, flexible integration interface between Hadoop and PureData. Sqoop cannot handle this, because the scenario requires more than just bulk data movement from Hadoop to PureData.

IBM Fluid Query for seamless cross-platform data access using standard SQL

These requirements are only two of the reasons why IBM introduced IBM Fluid Query in March 2015 as a no-charge extension for PureData System for Analytics. Fluid Query enables bulk data movement from Hadoop to PureData and vice versa, AND operational SQL query federation. With Fluid Query, data residing in Hadoop distributions from Cloudera, Hortonworks and IBM BigInsights for Apache Hadoop can be combined with data residing in PureData using standard SQL syntax.

“Move and query all data, find the value in the data and integrate only if needed.”

This enables users to seamlessly query older, cooler data alongside hot data without the complexity of data integration, using a more exploratory approach: move and query all data, find the value in the data and integrate only if needed.
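
To make this concrete, here is a minimal sketch of what such a federated query could look like from a Java application. This is an illustration under assumptions, not Fluid Query’s documented API: it assumes the Netezza JDBC driver (org.netezza.Driver) on the classpath and that Fluid Query has been configured to expose a Hadoop-resident history table to the PureData database; all host, table and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FluidQueryExample {
    public static void main(String[] args) throws Exception {
        // Register the Netezza JDBC driver and connect to the PureData
        // (Netezza) database; host, database and credentials are placeholders.
        Class.forName("org.netezza.Driver");
        try (Connection con = DriverManager.getConnection(
                "jdbc:netezza://pda-host:5480/WAREHOUSE", "user", "password");
             Statement stmt = con.createStatement();
             // One standard SQL statement combines hot data in PureData
             // (STOCK_LEVELS) with cold history exposed from Hadoop via
             // Fluid Query (STOCK_HISTORY_HDP); both names are hypothetical.
             ResultSet rs = stmt.executeQuery(
                 "SELECT product_id, AVG(stock_level) AS avg_level "
                 + "FROM (SELECT product_id, stock_level FROM STOCK_LEVELS "
                 + "UNION ALL "
                 + "SELECT product_id, stock_level FROM STOCK_HISTORY_HDP) t "
                 + "GROUP BY product_id")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": " + rs.getDouble(2));
            }
        }
    }
}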

IBM Fluid Query can be downloaded and installed as a free add-on for PureData System for Analytics.

Try it out today: IBM Fluid Query is available for PureData System for Analytics. Clients can download and install the software from Fix Central and get started right away with these new capabilities. Doug Dailey’s “Getting Started with Fluid Query” blog is highly recommended reading for more information and links to the documentation.

Do you need more information? Follow me on Twitter.

Speed up SAP Netweaver Business Intelligence queries using IBM PureData System for Analytics (Part 2)

23 June 2015 Posted by Ralf Götz

Imagine that you’re the CIO of a big retailer with both an online and store presence. Christmas season is coming, and last year around this time, your critical business processes almost broke because of the heavy demand for deep analytics and frequent reporting during the high season. What would you do?

In the first post of this series, I described an alternative reporting and analytics solution to SAP Netweaver Business Intelligence, based on IBM PureData System for Analytics, powered by Netezza technology, which provides business users with the analytical performance they need to keep up with the growing demand for more data, faster reports, deeper analytics and lower cost.

The challenge

A large retail client in Germany had exactly the problem described at the beginning of this post. Their SAP Netweaver Business Intelligence system is essential for business-critical processes such as month-end closing, inventory and demand planning. Under normal conditions, the reporting and analytical workload can be processed in parallel with the more transaction-oriented processes.

But every Monday just before and just after holiday seasons such as Christmas, the system came very close to its maximum capacity—sometimes into overload.

The evaluation process

What would be the best way to tackle the challenge? First, the client needed to evaluate the technical feasibility, the time it would take for a successful implementation and the associated costs. After evaluating the possible approaches, such as upgrading SAP Netweaver Business Intelligence to the latest release, implementing SAP HANA or swapping the underlying database for another, the client decided to implement a sidecar solution that offloads the most critical reports and analytics to reduce the workload on SAP Netweaver Business Intelligence.

The solution

The idea was to introduce a purpose-built, easy-to-use, high-speed analytics data warehouse appliance that grows with the business requirements: PureData System for Analytics.

Implementing such a sidecar solution is a best-practice approach that SAP itself also recommends with its Sybase IQ columnar database.

A one-week assessment identified the top three SAP Netweaver Business Intelligence reports, which were run by thousands of users several times every day. Removing this purely analytical workload from the system would guarantee a smooth season the next Christmas.

The data (several terabytes) was integrated with the help of a business partner using the customer’s incumbent extract, transform and load (ETL) platform, which could connect to both SAP and PureData System for Analytics.

For reporting, the client introduced IBM Cognos alongside PureData System for Analytics, getting the most out of the new analytics infrastructure.

The result

The most important fact is that our client survived Christmas season (and Easter as well).

Their SAP Netweaver Business Intelligence system still serves its purpose, runs smoothly and has been very stable since then. The reporting and analytics now run on the sidecar PureData System for Analytics, and the response time for typical queries is mostly under two seconds.

Because of the highly flexible implementation of the data model and granularity within PureData System for Analytics, the client was even able to increase the frequency of some reports from monthly to weekly updates, enabling business users to do more with less effort in a shorter amount of time.

The retailer started the implementation in April 2013 and finished the project in September 2013, on time and on budget.

What do you think of the implementation? Are you facing similar challenges? Let’s connect and follow me on Twitter.

Speed up SAP Netweaver Business Intelligence queries using IBM PureData System for Analytics (Part 1)

23 June 2015 Posted by Ralf Götz

Have you ever needed to accelerate reporting within SAP Business Warehouse (SAP BW)? Did you find a feasible solution that fits your budget and performance requirements? If not, you might be interested in how to speed up SAP Business Intelligence queries using IBM PureData System for Analytics, powered by Netezza technology.

SAP offers a variety of options to help you improve the performance of your SAP BW queries. These include SAP HANA (an in-memory solution used as the underlying database) or SAP Sybase IQ (a columnar database working as a sidecar solution). IBM also offers an SAP BW-optimized relational database: IBM DB2 for SAP.

But there are additional ways to approach improving performance.

I believe a better approach is to widen the scope and choose a solution that provides a business intelligence (BI) service consolidating multiple data sources, of which SAP is just one. This should include an evaluation of BI and extraction, transformation and load (ETL) tools, as well as data warehouse appliances (such as IBM PureData System for Analytics, powered by Netezza technology).

From my personal experience with clients who report on data in SAP enterprise resource planning solutions combined with other data sources, I would recommend a target architecture with a downstream enterprise data warehouse, creating a corporate-wide analytical data service built on PureData System for Analytics technology.

Clients I have worked with have achieved significant benefits by extracting and moving the data into PureData System for Analytics rather than trying to extend the capability of existing solutions. The diagram below outlines the high-level architectural approach:

[Diagram: high-level architectural approach]

While I do understand that some clients wish to minimize the disruption involved in improving the performance of SAP BW reports when they use SAP Business Explorer or any other compliant BI tool, I strongly believe that the benefits of this approach far outweigh the disadvantages.

Client experiences with SAP Business Warehouse

From my regular discussions with SAP BW clients, I know that many users experience constraints in the areas of:

  • Performance to build the data (InfoCubes)
  • Performance of queries and analysis
  • Time to develop and meet new reporting and analytical requirements
  • Difficulty in incorporating data from non-SAP sources
  • Accessing the data using non-SAP analytical tools

When clients have implemented SAP Business Warehouse Accelerators to address some of these challenges, they have often needed to reduce the amount of data kept in the InfoCubes held on the accelerators in order to load the data in a timely manner, maintain performance and avoid excess licensing costs.

In addition, many SAP users are reviewing the best architectural deployment approach for SAP ERP data going forward. Choosing alternative, open approaches such as PureData System for Analytics can prove to be the optimum solution.

The benefits of IBM PureData System for Analytics

IBM clients have found significant benefit in using PureData System for Analytics as their enterprise data warehouse and foundation for data services, gaining a responsive and easy-to-use open business intelligence environment. PureData System for Analytics users are able to choose the best reporting and analytical tools to meet their requirements, consolidating and analyzing data from all sources, both within and outside of the organization. Additionally, they have avoided the complexity of managing and maintaining additional infrastructure in an already complex SAP environment. Many clients are also gaining a significant competitive advantage through the advanced analytics capabilities within PureData System for Analytics.

PureData System for Analytics clients have been able to:

  • Significantly improve the performance of business intelligence reporting
  • Dramatically reduce ETL time using the power of the PureData System for Analytics database in doing complex transformations
  • Eliminate the need and complexity of loading non-SAP data into SAP BW
  • Load SAP detailed data into the data warehouse, where it can be used for other purposes and subject areas
  • Retain and analyze historical transaction and master data changes across multiple years and the lowest level of granularity
  • Deliver new projects much more quickly and with less risk due to the simplicity inherent in PureData System for Analytics operations
  • Drastically reduce SAP BW size, mostly eliminating additional hardware investments and license costs

In one of my next blog posts, I will dive into the details of such a project we just put into production at a large German retailer. Comment below if you’d like to share your experiences, or follow me on Twitter.

Offloading not-so-hot data from your data warehouse without losing value

23 June 2015 Posted by Ralf Götz

Have you dreamt of gaining valuable insights from all the data you’ve collected over many years of business, without adding an unbearable burden on your data warehouse?

There are many good reasons to limit the amount of data in a data warehouse: cost, storage capacity and backup and restore limitations, to name only a few. Business users and IT personnel have long distinguished hot data from warm and cold data. The terms refer to how frequently the data has been queried by business users over the last couple of months:

  • Hot data: data accessed today and during the last week
  • Warm data: data accessed during the last month
  • Cold data: data accessed during the last year (or even longer)

The effort required for this categorization is quite high, since you can only act on what has been measured over a long enough period (monitoring databases and business intelligence platforms, searching logs and more). The data supply chain can be long and complex and can involve many different systems, all of which need to be included in this consideration. Very often, clients come up with custom-programmed dashboards showing the top 100 queries against the data stored in the data warehouse. But who knows whether a specific query that needs data from two years ago is less important than another that needs only current data?

Technology can help conquer this time-consuming, cost-intensive and tedious task using (even more) cost-intensive features like data temperature, available in some relational databases, which allows the administrator to use cheaper storage (such as slower disks or even tape) for warm and cold data.

Best practice

For many years, a valid best practice was to archive cold data on tape (or other slow, cheap storage media). The most obvious disadvantage is the unavailability of the archived data when immediate access is required: the data has to be restored before a user can actually query it.

This cuts the business off from potentially valuable insight. Who knows whether you will need the data sooner or later? Why give up these insights just because of technology and cost?

There is a solution available today that addresses this challenge without changing the relational database hosting your existing data warehouse.

InfoSphere BigInsights for Apache Hadoop as an active archive

How to offload data to Hadoop

With Apache Hadoop, a data warehouse archive is nothing less than an extension of the original data warehouse. Hadoop adds storage and processing capacity at a much lower total cost of ownership (TCO) than a data warehouse. Cold data can be extracted from the data warehouse and offloaded into the Hadoop cluster, residing in Hive tables, without any restriction on volume, since storage for Hadoop is cheap.

Structured Query Language (SQL) to access the data

The big advantage over an archive residing on tape is that you can still access the data on Hadoop with standard ANSI Structured Query Language (SQL). A business user requesting data from years ago will not notice the difference, apart from a slightly longer access time. No change to the business application needs to be implemented.
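
As a minimal sketch of this access path (not a definitive implementation), the following Java snippet runs a standard SQL query against an archived Hive table over JDBC. It assumes the Hive JDBC driver (org.apache.hive.jdbc.HiveDriver) and a running HiveServer2 instance; the host, database, table and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveArchiveQuery {
    public static void main(String[] args) throws Exception {
        // Register the Hive JDBC driver and connect to HiveServer2;
        // host, database and credentials are placeholders.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(
                "jdbc:hive2://hadoop-host:10000/archive", "user", "");
             Statement stmt = con.createStatement();
             // The same ANSI SQL a business application would send to the
             // warehouse, now answered from the Hadoop archive
             // (sales_history is a hypothetical table).
             ResultSet rs = stmt.executeQuery(
                 "SELECT order_year, SUM(revenue) FROM sales_history "
                 + "WHERE order_year < 2013 GROUP BY order_year")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + ": " + rs.getDouble(2));
            }
        }
    }
}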

Does this sound good? Well, if you are open to reconsidering your database selection, I have another tip for you. Why not use an analytic data warehouse appliance that makes no compromise on performance and storage capacity?

IBM PureData System for Analytics: the analytic data warehouse appliance

Instead of trying to find the best trade-off for the challenges discussed above, you could consider a technology that can dramatically reduce the effort and cost of your data warehouse platform. IBM PureData System for Analytics, powered by Netezza technology, is the most advanced data warehouse appliance on the market, engineered to simplify analytics, with plenty of storage and processing power: up to 1.5 petabytes of user data, more than 6 terabytes of RAM and hundreds of high-performance CPUs and Field Programmable Gate Arrays (FPGAs) crunching all your business data in seconds.

PureData System for Analytics does not distinguish between hot, warm and cold data since it can hold and process all of it in a fraction of the time conventional databases need.

And, just in case you still need more processing power and storage for your data, the integration between PureData System for Analytics and Hadoop is easy and mature. With both technologies, you are well prepared for the age of big data.

With the right approach and platform, today’s big data requirements can be mastered. Why not start today? Connect with me on Twitter to continue the discussion.

DNUG, or: let’s begin our first 100 days

22 June 2015 Posted by Stefan Gebhardt

Dear member,

The general meeting on 9 June 2015 in Dortmund confirmed the concept drawn up by committed members, elected us as the new board and expressed its confidence in us. IBM, as the vendor, supports the concept as well.

The core message of this concept is: the DNUG is a user group! Our focus is on IBM’s collaboration products - but looking beyond them is allowed, too.
The full concept can be found here: 2015 DNUG Zukunftskonzept (Folien).pdf and 2015 DNUG Zukunftskonzept (Text).pdf.

We do not want to dig through the past for problems and their causes, but rather find new ways as quickly as possible to re-establish the DNUG as a strong advocate for its users. Everything may be questioned. What was good should stay; outdated habits must go as fast as possible. At the same time, we must not forget our roots. We already learned at the conference that, alongside all the exciting new products, IBM Domino still matters to many.

Our first step toward this goal is a new event concept. We want to offer a broad spectrum of events again instead of being reduced to two big conferences per year. Naturally, the concept will also address the much-discussed attendance costs. The format should become noticeably fresher; anyone who experienced the Open Space in Dortmund got a first taste of it. Throughout the year, everyone should once again feel part of an active group of people with shared interests, and take away more than just a notepad and a pen.

We will actively use the first hundred days to identify all the important tasks. Barely a week after our election, the list is already long: financing and restructuring the DNUG, intensifying the collaboration with IBM, creating new value for all members, establishing special interest groups and forging new cooperations with other groups are only some of these tasks.

For this, we ask for your active participation. If you have ideas for how you can contribute, you can get involved in this community right away. If you would like to contribute but don’t have ideas yet, please get in touch as well.

Why the informal address “Du”, by the way? Because it is the usual form of address in clubs and associations, and because it has long been standard on electronic collaboration platforms. We believe this form makes communication simpler and more personal and creates the closeness a user group needs. Many members have been on a first-name basis for years anyway. In personal contact at events it will work just like at a big club holiday operator: anyone who prefers the formal “Sie” can use it without anyone thinking it wrong.

We look forward to working closely with you

Stefan Gebhardt, Birgit Krüger, Jörg Rafflenbeul, Daniel Reichelt, Dr. Erik Wüstner

PS: As the new board, we have committed ourselves to transparency. We will report regularly on our work and on what’s new, and we’d be delighted if you hit “Follow” in this community, and thus on our topics, and join the discussion.

How the Data Lake Can Keep the Business Departments from Going Under

22 June 2015 Posted by Harald Groeger

Much is written and discussed about the data lake. But what is a data lake in the first place, and what concrete advantages and disadvantages does it offer for IT and for the business departments?

You can picture the data lake concept as a lake into which data from various sources flows with little effort for IT. Because all users can access this data, IT is also significantly relieved of identifying, cleansing and integrating data.

Business departments, which previously depended on the data and reports provided by IT, can now help themselves from the data lake and, based on the information they find there, run extended or new analyses to generate additional value for the business.

Does that sound too good to be true? It is, because how exactly are business departments supposed to fish relevant data of the required quality out of the data lake? And how do you avoid merely shifting effort from IT to the business departments, where it is then incurred multiple times?

Avoiding a data swamp

Without a minimum of central governance by IT, the data lake quickly degenerates into an unmanageable data swamp. To avoid this, the business departments and IT should jointly agree on cleansing requirements. A central catalog is also needed, describing the available data domains along with responsibilities, definitions of terms, currency and quality.

This basic governance costs less than the current effort of onboarding data into a data warehouse and can usually be delivered most effectively by IT. With no governance at all, a data swamp quickly emerges that brings no business benefit. Completely shifting governance effort from IT to the business departments moves the effort around but does not reduce it.

Building your own data lake

Before implementing a data lake, you should examine closely which first steps in this new direction quickly deliver additional value to the business departments while requiring little effort from IT. IBM can support this discussion with a free half-day workshop in which the requirements and constraints of all business functions are captured and a solution proposal is then worked out.

For questions about the content, or customer inquiries about the data lake workshop described here, the author can be reached at hgroeger@de.ibm.com.

New default behavior in XPages: whitelist for data sources

22 June 2015 Posted by Thomas Ladehoff

IBM Domino Designer
Since Domino 9.0.1 Fix Pack 4, there is a change to XPages data sources: arbitrary server names can no longer be passed via the URL.


Before this update, it was possible to pass a different server and database to an XPage via the URL parameter "databaseName". The parameters are used by the data source on the XPage unless the option ignoreRequestParams="true" is set for the data source.
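
As an illustration, here is a minimal sketch of a data source that opts out of URL parameters directly in the XSP markup; the form and variable names are hypothetical:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
    <xp:this.data>
        <!-- With ignoreRequestParams="true", this data source ignores
             the databaseName (and all other) URL parameters -->
        <xp:dominoDocument var="document1" formName="Discussion"
            ignoreRequestParams="true" />
    </xp:this.data>
</xp:view>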


With the new update, servers other than the current one are no longer allowed by default. An example address like the following then leads to the error "The databaseName URL parameter value is not one of the allowed database names.":

http://servername.example.com/discussion.nsf/allDocuments.xsp?search=agenda&databaseName=otherserver!!discussion_data.nsf


A new option in the xsp.properties file of the XPages application (or of the server) lets you configure the allowed servers and databases:


xsp.data.domino.param.databaseName.whitelist=<currentServer>!!<anyApplication>, otherServer!!app.nsf, otherServer!!app2.nsf


In addition, there are two further new options for the xsp.properties file:
  • xsp.data.domino.ignoreRequestParams = false
    When set to "true", the passed parameters are ignored application-wide on XPages.
  • xsp.data.domino.param.databaseName.usage = whitelist | apply | ignore | error
    Controls the "databaseName" parameter separately. "whitelist" is the default behavior since Domino 9.0.1 Fix Pack 4; before that it was "apply" (apply without restriction). With "ignore" the parameter is generally ignored, and with "error" it leads to an error. The recommended setting here is "ignore", unless you really need this parameter.


Source and further information: Link

Diner en Blanc

21 June 2015 Posted by Alexander Kluge

[Photo: Diner en Blanc]

XPages: An optimized JavaScript Resource Renderer

21 June 2015 Posted by Sven Hasselbach

Ferry Kranenburg created a nice hack to solve the AMD loader problem with XPages and Dojo. Because there is no property to add a resource to the bottom of an XPage, I have created a new JavaScript renderer that lets you control where a CSJS script is rendered.

The renderer has multiple options:

  • NORMAL – handles the CSJS resource as usual
  • ASYNC – loads the script asynchronously (with its own script tag)
  • NOAMD – adds the no-AMD scripts around the resource
  • NORMAL_BOTTOM – adds the script at the bottom of the <body> tag
  • ASYNC_BOTTOM – async, but at the end of the generated HTML page
  • NOAMD_BOTTOM – at the end, with the surrounding no-AMD scripts

To use the normal mode, you don’t have to change your resource definition. If you want to use the other modes, you have to change the content type of the resource to one of the entries in the list above. This, for example, adds a script block to the end of the page, including the no-AMD script blocks around it:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
    <xp:this.resources>
        <xp:script clientSide="true" type="NOAMD_BOTTOM">
            <xp:this.contents><![CDATA[alert("Hello World!");]]></xp:this.contents>
        </xp:script>
    </xp:this.resources>
</xp:view>

Here is the code for the resource renderer:

package ch.hasselba.xpages;

import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.context.ResponseWriter;

import com.ibm.xsp.component.UIViewRootEx;
import com.ibm.xsp.renderkit.html_basic.ScriptResourceRenderer;
import com.ibm.xsp.resource.Resource;
import com.ibm.xsp.resource.ScriptResource;
import com.ibm.xsp.util.JSUtil;

public class OptimizedScriptResourceRenderer extends ScriptResourceRenderer {
    private static final String TYPE = "type";
    private static final String SCRIPT = "script";
    private static final String CSJSTYPE = "text/javascript";
    private boolean isBottom = false;
    
    private static enum Mode {
        NORMAL, ASYNC, NOAMD, ASYNC_BOTTOM, NOAMD_BOTTOM, NORMAL_BOTTOM
    }

    public void encodeResourceAtBottom(FacesContext fc,
            UIComponent uiComponent, Resource resource) throws IOException {
        isBottom = true;
        encodeResource(fc, uiComponent, resource);
        isBottom = false;
    }

    @Override
    public void encodeResource(FacesContext fc, UIComponent uiComponent,
            Resource resource) throws IOException {

        ScriptResource scriptResource = (ScriptResource) resource;
        ResponseWriter rw = fc.getResponseWriter();
        String type = scriptResource.getType();
        String charset = scriptResource.getCharset();
        String src = scriptResource.getSrc();

        Mode mode = Mode.NORMAL;
        try{
            mode = Mode.valueOf( type );
        }catch(Exception e){};

        if (mode == Mode.NORMAL || mode == Mode.NORMAL_BOTTOM ) {
            normalBottomJSRenderer( fc, uiComponent, scriptResource, (mode == Mode.NORMAL_BOTTOM), type );
        } else {
            if (mode == Mode.ASYNC || mode == Mode.ASYNC_BOTTOM) {
                asyncJSRenderer(fc, uiComponent, scriptResource, (mode == Mode.ASYNC_BOTTOM), rw, type,
                        charset, src );
            }else if (mode == Mode.NOAMD || mode == Mode.NOAMD_BOTTOM ) {
                noAMDJSRenderer(fc, uiComponent, scriptResource, (mode == Mode.NOAMD_BOTTOM) , rw, 
                        type, charset, src);
            }

        }

    }

    private void normalBottomJSRenderer(FacesContext fc,UIComponent uiComponent,
            ScriptResource scriptResource, final boolean addToBottom, final String type ) throws IOException {
        
        if( addToBottom && !isBottom )
            return;
        scriptResource.setType(null);
        super.encodeResource(fc, uiComponent, scriptResource);
        scriptResource.setType(type);
        
    }
    private void asyncJSRenderer(FacesContext fc,
            UIComponent uiComponent, ScriptResource scriptResource, 
             final boolean addToBottom, ResponseWriter rw, final String type, final String charset,
            final String src) throws IOException {
        
        if( addToBottom && !isBottom )
            return;
        
        Map<String, String> attrs = null;
        String key = null;
        String value = null;
        String id = "";

        if (scriptResource.getContents() == null) {
            attrs = scriptResource.getAttributes();
            if (!attrs.isEmpty()) {
                StringBuilder strBuilder = new StringBuilder(124);
                for (Iterator<String> it = attrs.keySet().iterator(); it
                        .hasNext();) {
                    key = it.next();
                    value = attrs.get(key);
                    strBuilder.append(key).append('(').append(value)
                            .append(')');
                }
                id = strBuilder.toString();
            }

            // check if already added
            UIViewRootEx view = (UIViewRootEx) fc.getViewRoot();

            String resId = "resource_" + ScriptResource.class.getName() + src
                    + '|' + type + '|' + charset + id;
            if (view.hasEncodeProperty(resId)) {
                return;
            }
            view.putEncodeProperty(resId, Boolean.TRUE);

        }
        if (!scriptResource.isClientSide()) {
            return;
        }

        rw.startElement(SCRIPT, uiComponent);
        JSUtil.writeln(rw);
        rw.write("var s = document.createElement('" + SCRIPT + "');");
        JSUtil.writeln(rw);
        rw.write("s.src = '" + src + "';");
        JSUtil.writeln(rw);
        rw.write("s.async = true;");
        JSUtil.writeln(rw);
        rw.write("document.getElementsByTagName('head')[0].appendChild(s);");
        JSUtil.writeln(rw);
        rw.endElement(SCRIPT);
        JSUtil.writeln(rw);
    }

    
    private void noAMDJSRenderer(FacesContext fc,
             UIComponent uiComponent,ScriptResource scriptResource,
            final boolean addToBottom, ResponseWriter rw, final String type, final String charset,
            final String src ) throws IOException {
        
        if( addToBottom && !isBottom )
            return;

        // write the "disable AMD" script
        rw.startElement(SCRIPT, uiComponent);
        rw.writeAttribute(TYPE, CSJSTYPE, TYPE);
        rw.writeText(
                        "'function'==typeof define&&define.amd&&'dojotoolkit.org'==define.amd.vendor&&(define._amd=define.amd,delete define.amd);",
                        null);
        rw.endElement(SCRIPT);
        JSUtil.writeln(rw);

        // write the normal CSJS
        scriptResource.setType(null);
        super.encodeResource(fc, uiComponent, scriptResource);
        scriptResource.setType(type);
        // write the "reenable AMD" script
        rw.startElement(SCRIPT, uiComponent);
        rw.writeAttribute(TYPE, CSJSTYPE, TYPE);
        rw.writeText(
                "'function'==typeof define&&define._amd&&(define.amd=define._amd,delete define._amd);",
                null);
        rw.endElement(SCRIPT);
        JSUtil.writeln(rw);

    }
}

The ViewRenderer must also be modified, otherwise it is not possible to add the resources at the bottom of the <body> tag:

package ch.hasselba.xpages;

import java.io.IOException;
import java.util.List;

import javax.faces.context.FacesContext;
import javax.faces.context.ResponseWriter;
import javax.faces.render.Renderer;

import com.ibm.xsp.component.UIViewRootEx;
import com.ibm.xsp.render.ResourceRenderer;
import com.ibm.xsp.renderkit.html_basic.ViewRootRendererEx2;
import com.ibm.xsp.resource.Resource;
import com.ibm.xsp.resource.ScriptResource;
import com.ibm.xsp.util.FacesUtil;

public class ViewRootRendererEx3 extends ViewRootRendererEx2 {

    protected void encodeHtmlEnd(UIViewRootEx uiRoot, ResponseWriter rw)
            throws IOException {
        FacesContext fc = FacesContext.getCurrentInstance();

        List<Resource> resources = uiRoot.getResources();
        for (Resource r : resources) {
            if (r instanceof ScriptResource) {
                ScriptResource scriptRes = (ScriptResource) r;
                if (scriptRes.isRendered()) {
                    Renderer renderer = FacesUtil.getRenderer(fc, scriptRes.getFamily(), scriptRes.getRendererType());
                    ResourceRenderer resRenderer = (ResourceRenderer) FacesUtil.getRendererAs(renderer, ResourceRenderer.class);
                    if( resRenderer instanceof OptimizedScriptResourceRenderer ){
                        ((OptimizedScriptResourceRenderer) resRenderer).encodeResourceAtBottom(fc, uiRoot, r);
                    }
                }
            }
        }

        rw.endElement("body");
        writeln(rw);
        rw.endElement("html");
    }

}

To activate the new renderers, you have to add them to the faces-config.xml:

<?xml version="1.0" encoding="UTF-8"?>
<faces-config>
  <render-kit>
    <renderer>
      <component-family>com.ibm.xsp.resource.Resource</component-family>
      <renderer-type>com.ibm.xsp.resource.Script</renderer-type>
      <renderer-class>ch.hasselba.xpages.OptimizedScriptResourceRenderer</renderer-class>
    </renderer>
    <renderer>
      <component-family>javax.faces.ViewRoot</component-family>
      <renderer-type>com.ibm.xsp.ViewRootEx</renderer-type>
      <renderer-class>ch.hasselba.xpages.ViewRootRendererEx3</renderer-class>
    </renderer>
  </render-kit>
</faces-config>

Needless to say, this works in themes too.

Fix Pack 4 for IBM Notes and Domino 9.0.1 released

20 June 2015 Posted by Thomas Bahn

IBM Notes, IBM Domino
A few days ago, IBM released the fourth Fix Pack for IBM Notes and Domino 9.0.1.


Important Notes
  • 9.0.1 Fix Pack 4 updates the embedded Notes/Domino JVM to 1.6 SR16 FP4 to address security vulnerabilities.
  • 9.0.1 Fix Pack 4 adds support for the following: Safari 8 for iNotes; SiteMinder 12.52 SP1

One day later, a security bulletin followed, stating that the "IBM Domino Web server configured for Webmail has a cross-site scripting vulnerability."

CVEID: CVE-2015-1981

Description: IBM Domino Web server configured for Webmail is vulnerable to cross-site scripting, caused by improper validation of user-supplied input. A remote attacker could exploit this vulnerability using a specially-crafted URL to execute script in a victim's Web browser within the security context of the hosting Web site, once the URL is clicked. An attacker could use this vulnerability to steal the victim's cookie-based authentication credentials. Note that Domino servers configured for iNotes are not vulnerable to this attack.

Further information:
IBM Notes/Domino 9.0.1 Fix Pack 4 Release Notice
Download IBM Notes 9.0.1 Fix Pack 4
Download IBM Domino 9.0.1 Fix Pack 4
Security Bulletin: IBM Domino Web Server Cross-site Scripting Vulnerability (CVE-2015-1981)