condusivdev, Author at Condusiv - The Diskeeper Company
https://condusiv.com/author/condusivdev/
WE MAKE WINDOWS SERVERS FASTER AND MORE RELIABLE
Last updated: Tue, 27 Sep 2022

6 Best Practices to Improve SQL Query Performance
https://condusiv.com/6-best-practices-to-improve-sql-query-performance/
Published Mon, 18 Oct 2021

The post 6 Best Practices to Improve SQL Query Performance appeared first on Condusiv - The Diskeeper Company.

Tips for Optimized Microsoft SQL Server Performance

Microsoft SQL Server is a very popular RDBMS (relational database management system) with many benefits and features that support efficient operation. As with any robust platform, however, especially one as mature as SQL Server, best practices have evolved that allow it to perform at its best, including practices for improving SQL query performance.

For any company that uses it, Microsoft SQL Server is central to the management and storage of information. In business, time is money, so any company that relies on information to function (and in this digital age, that is pretty much all of them) needs access to that information as rapidly as possible. If you obtain information from your database through queries, optimizing SQL query performance is vital.

Seven out of ten Condusiv customers came to us because they were experiencing SQL performance issues and user complaints, so we have amassed a great deal of experience on this topic and would like to share it. We have done extensive work eliminating I/O inefficiencies and streamlining I/O for optimum performance. In SQL Server it is especially important to reduce the number of random and “noisy” I/Os, as they can be quite problematic. We will cover this along with some additional best practices. Some of these practices are time-consuming and may require a SQL consultant; others are easy to apply.

SQL Server, and frankly all relational databases, are high I/O utilization systems. They’re going to do a lot of workload against the storage array. Understanding their I/O patterns is important, and more important in virtual environments. — Joey D’Antoni, Senior Architect and SQL Server MVP

Our 6 Best Practices for Improving SQL Query Performance

(also available as downloadable PDF)

1. Tune queries

A great feature of the SQL language is that it is fairly easy to learn and to use. Not all ways of writing a query are equally efficient, however. Two queries that appear similar can vary widely in execution time, and the difference often comes down to how they are structured. This is an involved subject and open to some debate, so it is best to engage a SQL consultant or expert and allow them to assist you with structuring your queries.

Aside from the query structure, there are some great guidelines to follow in defining business requirements before beginning.

•   Identify relevant stakeholders

•   Focus on business outcomes

•   Ask the right questions to develop good query requirements

•   Create very specific requirements and confirm them with stakeholders
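The point about query structure can be seen in miniature: two predicates that look alike can produce very different execution plans. Here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for SQL Server (so it runs anywhere); the table and column names are hypothetical, and the same sargability principle applies in T-SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical orders table; the names are illustrative, not from the article.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
cur.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                [("c%d" % (i % 100), float(i)) for i in range(1000)])
cur.execute("CREATE INDEX idx_orders_total ON orders(total)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute a statement.
    return " ".join(row[3] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

# Sargable predicate: the optimizer can seek the index.
p1 = plan("SELECT * FROM orders WHERE total = 500.0")

# Logically equivalent, but wrapping the column in an expression
# defeats the index and forces a full table scan.
p2 = plan("SELECT * FROM orders WHERE total + 0 = 500.0")

print(p1)  # a SEARCH ... USING INDEX plan
print(p2)  # a SCAN (full table) plan
```

The two queries return identical rows, yet only the first can use the index, which is exactly the kind of structural difference a consultant looks for.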

2. Add memory

Adding memory will nearly always help SQL Server query performance, as SQL Server uses memory in several ways. These include:

•   the buffer cache (also called the buffer pool), which holds recently read and written data pages

•   the plan cache, where query plans are stored for re-use

•   workspace memory for sorting, hashing, and joining data, which all takes place in memory

Some queries require a great deal of memory for joins, sorts, and other operations, and the more data you aggregate and query, the more memory each query may require.

Tips for current Condusiv users:

1. Provision an additional 4-16GB of memory to the SQL Server if you have additional memory to give.

2. Cap MS-SQL memory usage, leaving the additional memory for the OS and our software. (Note: Condusiv software will leverage whatever is unused by the OS.)

3. If there is no additional memory to add, cap SQL memory usage leaving 8GB for the OS and our software. (Note: this may not achieve 2X gains, but will likely boost performance 30-50%, as SQL Server is not always efficient with its memory usage.)

3. Perform index maintenance

Indexes are a key resource for SQL Server database performance. The downside, however, is that database indexes degrade over time.

Part of this performance degradation comes about through something that many system administrators will be familiar with: fragmentation. Fragmentation on a storage drive means data is stored non-contiguously, so the system has to piece together thousands of fragments, incurring extra I/Os, to retrieve it. The situation with a database index is similar.

There are two types of database index fragmentation:

•   Internal fragmentation, which occurs when data pages carry excessive free space, for example when a page split leaves two pages, neither of which is full. Performance is affected because SQL Server must read and cache full pages that include empty yet allocated space.

•   External fragmentation, which occurs when the logical order of pages no longer matches their physical order.

When an index is created, all pages are sequential, and rows are sequential across the pages. But as data is manipulated and added, pages are split, new pages are added, and tables become fragmented. This ultimately results in index fragmentation.

There are several measures for restoring an index so that all data is sequential again. One is to rebuild the index, which results in a brand-new SQL index. Another is to reorganize the index, which fixes the page order and compacts pages.

There are other measures you can take as well, such as finding and removing unused indexes, detecting and creating missing indexes, and rebuilding or reorganizing indexes weekly.

It is recommended you do not perform such measures unless you are a DBA and/or have a thorough understanding of SQL Server.
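For intuition only, the effect a rebuild has on accumulated dead space can be sketched with Python's built-in sqlite3 module, where VACUUM plays a role loosely analogous to ALTER INDEX ... REBUILD in SQL Server. This is an analogy on a toy database, not SQL Server syntax:

```python
import sqlite3

# Autocommit mode so VACUUM can run outside a transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()

cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
cur.executemany("INSERT INTO t (payload) VALUES (?)",
                [("x" * 200,) for _ in range(5000)])

# Deleting most rows leaves partially empty and wholly free pages behind,
# loosely comparable to the dead space that index churn leaves in SQL Server.
cur.execute("DELETE FROM t WHERE id % 10 != 0")
free_before = cur.execute("PRAGMA freelist_count").fetchone()[0]

# VACUUM rebuilds the database, packing live rows contiguously again.
cur.execute("VACUUM")
free_after = cur.execute("PRAGMA freelist_count").fetchone()[0]

print(free_before, free_after)  # free pages before rebuild > 0; after, 0
```

The same before/after comparison in SQL Server would use sys.dm_db_index_physical_stats to inspect fragmentation, which is one reason this kind of maintenance is best left to a DBA.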

4. Add extra spindles or flash drives

As with adding memory, adding storage hardware can be beneficial.

Adding an SSD, the more expensive option, can provide the most benefit, as there are no moving parts. The less expensive option is to add spindles. Both options can help decrease latency, but neither gets rid of the extra I/Os caused by fragmentation, so neither solves the root cause of I/O inefficiencies.

5. Optimize the I/O subsystem

Optimizing the I/O subsystem is critical to SQL Server performance. When configuring a new server, or when adding or modifying the disk configuration of an existing system, it is good practice to determine the capacity of the I/O subsystem before deploying SQL Server.

There are three primary metrics that are most important when it comes to measuring I/O subsystem performance:

•   Latency, which is the time it takes an I/O to complete.

•   I/O operations per second (IOPS), which is closely tied to latency.

•   Sequential throughput, which is the rate at which you can transfer data.

You can utilize an I/O stress tool to validate performance and ensure that the system is tuned optimally for SQL Server before deployment. This will help identify hardware or I/O configuration-related issues. One such tool is Microsoft DiskSpd, which provides the functionality needed to generate a wide variety of disk request patterns. These can be very helpful in the diagnosis and analysis of I/O performance issues.

You can download DiskSpd.exe here.
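DiskSpd is the appropriate tool for real testing, but the three metrics above can be illustrated with a toy micro-benchmark. Here is a minimal sketch in Python using synchronous buffered writes to a temporary file; real tools use unbuffered I/O, multiple outstanding requests, and controlled queue depths, so treat the numbers as illustrative only:

```python
import os
import tempfile
import time

BLOCK_SIZE = 4096   # bytes per I/O request
IO_COUNT = 2000     # number of sequential writes

fd, path = tempfile.mkstemp()
try:
    buf = os.urandom(BLOCK_SIZE)
    start = time.perf_counter()
    for _ in range(IO_COUNT):
        os.write(fd, buf)
    os.fsync(fd)  # ensure the data actually reached the device
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)
    os.unlink(path)

# The three metrics from the list above:
avg_latency_ms = elapsed / IO_COUNT * 1000                # latency per I/O
iops = IO_COUNT / elapsed                                 # I/O operations per second
throughput_mb_s = IO_COUNT * BLOCK_SIZE / elapsed / 1e6   # sequential throughput

print("%.3f ms, %.0f IOPS, %.1f MB/s" % (avg_latency_ms, iops, throughput_mb_s))
```

Running the same loop with random offsets instead of sequential appends is what exposes the gap between sequential and random performance on spinning disks.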

Another tool is Condusiv’s I/O Assessment Tool for identifying which systems suffer I/O issues and which systems do not. It identifies and ranks systems with the most I/O issues and displays what those issues are across 11 different key performance metrics by identifying performance deviations when workload is the heaviest.

You can download Condusiv’s I/O Assessment Tool here.

6. Use DymaxIO fast data performance software

Reduce the number of I/Os that you are doing. Because remember, the fastest read from disk you can do is one you don’t do at all. So, if you don’t have to do a read, that’s all the better. —Joey D’Antoni, Senior Architect and SQL Server MVP

Reducing and streamlining small, random, fractured I/O speeds up slow SQL queries and reports and helps eliminate missed SLAs. DymaxIO makes this easy to solve.

Many companies have used virtualization to greatly increase server efficiency for SQL Server. While increasing efficiency, virtualization on Windows systems has a downside: it adds complexity to the data path by mixing and randomizing I/O streams, something known as the “I/O blender effect.” On top of that, when Windows is abstracted from the physical layer, it issues very small random reads and writes, which are less efficient than larger contiguous reads and writes. SQL Server performance is penalized not once, but twice. The net effect is I/O characteristics that are more fractured and random than they need to be.

The result is that systems typically process workloads about 50 percent slower than they should, simply because a great deal more I/O is required.

While hardware upgrades can help with the performance problems covered above, they are only a temporary fix, because the cause of Windows I/O inefficiencies is not being addressed. Many sites have discovered that DymaxIO fast data performance software, deployed on any Windows server (virtual or physical), is a quicker and far more cost-effective solution. DymaxIO replaces tiny writes with large, clean, contiguous writes so that more payload is delivered with every I/O operation. I/O to storage is further reduced by establishing a tier 0 caching strategy which automatically serves hot reads from idle, otherwise unused memory. The software adjusts itself, moment to moment, to use only memory that is otherwise unused.
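The read-caching idea, serving repeat ("hot") reads from otherwise unused memory, can be sketched as a toy LRU cache. This illustrates the general technique only; it is not how DymaxIO itself is implemented:

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU block cache: repeat ("hot") reads are served from memory."""

    def __init__(self, backing, capacity):
        self.backing = backing        # dict-like stand-in for the disk
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark as most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]          # the "slow" storage read
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

disk = {n: bytes([n % 256]) * 4096 for n in range(64)}
cache = ReadCache(disk, capacity=32)

for _ in range(3):
    for n in range(16):   # a hot working set of 16 blocks, read repeatedly
        cache.read(n)

print(cache.hits, cache.misses)  # 32 hits, 16 misses
```

Only the first pass over the working set touches the "disk"; every subsequent read is satisfied from memory, which is why eliminating repeat reads pays off so quickly.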

The use of DymaxIO can improve performance by 50 percent or more, including SQL query performance. Many sites see twice as much improvement or more, depending on the amount of DRAM available. Condusiv Technologies, developer of DymaxIO, actually provides a money-back guarantee that DymaxIO will solve the toughest application performance challenges on I/O intensive systems such as SQL Server.

 

Originally published on Jan 31, 2020. Updated Oct 19, 2021, Sep 27, 2022.

This is Why a Top Performing School Recommended Condusiv Software and Doubled Performance
https://condusiv.com/this-is-why-a-top-performing-school-recommended-condusiv-software-and-doubled-performance/
Published Thu, 06 Dec 2018

The post This is Why a Top Performing School Recommended Condusiv Software and Doubled Performance appeared first on Condusiv - The Diskeeper Company.

Lawnswood School is one of the top performing educational institutions in the UK. Their IT environment supports a workload that is split between approximately 1200 students and 200 staff.  With 1400 people and various programs and files to support, they were in search of something to help extend the life of their hardware and increase performance. They turned to Condusiv®’s V-locity® and Diskeeper® to extend the life of their storage hardware and maintain performance for their students and staff.

“Condusiv’s V-locity software eliminated almost 50% of all storage I/O requests from having to be dealt with by the disk storage (SAN) layer, and that meant that when replacing the old SAN storage with a ‘like-for-like’ HP MSA 2040, V-locity gave me the confidence to make the purchase without having to over-spend to over-provision the storage in order to cope with all the excess unnecessary storage I/O traffic that V-locity efficiently eliminates,” said Noel Reynolds, IT Manager at Lawnswood School. Before upgrading his SAN, Noel was able to extend the life of the HP MSA 2000 SAN for 8 years “thanks to Condusiv’s I/O reduction software”.

Knowing how well the IT environment was performing at Lawnswood School, another school reached out to Noel for help, as their IT environment was almost identical, but suffering from slow and sluggish performance. They also had three VMware hosts of the same specification, the older HP MSA 2000 SAN storage and workloads that were pretty much identical. Noel Reynolds noted: “They were almost a ‘clone’ school.”

He continued: “I did the usual checks to discover why it wasn’t working well, such as upgrading the firmware, checking the disks for errors and found nothing wrong other than bad storage performance. After comparing the storage latency, I found that Lawnswood School’s disk storage was 20 times faster, even though the hardware, software and workload types were pretty much identical.”

“We identified six of the ‘most hit’ servers and installed Condusiv’s software on them. Within 24 hours, we saw a 50% boost in performance. Visibly improved performance had been returned to the users, and this really helped the end user experience. A great example of a real-world solution,” Noel concluded.

Read the full case study | Download 30-day trial

Finance Company Deploys V-locity I/O Reduction Software for Blazing Fast VDI
https://condusiv.com/finance-company-deploys-v-locity-i-o-reduction-software-for-blazing-fast-vdi/
Published Tue, 27 Nov 2018

The post Finance Company Deploys V-locity I/O Reduction Software for Blazing Fast VDI appeared first on Condusiv - The Diskeeper Company.

When the New Mexico Mortgage Finance Authority decided to better support their users by moving away from using physical PCs and migrating to a virtual desktop infrastructure, the challenge was to ensure the fastest possible user experience from their Horizon View VDI implementation.

“Anytime an organization starts talking about VDI, the immediate concern in the IT shop is how well we will be able to support it from a performance standpoint to ensure a pristine end user experience. Although supported by EMC VNXe flash storage with high IOPS, one of our primary concerns had to do with Windows write inefficiencies that chew up a large percentage of flash IOPS unnecessarily. When you’re rolling out a VDI initiative, the one thing you can’t afford to waste is IOPS,” said Joseph Navarrete, CIO, MFA.

After Joseph turned to Condusiv’s “Set-It-and-Forget-It®” V-locity® I/O reduction software and bumped up the memory allocation for his VDI instances, V-locity was able to offload 40% of I/O from storage, resulting in a much faster VDI experience for his users. When he demo’d V-locity on his MS-SQL server instances, V-locity eliminated 39% of his read I/O traffic from storage due to DRAM read caching and another 40% of write I/O operations by solving Windows write inefficiencies at the source.

After seeing the performance boost and increased efficiency to his hardware stack, Joseph ensured V-locity was running across all his systems like MS-Exchange, SharePoint, and more.

“With V-locity I/O reduction software running on our VDI instances, users no longer have to wait extra time. The same is now true for our other mission critical applications like MS-SQL. The dashboard within the V-locity UI provides all the necessary analytics about our environment and view into what the software is actually doing for us. The fact that all of this runs quietly in the background with near-zero overhead impact and no longer requires a reboot to install or upgrade makes the software truly “set and forget,” said Navarrete.

Read the full case study | Download 30-day trial

When It Really NEEDS To Be Deleted
https://condusiv.com/when-it-really-needs-to-be-deleted/
Published Fri, 26 Oct 2018

The post When It Really NEEDS To Be Deleted appeared first on Condusiv - The Diskeeper Company.

In late May of this year, the European Union formally adopted an updated set of rules about personal data privacy called the General Data Protection Regulation. Condusiv CEO Jim D’Arezzo, speaking with Marketing Technology Insights, said, “Penalties for noncompliance with GDPR are severe. They can be as much as 4% of an offending company’s global turnover, up to a total fine of €20 million.”

A key provision of GDPR is the right to be forgotten, which enables any European citizen to have his or her name and identifying data permanently removed from the archives of any firm holding that data in its possession. One component of the right to be forgotten, D’Arezzo notes, is called “right to erasure,” which requires that the data be permanently deleted, i.e. irrecoverable.

Recently, the EU government has begun cracking down on international enterprises, attempting to extend the EU’s right-to-erasure laws to all websites, regardless of where the traffic originates. Many affected records consist not of fields or records in a database, but of discrete files in formats such as Excel or Word.

So to stay compliant with GDPR (and with the EU being the world’s largest market and twenty million euros being a lot of money, compliance is worth taking seriously), you need to be able to delete a file to the point that you can’t get it back. On the other hand, files get deleted by accident or mistake all the time; unless you want to permanently cripple your data archive, you also need to be able to get those files back, quickly and easily.

In other words, you need a two-edged sword. For Windows-based systems, that’s exactly what’s provided by our Undelete® product line. Up to a point, any deleted file or version of an Office file can be easily restored, even if it was deleted before Undelete was installed.

If, however—as in the case of a confirmed “right to erasure” request—you need to delete it forever, you use Undelete’s SecureDelete® feature. Using specific bit patterns specified by the US National Security Agency, SecureDelete will overwrite the file to help make it unrecoverable. A second feature, Wipe Free Space, will overwrite any free space on a selected volume, using the same specific bit patterns, to clear out any previously written data in that free space.

So with Undelete, you’re covered both ways. Customers buy it for its recovery abilities: you need to be able to hit the “oops” button and get a file back. But it can also handle the job when you need to make sure a file is gone.

“No matter how redundant my backups are, how secure our security is, I will always have the one group of users that manage to delete that one critical file. I have found Undelete to be an invaluable tool for just such an occasion. This software has saved us both time and money. When we migrated from a Novell Infrastructure, we needed to find a solution that would allow us to restore ‘accidentally’ deleted data from a network share. Since installing Undelete on all my servers, we have had no lost data due to accidents or mistakes.”
–Juan Saldana II, Network Supervisor, Keppel AmFELS

For Undelete help with servers or virtual systems, click Undelete Server

To save money with Undelete on business PCs, click Undelete Professional

You can purchase Undelete immediately online or download a free 30-day trial (note that the 30-day free trial does not include the Emergency Undelete feature).

Big Data Boom Brings Promises, Problems
https://condusiv.com/big-data-boom-brings-promises-problems/
Published Fri, 07 Sep 2018

The post Big Data Boom Brings Promises, Problems appeared first on Condusiv - The Diskeeper Company.

By 2020, an estimated 43 trillion gigabytes of data will have been created—300 times the amount of data in existence fifteen years earlier. The benefits of big data, in virtually every field of endeavor, are enormous. We know more, and in many ways can do more, than ever before. But what of the challenges posed by this “data tsunami”? Will the sheer ability to manage—or even to physically house—all this information become a problem?

Condusiv CEO Jim D’Arezzo, in a recent discussion with Supply Chain Brain, commented that “As it has over the past 40 years, technology will become faster, cheaper, and more expansive; we’ll be able to store all the data we create. The challenge, however, is not just housing the data, but moving and processing it. The components are storage, computing, and network. All three need to be optimized; I don’t see any looming insurmountable problems, but there will be some bumps along the road.”

One example is healthcare. Speaking with Healthcare IT News, D’Arezzo noted that there are many new solutions open to healthcare providers today. “But with all the progress,” he said, “come IT issues. Improvements in medical imaging, for instance, create massive amounts of data; as the quantity of available data balloons, so does the need for processing capability.”

Giving health-care providers—and professionals in other areas—the benefits of the data they collect is not always easy. In an interview with Transforming Data with Intelligence, D’Arezzo said, “Data center consolidation and updating is a challenge. We run into cases where organizations do consolidation on a ‘forklift’ basis, simply dumping new storage and hardware into the system as a solution. Shortly thereafter, they often discover that performance has degraded. A bottleneck has been created that needs to be handled with optimization.”

The news is all over it. You are experiencing it. Big data. Big problems. At Condusiv®, we get it.  We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware. The tsunami of data—we’ve got you covered.

If you’re interested in reducing the two biggest silent killers of your SQL performance, download a free 30-day trial of DymaxIO fast data software and boost your  I/O performance now.

If you want to hear why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

Financial Sector Battered by Rising Compliance Costs
https://condusiv.com/financial-sector-battered-by-rising-compliance-costs/
Published Wed, 15 Aug 2018

The post Financial Sector Battered by Rising Compliance Costs appeared first on Condusiv - The Diskeeper Company.

Finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT—and on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry.

All over the world, financial services companies are facing skyrocketing compliance costs. Almost half the respondents to a recent Accenture survey of compliance officers in 13 countries said they expected 10% to 20% increases, and nearly one in five are expecting increases of more than 20%.

Much of this is driven by international banking regulations. At the beginning of this year, the Common Reporting Standard went into effect. An anti-tax-evasion measure signed by 142 countries, the CRS requires financial institutions to provide detailed account information to the home governments of virtually every sizeable depositor.

Just to keep things exciting, the U.S. government hasn’t signed on to CRS; instead, it requires banks doing business with Americans to comply with the Foreign Account Tax Compliance Act of 2010, which requires (surprise, surprise) pretty much the same thing as CRS, but reported differently.

And these are just two examples of the compliance burden the financial sector must deal with. Efficiently, and within a budget. In a recent interview by ValueWalk entitled “Compliance Costs Soaring for Financial Institutions,” Condusiv® CEO Jim D’Arezzo said, “Financial firms must find a path to more sustainable compliance costs.”

Speaking to the site’s audience (ValueWalk is a site focused on hedge funds, large asset managers, and value investing) D’Arezzo noted that finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT, more than government, healthcare, retail, or anybody else. It’s also an outlier in terms of IT staff load; on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry. (Government averages 37.8 users per IT staff employee.)

To ease these difficulties, D’Arezzo recommends that the financial industry consider advanced technologies that provide cost-effective ways to enhance overall system performance. “The only way financial services companies will be able to meet the compliance demands being placed on them, and at the same time meet their efficiency and profitability targets, will be to improve the efficiency of their existing capacity—especially as regards I/O reduction.”

At Condusiv, that’s our business. We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware.

If you’re interested in reducing the two biggest silent killers of your SQL performance, download a free 30-day trial of DymaxIO fast data software and boost your  I/O performance now.

For an explanation of why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

Doing it All: The Internet of Things and the Data Tsunami
https://condusiv.com/doing-it-all-the-internet-of-things-and-the-data-tsunami/
Published Tue, 07 Aug 2018

The post Doing it All: The Internet of Things and the Data Tsunami appeared first on Condusiv - The Diskeeper Company.

“If you’re a CIO today, basically you have no choice. You have to do edge computing and cloud computing, and you have to do them within budgets that don’t allow for wholesale hardware replacement…”

For a while there, it looked like corporate IT resource planning was going to be easy. Organizations would move practically everything to the cloud, lean on their cloud service suppliers to maintain performance, cut back on operating expenses for local computing, and reduce—or at least stabilize—overall cost.

Unfortunately, that prediction didn’t reckon with the Internet of Things (IoT), which, in terms of both size and importance, is exploding.

What’s the “edge?”

It varies. To a telecom, the edge could be a cell phone, or a cell tower. To a manufacturer, it could be a machine on a shop floor. To a hospital, it could be a pacemaker. What’s important is that edge computing allows data to be analyzed in near real time, allowing actions to take place at a speed that would be impossible in a cloud-based environment.

(Consider, for example, a self-driving car. The onboard optics spot a baby carriage in an upcoming crosswalk. There isn’t time for that information to be sent upstream to a cloud-based application, processed, and an instruction returned before slamming on the brakes.)
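A short back-of-the-envelope calculation makes the point concrete. The latency figures below are illustrative assumptions (roughly 10 ms for on-board edge processing versus 200 ms for a cloud round trip plus inference), not measurements:

```python
def distance_during_latency(speed_mph: float, latency_ms: float) -> float:
    """Distance in meters a vehicle covers while waiting `latency_ms` milliseconds."""
    meters_per_second = speed_mph * 1609.344 / 3600  # convert mph to m/s
    return meters_per_second * (latency_ms / 1000)

# Assumed figures: ~10 ms on-board (edge) vs. ~200 ms cloud round trip.
edge = distance_during_latency(30, 10)
cloud = distance_during_latency(30, 200)
print(f"edge:  {edge:.2f} m")   # ~0.13 m traveled at 30 mph
print(f"cloud: {cloud:.2f} m")  # ~2.68 m traveled at 30 mph
```

Under these assumptions, waiting on the cloud means the car travels well over two additional meters before it can begin braking, which is exactly the gap edge computing closes.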

Meanwhile, the need for massive data processing and analytics continues to grow, creating a kind of digital arms race between data creation and the ability to store and analyze it. In the life sciences, for instance, it’s estimated that only 5% of the data ever created has been analyzed.

Condusiv® CEO Jim D’Arezzo was interviewed by App Development magazine (which publishes news to 50,000 IT pros) on this very topic, in an article entitled “Edge computing has a need for speed.” Noting that edge computing is predicted to grow at a CAGR of 46% between now and 2022, Jim said, “If you’re a CIO today, basically you have no choice. You have to do edge computing and cloud computing, and you have to do them within budgets that don’t allow for wholesale hardware replacement. For that to happen, your I/O capacity and SQL performance need to be optimized. And, given the realities of edge computing, so do your desktops and laptops.”

At Condusiv, we’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware.

If you're interested in reducing the two biggest silent killers of your SQL performance, download a free 30-day trial of DymaxIO fast data software and boost your I/O performance now.

If you want to hear why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

The post Doing it All: The Internet of Things and the Data Tsunami appeared first on Condusiv - The Diskeeper Company.

]]>
https://condusiv.com/doing-it-all-the-internet-of-things-and-the-data-tsunami/feed/ 2
$2 Million Cancelled https://condusiv.com/2-million-cancelled/?utm_source=rss&utm_medium=rss&utm_campaign=2-million-cancelled Tue, 22 Jul 2014 18:46:36 +0000 https://blog.condusiv.com/?p=1407 CHRISTUS Health cancelled a $2 Million order. Just before they pulled the trigger on a $2 Million storage purchase to improve the performance of their electronic health records application (MEDITECH®), they evaluated V-locity® I/O reduction software. We actually heard the story first hand from the hardware vendor's reseller in the deal at a UBM Xchange conference. [...]

The post $2 Million Cancelled appeared first on Condusiv - The Diskeeper Company.

]]>
CHRISTUS Health cancelled a $2 million order.

Just before they pulled the trigger on a $2 million storage purchase to improve the performance of their electronic health records application (MEDITECH®), they evaluated V-locity® I/O reduction software.

We actually heard the story firsthand from the hardware vendor's reseller in the deal at a UBM Xchange conference. He thought he had closed the $2 million deal, only to find out that CHRISTUS was doing some testing with V-locity. After getting the news that the storage order would not be placed, he met us at Xchange to find out more about V-locity, since "this V-locity stuff is for real."

In any initial conversation about V-locity, the first response is generally the same: skepticism. Can software alone really accelerate the applications in my virtual environment? Because we are conditioned to think only new hardware upgrades can solve performance bottlenecks, organizations end up with spiraling data center costs and no option but to throw more hardware at the problem.

CHRISTUS Health, like many others, approached us with the same skepticism. But after virtualizing 70+ servers for their EHR application, they noticed a severe performance hit from the “I/O blender” effect. They needed a solution to solve the problem, not just more hardware to medicate the problem on the backend.

Since V-locity comes with an embedded performance benchmark that provides the I/O profile of any VM workload, it makes it easy to see a before/after comparison in real-world environments.

After the evaluation, CHRISTUS found that not only had they doubled their medical records performance, but after trying V-locity on their batch billing job, they had cut a painful 20-hour run down to 12 hours.

In addition to performance gains, V-locity also provides a special benefit to MEDITECH users by eliminating excessive file fragmentation that can cause the File Attribute List (FAL) to reach its size limit and degrade performance further or even threaten availability.

Tom Swearingen, the manager of Infrastructure Services at CHRISTUS Health, said it best: "We are constantly scrutinizing our budget, so anything that helps us avoid buying more storage hardware for performance or host-related infrastructure is a huge benefit."

Read the full case study: CHRISTUS Health Doubles Electronic Health Record Performance with V-locity I/O Reduction Software

The post $2 Million Cancelled appeared first on Condusiv - The Diskeeper Company.

]]>
Experts discuss built-in defragmentation and the superior merits of Diskeeper optimization https://condusiv.com/experts-discuss-built-in-defragmentation-and-the-superior-merits-of-diskeeper-optimization/?utm_source=rss&utm_medium=rss&utm_campaign=experts-discuss-built-in-defragmentation-and-the-superior-merits-of-diskeeper-optimization Fri, 27 Jan 2012 18:51:55 +0000 https://blog.condusiv.com/?p=1418 Recently, there’s been a lot of talk about built-in defragging systems. Is Windows®7 the best option? In the latest issue of Processor Magazine, experts weigh in, making the case for Diskeeper’s optimization in the enterprise. Read the whole article here: https://www.processor.com/articles//P3402/11p02/11p02.pdf?guid

The post Experts discuss built-in defragmentation and the superior merits of Diskeeper optimization appeared first on Condusiv - The Diskeeper Company.

]]>
Recently, there's been a lot of talk about built-in defragging systems. Is Windows® 7 the best option? In the latest issue of Processor Magazine, experts weigh in, making the case for Diskeeper's optimization in the enterprise. Read the whole article here: https://www.processor.com/articles//P3402/11p02/11p02.pdf?guid

The post Experts discuss built-in defragmentation and the superior merits of Diskeeper optimization appeared first on Condusiv - The Diskeeper Company.

]]>
Congratulations Mike Topping, Network Engineer from Metro Datacom! https://condusiv.com/congratulations-mike-topping-network-engineer-from-metro-datacom/?utm_source=rss&utm_medium=rss&utm_campaign=congratulations-mike-topping-network-engineer-from-metro-datacom https://condusiv.com/congratulations-mike-topping-network-engineer-from-metro-datacom/#respond Fri, 20 May 2011 03:42:00 +0000 https://blog.condusiv.com/congratulations-mike-topping-network-engineer-from-metro-datacom/ Mike sent us his Diskeeper 2011 Disk Performance Report showing 322,347 Disk Access I/Os saved!  Mike is the winner of the drawing for the $100 American Express gift card!

The post Congratulations Mike Topping, Network Engineer from Metro Datacom! appeared first on Condusiv - The Diskeeper Company.

]]>
Mike sent us his Diskeeper 2011 Disk Performance Report showing 322,347 Disk Access I/Os saved! 

Mike is the winner of the drawing for the $100 American Express gift card!

The post Congratulations Mike Topping, Network Engineer from Metro Datacom! appeared first on Condusiv - The Diskeeper Company.

]]>
https://condusiv.com/congratulations-mike-topping-network-engineer-from-metro-datacom/feed/ 0