Search Results

Search found 453 results on 19 pages for 'throughput'.

Page 5 of 19

  • WebLogic Server(JRockit) - JVM performance tuning | WebLogic Channel | Oracle

    - by ???02
    The Japanese text of this excerpt did not survive character-encoding conversion; what remains legible outlines six first steps for tuning the Oracle JRockit JVM, the JVM that ships with WebLogic Server: 1) Gather information about how the application behaves, using Oracle JRockit Mission Control and verbose garbage collection logging (for example -Xverbose:memdbg,gcpause,gcreport). 2) Choose a garbage collection mode: the dynamic throughput-optimising mode (java -Xgc:throughput myApplication), the static single-spaced parallel collector (-Xgc:singlepar), or the generational parallel collector (-Xgc:genpar); see section 4.2 of the Oracle JRockit performance tuning guide. 3) Set the heap size: the default initial heap is 64MB, with a default maximum of 3GB on 64-bit platforms and 1GB on 32-bit platforms; setting -Xms (initial) and -Xmx (maximum) to the same value avoids heap resizing, for example java -Xms:2g -Xmx:2g myApplication (guide section 4.4). 4) Set the nursery size for the generational modes (-Xgc:throughput, -Xgc:genpar and -Xgc:gencon) with -Xns, so that short-lived objects are collected cheaply before being promoted to the old space, for example java -Xgc:gencon -Xms:2g -Xmx:2g -Xns:512m myApplication. 5) A further garbage-collection tuning step whose details are lost in the corrupted text; the legible fragments again reference the -Xgc option and section 4.3 of the guide. 6) Tune the thread-local area (TLA) size: the TLA is the chunk of free heap a Java thread reserves so it can allocate small objects without synchronisation; the JRockit JVM picks a default size, and adjusting it can help allocation-heavy applications (guide section 4.4.1).

    Read the article

  • SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

    - by pinaldave
    There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However, the world is not ideal and databases are often a cause of major headaches when applications need to scale to accommodate more connections, transactions, or both. In order to overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications. Getting transactional consistency across multiple SQL database shards can prove to be very difficult. Sharding requires developers to think about things like rollbacks, constraints, and referential integrity across tables within their applications when these types of concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself. What if database sharding was no longer required to scale your application? Let me explain. For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta and they have recently released their first release candidate, so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput. Each NuoDB database consists of at least three processes that enable a single database to run across multiple hosts. These processes include a Broker, a Transaction Engine and a Storage Manager. Brokers are responsible for connecting client applications to Transaction Engines and maintain a global view of the network to keep track of the multiple Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Engines if they don't exist in memory. The secret to NuoDB's approach to solving the sharding problem is that it is a truly distributed, peer-to-peer, SQL database. Each of its processes can be deployed across multiple hosts. When client applications need to connect to a Transaction Engine, the Broker will automatically route the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires. NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are also making general administration of the database simpler as well. To you, as a user, their distributed database appears like a single SQL database. With their RC1 release they have also provided a web-based administrative console that they call NuoConsole.
This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB
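    Going back to the Broker / Transaction Engine / Storage Manager description above, here is a purely illustrative Python sketch of the routing idea - the class names, host names and "least busy" policy are invented for the example and are not NuoDB's actual API:

        # Illustrative sketch only: a broker routing client connections to the
        # least-loaded transaction engine, the role the article assigns to NuoDB's Broker.
        from dataclasses import dataclass, field

        @dataclass
        class TransactionEngine:
            host: str
            active_connections: int = 0

        @dataclass
        class Broker:
            engines: list = field(default_factory=list)

            def route(self):
                # Pick the most available (least busy) engine, as the article describes.
                engine = min(self.engines, key=lambda e: e.active_connections)
                engine.active_connections += 1
                return engine

        broker = Broker(engines=[TransactionEngine("te-host-1"), TransactionEngine("te-host-2")])
        print(broker.route().host)   # connections spread across hosts, no sharding needed

    Because every process is a peer that can run on any host, adding a Transaction Engine in this model is just another entry in the broker's list rather than a new shard the application has to know about.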

    Read the article

  • Fastest SFTP client

    - by Stan
    Protocol: SFTP (port 22). I've tested CuteFTP, FileZilla, SecureCRT and several others. CuteFTP appears to have the best throughput, usually 200%-400% that of the others. I've also read here that Secure FTP may have a slower transfer rate. Can anyone explain why CuteFTP achieves better throughput? And is there any other FTP client even faster than CuteFTP? Thanks a lot!
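    One way to take the client out of the equation is to measure raw SFTP throughput yourself; the sketch below uses the Python paramiko library, and the host, credentials and file paths are placeholders rather than values from the question:

        # Rough SFTP download-throughput check with paramiko (pip install paramiko).
        # Host, credentials and paths are placeholders.
        import time
        import paramiko

        transport = paramiko.Transport(("example.com", 22))
        transport.connect(username="user", password="secret")
        sftp = paramiko.SFTPClient.from_transport(transport)

        size_mb = sftp.stat("/remote/bigfile.bin").st_size / (1024 * 1024)
        start = time.time()
        sftp.get("/remote/bigfile.bin", "bigfile.bin")   # download a large test file
        elapsed = time.time() - start
        print(f"{size_mb / elapsed:.1f} MB/s")

        sftp.close()
        transport.close()

    Differences between clients usually come down to buffer sizes, pipelining of read requests and the cipher negotiated, so a single-library baseline like this helps show how much headroom the link itself has.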

    Read the article

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the response grows. For example, I ran four different tests, with these results:
    15kb response through HAProxy: avg. response time 0.34 s, transaction rate 763 trans/sec, throughput 11.08 MB/sec
    2kb response through HAProxy: avg. response time 0.08 s, transaction rate 1171 trans/sec, throughput 2.51 MB/sec
    15kb response directly to server: avg. response time 0.11 s, transaction rate 1046 trans/sec, throughput 15.20 MB/sec
    2kb response directly to server: avg. response time 0.05 s, transaction rate 1158 trans/sec, throughput 2.48 MB/sec
    All transactions are HTTP requests. As you can see, the difference in response time is much bigger when the response is large than when it is small. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the test itself was run using siege, and during the test there was only one server behind HAProxy (the same one used in the direct-to-server tests). Here is my haproxy config file:
    global
        log 127.0.0.1 local0
        log 127.0.0.1 local1 notice
        maxconn 10000
        user haproxy
        group haproxy
        daemon
        #debug
    defaults
        log global
        mode http
        option httplog
        option dontlognull
        retries 3
        option redispatch
        option httpclose
        maxconn 10000
        contimeout 10000
        clitimeout 50000
        srvtimeout 50000
        balance roundrobin
        stats enable
        stats uri /stats
    listen lb1 10.1.10.26:80
        maxconn 10000
        server app1 10.1.10.200:8080 maxconn 5000
    I couldn't find much in terms of options in this file that would help my problem. I have heard suggestions that I may have to adjust a few of my sysctl settings. I could not find a lot of information on this, however; most documentation on the sysctl side is for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to auto-tune, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect or a negative effect on performance. Just a notice: this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec to many servers, so if anyone could provide information to help me reach that goal, it would be much appreciated. Thank you very much for any information you can provide. And if you need any more information from me please let me know, I will get you anything I can.
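    A quick way to reproduce the gap outside of siege is to time the same request both directly and through the proxy; the following Python sketch uses the requests library, and the URLs are placeholders standing in for the addresses mentioned above:

        # Compare average response time direct vs. through HAProxy for the same resource.
        # The two URLs below are placeholders matching the setup described above.
        import time
        import requests

        def avg_response_time(url, n=100):
            total = 0.0
            for _ in range(n):
                start = time.time()
                requests.get(url)
                total += time.time() - start
            return total / n

        direct = avg_response_time("http://10.1.10.200:8080/test15kb")
        proxied = avg_response_time("http://10.1.10.26/test15kb")
        print(f"direct: {direct:.3f}s  proxied: {proxied:.3f}s  overhead: {proxied - direct:.3f}s")

    Running this for both the 2kb and the 15kb resource makes it easy to see whether the extra latency grows with payload size, which would point at buffering or connection handling in the proxy path rather than at the backend.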

    Read the article

  • How many megabytes per second may I expect from a gigabit USB 2.0 network card under linux?

    - by Nakedible
    I'm interested in the actual real-world throughput attainable with an external 1000BaseT USB 2.0 network card under Linux. I have been able to attain 90 megabytes per second on a PCI-E interface, but the USB 2.0 bus has a theoretical limit of 480Mbit/s, and in practice delivers less than 40 megabytes per second. Is the actual throughput attainable with such a card under Linux 40, 30, 20, or even as low as 10 megabytes per second, i.e. no better than a normal 100BaseT network card?
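    For an answer grounded in your own hardware, a memory-to-memory transfer between two hosts (which is what iperf does) takes disks out of the picture; here is a minimal Python sender/receiver sketch, with the port and transfer size chosen arbitrarily:

        # Minimal one-way TCP throughput test. Run "script.py server" on one host and
        # "script.py <server-ip>" on the other. Port and transfer size are arbitrary.
        import socket, sys, time

        PORT, CHUNK, TOTAL = 5001, 1 << 20, 512 * (1 << 20)   # 1 MiB chunks, 512 MiB total

        if sys.argv[1] == "server":
            srv = socket.socket()
            srv.bind(("", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            received, start = 0, time.time()
            while received < TOTAL:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            print(f"{received / (1024 * 1024) / (time.time() - start):.1f} MB/s")
        else:
            cli = socket.create_connection((sys.argv[1], PORT))
            payload = b"\0" * CHUNK
            for _ in range(TOTAL // CHUNK):
                cli.sendall(payload)
            cli.close()

    Run once with the USB adapter and once with the PCI-E card on the same cable and switch, and the difference you see is the USB 2.0 overhead rather than anything on the disk or filesystem side.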

    Read the article

  • What a Performance! MySQL 5.5 and InnoDB 1.1 running on Oracle Linux

    - by zeynep.koch(at)oracle.com
    The MySQL performance team in Oracle has recently completed a series of benchmarks comparing Read / Write and Read-Only performance of MySQL 5.5 with the InnoDB and MyISAM storage engines. Compared to MyISAM, InnoDB delivered 35x higher throughput on the Read / Write test and 5x higher throughput on the Read-Only test, with 90% scalability across 36 CPU cores. A full analysis of results and MySQL configuration parameters is documented in a new whitepaper. In addition to the benchmark, the new whitepaper also includes: a discussion of the use-cases for each storage engine; best practices for users considering the migration of existing applications from MyISAM to InnoDB; and a summary of the performance and scalability enhancements introduced with MySQL 5.5 and InnoDB 1.1. The benchmark itself was based on Sysbench, running on AMD Opteron "Magny-Cours" processors, and Oracle Linux with the Unbreakable Enterprise Kernel. You can learn more about MySQL 5.5 and InnoDB 1.1 from here and download it from here to test whether you witness performance gains in your real-world applications. By Mat Keep

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
    Author: Moazzam Chaudry. Continuing from Friday's blog on the 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details around the customers. It was a tremendous honor to be in a single room of winners. We only wish we could have had more time to share stories from all the winners. We received great insight from all the innovative solutions that our customers deploy and would like to share them broadly, so that others can benefit from best practices. There was a customer panel session joined by Ingersoll Rand, Nike and Motability, and here is what was discussed: Barry Bonar, Enterprise Architect from Ingersoll Rand, shared details around their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solution enabled their business transformation to increase decision-making speed and efficiency, resulting in 40% reduced IT spend, 41X faster response time and huge cost savings. Ashok Balakrishnan, Architect from Nike, shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities. This helps them compete, collaborate and compare athletic data over time. Lastly, Ashley Doodly, Head of IT from Motability, shared details around their solution comprised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. 10's of minutes and a tremendous improvement in throughput of up to 50%. This year's winners by category are:
    Oracle Exalogic Customer Results using Fusion Middleware: Netshoes (ATG on Exalogic: 6X reduced H/W footprint, 6.2X increased throughput and 3 weeks time to market); Claro, part of America Movil (mission-critical Java application on Exalogic with 35X faster Java response time, 5X throughput); Underwriters Laboratories (Exalogic as an apps consolidation platform to power tremendous growth); Ingersoll Rand (EBS on Exalogic: up to 40% reduction in overall IT budget, 3X reduced footprint)
    Oracle Cloud Application Foundation Customer Results using Fusion Middleware: Mazda Motor Corporation (Tuxedo ART Batch runtime environment to migrate their batch apps to a new open environment and reduce mainframe cost); HOTELBEDS Technology (open source to WebLogic transformation); Globalia Corporation (introduced Oracle Coherence to fully reengineer the DTH system and provide multiple business and technical benefits); Nike (Nike+, the digital sports platform, has 8M users and is expecting a 5X increase in users, many of whom will carry multiple devices that frequently sync data with the Digital Sport platform); Comcast Corporation (the solution is expected to increase availability, continuity and performance, and to simplify and make the code at the application layer more flexible)
    Oracle SOA and Oracle BPM Customer Results using Fusion Middleware: NTT Docomo (network traffic solution based on Oracle event processing and Coherence - massive in scale: 12M users (50M in future), 800,000 events/sec); Schneider National, Inc. (SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM & EBS; the platform runs 60M transactions/day and 50 million composite SOA instances per day across 10G and 11G); Amadeus (Oracle BPM solution: business rules and processes vary across local (80), regional (~10) and corporate approval processes, with up to 10 levels of approval; plans to deploy across 20+ markets); Navitar (SOA solution integrates a fully non-Oracle legacy application/ERP environment using Oracle's SOA Suite and Oracle AIA Foundation Pack); Motability (uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process)
    Oracle WebCenter Customer Results using Fusion Middleware: News Limited (single platform running websites for 50% of Australia's newspapers); University of Louisville ("Facebook for Medicine": Oracle WebCenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues; expecting an annualized ROI of 277%); China Mobile Jiangsu (company portal (25k users) to drive collaboration and productivity); Life Technologies (portal for remotely monitoring and repairing biotech instruments); LA Dept. of Water & Power (Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6 million users)
    Oracle Identity Management Customer Results using Fusion Middleware: Education Testing Service (Identity Management platform for provisioning and SSO of 6 million GRE, GMAT and TOEFL customers); Avea (Oracle Identity Manager allowing call center personnel to quickly change identity profiles to handle varying call loads based on a user self-service interface; decreased admin cost by 30%)
    Oracle Data Integration Customer Results using Fusion Middleware: Raymond James (near real-time integration for improved systems (throughput and performance) and enhanced operational flexibility in a 24x7 environment); Wm Morrison Supermarkets (electronic point-of-sale integration handling over 80 million transactions a day in near real time (15 min intervals))
    Oracle Application Development Framework and Oracle Fusion Development Customer Results using Fusion Middleware: Qualcomm Incorporated (solution providing immediate business value, enabling a self-service model necessary for growing the new customer base, an increase in customer satisfaction and reduced time-to-deliver); Micros Systems, Inc. (ADF, SOA Suite and WebCenter enable services that include managing distribution of hotel room availability and rates to channels such as the hotel web site, Expedia, etc.); Marfin Egnatia Bank (a new web 2.0 UI provides a much richer experience through the ADF solution, with the end result being a boost in end-user productivity)
    Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics) Customer Results using Fusion Middleware: INC Research (self-service customer portal delivering 5-10% of the overall revenue, expected to grow fast with the BI solution); Experian (reduction in time to complete the financial close process); Hologic Inc (solution saving months of decision-making uncertainty!)
    We look forward to seeing many more innovative nominations. The nomination process for 2013 begins in April 2013. Additional information: Blog: Oracle WebCenter Award Winners; Blog: Oracle Identity Management Winners; Blog: Oracle Exalogic Winners; Blog: SOA, BPM and Data Integration will feature award winners in its respective areas this week. Subscribe to our regular Fusion Middleware Newsletter. Follow us on Twitter and Facebook.

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here:
    New Self-Healing Replication Clusters - The 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem-solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.
    New Replication Administration and Failover Utilities - As mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or auto failover to the most up-to-date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering-related blogs, are discussed in detail in the DevZone article noted above.
    Better Query Optimization and Throughput - The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements:
    Subquery Optimizations - Subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work.
    Optimizer now uses CURRENT_TIMESTAMP as default for DATETIME columns - For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default.
    Optimizations for range-based queries - The Optimizer now uses ready statistics vs index-based scans for queries with multiple range values.
    Optimizations for queries using filesort and ORDER BY - The optimization criteria/decision on the execution method is now made at the optimization vs parsing stage.
    Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption.
    You can learn the details about these new features as well as all of the Optimizer-based improvements in MySQL 5.6 by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here.
    Also New in MySQL Labs - As has become our tradition when announcing DMRs, we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs.
    Today is no exception, as we are also releasing the following to Labs for you to download, try and let us know your thoughts on where we need to improve:
    InnoDB Online Operations - MySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the grainy details by following John Russell's blog.
    InnoDB data access via Memcached API ("NotOnlySQL") - Improved refresh of an earlier feature release. Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.
    Improved Transactional Performance and Scale - The InnoDB Engineering team has once again under-promised and over-delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 labs release. InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing a 2x throughput improvement for read-only activity and a 6x throughput improvement for SELECT range; Read/Write benchmarks are in progress. More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.
    MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!
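    As a small taste of the new Optimizer output mentioned above, the query plan can now be requested in JSON; here is a hedged Python sketch using the mysql-connector-python driver, where the connection details and the orders table are placeholders rather than anything from the post:

        # Fetch a query plan in the JSON format introduced with MySQL 5.6
        # (pip install mysql-connector-python). Credentials and table are placeholders.
        import json
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="secret", database="test")
        cur = conn.cursor()
        cur.execute("EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE total > 100")
        plan = json.loads(cur.fetchone()[0])    # single row, single column of JSON text
        print(json.dumps(plan, indent=2))
        cur.close()
        conn.close()

    The hierarchical JSON form is much easier to feed into tooling than the classic tabular EXPLAIN output, which is the point the release notes make.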

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved by using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).
    Performance Landscape - Oracle OLAP Perf Version 2 Benchmark, 4 billion fact table rows:
    System: SPARC T4-4 | Queries/hour: 430,000 | Users*: 7,300 | Average response time: 0.85 sec | Median response time: 0.43 sec
    * Users - the supported number of users with a given think time of 60 seconds
    Configuration Summary and Results
    Hardware Configuration: SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 1 TB memory. Data storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs). Redo storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD).
    Software Configuration: Oracle Solaris 11 11/11, Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option.
    Benchmark Description
    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced Data Warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010) followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.) with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1. The primary difference is the type of queries along with the query mix.
    Key Points and Best Practices
    Since typical BI users are often likely to issue similar queries, with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide for additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out of the box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time assuming some non-zero think time.
    The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds and tp to 119.44 queries/sec (430,000 queries/hour), the above formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance" cited below.
    See Also: Quantitative System Performance - Computer System Analysis Using Queueing Network Models, Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik; Oracle Database 11g - Oracle OLAP (OTN); SPARC T4-4 Server (OTN); Oracle Solaris (OTN); Oracle Database 11g Release 2 (OTN)
    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.
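    The 7,300-user figure drops straight out of the response-time law quoted above; a few lines of Python reproduce it:

        # Response-time law: N = (rt + tt) * tp
        rt = 0.85                  # average response time, seconds
        tt = 60.0                  # think time, seconds
        tp = 430_000 / 3600        # throughput in queries per second (~119.44)

        n_users = (rt + tt) * tp
        print(round(n_users))      # ~7268, reported as roughly 7,300 supported users

    The same three lines make it easy to see how sensitive the supported-user count is to think time: with zero think time the formula collapses to N = rt * tp, i.e. only about 100 outstanding requests at any instant.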

    Read the article

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind, correct me if I'm wrong, a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage collection optimization. Also there would be more time before TRIM became necessary. Wikipedia remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also many SSDs contain internal caches of some sort, and presumably those caches are larger for correspondingly large SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high quality chips, controllers, etc.? Are there any resources (webpages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant for what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have been recently exploring the Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB where I wanted to look at Elasticity and Scalability concepts. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer the following simple questions: What is Elasticity? What is Scalability? How do ACID properties vary from NOSQL concepts? What are the prevailing problems in the current database system architectures? Why is NuoDB an innovative and welcome change in the database paradigm?
    Elasticity
    This word's original form is used in many different ways, and honestly it does do a decent job in holding things together over the years as a person grows and contracts. Within the tech world, and specifically related to software systems (database, application servers), it has come to mean a few things - allow stretching of resources without reaching the breaking point (on demand). What are resources in this context? Resources are the usual suspects - RAM/CPU/IO/Bandwidth in the form of a container (a process or bunch of processes combined as modules). When it is about increasing resources, the simplest idea which comes to mind is the addition of another container. Another container means adding a brand new physical node. When it is about adding a new node there are two questions which come to mind. 1) Can we add another node to our software system? 2) If yes, does adding a new node cause downtime for the system? Let us assume we have added a new node; let us see what the new needs of the system are when a new node is added: balancing incoming requests to multiple nodes; synchronization of a shared state across multiple nodes; identification of "downstate" and resolution action to bring it to "upstate". Well, adding a new node has its advantages as well. Here are a few of the positive points: throughput can increase nearly linearly as nodes are added to the system; response times of the application will improve as interactions between the layers improve. Now, let us put the above concepts in the perspective of a database. When we mention the term "running out of resources" or "application is bound to resources", the resources can be CPU, Memory or Bandwidth. The regular approach to "gain scalability" in the database is to look around for bottlenecks and increase the bottlenecked resource. When we have memory as a bottleneck we look at the data buffers, locks, query plans or indexes. After a point even this is not enough, as there needs to be an efficient way of managing such a large workload on a "single machine" across a memory- and CPU-bound (right kind of scheduling) workload. We next move on to either read/write separation of the workload or functionality-based sharding so that we still have control of the individual parts. But this requires lots of planning and change in client systems in terms of knowing where to go/update/read, and for reporting applications to "aggregate the data" in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without us getting into managing, monitoring and distributing the workload.
    Scalability
    In the context of databases/applications, scalability means three main things: Ability to handle normal loads without pressure E.g.
X users at the Y utilization of resources (CPU, Memory, Bandwidth) on the Z kind of hardware (4 processor, 32 GB machine with 15000 RPM SATA drives and 1 GHz Network switch) with T throughput Ability to scale up to expected peak load which is greater than normal load with acceptable response times Ability to provide acceptable response times across the system E.g. Response time in S milliseconds (or agreed upon unit of measure) – 90% of the time The Issue – Need of Scale In normal cases one can plan for the load testing to test out normal, peak, and stress scenarios to ensure specific hardware meets the needs. With help from Hardware and Software partners and best practices, bottlenecks can be identified and requisite resources added to the system. Unfortunately this vertical scale is expensive and difficult to achieve and most of the operational people need the ability to scale horizontally. This helps in getting better throughput as there are physical limits in terms of adding resources (Memory, CPU, Bandwidth and Storage) indefinitely. Today we have different options to achieve scalability: Read & Write Separation The idea here is to do actual writes to one store and configure slaves receiving the latest data with acceptable delays. Slaves can be used for balancing out reads. We can also explore functional separation or sharing as well. We can separate data operations by a specific identifier (e.g. region, year, month) and consolidate it for reporting purposes. For functional separation the major disadvantage is when schema changes or workload pattern changes. As the requirement grows one still needs to deal with scale need in manual ways by providing an abstraction in the middle tier code. Using NOSQL solutions The idea is to flatten out the structures in general to keep all values which are retrieved together at the same store and provide flexible schema. The issue with the stores is that they are compromising on mostly consistency (no ACID guarantees) and one has to use NON-SQL dialect to work with the store. The other major issue is about education with NOSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve in simple SQL manner and learn other skill sets? Or for that matter give up on ACID guarantee and start dealing with consistency issues? Hybrid Deployment – Mac, Linux, Cloud, and Windows One of the challenges today that we see across On-premise vs Cloud infrastructure is a difference in abilities. Take for example SQL Azure – it is wonderful in its concepts of throttling (as it is shared deployment) of resources and ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise of the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today’s world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues. An Ideal Database Ability List A system which allows a linear scale of the system (increase in throughput with reasonable response time) with the addition of resources A system which does not compromise on the ACID guarantees and require developers to learn new paradigms A system which does not force fit a new way interacting with database by learning Non-SQL dialect A system which does not force fit its mechanisms for providing availability across its various modules. 
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that Hard Disk Drive storage is significantly cheaper per GiByte than Solid State Devices – this is wholly inaccurate within the database space. People need to look at the cost of the complete solution and not just a single component part in isolation to what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5” SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012); I’ve not included FusionIO ioDrive because there is no public pricing available for it – something I never understand and personally when companies do this I immediately think what are they hiding, luckily in FusionIO’s case the product is proven though is expensive compared to OCZ enterprise offerings. On the face of it the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies – do not be fooled by this bullshit people! What is the requirement? The requirement is the database will have a static size of 400GB kept static through archiving so growth and trim will balance the database size, the client requires resilience, there will be several hundred call centre staff querying the database where queries will read a small amount of data but there will be no hot spot in the data so the randomness will come across the entire 400GB of the database, estimates predict that the IOps required will be approximately 4,000IOps at peak times, because it’s a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle – space required, estimated IOps and maximum latency per IO. Something to consider with regard SQL Server; write activity requires synchronous IO to the storage media specifically the transaction log; that means the write thread will wait until the IO is completed and hardened off until the thread can continue execution, the requirement has stated that 30% of the system activity will be write so we can expect a high amount of synchronous activity. The hardware solution needs to be defined; two possible solutions: hard disk or solid state based; the real question now is how many hard disks are required to achieve the IO throughput, the latency and resilience, ditto for the solid state. Hard Drive solution On a test on an HP DL380, P410i controller using IOMeter against a single 15Krpm 146GB SAS drive, the throughput given on a transfer size of 8KiB against a 40GiB file on a freshly formatted disk where the partition is the only partition on the disk thus the 40GiB file is on the outer edge of the drive so more sectors can be read before head movement is required: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk but the test file was 130GiB: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 528 IOps at an average latency of 217.49ms (4 MiB/s). 
    From the result it is clear random performance gets worse as the disk fills up - I'm currently writing an article on short stroking which will cover this in detail. Given the workload is random in nature, looking at the random performance of the single drive when only 40 GiB of the 146 GB is used gives near the IOps required, but the latency is way out. Luckily I have tested 6 x 146GB SAS 15Krpm drives in a RAID 0 using the same test methodology; for the same test above on a 130 GiB file, the performance boost is near linear for each drive added - throughput goes up by 5 MiB/sec, IOps by 700 IOps and latency reduces by nearly 50% per drive added (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives, 130 / 1, 130 / 2, 130 / 3 etc., so implicit short stroking is occurring because there is less file on each drive and so less head movement is required. The best latency is still 30 ms but we have the IOps required now - but that's on a 130GiB file and not the 400GiB we need. Some reality check here: a) the drive randomness is more likely to be 50/50 and not a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data the worse the effect. For argument's sake let us assume that for the given workload we need 8 disks to do the job; for resilience reasons we will need 16, because we need to RAID 1+0 them in order to get the throughput and the resilience - RAID 5 would degrade performance. Cost for hard drives: 16 x £239.60 = £3,833.60. For the hard drives we will need disk controllers and a separate external disk array, because the likelihood is that the server itself won't take the drives; a quick spec off DELL for a PowerVault MD1220, which gives the dual pathing, with 16 x 146GB 15Krpm 2.5" disks is priced at £7,438.00 - note it's probably more once we add two controller cards to sit in the server, racking etc. Minimum cost taking the DELL quote as an example is therefore: {Cost of Hardware} / {Storage Required} = £7,438.00 / 400 = £18.59 per GB. £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable), giving an effective usable storage availability of 1168GB, but the actual storage requirement is only 400 and the extra disks have had to be purchased to get the IOps up.
    Solid State Drive solution
    A single card significantly exceeds the IOps and latency required; for resilience two will be required. (£2,316.54 * 2) / 400 = £11.58 per GB. With the SSD solution only two PCIe sockets are required - no external disk units, no additional controllers, no redundant controllers etc.
    Conclusion
    I hope by showing you an example that the myth that hard disk drives are cheaper per GiB than Solid State has now been dispelled - £11.58 per GB for SSD compared to £18.59 for Hard Disk. I've not even touched on the running costs; compare the costs of running 18 hard disks, that's a lot of heat and power compared to two PCIe cards! Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :) I'll also deal with the myth that SSD's wear out at a later date as well - that's just way overdone still; yes, 5 years ago, but now - no.
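    The per-GB arithmetic is easy to replay for other quotes; for example, in Python, using the figures from this post:

        # Cost per usable GB for the two solutions discussed above (figures from the post).
        hdd_array_cost = 7438.00       # DELL PowerVault quote, 16 x 146GB 15Krpm disks, GBP
        ssd_cost = 2 * 2316.54         # two OCZ 600GB Z-Drive R4 cards for resilience, GBP
        required_gb = 400              # the database's static size

        print(f"HDD: GBP {hdd_array_cost / required_gb:.2f} per GB")   # ~18.59
        print(f"SSD: GBP {ssd_cost / required_gb:.2f} per GB")         # ~11.58

    Dividing by the storage actually required (400GB) rather than the raw capacity purchased is the whole point of the comparison: the extra spindles are bought for IOps and resilience, not for space.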

    Read the article

  • The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit/s), thus limiting their performance

    - by tonyrogerson
    When comparing SSD against hard drive performance it really makes me cross when folk think comparing an array of SSDs running on 3Gbit/sec to hard drives running on 6Gbit/sec is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with Solid State Drives, they compare four SSD drives connected at 3Gbit/sec against ten 10Krpm drives connected at 6Gbit [Tony slaps forehead while shouting DOH!]. It is true that in the case of hard drives it probably doesn't make much difference, 3Gbit or 6Gbit, because SAS and SATA are both end-to-end protocols rather than a shared bus architecture like SCSI, so the hard drive doesn't share bandwidth and probably can't get near the 600MiBytes/second throughput that 6Gbit gives unless you are doing contiguous reads. In my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16 with a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB, for a 100% sequential read test) I only get 347MiBytes per second sustained throughput at an average latency of 2.87ms per IO, equating to 44.5K IOps; ok, if that was 3Gbit it would be less - around 280MiBytes per second. Oh, but wait a minute [...fingers tap desk]. You'll struggle to find in the commodity space an SSD that doesn't have the SATA 3 (6Gbit) interface. SSDs are fast - not only low latency and high IOps, but they also offer a very large sustained transfer rate. Consider the OCZ Agility 3: it so happens that in my masters dissertation I did the same test but on a different box, and I got 374MiBytes per second at an average latency of 2.67ms per IO, equating to 47.9K IOps - the cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive sat in a box connected with SATA 2 (3Gbit) would only yield around 280MiBytes per second, thus losing almost 100MiBytes per second of throughput and a ton of IOps too. So why the hell are "enterprise" vendors still only connecting SSDs at 3Gbit? Well, my conspiracy theory is that they have no interest in you moving to SSD because they'll lose so much money; the argument that they use SATA 2 doesn't wash, SATA 3 has been out for some time now and all the commodity stuff you buy uses it. Consider the cost, not in terms of price per GB but price per IOps: SSDs absolutely thrash hard drives on that. It was true that the opposite also held - that hard drives thrashed SSDs on price per GB - but is that true now? I'm not so sure: a 300GByte 2.5" 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms), which equates to £1.09 per GB, compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops), which equates to £0.88 per GB. Ok, I compared an "enterprise" hard drive with a "commodity" SSD; ok, so things get a little more complicated here - most "enterprise" SSDs are SLC and most commodity ones are MLC, and SLC gives more performance and wear; I'll talk about that another day. For now though, don't get sucked in by vendor marketing: SATA 2 (3Gbit) just doesn't cut it, SSDs need 6Gbit to breathe, and even that SSDs are pushing.
Alas, SSD’s are connected using SATA so all the controllers I’ve seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I’ll let you decide on that one.
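    The two ratios the post leans on are easy to recompute; here is a small Python sketch with the figures quoted above (the roughly 280 MiB/s ceiling behind a 3Gbit/s link is the author's own estimate):

        # Figures quoted in the post above.
        ssd_price, ssd_gb = 422.10, 480      # 480GB OCZ Agility 3, GBP ex VAT
        hdd_price, hdd_gb = 329.76, 300      # 300GB 15Krpm SAS Savvio, GBP ex VAT
        ssd_rate_sata3 = 374.0               # MiB/s measured on a 6Gbit link
        sata2_ceiling = 280.0                # rough MiB/s cap behind a 3Gbit link

        print(f"SSD: {ssd_price / ssd_gb:.2f} GBP/GB, HDD: {hdd_price / hdd_gb:.2f} GBP/GB")
        print(f"throughput lost on SATA 2: {ssd_rate_sata3 - sata2_ceiling:.0f} MiB/s "
              f"({(1 - sata2_ceiling / ssd_rate_sata3) * 100:.0f}%)")

    In other words, wiring a SATA 3 SSD to a SATA 2 controller throws away roughly a quarter of its sustained bandwidth before any workload even starts, which is the con the post is describing.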

    Read the article

  • Raid0 performance degradation?

    - by davy8
    Not sure if this belongs here or on SuperUser, feel free to move as appropriate. I've noticed the performance on my RAID0 setup seems to have degraded over the past months. The throughput is fine, but I think the random access time has increased or something. In use I generally see about 1-5 MB/sec when loading stuff in Visual Studio and other apps, and it doesn't seem like the CPU is bottlenecking as the CPU utilization is pretty low. I don't recall what the access time used to be, but HD Tune is reporting 12.6ms. Read throughput is showing as averaging about 125MB/sec, so it should be great for sequential reads. I defrag daily and it shows fragmentation levels are low, so that shouldn't be an issue. Additional info: Windows 7 x64, Intel RAID controller on mobo, WD Black 500GB (I think 32MB cache) x2.

    Read the article

  • Oracle VM VirtualBox 4.0 Now Available

    - by Paulo Folgado
    Delivering on Oracle's commitment to open source, Oracle VM VirtualBox 4.0 is now available, further enhancing the popular, open source, cross-platform virtualization software. "Oracle VM VirtualBox 4.0 is the third major product release in just over a year, and adds to the many new product releases across the Oracle Virtualization product line, illustrating the investment and importance that Oracle places on providing a comprehensive desktop to datacenter virtualization solution," says Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, Oracle. "With an improved user interface and added virtual hardware support, customers will find Oracle VM VirtualBox 4.0 provides a richer user experience." Part of Oracle's comprehensive portfolio of virtualization solutions, Oracle VM VirtualBox enables desktop or laptop computers to run multiple guest operating systems simultaneously, allowing users to get the most flexibility and utilization out of their PCs, and supports a variety of host operating systems, including Windows, Mac OS X, most popular flavors of Linux (including Oracle Linux), and Oracle Solaris. Oracle VM VirtualBox 4.0 delivers increased capacity and throughput to handle greater workloads, enhanced virtual appliance capabilities, and significant usability improvements. Support for the latest in virtual hardware, including chipsets supporting PCI Express, further extends the value delivered to customers, partners, and developers. Highlights of Oracle VM VirtualBox 4.0 include:
    New Open Architecture - Oracle and community developers can now create extensions that customize Oracle VM VirtualBox and add features not previously available.
    Enhanced Usability - A new scalable display mode enables users to view more virtual displays on their existing monitors. Improvements to VM management, including visual VM previews, an optional attributes display, and easy launch shortcut creation, enable administrators and power users to customize the interface to make it as simple or as comprehensive as required.
    Increased Capacity and Throughput - A new asynchronous I/O model for networked (iSCSI) and local storage delivers significant storage-related performance improvements, while new optimizations allow larger datacenter-class workloads, such as Oracle's middleware, to be run on 32-bit Windows hosts for testing and demo purposes.
    Powerful Virtual Appliance Sharing Capabilities - Enhanced support for standards-compliant OVF appliances and added support for OVA format descriptors. All information about a VM may be stored in a single folder to facilitate easier direct sharing among VMs.
    Support for Latest Virtual Hardware - A new, modern virtual chipset supporting PCI Express and other hardware enhancements, including high-definition audio devices, helps ensure support for the most demanding virtual workloads.

    Read the article

  • optimizing file share performance on Win2k8?

    - by Kirk Marple
    We have a case where we're accessing a RAID array (drive E:) on a Windows Server 2008 SP2 x86 box. (Recently installed, nothing other than SQL Server 2005 on the server.) In one scenario, when directly accessing it (E:\folder\file.xxx) we get 45MBps throughput to a video file. If we access the same file on the same array, but through UNC path (\server\folder\file.xxx) we get about 23MBps throughput with the exact same test. Obviously the second test is going through more layers of the stack, but that's a major performance hit. What tuning should we be looking at for making the UNC path be closer in performance to the direct access case? Thanks, Kirk (corrected: it is CIFS not SMB, but generalized title to 'file share'.) (additional info: this happens during the read from a single file, not an issue across multiple connections. the file is on the local machine, but exposed via file share. so client and file server are both same Windows 2008 server.)
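    One way to put numbers on the local-versus-UNC gap on the server itself is to time a large sequential read of the same file through both paths; a rough Python sketch follows, where the paths are placeholders for the layout described above and the file should be larger than RAM (or caches flushed between runs) for a fair comparison:

        # Time a sequential read of the same file via the local drive letter and via UNC.
        # Paths are placeholders for the E: volume and the share described above.
        import time

        def read_throughput(path, block=1024 * 1024):
            read_bytes = 0
            start = time.time()
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(block)
                    if not chunk:
                        break
                    read_bytes += len(chunk)
            return read_bytes / (1024 * 1024) / (time.time() - start)

        print("local:", round(read_throughput(r"E:\folder\file.xxx"), 1), "MB/s")
        print("UNC:  ", round(read_throughput(r"\\server\folder\file.xxx"), 1), "MB/s")

    Varying the block size in the same test also shows whether the UNC path is suffering from small-read chattiness in the file-share layer rather than a raw bandwidth limit.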

    Read the article

  • Are FC and SAS DAS devices standard enough?

    - by user222182
    Before I ask my questions, here is some background info that may or may not be useful: for the first time I find myself needing a DAS solution. My priority is data throughput in a single direction. I can write large blocks, and I don't need to read at the same time. The server (the data-producing device) is not really a typical server; it is a very powerful single-board computer. As such, I have limited options when it comes to the add-in cards I can install, since it must use the fairly uncommon XMC interface. Currently I believe I am limited to PCIe x8 gen 1, which means that the likely bottleneck for me will be this 16 Gbps connection. XMC boards I have found so far offer the following connections: a) dual 10GbE Ethernet controller, total throughput 20 Gbps; b) dual quad-lane SAS 2.0 connectors (SFF-8XXX) HBA (no RAID), total throughput 48 Gbps; c) dual 8 Gb FC HBA (no RAID), total throughput 16 Gbps. My questions for you guys are: 1) Are SAS and/or FC, and by extension their HBAs, standard enough that I could purchase a Dell or Aberdeen storage server with a RAID controller that has external SAS or FC ports and expect that I can connect it to my SAS or FC HBA and be presented with a single volume (if I so configured the storage server), all without having to check for HBA compatibility? 2) On a device like a Dell PowerVault (either DAS or NAS), is there an OS on it to concern myself with, or is it meant to be remotely managed? Is there a local interface in case I can't manage it remotely (i.e. if my single-board computer uses an OS not supported by Dell OpenManage)? Would this be true of nearly any device which calls itself a DAS? 3) If I purchased some sort of Supermicro storage chassis and installed a RAID controller with external connections, is there a nice lightweight OS I can run just for management of the controller? Would I even need an OS, since the RAID card would be configured pre-boot anyway? 4) It is much easier to buy XMC-based 10-gigabit Ethernet cards (generally dual port). In what ways would I be getting into trouble by using iSCSI as a DAS over direct cabling with SFP+ cables? Thanks in advance
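    As a sanity check on that bottleneck estimate: PCIe 1.x signals at 2.5 GT/s per lane with 8b/10b encoding, so an x8 link tops out around 16 Gbps of usable bandwidth regardless of which card is fitted. A rough sketch of the arithmetic (the card figures are the nominal totals quoted above, not measured numbers):

    // pcie-budget.js - compare nominal card throughput with the PCIe gen 1 x8 host link
    const lanes = 8;
    const gtPerLane = 2.5;       // PCIe 1.x: 2.5 GT/s per lane
    const encoding = 8 / 10;     // 8b/10b line coding overhead
    const hostGbps = lanes * gtPerLane * encoding;  // = 16 Gbps usable

    const cards = { 'dual 10GbE': 20, 'dual quad-lane SAS 2.0': 48, 'dual 8 Gb FC': 16 };
    for (const [name, gbps] of Object.entries(cards)) {
      console.log(`${name}: card ${gbps} Gbps, effective ceiling ${Math.min(gbps, hostGbps)} Gbps`);
    }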

    Read the article

  • zeroing a disk with dd vs Disk Utility

    - by jdizzle
    I'm attempting to zero a disk on my Mac OS X machine. I want complete zeros and an unformatted disk, so I think of dd. Unfortunately, the maximum throughput I've managed to get out of dd is 7 MB/s. Just for grins I tried Disk Utility, and it gets a throughput of 19 MB/s. What gives? I've tried changing the bs option on dd to all sorts of values, but it still hovers around 7 MB/s. Why is Disk Utility so much faster?
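    Two things usually explain the difference on Mac OS X: dd defaults to 512-byte writes, and /dev/diskN goes through the buffered block device while the raw /dev/rdiskN node does not. Pointing dd at the rdisk node with a block size in the megabyte range typically closes most of the gap. The same idea as a Node.js sketch, assuming a hypothetical device name; this is destructive and needs sudo, so double-check the target first:

    // zero-raw-disk.js - DESTRUCTIVE: writes zeros to a raw disk device in 1 MiB chunks
    const fs = require('fs');

    const device = '/dev/rdisk2';              // placeholder - verify with `diskutil list` first
    const block = Buffer.alloc(1024 * 1024);   // 1 MiB of zeros per write

    const fd = fs.openSync(device, 'w');
    let written = 0;
    try {
      for (;;) {
        written += fs.writeSync(fd, block);    // large sequential writes to the raw node
      }
    } catch (err) {
      // hitting the end of the device surfaces as an error (e.g. ENOSPC/EIO)
      console.log(`wrote ${(written / 1024 ** 3).toFixed(2)} GiB before stopping (${err.code})`);
    } finally {
      fs.closeSync(fd);
    }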

    Read the article

  • Short Look at Frends Helium 2.0 Beta

    - by mipsen
    Pekka from Frends gave me the opportunity to have a look at the beta version of their Helium 2.0. For all of you who don't know the tool: Helium is a web application that collects management data from BizTalk which you usually have to tediously collect yourself, such as performance data (throttling, throughput (like completed Orchestrations/hour), other performance counters) and data about the state of BTS applications, and presents the data in clearly structured diagrams and overviews which (often) even allow drill-down. Installing Helium 2 was quite easy. It comes as an msi file which creates the web application on IIS. Additionally, a Windows service is deployed which acts as an agent for sending alert e-mails and collecting data. What I missed during installation was a link to the created web app at the end, but the link can be found under Program Files/Frends... On the start page Helium shows two sections: an overview of the BTS apps (Running? Suspended messages?) and basic performance data. You can drill down further into the BTS apps to see ReceiveLocations, Orchestrations and SendPorts. And then a very nice feature can be activated: you can set a monitor on each of the ports and/or orchestrations and have an e-mail sent when a threshold of executions per day or hour is not met. I think this is a great idea. The following screenshot shows the configuration of this option. Conclusion: Helium is a useful monitoring tool for BTS operations that might save a lot of time otherwise spent collecting data, writing a tool yourself, or documenting for the operations staff where to find the data. Pros: simple installation; the most important data for BTS operations in one place; monitors with alerts if throughput is not met; nice web UI; reasonable price. Cons: additional performance counters cannot be added. I am not sure when the final version is to be shipped, but you will be able to see that on Frends' homepage soon, I guess... A trial version is available here

    Read the article

  • ArchBeat Link-o-Rama for 11/18/2011

    - by Bob Rhubart
    IT executives taking lead role with both private and public cloud projects: survey | Joe McKendrick - "The survey, conducted among members of the Independent Oracle Users Group, found that both private and public cloud adoption are up—30% of respondents report having limited-to-large-scale private clouds, up from 24% only a year ago. Another 25% are either piloting or considering private cloud projects. Public cloud services are also being adopted for their enterprises by more than one out of five respondents." - Joe McKendrick
    SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles - This week on the Architect Home Page on OTN.
    OIM 11g OID (LDAP) Groups Request-Based Provisioning with custom approval – Part I | Alex Lopez - In part one of a two-part blog post, Alex Lopez illustrates "an implementation of a Custom Approval process and a Custom UI based on ADF to request entitlements for users which in turn will be converted to Group memberships in OID."
    ArchBeat Podcast Information Integration - Part 3/3 - "Oracle Information Integration, Migration, and Consolidation" author Jason Williamson, co-author Tom Laszeski, and book contributor Marc Hebert talk about upcoming projects and about what they've learned in writing their book.
    InfoQ: Enterprise Shared Services and the Cloud | Ganesh Prasad - As an industry, we have converged onto a standard three-layered service model (IaaS, PaaS, SaaS) to describe cloud computing, with each layer defined in terms of the operational control capabilities it offers. This is unlike enterprise shared services, which have unique characteristics around ownership, funding and operations, and they span SaaS and PaaS layers. Ganesh Prasad explores the differences.
    Stress Testing Oracle ADF BC Applications - Do Connection Pooling and TXN Disconnect Level - Oracle ACE Director Andrejus Baranovskis describes "how jbo.doconnectionpooling = true and jbo.txn.disconnect_level = 1 properties affect ADF application performance."
    Exploring TCP throughput with DTrace | Alan Maguire - "According to the theory," says Maguire, "when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput."

    Read the article

  • How to select a server that supports Windows scheduled file IO

    - by Kristof Verbiest
    Background: I am developing an application that needs to read data from disk with a fairly consistent throughput. It is important that this throughput is not influenced by other actions that happen on the disk (e.g. by other processes). For this purpose, I was hoping to use the 'Scheduled File I/O' feature in Windows (through the GetFileBandwidthReservation and SetFileBandwidthReservation functions). However, this StackOverflow question has taught me that this feature is only available if the device driver supports it. Currently I have no computer at my disposal that seems to support this feature (I have an HP ProLiant server and a Dell Precision workstation). Question: If I were to order a new server, how can I know beforehand whether this feature will be supported by the device driver? How 'upscale' does the server have to be? Has anybody used this feature with success and cares to share their experiences?

    Read the article

  • Slow performance over network (Ubuntu)

    - by Filipe Santos
    I set up this Node.js TCP server and tested it with a message flooder, just to see what the server's performance is like. While the message throughput is great if I run the server and the message flooder on the same computer (Ubuntu), the throughput decreases dramatically if I start the server on computer1 (ubuntu1) and the message flooder on computer2 (also Ubuntu). Both PCs are on the same network; in fact, they are directly connected to each other. I started searching the internet for reasons, and I suppose I need to tune TCP on both Ubuntu PCs, but so far I haven't been successful at all. Has anyone experienced such problems, or could someone help me out? Thanks. Here is the flooding code:
    var net = require('net');
    // connect to the server and, once connected, fire off 1000 small writes back to back
    var client = net.createConnection(5000, "10.0.0.2");
    client.addListener("connect", function () {
      for (var i = 0; i < 1000; i++) {
        client.write("message ");
      }
    });
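    Before touching kernel TCP settings, it is worth ruling out Nagle's algorithm and per-write overhead: over loopback the 1000 tiny writes are nearly free, but on a real link they can serialize badly. A hedged variation of the flooder (same hypothetical host and port as above) that disables Nagle via socket.setNoDelay() and coalesces the payload into one large write:

    // flooder-batched.js - same payload, but Nagle disabled and writes coalesced
    var net = require('net');

    var client = net.createConnection(5000, "10.0.0.2");
    client.addListener("connect", function () {
      client.setNoDelay(true);              // send small segments immediately
      var batch = "message ".repeat(1000);  // one large write instead of 1000 tiny ones
      client.write(batch);
      client.end();
    });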

    Read the article

  • How to troubleshoot intermittent and irregular connection errors on a home network (preferably Mac client)

    - by Martin
    Hi, I'm experiencing intermittent connection errors on our home network from two different computers, normally "Connection Reset" when browsing, but also other issues such as very slow throughput. My network setup is approximately as follows: ISP - cable modem - D-Link DIR-655 router - (Ethernet) - Fon router - Mac/Windows laptop. Basically, is there a simple way to monitor the network and detect where the issues are coming from? Right now we don't know if it is the Fon router, the D-Link router, the modem, or the ISP. As the issue is intermittent, is there software that regularly traceroutes a set of destinations and tests, for example, throughput? Something that can help us figure out where in the chain the errors are introduced. The more automatic the network monitor, the better.
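    Short of dedicated monitoring software, a small script that probes each hop on a schedule and logs the results with timestamps will usually show whether failures start at the Fon router, the D-Link, the modem, or beyond. A minimal Node.js sketch, with the hop addresses and interval as placeholders for the actual setup:

    // net-monitor.js - periodic reachability log; hop addresses are placeholders
    const { execFile } = require('child_process');

    const hops = ['192.168.10.1', '192.168.0.1', '8.8.8.8']; // Fon router, D-Link router, external host

    function probe(host) {
      execFile('ping', ['-c', '3', host], (err, stdout) => {
        const stamp = new Date().toISOString();
        if (err) {
          console.log(`${stamp} ${host} UNREACHABLE`);
        } else {
          const summary = stdout.trim().split('\n').pop(); // e.g. "round-trip min/avg/max ..."
          console.log(`${stamp} ${host} ${summary}`);
        }
      });
    }

    hops.forEach(probe);                                 // probe once at startup
    setInterval(() => hops.forEach(probe), 60 * 1000);   // then once a minute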

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >