Search Results

Search found 13341 results on 534 pages for 'obiee performance tuning'.

Page 180/534

  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I'm actually using? And does it matter whether I'm using DirectX or OpenGL? Edit: the final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. I'll only use w*h vertices in indexing, and I know some memory is wasted there, but I want to know whether it also affects FPS. (Note that the mesh is only generated once each time you play the game.)
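
    A minimal C# sketch of the allocation pattern being described (the grid size is a hypothetical value): the vertex buffer holds (w+1)*(h+1) positions while the index buffer references only a w*h subset, so the extra vertices occupy buffer memory and upload bandwidth but are typically never fetched by an indexed draw call.

        using System;
        using System.Collections.Generic;

        // Sketch: allocate (w+1)*(h+1) vertices but build triangle indices that
        // reference only a w*h sub-grid, leaving the outermost row and column of
        // vertices unreferenced (they take up memory, nothing more per draw).
        class GridMeshSketch
        {
            static void Main()
            {
                int w = 64, h = 64;              // hypothetical grid dimensions
                int stride = w + 1;              // allocated vertices per row

                // Flat positions for the full (w+1) x (h+1) grid, row-major.
                var vertices = new float[(w + 1) * (h + 1) * 3];
                for (int y = 0; y <= h; y++)
                    for (int x = 0; x <= w; x++)
                    {
                        int i = (y * stride + x) * 3;
                        vertices[i + 0] = x;     // position.x
                        vertices[i + 1] = 0f;    // position.y (flat grid)
                        vertices[i + 2] = y;     // position.z
                    }

                // Quads over (w-1) x (h-1) cells, so only w*h vertices are indexed.
                var indices = new List<int>((w - 1) * (h - 1) * 6);
                for (int y = 0; y < h - 1; y++)
                    for (int x = 0; x < w - 1; x++)
                    {
                        int i0 = y * stride + x, i1 = i0 + 1;
                        int i2 = i0 + stride,   i3 = i2 + 1;
                        indices.AddRange(new[] { i0, i2, i1, i1, i2, i3 });
                    }

                int referenced = new HashSet<int>(indices).Count;
                Console.WriteLine($"Allocated {vertices.Length / 3} vertices, " +
                                  $"{referenced} referenced by {indices.Count / 3} triangles.");
            }
        }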

    Read the article

  • My Oracle Support

    - by Dongwei Wang
    (The original Chinese text of this post is garbled; only the referenced My Oracle Support notes are recoverable.)
    Note 62143.1 - Troubleshooting: Tuning the Shared Pool and Tuning Library Cache Latch Contention
    Note 376442.1 - How To Collect 10046 Trace (SQL_TRACE) Diagnostics for Performance Issues
    Note 749227.1 - How to Gather Optimizer Statistics on 11g
    Note 1359094.1 - FAQ: How to Use AWR reports to Diagnose Database Performance Issues
    Note 1320966.1 - Things to Consider Before Upgrading to 11.2.0.2 to Avoid Poor Performance or Wrong Results
    Note 1392633.1 - Things to Consider Before Upgrading to 11.2.0.3 to Avoid Poor Performance or Wrong Results

    Read the article

  • Why am I getting such slow file transfer performance?

    - by kingdango
    I'm copying 4 GB from a USB flash drive to my Linux partition. The flash drive is NTFS formatted (I believe; it's Windows-formatted, in any case). The transfer is incredibly slow and frequently blocks the computer, causing lag and hanging applications. My transfer rate is 1.2 MB/sec, and that is the maximum it has hit when I let the File Operations window have focus. Why is this so slow under Ubuntu and significantly faster in Windows 7?

    Read the article

  • When choosing a domain, does including your brand affect SEO performance?

    - by bpeterson76
    I've been asked to build a "landing page" for a local branch of an international corporation. While the corporation has a well-established domain name, the local office wants to use a unique, separate URL that will be easy for them to relay to clients. However, the corporation is considered a category leader, so the local office is also concerned about the importance of carrying over the company's brand to the URL. Questions that have arisen: From an SEO perspective, is there a benefit to including the brand name in the URL? Would it be more beneficial to buy a domain that relates generically to the INDUSTRY as opposed to the specific brand name? Would the benefits of an easy-to-remember, short domain outweigh any SEO benefits that might be gained by a longer, brand-specific domain?

    Read the article

  • Slow in the Application, Fast in SSMS? Understanding Performance Mysteries

    When I read various forums about SQL Server, I frequently see questions from deeply mystified posters. They have identified a slow query or stored procedure in their application. They cull the SQL batch from the application and run it in SQL Server Management Studio (SSMS) to analyse it, only to find that the response is instantaneous. At this point they are inclined to think that SQL Server is all about magic. A similar mystery is when a developer has extracted a query in his stored procedure to run it stand-alone only to find that it runs much faster – or much slower – than inside the procedure.
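
    The article digs into the mechanics; one frequent cause is that SSMS and typical client libraries run with different default SET options (ARITHABORT in particular), so with parameter sniffing each side ends up with its own cached plan. A hedged ADO.NET sketch for timing the same batch under both settings (connection string and procedure name are placeholders):

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        // Sketch: run the same batch with ARITHABORT OFF (the usual client-library
        // default) and ARITHABORT ON (the SSMS default). Because the two settings
        // get separate cached plans, this often reproduces the "slow in the app,
        // fast in SSMS" gap. Connection string and query are placeholders.
        class SetOptionsRepro
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=MyDb;Integrated Security=true"; // placeholder
                const string query   = "EXEC dbo.MySlowProc @p = 42";                     // placeholder

                foreach (var arithabort in new[] { "OFF", "ON" })
                {
                    using (var conn = new SqlConnection(connStr))
                    {
                        conn.Open();
                        using (var set = new SqlCommand("SET ARITHABORT " + arithabort + ";", conn))
                            set.ExecuteNonQuery();

                        var sw = Stopwatch.StartNew();
                        using (var cmd = new SqlCommand(query, conn))
                        using (var reader = cmd.ExecuteReader())
                            while (reader.Read()) { } // drain the result set
                        sw.Stop();

                        Console.WriteLine("ARITHABORT " + arithabort + ": " + sw.ElapsedMilliseconds + " ms");
                    }
                }
            }
        }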

    Read the article

  • Platinum Club (Oracle Solaris/MySQL) seminar report

    - by Urakawa
    (Japanese-language event report; the original text is garbled, so only the recoverable details are kept.) The post reports on a Platinum Club seminar for ORACLE MASTER Platinum holders held on March 4, 2011, with two sessions: "What's New in Solaris 11 Express", covering Oracle Solaris 11 Express features such as network virtualization (Crossbow) and ZFS, and "MySQL Performance Tuning", an introduction to MySQL performance tuning.

    Read the article

  • WebLogic Server performance tuning tips (Japanese-language article) | WebLogic Channel

    - by ???02
    (Japanese-language article; the original text is garbled, so only the recoverable technical details are kept.) The article collects WebLogic Server performance-tuning guidance from sessions presented at Oracle DBA & Developers Days 2011 (November 2011). It walks through tuning at the OS, JVM and WebLogic Server layers, touching on the Listen ports and Muxer socket-reader threads, JDBC connection pool sizing, diagnosis with JRockit Flight Recorder, and sizing of the self-tuning thread pool, whose default maximum (per domain.xsd) is 400. The thread pool bounds can be set either as JVM options:

        -Dweblogic.SelfTuningThreadPoolSizeMin={minimum}
        -Dweblogic.SelfTuningThreadPoolSizeMax={maximum}

    or per server in config.xml:

        <server>
          <name>MANAGED_SRV</name>
          <self-tuning-thread-pool-size-min>{minimum}</self-tuning-thread-pool-size-min>
          <self-tuning-thread-pool-size-max>{maximum}</self-tuning-thread-pool-size-max>
          ...
        </server>

    The article closes by announcing the Oracle WebLogic Server 12c Forum on January 25, 2012, 13:30-17:00.

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6 we continued our development of InnoDB Memcached and completed a few widely desired features that make it competitive in more scenarios. Notably, they are: 1) support for multiple table mappings, 2) a background thread to auto-commit long-running transactions, and 3) enhanced binlog performance. Let's go over each of these features one by one; in the last section, we will go over a couple of internally performed performance tests.

    Support for multiple table mappings. In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users may want to use the InnoDB Memcached feature against different tables, so being able to access different tables at run time, with different mappings for different connections, is a very desirable feature. In this GA release, both are possible. We will discuss the key concepts and the key steps in using this feature.

    1) The "mapping name" in the "get" and "set" commands. In order for InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once a table is registered, InnoDB Memcached will be able to look it up when it is referred to. Each registered table has a unique "registration name" (or mapping name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the registration name in the get or set command, in the form "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the mapping name are separated by a configurable delimiter, "." by default. So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first maps the InnoDB table "test/demo_test" under the mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" under the name "setup_2" and for table "my_database/my_demo" under the name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:

        get @@setup_3.key_a

    (this also outputs the value corresponding to key "key_a"), or simply:

        get @@setup_3

    Once this is done, the connection is switched to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue to issue regular commands such as:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter. For the "." delimiter that separates the mapping name and the key value, we also added a configuration option in the "config_options" system table under the name "table_map_delimiter":

        INSERT INTO config_options VALUES("table_map_delimiter", ".");

    So if users want to use a different delimiter, they can change it in the config_options table.

    3) Default mapping. Once there are multiple table mappings, there must always be a "default" mapping. We decided that if a mapping named "default" exists, it is chosen as the default; otherwise, the first row of the containers table is chosen as the default setting. Please note that user tables can be repeated in the "containers" table (for example, if the user wants to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command. In addition, we extended the protocol and added a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This switches the connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions. This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls; the batch sizes are controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and certain row locks are held for an extended period of time, which may reduce concurrency. To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (5 seconds by default). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions; if a transaction has been idle for longer than the limit and is not being used, it is committed. The limit is configurable through "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when daemon_memcached_w_batch_size and daemon_memcached_r_batch_size are set to large numbers. It also reduces the number of locks held by long-running transactions, further increasing concurrency.

    Enhancement in binlog performance. As you may know, binlog operations are not performed by the InnoDB storage engine; they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to force the "batch" size to 1 whenever the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and there was also quite a bit of overhead in creating and destroying these constructs, which bogged performance down. With this release, we made the necessary changes to keep the MySQL constructs alive as long as they are valid for a particular connection, so there are no longer repeated, redundant open and close (table) calls. Now, even with the binlog enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. There is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, but it is much better than the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance study. Amerandra of our System QA team conducted performance studies of queries issued through our InnoDB Memcached connection and through the plain SQL end, with some interesting results. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, two 4-core Intel Xeon 2.0 GHz X86_64 CPUs and 2 RAID disks (1027 GB, 733.9 GB). Results are described in the following tables.

    Table 1: Performance comparison on set operations (QPS)

        Connections | Memcached plugin set (QPS)*** | 5.6.7-RC* SQL set (QPS)** | X faster
                  8 |                        30,000 |                     5,600 |     5.36
                 32 |                        59,000 |                    13,000 |     4.54
                128 |                        68,000 |                     8,000 |     8.50
                512 |                        63,000 |                     6,800 |     9.23

        *   mysql-5.6.7-rc-linux2.6-x86_64
        **  The "set" operation, as implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (found by key) is updated. So for the SQL comparison it is wrapped in a precompiled stored procedure, and the query simply executes that procedure.
        *** Added "--daemon_memcached_option=-t8" (the default is 4 threads).

    So with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on get operations (QPS; Memcached plugin with memcached-threads=8)

        Connections | Memcached plugin get (QPS) | 5.6.7-RC SQL get (QPS) | X faster
                  8 |                     42,000 |                 27,000 |     1.56
                 32 |                    101,000 |                 55,000 |     1.83
                128 |                    117,000 |                 52,000 |     2.25
                512 |                    109,000 |                 52,000 |     2.10

    With the "get" query (the select path), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary: In this release we added several much-desired features to InnoDB Memcached, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread for long-running idle transactions, so users can configure large batch sizes for writes and reads without worrying about large numbers of rows being locked or uncommitted data not being visible. We also greatly enhanced performance when the binlog is enabled. We will continue working in both the performance and the functionality areas to make InnoDB Memcached a good demonstration case for our InnoDB APIs. Jimmy Yang, September 29, 2012
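
    Since the InnoDB Memcached plugin speaks the standard memcached text protocol, the table-mapping syntax above can be exercised from any plain socket client. A small C# sketch (host, port and the mapping/key names reuse the illustrative values from the post; the plugin's default port 11211 is assumed):

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        // Sketch: talk to the InnoDB Memcached plugin over the plain memcached
        // text protocol and switch table mappings with the "@@mapping_name" prefix.
        // Host/port and the mapping/key names are illustrative.
        class InnoDbMemcachedSketch
        {
            static void Main()
            {
                using (var client = new TcpClient("127.0.0.1", 11211))
                using (var stream = client.GetStream())
                using (var writer = new StreamWriter(stream, Encoding.ASCII) { NewLine = "\r\n", AutoFlush = true })
                using (var reader = new StreamReader(stream, Encoding.ASCII))
                {
                    // Switch this connection to the "setup_3" mapping and read a key.
                    writer.WriteLine("get @@setup_3.key_a");
                    for (string line = reader.ReadLine(); line != null && line != "END"; line = reader.ReadLine())
                        Console.WriteLine(line);          // VALUE <key> <flags> <bytes>, then the data

                    // Subsequent commands stay on the "my_database/my_demo" table.
                    const string value = "example";
                    writer.WriteLine($"set key_c 0 0 {value.Length}");
                    writer.WriteLine(value);
                    Console.WriteLine(reader.ReadLine()); // expected: STORED
                }
            }
        }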

    Read the article

  • What tools are people using to measure SQL Server database performance?

    - by Paul McLoughlin
    I've experimented with a number of techniques for monitoring the health of our SQL Servers, ranging from the Management Data Warehouse functionality built into SQL Server 2008, through commercial products such as Confio Ignite 8, to rolling my own solution using perfmon, performance counters and information collected from the dynamic management views and functions. What I am finding is that whilst each of these approaches has its own strengths, they all have associated weaknesses too. I feel that to actually get people within the organisation to take the monitoring of SQL Server performance seriously, whatever solution we roll out has to be very simple and quick to use, must provide some form of dashboard, and the act of monitoring must have minimal impact on the production databases (and perhaps even more importantly, it must be possible to prove that this is the case). So I'm interested to hear what others are using for this task? Any recommendations?
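
    For the roll-your-own end of that spectrum, here is a hedged sketch of sampling a single DMV (sys.dm_os_wait_stats) from C#; a real dashboard would snapshot and diff many more sources, and the connection string is a placeholder.

        using System;
        using System.Data.SqlClient;

        // Sketch: sample the top wait types from sys.dm_os_wait_stats.
        // A real monitoring solution would snapshot this periodically and
        // diff the counters; the connection string is a placeholder.
        class WaitStatsSample
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=master;Integrated Security=true"; // placeholder
                const string sql = @"
                    SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
                    FROM sys.dm_os_wait_stats
                    WHERE wait_time_ms > 0
                    ORDER BY wait_time_ms DESC;";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine($"{reader.GetString(0),-40} " +
                                              $"tasks={reader.GetInt64(1)} " +
                                              $"wait_ms={reader.GetInt64(2)} " +
                                              $"signal_ms={reader.GetInt64(3)}");
                }
            }
        }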

    Read the article

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have a "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = user.id), so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if the user changes their username, items will still reflect their old username, but this is okay with me if I can expect a decent performance increase. I'm asking stackoverflow because benchmarks aren't telling me too much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten load on the database to any significant degree.

    Read the article

  • How to improve the performance of opening Microsoft Word when automated from C#?

    - by Abdullah BaMusa
    I have a Microsoft Word template whose fields I fill automatically from my application, and when the user requests a print I open this template. Creating a Word application every time the user requests a print, after filling the fields, is very expensive and leads to a delay while the template opens, so I chose to cache the reference to Word and then just open each newly filled template. That solves the performance issue, since opening a file is less expensive than recreating Word each time, but it only works while the user merely closes the document and not the entire Word application; when Word itself is closed, my cached reference becomes invalid and the next attempt to open the template fails with the exception "The RPC server is unavailable". I tried to subscribe to the BeforeClose event, but it is triggered both when Word quits and when a document closes. My question is: how can I tell whether Word is closing a document or quitting the entire application, so I can take the proper action? Any hints on other ways to improve the performance of opening the Word template are also welcome.
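
    One way to tell the two cases apart with the Word interop events, sketched below under the assumption that the Microsoft.Office.Interop.Word PIA is referenced: in DocumentBeforeClose the closing document is still counted in Application.Documents, so a count of 1 means the last document is going away, while the application-level Quit event fires only when Word itself shuts down (casting to ApplicationEvents4_Event avoids the clash between the Quit method and the Quit event).

        using System;
        using Word = Microsoft.Office.Interop.Word;

        // Sketch: keep one cached Word instance but find out when the cached
        // reference must be dropped. Assumes the Microsoft.Office.Interop.Word
        // primary interop assembly is referenced.
        class CachedWordHost
        {
            private Word.Application _word;

            public Word.Application GetWord()
            {
                if (_word == null)
                {
                    _word = new Word.Application();

                    // Fires per document; the closing document is still counted here.
                    _word.DocumentBeforeClose += delegate(Word.Document doc, ref bool cancel)
                    {
                        bool lastDocument = _word.Documents.Count <= 1;
                        Console.WriteLine(lastDocument
                            ? "Last document is closing; Word itself may stay cached."
                            : "A document closed; others are still open.");
                    };

                    // Application.Quit is both a method and an event, so cast to the
                    // events interface to subscribe to the event.
                    ((Word.ApplicationEvents4_Event)_word).Quit += delegate
                    {
                        Console.WriteLine("Word is quitting; drop the cached reference.");
                        _word = null;
                    };
                }
                return _word;
            }
        }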

    Read the article

  • C# performance of static string[] contains() (slooooow) vs. == operator

    - by Andrew White
    Hiya, just a quick query: I had a piece of code that compared a string against a long list of values, e.g. if (str == "string1" || str == "string2" || str == "string3" || str == "string4") DoSomething(); In the interest of code clarity and maintainability I changed it to public static string[] strValues = { "String1", "String2", "String3", "String4" }; ... if (strValues.Contains(str)) DoSomething(); only to find that the code execution time went from 2.5 secs to 6.8 secs (executed ca. 200,000 times). I certainly understand a slight performance trade-off, but 300%? Is there any way I could define the static strings differently to enhance performance? Cheers.
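
    If the candidate values are fixed, one hedged alternative is a static HashSet<string>, which keeps the readability of the array version but makes each lookup a hash probe instead of the linear scan performed by the LINQ Contains() extension over a string[]; a minimal sketch:

        using System;
        using System.Collections.Generic;

        // Sketch: a static HashSet<string> gives O(1) membership tests instead of
        // the linear scan done by Contains() over a string[].
        // StringComparer.Ordinal matches the semantics of ==; swap in
        // StringComparer.OrdinalIgnoreCase if case should not matter.
        static class FastLookup
        {
            private static readonly HashSet<string> ValidValues =
                new HashSet<string>(StringComparer.Ordinal)
                {
                    "String1", "String2", "String3", "String4"
                };

            // Usage stays a one-liner: if (FastLookup.IsValid(str)) DoSomething();
            public static bool IsValid(string str) => ValidValues.Contains(str);
        }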

    Read the article

  • What are the differences in performance between synchronous and asynchronous JavaScript script loading?

    - by jasdeepkhalsa
    My question is simply: what are the differences in performance between synchronous and asynchronous JavaScript script loading? From what I've gathered, synchronous loading blocks the loading of the page and/or the rest of the code from executing. This happens at two levels: first, at the level of the script actually loading, and secondly, within the JavaScript code itself. For example, on the page: Synchronous: <script src="demo_async.js" type="text/javascript"></script> Asynchronous: <script async src="demo_async.js" type="text/javascript"></script> And within a script: Synchronous: function a() { alert("a"); function b() { alert("b"); } } Asynchronous (self-executing, with a callback): (function(a, callback) { alert(a); callback("b"); })("a", function(b) { alert(b); }); So what really is the difference in performance between these different loading methods and JavaScript patterns?

    Read the article

  • BI and EPM Landscape

    - by frank.buytendijk
    Most of my blog entries are not about Oracle products, and most of the latest entries are about topics such as IT strategy and enterprise architecture. However, given my background at Gartner, and at Hyperion, I still keep a close eye on what's happening in BI and EPM. One important reason is that I believe there is significant competitive value for organizations getting BI and EPM right. Davenport and Harris wrote a great book called "Competing on Analytics", in which they explain this in a very engaging and convincing way. At Oracle we have defined the concept of "management excellence" that outlines what organizations have to do to keep or create a competitive edge. It's not only in the business processes, but also in the management processes. Recently, Gartner published its 2009 market shares report for BI, Analytics, and Performance Management. Gartner identifies the same three segments that Oracle does: (1) CPM Suites (Oracle refers not to Corporate Performance Management, but Enterprise Performance Management), (2) BI Platform, and (3) Analytic Applications & Performance Management. According to Gartner, Oracle's share is increasing with revenue growing by more than 5%. Oracle currently holds the #2 market share position in the overall BI Software space based on total BI software revenue. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010 Gartner has ranked Oracle as #1 in the CPM Suites worldwide sub-segment based on total BI software revenue, and Oracle is gaining share with revenue growing by more than 6% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010 The Analytic Applications & Performance Management subsegment is more fragmented. It has for instance a very large "Other Vendors" category. The largest player traditionally is SAS. Analytic Applications are often meant for very specific analytic needs in very specific industry sectors. According to Gartner, from the large vendors, again Oracle is the one who is gaining the most share - with total BI software revenue growth close to 15% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010 I believe this shows Oracle's integration strategy is working. In fact, integration actually is the innovation. BI and EPM have been silo technology platforms and application suites way too long. Management and measuring performance should be very closely linked to strategy execution, which is the domain of other business application areas such as CRM, ERP, and Supply Chain. BI and EPM are not about "making better decisions" anymore, but are part of a tangible action framework. Furthermore, organizations are getting more serious about ecosystem thinking. They do not evaluate single tools anymore for different application areas, but buy into a complete ecosystem of hardware, software and services. The best ecosystem is the one that offers the most options, in environments where the uncertainty is high and investments are hard to reverse. The key to successfully managing such an environment is middleware, and BI and EPM become increasingly middleware intensive. 
In fact, given the horizontal nature of BI and EPM, sitting on top of all business functions and applications, you could call them "upperware". Many are active in the BI and EPM space. Big players can offer a lot, but there are always many areas that are covered by specialty vendors. Oracle openly embraces those technologies within the ecosystem as well. Complete, open and integrated still accurately describes the Oracle product strategy. frank

    Read the article

  • How can the number of indexes built on a table impact performance?

    - by Davide Mauri
    We all know that putting too many indexes (I'm talking about non-clustered indexes only, of course) on a table may produce performance problems, due to the overhead that each index adds to every insert/update/delete operation on that table. But how much? I mean, we all agree – I think – that, generally speaking, having many indexes on a table is "bad". But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row in a table because of the number of indexes it also has to update, locks will be held for longer, slowing down the perceived performance of all queries involved. I was quite curious to measure this, also because when teaching it's by far more impressive and effective to show attendees a chart with the measured impact, so that they can really "feel" what it means!

    To do the tests, I've created a script that creates a table (with a clustered index on the primary key, which is an identity column), loads 1000 rows into the table (inserting the 1000 rows with a single insert, instead of issuing 1000 single-row inserts, in order to minimize the transaction-handling overhead that would otherwise be incurred), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index to the table, using columns from the table in a round-robin fashion. Tests are done against different row sizes, so that it's possible to check whether performance changes depending on row size. The results are interesting, although expected. This is the chart showing how much time it takes to insert 1000 rows into a table that has from 0 to 16 non-clustered indexes. Each test has been run 20 times in order to have an average value, and the values have been cleaned of outliers caused by unpredictable performance fluctuations due to machine activity. The test shows that in a table with a row size of 80 bytes, 1000 rows can be inserted in 9.05 msec if no indexes are present on the table, and the value grows to 88 (!!!) msec when you have 16 indexes on it. This means an impact on performance of 975%. That's *huge*! Now, what happens if we have a bigger row size? Say that we have a table with a row size of 1520 bytes. Here's the data, from 0 to 16 indexes on that table: in this case we need nearly 22 msec to insert 1000 rows into a table with no indexes, but we need more than 500 msec if the table has 16 active indexes! Now we're talking about a 2410% impact on performance!

    Now we have a tangible idea of the impact of having (too?) many indexes on a table, and also of how the size of a row impacts performance. That's why the golden rule of OLTP databases, "few indexes, but good", is so true! (And in fact last week I saw a database with tables with a 1700-byte row size and 23 (!!!) indexes on them!) This also means that too-heavy denormalization is really not a good idea (we're always talking about OLTP systems, keep that in mind), since performance gets worse as row size increases. So, be careful out there, and keep in mind that "equilibrium" is the key word for a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes. PS: Tests were done on a VMware Workstation 7 VM with 2 CPUs and 4 GB of memory. The host machine is a Dell Precision M6500 with an i7 Extreme X920 quad-core (HT) at 2.0 GHz and 16 GB of RAM.
    The database is stored on an Intel X25-E SSD drive, using the Simple recovery model, running on SQL Server 2008 R2. If you want to run the tests on your own, you can download the test script here: TestIndexPerformance.sql
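
    The linked script is T-SQL, but the shape of the harness can also be sketched from client code; the following C# sketch (table layout, index columns and connection string are placeholders, and absolute timings will differ from the figures above) times a single 1000-row insert while progressively adding non-clustered indexes:

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        // Sketch of the test harness: time a 1000-row insert into the same table
        // while progressively adding non-clustered indexes. Table layout, index
        // columns and the connection string are placeholders.
        class IndexOverheadTest
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=TestDb;Integrated Security=true"; // placeholder

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    Exec(conn, @"IF OBJECT_ID('dbo.T') IS NOT NULL DROP TABLE dbo.T;
                                 CREATE TABLE dbo.T (id INT IDENTITY PRIMARY KEY,
                                                     c1 INT, c2 INT, c3 INT, c4 INT);");

                    for (int indexes = 0; indexes <= 16; indexes++)
                    {
                        if (indexes > 0) // add one more index, round-robin over the columns
                            Exec(conn, $"CREATE INDEX ix_{indexes} ON dbo.T (c{(indexes - 1) % 4 + 1});");

                        Exec(conn, "TRUNCATE TABLE dbo.T;");

                        var sw = Stopwatch.StartNew();
                        Exec(conn, @"INSERT INTO dbo.T (c1, c2, c3, c4)
                                     SELECT TOP (1000) 1, 2, 3, 4
                                     FROM sys.all_objects a CROSS JOIN sys.all_objects b;");
                        sw.Stop();

                        Console.WriteLine($"{indexes,2} indexes: {sw.Elapsed.TotalMilliseconds:F1} ms");
                    }
                }
            }

            static void Exec(SqlConnection conn, string sql)
            {
                using (var cmd = new SqlCommand(sql, conn)) cmd.ExecuteNonQuery();
            }
        }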

    Read the article

  • Why don't SCOM R2 web console performance views load?

    - by Nexus
    When I select any performance view via my SCOM R2 web console, I get the following error: Unexpected error There was an error displaying the page you requested. ... and some suggestions about restarting my browser, which doesn't resolve the issue. The request produces the following event in the logs: Event code: 3005 Event message: An unhandled exception has occurred. Event time: 7/05/2010 11:41:38 AM Event time (UTC): 7/05/2010 1:41:38 AM Event ID: f4c47d1302694e1c8039e6c0088c2520 Event sequence: 18 Event occurrence:1 Event detail code: 0 [snip] Exception information: Exception type: HttpException Exception message: Error executing child request for /ResultViews/ViewTypePerformance.aspx. I'm using forms authentication and all other web console functionality works perfectly. My server is Windows 2008 R2 Standard running SCOM R2 and runs the DB, Web Console and RMS roles. Has anyone else experienced this issue? Is it fixed in the cumulative update release for SCOM R2?

    Read the article

  • What's the performance of USB docking stations, and can they be used when the laptop is closed?

    - by David
    I'm looking into a docking station for a Dell Studio laptop. I don't see the traditional docking stations I'm familiar with - the kind (for a Dell Latitude, for example) where you sit the laptop on top of a long row of pins. Instead, I'm seeing a lot of USB docking stations. When I close my laptop, I want it to go into sleep mode. If I then connect a USB docking station to the laptop while it's closed, will it wake up? What's the performance on USB 2.0 docking stations with a new Dell Studio? Can all of the video and internet traffic really go through a USB 2.0 connection while still providing the best video frame rates and internet speeds? When you undock, I assume you'd have to use the "Safely Remove Hardware" feature in Windows. Will that successfully 'remove' everything attached to the docking station - external drives, thumb drives, etc?

    Read the article

  • Server performance worsened after a hardware upgrade: how should I reconfigure the server?

    - by twick
    I'm running a site on an Ubuntu/Apache/Django/PostgreSQL stack. We upgraded our server recently from 1 processor with 2 Gb total RAM (with 0.5 Gb of that RAM assigned to memcached) to a new server that has 2 processors with 4 Gb total RAM (with 2 Gb of that RAM assigned to memcached). However, when I looked at Google Webmaster Tools, I found out that the average page speed has worsened from 5 seconds to 15 seconds. Why would performance get worse with a hardware upgrade? What should I check and tune? Is this more likely to be a problem with memcached, Apache, Django, or PostgreSQL?

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or, in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
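
    Not an answer to the exact threshold, but a hedged sketch of the sub-directory scheme being described: hash the file name and use the leading bytes as two nesting levels. With two hex characters per level that is 65,536 leaf directories, so three million files average roughly 45 per directory.

        using System;
        using System.IO;
        using System.Security.Cryptography;

        // Sketch: map a flat file name like "abc.ext" to a sharded path like
        // "a1/f3/abc.ext" using the first bytes of an MD5 hash of the name.
        // Two hex characters per level = 256-way fan-out at each of two levels.
        static class ShardedPath
        {
            public static string For(string fileName, string rootDir)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes(fileName));
                    string level1 = hash[0].ToString("x2");
                    string level2 = hash[1].ToString("x2");
                    return Path.Combine(rootDir, level1, level2, fileName);
                }
            }
        }

        // Usage (illustrative): Directory.CreateDirectory(Path.GetDirectoryName(path))
        // before writing, where path = ShardedPath.For("abc.ext", "/var/data");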

    Read the article

  • Advice about performance for local or remote SQL Server?

    - by TruMan1
    I currently have my web server and SQL Express / MySQL server on the same server. It is on a VPS. I have been having problems with my hosting so I am thinking of separating the web and db server into 2 VPS servers. Does anyone recommend this? I am worried that changing my setup from a local DB server to a remote one will degrade performance heavily. They will not be on the same network, but will reference each other via an IP address. Anything I should be aware of?

    Read the article

  • Move virtual machine hard disk to a separate physical hard disk for better performance?

    - by joeeoj
    I have a dual-core machine with the host OS and many guest virtual OSs. Although I have 8GB of RAM, I notice a slowdown when I turn some virtual machine on (and it takes only 1GB of RAM). I was told that I should move the virtual machine hard disk file to a separate physical hard drive in my PC to get better performance. This way the head of the hard disk would not have to jump between the virtual OS and the host OS, as each hard drive would have its own head to deal with its OS: hard drive 1's head for the host OS and hard drive 2's head for the guest OS. Is this true? Should I get another hard disk only for virtual machine hard disk files?

    Read the article
