Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

Page 43/545

  • Install Visual Studio 2010 on SSD drive, or HDD for better performance?

    - by Steve
    I'm going to be installing Visual Studio 2010, and I already have my source code on the SSD. For best performance, especially solution-open and compile times, would it be better to install VS 2010 on the SSD or on the HDD? If both were on the SSD, loading the VS 2010 files would be quicker, but there would be contention between loading the source and the program files. Thanks!

    Read the article

  • Server controls and performance

    Greetings! If you are interested in knowing how some of the internal design of server controls affects performance, check out the article I wrote about it at http://blogs.msdn.com/josere/archive/2010/05/05/asp-net-internals-server-controls-fields-and-performance.aspx. Thanks for reading, Jose.

    Read the article

  • Improve backup performance by watching files added/modified in given directories?

    - by OverTheRainbow
    I use SyncbackSE on Windows to back up files between two hard drives daily. Every time the application starts, it scans every single file in the directories it watches before copying anything that was added or modified. To improve performance, I'm wondering whether there is a Windows backup application that hooks into Windows to keep track of files added or modified in given directories, so that at backup time it only has to walk that list. Thank you.
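
    The idea can be sketched in a few lines of Python using the third-party watchdog package (assumed installed via pip; SyncbackSE itself exposes no such API). The OS reports adds and edits as they happen, the handler records the paths, and the daily backup pass copies only that recorded set. Directory paths are hypothetical.

    import shutil
    import time
    from pathlib import Path
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    WATCHED_DIR = r"C:\data"     # hypothetical source directory
    BACKUP_DIR = r"D:\backup"    # hypothetical destination
    changed = set()              # paths added/modified since the last backup

    class ChangeRecorder(FileSystemEventHandler):
        def on_created(self, event):
            if not event.is_directory:
                changed.add(event.src_path)

        def on_modified(self, event):
            if not event.is_directory:
                changed.add(event.src_path)

    def run_backup():
        # Copy only the recorded files instead of rescanning everything.
        for src in list(changed):
            dest = Path(BACKUP_DIR) / Path(src).relative_to(WATCHED_DIR)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
            changed.discard(src)

    observer = Observer()
    observer.schedule(ChangeRecorder(), WATCHED_DIR, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(24 * 3600)   # daily, like the original setup
            run_backup()
    finally:
        observer.stop()
        observer.join()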

    Read the article

  • Database Mirroring Performance Monitoring

    Many people deploy performance monitoring solutions in a "one-size-fits-all" manner: they build a solution that can be easily deployed to multiple servers and captures basic information from each of them. The trouble is that not every server is identical, not even within the same shop. For example, not every server may have database mirroring deployed, which means your performance monitoring solution may be missing some critical pieces of information with regard to monitoring database mirroring.
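
    As a hedged illustration of the kind of mirroring-specific data point a generic solution misses, the Python sketch below reads the documented sys.database_mirroring catalog view through the third-party pyodbc package. The connection string is an assumption for illustration.

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "Trusted_Connection=yes;"
    )
    rows = conn.cursor().execute("""
        SELECT DB_NAME(database_id) AS db,
               mirroring_role_desc,
               mirroring_state_desc
        FROM sys.database_mirroring
        WHERE mirroring_guid IS NOT NULL  -- only mirrored databases
    """).fetchall()
    for db, role, state in rows:
        print(f"{db}: role={role}, state={state}")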

    Read the article

  • Binding Data to Web Performance Tests

    Web Performance Tests provide a simple means of ensuring correct and performant responses are being returned from your web application. Testing a wide variety of inputs can be tedious without a way to separate test recording and input selection. Data binding provides a convenient and simple way to try an unlimited number of different inputs as part of your web performance tests using Visual Studio 2010.

    Read the article

  • Relevance of Performance Based SEO Services

    It was an interesting conversation between a Search Engine Optimization (SEO) business development person and a prospective customer. The prospective customer had scoffed at the SEO person's sales pitch about offering "Performance Based SEO Services", saying that performance-based service of any kind is not a luxury but a fundamental requirement.

    Read the article

  • Granularity and Parallel Performance

    One key to attaining good parallel performance is choosing the right granularity for the application. The goal is to determine the right granularity for parallel tasks (usually, larger is better) while avoiding load imbalance and communication overhead, so as to achieve the best performance.
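
    The trade-off is easy to demonstrate with a short Python sketch (an illustration, not from the article): multiprocessing.Pool's chunksize argument sets how many tasks each worker grabs at once, i.e. the granularity. Very small chunks pay scheduling and IPC overhead per item; very large chunks risk load imbalance because uneven chunks finish at different times.

    import time
    from multiprocessing import Pool

    def work(n):
        # Stand-in task whose cost varies with n, so chunks are uneven.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000 + (i % 50) * 400 for i in range(20_000)]
        for chunksize in (1, 16, 256, 4096):
            with Pool(processes=4) as pool:
                start = time.perf_counter()
                pool.map(work, jobs, chunksize=chunksize)
                print(f"chunksize={chunksize:5d}"
                      f"  elapsed={time.perf_counter() - start:.2f}s")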

    Read the article

  • Creating my own SAP dashboard/reporting tool for mobile

    - by novwareman
    Hi! Is it possible to create my own mobile dashboard/reporting tool for SAP? I understand the standard components for this purpose are: Business Objects Enterprise Server XI, which provides the reporting functionality for SAP; Business Objects Mobile Server, which connects to the Enterprise Server and optimizes the reports stored there for mobile devices; and a mobile web browser or the mobile Business Objects report client for viewing the reports. I don't intend to purchase the components mentioned above; I only want to know whether it's feasible to build my own solution. I am familiar with BAPIs for retrieving data from SAP. What is the proper approach to building an SAP reporting tool? Thank you!
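
    For what it's worth, a heavily hedged sketch of the "build your own" route in Python: pull data over RFC with the pyrfc package (Python bindings for the SAP NetWeaver RFC SDK) and expose it to a mobile web front end. The BAPI name, its parameter, and all connection details below are hypothetical placeholders, not a real interface.

    from pyrfc import Connection

    conn = Connection(
        ashost="sap.example.com", sysnr="00",  # hypothetical system
        client="100", user="RFC_USER", passwd="secret",
    )
    # Hypothetical BAPI call; substitute a real remote-enabled function module.
    result = conn.call("BAPI_SOME_REPORT_GETLIST", PERIOD="2010-05")
    conn.close()
    # A small HTTP layer could then serve `result` as JSON to a
    # mobile-friendly HTML/JS dashboard page.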

    Read the article

  • SQLAuthority News – Three Posts on Reporting – T-SQL Tuesday #005

    - by pinaldave
    If you are following my blog, you already know that I am more of a "T-SQL and Performance Tuning" type of person. I do have a good understanding of the Business Intelligence suite, and I also deliver training sessions on the subject. When I was writing the blog post for T-SQL Tuesday #005 – Reporting, I realized that I have written posts that clearly explain how to generate reports using SQL Server Management Studio. Here is a quick recap of how one can use SSMS and its out-of-the-box reports, which can help many developers. Please note that they can be resource-intensive as well, so use them carefully. SQL SERVER – Generate Report for Index Physical Statistics – SSMS SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005 Junior developers and DBAs can use these reports right away, and can also start learning about and exploring most database performance issues with the help of senior DBAs. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports

    Read the article

  • SQL SERVER – What is Fill Factor and What is the Best Value for Fill Factor

    - by pinaldave
    Working in the performance tuning area, one has to know about indexes and index maintenance. For any index, the most important property is the fill factor. The fill factor determines the percentage of space on each leaf-level page to be filled with data. In SQL Server, the smallest unit of storage is the page, which is 8 KB in size. Every page can store one or more rows, depending on row size. The default value of the fill factor is 100, which is the same as the value 0. The default fill factor (100 or 0) lets SQL Server fill the leaf-level pages of an index with the maximum number of rows that can fit; there will be little or no empty space left on the page when the fill factor is 100. I have written the following two articles about fill factor. What is Fill factor? – Index, Fill Factor and Performance – Part 1 What is the best value for the Fill Factor? – Index, Fill Factor and Performance – Part 2 I strongly encourage you to read them and provide your feedback. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
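
    A back-of-the-envelope Python sketch of the trade-off: the 8 KB page size is from the post, while the row size and row count are made-up assumptions (real pages also lose some bytes to headers and overhead).

    PAGE_BYTES = 8 * 1024   # SQL Server page size: 8 KB
    ROW_BYTES = 200         # hypothetical average row size
    ROWS = 1_000_000        # hypothetical number of rows

    def leaf_pages(fill_factor: int) -> int:
        usable = PAGE_BYTES * fill_factor / 100
        rows_per_page = int(usable // ROW_BYTES)
        return -(-ROWS // rows_per_page)  # ceiling division

    for ff in (100, 90, 70):
        print(f"fill factor {ff:3d}: ~{leaf_pages(ff):,} leaf pages")
    # A lower fill factor means more pages to read per scan, but the free
    # space on each page absorbs inserts and updates without page splits.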

    Read the article

  • SQL SERVER – 2008 – Unused Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script

    Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper it. Here is the script from my script bank which I use to identify unused indexes on any database. Please note that you should not drop all of the unused indexes this script suggests; it is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. In any case, the script is a good starting point. Pay attention to User Scans, User Lookups, and User Updates when you are considering dropping an index. The general understanding is that if these values are all high and User Seeks is low, the index needs tuning. A drop statement for each index is provided in the last column.

    -- Unused Index Script
    -- Original Author: Pinal Dave (C) 2011
    SELECT TOP 25
        o.name AS ObjectName,
        i.name AS IndexName,
        i.index_id AS IndexID,
        dm_ius.user_seeks AS UserSeek,
        dm_ius.user_scans AS UserScans,
        dm_ius.user_lookups AS UserLookups,
        dm_ius.user_updates AS UserUpdates,
        p.TableRows,
        'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(s.name) + '.'
            + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS 'drop statement'
    FROM sys.dm_db_index_usage_stats dm_ius
    INNER JOIN sys.indexes i
        ON i.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = i.OBJECT_ID
    INNER JOIN sys.objects o
        ON dm_ius.OBJECT_ID = o.OBJECT_ID
    INNER JOIN sys.schemas s
        ON o.schema_id = s.schema_id
    INNER JOIN (SELECT SUM(p.rows) TableRows, p.index_id, p.OBJECT_ID
                FROM sys.partitions p
                GROUP BY p.index_id, p.OBJECT_ID) p
        ON p.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = p.OBJECT_ID
    WHERE OBJECTPROPERTY(dm_ius.OBJECT_ID, 'IsUserTable') = 1
        AND dm_ius.database_id = DB_ID()
        AND i.type_desc = 'nonclustered'
        AND i.is_primary_key = 0
        AND i.is_unique_constraint = 0
    ORDER BY (dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups) ASC
    GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by Angelo-Oracle
    The M7 Chip

    Oracle just announced its next-generation processor at the Hot Chips HC26 conference. As the tech lead in our Systems Division's partner group, I had a front-row seat to the extraordinary price/performance advantage of Oracle's current T5- and M6-based systems. Partner after partner tested these systems and was impressed with their performance; just read some of the quotes to see what our partners have been saying about our hardware. We have now announced our next-generation processor, the M7. It has 32 cores (up from 16 cores in the T5 and 12 cores in the M6), and with 20 nm technology it is our most advanced processor, with more cores than anything else in the industry today. Since the Sun acquisition, Oracle has released five processors in four years, and this is the sixth.

    The S4 Core

    The M7 is built on the foundation of the S4 core, the next generation of our core technology. Like its predecessor, the S4 has 8 dynamic threads. It increases the frequency while maintaining the pipeline depth. Each core has its own fine-grained power estimator that keeps the core within its power envelope at 250 ns granularity. Each core also includes Software in Silicon features for application acceleration support, features to improve application data integrity with almost no performance loss, and support for using part of the virtual address to store metadata. User-level synchronization instructions are also part of the S4 core. Each core has a 16 KB instruction and a 16 KB data L1 cache.

    The Core Clusters

    The cores on the M7 chip are organized in 4-core clusters that share L2 cache. All four cores in a cluster share 256 KB of 4-way set associative L2 instruction cache with over 1/2 TB/s of throughput, and each pair of cores shares 256 KB of 8-way set associative L2 data cache, also with over 1/2 TB/s of throughput. With this core cluster architecture, the M7 doubles core execution bandwidth to maximize per-thread performance.

    The Chip

    Each M7 chip has 8 of these core clusters. The chip has 64 MB of on-chip L3 cache, shared among all the cores and partitioned into 8 chunks of 8 MB, each an 8-way set associative cache. The aggregate bandwidth of the on-chip L3 cache is over 1.6 TB/s. Each chip has 4 DDR4 memory controllers and can support up to 16 DDR4 DIMMs, allowing for 2 TB of RAM per chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers. Each chip has 7 coherence links, allowing 8 of these chips to be connected together gluelessly, and 32 of these chips can be connected in an SMP configuration. A potential system with 32 chips would have 1,024 cores, 8,192 threads, and 64 TB of RAM.

    Software in Silicon

    The M7 chip has many application accelerators built into silicon; these features will be exposed to our software partners through the SPARC Accelerator Program. The M7 has built-in logic to decompress data at the speed of memory access, which means applications can work directly on compressed data in memory, increasing data access rates. The VA masking feature allows part of the virtual address to be used to store metadata.

    Real-time Application Data Integrity

    The real-time Application Data Integrity feature helps applications guard against invalid or stale memory references and buffer overflows. The first 4 bits of a pointer can be used to store a version number, and this version number is also maintained in the memory and cache lines. When a pointer accesses memory, the hardware checks that the two versions match; a SEGV signal is raised when there is a mismatch. This feature can be used by the database, by applications, and by the OS.

    M7 Database In-Memory Query Accelerator

    The M7 chip also includes in-silicon query engines, which accelerate tasks that work on in-memory columnar vectors. The Oracle Database In-Memory option stores data in column format, and the M7 query engine can speed up in-memory format conversion, value and range comparisons, and set-membership lookups. This engine can work on compressed data, which means we are not only accelerating query performance but also increasing the effective memory bandwidth for queries.

    SPARC Accelerated Program

    At the Hot Chips conference we also introduced the SPARC Accelerated Program, to give our partners and third-party developers access to all the goodness of the M7's SPARC application acceleration features. Please get in touch with us if you are interested in knowing more about this program.
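
    A quick Python arithmetic check of the 32-chip SMP figures quoted above:

    cores_per_chip = 32
    threads_per_core = 8   # 8 dynamic threads per S4 core
    chips = 32
    ram_per_chip_tb = 2

    print(chips * cores_per_chip)                     # 1024 cores
    print(chips * cores_per_chip * threads_per_core)  # 8192 threads
    print(chips * ram_per_chip_tb)                    # 64 TB of RAM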

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world-record performance with sub-second response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved a throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

    Performance Landscape

    Oracle OLAP Perf Version 2 Benchmark, 4 billion fact table rows:

    System       Queries/hour   Users*   Avg Response (sec)   Median Response (sec)
    SPARC T4-4   430,000        7,300    0.85                 0.43

    * Users: the supported number of users with a given think time of 60 seconds

    Configuration Summary and Results

    Hardware configuration: SPARC T4-4 server with 4 x SPARC T4 processors at 3.0 GHz and 1 TB of memory. Data storage: 1 x Sun Fire X4275 (using COMSTAR) and 2 x Sun Storage F5100 Flash Array (each with 80 FMODs). Redo storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDDs). Software configuration: Oracle Solaris 11 11/11 and Oracle Database 11g Release 2 (11.2.0.3) with the Oracle OLAP option.

    Benchmark Description

    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model in support of enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g., South America) for a particular time period (e.g., Q4 of 2010), followed by additional queries that drill down into sales for individual countries (e.g., Chile, Peru, etc.), with further queries drilling down into individual stores, and so on. Another query chain may ask for yearly comparisons of total sales for some product category (e.g., major household appliances) and then issue further queries drilling down into particular products (e.g., refrigerators, stoves, etc.), particular regions, particular customers, and so on. Results from version 2 of the benchmark are not comparable with version 1; the primary difference is the type of queries along with the query mix.

    Key Points and Best Practices

    Since typical BI users are likely to issue similar queries with different constants in the WHERE clauses, setting the init.ora parameter "cursor_sharing" to "force" provides additional query throughput and supports a larger number of potential users. Apart from this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should give results similar to those reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support at the measured response time, assuming some non-zero think time. The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time, and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds, and tp to 119.44 queries/sec (430,000 queries/hour), the formula shows that the T4-4 server will support about 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information, see chapter 3 of the book "Quantitative System Performance", cited below.

    See Also: Quantitative System Performance: Computer System Analysis Using Queueing Network Models, by Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik; Oracle Database 11g – Oracle OLAP; SPARC T4-4 Server; Oracle Solaris; Oracle Database 11g Release 2.

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.
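
    The closing derivation can be checked directly; this small Python sketch plugs the reported numbers into the response-time law:

    rt = 0.85             # average response time, seconds
    tt = 60.0             # think time, seconds
    tp = 430_000 / 3600   # throughput: 430,000 queries/hour ~= 119.44 q/s

    n_users = (rt + tt) * tp
    print(f"supported users ~= {n_users:,.0f}")  # ~7,268, reported as 7,300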

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (GoogleLog, Pantheios, the upcoming boost::log library...). In games, especially in networked multiplayer and multithreaded games, logging is vital for debugging, even if you remove all the logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess), and I have good reasons to look for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need performance, ease of use (allowing streaming or formatting or something like that), reliability (no leaks, no crashes!), and cross-platform support (at least Windows, Mac OS X, Linux/Ubuntu), which logging library would you recommend? Currently I think boost::log is the most flexible one (you can even log remotely!), but it is said not to perform that well. Pantheios is often cited, but I don't have comparison points on its performance and usage. I've used my own lib for a long time, but I know it doesn't handle multithreading, so that's a big problem, even if it's fast enough. GoogleLog seems interesting and I just need to test it, but if you've already compared these libs and others, your advice might be of good use. Games are often performance-demanding yet complex to debug, so it would be good to know which logging libraries have clear advantages in our specific case.

    Read the article

  • Which opcodes are faster at the CPU level?

    - by Geotarget
    In every programming language there are sets of opcodes that are recommended over others. I've tried to list them here, in order of speed: bitwise operations; integer addition/subtraction; integer multiplication/division; comparison; control flow; float addition/subtraction; float multiplication/division. Where you need high-performance code, C++ can be hand-optimized in assembly to use SIMD instructions or more efficient control flow, data types, etc. So I'm trying to understand whether the data type (int32/float32/float64) or the operation used (*, +, &) affects performance at the CPU level. Is a single multiply slower on the CPU than an addition? In MCU theory you learn that the speed of an opcode is determined by the number of CPU cycles it takes to execute. So does that mean multiply takes 4 cycles and add takes 2? Exactly what are the speed characteristics of the basic math and control-flow opcodes? If two opcodes take the same number of cycles to execute, can both be used interchangeably without any performance gain or loss? Any other technical details you can share regarding x86 CPU performance are appreciated.
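
    A hedged measurement sketch in Python. Note that CPython's interpreter dispatch dominates these timings, so they reflect language overhead far more than raw opcode latency; real per-instruction numbers come from vendor optimization manuals or compiled-code microbenchmarks. This only illustrates the measurement approach.

    import timeit

    ops = {
        "int add":   "a + b",
        "int mul":   "a * b",
        "int div":   "a // b",
        "bitwise &": "a & b",
        "float add": "x + y",
        "float mul": "x * y",
        "float div": "x / y",
    }
    setup = "a, b = 123456, 789; x, y = 1234.56, 7.89"
    for name, expr in ops.items():
        t = timeit.timeit(expr, setup=setup, number=5_000_000)
        print(f"{name:10s} {t:.3f}s per 5M evaluations")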

    Read the article

  • How to show the dashboard for a Micromax 353G in Ubuntu 12.04?

    - by newuser
    I am using Ubuntu 12.04 with a Micromax 353G dongle for internet access. Everything works fine, but I want to see the data used in the current session and the remaining data balance. The dongle shows all these features in Windows, but I use (and love) Ubuntu, so I want them there too. Is it possible to get the Micromax dongle dashboard on Ubuntu 12.04? Any help and suggestions will be highly appreciated.

    Read the article

  • What CPU hardware performance counter tool do you guys use?

    - by Hao Shen
    I just want to know which popular tool I can use. Originally I used perfmon2, but recently I installed the latest Ubuntu 12 release on my Ivy Bridge i7 machine, and now there is a compilation error for pfmon. :< From http://comments.gmane.org/gmane.comp.linux.perfmon2.devel/3255 it seems that perfmon2 does not work after kernel 2.6.30, and it suggests using the perf tool instead. I just want to confirm: is there any popular application-level performance counter monitoring tool which can be used with later kernel versions?
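
    For reference, a minimal sketch of driving perf from Python; the events listed are standard perf hardware events, and ./myapp is a placeholder workload binary.

    import subprocess

    result = subprocess.run(
        ["perf", "stat", "-e", "cycles,instructions,cache-misses", "./myapp"],
        capture_output=True,
        text=True,
    )
    print(result.stderr)  # perf stat writes its counter summary to stderr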

    Read the article

  • SQL Server "Long running transaction" performance counter: why no workee?

    - by Sleepless
    Please explain the following observation to me. I have this piece of T-SQL code that I run from SSMS:

    BEGIN TRAN
    SELECT COUNT(*) FROM m WHERE m.[x] = 123456 OR m.[y] IN (SELECT f.x FROM f)
    SELECT COUNT(*) FROM m WHERE m.[x] = 123456 OR m.[y] IN (SELECT f.x FROM f)
    COMMIT TRAN

    The query takes about twenty seconds to run, and I have no other user queries running on the server. Under these circumstances, I would expect the performance counter "MSSQL$SQLInstanceName:Transactions\Longest Transaction Running Time" to rise steadily to a value of 20 and then drop rapidly. Instead, it rises to around 12 within two seconds and then oscillates between 12 and 14 for the duration of the query, after which it drops again. According to the MS docs, the counter measures "The length of time (in seconds) since the start of the transaction that has been active longer than any other current transaction." But apparently, it doesn't. What gives?
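
    One way to watch the same counter from outside Perfmon, as a hedged Python sketch: poll the documented sys.dm_os_performance_counters DMV through the third-party pyodbc package (the connection string is an assumption for illustration).

    import time
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "Trusted_Connection=yes;"
    )
    sql = """
        SELECT cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name LIKE 'Longest Transaction Running Time%'
    """
    for _ in range(30):   # sample once per second for 30 seconds
        row = conn.cursor().execute(sql).fetchone()
        print(f"longest transaction running time: {row[0]}")
        time.sleep(1)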

    Read the article

  • MacBook makes a noise from the MagSafe connection every time I flip a widget...

    - by Timmy
    OK, this is bizarre. Basically, on my MacBook (latest model), whenever I 'flip' a Dashboard widget I hear a little noise for the duration of the flip: a tiny, continuous squeak. It's quite strange. However, it only ever happens when the MagSafe cable is plugged in. WHY?! :'( The noise appears to be emitted from the MagSafe area itself, but there aren't any other CoreAnimation or Quartz-y bits that cause this to happen; it only happens with Dashboard widgets. The noise is very quiet and you can only hear it in a silent room. Edit: Running the latest Snow Leopard.

    Read the article

  • What could cause Vista machines to degrade in performance?

    - by gencha
    We have installed several machines running Windows Vista at a client site. These machines run the same set of applications each day and are not operated by any user; they mainly run a 3D application for digital signage purposes. After a few months I noticed that several machines suffer from really bad stuttering, and the performance is unacceptable. At first I figured it was faulty hardware, but after putting a fresh image on them, they run perfectly fine again. These are Core i7 machines with GTX 280 graphics adapters; they should be able to handle the workload without any trouble (and they do, at least for a while). I really have no idea how to track down the source of this issue, especially since not all of the machines display the same problem.

    Read the article
