Search Results

Search found 13654 results on 547 pages for 'ssis performance'.

Page 58/547 | < Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • Binding Data to Web Performance Tests

    Web Performance Tests provide a simple means of ensuring correct and performant responses are being returned from your web application. Testing a wide variety of inputs can be tedious without a way to separate test recording and input selection. Data binding provides a convenient and simple way to try an unlimited number of different inputs as part of your web performance tests using Visual Studio 2010.

    Read the article

  • Relevance of Performance Based SEO Services

    It was an interesting conversation between a Search Engine Optimization (SEO) business development person and a prospective customer. The prospective customer had scoffed at the SEO person's sales pitch about offering "Performance Based SEO Services", saying that performance-based service of any kind in life is not a luxury but a fundamental requirement.

    Read the article

  • Granularity and Parallel Performance

    One key to attaining good parallel performance is choosing the right granularity for the application: the goal is to pick a task granularity (usually, larger is better) that avoids both load imbalance and communication overhead.

    Read the article

  • SQLAuthority News – Three Posts on Reporting – T-SQL Tuesday #005

    - by pinaldave
    If you are following my blog, you already know that I am more of a “T-SQL and Performance Tuning” type of person. I have a good understanding of the Business Intelligence suite, and I also run training sessions on the same subject. When I was writing the blog post for T-SQL Tuesday #005 – Reporting, I realized that I have written posts that clearly explain how to generate reports using SQL Server Management Studio. Here is a quick recap on how one can use SSMS and its out-of-the-box reports, which can help many developers. Please note that these reports can be resource-intensive as well, so please use them carefully.

    SQL SERVER – Generate Report for Index Physical Statistics – SSMS
    SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS
    SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    Junior developers and DBAs can use these reports right away, and can also start learning about and exploring most database performance issues with the help of senior DBAs. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports

    Read the article

  • SQL SERVER – What is Fill Factor and What is the Best Value for Fill Factor

    - by pinaldave
    Working in the performance tuning area, one has to know about indexes and index maintenance. For any index, the most important property is the fill factor. Fill factor is the value that determines the percentage of space on each leaf-level page to be filled with data. In SQL Server, the smallest unit of storage is the page, which is 8 KB in size. Every page can store one or more rows, based on the size of the row. The default value of the fill factor is 100, which is the same as a value of 0. The default fill factor (100 or 0) allows SQL Server to fill the leaf-level pages of an index with the maximum number of rows they can fit. When the fill factor is 100, there will be little or no empty space left in the page. I have written the following two articles about fill factor.

    What is Fill factor? – Index, Fill Factor and Performance – Part 1
    What is the best value for the Fill Factor? – Index, Fill Factor and Performance – Part 2

    I strongly encourage you to read them and provide your feedback. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
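    A minimal T-SQL sketch of how a fill factor is applied in practice (hypothetical table and index names, not from the post):

        -- Rebuild an index, leaving 10% free space on each leaf-level page
        ALTER INDEX IX_Customers_LastName ON dbo.Customers
        REBUILD WITH (FILLFACTOR = 90);

        -- Inspect the fill factor currently stored for the table's indexes
        SELECT name, fill_factor
        FROM sys.indexes
        WHERE object_id = OBJECT_ID('dbo.Customers');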

    Read the article

  • SQL SERVER – 2008 – Unused Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script

    Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper it. Here is the script from my script bank which I use to identify unused indexes on any database. Please note that you should not drop all the unused indexes this script suggests; it is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. In any case, the script is a good starting point. You should pay attention to User Scans, User Lookups, and User Updates before you drop an index. The general understanding is that if these values are all high while User Seeks is low, the index needs tuning. The index drop statement is also provided in the last column.

    -- Unused Index Script
    -- Original Author: Pinal Dave (C) 2011
    SELECT TOP 25
        o.name AS ObjectName,
        i.name AS IndexName,
        i.index_id AS IndexID,
        dm_ius.user_seeks AS UserSeek,
        dm_ius.user_scans AS UserScans,
        dm_ius.user_lookups AS UserLookups,
        dm_ius.user_updates AS UserUpdates,
        p.TableRows,
        'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
            + QUOTENAME(s.name) + '.' + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS 'drop statement'
    FROM sys.dm_db_index_usage_stats dm_ius
    INNER JOIN sys.indexes i
        ON i.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = i.OBJECT_ID
    INNER JOIN sys.objects o
        ON dm_ius.OBJECT_ID = o.OBJECT_ID
    INNER JOIN sys.schemas s
        ON o.schema_id = s.schema_id
    INNER JOIN (SELECT SUM(p.rows) TableRows, p.index_id, p.OBJECT_ID
                FROM sys.partitions p
                GROUP BY p.index_id, p.OBJECT_ID) p
        ON p.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = p.OBJECT_ID
    WHERE OBJECTPROPERTY(dm_ius.OBJECT_ID, 'IsUserTable') = 1
        AND dm_ius.database_id = DB_ID()
        AND i.type_desc = 'nonclustered'
        AND i.is_primary_key = 0
        AND i.is_unique_constraint = 0
    ORDER BY (dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups) ASC
    GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by Angelo-Oracle
    The M7 Chip
    Oracle just announced its next-generation processor, the M7, at the Hot Chips HC26 conference. As the Tech Lead in our Systems Division's partner group, I had a front-row seat to the extraordinary price/performance advantage of Oracle's current T5 and M6 based systems. Partner after partner tested these systems and was impressed with their performance. Just read some of the quotes to see what our partners have been saying about our hardware. The M7 has 32 cores (up from 16 cores in the T5 and 12 cores in the M6). Built with 20 nm technology, this is our most advanced processor, and it has more cores than anything else in the industry today. Since the Sun acquisition, Oracle has released 5 processors in 4 years, and this is the 6th.

    The S4 Core
    The M7 is built on the foundation of the S4 core, the next generation of our core technology. Like its predecessor, the S4 has 8 dynamic threads. It increases the frequency while maintaining the pipeline depth. Each core has its own fine-grained power estimator that keeps the core within its power envelope at 250 nanosecond granularity. Each core also includes Software in Silicon features for application acceleration support, and features to improve Application Data Integrity with almost no performance loss. The core also allows using part of the virtual address to store metadata. User-level synchronization instructions are also part of the S4 core. Each core has a 16 KB instruction and a 16 KB data L1 cache.

    The Core Clusters
    The cores on the M7 chip are organized in 4-core clusters that share L2 cache. All four cores in the cluster share 256 KB of 4-way set associative L2 instruction cache, with over 1/2 TB/s of throughput. Two cores share 256 KB of 8-way set associative L2 data cache, also with over 1/2 TB/s of throughput. With this innovative core cluster architecture, the M7 doubles core execution bandwidth to maximize per-thread performance.

    The Chip
    Each M7 chip has 8 of these core clusters. The chip has 64 MB of on-chip L3 cache, shared among all the cores and partitioned into 8 x 8 MB chunks, each an 8-way set associative cache. The aggregate bandwidth of the on-chip L3 cache is over 1.6 TB/s. Each chip has 4 DDR4 memory controllers and can support up to 16 DDR4 DIMMs, allowing for 2 TB of RAM per chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers. Each chip has 7 coherence links, allowing 8 of these chips to be connected together gluelessly; 32 of these chips can also be connected in an SMP configuration. A potential system with 32 chips would have 1024 cores, 8192 threads, and 64 TB of RAM.

    Software in Silicon
    The M7 chip has many application accelerators built into silicon. These features will be exposed to our software partners through the SPARC Accelerator Program. The M7 has built-in logic to decompress data at the speed of memory access, which means applications can work directly on compressed data in memory, increasing data access rates. The VA Masking feature allows part of the virtual address to be used to store metadata.

    Realtime Application Data Integrity
    The Realtime Application Data Integrity feature helps applications guard against invalid or stale memory references and buffer overflows. The first 4 bits of a pointer can be used to store a version number, and this version number is also maintained in the memory and cache lines. When a pointer accesses memory, the hardware checks that the two versions match; a SEGV signal is raised on a mismatch. This feature can be used by the database, applications, and the OS.

    M7 Database In-Memory Query Accelerator
    The M7 chip also includes in-silicon query engines, which accelerate tasks that work on in-memory columnar vectors. The Oracle Database In-Memory option stores data in column format. The M7 query engine can speed up in-memory format conversion, value and range comparisons, and set membership lookups. This engine can work on compressed data, which means we are not only accelerating query performance but also increasing the effective memory bandwidth for queries.

    SPARC Accelerated Program
    At the Hot Chips conference we also introduced the SPARC Accelerated Program, to give our partners and third-party developers access to all the goodness of the M7's SPARC application acceleration features. Please get in touch with us if you are interested in knowing more about this program.

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

    Performance Landscape
    Oracle OLAP Perf Version 2 Benchmark, 4 billion fact table rows:

    System       Queries/hour   Users*   Avg Response (sec)   Median Response (sec)
    SPARC T4-4   430,000        7,300    0.85                 0.43

    * Users - the supported number of users with a given think time of 60 seconds

    Configuration Summary and Results
    Hardware configuration: SPARC T4-4 server with 4 x SPARC T4 processors at 3.0 GHz and 1 TB memory. Data storage: 1 x Sun Fire X4275 (using COMSTAR) and 2 x Sun Storage F5100 Flash Array (each with 80 FMODs). Redo storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD). Software configuration: Oracle Solaris 11 11/11 and Oracle Database 11g Release 2 (11.2.0.3) with the Oracle OLAP option.

    Benchmark Description
    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1; the primary difference is the type of queries, along with the query mix.

    Key Points and Best Practices
    Since typical BI users are likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Apart from this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time, assuming some non-zero think time. The calculation of the maximum number of users follows from the well-known response-time law

        N = (rt + tt) * tp

    where rt is the average response time, tt is the think time, and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds, and tp to 119.44 queries/sec (430,000 queries/hour), the formula shows that the T4-4 server will support about 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance" cited below.

    See Also
    Quantitative System Performance: Computer System Analysis Using Queueing Network Models - Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik
    Oracle Database 11g – Oracle OLAP (oracle.com OTN)
    SPARC T4-4 Server (oracle.com OTN)
    Oracle Solaris (oracle.com OTN)
    Oracle Database 11g Release 2 (oracle.com OTN)

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.
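    As a quick arithmetic check of the user sizing calculation above (a worked example, not part of the original benchmark disclosure):

    \[ N = (rt + tt) \times tp = (0.85\,\mathrm{s} + 60\,\mathrm{s}) \times 119.44\,\mathrm{queries/s} \approx 7268 \approx 7300\ \text{users} \]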

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (Google Log, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:

    - performance
    - ease of use (allows streaming or formatting or something like that)
    - reliability (doesn't leak or crash!)
    - cross-platform (at least Windows, Mac OS X, Linux/Ubuntu)

    Which logging library would you recommend? Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it does not have good performance. Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't handle multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting and I just need to test it, but if you have already compared these libs and more, your advice might be of good use. Games are often performance-demanding while complex to debug, so it would be good to know which logging libraries, in our specific case, have clear advantages.

    Read the article

  • Which opcodes are faster at the CPU level?

    - by Geotarget
    In every programming language there are sets of opcodes that are recommended over others. I've tried to list them here, in order of speed:

    1. Bitwise operations
    2. Integer addition / subtraction
    3. Integer multiplication / division
    4. Comparison
    5. Control flow
    6. Float addition / subtraction
    7. Float multiplication / division

    Where you need high-performance code, C++ can be hand-optimized in assembly to use SIMD instructions or more efficient control flow, data types, etc. So I'm trying to understand whether the data type (int32 / float32 / float64) or the operation used (*, +, &) affects performance at the CPU level. Is a single multiply slower on the CPU than an addition? In MCU theory you learn that the speed of an opcode is determined by the number of CPU cycles it takes to execute. So does that mean multiply takes 4 cycles and add takes 2? Exactly what are the speed characteristics of the basic math and control flow opcodes? If two opcodes take the same number of cycles to execute, can both be used interchangeably without any performance gain/loss? Any other technical details you can share regarding x86 CPU performance are appreciated.

    Read the article

  • SQL SERVER – Automated Type Conversion using Expressor Studio

    - by pinaldave
    Recently I had an interesting situation during my consultation project. Let me share with you how I solved the problem using Expressor Studio. Consider a situation in which you need to read a field, such as customer_identifier, from a text file and pass that field into a database table. In the source file's metadata structure, customer_identifier is described as a string; however, in the target database table, customer_identifier is described as an integer. Legitimately, all the source values for customer_identifier are valid numbers, such as “109380”. To implement this in an ETL application, you probably would have hard-coded a type conversion function call, such as:

    output.customer_identifier = stringToInteger(input.customer_identifier)

    That wasn't so bad, was it? For this instance, programming this hard-coded type conversion function call was relatively easy. However, hard-coding, whether type conversion code or other business rule code, almost always means that the application containing hard-coded fields, function calls, and values is: a) specific to an instance of use; b) difficult to adapt to new situations; and c) doesn't contain many reusable sub-parts. Therefore, in the long run, applications with hard-coded type conversion function calls don't scale well. In addition, they increase the overall level of effort and degree of difficulty of writing and maintaining ETL applications. To get around the trappings of hard-coded type conversion function calls, developers need access to smarter typing systems. The Expressor Studio product offers exactly this feature, by providing developers with a type conversion automation engine based on type abstraction. The theory behind the engine is quite simple: a user specifies abstract data fields in the engine, and then writes applications against the abstractions (whereas in most ETL software, developers develop applications against the physical model). When a Studio-built application is run, Studio's engine automatically converts the source type to the abstracted data field's type and converts the abstracted data field's type to the target type. The engine can do this because it has built-in rules for type conversions. So, using the example above, a developer could specify customer_identifier as an abstract data field with a type of integer when using Expressor Studio. Upon reading the string value from the text file, Studio's type conversion engine automatically converts the source field from the type specified in the source's metadata structure to the abstract field's type. At the time of writing the data value to the target database, the engine doesn't have any work to do because the abstract data type and the target data type are the same. Had they been different, the engine would have automatically provided the conversion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SSIS

    Read the article

  • What CPU hardware performance counter tool do you guys use?

    - by Hao Shen
    I just want to know which popular tools I can use. Originally, I used Perfmon2. However, I recently installed the latest Ubuntu 12 on my Ivy Bridge i7 machine, and now there is a compilation error for Pfmon. :< From http://comments.gmane.org/gmane.comp.linux.perfmon2.devel/3255 it seems that Perfmon2 will not work after kernel 2.6.30, and the suggestion there is to use the perf tool instead. I just want to confirm: is there any popular application-level performance counter monitoring tool that can be used with later kernel versions?

    Read the article

  • SQL Server "Long running transaction" performance counter: why no workee?

    - by Sleepless
    Please explain to me the following observation. I have the following piece of T-SQL code that I run from SSMS:

    BEGIN TRAN
    SELECT COUNT(*) FROM m WHERE m.[x] = 123456 OR m.[y] IN (SELECT f.x FROM f)
    SELECT COUNT(*) FROM m WHERE m.[x] = 123456 OR m.[y] IN (SELECT f.x FROM f)
    COMMIT TRAN

    The query takes about twenty seconds to run. I have no other user queries running on the server. Under these circumstances, I would expect the performance counter "MSSQL$SQLInstanceName:Transactions\Longest Transaction Running Time" to rise constantly up to a value of 20 and then drop rapidly. Instead, it rises to around 12 within two seconds and then oscillates between 12 and 14 for the duration of the query, after which it drops again. According to the MS docs, the counter measures "The length of time (in seconds) since the start of the transaction that has been active longer than any other current transaction." But apparently, it doesn't. What gives?

    Read the article

  • What could cause Vista machines to degrade in performance?

    - by gencha
    We have installed several machines running Windows Vista at a client site. These machines run the same set of applications each day and are not operated by any user; they mainly run a 3D application for digital signage purposes. After a few months I noticed that several machines suffer from really bad stuttering, and the performance is unacceptable. At first I figured it was faulty hardware, but after putting a fresh image on them, they run perfectly fine again. These are Core i7 machines with GTX 280 graphics adapters. They should be able to handle the workload without any trouble (and they do, at least for a while). I really have no idea how to track down the source of this issue, especially since not all of the machines display the same issue.

    Read the article

  • Delay index build until SQL Server table load is complete with SSIS

    - by Mattew
    I have a large table that I am updating. Is it possible to disable index updates on the destination table until the load is complete? It seems like a waste for it to be constantly updating the index with each commit. I could just drop and recreate the index before and after the load; I just want to know if there is a quick way to configure that in the OLE DB or SQL Server destination. The server is Windows Server 2003 Datacenter Edition, running SQL Server 2008 Standard Edition with SSIS.
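    For what it's worth, a minimal T-SQL sketch of the disable/rebuild alternative (hypothetical index and table names; in SSIS these statements would typically sit in Execute SQL Tasks before and after the Data Flow):

        -- Before the load: stop maintaining the nonclustered index.
        -- (Only disable nonclustered indexes; disabling the clustered index
        -- makes the table inaccessible until it is rebuilt.)
        ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable DISABLE;

        -- ... run the data flow / bulk load here ...

        -- After the load: rebuilding re-enables the index in a single pass.
        ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable REBUILD;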

    Read the article

  • SQL Server 2005 Import from Excel

    - by user327045
    I'd like to know what my best option would be to import data from an Excel file on a weekly or monthly basis. At first, I thought I would use SSIS, but after much struggle with seemingly simple tasks, I'm starting to rethink my plan. Would it be better/easier to just write the SQL by hand or use the services of an SSIS package? The basic process will be as follows:

    1. A separate process downloads an .xls file to a local fileshare. The xls file will have a filename like 'myfilename MON YY'.
    2. I will need to read the month and year from the filename, reformat it to a SQL date, and then query a DimDate table to find the corresponding date key.
    3. For each row (after the first 2 header rows), insert the data with the date key, unless the row is a total row, in which case ignore it.

    Here are some of the issues I've been encountering with SSIS: I can parse the date string from a flat file data source, but can't seem to do it with an Excel data source. Also, once parsed, I cannot seem to convert the string to a date in order to perform the lookup for the date key. For example, I want to do something like this:

    select DateKey from DimDate where ActualDate = convert(datetime, '01-' + 'JAN-10', 120)

    but I don't think it is possible to use the 'convert' or 'datetime' keywords in an expression builder. I have also been unable to find where I can edit the SQL to ignore the first 2 rows of data. I'm very skeptical of using SSIS because it seems like a kludgy way of doing something that can probably be accomplished more efficiently by writing the SQL yourself, but I may be forced to use SSIS. Thoughts?
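    As a rough illustration, here is one way the date-key lookup could be written in plain T-SQL once the filename is known (the DimDate table and DateKey column are from the question; the filename value and the style-6 parsing are illustrative assumptions):

        DECLARE @FileName varchar(50)
        DECLARE @ActualDate datetime
        SET @FileName = 'myfilename JAN 10'
        -- CONVERT style 6 parses 'dd mon yy', so prepend a day
        -- to the 'MON YY' suffix taken from the filename
        SET @ActualDate = CONVERT(datetime, '01 ' + RIGHT(@FileName, 6), 6)
        SELECT DateKey FROM DimDate WHERE ActualDate = @ActualDate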

    Read the article

  • How does anti-virus on the host machine affect the performance of virtual machines?

    - by Ladislav Mrnka
    I'm diagnosing an issue with Oracle VirtualBox where a virtual machine sometimes performs terribly slowly (much slower than a notebook with a worse configuration):

    Notebook: i7 (2 cores with HT = 4 logical CPUs), 4 GB RAM, 5400 rpm disk, Win 7 64-bit

    Virtual machine (Oracle VirtualBox):
    - Host: i7 (4 cores with HT = 8 logical CPUs), 12 GB RAM, system runs from SSD, virtual machine from a 7200 rpm disk, Win 7 64-bit
    - Virtual machine: 4 cores assigned, 8 GB RAM assigned, Win 2008 R2 Enterprise (64-bit)
    - The virtual machine uses a bridge to a separate network interface (the machine has two)
    - VPN for network communication
    - No other virtual machine runs on the host
    - The host has ESET Smart Security installed
    - All software is updated to the latest version

    My question is whether anti-virus on the host machine can somehow affect the performance of the virtual machine, and if so, how can I turn it off for the virtual machine without turning off the anti-virus itself?

    Read the article

  • Testing performance from around the world - how do I get a linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket-based service where latency is paramount, and as such we have servers distributed across 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is and, importantly, what difference this makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors', which is easy enough using Amazon; however, Amazon only has 7 data centres. We would like to know, for example, how we perform in locations all over the world such as South Africa, Australia, China, Peru, etc. Does anyone know of a service whose global infrastructure we could piggyback on to run some scripts to test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt

    Read the article

  • Does NTFS performance degrade significantly in volumes larger than five or six TB?

    - by Josh Yeager
    One of my customers is planning to set up a new document store, which will probably grow by 1-2TB per year. One of my co-workers says that Windows performance is extremely bad if it has a single NTFS volume that is bigger than five or six TB. He thinks that we need to set up their system with multiple volumes so that no single volume will exceed that limit. Is this a real problem? Does Windows or NTFS slow down when the volume size reaches several terabytes? Or is it possible to create a single volume of 10 or more TB?

    Read the article

  • Is there a MySQL performance benchmark to measure the impact of utf8_unicode_ci versus utf8_general_ci?

    - by MiniQuark
    I read here and there that using the utf8_unicode_ci collation ensures better treatment of Unicode text (for example, it knows how to expand characters such as 'œ' into 'oe' for searching and ordering) compared to the default utf8_general_ci, which basically just strips diacritics. Unfortunately, both sources indicate that utf8_unicode_ci is slightly slower than utf8_general_ci. So my question is: what does "slightly slower" mean? Has anyone run benchmarks? Are we talking about a -0.01% performance impact or rather something like -25%? Thanks for your help.

    Read the article

  • How can I get a windows server to collate and email 4 daily reports/graphs for server performance

    - by Glyn Darkin
    I run a Windows 2008 web server and would like to set up the most basic performance monitoring in the world. What I would like is:

    - a plot of ASP.NET request time for each w3wp process over a 24-hour period
    - a plot of CPU % utilisation for each w3wp process over a 24-hour period
    - a plot of memory utilisation for each w3wp process over a 24-hour period
    - a plot of network utilisation for each w3wp process over a 24-hour period
    - a plot of disk utilisation for each w3wp process over a 24-hour period

    I would like these plots to be emailed to me each morning. Does anybody know the simplest way to set this up? Thanks for your help in advance. Glyn

    Read the article

  • Performance problem with an Intel Wireless 5100

    - by Piet
    I just installed 12.04 on an HP EliteBook 2530p. The performance of the wireless connection to an Eminent 4551 is very bad; the wired connection just works great. If I connect to my router (Eminent 4551) wired, I get good performance accessing internet pages with Firefox. When I try to access the same internet pages wirelessly, I get really bad performance.

    piet@piet-HP-EliteBook-2530p:~$ lspci
    00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
    00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
    00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
    00:03.0 Communication controller: Intel Corporation Mobile 4 Series Chipset MEI Controller (rev 07)
    00:03.2 IDE interface: Intel Corporation Mobile 4 Series Chipset PT IDER Controller (rev 07)
    00:03.3 Serial controller: Intel Corporation Mobile 4 Series Chipset AMT SOL Redirection (rev 07)
    00:19.0 Ethernet controller: Intel Corporation 82567LM Gigabit Network Connection (rev 03)
    00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
    00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
    00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
    00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
    00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
    00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
    00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03)
    00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03)
    00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
    00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
    00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
    00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
    00:1f.0 ISA bridge: Intel Corporation ICH9M-E LPC Interface Controller (rev 03)
    00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03)
    02:00.0 Network controller: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection
    44:06.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 05)
    44:06.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 22)
    piet@piet-HP-EliteBook-2530p:~$

    Read the article

  • SQL SERVER – SQL Server Performance: Indexing Basics – SQL in Sixty Seconds #006 – Video

    - by pinaldave
    A DBA’s role is critical because a production environment has to run 24×7; maintenance, troubleshooting, and quick resolutions are the need of the hour. The first baby step in any performance tuning exercise in SQL Server involves creating, analyzing, and maintaining indexes. Though we have learned indexing concepts since our college days, indexing implementation inside SQL Server can vary. Understanding this behavior and designing our applications appropriately will make sure the application performs to its highest potential. Vinod Kumar and I have often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexes; however, there is a minimum level of expertise one should gain if performance is a concern. More on Indexes: SQL Index SQL Performance. I encourage you to submit your ideas for SQL in Sixty Seconds; we will try to accommodate as many as we can. Here is my interview with Vinod Kumar. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs every last drop of performance it can get, others not so much. We're in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was “local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess”. What that means is you should be avoiding code like this in your stored procs, even though it seems such an intuitively good idea.

    DECLARE @StartDate datetime
    SET @StartDate = '20091125'

    SELECT * FROM Orders WHERE OrderDate = @StartDate

    Instead, you should be referencing the value directly in the WHERE clause so the Query Optimizer can create a better execution plan.

    SELECT * FROM Orders WHERE OrderDate = '20091125'

    My first thought about this one was that we reference variables in the form of passed-in parameters in WHERE clauses in many of our stored procs. Not to worry, though, because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor's session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
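    As a hedged sketch of the parameter case described in the last paragraph (hypothetical procedure name; the point is only that a parameter, unlike a local variable, is visible to the Query Optimizer at compile time):

        CREATE PROCEDURE dbo.GetOrdersByDate
            @StartDate datetime
        AS
        BEGIN
            -- @StartDate is a parameter, so its value can be sniffed when
            -- the plan is compiled, unlike a local variable set in the body.
            SELECT * FROM Orders WHERE OrderDate = @StartDate
        END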

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming, in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well-suited for writing robust concurrent software: functional programming languages, e.g. Haskell, Scala, etc., and the actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages. My questions:

    - What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) with the above languages / models? Is this area still being explored, or are there well-established practices already?
    - Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices?
    - How does the performance of highly concurrent software compare to the performance of more traditional software when executed on multicore processors? For example, has anyone implemented a desktop application using C++ / Theron or Java / Akka? Was there a boost in performance on a multicore processor due to higher parallelism?

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >