Search Results

Search found 13151 results on 527 pages for 'performance counters'.


  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010), with two backends implemented: MSSQL and MySQL. When we use MSSQL (with the Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (a VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

        CRecordset r(&m_db);
        r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref");
        r.MoveFirst();
        while (!r.IsEOF())
        {
            // Retrieve
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }

    With the MySQL driver, everything runs as it should: the query returns and everything is lightning fast. With the MSSQL Native Client, however, things slow down, because the driver communicates with the server on every MoveNext(). I suspect server-side cursors, but I didn't find a way to disable them. I have tried

        ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    but this didn't help either; SQL Profiler still shows long-running executions of sp_cursorfetch() et al. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time too (even if there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query itself doesn't take that long: executing the same SQL via SQL Studio over the same connection returns in a reasonable time. Question 1: Is it possible to get the Native Client to "cache" all results locally and use client-side cursors, in a similar fashion to what the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk to the server again until the next query. We don't care about recordset updates, deletes, or any of that nonsense; we only want to retrieve data. We take the recordset, get all the data, and delete it. Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?

    Read the article

  • Java: Calculate distance between a large number of locations and performance

    - by Ally
    I'm creating an application that will tell a user how far away a large number of points are from their current position. Each point has a longitude and latitude. I've read over this article http://www.movable-type.co.uk/scripts/latlong.html and seen this post http://stackoverflow.com/questions/837872/calculate-distance-in-meters-when-you-know-longitude-and-latitude-in-java There are a number of calculations (50-200) that need to be carried out. If speed is more important than the accuracy of these calculations, which one is best?
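
    When rough distances are acceptable, one common choice is the equirectangular approximation, which replaces the haversine trigonometry with a single cosine per pair. A minimal sketch follows; the class and method names are illustrative, not from the post:

        // Equirectangular approximation: far cheaper than haversine and
        // accurate enough for ranking nearby points over short distances.
        public final class FastDistance {

            private static final double EARTH_RADIUS_M = 6371000.0;

            /** Approximate distance in meters between two points given in degrees. */
            public static double approxMeters(double lat1, double lon1,
                                              double lat2, double lon2) {
                double dLat = Math.toRadians(lat2 - lat1);
                double dLon = Math.toRadians(lon2 - lon1);
                // Scale the longitude difference by the cosine of the mid-latitude.
                double x = dLon * Math.cos(Math.toRadians((lat1 + lat2) / 2.0));
                return EARTH_RADIUS_M * Math.sqrt(x * x + dLat * dLat);
            }
        }

    If the points only need to be ordered by distance, comparing the squared quantity (x * x + dLat * dLat) avoids the Math.sqrt call entirely.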

    Read the article

  • Performance problem loading lots of user controls

    - by codymanix
    My application loads a bunch of instances of the same user control into a ScrollPanel. The problem is that this is very slow. The profiler shows that the method Application.LoadComponent(), which is called internally by the designer-generated code in the constructor of my user control, is the bottleneck. The documentation of this method says that it loads XAML files. I always thought the compiler compiles XAML to BAML and embeds it into the assembly. So the question is: how can I use BAML instead of XAML? Is there another way to make loading my user controls faster?

    Read the article

  • FOR loop performance in Javascript

    - by AndrewMcLagan
    My research leads me to believe that for loops are the fastest iteration construct in the JavaScript language. I was also thinking that declaring a length value for the for loop's condition would be faster... to make it clearer, which of the following do you think would be faster?

    Example ONE:

        for (var i = 0; i < myLargeArray.length; i++) {
            console.log(myLargeArray[i]);
        }

    Example TWO:

        var count = myLargeArray.length;
        for (var i = 0; i < count; i++) {
            console.log(myLargeArray[i]);
        }

    My logic is that accessing the length of myLargeArray on each iteration in example one is more computationally expensive than accessing a simple integer value as in example two.

    Read the article

  • SQL Compact performance on device

    - by Ben M
    My SQL Compact database is very simple, with just three tables and a single index on one of the tables (the table with 200k rows; the other two have less than a hundred each). The first time the .sdf file is used by my Compact Framework application on the target Windows Mobile device, the system hangs for well over a minute while "something" is done to the database: when deployed, the DB is 17 megabytes, and after this first usage, it balloons to 24 megs. All subsequent usage is pretty fast, so I'm assuming there's some sort of initialization / index building going on during this first usage. I'd rather not subject the user to this delay, so I'm wondering what this initialization process is and whether it can be performed before deployment. For now, I've copied the "initialized" database back to my desktop for use in the setup project, but I'd really like to have a better answer / solution. I've tried "full compact / repair" in the VS Database Properties dialog, but this made no difference. Any ideas? For the record, I should add that the database is only read from by the device application -- no modifications are made by that code.

    Read the article

  • Periodic GPU performance problem

    - by Peter Lillevold
    Hi folks! I have a WinForms application that uses XNA to animate 3D models in a control. The app has been doing just fine for months, but recently I've started to experience periodic pauses in the animation. Setting out to investigate what is going on, I have established these facts:

    1. It (currently) happens on my machine only.
    2. Removing everything from my render loop does not improve the problem.

    In 2. I didn't actually remove everything: I limited my loop to setting the viewport on my GraphicsDevice and then calling GraphicsDevice.Present. Trying to dig further, I fired up PIX to capture some statistics. Screenshots of two PIX runs can be viewed here (Run6) and here (Run14). Run6 uses my original render loop and Run14 uses the bare-bones Present loop. PIX tells me that the GPU is periodically doing something, and I assume this is causing the pauses. What could be the cause of this? Or how do I go about finding out what the GPU is actually doing? Note: I'm using XNA 3.1 on a Windows 7 x64 dual-core machine with 8GB RAM. Note 2: I also posted this question on the XNA Creators forums here.

    Read the article

  • Performance: float to int cast and clipping result to range

    - by durandai
    I'm doing some audio processing with float. The result needs to be converted back to PCM samples, and I noticed that the cast from float to int is surprisingly expensive. What's furthermore frustrating is that I need to clip the result to the range of a short (-32768 to 32767). While I would normally instinctively assume that this could be ensured by simply casting float to short, this fails miserably in Java, since on the bytecode level it results in F2I followed by I2S. So instead of a simple

        int sample = (short) floatVal;

    I needed to resort to this ugly sequence:

        int sample = (int) floatVal;
        if (sample > 32767) {
            sample = 32767;
        } else if (sample < -32768) {
            sample = -32768;
        }

    Is there a faster way to do this? (About 6% of the total runtime seems to be spent on casting; while 6% may not seem like much at first glance, it's astounding when I consider that the processing part involves a good chunk of matrix multiplications and IDCTs.)
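
    One commonly suggested alternative is clamping with Math.min/Math.max, which HotSpot treats as intrinsics and typically compiles to branch-free conditional moves. This is a hedged sketch rather than a guaranteed win, so it should be measured against the if/else version; the class and method names are illustrative:

        // Clamp a float sample into 16-bit PCM range without explicit branches.
        public final class SampleClamp {

            /** Converts one float sample to a PCM value in [-32768, 32767]. */
            public static short toPcm(float floatVal) {
                int sample = (int) floatVal; // single F2I conversion
                // min/max on ints are JIT intrinsics; with input that rarely
                // clips, this avoids branch mispredictions in the hot loop.
                return (short) Math.max(-32768, Math.min(32767, sample));
            }
        }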

    Read the article

  • mySQL Inconsistent Performance

    - by Jon Hatfield
    Hi, I'm running a MySQL query that joins various tables of 500,000+ rows. Sometimes it takes a second, other times around 15 seconds! This is on my local machine. I have experienced similarly varied times before on other intensive queries; does anyone know why this is? Thanks

    Read the article

  • JDBC batch insert performance

    - by wo_shi_ni_ba_ba
    I need to insert a couple hundred million records into the MySQL DB. I'm batch inserting them 1 million at a time. Please see my code below. It seems to be slow. Is there any way to optimize it?

        try {
            // Disable auto-commit
            connection.setAutoCommit(false);
            // Create a prepared statement
            String sql = "INSERT INTO mytable (xxx) VALUES (?)";
            PreparedStatement pstmt = connection.prepareStatement(sql);
            Object[] vals = set.toArray();
            for (int i = 0; i
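
    The posted snippet is cut off mid-loop; a complete version might look like the sketch below. It assumes addBatch()/executeBatch() with a commit per million rows, which is an assumption, since the original loop body is missing:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        // Hypothetical completion of the posted batch-insert loop.
        public final class BatchInsert {

            private static final int BATCH_SIZE = 1000000;

            /** Inserts all values in batches, committing once per batch. */
            static void insertAll(Connection connection, Object[] vals) throws SQLException {
                connection.setAutoCommit(false);
                String sql = "INSERT INTO mytable (xxx) VALUES (?)";
                try (PreparedStatement pstmt = connection.prepareStatement(sql)) {
                    for (int i = 0; i < vals.length; i++) {
                        pstmt.setObject(1, vals[i]);
                        pstmt.addBatch();
                        if ((i + 1) % BATCH_SIZE == 0) {
                            pstmt.executeBatch();  // flush one batch to the server
                            connection.commit();
                        }
                    }
                    pstmt.executeBatch();          // flush any remainder
                    connection.commit();
                }
            }
        }

    With MySQL Connector/J, adding rewriteBatchedStatements=true to the JDBC URL lets the driver collapse each batch into multi-row INSERT statements, which usually helps far more than tuning the Java loop; for hundreds of millions of rows, LOAD DATA INFILE is typically faster still.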

    Read the article

  • Indexing/Performance strategies for vast amount of the same value

    - by DrColossos
    Base information: This is in the context of the indexing process of OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with value "W", "R", "N" (VARCHAR(1)). The table has somewhere around ~75M rows; the rows with "W" make up ~42M of them. Existing indexes are not relevant to this question. Now the question itself: the indexing of the data is done via a procedure. Inside this procedure, there are some loops that do the following:

        [...] SELECT * FROM table WHERE the_key = 'W'; [...]

    The results get looped over again, and the above query itself is also in a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ("W"). The script itself runs at a speed that is OK; only the SELECTing takes very long. Do I need to:

    1. create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one?
    2. tune some of the server parameters (they are already tuned, and the results they deliver seem to be good; if needed, I can post them)?
    3. live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)?

    Any alternatives to the above points (except rewriting it or not using it)?

    Read the article

  • Objective C iPhone performance issue

    - by Asad Khan
    Ok guys, I am developing an iPhone app. I have a Model class which follows the singleton design pattern. It has an NSArray which is initialized to around 1000 NSStrings in the init method. Now I need to use this data in some view controller, so I import Model.h, create an array of NSString objects in the view controller, and set the data to it. But now the problem is that I have 2000 NSStrings allocated, which I believe is not a good thing on the iPhone due to memory considerations. Releasing the model object won't help, because I've overridden the release method to release nothing, according to the pattern, and I cannot change the design now because a lot of code works on the assumption of the model being a singleton. And in the future the initial NSStrings may grow to 2000 or even more, and then I'll have 4000 NSStrings allocated at one time. I am a little confused about how to go about this. Any suggestions?

    Read the article

  • C99 variable length automatic array performance

    - by aaa
    Is there significant CPU/memory overhead associated with using automatic arrays with g++/Intel on a 64-bit x86 Linux platform?

        int function(int N) {
            double array[N];

    Specifically:

    - overhead compared to allocating the array beforehand (assuming the function is called multiple times)
    - overhead compared to using new
    - overhead compared to using malloc

    The range of N may be from 1 KB to 16 KB roughly; stack overrun is not a problem.

    Read the article

  • PropertyUtils performance

    - by mR_fr0g
    I have a problem where I need to walk through an object graph and pick out a particular property value. My original solution caches a linked list of property names that need to be applied in order to get from point A to point B in the object graph. I then use Apache Commons PropertyUtils to iterate through the linked list, calling getProperty(Object bean, String name) until I have reached point B. My question is about how this will perform compared to, say, caching the Method objects for each step. What is PropertyUtils doing under the bonnet? Is it doing a lot of reflection / heavy lifting?
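
    For comparison, caching the Method objects directly might look like the sketch below. It assumes plain JavaBean getters along the path; the class and method names are illustrative:

        import java.lang.reflect.Method;
        import java.util.ArrayList;
        import java.util.List;

        // Resolve each getter once up front, then replay the cached Methods.
        // This skips the per-call name parsing and descriptor lookup that a
        // string-based property walk has to repeat on every invocation.
        public final class CachedPropertyPath {

            private final List<Method> getters = new ArrayList<Method>();

            /** Binds a path such as ["address", "city"] to getter Methods. */
            public CachedPropertyPath(Class<?> root, List<String> path) throws Exception {
                Class<?> current = root;
                for (String name : path) {
                    String getter = "get" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
                    Method m = current.getMethod(getter);
                    getters.add(m);
                    current = m.getReturnType();
                }
            }

            /** Walks the graph from the root bean to the target value. */
            public Object get(Object bean) throws Exception {
                Object value = bean;
                for (Method m : getters) {
                    if (value == null) {
                        return null; // broken link in the graph
                    }
                    value = m.invoke(value);
                }
                return value;
            }
        }

    PropertyUtils does cache property descriptors internally, so the gap may be smaller than expected; profiling both against the real object graph is the only reliable answer.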

    Read the article

  • Strange performance behaviour

    - by plastilino
    I'm puzzled by this. On my machine:

        Direct calculation: 375 ms
        Method calculation: 3594 ms, about TEN times SLOWER

    If I place the method calculation BEFORE the direct calculation, both times are SIMILAR. Would you check it on your machine?

        class Test {
            static long COUNT = 50000 * 10000;
            private static long BEFORE;

            /*--------METHOD---------*/
            public static final double hypotenuse(double a, double b) {
                return Math.sqrt(a * a + b * b);
            }

            /*--------TIMER---------*/
            public static void getTime(String text) {
                if (BEFORE == 0) {
                    BEFORE = System.currentTimeMillis();
                    return;
                }
                long now = System.currentTimeMillis();
                long elapsed = (now - BEFORE);
                BEFORE = System.currentTimeMillis();
                if (text.equals("")) {
                    return;
                }
                String message = "\r\n" + text + "\r\n" + "Elapsed time: " + elapsed + " ms";
                System.out.println(message);
            }

            public static void main(String[] args) {
                double a = 0.2223221101;
                double b = 122333.167;
                getTime("");

                /*--------DIRECT CALCULATION---------*/
                for (int i = 1; i < COUNT; i++) {
                    Math.sqrt(a * a + b * b);
                }
                getTime("Direct: ");

                /*--------METHOD---------*/
                for (int k = 1; k < COUNT; k++) {
                    hypotenuse(a, b);
                }
                getTime("Method: ");
            }
        }
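
    The order dependence suggests a benchmarking artifact rather than a real method-call cost: whichever loop runs first spends time in the interpreter before the JIT compiles it, and HotSpot may also discard loops whose results are never used. Below is a crude, hedged sketch of a fairer comparison (a harness such as JMH would be the robust choice):

        // Warm both paths up so the JIT compiles them, and accumulate the
        // results so neither loop can be eliminated as dead code.
        public final class HypotBench {

            static double hypotenuse(double a, double b) {
                return Math.sqrt(a * a + b * b);
            }

            static double runDirect(long count, double a, double b) {
                double sum = 0;
                for (long i = 0; i < count; i++) {
                    sum += Math.sqrt(a * a + b * b);
                }
                return sum;
            }

            static double runMethod(long count, double a, double b) {
                double sum = 0;
                for (long i = 0; i < count; i++) {
                    sum += hypotenuse(a, b);
                }
                return sum;
            }

            public static void main(String[] args) {
                double a = 0.2223221101;
                double b = 122333.167;
                long count = 100000000L;
                runDirect(count, a, b); // warm-up round
                runMethod(count, a, b); // warm-up round
                long t0 = System.nanoTime();
                double s1 = runDirect(count, a, b);
                long t1 = System.nanoTime();
                double s2 = runMethod(count, a, b);
                long t2 = System.nanoTime();
                System.out.println("Direct: " + (t1 - t0) / 1000000 + " ms (sum " + s1 + ")");
                System.out.println("Method: " + (t2 - t1) / 1000000 + " ms (sum " + s2 + ")");
            }
        }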

    Read the article

  • C++ Performance/memory optimization guidelines

    - by ML
    Hi All, does anyone have a resource for C++ memory optimization guidelines? Best practices, tuning, etc.? As an example:

        class xxx {
        public:
            xxx();
            virtual ~xxx();
        protected:
        private:
        };

    Would there be ANY benefit, for the compiler or for memory allocation, in getting rid of the protected and private sections, given that no members are declared under them in this class?

    Read the article

  • HTML 5 performance on Firefox?

    - by asksuperuser
    I tried this sample here: http://9elements.com/io/projects/html5/canvas/ After a few minutes, Firefox slows down so much that I can't even open a menu. When I close the tab, Firefox returns to normal. So is HTML 5 really a good choice right now?

    Read the article

  • C++ application as a service with high performance

    - by sand
    I need to provide a C++ application as a service. The client and the service can be on the same machine or distributed across different machines, depending on the load. The application takes a ~2KB string as input and returns a string of almost the same size after some processing. Turnaround time for the client should be really quick. What is the best mechanism to implement this?

    Read the article

  • Sharepoint Web performance optimization

    - by hertzel
    We are running on SSL with the following server topology:

    - 1 ISA (SSL termination/cache/proxy + AD authentication)
    - 1 SharePoint
    - 1 IBM DB2 database as the enterprise/corporate DB
    - 1 MS SQL Server as the local DB

    We have recently optimized caching, compression, minification, and other ASP.NET best practices such as viewstate and cookie sizes, minimizing round trips, parallel connections/domain sharding, and a lot more. Now we are not convinced that we are in an optimized position, as the network resources, i.e. bandwidth and especially latency, are out of our control!! The client/browser-to-server/SharePoint path is trans-Atlantic, i.e. (ASIA, USA, EUROPE). As far as I understand, the only ways to improve the network (latency) are:

    - TCP/SSL optimization - hardware/software?
    - CDNs - cloud or our own?

    Your opinion and insights would be much appreciated. Best regards, Hertzel

    Read the article

  • Improve disk read performance (multiple files) with threading

    - by pablo
    I need to find a method to read a large number of small files (about 300k files) as fast as possible. Reading them sequentially, using FileStream and reading each entire file in a single call, takes between 170 and 208 seconds (you know: you re-run, the disk cache plays its role, and times vary). Then I tried PInvoke with CreateFile/ReadFile using FILE_FLAG_SEQUENTIAL_SCAN, but I didn't see any change. I tried several threads (divide the big set into chunks and have every thread read its part), and this way I was able to improve speed just a little bit (not even 5% with every new thread, up to 4). Any ideas on how to find the most effective way to do this?

    Read the article

  • Performance of Serialized Objects in C++

    - by jm1234567890
    Hi Everyone, I'm wondering if there is a fast way to dump an STL set to disk and then read it back later. The internal structure of a set is a binary tree, so if I serialize it naively, when I read it back the program will have to go through the process of inserting each element again. I think this is slow even if it is read back in the correct order; correct me if I am wrong. Is there a way to "dump" the memory containing the set to disk and then read it back later? That is, keep everything in binary format, thus avoiding the re-insertion. Do the Boost serialization tools do this? Thanks! EDIT: oh, I should probably read http://www.parashift.com/c++-faq-lite/serialization.html I will read it now... no, it doesn't really help.

    Read the article

  • postgres min function performance

    - by wutzebaer
    Hi, I need the lowest value of runnerId. This query:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794';

    takes 80 ms (1968 result rows). This one:

        SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794';

    takes 1600 ms. Is there a faster way to find the minimum, or should I calculate the min in my Java program?

        "Result  (cost=100.88..100.89 rows=1 width=0)"
        "  InitPlan 1 (returns $0)"
        "    ->  Limit  (cost=0.00..100.88 rows=1 width=9)"
        "          ->  Index Scan using runneridindex on betlog  (cost=0.00..410066.33 rows=4065 width=9)"
        "                Index Cond: ("runnerId" IS NOT NULL)"
        "                Filter: ("marketId" = 107416794::bigint)"

        CREATE INDEX marketidindex ON betlog USING btree ("marketId" COLLATE pg_catalog."default");

    Another idea:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" LIMIT 1;

    takes >1600 ms, while

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId";

    takes >>100 ms. How can a LIMIT slow the query down?

    Read the article
