Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

Page 177 of 537

  • JavaScript: Is there a better way to retain your array but efficiently concat or replace items?

    - by Michael Mikowski
    I am looking for the best way to replace or add to elements of an array without deleting the original reference. Here is the set up:

        var a = [], b = [], c, i, obj = {};
        for ( i = 0; i < 100000; i++ ) {
          a[ i ] = i;
          b[ i ] = 10000 - i;
        }
        obj.data_list = a;

    Now we want to concatenate b INTO a without changing the reference to a, since it is used in obj.data_list. Here is one method:

        for ( i = 0; i < b.length; i++ ) {
          a.push( b[ i ] );
        }

    This seems to be a somewhat terser and, on V8, roughly 8x faster method:

        a.splice.apply( a, [ a.length, 0 ].concat( b ) );

    I have found this useful when iterating over an array "in place", where I don't want to touch the elements as I go (a good practice). I start a new array (let's call it keep_list) with the initial splice arguments and then add the elements I wish to retain. Finally I use this apply method to quickly replace the truncated array:

        var keep_list = [ 0, 0 ];
        for ( i = 0; i < a.length; i++ ) {
          if ( some_condition ) {
            keep_list.push( a[ i ] );
          }
        }
        // truncate array
        a.length = 0;
        // and replace contents
        a.splice.apply( a, keep_list );

    There are a few problems with this solution: there is a max call stack size limit of around 50k arguments when using apply on V8 (I have not tested other JS engines yet), and the solution is a bit cryptic. Has anyone found a better way?

    Read the article

  • PHP : flatten array - fastest way?

    - by Industrial
    Is there any fast way to flatten an array and select subkeys ('key' & 'value' in this case) without running a foreach loop, or is the foreach always the fastest way?

        Array (
            [0] => Array ( [key] => string [value] => a simple string [cas] => 0 )
            [1] => Array ( [key] => int    [value] => 99              [cas] => 0 )
            [2] => Array ( [key] => array  [value] => Array ( [0] => 11 [1] => 12 ) [cas] => 0 )
        )

    To:

        Array (
            [int]    => 99
            [string] => a simple string
            [array]  => Array ( [0] => 11 [1] => 12 )
        )

    Read the article

  • How do I do very fast inserts to SQL Server 2008?

    - by CharlesO
    I have a project that involves recording data from a device directly into a SQL table. I do very little processing in code before writing to SQL Server (2008 Express, by the way). Typically I use the SqlHelper class's ExecuteNonQuery method and pass in a stored proc name and a list of parameters that the SP expects. This is very convenient, but I need a much faster way of doing this. Thanks.
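
    One commonly suggested direction for SQL Server 2008 is to batch many rows into a single call using a table-valued parameter instead of one stored-proc call per row. A minimal T-SQL sketch; the type, table, and column names here are hypothetical stand-ins for the real schema:

        -- Hypothetical table type describing one device reading
        CREATE TYPE dbo.DeviceReadingType AS TABLE
        (
            DeviceId int            NOT NULL,
            ReadAt   datetime       NOT NULL,
            Value    decimal(18, 4) NOT NULL
        );
        GO

        -- Proc that inserts a whole batch in one statement
        CREATE PROCEDURE dbo.InsertDeviceReadings
            @Readings dbo.DeviceReadingType READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.DeviceReading (DeviceId, ReadAt, Value)
            SELECT DeviceId, ReadAt, Value
            FROM @Readings;
        END
        GO

    The client then accumulates rows (a few hundred at a time, say) and passes them as a single structured parameter, which usually cuts per-call overhead dramatically. For pure bulk loads, SqlBulkCopy is another common route.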

    Read the article

  • Update table with index is too slow

    - by pauloya
    Hi, I was watching the Profiler on a live system of our application and saw that an update statement we run periodically (every second) was quite slow, taking around 400ms every time. The query includes this update (which is the slow part):

        UPDATE BufferTable
        SET LrbCount = LrbCount + 1, LrbUpdated = getdate()
        WHERE LrbId = @LrbId

    This is the table:

        CREATE TABLE BufferTable(
            LrbId [bigint] IDENTITY(1,1) NOT NULL,
            ...
            LrbInserted [datetime] NOT NULL,
            LrbProcessed [bit] NOT NULL,
            LrbUpdated [datetime] NOT NULL,
            LrbCount [tinyint] NOT NULL,
        )

    The table has 2 indexes (non-unique and non-clustered) with the fields in this order:

        Index1 - (LrbProcessed, LrbCount)
        Index2 - (LrbInserted, LrbCount, LrbProcessed)

    When I looked at this I thought the problem would come from Index1, since LrbCount changes a lot and that changes the order of the data in the index. But after deactivating Index1 I saw the query was taking the same time as initially. Then I rebuilt Index1 and deactivated Index2; this time the query was very fast. It seems to me that Index2 should be faster to update: the order of the data shouldn't change, since the LrbInserted time is not changed. Can someone explain why Index2 is much heavier to update than Index1? Thank you!
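
    A diagnostic sketch for this kind of question, assuming SQL Server 2005 or later and VIEW SERVER STATE permission: sys.dm_db_index_operational_stats shows, per index, how much leaf-level update work and how much latch/lock waiting each index is accumulating, which makes it easier to see which of the two indexes the UPDATE is really paying for:

        SELECT  i.name AS index_name,
                s.leaf_update_count,
                s.leaf_insert_count,
                s.leaf_delete_count,
                s.page_io_latch_wait_in_ms,
                s.row_lock_wait_in_ms
        FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('BufferTable'), NULL, NULL) AS s
        JOIN sys.indexes AS i
          ON i.object_id = s.object_id
         AND i.index_id  = s.index_id
        ORDER BY s.page_io_latch_wait_in_ms DESC;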

    Read the article

  • MySQL Insert Query Randomly Takes a Long Time

    - by ShimmerTroll
    I am using MySQL to manage session data for my PHP application. When testing the app, it is usually very quick and responsive. However, seemingly randomly the response will stall before finally completing after a few seconds. I have narrowed the problem down to the session write query, which looks something like this:

        INSERT INTO Session VALUES('lvg0p9peb1vd55tue9nvh460a7', '1275704013', '')
        ON DUPLICATE KEY UPDATE sessAccess='1275704013', sessData='';

    The slow query log has this information:

        Query_time: 0.524446  Lock_time: 0.000046  Rows_sent: 0  Rows_examined: 0

    This happens about 1 out of every 10 times. The query usually only takes ~0.0044 sec. The table is InnoDB with about 60 rows. sessId is the primary key with a BTREE index. Since this is accessed on every page view, it is clearly not an acceptable execution time. Why is this happening?

    Update: Table schema is:

        sessId: varchar(32), sessAccess: int(10), sessData: text
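
    One thing worth ruling out before blaming the query itself: with InnoDB, each commit normally forces a log flush to disk, and intermittent fsync stalls produce exactly this pattern of occasional half-second writes on an otherwise tiny table. A diagnostic sketch using standard MySQL variables (relaxing durability is a trade-off, not a definitive fix):

        -- Check the current durability settings
        SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
        SHOW VARIABLES LIKE 'sync_binlog';

        -- Flushing the log roughly once per second instead of on every commit
        -- smooths write latency, at the cost of possibly losing the last
        -- second of commits after a crash:
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;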

    Read the article

  • VS2008 is very slow on a specific large C++ solution

    - by VioletRose
    I have a solution with 21 C++ projects and 1 VB.NET project. The IDE responds very slowly when I simply move the caret in a file or try to open the menu. The process seems to take 50% of the CPU for each movement. It only happens with this solution and only on my machine. The solution has a total of 2380 source and header files, of which 1280 are header files. I tried removing all connections to source control (Perforce) but it didn't help. Also, I have Visual Assist installed, but even after removing it (uninstall), the same behavior continued. Any idea?

    Read the article

  • SQL query is too slow, how to improve speed?

    - by user1289282
    I have run into a bottleneck when trying to update one of my tables. The player table has, among other things, id, skill, school, and weight. What I am trying to do is:

        SELECT id, skill FROM player
        WHERE player.school = (current school of 4500)
          AND player.weight = (current weight of 14)

    to find the highest skill of all players returned from the query, then

        UPDATE player SET starter = 'TRUE' WHERE id = (highest skill)

    then move to the next weight and repeat; when all weights have been completed, move to the next school and start over; when all schools are completed, done. I have this code implemented and it works, but I have approximately 4500 schools totaling 172,000 players, and the way I have it now it would probably take half an hour or more to complete (I did not wait it out), which is way too slow. How can I speed this up? Short of reducing the scale of the system, I am willing to do anything that gets the intended result. Thanks!

    *The weights are the standard folkstyle wrestling weights, i.e. 103, 113, 120, 126, 132, 138, 145, 152, 160, 170, 182, 195, 220, 285 pounds.
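
    The usual way out of the 4500 x 14 loop is a single set-based statement. A sketch of the same logic, assuming a dialect with window functions (e.g. SQL Server 2005+ or PostgreSQL), the column names described above, and arbitrary tie-breaking when two players share the top skill:

        -- Rank players within each (school, weight) group, highest skill first
        WITH ranked AS (
            SELECT id,
                   ROW_NUMBER() OVER (PARTITION BY school, weight
                                      ORDER BY skill DESC) AS rn
            FROM player
        )
        UPDATE player
        SET starter = 'TRUE'
        WHERE id IN (SELECT id FROM ranked WHERE rn = 1);

    An index on (school, weight, skill) would also let each group be ranked without repeatedly scanning the whole table.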

    Read the article

  • Django: IE doesn't load localhost or loads very SLOWLY

    - by reedvoid
    I'm just starting to learn Django, building a project on my computer running Windows 7 64-bit, Python 2.7, Django 1.3. Basically whatever I write loads in Chrome and Firefox instantly, but IE (version 9) just stalls there and does nothing. I can load up "http://127.0.0.1:8000" in IE and leave the computer on for hours and it doesn't load. Sometimes, when I refresh a couple of times or restart IE, it'll work. If I change something in the code, again, Chrome and Firefox reflect the changes instantly, whereas IE doesn't - if it loads the page at all. What is going on? I'm losing my mind here....

    Read the article

  • Will using FILE_FLAG_NO_BUFFERING give a noticeable speed gain?

    - by 9dan
    I recently noticed the detailed description of the FILE_FLAG_NO_BUFFERING flag in MSDN, and read several Google search results about unbuffered I/O in Windows. http://msdn.microsoft.com/en-us/library/aa363858(v=vs.85).aspx I am wondering now, is it really important to consider the unbuffered option in file I/O programming? Because many programs use plain old C stream I/O or C++ iostreams, I never gave any attention to the FILE_FLAG_NO_BUFFERING flag before. Let's say we are developing a photo explorer program like Picasa. If we implement unbuffered I/O, would the thumbnail display speed show a noticeable difference for ordinary users?

    Read the article

  • Does beginTransaction in Hibernate allocate a new DB connection?

    - by illscience
    Hi folks - just wondering if beginning a new transaction in Hibernate actually allocates a connection to the DB? I'm concerned because our server begins a new transaction for each request received, even if that request doesn't interact with the DB. We're seeing DB connections as a major bottleneck, so I'm wondering if I should take the time to narrow the scope of my transactions. I've searched everywhere and haven't been able to find a good answer. The very simple code is here:

        SessionFactory sessionFactory =
            (SessionFactory) Context.getContext().getBean("sessionFactory");
        sessionFactory.getCurrentSession().beginTransaction();
        sessionFactory.getCurrentSession().setFlushMode(FlushMode.AUTO);

    Thanks very much! a

    Read the article

  • Nested loop traversing arrays

    - by alecco
    There are 2 very big series of elements, the second 100 times bigger than the first. For each element of the first series, there are 0 or more elements in the second series. This can be traversed and processed with 2 nested loops, but the unpredictability of the number of matching elements for each member of the first array makes things very, very slow. The actual processing of the 2nd series of elements involves a logical AND (&) and a population count. I couldn't find good optimizations using C, but I am considering doing inline asm, doing rep* mov* or similar for each element of the first series and then doing batch processing of the matching bytes of the second series, perhaps in buffers of 1MB or something. But the code would get quite messy. Does anybody know of a better way? C preferred but x86 ASM OK too. Many thanks! Sample/demo code with a simplified problem; the first series are "people" and the second series are "events", for clarity's sake. (The original problem actually has 100m and 10,000m entries!)

        #include <stdio.h>
        #include <stdint.h>

        #define PEOPLE 1000000 // 1m
        struct Person {
            uint8_t age;  // Filtering condition
            uint8_t cnt;  // Number of events for this person in E
        } P[PEOPLE];      // Each has 0 or more bytes with bit flags

        #define EVENTS 100000000 // 100m
        uint8_t P1[EVENTS]; // Property 1 flags
        uint8_t P2[EVENTS]; // Property 2 flags

        void init_arrays() {
            for (int i = 0; i < PEOPLE; i++) { // just some stuff
                P[i].age = i & 0x07;
                P[i].cnt = i % 220; // assert( sum < EVENTS );
            }
            for (int i = 0; i < EVENTS; i++) {
                P1[i] = i % 7; // just some stuff
                P2[i] = i % 9; // just some other stuff
            }
        }

        int main(int argc, char *argv[]) {
            uint64_t sum = 0, fcur = 0;
            int age_filter = 7; // just some

            init_arrays(); // Init P, P1, P2

            for (int64_t p = 0; p < PEOPLE; p++)
                if (P[p].age < age_filter)
                    for (int64_t e = 0; e < P[p].cnt; e++, fcur++)
                        sum += __builtin_popcount( P1[fcur] & P2[fcur] );
                else
                    fcur += P[p].cnt; // skip this person's events

            printf("(dummy %ld %ld)\n", sum, fcur);
            return 0;
        }

        gcc -O5 -march=native -std=c99 test.c -o test

    Read the article

  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year columns (e.g., all records from March 2010), and so we frequently need to get the available month and year combinations (i.e., do we have records for April 2010?). A coworker thinks that we should have a separate table from our main one that only contains the months and years we have data for. We only add records to our main table once a month, so it would just be a small update at the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries in the first table. This solution feels kludgy to me and a violation of DRY. What do you think is the correct way of solving this problem? Is there a better way than having two tables?
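
    A minimal sketch of the single-table alternative, in generic SQL with hypothetical table/column names (main_table, year, month): with a composite index on the two columns, the DISTINCT can usually be answered from the index alone rather than from a scan of all 250k+ rows.

        -- Narrow composite index covering the lookup
        CREATE INDEX ix_main_table_year_month ON main_table (year, month);

        -- Available month/year combinations
        SELECT DISTINCT year, month
        FROM main_table
        ORDER BY year, month;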

    Read the article

  • How to explain to a developer that adding extra if - else if conditions is not a good way to "improv

    - by Lilit
    Recently I've bumped into the following C++ code:

        if (a) {
            f();
        } else if (b) {
            f();
        } else if (c) {
            f();
        }

    where a, b and c are all different conditions, and they are not very short. I tried to change the code to:

        if (a || b || c) {
            f();
        }

    But the author opposed, saying that my change will decrease the readability of the code. I had two arguments: 1) You should not increase readability by replacing one branching statement with three (though I really doubt that it's possible to make code more readable by using else if instead of ||). 2) It's not the fastest code, and no compiler will optimize this. But my arguments did not convince him. What would you tell a programmer writing such code? Do you think a complex condition is an excuse for using else if instead of OR?

    Read the article

  • SQL Server - Missing Indexes - What would use the index?

    - by BankZ
    I am using SQL Server 2008 and we are using the DMVs to find missing indexes. However, before I create a new index I am trying to figure out what proc/query is wanting that index. I want the most information I can get so I can make an informed decision on my indexes. Sometimes the indexes SQL Server wants do not make sense to me. Does anyone know how I can figure out what wants it?
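
    One approach people commonly use to tie the missing-index DMVs back to actual statements is to search the plan cache for plans that carry a MissingIndexes element. A sketch, assuming SQL Server 2005+; shredding every cached plan's XML can be expensive on a busy server, so run it during a quiet window:

        SELECT TOP (20)
               qs.execution_count,
               st.text       AS query_text,
               qp.query_plan
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        WHERE qp.query_plan.exist(
              'declare namespace p="http://schemas.microsoft.com/sqlserver/2004/07/showplan";
               //p:MissingIndexes') = 1
        ORDER BY qs.total_worker_time DESC;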

    Read the article

  • Start all tab's activities for pre-cache

    - by Pentium10
    I have a TabActivity with three tabs defined. The first tab is lightweight and renders in acceptable time, but the 2nd and 3rd tabs need a couple of seconds to get visually rendered after I click them. I would like to launch them in the background for pre-caching after I've loaded my first tab. Once they are loaded, I can switch quickly between them. So I am wondering how I can launch the 2nd and 3rd tabs. They are intents loaded in the view area.

    Read the article

  • fast load big object graph from DB

    - by Famos
    Hi, I have my own data structure written in C#:

        public class ElectricScheme {
            public List<Element> Elements { get; set; }
            public List<Net> Nets { get; set; }
        }

        public class Element {
            public string IdName { get; set; }
            public string Func { get; set; }
            public string Name { get; set; }
            public BaseElementType Type { get; set; }
            public List<Pin> Pins { get; set; }
        }

        public class Pin {
            public string IdName { get; set; }
            public string Name { get; set; }
            public BasePinType PinType { get; set; }
            public BasePinDirection PinDirection { get; set; }
        }

        public class Net {
            public string IdName { get; set; }
            public string Name { get; set; }
            public List<Tuple<Element,Pin>> ConnectionPoints { get; set; }
        }

    where the Elements count is ~19000 (each element contains =3 Pins) and the Nets count is ~20000 (each net contains =3 (Element, Pin) pairs). Parsing the txt file (~17 MB) takes 5 minutes. Serialization/deserialization with the default serializer takes ~3 minutes. Loading from the DB takes 20 minutes and still hasn't loaded... I use Entity Framework like this:

        public ElectricScheme LoadScheme(int schemeId) {
            var eScheme = (from s in container.ElectricSchemesSet
                           where s.IdElectricScheme.Equals(schemeId)
                           select s).FirstOrDefault();
            if (eScheme == null) return null;

            container.LoadProperty(eScheme, "Elements");
            container.LoadProperty(eScheme, "Nets");
            container.LoadProperty(eScheme, "Elements.Pins");

            return eScheme;
        }

    The problem is the dependencies between Element and Pin... (for ~19000 elements, ~95000 pins) Any ideas?

    Read the article

  • How much faster is C++ than C#?

    - by Trap
    Or is it now the other way around? From what I've heard, there are some areas in which C# proves to be faster than C++, but I've never had the guts to test it myself. I thought some of you could explain these differences in detail or point me to the right place for information on this.

    Read the article

  • SQL Server 15MM rows, simple COUNT query. 15+ seconds?

    - by john
    We took over a website from another company after a client decided to switch. We have a table that grows by about 25k records a day and is currently at 15MM records. The table looks something like:

        id         (PK, int, not null)
        member_id  (int, not null)
        another_id (int, not null)
        date       (datetime, not null)

    SELECT COUNT(id) FROM tbl can take up to 15 seconds. A simple inner join on 'another_id' takes over 30 seconds. I can't imagine why this is taking so long. Any advice? SQL Server 2005 Express.
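
    If an exact, transactionally consistent count isn't strictly required, one common workaround is to read the row count from metadata instead of scanning 15MM rows. A sketch for SQL Server 2005+ ('tbl' stands in for the real table name):

        -- Fast row count from partition metadata (index_id 0 = heap, 1 = clustered index)
        SELECT SUM(ps.row_count) AS row_count
        FROM sys.dm_db_partition_stats AS ps
        WHERE ps.object_id = OBJECT_ID('tbl')
          AND ps.index_id IN (0, 1);

    For the exact COUNT and the join on another_id, a narrow nonclustered index (e.g. on another_id) gives the engine far fewer pages to read than the full clustered index or heap.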

    Read the article

  • Why is Python so slow?

    - by Riemannliness
    Why is Python such a slow language, on average, compared to C/C++? I learned Python as my first programming language, but I've only just started with C and already I can feel and see the difference.

    Read the article

  • Delete from empty table taking forever

    - by Will
    Hello, I have an empty table that previously had a large number of rows. The table has about 10 columns and indexes on many of them, as well as indexes on multiple columns.

        DELETE FROM item WHERE 1=1

    takes approximately 40 seconds to complete.

        SELECT * FROM item

    takes 4 seconds. The execution plan of SELECT * FROM ITEM shows the following:

        SQL> select * from midas_item;

        no rows selected

        Elapsed: 00:00:04.29

        Execution Plan
        ----------------------------------------------------------
           0      SELECT STATEMENT Optimizer=CHOOSE (Cost=19 Card=123 Bytes=7380)
           1    0   TABLE ACCESS (FULL) OF 'MIDAS_ITEM' (Cost=19 Card=123 Bytes=7380)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
               5263  consistent gets
               5252  physical reads
                  0  redo size
               1030  bytes sent via SQL*Net to client
                372  bytes received via SQL*Net from client
                  1  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  0  rows processed

    Any idea why these would be taking so long, and how to fix it, would be greatly appreciated!!
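
    Those 5000+ block reads for an empty table are the classic symptom of a full scan still walking every block below the table's old high-water mark. If the rows really are gone for good (and nothing depends on row-by-row deletes), a common fix is to truncate instead of delete; a sketch, assuming Oracle:

        -- Resets the high-water mark and deallocates the table's blocks,
        -- so subsequent full scans no longer read thousands of empty blocks.
        TRUNCATE TABLE midas_item;

        -- Variant that keeps the allocated space for future inserts:
        -- TRUNCATE TABLE midas_item REUSE STORAGE;

    Note that TRUNCATE is DDL: it commits and cannot be rolled back, unlike DELETE.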

    Read the article

  • SQL Server - how to determine if indexes aren't being used?

    - by rwmnau
    I have a high-demand transactional database that I think is over-indexed. Originally, it didn't have any indexes at all, so adding some for common processes made a huge difference. However, over time, we've created indexes to speed up individual queries, and some of the most popular tables have 10-15 different indexes on them, and in some cases, the indexes are only slightly different from each other, or are the same columns in a different order. Is there a straightforward way to watch database activity and tell if any indexes are not hit anymore, or what their usage percentage is? I'm concerned that indexes were created to speed up either a single daily/weekly query, or even a query that's not being run anymore, but the index still has to be kept up to date every time the data changes. In the case of the high-traffic tables, that's a dozen times/second, and I want to eliminate indexes that are weighing down data updates while providing only marginal improvement.
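
    A sketch of the usual starting point for this, sys.dm_db_index_usage_stats (SQL Server 2005+). The counters reset when the instance restarts, so judge them over a representative period; indexes with no usage row at all, or with high user_updates but few reads, are the candidates for removal:

        SELECT  OBJECT_NAME(i.object_id) AS table_name,
                i.name                   AS index_name,
                s.user_seeks, s.user_scans, s.user_lookups,
                s.user_updates           AS write_cost
        FROM sys.indexes AS i
        LEFT JOIN sys.dm_db_index_usage_stats AS s
               ON s.object_id   = i.object_id
              AND s.index_id    = i.index_id
              AND s.database_id = DB_ID()
        WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
          AND i.type_desc = 'NONCLUSTERED'
        ORDER BY s.user_seeks + s.user_scans + s.user_lookups ASC;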

    Read the article

  • Fastest way to read/store lots of multidimensional data? (Java)

    - by RemiX
    I have three questions about three nested loops:

        for (int x=0; x<400; x++) {
            for (int y=0; y<300; y++) {
                for (int z=0; z<400; z++) {
                    // compute and store value
                }
            }
        }

    And I need to store all computed values. My standard approach would be to use a 3D array:

        values[x][y][z] = 1; // test value

    but this turns out to be slow: it takes 192 ms to complete this loop, whereas a single int assignment

        int value = 1; // test value

    takes only 66 ms.

    1) Why is an array so relatively slow?

    2) And why does it get even slower when I put this in the inner loop:

        values[z][y][x] = 1; // (notice x and z switched)

    This takes more than 4 seconds!

    3) Most importantly: Can I use a data structure that is as quick as the assignment of a single integer, but can store as much data as the 3D array?

    Read the article

  • What is the time complexity of LinkedList.getLast() in Java?

    - by i.
    I have a private LinkedList in a Java class and will frequently need to retrieve the last element in the list. The lists need to scale, so I'm trying to decide whether I need to keep a reference to the last element when I make changes (to achieve O(1)) or whether the LinkedList class already does that for the getLast() call. What is the big-O cost of LinkedList.getLast(), and is it documented? (i.e., can I rely on this answer, or should I make no assumptions and cache it even if it's O(1)?)

    Read the article

  • how does array_diff work?

    - by SpawnCxy
    I just wonder how array_diff() works. Obviously it couldn't work as follows:

        function array_diff($arraya, $arrayb) {
            $diffs = array();
            foreach ($arraya as $keya => $valuea) {
                foreach ($arrayb as $valueb) {
                    if ($valuea == $valueb) {
                        break;
                    }
                    $diffs[$keya] = $valuea;
                }
            }
            return $diffs;
        }
        // couldn't be worse than this

    I hope someone can show me a better solution. Thanks.

    Read the article
