Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Web service performance testing plan, Microsoft .NET WS, SQL

    - by zxed
    I'm trying to come up with a testing plan for a website and/or web service that queries a SQL Server database to get data and display it to the user.
    * The solution must handle an estimated 2,000 users, approximately 700 concurrent users, and 10,000+ website hits a month. Database calls should handle 100,000 queries a month via the website/web service. The system is used at multiple times during a 24-hour period; however, networking and bandwidth traffic decreases after 5 pm.
    * Two Windows 2003 servers are used, one for the web tier, another for SQL Server. Both are located in the same room. User access is varied and users can be far/near (it's a centralized system), users access via www
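    A minimal load-driver sketch (not part of the original question) for sanity-checking the concurrency target: it fires requests from many simulated users in parallel and reports throughput. The endpoint URL and the user/request counts are hypothetical placeholders, and it assumes a recent C#/.NET version with HttpClient and an async Main available:

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Net.Http;
        using System.Threading.Tasks;

        class LoadTest
        {
            static async Task Main()
            {
                const int concurrentUsers = 700;   // hypothetical: the question's concurrency target
                const int requestsPerUser = 10;    // hypothetical batch size per simulated user
                string url = "http://testserver/service.asmx";  // placeholder endpoint

                using (var client = new HttpClient())
                {
                    var sw = Stopwatch.StartNew();
                    // Each simulated user issues its requests sequentially; users run in parallel.
                    var users = Enumerable.Range(0, concurrentUsers).Select(async _ =>
                    {
                        for (int i = 0; i < requestsPerUser; i++)
                        {
                            var response = await client.GetAsync(url);
                            response.EnsureSuccessStatusCode();
                        }
                    }).ToArray();
                    await Task.WhenAll(users);
                    sw.Stop();

                    int total = concurrentUsers * requestsPerUser;
                    Console.WriteLine("{0} requests in {1:F1}s ({2:F0} req/s)",
                        total, sw.Elapsed.TotalSeconds, total / sw.Elapsed.TotalSeconds);
                }
            }
        }

    On the full .NET Framework, ServicePointManager.DefaultConnectionLimit would also need to be raised, otherwise the client caps concurrent connections per host; a dedicated tool such as Visual Studio load tests or Apache JMeter gives better ramp-up control than a hand-rolled driver like this.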

    Read the article

  • MySQL query performance help

    - by Stefano
    Hi, I have a quite large table storing the words contained in email messages:

        mysql> explain t_message_words;
        +----------------+---------+------+-----+---------+----------------+
        | Field          | Type    | Null | Key | Default | Extra          |
        +----------------+---------+------+-----+---------+----------------+
        | mwr_key        | int(11) | NO   | PRI | NULL    | auto_increment |
        | mwr_message_id | int(11) | NO   | MUL | NULL    |                |
        | mwr_word_id    | int(11) | NO   | MUL | NULL    |                |
        | mwr_count      | int(11) | NO   |     | 0       |                |
        +----------------+---------+------+-----+---------+----------------+

    The table contains about 100M rows. mwr_message_id is a FK to the messages table, mwr_word_id is a FK to the words table, and mwr_count is the number of occurrences of word mwr_word_id in message mwr_message_id. To calculate the most used words, I use the following query:

        SELECT SUM(mwr_count) AS word_count, mwr_word_id
        FROM t_message_words
        GROUP BY mwr_word_id
        ORDER BY word_count DESC
        LIMIT 100;

    It runs almost forever (more than half an hour on the test server):

        mysql> show processlist;
        | Id | User | Host           | db     | Command | Time | State                | Info
        | 41 | root | localhost:3148 | tst_db | Query   | 1955 | Copying to tmp table | SELECT SUM(mwr_count) AS word_count, mwr_word_id FROM t_message_words GROUP BY mwr_word_id |
        3 rows in set (0.00 sec)

    Is there anything I can do to "speed up" the query (apart from adding more RAM, more CPU, faster disks)? Thank you in advance, Stefano

    Read the article

  • High-performance text file parsing in .NET

    - by diamandiev
    Here is the situation: I am writing a small program to parse server log files. I tested it with a log file with several thousand requests (between 10,000 and 20,000, I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are (worst culprits first):
    string.Split - splits the line values into an array of values
    string.Contains - checks whether the user agent contains a specific agent string (to determine the browser ID)
    string.ToLower - various purposes
    StreamReader.ReadLine - reads the log file line by line
    string.StartsWith - determines whether a line is a column-definition line or a line with values
    There were some others that I was able to replace. For example, the dictionary getter was taking lots of resources too, which I had not expected since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time. Now I am running on a fast dual core and the total time it takes to load the file I mentioned is about 1 second. This is really bad: imagine a site that has tens of thousands of visits a day; it's going to take minutes to load the log file. So what are my alternatives, if any? Because I think this may just be a .NET limitation and there isn't much I can do about it.
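    A hedged sketch (not from the original post) of the usual way to cut allocations in this kind of parser: scan each line once with IndexOf and ordinal, case-insensitive comparisons instead of Split/ToLower, so far fewer temporary strings are created. The file name, agent token, and "#"-prefixed directive-line convention (as in IIS W3C logs) are assumptions:

        using System;
        using System.IO;

        static class LogParser
        {
            // Counts the lines whose user-agent field mentions the given browser token.
            // Assumes W3C/IIS-style logs where column-definition lines start with '#'.
            static int CountBrowserHits(string path, string browserToken)
            {
                int hits = 0;
                using (var reader = new StreamReader(path))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        // Skip directive/column-definition lines with an ordinal comparison.
                        if (line.StartsWith("#", StringComparison.Ordinal))
                            continue;

                        // Case-insensitive search without calling ToLower on the whole line.
                        if (line.IndexOf(browserToken, StringComparison.OrdinalIgnoreCase) >= 0)
                            hits++;
                    }
                }
                return hits;
            }

            static void Main()
            {
                // Hypothetical file name and agent token.
                Console.WriteLine(CountBrowserHits("access.log", "MSIE"));
            }
        }

    The same idea applies to splitting: walking the line with IndexOf(' ', start) and taking a Substring only for the fields that are actually needed avoids the array that string.Split allocates for every line.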

    Read the article

  • Performance issue in an Android MapView navigation app

    - by poeschlorn
    Hey guys, I've got a question about making a navigation app faster and more stable. The basic layer of my app is a simple MapView, covered with several overlays (two markers for start and destination and one for the route). My idea is to run the route display in a thread so that the app won't hang up during the calculation of a more complex route (like it does right now). After implementing the thread, no updates are shown any more; maybe you can help me with a short glance at the excerpt of my code below:

        private class MyLocationListener implements LocationListener {
            @Override
            public void onLocationChanged(Location loc) {
                posUser = new GeoPoint((int) (loc.getLatitude() * 1E6),
                        (int) (loc.getLongitude() * 1E6));
                new Thread() {
                    public void run() {
                        mapView.invalidate();
                        // Erase old overlays
                        mapView.getOverlays().clear();
                        // Draw updated overlay elements and adjust basic map settings
                        updateText();
                        if (firstRefresh) {
                            adjustMap();
                            firstRefresh = false;
                        }
                        getAndPaintRoute();
                        drawMarkers();
                    }
                };
            }

    Some features have been summarized into methods like "drawMarkers()" or "updateText()"... (they don't need any more attention ;-))

    Read the article

  • Odd performance with C# Asynchronous server socket

    - by The.Anti.9
    I'm working on a web server in C# and I have it running on asynchronous socket calls. The weird thing is that, for some reason, when you start loading pages, the third request is where the browser won't connect. It just keeps saying "Connecting..." and never stops. If I hit stop and then refresh, it will load again, but if I try another time after that it again doesn't load, and it continues in that cycle. I'm not really sure what is making it do that. The code is kind of hacked together from a couple of examples and some old code I had. Any miscellaneous tips would be helpful as well. Here's my little Listener class that handles everything (pasted here; I thought it might be easier to read this way):

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;
        using System.Threading;

        namespace irek.Server
        {
            public class Listener
            {
                private int port;
                private Socket server;
                private Byte[] data = new Byte[2048];
                static ManualResetEvent allDone = new ManualResetEvent(false);

                public Listener(int _port)
                {
                    port = _port;
                }

                public void Run()
                {
                    server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                    IPEndPoint iep = new IPEndPoint(IPAddress.Any, port);
                    server.Bind(iep);
                    Console.WriteLine("Server Initialized.");
                    server.Listen(5);
                    Console.WriteLine("Listening...");
                    while (true)
                    {
                        allDone.Reset();
                        server.BeginAccept(new AsyncCallback(AcceptCon), server);
                        allDone.WaitOne();
                    }
                }

                private void AcceptCon(IAsyncResult iar)
                {
                    allDone.Set();
                    Socket s = (Socket)iar.AsyncState;
                    Socket s2 = s.EndAccept(iar);
                    SocketStateObject state = new SocketStateObject();
                    state.workSocket = s2;
                    s2.BeginReceive(state.buffer, 0, SocketStateObject.BUFFER_SIZE, 0, new AsyncCallback(Read), state);
                }

                private void Read(IAsyncResult iar)
                {
                    try
                    {
                        SocketStateObject state = (SocketStateObject)iar.AsyncState;
                        Socket s = state.workSocket;
                        int read = s.EndReceive(iar);
                        if (read > 0)
                        {
                            state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, read));
                            if (s.Available > 0)
                            {
                                s.BeginReceive(state.buffer, 0, SocketStateObject.BUFFER_SIZE, 0, new AsyncCallback(Read), state);
                                return;
                            }
                        }
                        if (state.sb.Length > 1)
                        {
                            string requestString = state.sb.ToString();
                            // HANDLE REQUEST HERE

                            // Temporary response
                            string resp = "<h1>It Works!</h1>";
                            string head = "HTTP/1.1 200 OK\r\nContent-Type: text/html;\r\nServer: irek\r\nContent-Length:" + resp.Length + "\r\n\r\n";
                            byte[] answer = Encoding.ASCII.GetBytes(head + resp);
                            // end temp.

                            state.workSocket.BeginSend(answer, 0, answer.Length, SocketFlags.None, new AsyncCallback(Send), state.workSocket);
                        }
                    }
                    catch (Exception)
                    {
                        return;
                    }
                }

                private void Send(IAsyncResult iar)
                {
                    try
                    {
                        SocketStateObject state = (SocketStateObject)iar.AsyncState;
                        int sent = state.workSocket.EndSend(iar);
                        state.workSocket.Shutdown(SocketShutdown.Both);
                        state.workSocket.Close();
                    }
                    catch (Exception) { }
                    return;
                }
            }
        }

    And my SocketStateObject:

        public class SocketStateObject
        {
            public Socket workSocket = null;
            public const int BUFFER_SIZE = 1024;
            public byte[] buffer = new byte[BUFFER_SIZE];
            public StringBuilder sb = new StringBuilder();
        }
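    One detail visible in the code above, offered as a guess rather than a diagnosis from the original post: Read passes state.workSocket (a Socket) as the async state to BeginSend, but Send casts iar.AsyncState to SocketStateObject. That cast would throw, the empty catch would swallow it, and the socket would never be shut down, which could leave the browser waiting exactly as described. A sketch of the matching pair, reusing the types from the snippet above:

        // In Read: pass the state object that Send expects to unwrap.
        state.workSocket.BeginSend(answer, 0, answer.Length, SocketFlags.None,
                                   new AsyncCallback(Send), state);

        // In Send: the cast now matches what was passed in.
        private void Send(IAsyncResult iar)
        {
            SocketStateObject state = (SocketStateObject)iar.AsyncState;
            state.workSocket.EndSend(iar);
            state.workSocket.Shutdown(SocketShutdown.Both);
            state.workSocket.Close();
        }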

    Read the article

  • Optimize code performance when odd/even threads are doing different things in CUDA

    - by Orion Nebula
    Hi all! I have two large vectors and I am trying to do an element-wise multiplication where an even-numbered element in the first vector is multiplied by the next odd-numbered element in the second vector, and an odd-numbered element in the first vector is multiplied by the preceding even-numbered element in the second vector.

    Example: vector 1 is V1(1) V1(2) V1(3) V1(4) and vector 2 is V2(1) V2(2) V2(3) V2(4); the products are V1(1)*V2(2), V1(3)*V2(4), V1(2)*V2(1), V1(4)*V2(3).

    I have written CUDA code to do this (Pds has the elements of the first vector in shared memory, Nds the second vector):

        // instead of using %2, I check the lowest bit to decide if the number is odd/even -- faster
        if ((tx & 0x0001) == 0x0000)
            Nds[tx+1] = Pds[tx] * Nds[tx+1];
        else
            Nds[tx-1] = Pds[tx] * Nds[tx-1];
        __syncthreads();

    Is there any way to further accelerate this code or avoid divergence? Thanks

    Read the article

  • Profiling memory performance for part of a Rails project

    - by Florian Pilz
    I want to profile the memory usage of an important library class of my Rails project. It uses ActiveRecord, so I need all the Rails dependencies to profile it. As far as I know, I need a patched Ruby (rubygc) so that script/profile and script/benchmark can track memory usage. I tried to follow the official guide to patch the source code of Ruby 1.8.6 (p399) and 1.8.7 (p248), but both fail with the following message:

        patching file gc.c
        Hunk #2 succeeded at 50 with fuzz 2 (offset 2 lines).
        Hunk #3 succeeded at 87 with fuzz 2 (offset 6 lines).
        Hunk #4 succeeded at 153 with fuzz 1 (offset 45 lines).
        Hunk #5 succeeded at 409 with fuzz 2 (offset 274 lines).
        Hunk #6 FAILED at 462.
        Hunk #7 FAILED at 506.
        Hunk #8 FAILED at 520.
        Hunk #9 FAILED at 745.
        Hunk #10 FAILED at 754.
        Hunk #11 FAILED at 923.
        Hunk #12 succeeded at 711 (offset 46 lines).
        Hunk #13 succeeded at 730 (offset 46 lines).
        Hunk #14 succeeded at 766 (offset 55 lines).
        Hunk #15 succeeded at 1428 (offset 87 lines).
        Hunk #16 succeeded at 1492 (offset 89 lines).
        Hunk #17 FAILED at 1541.
        Hunk #18 FAILED at 1551.
        Hunk #19 succeeded at 1571 (offset 91 lines).
        Hunk #20 succeeded at 1592 (offset 91 lines).
        Hunk #21 succeeded at 1601 (offset 91 lines).
        Hunk #22 succeeded at 1826 (offset 108 lines).
        Hunk #23 succeeded at 1843 (offset 108 lines).
        Hunk #24 succeeded at 1926 (offset 108 lines).
        Hunk #25 succeeded at 2118 (offset 108 lines).
        Hunk #26 succeeded at 2563 (offset 100 lines).
        Hunk #27 succeeded at 2611 with fuzz 1 (offset 102 lines).
        Hunk #28 succeeded at 2628 (offset 102 lines).
        8 out of 28 hunks FAILED -- saving rejects to file gc.c.rej
        patching file intern.h
        Hunk #1 succeeded at 268 (offset 15 lines).

    I also tried to use ruby-prof, but I always get the error "uninitialized constant RubyProf::Test". I don't know how to use the gem "memory", and neither "memprof" nor "bleak_house" could be installed successfully. If I get a patched Ruby running, I should be fine, but any other possibility to profile the memory of library classes is welcome. Thanks for helping!

    Read the article

  • XmlSerializer Performance Issue when Specifying XmlRootAttribute

    - by Dougc
    I'm currently having a really weird issue and I can't seem to figure out how to resolve it. I've got a fairly complex type which I'm trying to serialize using the XmlSerializer class. This actually functions fine and the type serializes properly, but seems to take a very long time in doing so; around 5 seconds depending on the data in the object. After a bit of profiling I've narrowed the issue down - bizarrely - to specifying an XmlRootAttribute when calling XmlSerializer.Serialize. I do this to change the name of a collection being serialized from ArrayOf to something a bit more meaningful. Once I remove the parameter the operation is almost instant! Any thoughts or suggestions would be excellent as I'm entirely stumped on this one!
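    A likely explanation, based on the documented behaviour of XmlSerializer rather than anything stated in the original post: only the XmlSerializer(Type) and XmlSerializer(Type, String) constructors reuse the dynamically generated serialization assembly; the overloads that take an XmlRootAttribute emit a new temporary assembly on every construction, so the generation cost is paid on every Serialize call unless the caller caches the serializer. A minimal caching sketch (class and method names are mine, not from the post):

        using System;
        using System.Collections.Generic;
        using System.Xml.Serialization;

        static class SerializerCache
        {
            private static readonly Dictionary<string, XmlSerializer> cache =
                new Dictionary<string, XmlSerializer>();
            private static readonly object sync = new object();

            // Returns one XmlSerializer per (type, root name) pair instead of building
            // a new one - and a new temporary assembly - on every Serialize call.
            public static XmlSerializer Get(Type type, string rootName)
            {
                string key = type.FullName + ":" + rootName;
                lock (sync)
                {
                    XmlSerializer serializer;
                    if (!cache.TryGetValue(key, out serializer))
                    {
                        serializer = new XmlSerializer(type, new XmlRootAttribute(rootName));
                        cache[key] = serializer;
                    }
                    return serializer;
                }
            }
        }

    With this in place the cost is paid once per type/root combination rather than on every call; whether it fully explains the 5 seconds seen here is something only re-profiling can confirm.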

    Read the article

  • Need advice on comparing the performance of two equivalent LINQ to SQL queries

    - by uvita
    I am working on a tool to optimize LINQ to SQL queries. Basically it intercepts the LINQ execution pipeline and makes some optimizations, for example removing a redundant join from a query. Of course, there is an overhead in the execution time before the query gets executed in the DBMS, but the query should then be processed faster. I don't want to use a SQL profiler because I know that the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently, I am using the Stopwatch class and my code looks something like this:

        var sw = new Stopwatch();
        sw.Start();
        const int amount = 100;
        for (var i = 0; i < amount; i++)
        {
            ExecuteNonOptimizedQuery();
        }
        sw.Stop();
        Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms",
            sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount);

    Basically the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query and then iterates over the results. I did this for both versions of the query, the normal one and the optimized one. I took the idea from this post by Frans Bouma. Are there any other approaches or considerations I should take? Thanks in advance!
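    A hedged sketch (not from the original post) of the usual refinements to this kind of micro-benchmark: run one untimed warm-up call so JIT compilation and connection-pool setup stay out of the measurement, force a collection between runs, and compare medians rather than a single total. The delegate below is a stand-in for the question's ExecuteNonOptimizedQuery / ExecuteOptimizedQuery methods:

        using System;
        using System.Diagnostics;

        static class QueryBenchmark
        {
            // Times 'executeQuery' a number of times after one untimed warm-up call.
            public static double[] Run(Action executeQuery, int runs)
            {
                executeQuery();                   // warm-up: JIT, connection pool, metadata caches
                var times = new double[runs];
                for (int i = 0; i < runs; i++)
                {
                    GC.Collect();                 // keep earlier garbage out of this measurement
                    GC.WaitForPendingFinalizers();
                    var sw = Stopwatch.StartNew();
                    executeQuery();
                    sw.Stop();
                    times[i] = sw.Elapsed.TotalMilliseconds;
                }
                Array.Sort(times);
                Console.WriteLine("min {0:F1} ms, median {1:F1} ms, max {2:F1} ms",
                    times[0], times[runs / 2], times[runs - 1]);
                return times;
            }
        }

    Comparing the two query variants on their medians makes the result less sensitive to a single slow run caused by the DBMS cache being cold.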

    Read the article

  • Performance of looping over an Unboxed array in Haskell

    - by Joey Adams
    First of all, it's great. However, I came across a situation where my benchmarks turned up weird results. I am new to Haskell, and this is the first time I've gotten my hands dirty with mutable arrays and monads. The code below is based on this example. I wrote a generic monadic for function that takes numbers and a step function rather than a range (like forM_ does). I compared using my generic for function (Loop A) against embedding an equivalent recursive function (Loop B). Having Loop A is noticeably faster than having Loop B. Weirder, having both Loop A and B together is faster than having Loop B by itself (but slightly slower than Loop A by itself). Some possible explanations I can think of for the discrepancies (note that these are just guesses):
    - Something I haven't learned yet about how Haskell extracts results from monadic functions.
    - Loop B faults the array in a less cache-efficient manner than Loop A. Why?
    - I made a dumb mistake; Loop A and Loop B are actually different.
    Note that in all 3 cases of having either or both Loop A and Loop B, the program produces the same output. Here is the code. I tested it with ghc -O2 for.hs using GHC version 6.10.4.

        import Control.Monad
        import Control.Monad.ST
        import Data.Array.IArray
        import Data.Array.MArray
        import Data.Array.ST
        import Data.Array.Unboxed

        for :: (Num a, Ord a, Monad m) => a -> a -> (a -> a) -> (a -> m b) -> m ()
        for start end step f = loop start
          where
            loop i
              | i <= end = do
                  f i
                  loop (step i)
              | otherwise = return ()

        primesToNA :: Int -> UArray Int Bool
        primesToNA n = runSTUArray $ do
          a <- newArray (2,n) True :: ST s (STUArray s Int Bool)
          let sr = floor . (sqrt::Double->Double) . fromIntegral $ n+1

          -- Loop A
          for 4 n (+ 2) $ \j -> writeArray a j False

          -- Loop B
          let f i
                | i <= n = do
                    writeArray a i False
                    f (i+2)
                | otherwise = return ()
            in f 4

          forM_ [3,5..sr] $ \i -> do
            si <- readArray a i
            when si $ forM_ [i*i,i*i+i+i..n] $ \j -> writeArray a j False

          return a

        primesTo :: Int -> [Int]
        primesTo n = [i | (i,p) <- assocs . primesToNA $ n, p]

        main = print $ primesTo 30000000

    Read the article

  • Socket ping-pong performance

    - by Kamil_H
    I have written two simple programs (tried it in C++ and C#). This is pseudocode:

        -------- Client ---------------
        for (int i = 0; i < 200000; i++) {
            socket_send("ping")
            socket_receive(buff)
        }

        --------- Server -------------
        while (1) {
            socket_receive(buff)
            socket_send("pong")
        }

    I tried it on Windows. The execution time of the client is about 45 seconds. Can somebody explain to me why this takes so long? I understand that if there were a real network connection between client and server, the time of one ping-pong would be generate_ping + send_via_network + generate_pong + send_via_network, but here everything is done in 'local' mode. Is there any way to make this inter-process ping-pong faster using network sockets (I'm not asking about shared memory, for example)?
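    Some arithmetic, not in the original post: 45 seconds for 200,000 round trips is roughly 0.22 ms per ping-pong, which is in the range of two blocking socket calls plus the context switches they force per iteration, so the per-message fixed cost dominates rather than bandwidth. Two things worth trying are batching several pings per send to amortize that cost, and ruling out Nagle's algorithm by disabling it on both sockets; a minimal C# sketch of the latter (host and port are placeholders):

        using System.Net.Sockets;

        class PingPongSetup
        {
            static TcpClient Connect(string host, int port)
            {
                var client = new TcpClient(host, port);
                // Send small messages immediately instead of waiting to coalesce them.
                client.NoDelay = true;
                return client;
            }

            static void Main()
            {
                var client = Connect("127.0.0.1", 5000);  // placeholder endpoint
            }
        }

    Whether Nagle contributes anything here is a guess; measuring with and without NoDelay, and with batched messages, would confirm it.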

    Read the article

  • Help required to increase the performance of a MySQL query

    - by Joseph
    Hi all, I am using the following query in MySQL to fetch data from a table. It's taking too long because of the conditional check within the aggregate function. Please help me make it faster:

        SELECT testcharfield
             , SUM(IF(Type = 'pi', quantity, 0)) AS OB
             , SUM(IF(Type = 'pe', quantity, 0)) AS CB
        FROM Table1
        WHERE sequenceID = 6107
        GROUP BY testcharfield

    Read the article

  • XQuery performance in SQL Server

    - by Carl Hörberg
    Why does this quite simple XQuery take 10 minutes to execute in SQL Server (the 2 MB XML document is stored in one column), compared to 14 seconds when using Oxygen for file-based querying?

        SELECT model.query('
            declare default element namespace "http://www.sbml.org/sbml/level2";
            for $all_species in //species,
                $all_reactions in //reaction
            where data($all_species/@compartment) = "plasma_membrane"
              and $all_reactions/listOfReactants/speciesReference/@species = $all_species/@id
            return <result>{data($all_species/@id)}</result>')
        FROM sbml;

    Read the article

  • Searching for patterns to create a TCP Connection Pool for high performance messaging

    - by JoeGeeky
    I'm creating a new Client / Server application in C# and expect to have a fairly high rate of connections. That made me think of database connection pools which help mitigate the expense of creating and disposing connections between the client and database. I would like to create a similar capability for my application and haven't been able to find any good examples of how to apply this pattern. Do I really need to spin up an instance of a TcpClient every time I want to send a message to the server and receive a receipt message? Each connection is expected to transport between 1-5KB with each receiving a 1KB response message. I realize this question is somewhat vague, but I am starting from scratch so I am open to suggestions. Even if that means my suppositions are all wrong.
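    A minimal sketch of the pattern, not tied to any particular library: keep opened TcpClient instances for a single server endpoint in a thread-safe bag, hand one out per request/response exchange, and return it instead of disposing it. Host, port, and the liveness check are placeholders; production code would also want an upper bound and a real health check:

        using System.Collections.Concurrent;
        using System.Net.Sockets;

        class TcpConnectionPool
        {
            private readonly ConcurrentBag<TcpClient> pool = new ConcurrentBag<TcpClient>();
            private readonly string host;
            private readonly int port;

            public TcpConnectionPool(string host, int port)
            {
                this.host = host;
                this.port = port;
            }

            // Reuse an idle connection if one is available, otherwise open a new one.
            public TcpClient Rent()
            {
                TcpClient client;
                if (pool.TryTake(out client) && client.Connected)
                    return client;
                return new TcpClient(host, port);
            }

            // Return a healthy connection for reuse; drop broken ones.
            public void Return(TcpClient client)
            {
                if (client.Connected)
                    pool.Add(client);
                else
                    client.Close();
            }
        }

    Usage would look like: var pool = new TcpConnectionPool("server", 9000); var client = pool.Rent(); do the 1-5 KB exchange over client.GetStream(); then pool.Return(client) instead of closing it.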

    Read the article

  • Delphi Performance: Case Versus If

    - by Andreas Rejbrand
    I guess there might be some overlap with previous SO questions, but I could not find a Delphi-specific question on this topic. Suppose you want to check if an unsigned 32-bit integer variable MyAction is equal to any of the constants ACTION1, ACTION2, ... ACTIONn, where n is, say, 1000. I guess that, besides being more elegant,

        case MyAction of
          ACTION1: {code};
          ACTION2: {code};
          ...
          ACTIONn: {code};
        end;

    is much faster than

        if MyAction = ACTION1 then
          // code
        else if MyAction = ACTION2 then
          // code
        ...
        else if MyAction = ACTIONn then
          // code;

    I guess that the if variant takes time O(n) to complete (i.e. to find the right action) if the right action ACTIONi has a high value of i, whereas the case variant takes a lot less time (O(1)?). Am I correct that case is much faster? Am I correct that the time required to find the right action in the case statement is actually independent of n? I.e. is it true that it does not really take any longer to check a million cases than to check 10 cases? How, exactly, does this work?

    Read the article

  • JavaScript fine-grained performance tweaking

    - by thermal7
    I have been writing my first jQuery plugin and am struggling to find a means to time how long different pieces of code take to run. I can use Firebug and console.time/profile. However, it seems that because my code executes so fast, I get no results with profile, and time just reports 0 ms. (http://stackoverflow.com/questions/2690697/firebug-profiling-issue-no-activity-to-profile/2690846#2690846) Is there a way to get the time at a greater level of detail than milliseconds in JavaScript?

    Read the article

  • MySQL SELECT - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags - 3 different loan types. Everything went pretty well during testing; query execution time was perfect. But today, when the changes were transferred to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, and it happens frequently, so the site can't keep up and is really slow. The table that is used to store the data has approximately 2 million records, and each select looks like this:

        SELECT * FROM products_loans
        WHERE KOD IN ("X17/Q30-10", "X17/12", "X17/5-24")
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;

    3 loan types and the price that needs to be in the range between CENA_OD and CENA_DO, thus 3 rows are returned. But since I need to display 10 products per page, I need to run it through a modified select using OR, since I didn't find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referenced post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE; KOD and CENA_OD/CENA_DO are indexed:

        CREATE TABLE `products_loans` (
          `KOEF_ID` bigint(20) NOT NULL,
          `KOD` varchar(30) NOT NULL,
          `AKONTACIA` int(11) NOT NULL,
          `POCET_SPLATOK` int(11) NOT NULL,
          `koeficient` decimal(10,2) NOT NULL default '0.00',
          `CENA_OD` decimal(10,2) default NULL,
          `CENA_DO` decimal(10,2) default NULL,
          `PREDAJNA_CENA` decimal(10,2) default NULL,
          `AKONTACIA_SUMA` decimal(10,2) default NULL,
          `TYP_VYHODY` varchar(4) default NULL,
          `stage` smallint(6) NOT NULL default '1',
          PRIMARY KEY (`KOEF_ID`),
          KEY `CENA_OD` (`CENA_OD`),
          KEY `CENA_DO` (`CENA_DO`),
          KEY `KOD` (`KOD`),
          KEY `stage` (`stage`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    Also, selecting all loan types and filtering them later through PHP doesn't work well, since each type has over 50k records and that select takes too much time as well... Any ideas about improving the speed are appreciated.

    Read the article

  • MySQL statement with nested SELECT - how to improve performance

    - by ernie
    This statement appears inefficient because only one out of 10 records is selected and only 1 in 100 entries contains comments. What can I do to improve it?

        $query = "SELECT A, B, C,
                     (SELECT COUNT(*) FROM comments WHERE comments.nid = header_file.nid) AS my_comment_count
                  FROM header_file
                  WHERE A = 'admin' "

    Edit: I want header records even if no comments are found.

    Read the article

  • Performance: float to int cast and clipping result to range

    - by durandai
    I'm doing some audio processing with floats. The result needs to be converted back to PCM samples, and I noticed that the cast from float to int is surprisingly expensive. What's furthermore frustrating is that I need to clip the result to the range of a short (-32768 to 32767). While I would normally instinctively assume that this could be ensured by simply casting float to short, this fails miserably in Java, since at the bytecode level it results in F2I followed by I2S. So instead of a simple:

        int sample = (short) floatVal;

    I needed to resort to this ugly sequence:

        int sample = (int) floatVal;
        if (sample > 32767) {
            sample = 32767;
        } else if (sample < -32768) {
            sample = -32768;
        }

    Is there a faster way to do this? (About ~6% of the total runtime seems to be spent on casting; while 6% does not seem that much at first glance, it is astounding when I consider that the processing part involves a good chunk of matrix multiplications and an IDCT.)

    EDIT: The cast/clipping code above is (not surprisingly) in the body of a loop that reads float values from a float[] and puts them into a byte[]. I have a test suite that measures total runtime on several test cases (processing about 200 MB of raw audio data). The 6% was concluded from the runtime difference when the cast assignment "int sample = (int) floatVal" was replaced by assigning the loop index to sample.

    EDIT @leopoldkot: I'm aware of the truncation in Java, as stated in the original question (F2I, I2S bytecode sequence). I only tried the cast to short because I assumed that Java had an F2S bytecode, which it unfortunately does not (coming originally from a 68K assembly background, where a simple "fmove.w FP0, D0" would have done exactly what I wanted).

    Read the article

  • Improving performance in this query

    - by Luiz Gustavo F. Gama
    I have 3 tables with user logins:
    sis_login = administrators
    tb_rb_estrutura = coordinators
    tb_usuario = clients
    I created a VIEW to unite all these users, separating them by levels, as follows:

        create view `login_names` as
        select `n1`.`cod_login` as `id`, '1' as `level`, `n1`.`nom_user` as `name`
          from `dados`.`sis_login` `n1`
        union all
        select `n2`.`id` as `id`, '2' as `level`, `n2`.`nom_funcionario` as `name`
          from `tb_rb_estrutura` `n2`
        union all
        select `n3`.`cod_usuario` as `id`, '3' as `level`, `n3`.`dsc_nome` as `name`
          from `tb_usuario` `n3`;

    So up to three repeated ids can occur for different users, which is why I separated them by levels. This VIEW just returns a user name, according to his id and level. Considering there are about 500,000 registered users, this view takes about 1 second to load. That is too much time, and it becomes a much bigger problem when I need to return the latest posts on the forums of my website. The forum tables store the user id and level, and the name is then looked up in this VIEW. I have 18 registered forums. When I run the query, it takes one second for each forum = 18 seconds. OMG. This page loads every time somebody enters my website. This is my query:

        select `x`.`forum_id`, `x`.`topic_id`, `l`.`nome`
        from (
            select `t`.`forum_id`, `t`.`topic_id`, `t`.`data`, `t`.`user_id`, `t`.`user_level`
              from `tb_forum_topics` `t`
            union all
            select `a`.`forum_id`, `a`.`topic_id`, `a`.`data`, `a`.`user_id`, `a`.`user_level`
              from `tb_forum_answers` `a`
        ) `x`
        left outer join `login_names` `l`
          on `l`.`id` = `x`.`user_id` and `l`.`level` = `x`.`user_level`
        group by `x`.`forum_id` asc

    USING EXPLAIN:

        id    select_type   table       type  possible_keys  key   key_len  ref   rows    Extra
        1     PRIMARY       <derived2>  ALL   NULL           NULL  NULL     NULL  6       Using temporary; Using filesort
        1     PRIMARY       <derived4>  ALL   NULL           NULL  NULL     NULL  530415
        4     DERIVED       n1          ALL   NULL           NULL  NULL     NULL  114
        5     UNION         n2          ALL   NULL           NULL  NULL     NULL  2
        6     UNION         n3          ALL   NULL           NULL  NULL     NULL  530299
        NULL  UNION RESULT              ALL   NULL           NULL  NULL     NULL  NULL
        2     DERIVED       t           ALL   NULL           NULL  NULL     NULL  3
        3     UNION         r           ALL   NULL           NULL  NULL     NULL  3
        NULL  UNION RESULT              ALL   NULL           NULL  NULL     NULL  NULL

    Can somebody help me or give a suggestion?

    Read the article

  • Cassandra performance slows down with counter columns

    - by tubcvt
    I have a cluster (4 nodes) and each node has 16 cores and 24 GB of RAM:

        192.168.23.114  datacenter1  rack1  Up  Normal  44.48 GB  25.00%
        192.168.23.115  datacenter1  rack1  Up  Normal  44.51 GB  25.00%
        192.168.23.116  datacenter1  rack1  Up  Normal  44.51 GB  25.00%
        192.168.23.117  datacenter1  rack1  Up  Normal  44.51 GB  25.00%

    We use about 10 column families (counter columns) to build some system statistics reports. The problem here is that when I set the replication_factor of this keyspace (which contains the 10 counter column families) from 1 to 2, the CPU usage of every node increases from 10% (with replication factor 1) to 90%. Who can help me work around this? Why do counter columns consume so much CPU time? Thanks all.

    Read the article

  • Java Collection performance question

    - by Shervin
    I have created a method that takes two Collection<String> instances as input and copies one to the other. However, I am not sure if I should check whether the collections contain the same elements before I start copying, or if I should just copy regardless. This is the method:

        /**
         * Copies from one collection to the other. Does not allow empty strings.
         * Removes duplicates.
         * Clears the dest Collection first.
         * @param target
         * @param dest
         */
        public static void copyStringCollectionAndRemoveDuplicates(Collection<String> target, Collection<String> dest) {
            if (target == null || dest == null)
                return;

            // Is this faster to do? Or should I just comment this block out?
            if (target.containsAll(dest))
                return;

            dest.clear();
            Set<String> uniqueSet = new LinkedHashSet<String>(target.size());
            for (String f : target)
                if (!"".equals(f))
                    uniqueSet.add(f);
            dest.addAll(uniqueSet);
        }

    Maybe it is faster to just remove the if (target.containsAll(dest)) return; check, because this method will iterate over the entire collection anyway.

    Read the article

  • How significant are JPA lazy loading performance benefits?

    - by Robert
    I understand that this is highly specific to the concrete application, but I'm just wondering what's the general opinion, or at least some personal experiences on the issue. I have an aversion towards the 'open session in view' pattern, so to avoid it, I'm thinking about simply fetching everything small eagerly, and using queries in the service layer to fetch larger stuff. Has anyone used this and regretted it? And is there maybe some elegant solution to lazy loading in the view layer that I'm not aware of?

    Read the article
