Search Results

Search found 19055 results on 763 pages for 'high performance'.

Page 39/763 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • Javamail performance

    - by cbz
    Hi, I've been using JavaMail to retrieve mail from an IMAP server (currently GMail). JavaMail retrieves the list of messages (only the ids) in a particular folder from the server very quickly, but when I actually fetch a message (only the envelope, not even the contents) it takes around 1 to 2 seconds per message. What techniques should be used for fast retrieval?
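
    One technique worth trying (a suggestion, not from the original post) is to prefetch envelope data for the whole folder with a FetchProfile, so the IMAP ENVELOPE items are pulled in one bulk round trip instead of one server round trip per message. A minimal Java sketch, assuming an already opened IMAP Folder:

        import javax.mail.*;

        public class EnvelopePrefetch {
            // Prefetch envelope data (From, To, Subject, Date, ...) for all messages
            // in one bulk IMAP FETCH instead of one round trip per message.
            public static void listSubjects(Folder folder) throws MessagingException {
                Message[] messages = folder.getMessages();   // lightweight: ids only

                FetchProfile fp = new FetchProfile();
                fp.add(FetchProfile.Item.ENVELOPE);
                folder.fetch(messages, fp);                  // single bulk round trip

                for (Message m : messages) {
                    System.out.println(m.getSubject());      // served from prefetched data
                }
            }
        }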

    Read the article

  • SQLite self-join performance

    - by Derk
    What I essentially want is to retrieve all features and values of products which have a particular feature and value. For example: I want to know all available hard drive sizes of products that have an Intel processor. I have three tables:

        product_to_value (product_id, feature_id, value_id)
        features (id, value)  // for example Processor family, Storage size, etc.
        values (id, value)    // for example Intel, 60GB, etc.

    The simplified query I have now:

        SELECT features.name, featurevalues.name, featurevalues.value
        FROM products, products as prod2, features, features as feat2, values, values as val2
        WHERE products.feature = features.id
          AND products.value = values.id
          AND products.product = prod2.product
          AND prod2.feature_id = feat2.id
          AND prod2.value_id = val2.id
          AND features.id = ?
          AND feat2.id = ?

    All columns have an index. I am using SQLite. The problem is that it's very slow (70ms per query; without the self-join it's <1ms). Is there a smarter way to fetch data like this? Or is this too much to ask from SQLite? I personally think I am simply overlooking something, as I am quite new to SQLite.
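
    One possible rewrite (a sketch, not from the original post, and assuming the link table is product_to_value(product_id, feature_id, value_id) and that features and values carry a readable name in their value column): filter the matching products once in a subquery, then join the link table back to the lookup tables a single time.

        SELECT f.value AS feature, v.value AS value
        FROM product_to_value ptv
        JOIN features f ON f.id = ptv.feature_id
        JOIN "values" v ON v.id = ptv.value_id
        WHERE ptv.product_id IN (
            SELECT product_id
            FROM product_to_value
            WHERE feature_id = ?   -- e.g. the Processor family feature
              AND value_id   = ?   -- e.g. the Intel value
        );

    A composite index on product_to_value(feature_id, value_id, product_id) lets the subquery be answered from the index alone.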

    Read the article

  • Performance problem with System.Net.Mail

    - by Saif Khan
    I have this unusual problem with mailing from my app. At first it wasn't working (I was getting "unable to relay" errors); anyway, I added the proper authentication and it works. My problem now is that if I try to send around 300 emails (each with a 500k attachment), the app starts hanging around 95% of the way through the process. Here is some of my code, which is called for each mail to be sent:

        Using mail As New MailMessage()
            With mail
                .From = New MailAddress(My.Resources.EmailFrom)
                For Each contact As Contact In Contacts
                    .To.Add(contact.Email)
                Next
                .Subject = "Accounting"
                .Body = My.Resources.EmailBody
                'Back the stream up to the beginning or else the attachment
                'will be sent as a zero (0) byte file.
                attachment.Seek(0, SeekOrigin.Begin)
                .Attachments.Add(New Attachment(attachment, String.Concat(Item.Year, Item.AttachmentType.Extension)))
            End With
            Dim smtp As New SmtpClient("192.168.1.2")
            With smtp
                .DeliveryMethod = SmtpDeliveryMethod.Network
                .UseDefaultCredentials = False
                .Credentials = New NetworkCredential("username", "password")
                .Send(mail)
            End With
        End Using
        With item
            .SentStatus = True
            .DateSent = DateTime.Now.Date
            .Save()
        End With
        Return

    I was thinking, can I just prepare all the mails, add them to a collection, then open one SMTP connection and iterate the collection, calling Send like this:

        Using mail As New MailMessage()
            ...
            MailCollection.Add(mail)
        End Using
        ...
        Dim smtp As New SmtpClient("192.168.1.2")
        With smtp
            .DeliveryMethod = SmtpDeliveryMethod.Network
            .UseDefaultCredentials = False
            .Credentials = New NetworkCredential("username", "password")
            For Each mail In MailCollection
                .Send(mail)
            Next
        End With

    Read the article

  • Horrible WPF performance!

    - by Erik
    Why am I using over 80% CPU when just hovering over some links? As you can see in the video I uploaded (http://www.youtube.com/watch?v=3ALF9NquTRE), the CPU goes to 80% when I move my mouse over the links. My style for the items is as follows:

        <Style x:Key="LinkStyle" TargetType="{x:Type Hyperlink}">
            <Style.Triggers>
                <Trigger Property="IsMouseOver" Value="True">
                    <Setter Property="Foreground" Value="White" />
                </Trigger>
            </Style.Triggers>
            <Setter Property="TextBlock.TextDecorations" Value="{x:Null}" />
            <Setter Property="Foreground" Value="#FFDDDDDD"/>
            <Setter Property="Cursor" Value="Arrow" />
        </Style>

    Why?

    Read the article

  • Sharing large objects between ruby processes without a performance hit

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump it first; however, this incurs a 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash that provides an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end
        @c = Marshal.dump(@a)  # 1000 milliseconds
        Marshal.load(@c)       # 400 milliseconds

    Read the article

  • Performance for myCollection.Add() vs. myCollection["key"]

    - by Atomiton
    When dealing with a collection of key/value pairs, is there any difference between using its Add() method and directly assigning via the indexer? For example, an HtmlGenericControl will have an Attributes collection:

        var anchor = new HtmlGenericControl("a");
        // These both work:
        anchor.Attributes.Add("class", "xyz");
        anchor.Attributes["class"] = "xyz";

    Is it purely a matter of preference, or is there a reason for doing one or the other?
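
    As a side note (not part of the original question): for dictionary-backed collections the two calls usually differ when the key already exists; Add typically throws, while the indexer overwrites. A minimal C# sketch with a plain Dictionary<string, string>:

        using System;
        using System.Collections.Generic;

        class AddVsIndexer
        {
            static void Main()
            {
                var attrs = new Dictionary<string, string>();

                attrs["class"] = "xyz";     // inserts the key
                attrs["class"] = "abc";     // silently overwrites the value

                attrs.Add("id", "link1");   // inserts the key
                // attrs.Add("id", "other"); // would throw ArgumentException: key already exists

                Console.WriteLine(attrs["class"] + " " + attrs["id"]);  // abc link1
            }
        }

    Whether AttributeCollection behaves the same way is worth checking against its documentation; any performance difference is negligible next to the semantic one.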

    Read the article

  • mysql view performance

    - by vamsivanka
    I have a table with about 100,000 users in it.

    First case:

        explain select * from users where state = 'ca'

    When I do an explain plan for the above query I get a cost of 5,200.

    Second case:

        Create or replace view vw_users as select * from users
        Explain select * from vw_users where state = 'ca'

    When I do an explain plan on the second query I get a cost of 100,000. How does the WHERE clause in the view work? Is the WHERE clause applied after the view retrieves all the rows? Please let me know how I can fix this issue. Thanks
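
    One thing worth checking (a suggestion, not from the original post) is the view's algorithm. If the view is evaluated with the TEMPTABLE algorithm, MySQL materializes all rows first and applies the outer WHERE afterwards, which matches the cost you are seeing; with MERGE, the outer WHERE is pushed into the underlying query. A sketch that makes the intent explicit:

        CREATE OR REPLACE ALGORITHM = MERGE VIEW vw_users AS
            SELECT * FROM users;

        -- With MERGE this is rewritten to roughly: SELECT * FROM users WHERE state = 'ca'
        EXPLAIN SELECT * FROM vw_users WHERE state = 'ca';

    An index on users(state) helps both the table and the view form.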

    Read the article

  • Performance Tricks for C# Logging

    - by Charles
    I am looking into C# logging and I do not want my log messages to spend any time being built if the message is below the logging threshold. The best I can see, log4net does the threshold check AFTER evaluating the log parameters. Example:

        _logger.Debug("My complicated log message " + thisFunctionTakesALongTime() + " will take a long time");

    Even if the threshold is above Debug, thisFunctionTakesALongTime will still be evaluated. In log4net you are supposed to use _logger.IsDebugEnabled, so you end up with:

        if (_logger.IsDebugEnabled)
            _logger.Debug("Much faster");

    I want to know if there is a better solution for .NET logging that does not involve a check each time I want to log. In C++ I am allowed to do:

        LOG_DEBUG("My complicated log message " + thisFunctionTakesALongTime() + " will take no time");

    since my LOG_DEBUG macro does the log level check itself. This frees me to have a one-line log message throughout my app, which I greatly prefer. Does anyone know of a way to replicate this behavior in C#?
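
    One pattern that gets close to the C++ macro (a sketch, not part of the original question; the DebugLazy name is made up) is an extension method that takes a Func<string>, so the message is only built when the level is enabled:

        using System;
        using log4net;

        public static class LoggerExtensions
        {
            // Hypothetical helper: the delegate is invoked only when Debug is enabled,
            // so expensive message construction is skipped below the threshold.
            public static void DebugLazy(this ILog logger, Func<string> messageBuilder)
            {
                if (logger.IsDebugEnabled)
                    logger.Debug(messageBuilder());
            }
        }

        // Usage: thisFunctionTakesALongTime() runs only when Debug logging is on.
        // _logger.DebugLazy(() => "My complicated log message " + thisFunctionTakesALongTime());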

    Read the article

  • Web service performance testing plan, Microsoft .NET WS, SQL

    - by zxed
    Trying to answer a question to come up with a testing plan. It has to do with a website and/or web service that queries a SQL Server to get data and display it to the user.

    * The solution must be able to handle an estimated 2,000 users, approximately 700 concurrent users, and 10,000+ website hits a month. Database calls should handle 100,000 queries via the website/web service a month. The system is used at multiple times during a 24-hour period; however, networking and bandwidth traffic decreases after 5 pm.
    * Two Windows 2003 servers are used, one for the web, another for SQL. Both are located in the same room. User access is varied and users can be far/near (it's a centralized system); users access via www.

    Read the article

  • mysql query performance help

    - by Stefano
    Hi, I have a quite large table storing words contained in email messages:

        mysql> explain t_message_words;
        +----------------+---------+------+-----+---------+----------------+
        | Field          | Type    | Null | Key | Default | Extra          |
        +----------------+---------+------+-----+---------+----------------+
        | mwr_key        | int(11) | NO   | PRI | NULL    | auto_increment |
        | mwr_message_id | int(11) | NO   | MUL | NULL    |                |
        | mwr_word_id    | int(11) | NO   | MUL | NULL    |                |
        | mwr_count      | int(11) | NO   |     | 0       |                |
        +----------------+---------+------+-----+---------+----------------+

    The table contains about 100M rows. mwr_message_id is a FK to the messages table, mwr_word_id is a FK to the words table, and mwr_count is the number of occurrences of word mwr_word_id in message mwr_message_id. To calculate the most used words, I use the following query:

        SELECT SUM(mwr_count) AS word_count, mwr_word_id
        FROM t_message_words
        GROUP BY mwr_word_id
        ORDER BY word_count DESC
        LIMIT 100;

    It runs almost forever (more than half an hour on the test server):

        mysql> show processlist;
        +----+------+----------------+--------+---------+------+----------------------+----------------------------------------------------
        | Id | User | Host           | db     | Command | Time | State                | Info
        +----+------+----------------+--------+---------+------+----------------------+----------------------------------------------------
        | 41 | root | localhost:3148 | tst_db | Query   | 1955 | Copying to tmp table | SELECT SUM(mwr_count) AS word_count, mwr_word_id FROM t_message_words GROUP BY mwr_word_id |
        +----+------+----------------+--------+---------+------+----------------------+----------------------------------------------------
        3 rows in set (0.00 sec)

    Is there anything I can do to "speed up" the query (apart from adding more RAM, more CPU, faster disks)? Thank you in advance, stefano
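
    One option (a suggestion, not from the original post) is a covering composite index so the GROUP BY can be resolved by scanning the index instead of the full rows:

        -- Composite index: grouping column first, then the summed column,
        -- so the query can be answered from the index alone.
        ALTER TABLE t_message_words ADD INDEX idx_word_count (mwr_word_id, mwr_count);

    Building the index on a 100M-row table is itself expensive, and the ORDER BY on the aggregated sum still needs a sort of the per-word totals, so the other common route is a summary table (word_id, total_count) maintained by triggers or a periodic batch job.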

    Read the article

  • Performance issue on Android's MapView Navigation-App

    - by poeschlorn
    Hey guys, I've got a question on making a navigation app faster and more stable. The basic layer of my app is a simple MapView, covered with several overlays (2 markers for start and destination and one for the route). My idea is to implement a thread to display the route, so that the app won't hang up during the calculation of a more complex route (like it does right now). After implementing the thread, no updates are shown any more; maybe you can help me with a short glance at an excerpt of my code below:

        private class MyLocationListener implements LocationListener {
            @Override
            public void onLocationChanged(Location loc) {
                posUser = new GeoPoint((int) (loc.getLatitude() * 1E6),
                                       (int) (loc.getLongitude() * 1E6));
                new Thread() {
                    public void run() {
                        mapView.invalidate();
                        // Erase old overlays
                        mapView.getOverlays().clear();
                        // Draw updated overlay elements and adjust basic map settings
                        updateText();
                        if (firstRefresh) {
                            adjustMap();
                            firstRefresh = false;
                        }
                        getAndPaintRoute();
                        drawMarkers();
                    }
                };
            }

    Some features have been summarized in a method like "drawMarkers()" or "updateText()"... (they don't need any more attention ;-))
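
    Two things stand out in the snippet (observations, not part of the original post): the Thread is constructed but never started, and Android views such as MapView may only be touched from the UI thread. A sketch of one way to split the work, where calculateRoute() and drawRoute() are hypothetical stand-ins for the slow route calculation and the overlay drawing:

        new Thread(new Runnable() {
            @Override
            public void run() {
                // Heavy work off the UI thread (e.g. fetching/calculating the route).
                final Object route = calculateRoute();   // hypothetical slow call

                // Hand the drawing back to the UI thread.
                mapView.post(new Runnable() {
                    @Override
                    public void run() {
                        mapView.getOverlays().clear();
                        drawRoute(route);    // hypothetical: add the route overlay
                        drawMarkers();
                        mapView.invalidate();
                    }
                });
            }
        }).start();   // the thread has to be started explicitly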

    Read the article

  • Odd performance with C# Asynchronous server socket

    - by The.Anti.9
    I'm working on a web server in C# and I have it running on asynchronous socket calls. The weird thing is that for some reason, when you start loading pages, the 3rd request is where the browser won't connect. It just keeps saying "Connecting..." and doesn't ever stop. If I hit stop and then refresh, it will load again, but if I try another time after that it does the thing where it doesn't load again, and it continues in that cycle. I'm not really sure what is making it do that. The code is kind of hacked together from a couple of examples and some old code I had. Any miscellaneous tips would be helpful as well. Here's my little Listener class that handles everything (pasted here; I thought it might be easier to read this way):

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;
        using System.Threading;

        namespace irek.Server
        {
            public class Listener
            {
                private int port;
                private Socket server;
                private Byte[] data = new Byte[2048];
                static ManualResetEvent allDone = new ManualResetEvent(false);

                public Listener(int _port)
                {
                    port = _port;
                }

                public void Run()
                {
                    server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                    IPEndPoint iep = new IPEndPoint(IPAddress.Any, port);
                    server.Bind(iep);
                    Console.WriteLine("Server Initialized.");
                    server.Listen(5);
                    Console.WriteLine("Listening...");
                    while (true)
                    {
                        allDone.Reset();
                        server.BeginAccept(new AsyncCallback(AcceptCon), server);
                        allDone.WaitOne();
                    }
                }

                private void AcceptCon(IAsyncResult iar)
                {
                    allDone.Set();
                    Socket s = (Socket)iar.AsyncState;
                    Socket s2 = s.EndAccept(iar);
                    SocketStateObject state = new SocketStateObject();
                    state.workSocket = s2;
                    s2.BeginReceive(state.buffer, 0, SocketStateObject.BUFFER_SIZE, 0, new AsyncCallback(Read), state);
                }

                private void Read(IAsyncResult iar)
                {
                    try
                    {
                        SocketStateObject state = (SocketStateObject)iar.AsyncState;
                        Socket s = state.workSocket;
                        int read = s.EndReceive(iar);
                        if (read > 0)
                        {
                            state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, read));
                            if (s.Available > 0)
                            {
                                s.BeginReceive(state.buffer, 0, SocketStateObject.BUFFER_SIZE, 0, new AsyncCallback(Read), state);
                                return;
                            }
                        }
                        if (state.sb.Length > 1)
                        {
                            string requestString = state.sb.ToString();
                            // HANDLE REQUEST HERE

                            // Temporary response
                            string resp = "<h1>It Works!</h1>";
                            string head = "HTTP/1.1 200 OK\r\nContent-Type: text/html;\r\nServer: irek\r\nContent-Length:" + resp.Length + "\r\n\r\n";
                            byte[] answer = Encoding.ASCII.GetBytes(head + resp);
                            // end temp.

                            state.workSocket.BeginSend(answer, 0, answer.Length, SocketFlags.None, new AsyncCallback(Send), state.workSocket);
                        }
                    }
                    catch (Exception)
                    {
                        return;
                    }
                }

                private void Send(IAsyncResult iar)
                {
                    try
                    {
                        SocketStateObject state = (SocketStateObject)iar.AsyncState;
                        int sent = state.workSocket.EndSend(iar);
                        state.workSocket.Shutdown(SocketShutdown.Both);
                        state.workSocket.Close();
                    }
                    catch (Exception) { }
                    return;
                }
            }
        }

    And my SocketStateObject:

        public class SocketStateObject
        {
            public Socket workSocket = null;
            public const int BUFFER_SIZE = 1024;
            public byte[] buffer = new byte[BUFFER_SIZE];
            public StringBuilder sb = new StringBuilder();
        }
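
    One detail worth flagging (an observation, not part of the original post): Read passes state.workSocket as the async state to BeginSend, but Send casts iar.AsyncState back to SocketStateObject. That cast throws, the empty catch swallows the exception, and the connection is never shut down, which would leave the browser waiting. A minimal sketch of the consistent version:

        // In Read(): pass the state object itself, not the raw socket.
        state.workSocket.BeginSend(answer, 0, answer.Length, SocketFlags.None,
                                   new AsyncCallback(Send), state);

        // Send() can then keep casting iar.AsyncState to SocketStateObject and
        // shut down / close state.workSocket once EndSend has completed.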

    Read the article

  • Optimize code performance when odd/even threads are doing different things in CUDA

    - by Orion Nebula
    Hi all! I have two large vectors and I am trying to do a sort of element-wise multiplication, where an even-numbered element in the first vector is multiplied by the next odd-numbered element in the second vector, and the odd-numbered element in the first vector is multiplied by the preceding even-numbered element in the second vector. For example, if vector 1 is V1(1) V1(2) V1(3) V1(4) and vector 2 is V2(1) V2(2) V2(3) V2(4), the products are:

        V1(1) * V2(2)
        V1(3) * V2(4)
        V1(2) * V2(1)
        V1(4) * V2(3)

    I have written CUDA code to do this (Pds has the elements of the first vector in shared memory, Nds the second vector):

        // instead of using %2 .. i check for the first bit to decide if number is odd/even -- faster
        if ((tx & 0x0001) == 0x0000)
            Nds[tx+1] = Pds[tx] * Nds[tx+1];
        else
            Nds[tx-1] = Pds[tx] * Nds[tx-1];
        __syncthreads();

    Is there any way to further accelerate this code or avoid the divergence? Thanks
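
    One way to remove the branch entirely (a sketch, not from the original post): both cases write to the neighbouring index, which is tx+1 for even tx and tx-1 for odd tx, i.e. tx XOR 1, so every thread can execute the same statement:

        // Branch-free version: tx ^ 1 flips the lowest bit, giving tx+1 for even
        // threads and tx-1 for odd threads, so there is no divergent path.
        int partner = tx ^ 0x0001;
        Nds[partner] = Pds[tx] * Nds[partner];
        __syncthreads();

    Each thread still reads and writes exactly the same shared-memory location as in the original, so the result is unchanged.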

    Read the article

  • Profile memory performance for part of a Rails project

    - by Florian Pilz
    I want to profile the memory usage of an important library class of my Rails project. It uses ActiveRecord, so I need all Rails dependencies to profile it. As far as I know, I need a patched Ruby (rubygc) so script/profile and script/benchmark can track memory usage. I tried to follow this official guide to patch the source code of Ruby 1.8.6 (p399) and 1.8.7 (p248), but both fail with the following message:

        patching file gc.c
        Hunk #2 succeeded at 50 with fuzz 2 (offset 2 lines).
        Hunk #3 succeeded at 87 with fuzz 2 (offset 6 lines).
        Hunk #4 succeeded at 153 with fuzz 1 (offset 45 lines).
        Hunk #5 succeeded at 409 with fuzz 2 (offset 274 lines).
        Hunk #6 FAILED at 462.
        Hunk #7 FAILED at 506.
        Hunk #8 FAILED at 520.
        Hunk #9 FAILED at 745.
        Hunk #10 FAILED at 754.
        Hunk #11 FAILED at 923.
        Hunk #12 succeeded at 711 (offset 46 lines).
        Hunk #13 succeeded at 730 (offset 46 lines).
        Hunk #14 succeeded at 766 (offset 55 lines).
        Hunk #15 succeeded at 1428 (offset 87 lines).
        Hunk #16 succeeded at 1492 (offset 89 lines).
        Hunk #17 FAILED at 1541.
        Hunk #18 FAILED at 1551.
        Hunk #19 succeeded at 1571 (offset 91 lines).
        Hunk #20 succeeded at 1592 (offset 91 lines).
        Hunk #21 succeeded at 1601 (offset 91 lines).
        Hunk #22 succeeded at 1826 (offset 108 lines).
        Hunk #23 succeeded at 1843 (offset 108 lines).
        Hunk #24 succeeded at 1926 (offset 108 lines).
        Hunk #25 succeeded at 2118 (offset 108 lines).
        Hunk #26 succeeded at 2563 (offset 100 lines).
        Hunk #27 succeeded at 2611 with fuzz 1 (offset 102 lines).
        Hunk #28 succeeded at 2628 (offset 102 lines).
        8 out of 28 hunks FAILED -- saving rejects to file gc.c.rej
        patching file intern.h
        Hunk #1 succeeded at 268 (offset 15 lines).

    I also tried to use ruby-prof, but I always get the error "uninitialized constant RubyProf::Test". I don't know how to use the gem "memory", and neither "memprof" nor "bleak_house" could be installed successfully. If I get a patched Ruby running, I should be fine. But any other possibility to profile the memory of library classes is welcome. Thanks for helping!

    Read the article

  • XmlSerializer Performance Issue when Specifying XmlRootAttribute

    - by Dougc
    I'm currently having a really weird issue and I can't seem to figure out how to resolve it. I've got a fairly complex type which I'm trying to serialize using the XmlSerializer class. This actually functions fine and the type serializes properly, but it seems to take a very long time doing so; around 5 seconds, depending on the data in the object. After a bit of profiling I've narrowed the issue down - bizarrely - to specifying an XmlRootAttribute when calling XmlSerializer.Serialize. I do this to change the name of a collection being serialized from ArrayOf to something a bit more meaningful. Once I remove the parameter the operation is almost instant! Any thoughts or suggestions would be excellent as I'm entirely stumped on this one!
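
    For context (not part of the original question): only the XmlSerializer(Type) and XmlSerializer(Type, String) constructors reuse a cached generated assembly; the overload that takes an XmlRootAttribute generates and loads a new temporary serialization assembly on every call, which lines up with a multi-second cost. The usual workaround is to cache the serializer yourself; a minimal sketch:

        using System;
        using System.Collections.Concurrent;
        using System.Xml.Serialization;

        public static class SerializerCache
        {
            // One XmlSerializer per (type, root name) pair, created once and reused,
            // so the expensive assembly generation happens only on the first call.
            private static readonly ConcurrentDictionary<string, XmlSerializer> cache =
                new ConcurrentDictionary<string, XmlSerializer>();

            public static XmlSerializer Get(Type type, string rootName)
            {
                return cache.GetOrAdd(type.FullName + ":" + rootName,
                    _ => new XmlSerializer(type, new XmlRootAttribute(rootName)));
            }
        }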

    Read the article

  • Need advice on comparing the performance of 2 equivalent linq to sql queries

    - by uvita
    I am working on a tool to optimize LINQ to SQL queries. Basically it intercepts the LINQ execution pipeline and makes some optimizations, like for example removing a redundant join from a query. Of course, there is an overhead in the execution time before the query gets executed in the DBMS, but then the query should be processed faster. I don't want to use a SQL profiler because I know that the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently, I am using the Stopwatch class and my code looks something like this:

        var sw = new Stopwatch();
        sw.Start();
        const int amount = 100;
        for (var i = 0; i < amount; i++)
        {
            ExecuteNonOptimizedQuery();
        }
        sw.Stop();
        Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms",
            sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount);

    Basically the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query and then iterates over the results. I did this for both versions of the query, the normal one and the optimized one. I took the idea from this post from Frans Bouma. Is there any other approach or considerations I should take? Thanks in advance!

    Read the article

  • Socket ping-pong performance

    - by Kamil_H
    I have written two simple programs (tried it in C++ and C#). This is pseudo code:

        -------- Client ---------------
        for (int i = 0; i < 200.000; i++) {
            socket_send("ping")
            socket_receive(buff)
        }

        --------- Server -------------
        while (1) {
            socket_receive(buff)
            socket_send("pong")
        }

    I tried it on Windows. The execution time of the client is about 45 seconds. Can somebody explain to me why this takes so long? I understand that if there were a real network connection between client and server, the time of one 'ping-pong' would be: generate_ping + send_via_network + generate_pong + send_via_network, but here everything is done in 'local' mode. Is there any way to make this inter-process ping-pong faster using network sockets? (I'm not asking about shared memory, for example :) )
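
    Two hedged observations (not from the original post): 45 seconds for 200,000 round trips is roughly 0.2 ms per ping-pong, which is close to the inherent cost of two blocking send/receive pairs plus the context switches on loopback, so batching several pings per send is usually the biggest win. A cheap experiment is also to rule out Nagle's algorithm interacting with delayed ACKs by disabling it on both sockets; a C# sketch:

        using System.Net.Sockets;

        // Disable Nagle's algorithm (TCP_NODELAY) so small "ping"/"pong" packets
        // are sent immediately instead of being held back for coalescing.
        var client = new TcpClient();
        client.NoDelay = true;
        client.Connect("127.0.0.1", 5000);   // assumed host/port of the test server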

    Read the article

  • Performance of looping over an Unboxed array in Haskell

    - by Joey Adams
    First of all, it's great. However, I came across a situation where my benchmarks turned up weird results. I am new to Haskell, and this is the first time I've gotten my hands dirty with mutable arrays and monads. The code below is based on this example. I wrote a generic monadic for function that takes numbers and a step function rather than a range (like forM_ does). I compared using my generic for function (Loop A) against embedding an equivalent recursive function (Loop B). Having Loop A is noticeably faster than having Loop B. Weirder, having both Loop A and B together is faster than having Loop B by itself (but slightly slower than Loop A by itself). Some possible explanations I can think of for the discrepancies (note that these are just guesses):

    1. Something I haven't learned yet about how Haskell extracts results from monadic functions.
    2. Loop B faults the array in a less cache-efficient manner than Loop A. Why?
    3. I made a dumb mistake; Loop A and Loop B are actually different.

    Note that in all 3 cases of having either or both Loop A and Loop B, the program produces the same output. Here is the code. I tested it with ghc -O2 for.hs using GHC version 6.10.4.

        import Control.Monad
        import Control.Monad.ST
        import Data.Array.IArray
        import Data.Array.MArray
        import Data.Array.ST
        import Data.Array.Unboxed

        for :: (Num a, Ord a, Monad m) => a -> a -> (a -> a) -> (a -> m b) -> m ()
        for start end step f = loop start
          where
            loop i
              | i <= end = do
                  f i
                  loop (step i)
              | otherwise = return ()

        primesToNA :: Int -> UArray Int Bool
        primesToNA n = runSTUArray $ do
            a <- newArray (2,n) True :: ST s (STUArray s Int Bool)
            let sr = floor . (sqrt::Double->Double) . fromIntegral $ n+1

            -- Loop A
            for 4 n (+ 2) $ \j -> writeArray a j False

            -- Loop B
            let f i
                  | i <= n = do
                      writeArray a i False
                      f (i+2)
                  | otherwise = return ()
             in f 4

            forM_ [3,5..sr] $ \i -> do
                si <- readArray a i
                when si $
                    forM_ [i*i,i*i+i+i..n] $ \j -> writeArray a j False

            return a

        primesTo :: Int -> [Int]
        primesTo n = [i | (i,p) <- assocs . primesToNA $ n, p]

        main = print $ primesTo 30000000

    Read the article

  • Help required to increase the performance of a MySQL query

    - by Joseph
    Hi all, I am using the following query in MySQL to fetch data from a table. It's taking too long because of the conditional check within the aggregate function. Please help me make it faster:

        SELECT testcharfield
             , SUM(IF(Type = 'pi', quantity, 0)) AS OB
             , SUM(IF(Type = 'pe', quantity, 0)) AS CB
        FROM Table1
        WHERE sequenceID = 6107
        GROUP BY testcharfield
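
    A hedged suggestion (not from the original post): the IF() inside SUM() is cheap per row; the cost is usually in how many rows have to be visited, so a composite index that covers the filter, the grouping column and the aggregated columns lets MySQL answer the query from the index alone:

        -- Covering index: filter column first, then the GROUP BY column,
        -- then the columns read by the aggregates.
        ALTER TABLE Table1
          ADD INDEX idx_seq_char_type (sequenceID, testcharfield, Type, quantity);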

    Read the article

  • XQuery performance in SQL Server

    - by Carl Hörberg
    Why does this quite simple XQuery take 10 minutes to execute in SQL Server (the 2 MB XML document is stored in one column), compared to 14 seconds when using Oxygen/file-based querying?

        SELECT model.query('
            declare default element namespace "http://www.sbml.org/sbml/level2";
            for $all_species in //species,
                $all_reactions in //reaction
            where data($all_species/@compartment) = "plasma_membrane"
              and $all_reactions/listOfReactants/speciesReference/@species = $all_species/@id
            return <result>{data($all_species/@id)}</result>')
        FROM sbml;
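
    One thing that often helps (a suggestion, not from the original post) is giving the engine a primary XML index, so the 2 MB document is not shredded from scratch for every path step of the query. This assumes the sbml table has a clustered primary key and that the XML column is named model:

        -- Primary XML index over the model column (requires a clustered PK on sbml).
        CREATE PRIMARY XML INDEX IX_sbml_model ON sbml (model);

        -- Optional secondary index that speeds up path-based predicates such as //species/@id.
        CREATE XML INDEX IX_sbml_model_path ON sbml (model)
            USING XML INDEX IX_sbml_model FOR PATH;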

    Read the article

  • Javascript fine grain performance tweaking

    - by thermal7
    I have been writing my first jQuery plugin and am struggling to find a means to time how long different pieces of code take to run. I can use Firebug and console.time/profile; however, it seems that because my code executes so fast I get no results with profile, and time spits out 0ms. (http://stackoverflow.com/questions/2690697/firebug-profiling-issue-no-activity-to-profile/2690846#2690846) Is there a way to get the time at a greater level of detail than milliseconds in JavaScript?
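
    A common workaround (not from the original question) when the timer only resolves milliseconds is to run the code under test many times and divide, so sub-millisecond costs become measurable:

        // Run the snippet N times and report the average per-call cost in ms.
        function timeIt(fn, iterations) {
            var start = new Date().getTime();
            for (var i = 0; i < iterations; i++) {
                fn();
            }
            var elapsed = new Date().getTime() - start;
            return elapsed / iterations;
        }

        // Example (assumed markup): average cost of one hide/show cycle over 1000 runs.
        // console.log(timeIt(function () { $(".item").hide().show(); }, 1000));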

    Read the article

  • Mysql select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags for 3 different loan types. Everything went pretty well during testing: query execution time was perfect. But today, when I transferred the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, and it happens frequently, so it can't keep up and it's hella slow. The table that is used to store the data has approximately 2 million records, and each select looks like this:

        SELECT *
        FROM products_loans
        WHERE KOD IN ("X17/Q30-10", "X17/12", "X17/5-24")
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;

    3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, thus 3 rows are returned. But since I need to display 10 products per page, I need to run it through a modified select using OR, since I didn't find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referencing post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE; KOD and CENA_OD/CENA_DO are indexed via INDEX.

        CREATE TABLE `products_loans` (
          `KOEF_ID` bigint(20) NOT NULL,
          `KOD` varchar(30) NOT NULL,
          `AKONTACIA` int(11) NOT NULL,
          `POCET_SPLATOK` int(11) NOT NULL,
          `koeficient` decimal(10,2) NOT NULL default '0.00',
          `CENA_OD` decimal(10,2) default NULL,
          `CENA_DO` decimal(10,2) default NULL,
          `PREDAJNA_CENA` decimal(10,2) default NULL,
          `AKONTACIA_SUMA` decimal(10,2) default NULL,
          `TYP_VYHODY` varchar(4) default NULL,
          `stage` smallint(6) NOT NULL default '1',
          PRIMARY KEY (`KOEF_ID`),
          KEY `CENA_OD` (`CENA_OD`),
          KEY `CENA_DO` (`CENA_DO`),
          KEY `KOD` (`KOD`),
          KEY `stage` (`stage`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    And selecting all loan types and later filtering them through PHP doesn't work well either, since each type has over 50k records and the select takes too much time as well... Any ideas about improving the speed are appreciated.
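
    A hedged suggestion (not from the original post): with separate single-column indexes MySQL can use either KOD or one of the price columns, but not both together. A composite index that starts with the equality column and then covers the range column lets each of the three codes be located and range-scanned in one pass:

        -- Composite index: equality column first, then the range column(s).
        ALTER TABLE products_loans
          ADD INDEX idx_kod_cena (KOD, CENA_OD, CENA_DO);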

    Read the article

  • SQL Server Express performance issue

    - by Developer IT
    Hi folks! I know my questions will sound silly and probably nobody will have a perfect answer, but since I am at a complete dead end with the situation, it will make me feel better to post it here. So... I have a SQL Server Express database that's 500 MB. It contains 5 tables and maybe 30 stored procedures. This database is used to store articles and is used for the Developer IT web site. Normally the web pages load quickly, let's say 2 or 3 seconds, BUT the sqlserver process uses 100% of the processor for those 2 or 3 seconds. I tried to find which stored procedure was the problem and I could not find one; it seems like it is every read of the table that contains the articles (there are about 155,000 of them and 20 or so get added every 15 minutes). I added a few indexes but without luck... Is it because the table is full-text indexed? Should I order by the primary key instead of the date? I never had any problems with ordering by dates... Should I use dynamic SQL? Should I add the primary key into the URL of the articles? Should I use multiple indexes for separate columns or one big index? If you want more details or code bits, just ask for them. Basically, every little hint is much appreciated. Thanks.

    Read the article

  • mysql statement with nested SELECT - how to improve performance

    - by ernie
    This statement appears inefficient because only one out of 10 records is selected and only 1 in 100 entries contains comments. What can I do to improve it?

        $query = "SELECT A, B, C,
                    (SELECT COUNT(*) FROM comments WHERE comments.nid = header_file.nid) AS my_comment_count
                  FROM header_file
                  WHERE A = 'admin'";

    edit: I want header records even if no comments are found.
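
    One equivalent formulation (a sketch, not from the original post) replaces the correlated subquery with a LEFT JOIN plus GROUP BY; the LEFT JOIN keeps header records that have no comments, and COUNT(c.nid) returns 0 for them:

        SELECT h.A, h.B, h.C, COUNT(c.nid) AS my_comment_count
        FROM header_file h
        LEFT JOIN comments c ON c.nid = h.nid
        WHERE h.A = 'admin'
        GROUP BY h.nid, h.A, h.B, h.C

    Either form benefits from an index on comments(nid) and one on header_file(A).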

    Read the article

< Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >