Search Results

Search found 15401 results on 617 pages for 'memory optimization'.

Page 190/617 | < Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >

  • Performance considerations of a large hard-coded array in the .cs file

    - by terence
    I'm writing some code where performance is important. In one part of it, I have to compare a large set of pre-computed data against dynamic values. Currently, I'm storing that pre-computed data in a giant array in the .cs file: Data[] data = { /* my data set */ }; The data set is about 90kb, or roughly 13k elements. I was wondering if there's any downside to doing this, as opposed to loading it in from an external file? I'm not entirely sure how C# works internally, so I just wanted to be aware of any performance issues I might encounter with this method.
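
    A hedged illustration (the field name is invented, not from the original code): if the table can be a static readonly field, it is initialized once per process on first use of the declaring type rather than per instance, and the initializer data ships inside the compiled assembly, so no file I/O is needed at run time.

        // Hedged sketch; "PrecomputedData" is an illustrative name.
        // A static readonly array is built once, not once per object.
        private static readonly Data[] PrecomputedData = new Data[]
        {
            /* my data set */
        };

        // Lookups then stay in memory, e.g.:
        // Data d = PrecomputedData[index];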

    Read the article

  • How can I get a COUNT(col) ... GROUP BY to use an index?

    - by thecoop
    I've got a table (col1, col2, ...) with an index on (col1, col2, ...). The table has millions of rows in it, and I want to run a query: SELECT col1, COUNT(col2) WHERE col1 NOT IN (<couple of exclusions>) GROUP BY col1 Unfortunately, this is resulting in a full table scan of the table, which takes upwards of a minute. Is there any way of getting Oracle to use the index on the columns to return the results much faster?
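
    A hedged sketch of one rewrite worth trying (table and index names are placeholders): if col1 or col2 is nullable, Oracle cannot answer the query from the index alone; adding IS NOT NULL predicates (or declaring the columns NOT NULL) together with an index fast full scan hint often replaces the full table scan.

        -- my_table / my_idx are illustrative names, not from the question
        SELECT /*+ INDEX_FFS(t my_idx) */ col1, COUNT(col2)
        FROM   my_table t
        WHERE  col1 NOT IN ('excluded_a', 'excluded_b')
        AND    col1 IS NOT NULL
        AND    col2 IS NOT NULL
        GROUP  BY col1;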

    Read the article

  • Anything wrong with this MySQL query? Takes 10+ seconds to load

    - by user345426
    I have a search that is taking 10+ seconds to execute! Keep in mind it is also searching over 200,000 products in the database. I posted the EXPLAIN output and the MySQL query here.

    EXPLAIN output:

        id  select_type  table  type    possible_keys                                           key                        key_len  ref                                      rows  Extra
        1   SIMPLE       p      ref     PRIMARY,products_status,prod_prodid_status,product...   products_status            1        const                                    9048  Using where; Using temporary; Using filesort
        1   SIMPLE       v      ref     PRIMARY,vendors_id,vendors_vendorid                     vendors_vendorid           4        rhinomar_rhinomartnew.p.vendors_id       1
        1   SIMPLE       s      ref     products_id                                             products_id                4        rhinomar_rhinomartnew.p.products_id      1
        1   SIMPLE       pd     ref     PRIMARY,products,prod_desc_prodid_prodname              prod_desc_prodid_prodname  4        rhinomar_rhinomartnew.p.products_id      1
        1   SIMPLE       p2c    ref     PRIMARY,ptc_catidx                                      PRIMARY                    4        rhinomar_rhinomartnew.p.products_id      1     Using where; Using index
        1   SIMPLE       c      eq_ref  PRIMARY                                                 PRIMARY                    4        rhinomar_rhinomartnew.p2c.categories_id  1     Using where

    MySQL query:

        SELECT p.products_id, p.products_image, p.products_price, p.products_weight,
               p.products_unit_quantity, s.specials_new_products_price, s.status,
               pd.products_name, pd.products_img_alt
        FROM products p
        LEFT JOIN vendors v ON v.vendors_id = p.vendors_id
        LEFT JOIN specials s ON s.products_id = p.products_id
        LEFT JOIN products_description pd ON pd.products_id = p.products_id
        LEFT JOIN products_to_categories p2c ON p2c.products_id = p.products_id
        LEFT JOIN categories c ON c.categories_id = p2c.categories_id
        WHERE ( (pd.products_name LIKE '%apparel%')
                OR p2c.categories_id IN (773, 132, 135, 136, 119, 122, 124, 125, 126, 1749, 1753, 1747, 123, 127, 130, 131, 178, 137, 140, 164, 165, 166, 167, 168, 169, 832, 2045)
                OR p.products_id = 'apparel'
                OR p.products_model = 'apparel'
                OR CONCAT(v.vendors_prefix, '-') = 'apparel'
                OR CONCAT(v.vendors_prefix, '-', p.products_id) = 'apparel' )
          AND p.products_status = '1'
          AND c.categories_status = '1'
        GROUP BY p.products_id
        ORDER BY pd.products_name
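
    One hedged line of attack, not the site's actual fix: the OR mixes predicates on several joined tables, which pushes MySQL into the filesort/temporary-table plan; splitting the branches into separate indexed queries combined with UNION often helps (the leading-wildcard LIKE '%apparel%' would still need a FULLTEXT index to be genuinely fast). Column list shortened for illustration:

        SELECT p.products_id, pd.products_name
        FROM products p
        JOIN products_description pd ON pd.products_id = p.products_id
        WHERE pd.products_name LIKE '%apparel%'
          AND p.products_status = '1'
        UNION
        SELECT p.products_id, pd.products_name
        FROM products p
        JOIN products_description pd ON pd.products_id = p.products_id
        JOIN products_to_categories p2c ON p2c.products_id = p.products_id
        WHERE p2c.categories_id IN (773, 132, 135, 136)
          AND p.products_status = '1';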

    Read the article

  • How do I release an object allocated in a different AutoReleasePool ?

    - by ajcaruana
    Hi, I have a problem with memory management in Objective-C. Say I have a method that allocates an object and stores the reference to this object as a member of the class. If I run through the same function a second time, I need to release this first object before creating a new one to replace it. Supposing that the first line of the function is:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    This means that a different auto-release pool will be in place. The code to allocate the object is as follows:

        if (m_object != nil)
            [m_object release];
        m_object = [[MyClass alloc] init];
        [m_object retain];

    The problem is that the program crashes when running the last line of the method:

        [pool release];

    What am I doing wrong? How can I fix this? Regards, Alan
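
    A hedged sketch of the usual pre-ARC pattern, assuming m_object is an ordinary owned ivar: alloc/init already hands back an owned object (retain count 1), so the extra retain only leaks, and nothing here needs to touch the method's autorelease pool at all.

        - (void)rebuildObject {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            [m_object release];                 // sending release to nil is a no-op
            m_object = [[MyClass alloc] init];  // owned by self; no extra retain needed

            [pool drain];                       // only autoreleased temporaries die here
        }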

    Read the article

  • How to make an if-elif-else statement in Python more space-saving?

    - by Neverland
    I have a lot of if-elif-else statements in my code:

        if message == '0' or message == '3' or message == '5' or message == '7':
            ...
        elif message == '1' or message == '2' or message == '4' or message == '6' or message == '8':
            ...
        else:
            ...

    Is it possible to format this in a more space-saving way? I tried it this way:

        if message == '0' or '3' or '5' or '7':
            ...
        elif message == '1' or '2' or '4' or '6' or '8':
            ...
        else:
            ...

    But without success.
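
    A hedged sketch of the usual idiom: the attempted version fails because message == '0' or '3' evaluates '3' on its own, and a non-empty string is always truthy; a membership test keeps the comparison against message itself.

        if message in ('0', '3', '5', '7'):
            ...
        elif message in ('1', '2', '4', '6', '8'):
            ...
        else:
            ...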

    Read the article

  • Thin down jQuery

    - by Taylor Satula
    Hi, I have been optimizing my website, but the one problem that stands in my way is all the jQuery functionality that I do not use. The only parts I use are for a smooth page scroller. It just seems like such a waste of download time. My question is: is there any script or program that will remove the jQuery code that I do not need and leave the one or two functions that I do need?
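
    A hedged alternative rather than a jQuery-stripping tool: if smooth scrolling is the only feature in use, a small hand-rolled function removes the dependency entirely. The easing and timing below are illustrative, not from the site's code.

        // Scrolls the window to targetY over `duration` milliseconds.
        function smoothScrollTo(targetY, duration) {
            var startY = window.pageYOffset || document.documentElement.scrollTop;
            var startTime = new Date().getTime();
            (function step() {
                var t = Math.min((new Date().getTime() - startTime) / duration, 1);
                var eased = t * (2 - t);                         // simple ease-out curve
                window.scrollTo(0, Math.round(startY + (targetY - startY) * eased));
                if (t < 1) setTimeout(step, 15);
            })();
        }

        // usage: smoothScrollTo(0, 400);   // back to the top in roughly 400 ms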

    Read the article

  • Creating objects makes the VM faster?

    - by Sudhir Jonathan
    Look at this piece of code:

        MessageParser parser = new MessageParser();
        for (int i = 0; i < 10000; i++) {
            parser.parse(plainMessage, user);
        }

    For some reason, it runs SLOWER (by about 100ms) than:

        for (int i = 0; i < 10000; i++) {
            MessageParser parser = new MessageParser();
            parser.parse(plainMessage, user);
        }

    Any ideas why? The tests were repeated a lot of times, so it wasn't just random. How could creating an object 10000 times be faster than creating it once?

    Read the article

  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website gets completely paralyzed. Looking at SHOW FULL PROCESSLIST;, I've noticed that when this happens, there is a specific query that is "Copying to tmp table" for a very long time (sometimes 350 seconds), and almost all the other queries are "Locked". The part I don't understand is that 90% of the time, this query runs fine. I see it going through in the process list and it finishes pretty quickly most of the time. This query is called by an AJAX request on our homepage to display product recommendations based on your browsing history (a la Amazon). Just sometimes, randomly (but too often), it gets stuck at "Copying to tmp table". Here is a caught instance of the query that had been running for 109 seconds when I looked:

        SELECT DISTINCT product_product.id, product_product.name, product_product.retailprice,
               product_product.imageurl, product_product.thumbnailurl, product_product.msrp
        FROM product_product, product_xref, product_viewhistory
        WHERE (
                (product_viewhistory.productId = product_xref.product_id_1 AND product_xref.product_id_2 = product_product.id)
             OR (product_viewhistory.productId = product_xref.product_id_2 AND product_xref.product_id_1 = product_product.id)
              )
          AND product_product.outofstock = 'N'
          AND product_viewhistory.cookieId = '188af1efad392c2adf82'
          AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172)
        ORDER BY product_xref.hits DESC
        LIMIT 10

    Of course the "cookieId" and the list of "productId" values change dynamically depending on the request. I use PHP with PDO.
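
    A hedged guess at a rewrite, not the production fix: the OR across product_xref's two columns tends to defeat the indexes and force the temporary table; two UNIONed halves, each free to use its own index on product_id_1 / product_id_2, with the ORDER BY applied to the combined result, is a common workaround. The id list is shortened for illustration.

        (SELECT DISTINCT pp.id, pp.name, pp.retailprice, pp.imageurl, pp.thumbnailurl, pp.msrp, x.hits
         FROM product_viewhistory vh
         JOIN product_xref x     ON vh.productId = x.product_id_1
         JOIN product_product pp ON x.product_id_2 = pp.id
         WHERE vh.cookieId = '188af1efad392c2adf82'
           AND vh.productId IN (24976, 25873, 26067)
           AND pp.outofstock = 'N')
        UNION
        (SELECT DISTINCT pp.id, pp.name, pp.retailprice, pp.imageurl, pp.thumbnailurl, pp.msrp, x.hits
         FROM product_viewhistory vh
         JOIN product_xref x     ON vh.productId = x.product_id_2
         JOIN product_product pp ON x.product_id_1 = pp.id
         WHERE vh.cookieId = '188af1efad392c2adf82'
           AND vh.productId IN (24976, 25873, 26067)
           AND pp.outofstock = 'N')
        ORDER BY hits DESC
        LIMIT 10;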

    Read the article

  • Fast lookup of an object by a string property

    - by Andrew Kalashnikov
    Hello, colleagues. My task is to quickly find an object by its string property. The object:

        class DicDomain {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
        }

    For storing my objects I currently use a List<T> named dictionary, where T is DicDomain. I've got 5-10 such lists, each containing about 500-20,000 items. The task is to find objects by Name. I use this code now:

        List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase));

    I've got some questions: Is my search speed optimal? I think not. Is a List a good data structure for this task? What about a hashtable or a sorted collection? For the Find method, maybe I should use string interning? I don't have much experience with these tasks. Can you give me good advice for increasing performance? Thanks
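
    A hedged sketch of one option: index each list once into a Dictionary keyed on Name (case-insensitive), so a lookup is a hash probe instead of a scan over 20,000 items. The variable names are illustrative.

        // requires: using System; using System.Collections.Generic;
        Dictionary<string, List<DicDomain>> byName =
            new Dictionary<string, List<DicDomain>>(StringComparer.OrdinalIgnoreCase);

        foreach (DicDomain d in dictionary)
        {
            List<DicDomain> bucket;
            if (!byName.TryGetValue(d.Name, out bucket))
                byName[d.Name] = bucket = new List<DicDomain>();
            bucket.Add(d);                      // keeps duplicates with the same Name together
        }

        List<DicDomain> entities;
        byName.TryGetValue(word, out entities); // replaces the FindAll scan with an O(1) lookup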

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas and the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally, every three hours. Each new user to the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady. The code hasn't been written, yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records of this size may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects? I've never worked at this magnitude before, and for all I know, this will be trivial to the server and not pose much of an issue. As long as ALL records are processed before the next three hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?
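
    A hedged sketch of the batched cron approach, with invented table and column names: each run grabs a fixed-size slice of unprocessed records, writes its results, and stops once everything due in the current window is done, so no single pass holds the database for long.

        <?php
        // Hedged sketch of one cron-driven pass; "records", "processed_at", run_formulas()
        // are placeholders for the real schema and the real computation.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $batchSize = 1000;

        do {
            $stmt = $pdo->prepare(
                "SELECT id, value FROM records
                 WHERE processed_at IS NULL OR processed_at < NOW() - INTERVAL 3 HOUR
                 ORDER BY id
                 LIMIT $batchSize");
            $stmt->execute();
            $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

            $update = $pdo->prepare(
                "UPDATE records SET result = ?, processed_at = NOW() WHERE id = ?");
            foreach ($rows as $row) {
                $result = run_formulas($row['value']);   // the (potentially complicated) formulas
                $update->execute(array($result, $row['id']));
            }

            sleep(1);   // brief pause so the batch doesn't starve normal site queries
        } while (count($rows) === $batchSize);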

    Read the article

  • How to get REALLY fast Python over a simple loop

    - by totallymike
    I'm working on a SPOJ problem, INTEST. The goal is to specify the number of test cases (n) and a divisor (k), then feed your program n numbers. The program will accept each number on a newline of stdin and, after receiving the nth number, will tell you how many were divisible by k. The only challenge in this problem is getting your code to be FAST, because k can be anything up to 10^7 and the test cases can be as high as 10^9. I'm trying to write it in Python and having trouble speeding it up. Any ideas?

        import sys

        first_in = raw_input()
        thing = first_in.split()
        n = int(thing[0])
        k = int(thing[1])
        total = 0
        i = 0

        for line in sys.stdin:
            t = int(line)
            if t % k == 0:
                total += 1

        print total
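
    A hedged sketch of the usual speed-up for this kind of problem: read all of stdin in one gulp and split it, which avoids per-line interpreter overhead (Python 2, to match the original).

        import sys

        def main():
            data = sys.stdin.read().split()
            k = int(data[1])
            # data[0] is n; every token after the first two is a test value
            print sum(1 for tok in data[2:] if int(tok) % k == 0)

        main()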

    Read the article

  • Execute a method as few times as possible - PHP

    - by serhio
    I have a site in multiple languages. I have a method that returns today's exchange rates in an array, and I then display those rates in a table.

        // --- en/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        // --- fr/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        ...
        // somewhere in the page
        <td><?php echo $currencies["eur"]["today"]; ?></td>

    So, every time I load en/, fr/, or another language, I request the exchange rates from an external site. Can I optimize this behavior (reading once per day or session)? Maybe store a global variable and check the date of the last update?
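
    A hedged sketch of a file-based cache that refreshes once per day; the cache path is illustrative, and ReadExchangeRates() is the existing remote call.

        <?php
        function ReadExchangeRatesCached() {
            $cacheFile = __DIR__ . '/rates.cache.json';     // illustrative location
            if (file_exists($cacheFile) && filemtime($cacheFile) >= strtotime('today')) {
                return json_decode(file_get_contents($cacheFile), true);
            }
            $rates = ReadExchangeRates();                   // hit the external site at most once a day
            file_put_contents($cacheFile, json_encode($rates));
            return $rates;
        }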

    Read the article

  • Using a custom generic collection that is faster than List<> for object lookups

    - by Kaminari
    I'm iterating through a List<> to find a matching element. The problem is that the object has only 2 significant values, Name and Link (both strings), but it has some other values which I don't want to compare. I'm thinking about using something like HashSet (which is exactly what I'm searching for -- fast) from .NET 3.5, but the target framework has to be 2.0. There is something called Power Collections here: http://powercollections.codeplex.com/ -- should I use that? But maybe there is another way? If not, can you suggest a suitable custom collection?
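
    If Name and Link together identify a match, a hedged sketch for .NET 2.0 is a Dictionary keyed on just those two values; MyItem and the variable names stand in for the real types.

        // requires: using System; using System.Collections.Generic;
        Dictionary<string, MyItem> index =
            new Dictionary<string, MyItem>(StringComparer.OrdinalIgnoreCase);

        foreach (MyItem item in items)
        {
            // composite key built from the two significant fields
            index[item.Name + "\n" + item.Link] = item;
        }

        MyItem match;
        if (index.TryGetValue(searchName + "\n" + searchLink, out match))
        {
            // found in O(1) instead of scanning the List<>
        }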

    Read the article

  • Java - Optimize finding a string in a list

    - by Mark
    I have an ArrayList of objects where each object contains a string 'word' and a date. I need to check whether the date has passed for a list of 500 words. The ArrayList could contain up to a million words and dates. The dates I store as integers, so the problem I have is finding the word I am looking for in the ArrayList. Is there a way to make this faster? In Python I have a dict, and mWords['foo'] is a simple lookup without looping through the whole 1 million items in the mWords array. Is there something like this in Java?

        for (int i = 0; i < mWords.size(); i++) {
            if (word == mWords.get(i).word) {
                mLastFindIndex = i;
                return mWords.get(i);
            }
        }
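
    A hedged sketch of the dict-style lookup in Java; WordEntry stands in for the real element class. (As an aside, word == mWords.get(i).word compares references -- .equals is usually what's intended.)

        // requires: import java.util.HashMap; import java.util.Map;
        // build once, reuse for all 500 lookups
        Map<String, WordEntry> byWord = new HashMap<String, WordEntry>(mWords.size() * 2);
        for (WordEntry e : mWords) {
            byWord.put(e.word, e);              // assumes each word appears once
        }

        WordEntry hit = byWord.get("foo");      // constant-time, like Python's mWords['foo']
        if (hit != null && hit.date < today) {
            // the date for "foo" has passed
        }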

    Read the article

  • C++ pimpl idiom wastes an instruction vs. C style?

    - by Rob
    (Yes, I know that one machine instruction usually doesn't matter. I'm asking this question because I want to understand the pimpl idiom and use it in the best possible way, and because sometimes I do care about one machine instruction.) In the sample code below, there are two classes, Thing and OtherThing. Users would include "thing.hh". Thing uses the pimpl idiom to hide its implementation. OtherThing uses a C style: non-member functions that return and take pointers. This style produces slightly better machine code. I'm wondering: is there a way to use C++ style (i.e., make the functions into member functions) and still save the machine instruction? I like this style because it doesn't pollute the namespace outside the class. Note: I'm only looking at calling member functions (in this case, calc). I'm not looking at object allocation. Below are the files, commands, and the machine code, on my Mac.

        // thing.hh:
        class ThingImpl;

        class Thing {
            ThingImpl *impl;
        public:
            Thing();
            int calc();
        };

        class OtherThing;
        OtherThing *make_other();
        int calc(OtherThing *);

        // thing.cc:
        #include "thing.hh"

        struct ThingImpl {
            int x;
        };

        Thing::Thing() {
            impl = new ThingImpl;
            impl->x = 5;
        }

        int Thing::calc() {
            return impl->x + 1;
        }

        struct OtherThing {
            int x;
        };

        OtherThing *make_other() {
            OtherThing *t = new OtherThing;
            t->x = 5;
            return t;
        }

        int calc(OtherThing *t) {
            return t->x + 1;
        }

        // main.cc (just to test the code actually works...)
        #include "thing.hh"
        #include <cstdio>

        int main() {
            Thing *t = new Thing;
            printf("calc: %d\n", t->calc());
            OtherThing *t2 = make_other();
            printf("calc: %d\n", calc(t2));
        }

        # Makefile:
        all: main

        thing.o : thing.cc thing.hh
            g++ -fomit-frame-pointer -O2 -c thing.cc

        main.o : main.cc thing.hh
            g++ -fomit-frame-pointer -O2 -c main.cc

        main: main.o thing.o
            g++ -O2 -o $@ $^

        clean:
            rm *.o
            rm main

    Run make and then look at the machine code. On the Mac I use otool -tv thing.o | c++filt. On Linux I think it's objdump -d thing.o. Here is the relevant output:

        Thing::calc():
        0000000000000000  movq (%rdi),%rax
        0000000000000003  movl (%rax),%eax
        0000000000000005  incl %eax
        0000000000000007  ret

        calc(OtherThing*):
        0000000000000010  movl (%rdi),%eax
        0000000000000012  incl %eax
        0000000000000014  ret

    Notice the extra instruction because of the pointer indirection. The first function looks up two fields (impl, then x), while the second only needs to get x. What can be done?

    Read the article

  • Optimize a string.Format + Replace

    - by acidzombie24
    I have this function. The Visual Studio profiler marked the line with string.Format as hot, and it's where I spend much of my time. How can I write this loop more efficiently?

        public string EscapeNoPredicate(string sz)
        {
            var s = new StringBuilder(sz);
            s.Replace(sepStr, sepStr + sepStr);
            foreach (char v in IllegalChars)
            {
                string s2 = string.Format("{0}{1:X2}", seperator, (Int16)v);
                s.Replace(v.ToString(), s2);
            }
            return s.ToString();
        }
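
    A hedged sketch of one refactoring, assuming IllegalChars and seperator don't change after construction: build the replacement strings once and reuse them, so string.Format runs once per illegal character for the lifetime of the object rather than once per call.

        // requires: using System; using System.Collections.Generic; using System.Text;
        private Dictionary<char, string> escapeMap;

        private Dictionary<char, string> EscapeMap
        {
            get
            {
                if (escapeMap == null)
                {
                    escapeMap = new Dictionary<char, string>();
                    foreach (char v in IllegalChars)
                        escapeMap[v] = string.Format("{0}{1:X2}", seperator, (Int16)v);
                }
                return escapeMap;
            }
        }

        public string EscapeNoPredicate(string sz)
        {
            var s = new StringBuilder(sz);
            s.Replace(sepStr, sepStr + sepStr);
            foreach (KeyValuePair<char, string> kv in EscapeMap)
                s.Replace(kv.Key.ToString(), kv.Value);
            return s.ToString();
        }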

    Read the article

  • MySQL Datefields: duplicate or calculate?

    - by Konerak
    We are using a table with a structure imposed upon us more than 10 years ago. We are allowed to add columns, but urged not to change existing columns. Certain columns are meant to represent dates, but are stored in different formats. Amongst others:

        * CHAR(6): YYMMDD
        * CHAR(6): DDMMYY
        * CHAR(8): YYYYMMDD
        * CHAR(8): DDMMYYYY
        * DATE
        * DATETIME

    Since we now would like to do some more complex queries using advanced date functions, my manager proposed to duplicate those problem columns into a proper FORMATTED_OLDCOLUMNNAME column using a DATE or DATETIME format. Is this the way to go? Couldn't we just use the STR_TO_DATE function each time we access the columns? To avoid every query having to copy-paste the function, I could still work with a view or a stored procedure, but duplicating data to avoid recalculation sounds wrong. Solutions I see (I guess I prefer 2.2.1):

        1. Physically duplicate columns
           1.1 In the same table
               1.1.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
               1.1.2 Maintained by a trigger on each modification
           1.2 In a separate table
               1.2.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
               1.2.2 Maintained by a trigger on each modification
        2. On-demand transformation
           2.1 Each query has to perform the transformation
               2.1.1 Using copy-paste in the source code
               2.1.2 Using a library
               2.1.3 Using a STORED PROCEDURE
           2.2 A view performs the transformation
               2.2.1 A separate table replacing the entire table
               2.2.2 A separate table just adding the date-fields for the primary keys

    Am I right to say it's better to recalculate than to store? And would a view be a good solution?
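
    A hedged sketch of the on-demand route (option 2.2), with made-up table and column names: a view keeps the conversion in one place, so individual queries never repeat STR_TO_DATE.

        CREATE VIEW legacy_orders_dated AS
        SELECT t.*,
               STR_TO_DATE(t.order_yymmdd,     '%y%m%d') AS order_date,
               STR_TO_DATE(t.shipped_ddmmyyyy, '%d%m%Y') AS shipped_date
        FROM   legacy_orders t;

        -- queries then use the derived columns directly, e.g.:
        -- SELECT * FROM legacy_orders_dated WHERE order_date >= '2010-01-01';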

    Read the article

  • How do I select a random record efficiently in MySQL?

    - by user198729
    mysql> EXPLAIN SELECT * FROM urls ORDER BY RAND() LIMIT 1;

        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                           |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        |  1 | SIMPLE      | urls  | ALL  | NULL          | NULL | NULL    | NULL | 62228 | Using temporary; Using filesort |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+

    The above doesn't qualify as efficient; how should I do it properly?
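
    One hedged alternative that avoids the filesort, assuming urls has an integer primary key id without huge gaps (the distribution is only approximately uniform when there are gaps):

        SELECT u.*
        FROM urls u
        JOIN (SELECT FLOOR(MIN(id) + RAND() * (MAX(id) - MIN(id))) AS rand_id
              FROM urls) pick
          ON u.id >= pick.rand_id
        ORDER BY u.id
        LIMIT 1;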

    Read the article

  • Is COUNT(*) really expensive?

    - by Anil Namde
    I have a page where I have 4 tabs displaying 4 different reports based off different tables. I obtain the row count of each table using a select count(*) from <table> query and display the number of rows available in each table on the tabs. As a result, each page postback causes 5 count(*) queries to be executed (4 to get counts and 1 for pagination) and 1 query for getting the report content. Now my question is: are count(*) queries really expensive -- should I keep the row counts (at least those that are displayed on the tab) in the view state of the page instead of querying multiple times? How expensive are COUNT(*) queries?
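
    A hedged way to trim the per-postback cost (table names are placeholders): ask for all four counts in one round trip instead of four, and cache the result between postbacks if exact freshness isn't required.

        SELECT
            (SELECT COUNT(*) FROM report_table_1) AS rows_1,
            (SELECT COUNT(*) FROM report_table_2) AS rows_2,
            (SELECT COUNT(*) FROM report_table_3) AS rows_3,
            (SELECT COUNT(*) FROM report_table_4) AS rows_4;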

    Read the article

  • Word frequency tally script is too slow

    - by Dave Jarvis
    Background: Created a script to count the frequency of words in a plain text file. The script performs the following steps: Count the frequency of words from a corpus. Retain each word in the corpus found in a dictionary. Create a comma-separated file of the frequencies. The script is at: http://pastebin.com/VAZdeKXs

    Problem: The following lines continually cycle through the dictionary to match words:

        for i in $(awk '{if( $2 ) print $2}' frequency.txt); do
          grep -m 1 ^$i\$ dictionary.txt >> corpus-lexicon.txt;
        done

    It works, but it is slow because it is scanning the words it found to remove any that are not in the dictionary. The code performs this task by scanning the dictionary for every single word. (The -m 1 parameter stops the scan when the match is found.)

    Question: How would you optimize the script so that the dictionary is not scanned from start to finish for every single word? The majority of the words will not be in the dictionary. Thank you!
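
    A hedged sketch in awk: load the dictionary into a hash once, then stream the frequency list past it, so each word becomes a single hash lookup instead of a grep over the whole dictionary. It assumes dictionary.txt holds one word per line and the word of interest is in column 2 of frequency.txt, as in the original loop.

        awk 'NR == FNR { dict[$1]; next }      # first file: dictionary words become hash keys
             $2 in dict { print $2 }           # second file: keep frequency words found in the hash
            ' dictionary.txt frequency.txt > corpus-lexicon.txt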

    Read the article

  • Getting the count of matches in a query on a large table is very slow

    - by Roy Roes
    I have a MySQL table "items" with 2 integer fields: seid and tiid. The table has about 35,000,000 records, so it's very large.

        seid  tiid
        ----------
           1     1
           2     2
           2     3
           2     4
           3     4
           4     1
           4     2

    The table has a primary key on both fields, an index on seid, and an index on tiid. Someone types in 1 or more tiid values, and now I would like to get the seid with the most results. For example, when someone types 1,2,3, I would like to get seid 2 and 4 as the result. They both have 2 matches on the tiid values. My query so far:

        SELECT COUNT(*) AS c, seid
        FROM items
        WHERE tiid IN (1,2,3)
        GROUP BY seid
        HAVING c = (SELECT COUNT(*) AS c
                    FROM items
                    WHERE tiid IN (1,2,3)
                    GROUP BY seid
                    ORDER BY c DESC
                    LIMIT 1)

    But this query is extremely slow because of the large table. Does anyone know how to construct a better query for this purpose?
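
    A hedged simplification rather than a proven fix: compute the counts once, order them, and keep the tied top rows on the client side, instead of repeating the whole aggregate inside the HAVING subquery; with the index on tiid this is a single pass over the matching rows.

        SELECT seid, COUNT(*) AS c
        FROM items
        WHERE tiid IN (1,2,3)
        GROUP BY seid
        ORDER BY c DESC
        LIMIT 50;   -- fetch a handful and keep the rows that share the highest c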

    Read the article

  • Why isn't the copy constructor elided here?

    - by Jesse Beder
    (I'm using gcc with -O2.) This seems like a straightforward opportunity to elide the copy constructor, since there are no side-effects to accessing the value of a field in a bar's copy of a foo; but the copy constructor is called, since I get the output meep meep!.

        #include <iostream>

        struct foo {
            foo(): a(5) { }
            foo(const foo& f): a(f.a) { std::cout << "meep meep!\n"; }
            int a;
        };

        struct bar {
            foo F() const { return f; }
            foo f;
        };

        int main() {
            bar b;
            int a = b.F().a;
            return 0;
        }
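
    For context, copy elision only applies when the returned object is a temporary or a local variable; here F() copies from the member f, which must keep existing, so the copy is required. If a copy isn't actually needed, one hedged workaround is to change the interface and hand out a reference instead:

        struct bar {
            const foo& F() const { return f; }   // returns a reference: no copy, no "meep meep!"
            foo f;
        };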

    Read the article

  • MySQL, delete and index hint

    - by Manuel Darveau
    I have to delete about 10K rows from a table that has more than 100 million rows, based on some criteria. When I execute the query, it takes about 5 minutes. I ran an explain plan (the delete query converted to SELECT *, since MySQL does not support EXPLAIN DELETE) and found that MySQL uses the wrong index. My question is: is there any way to tell MySQL which index to use during a delete? If not, what can I do? Select into a temp table, then delete using the temp table? Thank you!
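
    A hedged sketch of the temp-table route, with placeholder names for the table, index, and criteria: collect the primary keys with a hinted SELECT (which does accept FORCE INDEX), then delete by key with a join.

        -- big_table, idx_status_created and the WHERE clause are illustrative
        CREATE TEMPORARY TABLE doomed_ids
          SELECT id
          FROM   big_table FORCE INDEX (idx_status_created)
          WHERE  status = 'expired' AND created < '2010-01-01';

        DELETE big_table
        FROM   big_table
        JOIN   doomed_ids USING (id);

        DROP TEMPORARY TABLE doomed_ids;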

    Read the article
