Search Results

Search found 1631 results on 66 pages for 'optimize'.

Page 44/66

  • Can you iterate over chunks() with request.POST in Django?

    - by Sebastian
    I'm trying to optimize a site I'm building with Django/Flash and am having a problem with Django's chunks() iteration feature. I'm sending an image from Flash to Django as raw request.POST data rather than through a form (which would use request.FILES). The problem I foresee is that with a large user volume I could potentially kill memory, but it seems that Django only allows iterating over chunks with request.FILES. Is there a way to either 1) wrap my request.POST data in a request.FILES object (thus spoofing Django), or 2) use chunks() with request.POST data?
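
    A minimal sketch of the streaming alternative, assuming a Django version whose HttpRequest exposes a file-like read() (1.3 and later); the view name, chunk size and temp-file destination are illustrative:

        import tempfile

        from django.http import HttpResponse

        CHUNK_SIZE = 64 * 1024  # 64 KB per read keeps memory usage flat

        def receive_image(request):
            # HttpRequest is file-like, so the raw Flash upload can be
            # streamed to disk chunk by chunk instead of being buffered
            # whole in request.POST.
            with tempfile.NamedTemporaryFile(suffix=".img", delete=False) as out:
                while True:
                    chunk = request.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    out.write(chunk)
            return HttpResponse("ok")

    Note that touching request.POST before the reads would parse and buffer the whole body, defeating the point.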

    Read the article

  • Fulltext and composite indexes and how they affect the query

    - by Brett
    Say I had a query like this:

        SELECT name, category, address, city, state
        FROM table
        WHERE MATCH(name, subcategory, category, tag1) AGAINST ('education')
          AND city = 'Oakland'
          AND state = 'CA'
        LIMIT 0, 10;

    and I had a fulltext index on (name, subcategory, category, tag1) and a composite index on (city, state); is that good enough for this query? I'm wondering whether anything extra is needed when mixing additional ANDs with a fulltext MATCH/AGAINST. Edit: What I am trying to understand is what happens with the columns that appear in the query but are not covered by the chosen index (the fulltext one), city and state in the example above. How does MySQL find the matching rows for those, since it can't use two indexes (or can it?)? Basically, I'm trying to understand how MySQL goes about finding the data optimally for the columns NOT in the chosen fulltext index, and whether there is anything I can or should do to optimize the query.
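
    One way to see how MySQL actually handles the extra predicates is to EXPLAIN the query; a minimal sketch, where the pymysql driver and connection details are assumptions (any MySQL client works the same way):

        import pymysql

        conn = pymysql.connect(host="localhost", user="user",
                               password="secret", database="mydb")
        with conn.cursor() as cur:
            # EXPLAIN reports the single index the optimizer chose; rows
            # matched by it are then filtered against the remaining ANDs.
            cur.execute("""
                EXPLAIN SELECT name, category, address, city, state
                FROM `table`
                WHERE MATCH(name, subcategory, category, tag1) AGAINST ('education')
                  AND city = 'Oakland' AND state = 'CA'
                LIMIT 0, 10
            """)
            for row in cur.fetchall():
                print(row)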

    Read the article

  • How to find validity of a string of parentheses, curly brackets and square brackets?

    - by Rajendra
    I recently came across this interesting problem. You are given a string containing just the characters '(', ')', '{', '}', '[' and ']', for example "[{()}]", and you need to write a function that checks the validity of such an input string; the function may look like this: bool isValid(char* s); The brackets have to close in the correct order, so for example "()" and "()[]{}" are valid but "(]", "([)]" and "{{{{" are not! I came up with the following O(n) time and O(n) space solution, which works fine:
    1. Maintain a stack of characters.
    2. Whenever you find an opening bracket '(', '{' or '[', push it onto the stack.
    3. Whenever you find a closing bracket ')', '}' or ']', check whether the top of the stack is the corresponding opening bracket; if yes, pop the stack, else break the loop and return false.
    4. Repeat steps 2-3 until the end of the string.
    This works, but can we optimize it for space, maybe to constant extra space? I understand that the time complexity cannot be less than O(n), as we have to look at every character. So my question is: can we solve this problem in O(1) space?
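
    A direct Python transcription of the stack algorithm described above, as a minimal sketch:

        def is_valid(s):
            # Map each closing bracket to its required opening partner.
            pairs = {')': '(', '}': '{', ']': '['}
            stack = []
            for ch in s:
                if ch in '({[':
                    stack.append(ch)        # step 2: push opening brackets
                elif not stack or stack.pop() != pairs[ch]:
                    return False            # step 3: wrong or missing opener
            return not stack                # leftover openers like "{{{{" fail

        assert is_valid("()[]{}") and is_valid("[{()}]")
        assert not is_valid("(]") and not is_valid("([)]")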

    Read the article

  • Have you ever derived a programming solution from nature?

    - by Ryu
    When you step back and look at the nature of animals, insects and plants and the problems they have organically solved, perhaps even the nature and balance of the universe, have you ever been able to solve a problem by deriving an approach from nature? I've heard of ant colony algorithms being able to optimize supply chains, among other things. Fractals, being the "geometry of nature", have also been applied to a wide range of problems. Now that spring is here again and the world is coming back to life, I'm wondering if anybody has some experiences they can share. Thanks. PS: I would recommend watching the "Hunting the Hidden Dimension" Nova episode on fractals.

    Read the article

  • mysql select query optimization

    - by Saharsh Shah
    I have two tables, testa and testb:

        CREATE TABLE `testa` (
          `id` INT(10) NOT NULL AUTO_INCREMENT,
          `name` VARCHAR(50) DEFAULT NULL,
          PRIMARY KEY (`id`)
        );

        CREATE TABLE `testb` (
          `id` INT(10) NOT NULL AUTO_INCREMENT,
          `name` VARCHAR(50) DEFAULT NULL,
          `aid1` INT(10) DEFAULT NULL,
          `aid2` INT(10) DEFAULT NULL,
          `aid3` INT(10) DEFAULT NULL,
          PRIMARY KEY (`id`)
        );

    Currently I am running the query below to retrieve all rows where id in testa matches any of the columns aid1, aid2 or aid3 in testb. The query returns accurate results, but it takes at least 30 seconds to execute, which is far too long. I have also tried to optimize my query using UNION but failed to do so.

        SELECT a.id, a.name, b.name, b.id
        FROM testb b
        INNER JOIN testa a
          ON b.aid1 = a.id OR b.aid2 = a.id OR b.aid3 = a.id;

    How do I optimize my query so its total execution time is within 2-3 seconds? Thanks in advance...
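
    The OR in the join condition generally prevents MySQL from using an index per lookup, which is what the UNION rewrite the poster mentions is for. A sketch of that form, shown as a query string in Python; it assumes secondary indexes on aid1, aid2 and aid3 (the schema above defines none):

        # UNION (without ALL) de-duplicates rows, matching what the single
        # OR join returns when more than one aid column hits the same pair.
        UNION_SQL = """
            SELECT a.id, a.name, b.name, b.id
              FROM testb b INNER JOIN testa a ON b.aid1 = a.id
            UNION
            SELECT a.id, a.name, b.name, b.id
              FROM testb b INNER JOIN testa a ON b.aid2 = a.id
            UNION
            SELECT a.id, a.name, b.name, b.id
              FROM testb b INNER JOIN testa a ON b.aid3 = a.id
        """

    The matching DDL would be along the lines of ALTER TABLE testb ADD INDEX (aid1), ADD INDEX (aid2), ADD INDEX (aid3).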

    Read the article

  • Should I use "id" or "unique username"?

    - by roa3
    I am using PHP, AS3 and MySQL. I have a website, a Flash (AS3) website. The Flash site stores the members' information in a MySQL database through PHP. In the members table, I have id as the primary key and username as a unique field. Now my situation is: Flash wants to display a member's profile. My questions:
    1. Should Flash pass the member id or the username to PHP to run the MySQL query?
    2. Is there any difference between passing the id and the username?
    3. Which one is more secure?
    4. Which one do you recommend?
    I would like to optimize my website in terms of security and performance.

    Read the article

  • web page db query optimisation

    - by morpheous
    I am putting together a web page which is quite 'expensive' in terms of DB hits. I don't want to start optimizing at this stage, though with me trying to hit a deadline, I may end up not optimizing at all. Currently the page requires 18 (that's right, eighteen) hits to the DB. I am already using joins, and some of the queries are UNIONed to minimize the trips to the DB. My local dev machine can handle this (the page is not slow); however, I feel that if I release this into the wild, the number of queries will quickly overwhelm my database (MySQL). I could always use memcache or something similar, but I would much rather continue with my other dev work that needs to be completed before the deadline; at least retrieving the page works, so it's simply a matter of optimization. My question therefore is: is 18 DB queries for a single page retrieval completely outrageous (i.e. should I put everything on hold and optimize the hell out of the retrieval logic), or shall I continue as normal, meet the deadline, release on schedule and see what happens?

    Read the article

  • mysql inserts & updates optimized

    - by user271619
    This is mostly an optimization question. I have many forms on my sites that do simple INSERTs and UPDATEs (nothing complicated), but several of the forms' input fields are not required and may be left empty. However, my SQL statements list all the columns anyway. My question: is it better to build the INSERT/UPDATE queries dynamically and include only the columns that actually changed? We all hear that we shouldn't use "SELECT *" unless it's absolutely needed to display all columns, but what about INSERTs and UPDATEs? Hope this makes sense. I'm sure any amount of optimization is acceptable, but I never really hear about this, specifically, from anyone.
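
    A sketch of the "only the submitted columns" idea for UPDATEs; the table name, column whitelist and placeholder style are illustrative assumptions (the whitelist matters, since column names must never come from raw user input):

        ALLOWED = {"name", "email", "phone"}  # server-side column whitelist

        def build_update(table, row_id, submitted):
            """Build a parameterized UPDATE covering only submitted columns."""
            cols = [c for c in submitted if c in ALLOWED and submitted[c] != ""]
            if not cols:
                return None, ()   # nothing to update
            sets = ", ".join(f"{c} = %s" for c in cols)
            sql = f"UPDATE {table} SET {sets} WHERE id = %s"
            return sql, tuple(submitted[c] for c in cols) + (row_id,)

        sql, params = build_update("members", 7, {"email": "a@b.com", "phone": ""})
        # -> "UPDATE members SET email = %s WHERE id = %s", ("a@b.com", 7)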

    Read the article

  • How to inline string.h functions on Linux?

    - by tz1
    I want to optimize some code such that all the functions in string.h are inlined. I'm on x86_64. I've tried -O3 and -minline-all-stringops, but when I do "nm a.out" it shows it is calling the glibc version, and checking with gcc -S, I see the calls. What am I missing? There are dozens of #ifdef _SOME_SETTING_ blocks in string.h, and bits/string3.h shows the inline version, but I don't know how to get there. For example:

        $ cat test.c
        #include <string.h>
        main() {
            char *a, *b;
            strcpy(b, a);
        }

    When compiled with gcc -minline-all-stringops -O6 -I. -S -o test.S test.c:

        .file   "test.c"
        .text
        .p2align 4,,15
        .globl  main
        .type   main, @function
        main:
        .LFB12:
                .cfi_startproc
                subq    $8, %rsp
                .cfi_def_cfa_offset 16
                xorl    %esi, %esi
                xorl    %edi, %edi
                call    strcpy
                addq    $8, %rsp
                .cfi_def_cfa_offset 8
                ret
                .cfi_endproc
        .LFE12:
                .size   main, .-main
                .ident  "GCC: (GNU) 4.5.1 20100924 (Red Hat 4.5.1-4)"
                .section        .note.GNU-stack,"",@progbits

    Read the article

  • How do I get a document index so I can delete with Lucene?

    - by acidzombie24
    Basically I am doing this: I think I'll set the document id to the thread id on my site (even if some types of threads won't be searched), so I can search by thread id. But I am clueless about how to delete. I found pages that say to use the document index, and that I need to optimize or close before changes take effect, but I don't know how to get the document index. How do I get it? I also saw one page that said to use IndexWriter to delete, but I couldn't figure out how to do it with that either.

    Read the article

  • Execute a method as few times as possible - PHP

    - by serhio
    I have a site in multiple languages, and a method that returns the day's currency exchange rates as an array. I display those rates in a table. Each language version does the same thing:

        // --- en/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        // --- fr/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        ...
        // somewhere in the page
        <td><?php echo $currencies["eur"]["today"]; ?></td>

    So, every time I load en/, fr/ or another language, I request the exchange rates from an external site. Can I optimize this behavior (reading once per day or per session)? Maybe store a global variable and check the last update date?
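
    The "once per day" idea is a cache with an age check. A minimal sketch, in Python for brevity since the pattern ports directly to PHP (file_put_contents/filemtime); the cache path and fetch function are illustrative:

        import json, os, time

        CACHE_FILE = "/tmp/exchange_rates.json"  # illustrative path
        MAX_AGE = 24 * 60 * 60                   # refresh once per day

        def read_exchange_rates(fetch_remote):
            """Return cached rates, hitting the external site only when stale."""
            try:
                if time.time() - os.path.getmtime(CACHE_FILE) < MAX_AGE:
                    with open(CACHE_FILE) as f:
                        return json.load(f)
            except OSError:
                pass  # no cache yet; fall through and fetch
            rates = fetch_remote()               # the expensive external call
            with open(CACHE_FILE, "w") as f:
                json.dump(rates, f)
            return rates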

    Read the article

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from MongoDB using the MongoDB C# driver, something like this:

        public void main() {
            foreach (int i in listOfKey) {
                list.Add(getObjFromDb(i));
            }
        }

        public myObject getObjFromDb(int primaryKey) {
            Document query = new Document();
            query["primKey"] = primaryKey;
            Document result = mongo["myDatabase"]["myCollection"].FindOne(query);
            return parseObject(result);
        }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and this query takes about 30 seconds to execute for the same number of objects. Furthermore, looking at the MongoDB log, it seems to open about 8-10 connections to the DB to perform this query. So what I'd like to do is query the database for an array of primary keys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? Thanks, --Michael
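
    The batched form of that lookup is an $in query, one round trip for the whole key list. A minimal sketch with pymongo (the driver switch is just for illustration; the C# driver can express the same query shape), with connection details assumed:

        from pymongo import MongoClient

        client = MongoClient("mongodb://host:27017")  # illustrative URI
        coll = client["myDatabase"]["myCollection"]

        keys = [1, 2, 3]  # the listOfKey values
        # One query returns every matching document over a single pooled
        # connection instead of one findOne (and handshake) per key.
        by_key = {d["primKey"]: d for d in coll.find({"primKey": {"$in": keys}})}
        objects = [by_key[k] for k in keys if k in by_key]  # keep caller order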

    Read the article

  • MySQL query optimization.

    - by PiKey
    I'm bad at writing good MySQL queries. I've created this one: http://pastebin.com/GtDfgky8 The products table has about 17k rows and the allegro table about 3k. The idea of the query is to select all products where stock_quanity 3, where there is a photo, and where the product id is not in the allegro table. Right now the query takes about 10 seconds, and I have no idea how to optimize it. Please help me, I'll be thankful! :) Sorry for my bad English.

    Read the article

  • Why does optimization not happen?

    - by aaa
    Hi. I have C/C++ code that looks like this:

        static int function(double *I) {
            int n = 0;
            // more instructions, loops, ...
            for (int i; ...; ++i)
                n += fabs(I[i] > tolerance);
            return n;
        }

        function(I); // return value is not used.

    The compiler inlines the function, but it does not optimize out the manipulation of n. I would expect the compiler to be able to recognize that the value is never used, since it only ever appears as a discarded return value. Is there some side effect that prevents the optimization? Thanks

    Read the article

  • JDBC batch insert performance

    - by wo_shi_ni_ba_ba
    I need to insert a couple hundred million records into a MySQL DB. I'm batch inserting them 1 million at a time. Please see my code below. It seems to be slow. Is there any way to optimize it?

        try {
            // Disable auto-commit
            connection.setAutoCommit(false);

            // Create a prepared statement
            String sql = "INSERT INTO mytable (xxx) VALUES(?)";
            PreparedStatement pstmt = connection.prepareStatement(sql);

            Object[] vals = set.toArray();
            for (int i = 0; i < vals.length; i++) {
                pstmt.setObject(1, vals[i]);
                pstmt.addBatch();       // queue the row into the batch
            }
            pstmt.executeBatch();       // send the whole batch at once
            connection.commit();
        } catch (SQLException e) {
            e.printStackTrace();
        }

    Read the article

  • How can I sqldump a huge database?

    - by meder
    SELECT count(*) FROM table gives me 3296869 rows. The table only contains 4 columns, storing dropped domains. I tried to dump the SQL with:

        $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz';
        $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile";

    However, this just produces an empty 20 KB gzipped file. My client is using shared hosting, so the server specs and resource usage aren't top of the line. I'm not even given SSH access or direct access to the database, so I have to make queries through PHP scripts I upload via FTP (SFTP isn't an option, again). Is there some way I can perhaps sequentially download portions of it, or pass an argument to mysqldump that will optimize it? I came across http://jeremy.zawodny.com/blog/archives/000690.html, which mentions the -q flag, and tried that, but it didn't seem to do anything differently.

    Read the article

  • What is the reason not to use select * ?

    - by Chris Lively
    I've seen a number of people claim that you should specifically name each column you want in your select query. Assuming I'm going to use all of the columns anyway, why would I not use SELECT *? Even considering the question from 9/24, I don't think this is an exact duplicate, as I'm approaching the issue from a slightly different perspective. One of our principles is to not optimize before it's time. With that in mind, it seems like using SELECT * should be the preferred method until it is proven to be a resource issue, or until the schema is pretty much set in stone, which, as we know, won't occur until development is completely done. That said, is there an overriding issue to not use SELECT *?

    Read the article

  • Compile a Flex application without debug? Optimization options for the Flex compiler?

    - by maoanz
    I have created a simple test application with the following code:

        var i : int;
        for (i = 0; i < 3000000; i++) {
            trace(i);
        }

    When I run the application, it's very slow to load, which means the trace is running. I checked the Flash Player by right-clicking; the debugger option is not enabled. So I wonder if there is a compiler option to exclude the trace calls; otherwise, I have to remove all the trace statements in the program manually. Are there any other compiler options to optimize a Flex application as much as possible? Thanks

    Read the article

  • How to make my save view better in Django

    - by user558251
    Hi guys, sorry for this post, but I need help optimizing a view in my application. I have 5 models. How can I do this?

        def save(request):
            # get the request.POST into content
            if request.POST:
                content = request.POST
                dicionario = {}
                # build a dict from the values in content
                for key, value in content.items():
                    # resolve my FK via Curso.objects
                    if key == 'curso':
                        busca_curso = Curso.objects.get(id=value)
                        dicionario.update({key: busca_curso})
                    else:
                        dicionario.update({key: value})
                # create the new teacher
                Professor.objects.create(**dicionario)

    My question is: how can I write this function in a generic way? Can I pass a variable in a %s to create and get, like this:

        foo = "Teacher"
        bar = "Course"
        def save(request, bar, foo):
            if request.POST:
                ...
                if key == 'course':
                    get_course = ("%s.objects.get(id=value)") % bar
                ...
                ("%s.objects.create(**dict)") % foo

    I tried this in my view but it doesn't work. Can somebody help me make this work? Thanks
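
    A "%s.objects.get" string can't be executed; the usual fix is to map names to the model classes themselves and call their managers directly. A minimal sketch under that assumption (the import path and map contents are illustrative; model names follow the question):

        from myapp.models import Curso, Professor  # illustrative import path

        MODELS = {"curso": Curso, "professor": Professor}

        def save_generic(request, fk_key, target):
            """Create a MODELS[target] row, resolving fk_key to a Curso."""
            fields = {}
            for key, value in request.POST.items():
                if key == fk_key:
                    fields[key] = Curso.objects.get(id=value)  # resolve the FK
                else:
                    fields[key] = value
            return MODELS[target].objects.create(**fields)

    Because classes are first-class objects in Python, passing Professor or Curso around (or looking them up in a dict) replaces the string formatting entirely.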

    Read the article

  • How to create a better table structure

    - by user160820
    For my website I have these tables:

        Category   :: id | name
        Product    :: id | name | categoryid

    Each category may have different sizes, for which I have also created a table:

        Size       :: id | name | categoryid | price

    Now the problem is that each category also has different ingredients that the customer can choose to add to the purchased product, and these ingredients have different prices for different sizes. For that I have a table like:

        Ingredient :: id | name | sizeid | categoryid | price

    I am not sure this structure is really normalized. Can someone please help me optimize this structure, and tell me which indexes I need for it?

    Read the article

  • Does MVC replace traditional manually created BLL?

    - by used2could
    I'm used to creating the UI, BLL and DAL by hand (sometimes I've used LINQ-to-SQL or SubSonic for the DAL). I've done several small projects using MVC since its release; on those projects I've still continued to write a BLL and DAL by hand and then incorporate them into MVC's models/controllers. Now that I'm looking to optimize my time on projects, this seems like overkill and a potential waste of time. Question: Would it be acceptable to roll a DAL such as SubSonic and directly use it in the models/controllers of my MVC web app? The models and controllers would then act as the BLL. I just see this as a major time saver, not having to worry about another tier. UPDATE: I just wanted to add that my concern isn't really with the DAL (I frequently use SubSonic and NH) but rather with the BLL. Sorry for the confusion.

    Read the article

  • Static variable for optimization

    - by keithjgrant
    I'm wondering if I can use a static variable for optimization:

        public function Bar() {
            static $i = moderatelyExpensiveFunctionCall();
            if ($i) {
                return something();
            } else {
                return somethingElse();
            }
        }

    I know that once $i is initialized, it won't be changed by that line of code on successive calls to Bar(). I assume this means that moderatelyExpensiveFunctionCall() won't be evaluated on every call, but I'd like to know for certain. Once PHP sees a static variable that has been initialized, does it skip over that line of code? In other words, is this going to optimize my execution time if I make a lot of calls to Bar(), or am I wasting my time?

    Read the article

  • New thread per client connection in socket server?

    - by Olaseni
    I am trying to optimize for multiple connections at a time to a TCP socket server. Is it considered good practice, or even rational, to start a new thread in the listening server every time I receive a connection request? At what point should I begin to worry about a server based on this design? What is the maximum number of background threads I can run before it stops making sense? The platform is C#, the framework is Mono, the target OS is CentOS, RAM is 2.4 GB, the server is in the cloud, and I'm expecting about 200 connection requests per second.
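
    The common alternative to a thread per connection is a bounded pool, which caps thread count while still handling connections concurrently; Mono's ThreadPool offers the same pattern. A minimal Python sketch of the shape (port, buffer and pool sizes are illustrative):

        import socket
        from concurrent.futures import ThreadPoolExecutor

        def handle(conn, addr):
            with conn:
                while True:
                    data = conn.recv(4096)   # echo until the client hangs up
                    if not data:
                        break
                    conn.sendall(data)

        def serve(host="0.0.0.0", port=9000, workers=64):
            # A bounded pool caps the thread count; excess connections wait
            # in the pool's queue instead of each spawning a fresh thread.
            with ThreadPoolExecutor(max_workers=workers) as pool, \
                 socket.create_server((host, port)) as srv:
                while True:
                    conn, addr = srv.accept()
                    pool.submit(handle, conn, addr)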

    Read the article

  • How do I profile memory usage in my project

    - by Gacek
    Are there any good, free tools to profile memory usage in C#? Details: I have a visualization project that uses quite large collections. I would like to check which parts of this project, on the data-processing side or on the visualization side, use most of the memory, so I can optimize them. I know that when it comes to computing the size of a collection the case is quite simple and I can do it on my own, but there are also certain elements for which I cannot estimate the memory usage so easily. The memory usage is quite big; for example, when processing a 35 MB file, my program uses a little over 250 MB of RAM.

    Read the article

  • C++ performance, optimizing compiler, empty function in .cpp

    - by Dodo
    I have a very basic class, call it Basic, used in nearly all other files in a bigger project. In some cases there needs to be debug output, but in release mode this should be disabled and be a NOOP. Currently there is a define in the header which switches a macro on or off depending on the setting, so it is definitely a NOOP when switched off. I'm wondering, given the following code, whether a compiler (MSVS/gcc) is able to optimize out the function call, so that it is again a NOOP. (By doing that, the switch could live in the .cpp, and switching would be much faster, compile/link time wise.)

        // --- Header ---
        void printDebug(const Basic* p);

        class Basic {
            Basic() {
                simpleSetupCode;
                // this should be a NOOP in release,
                // but the constructor could be inlined
                printDebug(this);
            }
        };

        // --- Source ---
        // PRINT_DEBUG defined somewhere else or here
        #if PRINT_DEBUG
        void printDebug(const Basic* p) {
            // Lengthy debug print
        }
        #else
        void printDebug(const Basic* p) {}
        #endif

    Read the article
