Search Results

Search found 4580 results on 184 pages for 'faster'.

Page 158/184

  • PHP OO vs Procedural with AJAX

    - by vener
    I currently have an AJAX-heavy (almost everything) intranet webapp for a business. It is highly modularized (components and modules a la Joomla), with plenty of folders and files: ~80-100 different viewing pages (each unique in its own sense) at last count, and that number is likely to increase in the near future. I based the design around commands and screens: the client requests a command, sends the required data, and receives the data that is displayed via javascript on the screen. That said, there are generally two types of files: display files with html, javascript, and a little php for templating; and a php backend file with a single switch statement with actions such as save, update and delete, and maybe other functions. There is very little code reuse. Recently, I have been adding a server-side undo function that requires me to reuse some code. So I took the chance to try out OOP, but I noticed that some functions are so simple that creating a class, retrieving all the data and then updating all the related rows in the database seems like overkill for a simple action, as speed is quite critical. I also noticed there is only one class per file. So, what if the entire php file were a class? Between creating a class with methods, and using global variables and functions: which is faster?

    Read the article

  • Listening on UDP or switching to TCP in an MFC application

    - by Alexander.S
    I'm editing a legacy MFC application, and I have to add some basic network functionality. The operating side has to receive a simple instruction (numbers 1, 2, 3, 4...) and do something based on it. The client wants latency to be as low as possible, so naturally I decided to use datagrams (UDP). But reading all sorts of resources left me bugged. I cannot listen on UDP sockets (CAsyncSocket) in MFC; it's only possible to call Receive, which blocks and waits. Blocking the UI isn't really smart. So I guess I could use some threading technique, but since I'm not all that experienced with MFC, how should that be implemented? The other part of the question is whether I should do this at all, or revert to TCP, considering reliability and implementation issues. I know that UDP is unreliable, but just how unreliable is it really? I read that it can be up to 50% faster, which is a lot for me. References I used: http://msdn.microsoft.com/en-us/library/09dd1ycd(v=vs.80).aspx
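
    A minimal sketch of the non-blocking receive idea, written in Python for illustration rather than MFC (socket setup and port are hypothetical): polling with select() means the receive path never blocks the UI thread.

        import select
        import socket

        # Hypothetical port; the real application would use its own.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 9999))
        sock.setblocking(False)

        def poll_instruction(timeout=0.0):
            """Return one datagram if available, else None; never blocks."""
            readable, _, _ = select.select([sock], [], [], timeout)
            if readable:
                data, _addr = sock.recvfrom(64)  # instructions are tiny (1, 2, 3, 4...)
                return data
            return None

    On reliability: on a typical LAN, datagram loss is rare but not zero; for single-number commands, having the client resend when no acknowledgement arrives is the usual compromise.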

    Read the article

  • own drawImage / drawLine in OpenGL

    - by Chrise
    I'm implementing some native 2D draw functions in my graphics engine for Android, but now another question comes up when I observe the performance of my program. At the moment I'm implementing drawLine/drawImage functions. In summary, the following values differ for each line/image drawn: the color, the alpha value, the width of the line, rotation (only for images), size/scale (also for images), and blending method (subtract, add, normal alpha). Now, when an imageLine is drawn, I put the CPU-calculated vertex positions and uv values for 6 vertices (2 triangles) into a FloatBuffer and draw it immediately with drawArrays, after passing drawing information (color, alpha, etc.) via uniforms to the shader. When I draw an image, the pre-set VBO is drawn directly after passing that information. The first thing I noticed is, of course, that drawing images is much faster than imageLines (because of VBOs), but also that I cannot pre-load vertex data into a VBO for imageLines, because imageLines have no static shape like normal images (varying line length, varying line width, and the vertex positions x1,y1 and x2,y2 change too often). That's why I use a normal FloatBuffer instead of a VBO. So my question is: what's the best way to manage images and other 2D graphics functions? It's quite important to me that a user of the engine can draw as many images/2D graphics as possible without losing too much performance. You can find the functions for drawing images, imagelines, rects, quads, etc. here: https://github.com/Chrise55/LLama3D/blob/master/Llama3DLibrary/src/com/llama3d/object/graphics/image/ImageBase.java Here is an example of how it looks with many images (testing artificial neural networks); it works fine, but is already a little slow with that many images... :(

    Read the article

  • Algorithm to determine if array contains n...n+m?

    - by Kyle Cronin
    I saw this question on Reddit, and there were no positive solutions presented, and I thought it would be a perfect question to ask here. This was in a thread about interview questions: Write a method that takes an int array of size m, and returns (True/False) if the array consists of the numbers n...n+m-1, all numbers in that range and only numbers in that range. The array is not guaranteed to be sorted. (For instance, {2,3,4} would return true; {1,3,1} would return false; {1,2,4} would return false.) The problem I had with this one is that my interviewer kept asking me to optimize (faster O(n), less memory, etc.), to the point where he claimed you could do it in one pass of the array using a constant amount of memory. Never figured that one out. Along with your solutions please indicate if they assume that the array contains unique items. Also indicate if your solution assumes the sequence starts at 1. (I've modified the question slightly to allow cases where it goes 2, 3, 4...) edit: I am now of the opinion that there does not exist a linear-time, constant-space algorithm that handles duplicates. Can anyone verify this? The duplicate problem boils down to testing whether the array contains duplicates in O(n) time, O(1) space. If this can be done, you can simply test first and, if there are no duplicates, run the algorithms posted. So can you test for dupes in O(n) time, O(1) space?
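
    One classic answer, sketched here in Python under the assumption that the array may be modified in place: swap each value toward its home index until every slot i holds n+i. Each swap puts one element in its final place, so the total work is O(m) with O(1) extra space, and duplicates are caught when a value finds its home slot already occupied by an equal value.

        def is_consecutive_run(a, n):
            """True iff a is a permutation of n..n+len(a)-1. Mutates a."""
            m = len(a)
            for i in range(m):
                while a[i] != n + i:
                    j = a[i] - n                # home index for the value a[i]
                    if j < 0 or j >= m or a[j] == a[i]:
                        return False            # out of range, or a duplicate
                    a[i], a[j] = a[j], a[i]
            return True

    Whether a strictly read-only single pass exists is exactly the open point in the question.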

    Read the article

  • jQuery - Wait until image loads before performing function

    - by Steven
    I'm trying to create a simple portfolio page. I have a list of thumbs and an image. When you click on a thumb, the image will change. When a thumbnail is clicked, I'd like to have the image fade out, wait until the image is loaded, then fade back in. The problem I have right now is that some of the images are pretty big, so it fades out, then fades back in immediately, sometimes while the image is still loading. I'd like to avoid using setTimeout, since sometimes an image will load faster or slower than the time I set. Here's my code:

        $(function() {
            $('img#image').attr("src", $('ul#thumbs li:first img').attr("src"));
            $('ul#thumbs li img').click(function() {
                $('img#image').fadeOut(700);
                var src = $(this).attr("src");
                $('img#image').attr("src", src);
                $('img#image').fadeIn(700);
            });
        });

        <img id="image" src="" alt="" />
        <ul id="thumbs">
            <li><img src="/images/thumb1.png" /></li>
            <li><img src="/images/thumb2.png" /></li>
            <li><img src="/images/thumb3.png" /></li>
        </ul>

    Read the article

  • Why Can't Businesses Upgrade their Browsers from IE6/IE7?

    - by viatropos
    I have read a lot these past few weeks on IE6, seeing if it was really that bad to make things look right in it. I just learned HTML and CSS this past year, so I've been spoiled to start with basically CSS3 and HTML5, and I can do some really cool stuff super fast. I'm no IE6 master and I don't have years of experience with IE. So I thought it'd take a little time to figure out all the IE6/7 hacks that have been discovered and just implement them. But it's way harder than that (or maybe just way too much work). I'd have to either completely rebuild my design using "Internet Explorer 'Principles'", or cut out a lot of the neat things I can do using more recent technologies. For a million and one other reasons, everyone who builds things online seems to think IE should die. My question is: why can't businesses upgrade their browsers? When I work with businesses, they almost always resist the first time I ask, but 5 seconds later I'll show them what it looks like on my computer and talk about how great the latest stuff is (how much more secure later browsers are, all the famous IE security cases, how much smoother and faster the new browsers are, how the IE team has basically missed the boat entirely, how much more smoothly business processes run, etc.), and they get excited! And within a few seconds they're up and running with Chrome or something. So are there reasons a business cannot upgrade? The main reason I can think of is that they have an old version of Windows. But a) wasn't there a legal case about this? And b) somebody must have figured out how to install Chrome or Firefox on ancient versions of Windows by now.

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have this 170MB file (roughly 180 million bytes). What I need to do is create a table that lists: all 4096-byte combinations found [column 'bytes'], and the number of times each byte combination appeared [column 'occurrences']. Assume two things: I can save data very fast, but I can update my saved data only very slowly. How should I sample the file and save the needed information? Here are some suggestions that are (extremely) slow: Go through each 4096-byte combination in the file and save each one, but search the table first for existing combinations and update their values; this is unbelievably slow. Or: go through each 4096-byte combination in the file, saving up to 1 million rows of data in a temporary table; go through that table and fix the entries (combining repeating byte combinations), then copy them to the big table; repeat with the next 1 million rows. This is faster by a bit, but still unbelievably slow. This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22GB from experience), and I know that any solution posted will take a bit of time to finish. I need the most efficient saving process.
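
    A sketch of the aggregate-in-memory-first idea, in Python for illustration: count windows in a hash table keyed by a fixed-size digest of each 4096-byte window (an assumption; it trades the exact 'bytes' column for a bounded key), then bulk-insert the finished counts once, so the slow update path is never exercised.

        import hashlib
        from collections import Counter

        WINDOW = 4096  # the combination length from the question

        def count_windows(path):
            counts = Counter()
            with open(path, "rb") as f:
                data = f.read()                   # ~170 MB fits in memory
            view = memoryview(data)
            for i in range(len(data) - WINDOW + 1):
                # A 16-byte digest stands in for the 4 KB window so the
                # in-memory table stays bounded; collisions are negligible.
                key = hashlib.blake2b(view[i:i + WINDOW], digest_size=16).digest()
                counts[key] += 1
            return counts                         # bulk-insert these rows once

    For 180 million windows the counter may still need to be flushed to disk in batches, but merging pre-aggregated batches is far cheaper than row-by-row updates.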

    Read the article

  • Progressbar behaves strangely

    - by wanderameise
    I just created an application in C# that uses a thread which polls the UART for a receive event. If data is received, an event is triggered in my main thread (GUI) and a progress bar is controlled via the PerformStep() method (of course, I previously set the Max value accordingly). PerformStep is invoked using the following expression to handle cross-threading:

        this.Invoke((Action)delegate { progressBar2.PerformStep(); });

    When running this application, the progress bar never hits its final value. It stops at 80%. When debugging and stopping at the line mentioned above, everything works fine using single steps. I have no idea what is going on! Start the read thread on the main thread:

        pThreadWrite = new Thread(new ThreadStart(ReadThread));
        pThreadWrite.Start();

    Read thread:

        private void ReadThread()
        {
            while (true)
            {
                if (ReceiveEvent)
                {
                    FlashProgressBar();
                }
            }
        }

    Event that is triggered in the main thread:

        private void FlashProgressBar()
        {
            this.Invoke((Action)delegate { progressBar2.PerformStep(); });
        }

    (It's a simplified representation of my code.) It seems as if the internal progress is faster than the visual one.

    Read the article

  • Porting Python algorithm to C++ - different solution

    - by cb0
    Hello, I have written a little brute string generation script in Python to generate all possible combinations of an alphabet within a given length. It works quite nicely, but because I want it to be faster, I am trying to port it to C++. The problem is that my C++ code is creating far too many combinations for one word. Here's my example in Python; ./test.py gives me:

        aaa
        aab
        aac
        aad
        aa
        aba
        ...

    while ./test (the C++ program) gives me:

        aaa
        aaa
        aaa
        aaa
        aa

    Here I also get all possible combinations, but I get them twice or more often. Here is the code for both programs:

        #!/usr/bin/env python
        import sys

        # Brute String Generator
        # Start it with ./brutestringer.py 4 6 "abcdefghijklmnopqrstuvwxyz1234567890" ""
        # will produce all strings with length 4 to 6 and chars from a to z and numbers 0 to 9

        def rec(w, p, baseString):
            for c in "abcd":
                if (p < w - 1):
                    rec(w, p + 1, baseString + "%c" % c)
            print baseString

        for b in range(3,4):
            rec(b, 0, "")

    And here the C++ code:

        #include <iostream>

        using namespace std;

        string chars = "abcd";

        void rec(int w, int b, string p) {
            unsigned int i;
            for (i = 0; i < chars.size(); i++) {
                if (b < (w - 1)) {
                    rec(w, (b + 1), p + chars[i]);
                }
                cout << p << "\n";
            }
        }

        int main() {
            int a = 3, b = 0;
            rec(a + 1, b, "");
            return 0;
        }

    Does anybody see my fault? I don't have much experience with C++. Thanks indeed.

    Read the article

  • Optimize SQL connection?

    - by user1484035
    I am building a multi-page web project in HTML and Javascript that is constantly reading from AND writing to an SQL database. I can connect to the database and successfully run my project with this type of connection:

        var connection = new ActiveXObject("ADODB.Connection");
        var connectionstring = "Data Source=<server>;Initial Catalog=<catalog>;User ID=<user>;Password=<password>;Provider=SQLOLEDB";
        connection.Open(connectionstring);
        var rs = new ActiveXObject("ADODB.Recordset");
        rs.Open("SELECT * FROM table", connection);
        rs.MoveFirst
        while (!rs.eof) {
            document.write(rs.fields(1));
            rs.movenext;
        }
        rs.close;
        connection.close;

    It works great and runs fine. BUT, the first 5 lines (from var connection = to var rs =) cause the whole browser to freeze for a few seconds while it establishes the connection. I need to speed that up, since I am constantly connecting to the database throughout my project. Is there a more effective way of connecting to a SQL database? Or is my computer just bad and this should run faster?

    Read the article

  • Most efficient way of checking if Date object and Calendar object are in the same month

    - by Indigenuity
    I am working on a project that will run many thousands of comparisons between dates to see if they are in the same month, and I am wondering what the most efficient way of doing it would be. This isn't exactly what my code looks like, but here's the gist:

        List<Date> dates = getABunchOfDates();
        Calendar month = Calendar.getInstance();
        for (int i = 0; i < numMonths; i++) {
            for (Date date : dates) {
                if (sameMonth(month, date))
                    .. doSomething
            }
            month.add(Calendar.MONTH, -1);
        }

    Creating a new Calendar object for every date seems like pretty hefty overhead when this comparison will happen thousands of times, so I kind of want to cheat a bit and use the deprecated methods Date.getMonth() and Date.getYear():

        public static boolean sameMonth(Calendar month, Date date) {
            return month.get(Calendar.YEAR) == date.getYear()
                && month.get(Calendar.MONTH) == date.getMonth();
        }

    I'm pretty close to just using this method, since it seems to be the fastest, but is there a faster way? And is this a foolish way, since the Date methods are deprecated? Note: this project will always run with Java 7.

    Read the article

  • Ensure that my C# desktop application is making requests to my ASP.NET MVC action?

    - by Mathias Lykkegaard Lorenzen
    I've seen questions that are almost identical to this one, except for minor but important differences that I would like to get detailed. Let's say that I have a controller and an action method in MVC which therefore accepts requests on the following URL: http://example.com/api/myapimethod?data=some-data-here. This URL is then being called regularly by 1000 clients or more spread out in the public. The reason for this is crowdsourcing: the clients around the globe help feed a global cache on my server, which makes it faster for the rest of the clients to fetch the data. Now, if I'm sneaky (and I am), I can go into Fiddler, Ethereal, Wireshark or any other packet-sniffing tool and figure out which requests the program is making. By figuring that out, I can also replicate them, and fill the service with falsely generated, corrupted data. What is the best approach to ensuring that the data received in my ASP.NET MVC action method is actually from the desktop client application, and not some falsely generated data that the user invented? Since it is all based on crowdsourcing, would it be a good idea for my users to be able to "vote" on whether some data is falsified, and then let an automatic cleanup commence if there are enough votes? I do not have access to a tool like SmartAssembly, so unfortunately my .NET program is fully decompilable. I realize this might be impossible to accomplish in an error-proof manner, but I would like to know where my best chances are.
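
    One common mitigation, sketched in Python for illustration (the key, field names and skew window are hypothetical, and a key embedded in a decompilable client can always be extracted, so this raises the bar rather than closing the hole): have the client sign each request with an HMAC over the payload and a timestamp, and verify the signature server-side.

        import hashlib
        import hmac
        import time

        SHARED_KEY = b"embedded-client-key"  # hypothetical; extractable by a determined user

        def sign_request(data: str) -> dict:
            ts = str(int(time.time()))
            sig = hmac.new(SHARED_KEY, f"{data}|{ts}".encode(), hashlib.sha256).hexdigest()
            return {"data": data, "ts": ts, "sig": sig}

        def verify_request(params: dict, max_skew: int = 300) -> bool:
            # Reject stale requests, then recompute and compare the signature.
            if abs(time.time() - int(params["ts"])) > max_skew:
                return False
            expected = hmac.new(SHARED_KEY, f"{params['data']}|{params['ts']}".encode(),
                                hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, params["sig"])

    The voting/cleanup idea is worth having regardless, since it keeps working even after the key leaks.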

    Read the article

  • Parsing large txt files in Ruby taking a lot of time?

    - by hershey92
    Below is code that downloads a txt file of approximately 9,000 lines from the internet and populates the database. I have tried a lot, but it takes more than 7 minutes. I am using Win 7 64-bit and Ruby 1.9.3. Is there a way to do it faster?

        require 'open-uri'
        require 'dbi'

        dbh = DBI.connect("DBI:Mysql:mfmodel:localhost", "root", "")
        #file = open('http://www.amfiindia.com/spages/NAV0.txt')
        file = File.open('test.txt', 'r')
        lines = file.lines
        2.times { lines.next }
        curSubType = ''
        curType = ''
        curCompName = ''
        lines.each do |line|
          line.strip!
          if line[-1] == ')'
            curType, curSubType = line.split('(')
            curSubType.chop!
          elsif line[-4..-1] == 'Fund'
            curCompName = line.split(" Mutual Fund")[0]
          elsif line == ''
            next
          else
            sCode, isin_div, isin_re, sName, nav, rePrice, salePrice, date = line.split(';')
            sCode = Integer(sCode)
            sth = dbh.prepare "call mfmodel.populate(?,?,?,?,?,?,?)"
            sth.execute curCompName, curSubType, curType, sCode, isin_div, isin_re, sName
          end
        end
        dbh.do "commit"
        dbh.disconnect
        file.close

    This is the format of the data to be inserted in the table:

        106799;-;-;HDFC ARBITRAGE FUND RETAIL PLAN DIVIDEND OPTION;10.352;10.3;10.352;29-Jun-2012

    Now there are 8,000 such lines; how can I combine all of them and call the procedure just once? Also, does MySQL support arrays and iteration to do such a thing inside the routine? Please give your suggestions. Thanks.
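
    The usual fix is batching: read and parse everything first, then insert many rows per statement inside a single transaction, instead of one stored-procedure call per line. A sketch of the idea in Python (the driver, table and column names are hypothetical; the same pattern applies with Ruby's DBI):

        import MySQLdb  # hypothetical driver choice

        conn = MySQLdb.connect(host="localhost", user="root", db="mfmodel")
        cur = conn.cursor()

        rows = []
        for line in open("test.txt"):
            parts = line.strip().split(";")
            if len(parts) == 8:                   # keep only the data lines
                rows.append((parts[0], parts[3], parts[4]))

        # One round trip per batch instead of one call per line.
        cur.executemany(
            "INSERT INTO navs (scheme_code, scheme_name, nav) VALUES (%s, %s, %s)",
            rows)
        conn.commit()

    MySQL stored routines have no array parameters, so the multi-row INSERT (or LOAD DATA INFILE) is the standard route.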

    Read the article

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow, or how to make it faster?

    Read the article

  • Why does reusing arrays increase performance so significantly in C#?

    - by Willem
    In my code, I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task, I allocate memory for an array:

        double[] tempDoubleArray = new double[M];

    M is a large number depending on the precise task, typically around 2,000,000. Now, I do some complex calculations to fill the array, and in the end I use the array to determine the result of the task. After that, tempDoubleArray goes out of scope. Profiling reveals that the calls to construct the arrays are time-consuming. So, I decided to try to reuse the array, by making it static and reusing it. It requires some additional juggling to figure out the minimum size of the array, requiring an extra pass through all tasks, but it works. Now the program is much faster (from 80 s to 22 s for execution of all tasks):

        double[] tempDoubleArray = staticDoubleArray;

    However, I'm a bit in the dark as to why precisely this works so well. I'd say that in the original code, when tempDoubleArray goes out of scope, it can be collected, so allocating a new array should not be that hard, right? I ask this because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in which cases allocation gives performance issues.

    Read the article

  • Is programming overrated?

    - by aengine
    [Subjective and intended to be a community wiki] I am sorry for such an offensive question, but here are my arguments. Most of the progress in "computing" has come from non-programming sources, i.e. people invented faster microprocessors and better routers and novel memory devices. I don't think that, on average, people are writing more efficient programs than those written 10 years ago, and the newer and popular languages are in fact slower than C (though speed is one of the lesser criteria). Most of the progress came from novel paradigms: the Web, the Internet, cloud computing and social networking are novel paradigms and did not involve progress in programming as such. Heck, even Facebook was written in PHP and not some extreme language. Though it did face scalability issues (same with Twitter), I believe money and better programmers (who came in much later) took care of that. Thus ideating capability trumped programming capability. Even things like MapReduce, column-oriented databases and probabilistic algorithms (e.g. Bloom filters) came from hardcore algorithms research, rather than some programming convention. Thus my final point is: why is programming skill so overstressed? To point at a recent example, reportedly only 10% of programmers can write binary search correctly without debugging. Isn't it a bit hypocritical, considering real success lies in coming up with a better algorithm or a novel feature rather than getting it right the first time?

    Read the article

  • Optimizing a "set in a string list" to a "set as a matrix" operation

    - by Eric Fournier
    I have a set of strings which contain space-separated elements. I want to build a matrix which will tell me which elements were part of which strings. For example:

        ""
        "A B C"
        "D"
        "B D"

    should give something like:

             A   B   C   D
        1
        2    1   1   1
        3                1
        4        1       1

    Now I've got a solution, but it runs slow as molasses, and I've run out of ideas on how to make it faster:

        reverseIn <- function(vector, value) {
          return(value %in% vector)
        }

        buildCategoryMatrix <- function(valueVector) {
          allClasses <- c()
          for(classVec in unique(valueVector)) {
            allClasses <- unique(c(allClasses, strsplit(classVec, " ", fixed=TRUE)[[1]]))
          }
          resMatrix <- matrix(ncol=0, nrow=length(valueVector))
          splitValues <- strsplit(valueVector, " ", fixed=TRUE)
          for(cat in allClasses) {
            if(cat=="") {
              catIsPart <- (valueVector == "")
            } else {
              catIsPart <- sapply(splitValues, reverseIn, cat)
            }
            resMatrix <- cbind(resMatrix, catIsPart)
          }
          colnames(resMatrix) <- allClasses
          return(resMatrix)
        }

    Profiling the function gives me this:

        $by.self
                 self.time self.pct total.time total.pct
        "match"      31.20    34.74      31.24     34.79
        "FUN"        30.26    33.70      74.30     82.74
        "lapply"     13.56    15.10      87.86     97.84
        "%in%"       12.92    14.39      44.10     49.11

    So my actual questions would be: where is the 33% spent in "FUN" coming from? And would there be any way to speed up the %in% call? I tried turning the strings into factors prior to going into the loop so that I'd be matching numbers instead of strings, but that actually makes R crash. I've also tried partial matrix assignment (i.e., resMatrix[i,x] <- 1), where i is the number of the string and x is the vector of factors. No dice there either, as it seems to keep on running infinitely.
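
    Whatever the language, the structural fix is to preallocate the full matrix once and fill cells by index, instead of growing it with cbind (which copies the whole matrix on every pass). A sketch of that shape in Python (numpy assumed) rather than R:

        import numpy as np

        strings = ["", "A B C", "D", "B D"]

        # Map each distinct element to a fixed column index up front.
        elements = sorted({e for s in strings for e in s.split(" ") if e})
        col = {e: j for j, e in enumerate(elements)}

        mat = np.zeros((len(strings), len(elements)), dtype=int)
        for i, s in enumerate(strings):
            for e in s.split(" "):
                if e in col:                      # the empty string has no column
                    mat[i, col[e]] = 1

    The same preallocate-then-index pattern in R (matrix(0, n, k) plus resMatrix[i, j] <- 1 on integer indices, not factors) avoids both the cbind copies and the repeated %in% scans.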

    Read the article

  • Parallelize or vectorize all-against-all operation on a large number of matrices?

    - by reve_etrange
    I have approximately 5,000 matrices with the same number of rows and varying numbers of columns (20 x ~200). Each of these matrices must be compared against every other in a dynamic programming algorithm. In this question, I asked how to perform the comparison quickly and was given an excellent answer involving a 2D convolution. Serially, iteratively applying that method, like so:

        list = who('data_matrix_prefix*');
        H = cell(numel(list), numel(list));
        for i = 1:numel(list)
            for j = 1:numel(list)
                if i ~= j
                    eval(['H{i,j} = compare(' char(list(i)) ',' char(list(j)) ');']);
                end
            end
        end

    is fast for small subsets of the data (e.g. for 9 matrices, 9*9 - 9 = 72 calls are made in ~1 s). However, operating on all the data requires almost 25 million calls. I have also tried using deal() to make a cell array composed entirely of the next element in data, so I could use cellfun() in a single loop:

        % who(), load() and struct2cell() calls place k data matrices
        % in a 1D cell array called data.
        nextData = cell(k, 1);
        for i = 1:k
            [nextData{:}] = deal(data{i});
            H{:,i} = cellfun(@compare, data, nextData, 'UniformOutput', false);
        end

    Unfortunately, this is not really any faster, because all the time is in compare(). Both of these code examples seem ill-suited for parallelization, and I'm having trouble figuring out how to make my variables sliced. compare() is totally vectorized; it uses matrix multiplication and conv2() exclusively (I am under the impression that all of these operations, including the cellfun(), should be multithreaded in MATLAB?). Does anyone see an (explicitly) parallelized solution or a better vectorization of the problem?

    Read the article

  • Build OpenGL model in parallel?

    - by Brendan Long
    I have a program which draws some terrain and simulates water flowing over it (in a cheap and easy way). Updating the water was easy to parallelize using OpenMP, so I can do ~50 updates per second. The problem is that even with a small amount of water, my draws per second are very, very low (starting at 5 and dropping to around 2 once there's a significant amount of water). It's not a problem with the video card, because the terrain is more complicated and gets drawn so quickly that boost::timer tells me I get infinite draws per second if I turn the water off. It may be related to memory bandwidth, though (since I assume the model stays on the card and doesn't have to be transferred every time). What I'm concerned about is that on every draw, I'm calling glVertex3f() about a million times (max size is 450*600, 4 vertices each), and it's done entirely sequentially because Glut won't let me call anything in parallel. So... is there some way of building the list in parallel and then passing it to OpenGL all at once? Or some other way of making it draw faster? Am I using the wrong method (besides the obvious "use fewer vertices")?

    Read the article

  • What is the fastest (to access) struct-like object in Python?

    - by DNS
    I'm optimizing some code whose main bottleneck is running through and accessing a very large list of struct-like objects. Currently I'm using namedtuples, for readability. But some quick benchmarking using 'timeit' shows that this is really the wrong way to go where performance is a factor:

        Named tuple with a, b, c:
        >>> timeit("z = a.c", "from __main__ import a")
        0.38655471766332994

        Class using __slots__, with a, b, c:
        >>> timeit("z = b.c", "from __main__ import b")
        0.14527461047146062

        Dictionary with keys a, b, c:
        >>> timeit("z = c['c']", "from __main__ import c")
        0.11588272541098377

        Tuple with three values, using a constant key:
        >>> timeit("z = d[2]", "from __main__ import d")
        0.11106188992948773

        List with three values, using a constant key:
        >>> timeit("z = e[2]", "from __main__ import e")
        0.086038238242508669

        Tuple with three values, using a local key:
        >>> timeit("z = d[key]", "from __main__ import d, key")
        0.11187358437882722

        List with three values, using a local key:
        >>> timeit("z = e[key]", "from __main__ import e, key")
        0.088604143037173344

    First of all, is there anything about these little timeit tests that would render them invalid? I ran each several times, to make sure no random system event had thrown them off, and the results were almost identical. It would appear that dictionaries offer the best balance between performance and readability, with classes coming in second. This is unfortunate, since, for my purposes, I also need the object to be sequence-like; hence my choice of namedtuple. Lists are substantially faster, but constant keys are unmaintainable; I'd have to create a bunch of index constants, i.e. KEY_1 = 1, KEY_2 = 2, etc., which is also not ideal. Am I stuck with these choices, or is there an alternative that I've missed?
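
    One alternative the benchmarks suggest, sketched under the assumption that both attribute and index access are needed: a __slots__ class that also implements the sequence protocol. Attribute reads get the class timing above, while indexing and unpacking still work (though the tuple built inside __getitem__ makes indexed access slower than a bare list).

        class Record:
            """Struct-like object: fast attribute access plus sequence behavior."""
            __slots__ = ("a", "b", "c")

            def __init__(self, a, b, c):
                self.a, self.b, self.c = a, b, c

            def __getitem__(self, index):
                return (self.a, self.b, self.c)[index]

            def __len__(self):
                return 3

        r = Record(1, 2, 3)
        assert r.c == 3 and r[2] == 3
        x, y, z = r  # unpacking works via the sequence protocol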

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

        UPDATE Indexer.Pages SET LastError=NULL where LastError is not null;

    Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it; however, the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work, since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema:

        CREATE TABLE indexer.pages
        (
          id uuid NOT NULL,
          url character varying(1024) NOT NULL,
          firstcrawled timestamp with time zone NOT NULL,
          lastcrawled timestamp with time zone NOT NULL,
          recipeid uuid,
          html text NOT NULL,
          lasterror character varying(1024),
          missingings smallint,
          CONSTRAINT pages_pkey PRIMARY KEY (id),
          CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
        );

    I also have two indexes:

        CREATE INDEX idx_indexer_pages_missingings
          ON indexer.pages USING btree (missingings)
          WHERE missingings > 0;

        CREATE INDEX idx_indexer_pages_null
          ON indexer.pages USING btree (recipeid)
          WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.

    Read the article

  • Issue with GCD and too many threads

    - by dariaa
    I have an image loader class which, provided with an NSURL, loads an image from the web and executes a completion block. The code is actually quite simple:

        - (void)downloadImageWithURL:(NSString *)URLString completion:(BELoadImageCompletionBlock)completion
        {
            dispatch_async(_queue, ^{
            // dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
                UIImage *image = nil;
                NSURL *URL = [NSURL URLWithString:URLString];
                if (URL) {
                    image = [UIImage imageWithData:[NSData dataWithContentsOfURL:URL]];
                }
                dispatch_async(dispatch_get_main_queue(), ^{
                    completion(image, URLString);
                });
            });
        }

    When I replace dispatch_async(_queue, ^{ with the commented-out dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{, images load much faster, which is quite logical (before, images would be loaded one at a time; now a bunch of them load simultaneously). My issue is that I have perhaps 50 images, I call downloadImageWithURL:completion: for all of them, and when I use the global queue instead of _queue my app eventually crashes and I see there are 85+ threads. Can the problem be that calling dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) 50 times in a row makes GCD create too many threads? I thought that GCD handles all the threading and makes sure the number of threads is not huge, but if that's not the case, is there any way I can influence the number of threads?
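
    The general cure, in any framework, is to bound the number of in-flight downloads instead of dispatching all 50 at once. A sketch of that pattern in Python's asyncio rather than GCD (the limit of 4 and the fetch callable are illustrative):

        import asyncio

        MAX_IN_FLIGHT = 4
        sem = asyncio.Semaphore(MAX_IN_FLIGHT)

        async def download(url, fetch):
            async with sem:          # at most MAX_IN_FLIGHT downloads run concurrently
                return await fetch(url)

        async def download_all(urls, fetch):
            return await asyncio.gather(*(download(u, fetch) for u in urls))

    In GCD terms the equivalent lever is a counting dispatch semaphore (or an operation queue with a maximum concurrent-operation count) in front of the global queue, so blocking calls like dataWithContentsOfURL: cannot tie up an unbounded number of worker threads.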

    Read the article

  • How to check user input for correct formatting

    - by Arcadian
    This is what I've come up with so far:

        private void CheckFormatting()
        {
            StringReader objReaderf = new StringReader(txtInput.Text);
            List<String> formatTextList = new List<String>();
            do
            {
                formatTextList.Add(objReaderf.ReadLine());
            } while (objReaderf.Peek() != -1);
            objReaderf.Close();

            for (int i = 0; i < formatTextList.Count; i++)
            {
            }
        }

    What it is designed to do is check that the user has entered their information in this format: Gxx:xx:xx:xx JGxx, where "x" can be any digit. As you can see, the user inputs their data into a multi-line textbox. I then take that data and enter it into a list. The next part is where I'm stuck. I create a for loop to go through the list line by line, but I guess I will also need to go through each line character by character. How do I do this? Or is there a faster way of doing it? Thanks in advance.
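
    A regular expression avoids the character-by-character loop entirely. A sketch of the pattern in Python (the identical pattern string works with .NET's Regex.IsMatch):

        import re

        # G + two digits, then three ":" + two-digit groups, a space, "JG" + two digits.
        PATTERN = re.compile(r"^G\d{2}:\d{2}:\d{2}:\d{2} JG\d{2}$")

        def line_is_valid(line: str) -> bool:
            return PATTERN.match(line) is not None

        assert line_is_valid("G12:34:56:78 JG90")
        assert not line_is_valid("G12:34:56 JG90")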

    Read the article

  • Need to sort 3 arrays by one key array

    - by jeff6461
    I am trying to get 3 arrays sorted by one key array in Objective-C for the iPhone. Here is an example to help out:

        Array 1   Array 2   Array 3   Array 4
           1         15        21        7
           3         12         8        9
           6          7         8        0
           2          3         4        8

    When sorted, I want this to look like:

        Array 1   Array 2   Array 3   Array 4
           1         15        21        7
           2          3         4        8
           3         12         8        9
           6          7         8        0

    So arrays 2, 3 and 4 are moving with array 1 when sorted. Currently I am using a bubble sort to do this, but it lags so badly that it crashes my app. The code I am using to do this is:

        int flag = 0;
        int i = 0;
        int temp = 0;
        do {
            flag = 1;
            for (i = 0; i < distancenumber; i++) {
                if (distance[i] > distance[i+1]) {
                    temp = distance[i];
                    distance[i] = distance[i+1];
                    distance[i+1] = temp;

                    temp = FlowerarrayNumber[i];
                    FlowerarrayNumber[i] = FlowerarrayNumber[i+1];
                    FlowerarrayNumber[i+1] = temp;

                    temp = BeearrayNumber[i];
                    BeearrayNumber[i] = BeearrayNumber[i+1];
                    BeearrayNumber[i+1] = temp;

                    flag = 0;
                }
            }
        } while (flag == 0);

    where distancenumber is the number of elements in all of the arrays, and distance is array 1, my key array; the other arrays are getting sorted along with it. If anyone can help me get a merge sort (or something faster; it is running on an iPhone, so it needs to be quick and light) to do this, that would be great. I cannot figure out how the recursion works in this method, so I'm having a hard time getting the code to work. Any help would be greatly appreciated.
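
    The usual trick is to sort an array of index positions by the key, then permute every parallel array through those indices once; no recursion needed. A sketch in Python using the numbers from the question (the same index-sort idea ports to C with qsort over an index array):

        distance = [1, 3, 6, 2]
        flowers  = [15, 12, 7, 3]
        bees     = [21, 8, 8, 4]
        extra    = [7, 9, 0, 8]

        # Sort index positions by the key array: O(n log n).
        order = sorted(range(len(distance)), key=lambda i: distance[i])

        # Apply the same permutation to every parallel array.
        distance = [distance[i] for i in order]
        flowers  = [flowers[i] for i in order]
        bees     = [bees[i] for i in order]
        extra    = [extra[i] for i in order]

        assert distance == [1, 2, 3, 6]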

    Read the article

  • How to perform a Depth First Search iteratively using async/parallel processing?

    - by Prabhu
    Here is a method that does a DFS search and returns a list of all items, given a top-level item id. How could I modify this to take advantage of parallel processing? Currently, the call to get the sub-items is made one by one for each item in the stack. It would be nice if I could get the sub-items for multiple items in the stack at the same time, and populate my return list faster. How could I do this (either using async/await or TPL, or anything else) in a thread-safe manner?

        private async Task<IList<Item>> GetItemsAsync(string topItemId)
        {
            var items = new List<Item>();
            var topItem = await GetItemAsync(topItemId);
            Stack<Item> stack = new Stack<Item>();
            stack.Push(topItem);
            while (stack.Count > 0)
            {
                var item = stack.Pop();
                items.Add(item);
                var subItems = await GetSubItemsAsync(item.SubId);
                foreach (var subItem in subItems)
                {
                    stack.Push(subItem);
                }
            }
            return items;
        }

    EDIT: I was thinking of something along these lines, but it's not coming together:

        var tasks = stack.Select(async item =>
        {
            items.Add(item);
            var subItems = await GetSubItemsAsync(item.SubId);
            foreach (var subItem in subItems)
            {
                stack.Push(subItem);
            }
        }).ToList();
        if (tasks.Any()) await Task.WhenAll(tasks);

    UPDATE: If I wanted to chunk the tasks, would something like this work?

        foreach (var batch in items.BatchesOf(100))
        {
            var tasks = batch.Select(async item =>
            {
                await DoSomething(item);
            }).ToList();
            if (tasks.Any())
            {
                await Task.WhenAll(tasks);
            }
        }

    The language I'm using is C#.
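
    One common restructuring is to traverse the frontier level by level: fetch the children of every item in the current frontier concurrently, then make the combined results the next frontier (this trades strict DFS order for parallelism, which the flat result list appears to permit). A sketch in Python's asyncio rather than C#, with get_sub_items standing in for GetSubItemsAsync:

        import asyncio

        async def get_all_items(top_item, get_sub_items):
            items = []
            frontier = [top_item]
            while frontier:
                items.extend(frontier)
                # Fetch every node's children in this level concurrently.
                results = await asyncio.gather(
                    *(get_sub_items(item) for item in frontier))
                frontier = [child for subitems in results for child in subitems]
            return items

    Collecting children from the gathered results, instead of pushing onto a shared stack inside each task (as in the EDIT attempt), is what keeps the batch safe.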

    Read the article
