Search Results

Search found 4580 results on 184 pages for 'faster'.

Page 35 of 184

  • How to create temporary files on the client machine from a web application?

    - by Gaurav Srivastava
    I am creating a web application using JSP, Struts, EJB and Servlets. The application is a combined CRM and accounting package, so the database is very large. To make execution faster, I want to prevent round trips to the database. For that purpose, I want to create some temporary XML files on the client machine and use them whenever required. How can I do this, given that JavaScript does not permit me to write to the client's file system? Is there any way of doing this, or is there any other solution I could adopt to make my application faster?
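
    One hedged alternative, since browser JavaScript cannot write files to the visitor's machine: cache the query results on the server instead. A minimal in-memory cache sketch; the class name and key/value types are illustrative, and a real deployment would also need eviction and size limits:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class QueryCache {
            // Shared across requests; hits here avoid a database round trip.
            private static final Map<String, Object> CACHE =
                    new ConcurrentHashMap<String, Object>();

            public static Object get(String key) {
                return CACHE.get(key);
            }

            public static void put(String key, Object value) {
                CACHE.put(key, value);
            }
        }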

    Read the article

  • Solr date field tdate vs date?

    - by user550178
    So I have a question about Solr's date field types which is pretty straightforward: what's the difference between a 'date' field and a 'tdate' one? The schema.xml claims that 'For faster range queries, consider the tdate type' and 'A Trie based date field for faster date range queries and date faceting.' Fair enough... but what's the precisionStep="6" all about? Should I change it? Does it change the way I would create the query if I use tdate? What's the real advantage, or what does Solr do that makes it better? P.S. I went through Google, the Solr manual, the Solr wiki and the Javadocs without any luck, so I'd appreciate a kind and explanatory answer :)... Also checked: http://www.lucidimagination.com/blog/2009/05/13/exploring-lucene-and-solrs-trierange-capabilities/ http://web.archiveorange.com/archive/v/AAfXfqRYyLnDFtskmLRi
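
    For reference, the stock schema.xml of that Solr era declares the two types roughly as below (a sketch from memory, so treat the exact attributes as an assumption). The functional difference is the class and precisionStep: the trie field indexes each value at several coarser precisions, so a range query can be answered from far fewer terms at the cost of a slightly larger index; the queries themselves are written the same way.

        <fieldType name="date"  class="solr.DateField"
                   sortMissingLast="true" omitNorms="true"/>
        <fieldType name="tdate" class="solr.TrieDateField"
                   omitNorms="true" precisionStep="6" positionIncrementGap="0"/>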

    Read the article

  • Is there any valid reason radians are used as the inputs to trig functions in many modern languages?

    - by johnmortal
    Is there any pressing reason trig functions should use radian inputs in modern programming languages? As far as I know, radians are typically ugly to deal with except in three cases: (1) you want to compute an arc length and you know the angle of the arc; (2) you need to do symbolic calculus with trig functions; (3) certain infinite series expansions look prettier if the input is in radians. None of these scenarios seems like a worthy justification for every programming language I am familiar with using radian inputs for sin, cos, tangent, etc. The third one sounds good because it might mean one gets faster computations using radians (very slightly faster: the cost of one additional floating-point multiplication), but I am dubious even of that, because most commonly the developer had to take an extra step to put the angle in radians in the first place. The other two are ridiculous justifications for all the added obscurity.
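
    A minimal sketch (Python, purely illustrative) of the degree-taking wrappers the question has in mind, showing where the single extra floating-point multiplication comes from:

        import math

        DEG2RAD = math.pi / 180.0

        def sind(degrees):
            return math.sin(degrees * DEG2RAD)  # the one extra multiply

        def cosd(degrees):
            return math.cos(degrees * DEG2RAD)

        print(sind(30.0))  # ~0.5, up to floating-point rounding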

    Read the article

  • Threading vs single thread

    - by user177883
    Is it always guaranteed that a multi-threaded application will run faster than a single-threaded one? I have two threads that populate data from a data source, but for different entities (e.g. a database, two different tables). The single-threaded version of the application seems to be running faster than the version with two threads. Why would that be? When I look at the performance monitor, both CPUs are very spiky; is this due to context switching? What are the best practices to fully utilize the CPU? I hope this is not ambiguous.

    Read the article

  • What is the chance a CouchDB document update handler will get a revision conflict?

    - by jhs
    How likely is a revision conflict when using an update handler? Should I concern myself with conflict-handling code when writing a robust update function? As described in Document Update Handlers, CouchDB 0.10 and later allows on-demand server-side document modification. Update handlers can process non-JSON formats, but the other major features are these:
      - An HTTP front-end to arbitrarily complex document modification code
      - Similar code needn't be written for all possible clients (a DRY architecture)
      - Execution is faster and less likely to hit a revision conflict
    I am unclear about the third point. Executing locally, the update handler will run much faster and with lower latency. But in situations with high contention, that does not guarantee a successful update. Or does the update handler guarantee a successful update?
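
    For concreteness, a minimal sketch of an update handler; it lives in the "updates" field of a design document and is invoked with a request like PUT /db/_design/app/_update/bump/<docid>. The handler name and the counter field are illustrative:

        function (doc, req) {
            if (!doc) {
                // No document with that id: save nothing, return an error body.
                return [null, 'Document not found\n'];
            }
            doc.counter = (doc.counter || 0) + 1;  // the server-side modification
            // First element is the document to save, second is the HTTP response.
            return [doc, 'Counter is now ' + doc.counter + '\n'];
        }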

    Read the article

  • mystified by qr.Q(): what is an orthonormal matrix in "compact" form?

    - by gappy
    R has a qr() function, which performs QR decomposition using either LINPACK or LAPACK (in my experience, the latter is 5% faster). The main object returned is a matrix "qr" that contains in its upper triangle the matrix R (i.e. R = qr[upper.tri(qr)]). So far so good. The lower triangular part of qr contains Q "in compact form". One can extract Q from the qr decomposition by using qr.Q(). I would like to find the inverse of qr.Q(). In other words, I have Q and R and would like to put them into a "qr" object. R is trivial but Q is not. The goal is to apply qr.solve() to it, which is much faster than solve() on large systems.
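
    A short illustration of the property at the heart of the question: Q from a full-rank QR decomposition is orthonormal, so its inverse is simply its transpose (a sketch; nothing here depends on LINPACK vs LAPACK):

        set.seed(1)
        A <- matrix(rnorm(25), 5, 5)
        decomp <- qr(A, LAPACK = TRUE)
        Q <- qr.Q(decomp)
        R <- qr.R(decomp)

        max(abs(t(Q) %*% Q - diag(5)))  # ~0: the columns of Q are orthonormal
        max(abs(solve(Q) - t(Q)))       # ~0: solve(Q) agrees with t(Q)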

    Read the article

  • Ruby on Rails: What are Erubis' disadvantages and why isn't it packaged with Rails by default? How t

    - by williamjones
    I just discovered Erubis, a replacement for the default view renderer in Ruby on Rails. From what I can tell from reading about it, it's superior across the board: it is much faster, it has many more options, and it can prevent cross-site scripting without having to use h. Does it have any disadvantages versus the standard ERB renderer? Why isn't it the standard renderer packaged with Rails? Also, the docs for Erubis say to install it just by installing the gem and then adding the following to environment.rb:

        require 'erubis/helpers/rails_helper'
        # Erubis::Helpers::RailsHelper.engine_class = Erubis::Eruby # or Erubis::FastEruby

    Reading the docs, FastEruby seems to be just a faster renderer than Eruby. Why wouldn't it be the default, used by everyone? I'm highly interested in using the engine Erubis::EscapedEruby, which automatically calls h to escape HTML on fields from the database. Are there any gotchas I should be aware of, or does this pretty much solve all cross-site scripting?

    Read the article

  • Fastest way to deploy rails apps with Passenger

    - by yuval
    I am working on a Dreamhost server with Rails 2.3.5. Every time I make changes to a site, I have to ssh into the server, remove all the files, upload a zip file containing all the new files for the site, unzip that file, migrate the database, and go. Something tells me there's a faster way to deploy Rails apps. I am using Mac Time Machine to keep track of different versions of my applications. I know git tracks files, but I don't really know how to work with it to deploy my applications, since Passenger takes care of all the magic for me. What would be a faster way to deploy my applications (and avoid the downtime my method causes when I delete all the files on the server)? Thanks!
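
    One hedged sketch of a git-based deploy that avoids the delete-and-reupload downtime (repository URL and paths are illustrative; Capistrano is the more standard tool for automating exactly this workflow):

        # One-time setup on the server:
        git clone ssh://you@yourhost/path/to/repo.git ~/mysite.com

        # Each deploy, run from your machine:
        ssh you@yourhost 'cd ~/mysite.com &&
            git pull &&
            rake db:migrate RAILS_ENV=production &&
            touch tmp/restart.txt'  # touching tmp/restart.txt tells Passenger to restart the app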

    Read the article

  • design pattern tools to use?

    - by ajsie
    I have noticed that every area has some tools you can use to make things easier. E.g. for CSS there is Dreamweaver; for Doctrine/Propel there is ORM Designer (you don't have to hand-code schemas and remember all the syntax and variables); for MySQL there is MySQL Workbench (the same); etc. This way you are aided, don't have to type things the hard way, and can learn the structure while using GUI tools to develop faster. Now I'm learning design patterns (singleton, factory, command, memento, etc.) and I'm wondering if there are some kind of tools that will help me develop faster with them. I don't know exactly what tools I'm trying to find, just something helping me when coding with design patterns (schema drawings? references?). Are there any?

    Read the article

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory; the second is much faster, but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |     65.626 |     540,036 |    200
        Two    |     20.207 |   3,269,600 |    200
        --------------------------------------------

    And here is the average of the previous numbers (if you don't want to do your own math):

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |      0.328 |     540,036 |      1
        Two    |      0.101 |   3,269,600 |      1
        --------------------------------------------

    Which method should I use, and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method because, although it uses more memory, it runs in a third of the time and would reduce the number of concurrent requests.

    Read the article

  • Alternative to 'where col in (list)' for MySQL

    - by user210481
    Hi, I have the following table T:

        id | col
        ---+----
         1 |  a
         2 |  b
         3 |  a
         4 |  c

    I want a select that returns id, col for every row whose col value occurs more than once (GROUP BY col HAVING COUNT(col) > 1). One way of doing it is:

        SELECT id, col FROM T
        WHERE col IN (SELECT col FROM T GROUP BY col HAVING COUNT(col) > 1);

    The inner select returns 'a', and the main one returns (1, a) and (3, a). The problem is that the WHERE ... IN seems to be extremely slow. In my real case, the result of the inner select contains many values of col, about 70,000, and the query takes hours. Right now it's much faster to run the inner select and the main select separately, fetch all the ids and cols, and do the intersection locally. MySQL should be able to handle this kind of query efficiently. Can I substitute the WHERE ... IN with a join or something faster? Thanks
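
    A hedged sketch of the join form the question asks about; it returns the same rows, assuming id uniquely identifies a row (an index on col is usually what matters most for speed here):

        SELECT t.id, t.col
        FROM T AS t
        JOIN (
            SELECT col
            FROM T
            GROUP BY col
            HAVING COUNT(col) > 1
        ) AS dup ON dup.col = t.col;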

    Read the article

  • CUDA small kernel 2d convolution - how to do it

    - by paulAl
    I've been experimenting with CUDA kernels for days to perform a fast 2D convolution between a 500x500 image (though I could also vary the dimensions) and a very small 2D kernel (a 3x3 Laplacian kernel, so it's too small to take huge advantage of all the CUDA threads). I created a classic CPU implementation (two for loops, as easy as you would think) and then I started creating CUDA kernels. After a few disappointing attempts at a faster convolution, I ended up with this code: http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section). It basically lets a 16x16 block of threads load all the convolution data it needs into shared memory and then performs the convolution. Nothing: the CPU is still a lot faster. I didn't try the FFT approach because the CUDA SDK states that it is efficient with large kernel sizes. Whether or not you read everything I wrote, my question is: how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?
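
    For reference, a minimal sketch of the direct approach for a fixed 3x3 filter, with the coefficients in constant memory; the border handling (clamping) and the launch shape are illustrative assumptions, not taken from the linked code:

        #include <cuda_runtime.h>

        __constant__ float d_filter[9];  // 3x3 filter, row-major; set via cudaMemcpyToSymbol

        __global__ void conv3x3(const float *in, float *out, int width, int height)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= width || y >= height) return;

            float sum = 0.0f;
            for (int ky = -1; ky <= 1; ++ky) {
                for (int kx = -1; kx <= 1; ++kx) {
                    // Clamp reads at the image border.
                    int ix = min(max(x + kx, 0), width - 1);
                    int iy = min(max(y + ky, 0), height - 1);
                    sum += in[iy * width + ix] * d_filter[(ky + 1) * 3 + (kx + 1)];
                }
            }
            out[y * width + x] = sum;
        }

        // Host-side launch, e.g.:
        //   dim3 block(16, 16);
        //   dim3 grid((width + 15) / 16, (height + 15) / 16);
        //   conv3x3<<<grid, block>>>(d_in, d_out, width, height);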

    Read the article

  • Is Python a beginner language or is it robust?

    - by orokusaki
    I am already working on some software in Python, but I'm having one of those days where I step back and reflect just to make sure I'm not spinning my wheels. I know that Twitter launched with RoR because it was fast to build, and then almost moved to another language in 2008 because of scalability issues. This has caused me to step back and introspect for a moment to make sure I'm heading down the right path. I've read in some tutorials and other places that Python is "a great first language" or a "nice beginner language", as though it's not capable of larger tasks. I look at it this way: Python can do what Java or ASP can, but with about a quarter of the code, not to mention that I don't have to build or compile, etc. I've read that Java runs quite a few times faster than Python, which is important of course, but then I read everywhere that hardware keeps getting cheaper and that there are projects like Unladen Swallow by Google to make Python faster. Should I be concerned, or is this just the remnants of Java developers?

    Read the article

  • How does the verbosity of identifiers affect the performance of a programmer?

    - by DR
    I have always wondered: are there any hard facts which would indicate that either shorter or longer identifiers are better? Example: clrscr() as opposed to ClearScreen(). Short identifiers should be faster to read because there are fewer characters, but longer identifiers often better resemble natural language and therefore should also be faster to read. Are there other aspects which suggest either a short or a verbose style? EDIT: Just to clarify: I didn't ask "What would you do in this case?". I asked for reasons to prefer one over the other, i.e. this is not a poll question. Please, if you can, add some reason why one would prefer one style over the other.

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the desired changes. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again (assuming I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.)
    Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes, logging, and so on, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT that I tried in my first attempt.)
    Edit 2: The schema changes include (but are not limited to) the following:
    1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship to the other.
    2. Moving columns from one table to another.
    3. Deleting some cross-reference tables that only cross-referenced 1-to-1. Instead, the reference will be a foreign key in one of the two tables.
    4. Some new columns will be created with default values.
    5. Some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.
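
    A hedged sketch of a purely server-side copy that keeps identity values; table and column names are illustrative, and the TABLOCK hint is what can enable minimally logged inserts (given a heap target and a suitable recovery model):

        SET IDENTITY_INSERT dbo.NewTable ON;

        INSERT INTO dbo.NewTable WITH (TABLOCK) (Id, Name, CreatedOn)
        SELECT Id, Name, CreatedOn
        FROM OldDb.dbo.OldTable;

        SET IDENTITY_INSERT dbo.NewTable OFF;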

    Read the article

  • Async stream writing in a thread

    - by blez
    I have a thread in which I write to two streams. The problem is that the thread is blocked until the first one finishes writing (until all data is transferred to the other side of the pipe), and I don't want that. Is there a way to make it asynchronous? chunkOutput is a Dictionary filled with data from multiple threads, so the faster the check for existing keys is, the faster the pipe will write.

        void ConsumerMethod(object totalChunks) {
            while (true) {
                if (chunkOutput.ContainsKey(curChunk)) {
                    if (outputStream != null && chunkOutput[curChunk].Length > 0) {
                        outputStream.Write(chunkOutput[curChunk]); // <-- here it stops
                    }
                    ChunkDownloader.AppendData("outfile.dat",
                        chunkOutput[curChunk], chunkOutput[curChunk].Length);
                    curChunk++;
                    if (curChunk >= (int) totalChunks) return;
                }
                Thread.Sleep(10);
            }
        }
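
    One hedged sketch of a non-blocking write using the asynchronous Stream API of that .NET era, assuming outputStream is (or wraps) a standard Stream; the buffer handling is illustrative:

        byte[] buffer = chunkOutput[curChunk];
        // BeginWrite returns immediately; the callback runs when the write completes.
        outputStream.BeginWrite(buffer, 0, buffer.Length, asyncResult =>
        {
            outputStream.EndWrite(asyncResult); // completes the write and surfaces errors
        }, null);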

    Read the article

  • Optimizing simple search script in PowerShell

    - by cc0
    I need to create a script to search through just under a million files of text, code, etc. to find matches, and then output all hits on a particular string pattern to a CSV file. So far I made this:

        $location = 'C:\Work*'
        $arr = "foo", "bar" # string patterns I want to search for (separately)
        for ($i = 0; $i -lt $arr.length; $i++) {
            Get-ChildItem $location -recurse |
                select-string -pattern $($arr[$i]) |
                select-object Path |
                Export-Csv "C:\Work\Results\$($arr[$i]).txt"
        }

    This returns a CSV file named "foo.txt" with a list of all files containing the word "foo", and a file named "bar.txt" with a list of all files containing the word "bar". Can anyone think of a way to optimize this script to make it work faster? Or ideas on how to make an entirely different but equivalent script that just works faster? All input appreciated!
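
    A hedged sketch of one likely optimization: walk the directory tree once and match all patterns in a single Select-String call, then split the hits per pattern afterwards (paths are illustrative):

        $location = 'C:\Work*'
        $patterns = 'foo', 'bar'

        # One pass over the files; each hit records which pattern matched.
        $hits = Get-ChildItem $location -Recurse | Select-String -Pattern $patterns

        foreach ($p in $patterns) {
            $hits | Where-Object { $_.Pattern -eq $p } |
                Select-Object Path |
                Export-Csv "C:\Work\Results\$p.txt" -NoTypeInformation
        }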

    Read the article

  • Best datastructure for frequently queried list of objects

    - by panzerschreck
    Hello, I have a list of objects, say List<Entity>. The Entity class has an equals method on a few attributes (a business rule) to differentiate one Entity object from another. The task that we usually carry out on this list is to remove all the duplicates, something like this:

        List<Entity> noDuplicates = new ArrayList<Entity>();
        for (Entity entity : lstEntities) {
            int indexOf = noDuplicates.indexOf(entity);
            if (indexOf >= 0) {
                noDuplicates.get(indexOf).merge(entity);
            } else {
                noDuplicates.add(entity);
            }
        }

    Now, the problem I have been observing is that this part of the code slows down considerably as soon as the list has more than 10,000 objects. I understand ArrayList is doing an O(n) search. Is there a faster alternative? Using a HashMap is not an option, because the entity's uniqueness is built upon four of its attributes together; it would be tedious to put the key itself into the map. Will a sorted set help in faster querying? Thanks
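
    A hedged sketch of the map-based alternative: if Entity overrides hashCode() over the same four attributes as equals(), the entity can serve as its own key, turning the O(n) indexOf into an O(1) lookup, while LinkedHashMap preserves insertion order (Entity and merge() are from the question; the rest is illustrative):

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        static List<Entity> removeDuplicates(List<Entity> lstEntities) {
            Map<Entity, Entity> unique = new LinkedHashMap<Entity, Entity>();
            for (Entity entity : lstEntities) {
                Entity existing = unique.get(entity);
                if (existing != null) {
                    existing.merge(entity);  // same merge as the original loop
                } else {
                    unique.put(entity, entity);
                }
            }
            return new ArrayList<Entity>(unique.values());
        }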

    Read the article

  • Could someone give me their two cents on this optimization strategy

    - by jimstandard
    Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented in multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith", load all of those records into memory, and then go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so, does anyone have any recommendations for how to do it?
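
    A hedged sketch of the in-memory approach described above; fetch_by_last_name and is_match are assumed stand-ins for the actual query and matching logic:

        from collections import defaultdict

        # One round trip: pull every candidate with the fixed last name.
        candidates = fetch_by_last_name("Smith")

        # Index the candidates in memory by a cheap key, e.g. normalized first name.
        by_first = defaultdict(list)
        for record in candidates:
            by_first[record.first_name.strip().lower()].append(record)

        # Each transaction now costs one dict lookup plus a few local comparisons.
        def find_matches(transaction):
            key = transaction.first_name.strip().lower()
            return [r for r in by_first.get(key, []) if is_match(r, transaction)]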

    Read the article

  • More elegant way to parse inline variables in strings

    - by Tom
    Currently I have this:

        function parse_string($string, $variables) {
            extract($variables);
            return eval('return "' . addcslashes($string, '"') . '";');
        }

    So I can input this string:

        'Hi {$name}, my name is {$own_name}'

    together with this array:

        array('name' => 'John', 'own_name' => 'Tom')

    and get this back:

        'Hi John, my name is Tom'

    I've never liked this eval() approach, but it works and it's fast (faster than regex, at least). Question: is there a more elegant way to do this (faster than using regex) in PHP 5?
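
    A hedged sketch of an eval-free alternative built on strtr(), which makes a single pass over the string and executes no code (same placeholder syntax as above):

        function parse_string($string, $variables) {
            $replacements = array();
            foreach ($variables as $name => $value) {
                $replacements['{$' . $name . '}'] = $value;
            }
            return strtr($string, $replacements);
        }

        echo parse_string('Hi {$name}, my name is {$own_name}',
                          array('name' => 'John', 'own_name' => 'Tom'));
        // Hi John, my name is Tom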

    Read the article

  • fast scrolling background

    - by Andre
    I want a game that scrolls the background in a similar way to a UITableView. I solved it with a timer that moves the background up and brings another copy of the same picture up:

        if (bg1.center.y <= -self.view.bounds.size.height / 2) {
            bg1.center = CGPointMake(bg1.center.x, 690);
        }
        if (bg2.center.y <= -self.view.bounds.size.height / 2) {
            bg2.center = CGPointMake(bg2.center.x, 690);
        }
        bg1.center = CGPointMake(bg1.center.x, bg1.center.y - movement);
        bg2.center = CGPointMake(bg2.center.x, bg2.center.y - movement);

    But the faster I move the pictures, the more problems occur: gaps appear between the backgrounds, and they get bigger the faster I move them. movement is defined by the speed of swiping over the screen. Any idea how to solve this?
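
    A hedged sketch of the usual fix for the gaps: instead of resetting a wrapped image to a fixed y (690 above), position it exactly one image-height above the other image, so large per-tick movements cannot accumulate into a visible seam (backgroundHeight is an assumed constant equal to the picture's height):

        if (bg1.center.y <= -self.view.bounds.size.height / 2) {
            // Snap bg1 exactly one image-height above bg2, wherever bg2 now is.
            bg1.center = CGPointMake(bg1.center.x, bg2.center.y + backgroundHeight);
        }
        if (bg2.center.y <= -self.view.bounds.size.height / 2) {
            bg2.center = CGPointMake(bg2.center.x, bg1.center.y + backgroundHeight);
        }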

    Read the article

  • Copy an array backwards? Array.Copy?

    - by daniel
    I have a List<T> that I want to copy to an array backwards, meaning start from List.Count and copy maybe 5 items starting at the end of the list and working backwards. I could do this with a simple reverse for loop; however, there is probably a faster/more efficient way of doing this, so I thought I should ask. Can I use Array.Copy somehow? Originally I was using a Queue, as that pops items off in the order I need, but I now need to pop off multiple items at once into an array, and I thought a list would be faster.
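
    A hedged sketch using the List<T>.CopyTo overload that block-copies a range, followed by an in-place reverse of just that slice (the list contents and n are illustrative):

        using System;
        using System.Collections.Generic;

        class Demo
        {
            static void Main()
            {
                var list = new List<int> { 1, 2, 3, 4, 5, 6, 7 };
                int n = 5;
                int[] tail = new int[n];
                list.CopyTo(list.Count - n, tail, 0, n); // block-copy the last n items
                Array.Reverse(tail);                     // now ordered back-to-front
                Console.WriteLine(string.Join(", ",
                    Array.ConvertAll(tail, x => x.ToString()))); // 7, 6, 5, 4, 3
            }
        }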

    Read the article

  • FOR loop performance in JavaScript

    - by AndrewMcLagan
    My research leads me to believe that the for loop is the fastest iteration construct in the JavaScript language. I was also thinking that hoisting the array length into a variable for the loop condition would be faster. To make it clearer, which of the following do you think would be faster?

    Example ONE:

        for (var i = 0; i < myLargeArray.length; i++) {
            console.log(myLargeArray[i]);
        }

    Example TWO:

        var count = myLargeArray.length;
        for (var i = 0; i < count; i++) {
            console.log(myLargeArray[i]);
        }

    My logic is that in example one, accessing the length of myLargeArray on each iteration is more computationally expensive than accessing the simple integer value of example two.

    Read the article

  • Counting bits set in a .NET BitArray class

    - by Sam
    I am implementing a library where I am extensively using the .NET BitArray class, and I need an equivalent of the Java BitSet.cardinality() method, i.e. a method which returns the number of bits set. I was thinking of implementing it as an extension method for the BitArray class. The trivial implementation is to iterate and count the bits set (like below), but I wanted a faster implementation, as I would be performing thousands of set operations and counting the answer. Is there a faster way than the example below?

        int count = 0;
        for (int i = 0; i < mybitarray.Length; i++)
        {
            if (mybitarray[i])
                count++;
        }
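
    A hedged sketch of a word-at-a-time count: copy the bits into an int[] in bulk, then run the standard parallel bit-count on each 32-bit word. It assumes the unused high bits of the final word are zero, which is worth verifying for your usage (operations such as Not() can disturb them):

        public static int Cardinality(System.Collections.BitArray bits)
        {
            var words = new int[(bits.Count + 31) / 32];
            bits.CopyTo(words, 0); // bulk copy of the underlying 32-bit words

            int count = 0;
            foreach (int word in words)
            {
                uint v = (uint)word;
                v = v - ((v >> 1) & 0x55555555u);                 // count pairs
                v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u); // count nibbles
                count += (int)((((v + (v >> 4)) & 0x0F0F0F0Fu) * 0x01010101u) >> 24);
            }
            return count;
        }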

    Read the article
