Search Results

Search found 4580 results on 184 pages for 'faster'.


  • Can a CouchDB document update handler get an update conflict?

    - by jhs
    How likely is a revision conflict when using an update handler? Should I concern myself with conflict-handling code when writing a robust update function? As described in Document Update Handlers, CouchDB 0.10 and later allows on-demand server-side document modification. Update handlers can process non-JSON formats, but their other major features are these:

    - An HTTP front-end to arbitrarily complex document modification code
    - Similar code needn't be written for all possible clients—a DRY architecture
    - Execution is faster and less likely to hit a revision conflict

    I am unclear about the third point. Executing locally, the update handler will run much faster and with lower latency. But in situations with high contention, that does not guarantee a successful update. Or does the update handler guarantee a successful update?
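
    For concreteness, here is a minimal sketch of invoking an update handler from Python (the database, design-doc, and handler names are hypothetical, and the requests library is assumed to be available):

        import requests

        # An update handler lives in a design doc and is invoked over HTTP;
        # the function runs server-side against the doc named in the URL.
        resp = requests.put(
            "http://localhost:5984/mydb/_design/app/_update/set-field/some-doc-id",
            params={"field": "status", "value": "done"},
        )
        print(resp.status_code, resp.text)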

    Read the article

  • Bat file using WinRAR taking too long to run

    - by Jessie
    Hi guys, I have this script, which extracts all my folders and files from my C:\projects location, puts them into WinRAR, and transfers them to C:\backup\projects:

        for /f "delims==" %%D in ('DIR C:\projects /A /B /S') do (
            "C:\Program Files\WinRAR\WinRAR.EXE" m -r "c:\backup\projects.rar" "%%D"
        )

    I have also tried the script below, which uses the same source C:\projects but puts each folder into its own separate WinRAR archive, mirroring the source, and then transfers the archives into my C:\backup:

        FOR /F "DELIMS==" %%D in ('DIR C:\projects /AD /B') DO (
            "C:\Program Files\WinRAR\WinRAR.EXE" m -r "C:\Backup\%%D.rar" "%%D"
        )

    My question is: my second script takes only two hours to run, while my first script takes over 24 hours. Is there any way to make my first script faster? If anything, shouldn't my first script be faster?

    Read the article

  • One large file or multiple small files?

    - by Dan
    I have an application (currently written in Python while we iron out the specifics, but eventually it will be written in C) that makes use of individual records stored in plain text files. We can't use a database, and new records will need to be added manually on a regular basis. My question is this: would it be faster to have a single file (500k-1Mb) and have my application open it, loop through, find the record, and close the file, OR would it be faster to have the records separated and named using some appropriate convention, so that the application could simply loop over filenames to find the data it needs? I know my question is quite general, so pointers to any good articles on the topic are appreciated as much as suggestions. Thanks very much in advance for your time, Dan
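
    As a starting point for measuring rather than guessing, here is a rough Python sketch of the two layouts (file names and record layout are invented for illustration; real timings will depend heavily on the filesystem and OS cache):

        import glob
        import time

        def scan_single_file(path, key):
            # One open/close, then a sequential scan over every record
            # (assuming one record per line, keyed by its prefix).
            with open(path) as f:
                for line in f:
                    if line.startswith(key):
                        return line
            return None

        def lookup_many_files(directory, key):
            # One file per record; the filename encodes the key, so only
            # the matching file is ever opened and read.
            matches = glob.glob("%s/%s*.txt" % (directory, key))
            if matches:
                with open(matches[0]) as f:
                    return f.read()
            return None

        start = time.time()
        scan_single_file("records.txt", "record-4711")
        print("single file:", time.time() - start)

        start = time.time()
        lookup_many_files("records", "record-4711")
        print("many files: ", time.time() - start)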

    Read the article

  • Java 'Prototype' pattern - new vs clone vs class.newInstance

    - by Guillaume
    In my project there are some 'Prototype' factories that create instances by cloning a final private instance. The author of those factories says that this pattern provides better performance than calling the 'new' operator. Searching Google for clues about this, I've found nothing really relevant. Here is a small excerpt found in a Javadoc from an unknown project: "Sadly, clone() is rather slower than calling new. However it is a lot faster than calling java.lang.Class.newInstance(), and somewhat faster than rolling our own 'cloner' method." To me it looks like an old best practice from the Java 1.1 era. Does someone know more about this? Is it a good practice to use this with a 'modern' JVM?

    Read the article

  • Kohana or CodeIgniter?

    - by Thinker
    I'm looking for a PHP framework. I did some research here and found that CodeIgniter would be best for me. But then I discovered that there is Kohana, which is based on CodeIgniter. And I'm annoyed, because since Kohana is based on CodeIgniter it should work similarly, but it is optimized for PHP 5, so it should be faster in a PHP 5 environment. I was going to choose Kohana, and then I found benchmarks that point to CodeIgniter as the faster one. I don't understand it. If Kohana is updated more frequently and optimized for PHP 5, shouldn't it be the better one? I'm an experienced PHP programmer, but Smarty is the only framework I know, and I don't want to waste the next few months learning a bad framework. Thanks for your reply; I will give Kohana a try.

    Read the article

  • Serializing C# objects to DB

    - by Robert Koritnik
    I'm using a DB table with various different entities. This means that I can't have an arbitrary number of fields in it to save all kinds of different entities. Instead I want to save just the most important fields (dates, reference IDs acting as a kind of foreign key to various other tables, the most important text fields, etc.) plus an additional text field where I'd like to store more complete object data. The most obvious solution would be to use XML strings and store those. The second most obvious choice would be JSON, which is usually shorter and probably also faster to serialize/deserialize. But is it really? My objects also wouldn't need to be strictly serializable, because JsonSerializer is usually able to serialize anything, even anonymous objects, which may as well be used here. What would be the optimal solution to this problem?
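
    For a quick feel for the size difference the question leans on, here is a small check, shown in Python for brevity even though the question itself concerns C# (the record shape is made up; exact byte counts will vary with field names and formatting):

        import json
        from xml.etree.ElementTree import Element, SubElement, tostring

        record = {"id": 42, "created": "2010-06-18", "note": "hello world"}

        # JSON: each field name appears once.
        as_json = json.dumps(record)

        # XML: each field name appears twice (open and close tag).
        root = Element("record")
        for key, value in record.items():
            SubElement(root, key).text = str(value)
        as_xml = tostring(root)

        print(len(as_json), len(as_xml))  # JSON is typically the shorter of the two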

    Read the article

  • Invert bitmap colors

    - by Alex Orlov
    I have the following problem. I have a charting program, and its design is black, but the charts (which I get from the server as images) are light (they actually use only 5 colors: red, green, white, black and gray). Inversion does a good job of fitting them to the design; the only problem is that red and green get inverted too (green to pink, red to green). Is there a way to invert everything except those 2 colors, or a way to repaint those colors after inversion? And how costly are those operations (since I get chart updates pretty often)? Thanks in advance :) UPDATE: I tried replacing colors with the setPixel method in a loop:

        for (int x = 0; x < chart.getWidth(); x++) {
            for (int y = 0; y < chart.getHeight(); y++) {
                final int replacement = getColorReplacement(chart.getPixel(x, y));
                if (replacement != 0) {
                    chart.setPixel(x, y, replacement);
                }
            }
        }

    Unfortunately, the method takes too long (~650ms). Is there a faster way to do it, and will the setPixels() method work faster?

    Read the article

  • Enforcing a query in MySql to use a specific index

    - by Hossein
    Hi, I have a large table consisting of only 3 columns (id (INT), bookmarkID (INT), tagID (INT)). I have two BTREE indexes, one each for the bookmarkID and tagID columns. This table has about 21 million records. I am trying to run this query:

        SELECT bookmarkID, COUNT(bookmarkID) AS count
        FROM bookmark_tag_map
        GROUP BY tagID, bookmarkID
        HAVING tagID IN (-----"tagIDList"-----) AND count >= N

    which takes ages to return the results. I read somewhere that if I made an index covering tagID and bookmarkID together, I would get a much faster result. I created that index after some time and tried the query again, but it seems that the query is not using the new index; I ran EXPLAIN and saw that this is actually true. My question now is: how can I force a query to use a specific index? Comments on other ways to make the query faster are also welcome. Thanks

    Read the article

  • serving js libraries: better performance from google code or using asset packager?

    - by brahn
    I am working on a Rails application that uses big JavaScript libraries (e.g. jQuery UI), and I also have a handful of my own JavaScript files. I'm using asset packager to package up my own JavaScript. I'm considering two ways of serving these files:

    - Link to the jQuery libraries from Google Code as described at http://code.google.com/apis/ajaxlibs/documentation/#jquery, and separately package up and serve my own JavaScript files using asset packager.
    - Host the jQuery libraries myself, and package them together with my own JavaScript as one big merged JavaScript file.

    My hosting solution is of course not going to beat Google's content delivery network, so at first I assumed that end users would experience faster page loads via option #1. However, it also occurred to me that if I serve the files myself, users would only need to issue one request to get all the JavaScript (as opposed to one for my merged JavaScript and another for the libraries served by Google). Which approach will provide the best end-user experience (presumably in the form of faster load times)?

    Read the article

  • C# Threading vs single thread

    - by user177883
    Is it always guaranteed that a multi-threaded application will run faster than a single-threaded one? I have two threads that populate data from a data source, but for different entities (e.g. a database, from two different tables), yet the single-threaded version of the application seems to run faster than the version with two threads. What could the reason be? When I look at the performance monitor, both CPUs are very spiky; is this due to context switching? What are the best practices for saturating the CPU and fully utilizing it? I hope this is not ambiguous.

    Read the article

  • Estimate serialization size of objects?

    - by Stefan K.
    In my thesis, I would like to enhance messaging in a cluster. It's important to log runtime information about how big a message is (should I prefer processing it locally or remotely?). I could only find frameworks for estimating an object's memory size based on Java instrumentation. I've tested classmexer, which didn't come close to the serialization size, and SourceForge's SizeOf. In a small test case, SizeOf was around 10% off and 10x faster than serialization. (Still, transient fields break the estimation completely, and since e.g. ArrayList's backing array is transient while its contents are serialized as an array, it's not easy to patch SizeOf. But I could live with that.) On the other hand, 10x faster with 10% error doesn't seem very good. Any ideas how I could do better?

    Read the article

  • If I prefer semantic naming, should I avoid CSS frameworks and the grid approach?

    - by metal-gear-solid
    If I prefer semantic naming, should I avoid CSS frameworks and the grid approach altogether? Which approach is better, grid or freehand? Can any CSS framework really save time and still produce semantic code, even for an experienced CSS developer? Many CSS frameworks are popular for simple PSD-to-XHTML/CSS conversion and in WordPress/Drupal/Joomla theme development. Can we make XHTML/CSS development as fast as with CSS frameworks, but without using CSS grid frameworks? What makes development with CSS frameworks faster that we can't do without them?

    Read the article

  • How do I stall until a SharePoint List Item is Deleted with SPLongOperation?

    - by ccomet
    I have a workflow which creates a task and deletes it after the task is edited and its useful information acquired. I created a custom edit form for the task, so I have an SPLongOperation that I can use to stall the page. This is necessary because if I don't stall the page in some fashion, the person will see the task in the task list for the brief moment before the workflow deletes it, and that is bad. So some code to stall the page until the task is fully deleted is necessary. I have currently implemented a solution, but I am unsatisfied with the approach: it basically amounts to a while loop that calls SPList.GetItemById until it throws an exception. Deliberately trying to cause an error doesn't sit well with me, but I cannot think of a faster method for checking this. I'm looking for alternatives that would preferably work faster, if not as fast, and preferably without relying on catching exceptions. Thank you in advance!

    Read the article

  • Best way to encrypt certain fields in SQL Server 2008?

    - by Josh
    I'm writing a .NET web app that will read and write information to a SQL Server 2008 backend database. Some of this information will be highly confidential in nature, so I want to encrypt certain data elements. I don't want to use TDE or any full-database encryption, for performance reasons. My main concern is protecting this sensitive data as a last resort against SQL injection or even a database server compromise. My question is: what is the best way to do this while preserving performance? Is it faster to use the SQL Server 2008 encryption functions such as EncryptByKey, or would it be faster to encrypt and decrypt the data in the .NET web app itself, using a symmetric key stored in the secured web.config, and store the encrypted values in the DB?

    Read the article

  • Is it worth caching a Dictionary for foreign key values in ASP.net?

    - by user169867
    I have a Dictionary<int, string> cached (for 20 minutes) that holds ~120 ID/name pairs for a reference table. I iterate over this collection when populating dropdown lists, and I'm pretty sure this is faster than querying the DB for the full list each time. My question is more about whether it makes sense to use this cached dictionary when displaying records that have a foreign key into this reference table. Say the cached reference table is an EmployeeType table. If I were to query and display a list of employee names and types, should I query for EmployeeName and EmployeeTypeID and use my cached dictionary to look up each EmployeeTypeID's name as the record is displayed, or is it faster to just have the DB fetch the EmployeeName and JOIN to get the EmployeeType string, bypassing the cached dictionary altogether? I know both will work, but I'm interested in which will perform the fastest. Thanks for any help.

    Read the article

  • What's wrong with my logic here?

    - by stu
    In Java they say don't concatenate Strings; instead you should create a StringBuffer, keep appending to that, and when you're all done, use toString() to get a String object out of it. Here's what I don't get. They say to do this for performance reasons, because concatenating strings creates lots of temporary objects. But if the goal were performance, you'd use a language like C/C++ or assembly. The argument for using Java is that it is a lot cheaper to buy a faster processor than to pay a senior programmer to write fast, efficient code. So on the one hand, you're supposed to let the hardware take care of the inefficiencies, but on the other hand, you're supposed to use StringBuffers to make Java more efficient. While I see that you can do both, use Java and StringBuffers, my question is: where is the flaw in the logic that says you either use a faster chip or you spend extra time writing more efficient software?

    Read the article

  • Correct Way to Get Date Between Dates In SQL Server

    - by Chuck Haines
    I have a table in SQL Server which has a DATETIME field called Date_Printed. I am trying to get all records in the table which lie within a specified date range. Currently I am using the following SQL:

        DECLARE @StartDate DATETIME
        DECLARE @EndDate DATETIME
        SET @StartDate = '2010-01-01'
        SET @EndDate = '2010-06-18 12:59:59 PM'

        SELECT * FROM table WHERE Date_Printed BETWEEN @StartDate AND @EndDate

    I have an index on the Date_Printed column. I was wondering if this is the best way to get the rows in the table which lie between those dates, or if there is a faster way. The table has about 750,000 records in it right now and it will continue to grow. The query is pretty fast, but I'd like to make it faster if possible.

    Read the article

  • Visual Studio Optimizations

    - by lomaxx
    Visual Studio is a pretty awesome IDE, but sometimes you just wish it would go faster. I was wondering if people have any tips or tricks to help speed up Visual Studio in day-to-day use. The things I'm particularly interested in are speeding up build times, and switching .aspx files from source to design view, which seems to bring it to a grinding halt. Having said that, I'd be keen to hear anything that anyone uses to make VS run that little bit faster. Edit: Merged answers from a related question, covering VS2008 SP1. Please include any optimisations specific to the latest IDE.

    Read the article

  • Counting down to zero in contrast to counting up to length - 1

    - by Helper Method
    Is it recommended to count in small loops (where possible) down from length - 1 to zero, instead of counting up to length - 1?

    1.) Counting down:

        for (int i = a.length - 1; i >= 0; i--) {
            if (a[i] == key) return i;
        }

    2.) Counting up:

        for (int i = 0; i < a.length; i++) {
            if (a[i] == key) return i;
        }

    The first one is slightly faster than the second one (because comparing against zero is faster), but it is a little more error-prone in my opinion. Besides, the first one might not be optimizable by future improvements of the JVM. Any ideas on that?

    Read the article

  • How to migrate existing udp application to raw sockets

    - by osgx
    Hello. Is there a tutorial for migrating from plain UDP sockets (Linux, C99/C++, using the recv syscall) to raw sockets? According to http://aschauf.landshut.org/fh/linux/udp_vs_raw/ch03s04.html, a raw socket is much faster than UDP. The application is client-server. The client is proprietary and must use exactly the same protocol as it did with UDP, but the server can be a bit faster with raw sockets. What parts of UDP must I implement in the server? Is there a "quick migration" library?
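
    To make "what parts of UDP must I implement" concrete, here is a minimal Python 3 sketch of the parsing a raw-socket server takes over from the kernel (Linux semantics assumed; needs root, and checksum verification, port filtering, and demultiplexing would also become the application's job):

        import socket
        import struct

        # With SOCK_RAW + IPPROTO_UDP the kernel delivers whole IP datagrams,
        # so the application must skip the IP header and parse UDP itself.
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP)
        data, addr = s.recvfrom(65535)

        ihl = (data[0] & 0x0F) * 4  # IP header length in bytes
        src_port, dst_port, udp_len, checksum = struct.unpack("!HHHH", data[ihl:ihl + 8])
        payload = data[ihl + 8:ihl + udp_len]  # UDP length covers header + payload
        print(src_port, dst_port, len(payload))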

    Read the article

  • sampling integers uniformly efficiently in python using numpy/scipy

    - by user248237
    I have a problem where, depending on the result of a random coin flip, I have to sample a random starting position from a string. If the sampling of this random position is uniform over the string, I thought of two approaches to do it: one using multinomial from numpy.random, the other using the simple randint function of the Python standard lib. I tested this as follows:

        from numpy import *
        from numpy.random import multinomial
        from random import randint
        import time

        def use_multinomial(length, num_points):
            probs = ones(length) / float(length)
            for n in range(num_points):
                result = multinomial(1, probs)

        def use_rand(length, num_points):
            for n in range(num_points):
                randint(1, length)

        def main():
            length = 1700
            num_points = 50000

            t1 = time.time()
            use_multinomial(length, num_points)
            t2 = time.time()
            print "Multinomial took: %s seconds" % (t2 - t1)

            t1 = time.time()
            use_rand(length, num_points)
            t2 = time.time()
            print "Rand took: %s seconds" % (t2 - t1)

        if __name__ == '__main__':
            main()

    The output is:

        Multinomial took: 6.58072400093 seconds
        Rand took: 2.35189199448 seconds

    It seems like randint is faster, but it still seems very slow to me. Is there a vectorized way to get this to be much faster, using numpy or scipy? Thanks.
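
    Since the positions are uniform, the whole batch can be drawn in one vectorized call, sidestepping both Python-level loops above (a sketch; numpy.random.randint samples from the half-open interval [low, high)):

        import numpy as np

        length = 1700
        num_points = 50000

        # One call instead of 50,000: an array of uniform starting positions.
        positions = np.random.randint(0, length, size=num_points)
        print(len(positions))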

    Read the article
