Search Results

Search found 4580 results on 184 pages for 'faster'.


  • How much faster is C++ than C#?

    - by Trap
    Or is it now the other way around? From what I've heard, there are some areas in which C# proves to be faster than C++, but I've never had the guts to test it myself. Can any of you explain these differences in detail, or point me to the right place for information on this?


  • Postgres: Find table foreign keys (Faster alternative)

    - by Najera
    Is there a faster alternative to this query? It takes almost a minute on our server:

      SELECT
          tc.constraint_name, tc.table_name, kcu.column_name,
          ccu.table_name  AS foreign_table_name,
          ccu.column_name AS foreign_column_name
      FROM information_schema.table_constraints AS tc
      JOIN information_schema.key_column_usage AS kcu
        ON tc.constraint_name = kcu.constraint_name
      JOIN information_schema.constraint_column_usage AS ccu
        ON ccu.constraint_name = tc.constraint_name
      WHERE constraint_type = 'FOREIGN KEY'
        AND tc.table_name = 'mytable';

    Maybe by using pg_class metadata? Thanks.
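
    One approach people often suggest (not from the original post) is to read the pg_catalog tables directly instead of the information_schema views, which are implemented as fairly heavy views. A minimal sketch in Python with psycopg2, assuming a connection string in dsn; it lists each foreign-key constraint and the referenced table, though not the individual columns:

      import psycopg2

      # pg_constraint has one row per constraint; contype = 'f' marks foreign keys.
      sql = """
          SELECT conname,
                 conrelid::regclass  AS table_name,
                 confrelid::regclass AS foreign_table_name
          FROM pg_constraint
          WHERE contype = 'f'
            AND conrelid = %s::regclass
      """
      with psycopg2.connect(dsn) as conn:
          with conn.cursor() as cur:
              cur.execute(sql, ('mytable',))
              for name, table, foreign_table in cur.fetchall():
                  print(name, table, foreign_table)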


  • Why is an inverse loop faster than a normal loop (test included)

    - by Saif Bechan
    I have been running some small tests on loops in PHP. I don't know whether my method is sound. I have found that an inverse (counting down) loop is faster than a normal loop, and that a while-loop is faster than a for-loop.

    Setup

      <?php
      $counter = 10000000;
      $w=0; $x=0; $y=0; $z=0;
      $wstart=0; $xstart=0; $ystart=0; $zstart=0;
      $wend=0; $xend=0; $yend=0; $zend=0;

      $wstart = microtime(true);
      for ($w = 0; $w < $counter; $w++) { echo ''; }
      $wend = microtime(true);
      echo "normal for: " . ($wend - $wstart) . "<br />";

      $xstart = microtime(true);
      for ($x = $counter; $x > 0; $x--) { echo ''; }
      $xend = microtime(true);
      echo "inverse for: " . ($xend - $xstart) . "<br />";
      echo "<hr> normal - inverse: " . (($wend - $wstart) - ($xend - $xstart)) . "<hr>";

      $ystart = microtime(true);
      $y = 0;
      while ($y < $counter) { echo ''; $y++; }
      $yend = microtime(true);
      echo "normal while: " . ($yend - $ystart) . "<br />";

      $zstart = microtime(true);
      $z = $counter;
      while ($z > 0) { echo ''; $z--; }
      $zend = microtime(true);
      echo "inverse while: " . ($zend - $zstart) . "<br />";
      echo "<hr> normal - inverse: " . (($yend - $ystart) - ($zend - $zstart)) . "<hr>";
      echo "<hr> inverse for - inverse while: " . (($xend - $xstart) - ($zend - $zstart)) . "<hr>";
      ?>

    Average results

      The difference in the for-loop:
        normal for:       1.0908501148224
        inverse for:      1.0212800502777
        normal - inverse: 0.069570064544678

      The difference in the while-loop:
        normal while:     1.0395669937134
        inverse while:    0.99321985244751
        normal - inverse: 0.046347141265869

      The difference between inverse for-loop and inverse while-loop:
        inverse for - inverse while: 0.0280601978302

    Questions: can someone explain these differences in results? And is my method of benchmarking correct?


  • Make C# matrix code faster

    - by Wam
    Hi all, I'm working on some matrix code and I'm concerned about performance. Here's how it works: I have an IMatrix abstract class (with all the matrix operations etc.), implemented by a ColumnMatrix class.

      abstract class IMatrix
      {
          public int Rows { get; set; }
          public int Columns { get; set; }
          public abstract float At(int row, int column);
      }

      class ColumnMatrix : IMatrix
      {
          private float[] data;

          public override float At(int row, int column)
          {
              return data[row + column * this.Rows];
          }
      }

    This class is used a lot across my application, but I'm concerned about performance. Testing reads only, for a 2000000x15 matrix against a jagged array of the same size, I get 1359 ms for array access against 9234 ms for matrix access:

      public void TestAccess()
      {
          int iterations = 10;
          int rows = 2000000;
          int columns = 15;

          ColumnMatrix matrix = new ColumnMatrix(rows, columns);
          for (int i = 0; i < rows; i++)
              for (int j = 0; j < columns; j++)
                  matrix[i, j] = i + j;
          float[][] equivalentArray = matrix.ToRowsArray();

          TimeSpan totalMatrix = new TimeSpan(0);
          TimeSpan totalArray = new TimeSpan(0);
          float total = 0f;

          for (int iteration = 0; iteration < iterations; iteration++)
          {
              total = 0f;
              DateTime start = DateTime.Now;
              for (int i = 0; i < rows; i++)
                  for (int j = 0; j < columns; j++)
                      total = matrix.At(i, j);
              totalMatrix += (DateTime.Now - start);

              total += 1f; // Ensure total is read at least once.
              total = total > 0 ? 0f : 0f;

              start = DateTime.Now;
              for (int i = 0; i < rows; i++)
                  for (int j = 0; j < columns; j++)
                      total = equivalentArray[i][j];
              totalArray += (DateTime.Now - start);
          }

          if (total < 0f)
              logger.Info("Nothing here, just make sure we read total at least once.");

          logger.InfoFormat("Average time for a {0}x{1} access, matrix : {2}ms", rows, columns, totalMatrix.TotalMilliseconds);
          logger.InfoFormat("Average time for a {0}x{1} access, array : {2}ms", rows, columns, totalArray.TotalMilliseconds);
          Assert.IsTrue(true);
      }

    So my question: how can I make this faster? Is there any way I can make ColumnMatrix.At faster? Cheers!


  • How to make a ball fall faster on a ramp? Unity3D/C#

    - by Timothy Williams
    So, I'm making a ball game where you pick up the ball, drop it on a ramp, and it flies off into blocks. The only problem right now is that it falls at a normal speed and then rolls off gently, not nearly fast enough to get over the wall and hit the blocks. Is there any way to make the ball go faster down the ramp? Maybe even make it go faster depending on the height you dropped it from (e.g. if you hold it way above the ramp and drop it, it will come down faster than if you dropped it right above the ramp)? Thanks.


  • Recompile a x86 code with LLVM to some faster one x86

    - by osgx
    Hello. Is it possible to run the LLVM compiler with 32-bit x86 machine code as input? There is a huge algorithm for which I have no source code, and I want to make it run faster on the same hardware. Can I translate it from x86 back to x86 with optimizations? This code runs for a long time, so I want to do a static recompilation of it. Also, I can collect a runtime profile of it and give LLVM hints about which branches are more probable. The original code is written for x86 and uses no SSE/MMX/SSE2. After recompilation it has a chance to use x86_64 and/or SSE3, and the code would be regenerated in a way more optimal for the hardware decoder. Thanks.


  • Select query 2-3 times faster than view

    - by Richard Knop
    This query, run on its own, takes about 2.5 seconds to finish:

      SELECT
          -- lots of columns
      FROM (((((((((((`table1` `t1`
          LEFT JOIN `table2` `t2` ON(( `t2`.`userid` = `t1`.`userid` )))
          LEFT JOIN `table3` `t3` ON(( `t1`.`orderid` = `t3`.`orderid` )))
          LEFT JOIN `table4` `t4` ON(( `t4`.`orderitemlicenseid` = `t3`.`orderitemlicenseid` )))
          LEFT JOIN `table5` `t5` ON(( `t1`.`orderid` = `t5`.`orderid` )))
          LEFT JOIN `table6` `t6` ON(( `t5`.`transactionid` = `t6`.`transactionid` )))
          LEFT JOIN `table7` `t7` ON(( `t7`.`transactionid` = `t5`.`transactionid` )))
          LEFT JOIN `table8` `t8` ON(( `t8`.`voucherid` = `t7`.`voucherid` )))
          LEFT JOIN `table9` `t9` ON(( `t8`.`voucherid` = `t9`.`voucherid` )))
          LEFT JOIN `table10` `t10` ON(( ( `t10`.`vouchergroupid` = `t9`.`vouchergroupid` )
                                     AND ( `t2`.`territoryid` = `t10`.`territoryid` ) )))
          LEFT JOIN `table11` `t11` ON(( `t11`.`voucherid` = `t8`.`voucherid` )))
          LEFT JOIN `table12` `t12` ON(( `t12`.`orderid` = `t1`.`orderid` )))
      GROUP BY `t5`.`transactionid`

    When I save it as a view and run it as:

      SELECT * FROM viewName;

    it takes 7 seconds to finish. What is the reason, and how can I make the view faster?


  • Faster bulk inserts in sqlite3?

    - by scubabbl
    I have a file of about 30000 lines of data that I want to load into a sqlite3 database. Is there a faster way than generating an insert statement for each line of data? The data is space-delimited and maps directly onto the sqlite3 table. Is there any sort of bulk insert method for adding volume data to a database? Has anyone devised some deviously wonderful way of doing this if it's not built in? I should preface this by asking: is there a C++ way to do it from the API? Thanks.
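
    Not from the original question, but the usual advice is to wrap all the inserts in a single transaction and reuse one prepared statement; the same idea applies through the C++ API (sqlite3_prepare_v2 plus an explicit BEGIN/COMMIT). A minimal sketch in Python, where the file name data.txt and the table t(a, b, c) are assumptions:

      import sqlite3

      conn = sqlite3.connect("data.db")
      with open("data.txt") as f:
          rows = (line.split() for line in f)   # space-delimited line -> one tuple of column values
          with conn:                            # one transaction around all 30000 inserts
              conn.executemany("INSERT INTO t (a, b, c) VALUES (?, ?, ?)", rows)
      conn.close()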


  • When is assembler faster than C?

    - by Adam Bellaire
    One of the stated reasons for knowing assembler is that, on occasion, it can be employed to write code that will be more performant than writing that code in a higher-level language, C in particular. However, I've also heard it stated many times that although that's not entirely false, the cases where assembler can actually be used to generate more performant code are both extremely rare and require expert knowledge of and experience with assembler. This question doesn't even get into the fact that assembler instructions will be machine-specific and non-portable, or any of the other aspects of assembler. There are plenty of good reasons for knowing assembler besides this one, of course, but this is meant to be a specific question soliciting examples and data, not an extended discourse on assembler versus higher-level languages. Can anyone provide some specific examples of cases where assembler will be faster than well-written C code using a modern compiler, and can you support that claim with profiling evidence? I am pretty confident these cases exist, but I really want to know exactly how esoteric these cases are, since it seems to be a point of some contention.


  • Symlinking (ln) faster than moving (mv)?

    - by Chad Johnson
    When we build web software releases, we prepare the release in a temporary directory and then replace the release directory with the one just prepared:

      # Move aside and replace the existing release directory.
      mv /path/to/httpdocs /path/to/httpdocs.before
      mv /path/to/$newReleaseName /path/to/httpdocs

    Under this scheme, in about 1 in every 15 releases a user happens to be using a file in the original release directory exactly when the commands above run, and that user gets a fatal error. I am wondering whether using a symlink as follows would be significantly faster in terms of processing time, and so lessen the likelihood of this problem:

      # Remove and replace the existing release symlink.
      ln -sf /path/to/$newReleaseName /path/to/httpdocs
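
    For what it's worth (not part of the original question), the usual way to make the cutover close to atomic is to keep httpdocs permanently as a symlink and swap it with a rename: build the new symlink under a temporary name, then rename it over the live one. A hedged sketch of that idea in Python, assuming /path/to/httpdocs is already a symlink and new_release points at the freshly built directory:

      import os

      new_release = "/path/to/release-new"   # hypothetical path to the new release directory
      tmp_link = "/path/to/httpdocs.tmp"
      live_link = "/path/to/httpdocs"

      # rename(2) is atomic, so readers see either the old target or the new one,
      # never a missing path.
      if os.path.lexists(tmp_link):
          os.remove(tmp_link)
      os.symlink(new_release, tmp_link)
      os.replace(tmp_link, live_link)        # fails if live_link is a real directory rather than a symlink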


  • Is Matlab faster than Python?

    - by kame
    I want to compute the magnetic fields of some conductors using the Biot-Savart law, and I want to use a 1000x1000x1000 matrix. Before, I used Matlab, but now I want to use Python. Is Python slower than Matlab? How can I make Python faster? EDIT: Maybe the best way is to compute the big array with C/C++ and then transfer it to Python. I want to visualise it with VPython. EDIT2: Could somebody give advice on which is better in my case, C or C++?
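
    Not part of the original question, but the usual answer is that plain Python loops are slow while NumPy's vectorised array operations run in compiled code, much like Matlab's. A hedged sketch of the idea on a smaller grid (a 1000x1000x1000 array of doubles is roughly 8 GB per array, so it would probably have to be processed in slabs); the formula here is just the field magnitude of an infinite straight wire along z, to illustrate the vectorisation:

      import numpy as np

      n = 200                                   # hypothetical grid size; n = 1000 needs ~8 GB per array
      axis = np.linspace(-1.0, 1.0, n)
      x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

      mu0, current = 4e-7 * np.pi, 1.0
      rho = np.sqrt(x**2 + y**2)                # distance from a wire running along the z axis
      rho = np.clip(rho, 1e-9, None)            # avoid dividing by zero on the axis itself
      B = mu0 * current / (2 * np.pi * rho)     # one vectorised expression, no Python loops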


  • Make jQuery animation faster

    - by darkandcold
    Hello, while loading a page I am using an animation:

      wid = jQuery(window).width() + 400;
      jQuery('#div').animate({'marginLeft': '+=' + wid + 'px'}, {queue: false, duration: 20000});

    The div is being moved to the left over 20 seconds; I use this animation while the page is loading. When the page is loaded completely, <body onload=myfunction()> calls myfunction(). At that point I want my animation to go faster. How can I change an animation's duration while it is animating?


  • Faster way to clone.

    - by AngryHacker
    I am trying to optimize a piece of code that clones an object:

      #region ICloneable
      public object Clone()
      {
          MemoryStream buffer = new MemoryStream();
          BinaryFormatter formatter = new BinaryFormatter();
          formatter.Serialize(buffer, this);     // takes 3.2 seconds
          buffer.Position = 0;
          return formatter.Deserialize(buffer);  // takes 2.1 seconds
      }
      #endregion

    Pretty standard stuff. The problem is that the object is pretty beefy, and cloning it takes 5.4 seconds (according to ANTS Profiler; I am sure there is profiler overhead, but still). Is there a better and faster way to clone?


  • When is Java faster than C++ (or when is JIT faster than precompiled)?

    - by kostja
    I have heard that under certain circumstances, Java programs, or rather parts of Java programs, can execute faster than the "same" code in C++ (or other precompiled code) due to JIT optimizations: the compiler is able to determine the scope of some variables, avoid some conditionals, and pull similar tricks at runtime. Could you give an example (or better, several) where this applies? And maybe outline the exact conditions under which the compiler is able to optimize the bytecode beyond what is possible with precompiled code? NOTE: This question is not about comparing Java to C++. It's about the possibilities of JIT compiling. Please no flaming. I am also not aware of any duplicates; please point them out if you are.


  • Is there a faster TList implementation ?

    - by dmauric.mp
    My application makes heavy use of TList, so I was wondering whether there are any alternative implementations that are faster or optimized for particular use cases. I know of RtlVCLOptimize.pas 2.77, which has optimized implementations of several TList methods, but I'd like to know if there is anything else out there. I also don't require it to be a TList descendant; I just need the TList functionality, regardless of how it's implemented. It's entirely possible, given the rather basic functionality TList provides, that there is not much room for improvement, but I would still like to verify that, hence this question.


  • Database vs flat file: which is a faster structure for regex matching with many simultaneous requests?

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100) are querying the file/db simultaneously. Searches involve pattern matching with a regex against a static file/db. The file has 50,000 unique lines (of the same data type), and there could be many matches. There is no writing to the file/db, just reads. Is it possible to duplicate the file/db and write a logic switch to use the backup if the main one is in use? Which language is best for this type of structure: Perl for the flat file and PHP for the db? TIA
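
    As a point of reference (not from the original question), 50,000 short lines fit comfortably in memory, so one common approach is for each worker process to load the file once at startup and scan it with a precompiled regex. A hedged sketch in Python; the file name and pattern are assumptions:

      import re

      # Load once at startup; 50,000 lines is small enough to keep in memory.
      with open("data.txt") as f:
          lines = f.read().splitlines()

      pattern = re.compile(r"foo\d+")           # hypothetical pattern

      def search(pattern, lines):
          """Return every line that the precompiled pattern matches."""
          return [line for line in lines if pattern.search(line)]

      matches = search(pattern, lines)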


  • Faster way to update 250k rows with SQL

    - by pablo
    I need to update about 250k rows in a table, and each field to update will get a different value depending on the row itself (not calculated from the row id or the key, but supplied externally). I tried a parametrized query, but it turns out to be slow. (I could still try a table-valued parameter, SqlDbType.Structured, in SQL Server 2008, but I'd like to have a general way to do it on several databases, including MySQL, Oracle and Firebird.) Making a huge concatenation of individual updates is also slow. What about creating a temp table and running an update joining my table and the temporary one? Will that work faster?
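
    For illustration only, here is the temp-table idea from the question spelled out (not a benchmark): stage the new values with one batched insert, then apply a single set-based UPDATE with a join. A hedged sketch using the Python DB-API with MySQL-flavoured SQL; the table and column names are made up, and the UPDATE syntax differs per engine (PostgreSQL, for instance, uses UPDATE ... FROM):

      # rows: a list of (row_id, new_value) pairs computed outside the database.
      def bulk_update(conn, rows):
          cur = conn.cursor()
          cur.execute("CREATE TEMPORARY TABLE tmp_vals (id INT PRIMARY KEY, new_value VARCHAR(255))")
          # One batched round trip instead of 250k individual UPDATE statements.
          cur.executemany("INSERT INTO tmp_vals (id, new_value) VALUES (%s, %s)", rows)
          # A single set-based update joining the target table to the staged values.
          cur.execute("""
              UPDATE mytable m
              JOIN tmp_vals t ON t.id = m.id
              SET m.value = t.new_value
          """)
          conn.commit()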


  • Simple Serialization Faster Than JSON? (in Ruby)

    - by Sinan Taifour
    I have an application written in Ruby (that runs on the JRuby VM). When profiling it, I realized that it spends a lot (actually almost all) of its time converting some hashes into JSON. These hashes have symbol keys, and values that are other similar hashes, arrays, strings, and numbers. Is there a serialization method that is suitable for such input and would typically run faster than JSON? It would be preferable if it has a Java or JRuby-compatible gem, too. I am currently using the jruby-json gem, which is the fastest JSON implementation on JRuby (as I am told), so the move will most likely be to a different serialization method rather than just a different library. Any help is appreciated! Thanks.


  • Creating objects makes the VM faster?

    - by Sudhir Jonathan
    Look at this piece of code:

      MessageParser parser = new MessageParser();
      for (int i = 0; i < 10000; i++) {
          parser.parse(plainMessage, user);
      }

    For some reason, it runs SLOWER (by about 100ms) than:

      for (int i = 0; i < 10000; i++) {
          MessageParser parser = new MessageParser();
          parser.parse(plainMessage, user);
      }

    Any ideas why? The tests were repeated a lot of times, so it wasn't just random. How could creating an object 10000 times be faster than creating it once?


  • A faster alternative to Pandas `isin` function

    - by user3576212
    I have a very large data frame df that looks like:

        ID  Value1  Value2
      1345     3.2     332
      1355     2.2      32
      2346     1.0      11
      3456     8.9     322

    And I have a list ID_list that contains a subset of IDs. I need the subset of df for the IDs contained in ID_list. Currently, I am using

      df_sub = df[df.ID.isin(ID_list)]

    to do it, but it takes a lot of time. The IDs contained in ID_list don't follow any pattern, so they are not within a certain range. (And I need to apply the same operation to many similar dataframes.) I was wondering if there is any faster way to do this. Would it help a lot to make ID the index? Thanks!
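
    A sketch of the index-based variant asked about at the end (an assumption about the intended usage, not a benchmark): set ID as the index once, then select by label, which avoids re-hashing the ID column on every lookup. df and ID_list are the names from the question:

      df_indexed = df.set_index('ID')                    # pay the hashing cost once
      wanted = df_indexed.index.intersection(ID_list)    # keep only IDs that actually exist
      df_sub = df_indexed.loc[wanted].reset_index()      # same columns as the original df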


  • How to make this jpeg compression faster

    - by Richard Knop
    I am using OpenCV to compress binary images from a camera:

      vector<int> p;
      p.push_back(CV_IMWRITE_JPEG_QUALITY);
      p.push_back(75); // JPG quality
      vector<unsigned char> jpegBuf;
      cv::imencode(".jpg", fIplImageHeader, jpegBuf, p);

    The code above compresses a binary RGB image stored in fIplImageHeader to a JPEG image. For a 640x480 image it takes about 0.25 seconds to execute the five lines above. Is there any way I could make it faster? I really need to repeat the compression more than 4 times a second.


  • How to convert Bitmap to byte[,,] faster?

    - by Miko Kronn
    I wrote this function:

      public static byte[,,] Bitmap2Byte(Bitmap image)
      {
          int h = image.Height;
          int w = image.Width;
          byte[,,] result = new byte[w, h, 3];

          for (int i = 0; i < w; i++)
          {
              for (int j = 0; j < h; j++)
              {
                  Color c = image.GetPixel(i, j);
                  result[i, j, 0] = c.R;
                  result[i, j, 1] = c.G;
                  result[i, j, 2] = c.B;
              }
          }

          return result;
      }

    But it takes almost 6 seconds to convert a 1800x1800 image. Can I do this faster?


  • Failing faster when URL content is not found, howto

    - by Jam
    I have a thread pool that loops over a bunch of pages and checks whether some string is there or not. If the string is found (or not found), the response is near instant; however, if the server is offline or the application is not running, getting a rejection seems to take seconds. How can I change my code to fail faster?

      for (Thread thread : pool) {
          thread.start();
      }
      for (Thread thread : pool) {
          try {
              thread.join();
          } catch (InterruptedException e) {
              e.printStackTrace();
          }
      }

    Here is my run method:

      @Override
      public void run() {
          for (Box b : boxes) {
              try {
                  connection = new URL(b.getUrl()).openConnection();
                  scanner = new Scanner(connection.getInputStream());
                  scanner.useDelimiter("\\Z");
                  content = scanner.next();
                  if (content.equals("YES")) {
                  } else {
                      System.out.println("\tFAILED ON " + b.getName() + " BAD APPLICATION STATE");
                  }
              } catch (Exception ex) {
                  System.out.println("\tFAILED ON " + b.getName() + " BAD APPLICATION STATE");
              }
          }
      }
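
    Not from the original post, but the seconds-long hang on a dead host is typically the default connect timeout; java.net.URLConnection exposes setConnectTimeout and setReadTimeout for exactly this. For illustration, the same fail-fast idea as a hedged Python sketch (the URL is a made-up example), giving up after two seconds instead of waiting on the operating-system default:

      import urllib.request

      def page_says_yes(url, timeout_seconds=2.0):
          """Return True if the page body is exactly 'YES', giving up quickly on dead hosts."""
          try:
              with urllib.request.urlopen(url, timeout=timeout_seconds) as resp:
                  return resp.read().decode().strip() == "YES"
          except OSError:   # URLError and socket timeouts are both OSError subclasses
              return False

      print(page_says_yes("http://example.com/health"))   # hypothetical URL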


  • Faster s3 bucket duplication

    - by Sean McCleary
    I have been trying to find a better command line tool for duplicating buckets than s3cmd. s3cmd can duplicate buckets without having to download and upload each file. The command I normally run to duplicate buckets with s3cmd is:

      s3cmd cp -r --acl-public s3://bucket1 s3://bucket2

    This works, but it is very slow, as it copies each file via the API one at a time. If s3cmd could run in parallel mode, I'd be very happy. Are there other options available, as command line tools or code, that people use to duplicate buckets and that are faster than s3cmd?
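
    One way to parallelise the same server-side copies (an illustrative sketch, not a recommendation of a specific tool): issue CopyObject requests from a thread pool so each object is copied inside S3 without passing through the client. A hedged sketch using boto3; the bucket names, worker count and public-read ACL mirror the s3cmd command above:

      from concurrent.futures import ThreadPoolExecutor

      import boto3

      s3 = boto3.client("s3")
      src, dst = "bucket1", "bucket2"

      def copy_key(key):
          # Server-side copy: the object's bytes never leave S3.
          s3.copy_object(Bucket=dst, Key=key,
                         CopySource={"Bucket": src, "Key": key},
                         ACL="public-read")

      paginator = s3.get_paginator("list_objects_v2")
      keys = [obj["Key"]
              for page in paginator.paginate(Bucket=src)
              for obj in page.get("Contents", [])]

      with ThreadPoolExecutor(max_workers=20) as pool:
          list(pool.map(copy_key, keys))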

