Search Results

Search found 4580 results on 184 pages for 'faster'.


  • Why is .NET faster than C++ in this case?

    - by acidzombie24
    -edit- I LOVE SLaks' comment: "The amount of misinformation in these answers is staggering." :D Calm down, guys. Pretty much all of you were wrong. I DID make optimizations; it turns out the optimizations I made weren't good enough. I ran the code under GCC using gettimeofday (I'll paste the code below), compiled with g++ -O2 file.cpp, and got slightly faster results than C#. Maybe MS didn't implement the optimizations needed in this specific case, but after downloading and installing MinGW I tested and found the speeds to be nearly identical. Justicle seems to be right. I could have sworn I had used clock() on my PC to time it and found it slower, but problem solved. C++ isn't almost twice as slow under the MS compiler.

    When my friend informed me of this I couldn't believe it. So I took his code and put some timers on it. Instead of Boo I used C#. I consistently got faster results in C#. Why? The .NET version ran in nearly half the time no matter what number I used.

    C++ version:

        #include <iostream>
        #include <stdio.h>
        #include <intrin.h>
        #include <windows.h>
        using namespace std;

        int fib(int n)
        {
            if (n < 2) return n;
            return fib(n - 1) + fib(n - 2);
        }

        int main()
        {
            __int64 time = 0xFFFFFFFF;
            while (1)
            {
                int n;
                //cin >> n;
                n = 41;
                if (n < 0) break;

                __int64 start = __rdtsc();
                int res = fib(n);
                __int64 end = __rdtsc();

                cout << res << endl;
                cout << (float)(end - start) / 1000000 << endl;
                break;
            }
            return 0;
        }

    C# version:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.InteropServices;
        using System.ComponentModel;
        using System.Threading;
        using System.IO;
        using System.Diagnostics;

        namespace fibCSTest
        {
            class Program
            {
                static int fib(int n)
                {
                    if (n < 2) return n;
                    return fib(n - 1) + fib(n - 2);
                }

                static void Main(string[] args)
                {
                    //var sw = new Stopwatch();
                    //var timer = new PAB.HiPerfTimer();
                    var timer = new Stopwatch();
                    while (true)
                    {
                        int n;
                        //cin >> n;
                        n = 41;
                        if (n < 0) break;

                        timer.Start();
                        int res = fib(n);
                        timer.Stop();

                        Console.WriteLine(res);
                        Console.WriteLine(timer.ElapsedMilliseconds);
                        break;
                    }
                }
            }
        }

    GCC version:

        #include <iostream>
        #include <stdio.h>
        #include <sys/time.h>
        using namespace std;

        int fib(int n)
        {
            if (n < 2) return n;
            return fib(n - 1) + fib(n - 2);
        }

        int main()
        {
            timeval start, end;
            while (1)
            {
                int n;
                //cin >> n;
                n = 41;
                if (n < 0) break;

                gettimeofday(&start, 0);
                int res = fib(n);
                gettimeofday(&end, 0);

                int sec = end.tv_sec - start.tv_sec;
                int usec = end.tv_usec - start.tv_usec;

                cout << res << endl;
                cout << sec << " " << usec << endl;
                break;
            }
            return 0;
        }

    Read the article

  • Building HTML tables from query data... faster?

    - by Andrew Heath
    With my limited experience/knowledge I am using the following structure to generate HTML tables on the fly from MySQL queries:

        $c = 0;
        $t = count($results);
        $table = '<table>';
        while ($c < $t) {
            $table .= "<tr><td>$results[0]</td><td>$results[1]</td> (etc etc) </tr>";
            ++$c;
        }
        $table .= '</table>';

    This works, obviously. But for tables with 300+ rows there is a noticeable delay in page load while the script builds the table. Currently the largest result list is only about 1,100 rows and the wait isn't long, but there's clearly a wait. Are there other methods for outputting an HTML table that are faster than my WHILE loop? (PHP only please...)
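
    If the time really is going into string building (worth verifying with a quick profile - in many cases the query itself or the browser rendering a large table dominates), a common pattern is to collect the rows and join them once at the end; in PHP that means pushing each row onto an array and calling implode() once. The idea, sketched in Python purely for illustration with hypothetical data:

        # "collect pieces, join once" sketch; rows is hypothetical sample data
        rows = [("Alice", "93"), ("Bob", "87")] * 500

        parts = ["<table>"]
        for name, score in rows:
            parts.append("<tr><td>%s</td><td>%s</td></tr>" % (name, score))
        parts.append("</table>")
        table = "".join(parts)   # one big concatenation instead of many small ones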

    Read the article

  • Faster way to convert from 24-bit WAV PCM format to float?

    - by LMO
    I need to read data in from a WAV file in 24-bit PCM format and convert it to float. I'm using Python 2.7.2. The wave package reads the data in as a string, so what I've tried is:

        # read in entire wav file
        wdata = f.readframes(nFrames)

        # unpack into signed integers and convert to float
        data = array.array('f')
        for i in range(0, nFrames*3, 3):
            data.append(float(struct.unpack('<i', '\x00' + wdata[i:i+3])[0]))

        # normalize sample values
        data = data / 0x800000

    This is quite a bit faster than my earlier approaches, but still quite slow. Can anyone suggest a more efficient method?
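
    A common way to speed this up is to unpack every frame at once with NumPy instead of calling struct.unpack per sample. A minimal sketch, assuming NumPy is available; the synthetic wdata below stands in for the bytes returned by f.readframes(nFrames) so the snippet runs on its own:

        import struct
        import numpy as np

        # three synthetic 24-bit little-endian samples packed by hand
        wdata = b''.join(struct.pack('<i', v)[:3] for v in (0, 8388607, -8388608))
        nFrames = 3

        samples = np.frombuffer(wdata, dtype=np.uint8).reshape(nFrames, 3)

        # widen each 3-byte sample to 4 bytes with a zero low byte, view the rows
        # as little-endian int32, then arithmetic-shift right by 8 to recover the
        # signed 24-bit value
        padded = np.zeros((nFrames, 4), dtype=np.uint8)
        padded[:, 1:] = samples
        ints = padded.view('<i4').ravel() >> 8

        # normalize to roughly [-1.0, 1.0)
        data = ints.astype(np.float32) / float(0x800000)
        print(data)

    The whole conversion is a handful of vectorized operations, so the per-sample Python loop disappears entirely.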

    Read the article

  • Would it be faster to use a CMS for building a first site in ASP.NET?

    - by rem
    I need an opinion and advice from experienced ASP.NET people on which way to go. Assume a developer with some practical background in HTML/JavaScript/PHP on one side and some .NET/C#/WPF experience on the other, but no previous hands-on experience with ASP.NET - only theory and a few books read on the topic. The task is to build an ASP.NET web site with user management functionality (user authentication, user accounts, buying history, user points and so on) and e-commerce functionality with a shopping cart, checkout and everything that goes with it. Is it worth it - that is, will the result be faster to build, more reliable and more secure - to use an ASP.NET CMS (for example Sitefinity from Telerik, which is advertised as developer friendly) for such a first site? In which case will the learning curve be steeper and take more time to reach similar results? Notes to take into consideration:
    1) The price of the CMS matters very little.
    2) The e-commerce module has to be written from scratch in any case (and integrated if a CMS is used) due to very specific requirements.

    Read the article

  • What tricks can be used to type and edit code faster?

    - by Thomas
    As Jeff Atwood noted, we are typists first, programmers second. Fast typing and editing may not be essential to be a good programmer, but it certainly helps. I noticed that I consciously and subconsciously use various tricks to get my intent across to the computer as fast as possible. What tricks can be used to type and edit code faster? I'm hoping to collect a nice list here that we can all learn from, so that we can be ever so slightly more productive. One trick per answer please! (This is not about typing speed in general. There are other questions about that. It's also not about general answers like "learn your editor's shortcut keys". Think of this topic as micro-optimizations for specific cases. See my own answers for examples of what I mean.)

    Read the article

  • Which is more efficient/faster when calling a cached image?

    - by andufo
    Hi, I made an image resizer in PHP. When an image is resized, it caches a new JPG file with the new dimensions. The next time you call the exact same img.php?file=hello.jpg&size=400, it checks whether the new JPG has already been created. If it has NOT been created yet, it creates the file and then prints the output (cool). If it ALREADY exists, no new file needs to be generated and it just serves the already cached file. My question is about the second scenario. Which of these is faster?

    Redirecting:

        header('Location: cache/hello_400.jpg');
        die();

    Grabbing the data and printing the cached file:

        $data = file_get_contents('cache/hello_400.jpg');
        header('Content-type: '.$mime);
        header('Content-Length: '.strlen($data));
        echo $data;

    Any other ways to improve this?

    Read the article

  • Which is faster when animating the UI: a Control or a Picture?

    - by Christopher Walker
    I'm working with and testing on a computer built with the following: 1 GB RAM (now 1.5 GB), a 1.7 GHz Intel Pentium processor and an ATI Mobility Radeon X600 GPU. I need to scale/transform controls and make the animation flow smoothly. Currently I'm manipulating the size and location of a control every 24-33 ms (30 fps), ±3 px per step. When I add a 'fade' effect to an image, it fades in and out smoothly, but it is only 25x25 px in size; the control is 450x75 px to 450x250 px in size. In 2D games such as Bejeweled 3, the sprites animate without any choppiness. So, as the title suggests: which is easier/faster on the processor - animating a bitmap (rendering it to the parent control during the animation) or animating the control itself?

    Read the article

  • Jet Database (MS Access) ExecuteNonQuery - Can I make it faster?

    - by bluebill
    Hi all, I have this generic routine that I wrote which takes a list of SQL strings and executes them against the database. Is there any way I can make it work faster? Typically it'll see maybe 200 inserts, deletes or updates at a time, and sometimes a mixture of all three. Would it be a good idea to separate the queries by type (i.e. group the inserts together, then the updates, then the deletes)? I am running this against an MS Access database using VB.NET 2005.

        Public Function ExecuteNonQuery(ByVal sql As List(Of String), ByVal dbConnection As String) As Integer
            If sql Is Nothing OrElse sql.Count = 0 Then Return 0

            Dim recordCount As Integer = 0
            Using connection As New OleDb.OleDbConnection(dbConnection)
                connection.Open()
                Dim transaction As OleDb.OleDbTransaction = connection.BeginTransaction()
                'Using cmd As New OleDb.OleDbCommand()
                Using cmd As OleDb.OleDbCommand = connection.CreateCommand
                    cmd.Connection = connection
                    cmd.Transaction = transaction
                    For Each s As String In sql
                        If Not String.IsNullOrEmpty(s) Then
                            cmd.CommandText = s
                            recordCount += cmd.ExecuteNonQuery()
                        End If
                    Next
                    transaction.Commit()
                End Using
            End Using
            Return recordCount
        End Function
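
    One general pattern that tends to help with large batches is a single parameterized statement reused for every row inside one transaction, rather than a freshly built SQL string per row. A minimal sketch of that prepare-once/execute-many idea, shown with Python's built-in sqlite3 purely for illustration (the question targets Jet/OleDb; the file and table names are hypothetical):

        import sqlite3

        # illustration only: the same pattern applies with a parameterized
        # OleDbCommand whose parameters are updated per row
        rows = [("widget", 3), ("gadget", 7), ("gizmo", 1)]

        conn = sqlite3.connect("example.db")
        with conn:  # one transaction, committed when the block exits
            conn.execute("CREATE TABLE IF NOT EXISTS items (name TEXT, qty INTEGER)")
            # one parameterized statement reused for every row
            conn.executemany("INSERT INTO items (name, qty) VALUES (?, ?)", rows)
        conn.close()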

    Read the article

  • Is using scanf() in C++ programs faster than using cin?

    - by zeroDivisible
    Hello, I don't know if this is true, but while reading the FAQ on one of the sites that provide programming problems I found something that caught my attention:

        Check your input/output methods. In C++, using cin and cout is too slow. Use these, and you will guarantee not being able to solve any problem with a decent amount of input or output. Use printf and scanf instead.

    Can someone please clarify this? Is using scanf() in C++ programs really faster than using cin? If so, is it good practice to use it in C++ programs? I thought it was C-specific, though I am just learning C++...

    Read the article

  • Using SmtpClient.Send to send ~500-2500 emails, which way would be faster?

    - by jamone
    I need to send around 500-2500 emails at a time to internal email accounts. I'm wondering which way would be faster, both for the mail server and for my client app: should I send multiple emails, each with a different TO address, or just one email with multiple BCC addresses? I tried testing this by sending a bunch to my own email address. The multiple-emails method works, but with a single message and BCC I only get one copy of that message in my inbox. Shouldn't I be getting as many copies as the number of times I put my address on the BCC line?
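
    With a single message and multiple BCC addresses, delivery happens per envelope recipient, and most mail servers deliver only one copy per unique mailbox - which is likely why repeating your own address on the BCC line still produced a single message. A minimal sketch of the one-message/many-envelope-recipients approach, in Python's smtplib purely for illustration (the question uses .NET's SmtpClient; the host and addresses below are hypothetical):

        import smtplib
        from email.message import EmailMessage

        # hypothetical internal addresses and mail host
        recipients = ["user{}@example.internal".format(i) for i in range(500)]

        msg = EmailMessage()
        msg["From"] = "noreply@example.internal"
        msg["To"] = "undisclosed-recipients:;"
        msg["Subject"] = "Announcement"
        msg.set_content("Hello everyone")

        with smtplib.SMTP("mail.example.internal") as server:
            # one SMTP transaction; each envelope recipient gets one delivery,
            # and duplicate addresses typically collapse to a single copy
            server.send_message(msg, from_addr="noreply@example.internal",
                                to_addrs=recipients)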

    Read the article

  • Will an algorithm written in OCaml and compiled to C be faster than the same algorithm written in pure C?

    - by Ole Jak
    So I have a cool image-processing algorithm. I have written it in OCaml and it performs well. I know I can compile it as C code with a command such as:

        ocamlc -output-obj -o foo.c foo.ml

    (I am in a situation where I am not allowed to use the OCaml compiler to build my program for my architecture; I can only use a specially modified gcc. So I will compile that program with something like:

        gcc -L/usr/lib/ocaml foo.c -lcamlrun -lm -lncurses

    and it'll run on my architecture.) What I want to know is: in the general case, will my OCaml code compiled into C run faster than the same algorithm implemented in pure C?

    Read the article

  • Indexed key vs indexed separate columns, which one is faster?

    - by Jerry
    In MySQL, from a pure performance perspective, suppose I have a table with a large amount of data and a 10:1 read/write ratio. Is it faster, for read/write performance, to have the 4 search criteria in separate columns that are all indexed, or to have them combined into one single string acting as a key and stored in one indexed column? For example, take a table with 5 columns - first name, last name, sex, country and file - where the first four columns will ALWAYS be given as search parameters; versus a table with two columns, key and file, where a value of key might be john-smith-male-australia. I don't quite get the pros and cons. The point I want to stress is that all four parameters will always be given in a search.

    Read the article

  • Is there a faster way to download a page from the net to a string?

    - by cphil5
    I have tried other methods to download info from a URL, but needed a faster one. I need to download and parse about 250 separate pages, and would like the app not to appear ridiculously slow. This is the code I am currently using to retrieve a single page; any insight would be great.

        try {
            URL myURL = new URL("http://www.google.com");
            URLConnection ucon = myURL.openConnection();
            InputStream inputStream = ucon.getInputStream();
            BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);
            ByteArrayBuffer byteArrayBuffer = new ByteArrayBuffer(50);
            int current = 0;
            while ((current = bufferedInputStream.read()) != -1) {
                byteArrayBuffer.append((byte) current);
            }
            tempString = new String(byteArrayBuffer.toByteArray());
        } catch (Exception e) {
            Log.i("Error", e.toString());
        }
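
    The read() call above pulls one byte at a time; reading in large chunks, plus fetching a handful of pages in parallel, usually helps far more than switching HTTP libraries. A minimal sketch of that idea, in Python for illustration (the question is Android/Java; the URL list below is hypothetical):

        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        urls = ["http://www.example.com/page{}".format(i) for i in range(250)]

        def fetch(url):
            with urllib.request.urlopen(url, timeout=10) as resp:
                chunks = []
                while True:
                    chunk = resp.read(8192)   # bulk read, not byte-at-a-time
                    if not chunk:
                        break
                    chunks.append(chunk)
                return b"".join(chunks).decode("utf-8", errors="replace")

        # a few parallel workers hide per-request network latency
        with ThreadPoolExecutor(max_workers=8) as pool:
            pages = list(pool.map(fetch, urls))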

    Read the article

  • What is the best way to write faster in Vim using a non-English keyboard?

    - by Martín Fixman
    I usually use Vim, and it's great for doing certain actions faster than other editors. However, since I live in Argentina I have a Latin American keyboard, which makes some things in Vim slower (to type / and start a search, I must press Shift+7). Since I don't want to be changing keyboard layouts all the time (and it's pretty difficult to get used to typing symbols as on an English keyboard), I was wondering if there is a Vim plugin (or .vimrc configuration) that may be useful for international users. Just for the sake of it, here's how the Latin American keyboard is laid out: [keyboard layout image]. By the way, I would love to go and buy an English keyboard, but unfortunately I use a laptop.

    Read the article

  • Is it faster to loop through a Python set of numbers or a set of letters?

    - by Scott Bartell
    Is it faster to loop through a Python set of numbers or a Python set of letters, given that each set is exactly the same length and each item within each set is the same length? Why? I would think there would be a difference, because letters have more possible characters [a-zA-Z] than numbers [0-9] and would therefore be more 'random', which might affect the hashing to some extent.

        numbers = set([00000, 00001, 00002, 00003, 00004, 00005, ... 99999])
        letters = set(['aaaaa', 'aaaab', 'aaaac', 'aaaad', ... 'aaabZZ'])  # this is just an example, it does not actually end here

        for item in numbers:
            do_something()

        for item in letters:
            do_something()

    where len(numbers) == len(letters). Update: I am interested in Python's specific hashing algorithm and what happens behind the scenes with this implementation.
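
    Note that iterating an existing set does not re-hash its elements - hashing matters when the set is built or when membership is tested - so any difference in a pure loop mostly reflects object size and memory layout rather than hash quality. A minimal measurement sketch with timeit (sizes and values here are arbitrary):

        import timeit

        # compare iteration over a set of ints and a set of short strings
        numbers = set(range(100000))
        letters = {"s%05d" % i for i in range(100000)}

        def loop(s):
            for _ in s:
                pass

        print(timeit.timeit(lambda: loop(numbers), number=100))
        print(timeit.timeit(lambda: loop(letters), number=100))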

    Read the article

  • How much faster are register based architectures than stack architectures?

    - by drozzy
    Studying a compilers course, I am left wondering why we use registers at all. It is often the case that the caller or callee must save a register's value and then restore it, so in a way they always end up using the stack anyway. Is creating additional complexity by using registers really worth it? Excuse my ignorance. Update: Please, I know that registers are faster than RAM and other types of cache. My main concern is that one has to "save" the value that is in the register and later "restore" it. In both cases we are accessing some kind of cache, so would it not be better to use the cache in the first place?

    Read the article

  • Call an external library from PHP. Which is faster: exec or an extension?

    - by robusta
    Hi, I need to make calls from a web page to an external library written in C++ and display the result. The platform is Linux, Apache and PHP. My current idea is to use a PHP service that calls my library/program. I found that there are two possible ways to do this:
    1) use PHP's exec function
    2) write a PHP extension
    I am curious which works more efficiently - which is faster and puts less load on the server? I will probably need to do about 4 calls per second, so I want this to be as close to optimal as possible. P.S. If you are aware of some other (more effective) way of calling a C++ library or program from a web page, please let me know. Thanks a lot, Robusta
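
    The main cost difference is process-creation overhead for exec versus a plain in-process function call for an extension. A rough way to get a feel for the gap, sketched in Python for illustration (the question is about PHP, where the same comparison applies; this assumes a Linux system with /bin/true and glibc available):

        import ctypes
        import subprocess
        import timeit

        libc = ctypes.CDLL("libc.so.6")   # assumption: glibc on Linux

        # spawning a trivial external process 200 times
        spawn = timeit.timeit(
            lambda: subprocess.run(["/bin/true"], check=True), number=200)

        # calling a trivial in-process C function 200 times
        in_proc = timeit.timeit(lambda: libc.abs(-5), number=200)

        print("200 process spawns:   %.3f s" % spawn)
        print("200 in-process calls: %.6f s" % in_proc)

    At 4 calls per second either approach may be fast enough; the measurement mainly shows where the overhead lives.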

    Read the article

  • Java - Can I get faster performance for this loop?

    - by Brad
    I am reading a book and deleting a number of words from it. My problem is that the process takes a long time, and I want to improve its performance (less time). Example:

        Vector<String> pages = new Vector<String>();          // contains about 1500 pages, each with about 1000 words
        Vector<String> wordsToDelete = new Vector<String>();  // contains about 50000 words

        for (String page : pages) {
            String pageInLowCase = page.toLowerCase();
            for (String wordToDelete : wordsToDelete) {
                if (pageInLowCase.contains(wordToDelete))
                    page = page.replaceAll("(?i)\\b" + wordToDelete + "\\b", "");
            }
            // Do some stuff with the final page that does not take much time.
        }

    This code takes around 3 minutes to execute. If I skip the replaceAll(...) loop I can save more than 2 minutes, so is there a way to do the same work with faster performance?
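
    A common way to speed this up is to combine all the words into one precompiled alternation and make a single pass over each page, instead of compiling and running one replaceAll per word; the same idea works in Java with a precompiled java.util.regex.Pattern. A minimal sketch in Python for illustration (the word list and pages below are hypothetical):

        import re

        words_to_delete = ["foo", "bar", "baz"]            # hypothetical word list
        pages = ["Foo went to the bar.", "No baz here."]   # hypothetical pages

        # one pattern: \b(?:word1|word2|...)\b, matched case-insensitively
        pattern = re.compile(
            r"\b(?:" + "|".join(re.escape(w) for w in words_to_delete) + r")\b",
            re.IGNORECASE)

        # one substitution pass per page instead of one pass per word
        cleaned = [pattern.sub("", page) for page in pages]

    Even with tens of thousands of words the single compiled alternation tends to beat tens of thousands of separate passes, since each page is scanned once.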

    Read the article

  • Can a program written in C be faster than one written in OCaml and translated to C?

    - by Ole Jak
    So I have a cool image-processing algorithm. I have written it in OCaml and it performs well. I know I can compile it as C code with a command such as:

        ocamlc -output-obj -o foo.c foo.ml

    (I am in a situation where I am not allowed to use the OCaml compiler to build my program for my architecture; I can only use a specially modified gcc. So I will compile that program with something like:

        gcc -L/usr/lib/ocaml foo.c -lcamlrun -lm -lncurses

    and it'll run on my architecture.) What I want to know is: in the general case, can a program written in C be faster than one written in OCaml and translated to C?

    Read the article

  • Vertex 2 SSD is running faster than my Vertex 3 SSD?

    - by Kairan
    I used Acronis Disk Director to do a direct clone of my C:\ Windows 7 x64 drive from my Vertex 2 to my new Vertex 3 SSD (just to show that the drive contents, software install and everything else are identical). I ran a performance test in Windows using the Windows Experience Index. The rating I receive when booting from the Vertex 2 is 7.5, while I am getting a rating of only 6.9 for the Vertex 3. My understanding is that the read/write speeds of the Vertex 2 are only up to 250 MB/s, while the Vertex 3 goes up to 500 MB/s. Copying a single file (3 GB in size) from the Vertex 3 to itself was getting speeds of approximately 70-80 MB/s; this is no better (maybe worse) than what I got from the Vertex 2. I am connected via the SATA 3 port on the motherboard, using a SATA 3 cable. Is this issue caused by the drive cloning? Do I have a bad SSD?

    Read the article

  • For Australian audiences, would an uncached .com.au domain resolve faster than an uncached .com?

    - by thomasrutter
    Is there any speed benefit to using a .com.au domain rather than a .com if your customers, hosting and DNS services are in Australia - specifically in the worst typical case, where the domain is not cached in any local DNS resolver near the customer? Assume that both domains point to the same nameservers in the end. I know this is mostly academic, because we are talking about a DNS lookup that would take at most a few hundred milliseconds and would only be relevant once at the beginning of a session; I was just curious. I know that an uncached .com lookup will involve consulting at least one of the gtld-servers.net servers, and an uncached .com.au lookup will involve consulting at least one of the .au TLD nameservers. What I'd need to know, then, is: do the various gtld-servers.net servers use anycast with fully authoritative nodes located in Australia, making them just as fast for Australians as the .au servers and avoiding 200 ms+ of overseas latency, or are some or all of them hosted only in the US or elsewhere in the northern hemisphere?
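
    One way to get a concrete number is simply to time resolution of a name in each TLD from an Australian host. A rough sketch using the system resolver (the domain names are hypothetical, and local resolver caching will hide the uncached path unless caches are flushed or the names are fresh):

        import socket
        import time

        # time full name resolution for a .com and a .com.au name
        for name in ("example.com", "example.com.au"):
            start = time.perf_counter()
            socket.getaddrinfo(name, 80)
            elapsed = (time.perf_counter() - start) * 1000
            print("%-20s %.1f ms" % (name, elapsed))

    Running it twice in a row also shows how much of the cost disappears once the local resolver has the answer cached.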

    Read the article

  • Which is faster, copying everything at once or one thing at a time?

    - by fredley
    I am transferring a bunch (20+) of large (1 GB+) files to my external flash drive over USB 2.0. Is it quicker to start them all at once (one after another, but without waiting for the previous transfer to finish, so that there are multiple transfers going on at the same time), or to transfer one, wait for it to finish, then start the next? The files are coming from a variety of locations, so I can't do one single big transfer. Are there any other advantages to one way or the other that are worth considering?
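
    The usual expectation is that sequential transfers win on a single USB 2.0 target, because concurrent writes force the drive and the source disks to jump back and forth; but it is easy to measure on your own files. A rough timing sketch in Python (the paths are hypothetical placeholders):

        import shutil
        import time
        from concurrent.futures import ThreadPoolExecutor

        # hypothetical source/destination pairs
        files = [("/data/file%d.bin" % i, "/media/usb/file%d.bin" % i) for i in range(20)]

        # one at a time
        start = time.perf_counter()
        for src, dst in files:
            shutil.copyfile(src, dst)
        print("sequential: %.1f s" % (time.perf_counter() - start))

        # several at once
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=4) as pool:
            list(pool.map(lambda p: shutil.copyfile(*p), files))
        print("concurrent: %.1f s" % (time.perf_counter() - start))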

    Read the article

  • Why do MSI installations use slower drives over faster ones in Windows 7?

    - by Joshua C
    I have noticed that the slowest drive in my system is used most during an MSI installation. I mainly notice this when running Windows updates, but it seems to apply to MSI installs in general. The setup I last saw this occur on was running Windows 7 with the following drives:
    SATA: 240 GB SSD, NTFS, ~515 MB/s (operating system drive); 1 TB, NTFS, ~110 MB/s
    FireWire: 4 TB, ExFAT, ~80 MB/s
    I would think that Windows would choose the fastest drive with available space for temporary files, but it instead chooses the external drive with the slowest transfer speed. I could also understand choosing the 1 TB drive, since it is not an SSD, in an attempt to preserve the SSD's write endurance. Why does this happen? Is there a way to force these installations to use the OS drive or a specific drive?

    Read the article

  • What do you upgrade to make games load faster? [on hold]

    - by Superbest
    Let's say you have a relatively modern game like Shogun 2. The loading screens take several minutes. This bothers you and you'd like to improve it. What is actually going on while loading screens are up? I'm guessing assets are being loaded into memory from disk, and possibly being decompressed first. However, what is actually causing the slowdown? The memory? The mainboard? The CPU? The HDD? If you had $100 to spend on upgrades and your only goal is to speed up loading screens without reducing other performance, what component of the computer does it make sense to upgrade for maximum benefit? If your answer is "it depends on the existing setup", what sort of benchmarks would you run to determine what is causing the bottleneck? What if you had $500 instead? I give the two budgets for context. I am not asking for actual recommendations about which component to buy (nor are the numbers supposed to be rigid limits), but rather what features are important when shopping for components with small and large budgets (a large budget could allow buying multiple components which are not so good on their own, but which work particularly well together). I mention Shogun 2 as an example, but I'm asking about reducing overall loading times across all games, not just one game. Therefore, "put it on a solid-state disk" probably won't be a good solution, because putting every game on your SSD would quickly fill it up.

    Read the article
