Search Results

Search found 6670 results on 267 pages for 'speed dial'.

Page 35 of 267

  • Threading cost - minimum execution time when threads would add speed

    - by Lukas
    I am working on a C# application that works with an array, walking through it so that only a narrow part of the array is in use at any one time. I am considering adding threads to make it perform faster (it runs on a dual-core computer). The problem is that I do not know whether they would actually help, because threads cost something, and this cost could easily exceed the parallel gain... So how do I determine if threading would help?
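
    One way to answer this empirically is to time both versions on the target machine. Below is a minimal sketch, assuming .NET 4's Parallel.For and a stand-in workload (the real array walk isn't shown in the question):

        using System;
        using System.Diagnostics;
        using System.Threading.Tasks;

        class ParallelCostBenchmark
        {
            static void Main()
            {
                var data = new double[10000000];   // stand-in for the real array

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < data.Length; i++) data[i] = Math.Sqrt(i);
                sw.Stop();
                Console.WriteLine("Sequential: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                Parallel.For(0, data.Length, i => { data[i] = Math.Sqrt(i); });
                sw.Stop();
                Console.WriteLine("Parallel:   {0} ms", sw.ElapsedMilliseconds);
            }
        }

    If the parallel run is not clearly faster on the dual-core target, the per-element work is too cheap to amortize the cost of scheduling threads.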

  • php curl upload speed

    - by zanatic
    I made a script that uploads files to different servers using PHP cURL. The problem is that it eats all my upload bandwidth, so Apache does not respond the way I would like. The call is made from the PHP CLI, so I cannot use any of Apache's bandwidth-limiting options. Thanks in advance
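
    If throttling the transfer itself is acceptable, libcurl can cap its own send rate. A sketch, assuming a PHP/cURL build new enough to expose CURLOPT_MAX_SEND_SPEED_LARGE, with a hypothetical endpoint and file path:

        <?php
        $ch = curl_init('http://example.com/upload');               // hypothetical target
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, array('file' => '@/path/to/file'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_MAX_SEND_SPEED_LARGE, 50 * 1024);  // cap at ~50 KB/s
        $result = curl_exec($ch);
        curl_close($ch);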

  • speed up wamp server + drupal on windows vista

    - by Andrew Welch
    Hi, my localhost performance with Drupal 6 is pretty slow. I found a suggestion to add a # in front of the ::1 localhost line of the system32\drivers\etc\hosts file, but that was something I had already done, and it didn't help much. Does anyone know of any other optimisations that might work? Thanks, Andy
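
    For reference, the hosts-file tweak mentioned above comments out the IPv6 mapping so name resolution doesn't stall on ::1 (a sketch of the relevant lines):

        127.0.0.1       localhost
        # ::1           localhost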

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data: 9 MB on disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow the inserts down by 180x just through 3 foreign keys. Oh, and the disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/s * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10 x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to

        - a small table: 198 rows, 16 kB on disk
        - a large table: 1.2M rows, 59 MB data + 89 MB index on disk
        - a large table: 2.2M rows, 198 MB + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way, defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
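
    The usual workaround follows the question's own finding: drop the constraints, bulk-load, and re-add them, so the referenced rows are validated once in bulk rather than per inserted row. A sketch with hypothetical table, constraint, and column names:

        BEGIN;
        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_small_id_fkey;  -- repeat per FK
        TRUNCATE hourly_table;
        INSERT INTO hourly_table SELECT * FROM staging_table;                 -- or COPY FROM
        ALTER TABLE hourly_table ADD CONSTRAINT hourly_table_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);               -- checked in one pass
        COMMIT;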

  • streaming speed for video content on iphone

    - by Jim
    My application was rejected by Apple today. Apple says that the video stream should be no more than 64 kbps. What will I have to do to get my application approved on the App Store? Do I have to change the video content that I am streaming from my iPhone, or do I have to change the code? Please suggest. Thanks, Jim.
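
    Rejections like this usually cite Apple's guideline that apps streaming video over cellular networks must offer a variant at or below 64 kbps; that is a content change rather than a code change. With HTTP Live Streaming, one hedged sketch is a master playlist that adds a low-bitrate variant (paths are hypothetical):

        #EXTM3U
        #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
        low/index.m3u8
        #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000
        high/index.m3u8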

  • php parsing speed optimization

    - by Arnaud
    I would like to add tooltips or generate links according to the elements available in the database. For example, if the HTML page printed is:

        to reboot your linux host in single-user mode you can ...

    I will use explode(" ", $row[page]), and the idea is then to look up every single word of the page to find out whether it has a related reference. In this example, let's say I've got a table referance with one entry for reboot and one for linux:

        reboot: restart a computer
        linux: operating system

    Now my output will look like:

        to <a href="ref/reboot">reboot</a> your <a href="ref/linux">linux</a> host in single-user mode you can ...

    Instead of a static list generated when I saved the content, if I add more keywords in the future the text will become more interactive. My main concern and question is: how can I create an efficient enough process to do this? Should I store all the DB entries in an array and compare them? Do an SQL query for each word (seems to be crazy)? Dump the table to a file and use a very long regex, or a "grep -f pattern data" way of doing it? I'm sure there must be a better way of doing it, I just don't have a clue about it. Or maybe this will be far too resource-unfriendly and I should avoid doing such things. Cheers!
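
    A sketch of the first option: load the reference table once, build a pattern => link map, and rewrite the page in a single preg_replace pass. The table name is taken from the question; the keyword column and the era-appropriate mysql_* API are assumptions:

        <?php
        $patterns = array();
        $links    = array();
        $res = mysql_query("SELECT keyword FROM referance");
        while ($row = mysql_fetch_assoc($res)) {
            $k = $row['keyword'];
            // \b word boundaries keep "reboot" from matching inside longer words
            $patterns[] = '/\b' . preg_quote($k, '/') . '\b/i';
            $links[]    = '<a href="ref/' . $k . '">' . $k . '</a>';
        }
        $page = preg_replace($patterns, $links, $page);

    One pass over the whole text avoids both the per-word SQL query and the grep-style detour through a file.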

  • c# - SQL - speed up code to DB

    - by user228058
    I have a page with 26 sections, one for each letter of the alphabet. I'm retrieving a list of manufacturers from the database and, for each one, creating a link using a different field in the database. Currently I leave the connection open and run a new SELECT for each letter, with WHERE Name LIKE that letter's pattern. It's very slow, though. What's a better way to do this? TIA
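
    One round trip is usually enough here: fetch all manufacturers ordered by name in a single SELECT, then split them into the 26 sections in memory. A sketch with a hypothetical Manufacturer class standing in for one database row:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Manufacturer          // hypothetical row shape
        {
            public string Name;
            public string LinkField;
        }

        static class Sections
        {
            // SELECT Name, LinkField FROM Manufacturers ORDER BY Name  -- one query
            public static Dictionary<char, List<Manufacturer>> Split(IEnumerable<Manufacturer> rows)
            {
                return rows
                    .GroupBy(m => char.ToUpperInvariant(m.Name[0]))   // first letter = section
                    .ToDictionary(g => g.Key, g => g.ToList());
            }
        }

    Each of the 26 page sections then renders from the dictionary without touching the database again.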

  • Calculating Connection/Download Speed

    - by kdbdallas
    I have a client and server program (both in Obj-C), and I am transferring files between two devices using the programs. The transferring works fine, but I would like to display to the user what transfer rate they are getting. So: I know the total size of the file and how much of the file has been transferred. Is there a way to figure out the transfer rate from this information, and if not, what information do I need to calculate it? Thanks
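
    The rate is just bytes moved divided by elapsed time, so the only extra piece of information needed is when the transfer began. A sketch in Objective-C (names are illustrative):

        #import <Foundation/Foundation.h>

        // bytes per second, given the transfer's start date and progress so far
        static double transferRate(NSDate *start, long long bytesTransferred) {
            NSTimeInterval elapsed = [[NSDate date] timeIntervalSinceDate:start];
            return elapsed > 0 ? (double)bytesTransferred / elapsed : 0.0;
        }

        // the same numbers also give an estimated time remaining
        static double secondsRemaining(long long totalBytes, long long bytesTransferred, double rate) {
            return rate > 0 ? (double)(totalBytes - bytesTransferred) / rate : 0.0;
        }

    Averaging over the last few samples rather than since the start gives a steadier display.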

  • Why is this code's execution speed so different?

    - by Steve Watkins
    In Internet Explorer 7, this code executes consistently in 47 ms:

        function updateObjectValues() {
            $('.objects').html(12345678); // ~500 DIVs
        }

    However, this code executes consistently in 157 ms:

        function updateObjectValues() {
            $('.objects').html('12345678'); // ~500 DIVs
        }

    Passing a number is over 3x faster than passing a string. Why are these results so dramatically different? And is there any way to help the performance of the string?
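
    One possibility is that the two arguments take different code paths inside jQuery, since .html() inspects string arguments before inserting them. A hedged way to test that, and to help the string case, is to bypass .html() and assign innerHTML directly:

        function updateObjectValues() {
            var html = String(12345678);       // build the string once
            $('.objects').each(function () {
                this.innerHTML = html;         // identical work for number or string input
            });
        }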

  • How to speed up the code?

    - by kaushik
    In my program I have a method that requires about 4 files to be open each time it is called, as it needs to take some data from them. All this data from the files I have been storing in lists for manipulation. I need to call this method about 10,000 times, which is making my program very slow. Is there a better way of handling these files? And is storing all the data in lists time-consuming; what are better alternatives to lists? I could give some code, but my previous question was closed because the code only confused everyone: it is part of a big program and would need to be explained completely to be understood. So I am not giving any code here; please suggest approaches, treating this as a general question... thanks in advance
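
    Assuming the same few files are re-read on every call, the usual fix is to read each file once and cache the parsed result. A Python sketch (the file handling is hypothetical, since no code was posted):

        _file_cache = {}

        def load_lines(path):
            """Read and parse a file only the first time it is requested."""
            if path not in _file_cache:
                with open(path) as f:
                    _file_cache[path] = f.read().splitlines()
            return _file_cache[path]

        def process(paths):
            for path in paths:            # the ~4 files the method needs
                lines = load_lines(path)  # cheap after the first call
                # ... work with lines ...

    If lookups rather than I/O turn out to dominate, replacing list scans with a dict or set turns O(n) membership tests into O(1).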

  • Is there a technical way to speed up a general program above current PC speed limit?

    - by Maksee
    Let's imagine I developed a Windows console application implementing some algorithm that calculates something. Let's say it doesn't use any threads, just a straightforward linear approach with ifs, loops and so on. Is there any technical way to make it run 100x faster than on the most advanced current PC? For example, one way would be to run it on a supercomputer that emulates an i386 faster than any existing PC. But in that case the questions are which computer, and does it really have the ability to emulate Windows. In other words, are there real examples of such an approach? Although in general it looks useless, if there were a way, one could develop a program on an ordinary home computer and pay to run it much faster on some other hardware. I suppose this question could be asked on superuser.com, but since there are possible specifics involving such things as assembler instructions or threads, I thought stackoverflow.com was better.

  • (Pathinfo vs fnmatch part 2) Speed benchmark reversed on Windows and Mac

    - by zaf
    On a previous question, the pathinfo and fnmatch functions were benchmarked, and the answers all came out opposite to my benchmark results. You can read the different results, along with the benchmark code, here: http://stackoverflow.com/questions/2693428/pathinfo-vs-fnmatch I couldn't work it out until I ran the same code on a machine running Vista; the results then matched the other users'. My main machine is a Mac. So, my questions are: Why do we get these two different results? Could this apply to other functions?
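
    For cross-platform comparisons like this, running one small shared harness on both machines helps rule out methodology differences. A sketch (PHP 5.3+ assumed for the closures; the loop count is arbitrary):

        <?php
        function bench($label, $fn, $n = 100000) {
            $t = microtime(true);
            for ($i = 0; $i < $n; $i++) {
                $fn();
            }
            printf("%s: %.4f s over %d calls\n", $label, microtime(true) - $t, $n);
        }

        bench('pathinfo', function () { pathinfo('/tmp/file.txt', PATHINFO_EXTENSION); });
        bench('fnmatch',  function () { fnmatch('*.txt', '/tmp/file.txt'); });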

  • git can I speed up committing?

    - by AndreasT
    I have a big repository in a shared folder, and I use git from within a VM on that folder. Everything works nicely, but the repository is big, and git's searching through all directories and files when committing is slow. I cannot move this repository out of the shared folder. I tried to git add specific files and directories, but when I do git commit -m "something" it still goes off on its odyssey through the directory tree. Can I do commits that ignore the rest of the tree?
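
    A few things worth trying, hedged because shared-folder filesystems vary a lot: pathspec-limited commits and git's index-preloading option both cut down on how much of the tree git touches:

        # commit only the named paths, regardless of what else has changed
        git commit --only -m "something" path/to/file

        # stat() the index in parallel; helps most on slow or network filesystems
        git config core.preloadIndex true

        # status without the untracked-file walk, usually the expensive part
        git status -uno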

  • What type of websites does memcached speed up

    - by Saif Bechan
    I have read this article about a 400% boost for your website, achieved with a combination of nginx and memcached. The how-to part of the article is quite good, but I miss the part that says what types of websites this applies to. I know nginx is an HTTP engine; I need no explanation for that. I thought memcached had something to do with caching database results, so I don't understand what it has to do with the HTTP request; can someone please explain that to me? The other question I have is: for what types of websites is this used? I have a website where the important part consists of data that changes often, often meaning minutes. Will this method still apply to me, or should I just stick with the basic, boring setup of Apache and nothing else?
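
    The key point in such setups is that whole rendered pages, not database rows, are stored in memcached, and nginx looks them up by key before the request ever reaches the application. A hedged sketch of the nginx side (the backend address and key scheme are assumptions):

        location / {
            set $memcached_key "$uri";          # the app stores rendered pages under this key
            memcached_pass 127.0.0.1:11211;     # a hit is served straight from cache
            default_type  text/html;
            error_page 404 502 504 = @app;      # a miss falls through to the application
        }

        location @app {
            proxy_pass http://127.0.0.1:8080;   # hypothetical application backend
        }

    Data that changes every few minutes can still benefit, provided the application deletes or rewrites the cached page whenever the underlying data changes.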

  • Speed-up of readonly MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except for those occasions, data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize the performance of reads from it? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take a large portion of the server's memory, which I don't want. I hope my question is clear enough; I'm a novice when it comes to DB administration, so any input or suggestions are welcome.
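
    One established option for exactly this case is myisampack, which compresses a MyISAM table into a read-only form with smaller data files and less I/O per scan. A sketch, with a hypothetical data-directory path and the table taken offline first:

        # compress the table; the result is read-only until unpacked
        myisampack /var/lib/mysql/archive/old_data

        # rebuild the indexes to match the packed format
        myisamchk -rq /var/lib/mysql/archive/old_data

    Afterwards, issue FLUSH TABLES so the server picks up the new files; the table can be repacked after each monthly archiving run.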

  • Speed up an Excel Macro?

    - by N. Lucas
    Right now I have a macro, PopulateYearlyValues, but it seems to me it's taking way too long:

        Sub PopulateYearlyValues(ByVal Month As Range)
            Dim c As Double
            Dim s As Double
            c = Application.WorksheetFunction.Match(UCase(Month.Value), ActiveSheet.Range("AA5:AX5"), 0)
            s = (ActiveSheet.Range("AA5").Column - 1)
            With ActiveSheet
                Dim i As Integer
                Dim j As Integer
                For i = 7 To 44
                    .Range("G" & i).Value = 0
                    .Range("H" & i).Value = 0
                    For j = 1 To c
                        .Range("G" & i).Value = (.Range("G" & i).Value + .Cells(i, s).Offset(0, j))
                        .Range("H" & i).Value = (.Range("H" & i).Value + .Cells(i, s).Offset(0, (j + 1)))
                        j = j + 1
                    Next j
                Next i
            End With
        End Sub

    I have a range G7:H44 that needs to be populated with the SUM of range AA7:AX44, but using only every other column:

        If Month.Value = "January"
            G7 = SUM(AA7)                       H7 = SUM(AB7)
            ...
            G44 = SUM(AA44)                     H44 = SUM(AB44)
        End If

        If Month.Value = "April"
            G7 = SUM(AA7, AC7, AE7, AG7)        H7 = SUM(AB7, AD7, AF7, AH7)
            ...
            G44 = SUM(AA44, AC44, AE44, AG44)   H44 = SUM(AB44, AD44, AF44, AH44)
        End If

    But the macro I have is taking way too long. Is there any other way to do this?
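
    The slow part is touching worksheet cells inside the double loop: every .Value read or write is a round trip to Excel. A sketch of the usual fix, with the same logic applied to Variant arrays in memory (one read from the sheet, one write back):

        Sub PopulateYearlyValuesFast(ByVal Month As Range)
            Dim c As Long, i As Long, j As Long
            Dim vals As Variant
            Dim gh(1 To 38, 1 To 2) As Double   ' rows 7..44, columns G and H

            c = Application.WorksheetFunction.Match(UCase(Month.Value), _
                    ActiveSheet.Range("AA5:AX5"), 0)

            vals = ActiveSheet.Range("AA7:AX44").Value   ' single read
            For i = 1 To 38
                For j = 1 To c Step 2                    ' every other column, as before
                    gh(i, 1) = gh(i, 1) + vals(i, j)
                    gh(i, 2) = gh(i, 2) + vals(i, j + 1)
                Next j
            Next i
            ActiveSheet.Range("G7:H44").Value = gh       ' single write
        End Sub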

  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all, in Redis (http://code.google.com/p/redis) there are scores associated with elements, in order to retrieve the elements sorted. These scores are doubles, even though many users actually sort by integers (for instance, Unix times). When the database is saved, we need to write these doubles to disk. This is what is used currently:

        snprintf((char*)buf+1,sizeof(buf)-1,"%.17g",val);

    Additionally, infinity and not-a-number conditions are checked, in order to also represent them in the final database file. Unfortunately, converting a double into its string representation is pretty slow, while we have a function in Redis that converts an integer into a string representation in a much faster way. So my idea was to check whether a double can be cast to an integer without loss of data, and then use the fast function to turn the integer into a string if so. For this to provide a good speedup, of course, the test for integer "equivalence" must be fast. So I used a trick that is probably undefined behavior but that worked very well in practice. Something like this:

        double x = ... some value ...
        if (x == (double)((long long)x))
            use_the_fast_integer_function((long long)x);
        else
            use_the_slow_snprintf(x);

    In my reasoning, the cast above converts the double into a long long and then back into a double. If the range fits, and there is no decimal part, the number will survive the conversion and be exactly the same as the initial number. As I wanted to make sure this would not break things on some system, I joined #c on freenode and got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets, that is, the archs where Linux / Mac OS X / *BSD / Solaris run nowadays? What I can add to make the code saner is an explicit check of the double's range before trying the cast at all. Thank you for any help.
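
    A sketch of the "saner" range-checked version the question alludes to: the cast happens only after NaN/infinity, out-of-range values, and fractional parts are excluded, so the conversion stays within behavior the C standard defines (C99 assumed for isfinite and trunc):

        #include <math.h>

        /* 2^63 is exactly representable as a double, so [-2^63, 2^63)
           bounds the values a long long cast is defined for */
        static int double_fits_longlong(double x, long long *out) {
            if (!isfinite(x)) return 0;
            if (x < -9223372036854775808.0 || x >= 9223372036854775808.0) return 0;
            if (x != trunc(x)) return 0;   /* fractional part present */
            *out = (long long)x;
            return 1;
        }

    On success the fast integer-to-string path can be used; otherwise fall back to snprintf.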

  • Windows Azure local development environment speed

    - by Paperjam
    I've started porting an existing ASP.NET web app to Windows Azure and have noticed that the development process is really slow. Each time I make a change to my code and want to view it, I have to effectively redeploy it to the local dev cloud, using Start Debugging (F5) or Start Without Debugging (Ctrl-F5). The process itself takes over a minute, during which time Visual Studio is completely unresponsive. Am I doing something wrong, or is that simply how developing for Azure is?

    My specs:

        - Visual Studio 2008 9.0.30729.1 SP
        - 5 projects running on .NET 3.5 SP1
        - Azure SDK 1.1 (February 2010)
        - a single instance of a single web role
        - dual-core AMD 64 machine with 8 GB RAM, 64-bit Windows 7, fully patched

    The main project itself is quite large (3k files, ~200k lines) but compiles normally in 10-15 seconds.

  • Why the difference in speed?

    - by AngryHacker
    Consider this code:

        Function Foo(ds As OtherDLL.BaseObj)
            Dim lngRowIndex As Long
            Dim lngColIndex As Long
            For lngRowIndex = 1 To UBound(ds.Data, 2)
                For lngColIndex = 1 To ds.Columns.Count
                    Debug.Print ds.Data(lngRowIndex, lngColIndex)
                Next
            Next
        End Function

    OK, a little context. The parameter ds is of type OtherDLL.BaseObj, which is defined in a referenced ActiveX DLL. ds.Data is a Variant two-dimensional array (one dimension carries the data, the other carries the column index), and ds.Columns is a Collection of the columns in ds.Data. Assuming there are at least 400 rows of data and 25 columns, this code takes about 15 seconds to run on my machine. Kind of unbelievable. However, if I copy the Variant array to a local variable, like so:

        Function Foo(ds As OtherDLL.BaseObj)
            Dim lngRowIndex As Long
            Dim lngColIndex As Long
            Dim v As Variant
            v = ds.Data
            For lngRowIndex = 1 To UBound(v, 2)
                For lngColIndex = 1 To ds.Columns.Count
                    Debug.Print v(lngRowIndex, lngColIndex)
                Next
            Next
        End Function

    the entire thing processes in barely any noticeable time (basically close to 0). Why?

  • How to time Java program execution speed

    - by George Mic
    Sorry if this sounds like a dumb question, but how do you time the execution of a Java program? I'm not sure which class I should use to do this. I'm kinda looking for something like:

        //Some timer starts here
        for (int i = 0; i < length; i++) {
            // Do something
        }
        //End timer here
        System.out.println("Total execution time: " + totalExecutionTime);

    Thanks
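
    A minimal sketch filling in the placeholders with System.nanoTime, the usual choice for interval timing because it is monotonic (the loop body is a stand-in):

        public class ExecutionTimer {
            public static void main(String[] args) {
                int length = 1000000;
                long start = System.nanoTime();   // some timer starts here
                long sum = 0;
                for (int i = 0; i < length; i++) {
                    sum += i;                     // do something
                }
                long totalExecutionTime = (System.nanoTime() - start) / 1000000;
                System.out.println("Total execution time: " + totalExecutionTime + " ms (sum=" + sum + ")");
            }
        }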

  • nodejs response speed and nginx

    - by user1502440
    I've just started testing Node.js and wanted to get some help in understanding the following behavior.

    Example #1:

        var http = require('http');
        http.createServer(function(req, res){
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.end('foo');
        }).listen(1001, '0.0.0.0');

    Example #2:

        var http = require('http');
        http.createServer(function(req, res){
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.write('foo');
            res.end('bar');
        }).listen(1001, '0.0.0.0');

    When testing response time in Chrome:

        example #1 - 6-10ms
        example #2 - 200-220ms

    But if I test both examples through an nginx proxy_pass:

        server {
            listen 1011;
            location / {
                proxy_pass http://127.0.0.1:1001;
            }
        }

    I get this:

        example #1 - 4-8ms
        example #2 - 4-8ms

    I am not an expert on either Node.js or nginx; can someone explain this? (nodejs - v.0.8.1, nginx - v.1.2.2)
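
    One common explanation for a gap of roughly this size is Nagle's algorithm interacting with delayed ACKs when the response leaves as two small packets; nginx hides it by buffering the upstream response into a single write. A hedged way to test that theory is to disable Nagle on the connection:

        var http = require('http');

        http.createServer(function (req, res) {
            res.connection.setNoDelay(true);   // send small packets immediately (no Nagle)
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.write('foo');
            res.end('bar');
        }).listen(1001, '0.0.0.0');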

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that b-trees have awful performance when data is added to them sequentially (regardless of the direction), whereas when data is added randomly, the best performance is obtained. This is easy to demonstrate with the likes of an RB-tree: sequential writes cause a maximum number of tree balances to be performed. I know very few databases use binary trees, using n-order balanced trees instead, but I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs.

    This sparked my curiosity. If this is so, then one could deduce that writing sequential IDs (such as with IDENTITY(1,1)) would cause multiple re-balances of the tree to occur. I have seen many posts argue against GUIDs because "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[T1](
            [ID] [int] NOT NULL CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC),
            [Padding] [char](300) NULL   -- payload column used by the inserts below
        )
        GO
        CREATE TABLE [dbo].[T2](
            [ID] [uniqueidentifier] NOT NULL CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC),
            [Padding] [char](300) NULL
        )
        GO

        declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300)
        set @c = REPLICATE('x', 300)
        set @t1 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T2 values (NEWID(), @c)
            set @i = @i + 1
        end
        set @t2 = GETDATE()
        WAITFOR delay '0:0:10'
        set @t3 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T1 values (@i, @c)
            set @i = @i + 1
        end
        select DATEDIFF(ms, @t1, @t2) AS [GUID], DATEDIFF(ms, @t3, getdate()) AS [Int]

        drop table T1
        drop table T2

    Note that I am not subtracting any time for the creation of the GUID, nor for the considerably larger row it produces. The results on my machine were as follows:

        Int:  17,340 ms
        GUID:  6,746 ms

    This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this?

    P.S. I get that this isn't a question; it's an invitation to discussion, and that is relevant to learning optimum programming.

  • wordpress mu image speed problem

    - by InnateDev
    I have an MU install with the typical blogs.dir folder storing files for each blog. When loading these images, however, they take forever to appear, though they eventually do. It seems that WPMU uses PHP to serve each image, which is ludicrous. When using images from the same domain but in a root folder, the images display quickly. Is there a workaround for blogs.php rendering the files? Could there be something else wrong in the settings of my install?
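
    If each domain maps to a single known blog, one hedged workaround is a mod_rewrite rule that serves uploads straight from blogs.dir and bypasses PHP entirely (the blog ID 2 here is hypothetical and would need to match your install):

        # serve files directly instead of routing them through blogs.php
        RewriteRule ^files/(.+)$ wp-content/blogs.dir/2/files/$1 [L]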
