Search Results

Search found 10033 results on 402 pages for 'execution speed'.


  • Calculating Connection/Download Speed

    - by kdbdallas
    I have a client and a server program (both in Obj-C) and I am transferring files between two devices using them. The transfer works fine, but I would like to display to the user what transfer rate they are getting. I know the total size of the file and how much of it has been transferred so far; is there a way to figure out the transfer rate from this information, and if not, what information do I need to calculate it? Thanks
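
    The arithmetic is simple: average rate = bytes transferred / elapsed seconds, and time remaining = bytes left / rate. A minimal C++ sketch of that bookkeeping (the type and member names are assumptions, not taken from the poster's code):

        #include <chrono>

        // Tracks the progress of one transfer; all names here are hypothetical.
        struct TransferStats {
            std::chrono::steady_clock::time_point start;  // set when the transfer begins
            long long totalBytes;                         // known file size
            long long sentBytes;                          // updated as chunks go out

            // Average transfer rate since the start, in bytes per second.
            double bytesPerSecond() const {
                double secs = std::chrono::duration<double>(
                    std::chrono::steady_clock::now() - start).count();
                return secs > 0.0 ? sentBytes / secs : 0.0;
            }

            // Estimated seconds left, based on the average rate so far.
            double secondsRemaining() const {
                double rate = bytesPerSecond();
                return rate > 0.0 ? (totalBytes - sentBytes) / rate : 0.0;
            }
        };

    The instantaneous rate jumps around in practice, so progress UIs usually smooth it, for example with a moving average over the last few samples.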

  • Is there a technical way to speed up a general program above current PC speed limit?

    - by Maksee
    Let's imagine I developed a Windows console application implementing some algorithm that calculates something. Say it doesn't use any threads, just a straightforward linear approach with ifs, loops and so on. Is there any technical way to make it run 100x faster than on the most advanced current PC? For example, one way would be to run it on a supercomputer that emulates an i386 faster than any existing PC. But then the question is what computer that would be, and whether it really has the ability to emulate Windows. In other words, are there real examples of such an approach? Although in general it looks useless, if there were a way, one could develop a program on an ordinary home computer and pay to run it much faster on some other hardware. I suppose this question could be asked on superuser.com, but since there are possible specifics involving things like assembler instructions or threads, I thought stackoverflow.com was better.

  • How to speed up the code?

    - by kaushik
    In my program I have a method which requires about 4 files to be open each time it is called, as I need to take some data from them. All the data from the files has been stored in lists for manipulation. I need to call this method about 10,000 times, which is making my program very slow. Is there a better way to handle these files, is storing the whole data in lists time consuming, and what are better alternatives to lists? I could give some code, but my previous question was closed because the code only confused everyone: it is part of a big program and would need to be explained completely to be understood. So I am not giving any code; please treat this as a general question and suggest approaches. Thanks in advance

  • git: can I speed up committing?

    - by AndreasT
    I have a big repository in a shared folder. I use git from within a VM on that folder. Everything works nicely, but the repository is big and git's searching through all directories and files when committing is slow. I cannot move this repository out of the shared folder. I tried to git add specific files and directories, but when I do git commit -m "something" it still goes off on its odyssey through the directory tree. Can I do commits that ignore the rest of the tree?

  • (Pathinfo vs fnmatch part 2) Speed benchmark reversed on Windows and Mac

    - by zaf
    In a previous question the pathinfo and fnmatch functions were benchmarked, and the answers all came out opposite to my benchmark results. You can read the different results along with the benchmark code here: http://stackoverflow.com/questions/2693428/pathinfo-vs-fnmatch I couldn't work it out until I ran the same code on a machine running Vista; the results then matched the other users'. My main machine is a Mac. So, my questions are: Why do we get these two different results? Could this apply to other functions?

  • What types of websites does memcached speed up?

    - by Saif Bechan
    I have read this article about a 400% boost for your website, achieved with a combination of nginx and memcached. The how-to part of the article is quite good, but it lacks the part where it says what types of websites this applies to. I know nginx is an HTTP engine; I need no explanation of that. However, I thought memcached had something to do with caching database results, so I don't understand what it has to do with the HTTP request. Can someone please explain that to me? Another question I have is what types of websites this is used for. I have a website where the important part consists of data that changes often, often meaning minutes. Will this method still apply to me, or should I just stick with the basic boring setup of Apache and nothing else?

  • Speed-up of a read-only MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except on those occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize reads from it? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take up a large portion of the server's memory, which I don't want. Hope my question is clear enough; I'm a novice when it comes to DB administration, so any input or suggestions are welcome.

  • Speed up an Excel Macro?

    - by N. Lucas
    Right now I have a macro PopulateYearlyValues, but it seems to me it's taking way too long:

        Sub PopulateYearlyValues(ByVal Month As Range)
            Dim c As Double
            Dim s As Double
            c = Application.WorksheetFunction.Match(UCase(Month.Value), ActiveSheet.Range("AA5:AX5"), 0)
            s = (ActiveSheet.Range("AA5").Column - 1)
            With ActiveSheet
                Dim i As Integer
                Dim j As Integer
                For i = 7 To 44
                    .Range("G" & i).Value = 0
                    .Range("H" & i).Value = 0
                    For j = 1 To c
                        .Range("G" & i).Value = (.Range("G" & i).Value + .Cells(i, s).Offset(0, j))
                        .Range("H" & i).Value = (.Range("H" & i).Value + .Cells(i, s).Offset(0, (j + 1)))
                        j = j + 1 ' step by two: G sums the odd offsets, H the even ones
                    Next j
                Next i
            End With
        End Sub

    I have a range G7:H44 that needs to be populated with sums over the range AA7:AX44, but only every other column. If Month.Value = "January":

        G7 = SUM(AA7)
        H7 = SUM(AB7)
        ...
        G44 = SUM(AA44)
        H44 = SUM(AB44)

    If Month.Value = "April":

        G7 = SUM(AA7, AC7, AE7, AG7)
        H7 = SUM(AB7, AD7, AF7, AH7)
        ...
        G44 = SUM(AA44, AC44, AE44, AG44)
        H44 = SUM(AB44, AD44, AF44, AH44)

    But the macro I have is taking way too long. Is there any other way to do this?

  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all, in Redis (http://code.google.com/p/redis) there are scores associated with elements, in order to retrieve those elements sorted. The scores are doubles, even though many users actually sort by integers (for instance Unix times). When the database is saved we need to write these doubles to disk. This is what is used currently:

        snprintf((char*)buf+1,sizeof(buf)-1,"%.17g",val);

    Additionally, infinity and not-a-number conditions are checked, in order to also represent those in the final database file. Unfortunately, converting a double into its string representation is pretty slow. We have a function in Redis that converts an integer into a string representation in a much faster way. So my idea was to check whether a double could be cast to an integer without loss of data, and then use that function to turn the integer into a string when this is true. For this to provide a good speedup, of course, the test for integer "equivalence" must be fast. So I used a trick that is probably undefined behavior but that worked very well in practice. Something like this:

        double x = ... some value ...
        if (x == (double)((long long)x))
            use_the_fast_integer_function((long long)x);
        else
            use_the_slow_snprintf(x);

    In my reasoning, the cast above converts the double into a long long and then back into a double. If the range fits, and there is no decimal part, the number will survive the conversion and be exactly the same as the initial number. As I wanted to make sure this would not break things on some system, I joined #c on freenode and got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets, that is, archs where Linux / Mac OS X / *BSD / Solaris run nowadays? What I can add to make the code saner is an explicit check for the range of the double before trying the cast at all. Thank you for any help.
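
    A minimal sketch of that explicit range check (the function name is mine, not Redis's), assuming IEEE 754 doubles and a 64-bit long long: ruling out non-finite values and anything outside [-2^63, 2^63) first makes the cast, and therefore the round-trip test, well defined.

        #include <cmath>

        // Returns true and stores the value in *out when x is exactly a long long.
        // 2^63 == 9223372036854775808.0 is itself exactly representable as a double,
        // so the bounds below compare without rounding surprises.
        bool double_fits_long_long(double x, long long *out) {
            if (!std::isfinite(x)) return false;           // NaN, +/-infinity
            if (x >= 9223372036854775808.0) return false;  // >= 2^63: cast would overflow
            if (x < -9223372036854775808.0) return false;  // < -2^63: same
            long long ll = (long long)x;                   // now well defined
            if ((double)ll != x) return false;             // had a fractional part
            *out = ll;
            return true;
        }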

  • Windows Azure local development environment speed

    - by Paperjam
    I've started porting an existing ASP.NET web app to Windows Azure and have noticed that the development process is really slow. Each time I make a change to my code and want to view it, I have to effectively redeploy it to the local dev cloud (using Start Debugging (F5) or Start Without Debugging (Ctrl-F5)). The process itself takes over a minute, during which time Visual Studio is completely unresponsive. Am I doing something wrong, or is that simply how developing for Azure is? My specs:

        - Visual Studio 2008 9.0.30729.1 SP
        - 5 projects running on .NET 3.5 SP1
        - Azure SDK 1.1 (February 2010)
        - Single instance of a single web role
        - Dual-core AMD 64 machine with 8GB RAM, 64-bit Windows 7, fully patched

    The main project itself is quite large (3k files, ~200k lines) but compiles normally in 10-15 seconds.

  • Why the difference in speed?

    - by AngryHacker
    Consider this code:

        Function Foo(ds As OtherDLL.BaseObj)
            Dim lngRowIndex As Long
            Dim lngColIndex As Long
            For lngRowIndex = 1 To UBound(ds.Data, 2)
                For lngColIndex = 1 To ds.Columns.Count
                    Debug.Print ds.Data(lngRowIndex, lngColIndex)
                Next
            Next
        End Function

    OK, a little context. Parameter ds is of type OtherDLL.BaseObj, which is defined in a referenced ActiveX DLL. ds.Data is a variant 2-dimensional array (one dimension carries the data, the other carries the column index). ds.Columns is a Collection of the columns in ds.Data. Assuming there are at least 400 rows of data and 25 columns, this code takes about 15 seconds to run on my machine. Kind of unbelievable. However, if I copy the variant array to a local variable, like so:

        Function Foo(ds As OtherDLL.BaseObj)
            Dim lngRowIndex As Long
            Dim lngColIndex As Long
            Dim v As Variant
            v = ds.Data
            For lngRowIndex = 1 To UBound(v, 2)
                For lngColIndex = 1 To ds.Columns.Count
                    Debug.Print v(lngRowIndex, lngColIndex)
                Next
            Next
        End Function

    the entire thing processes in barely any noticeable time (basically close to 0). Why?

  • nodejs response speed and nginx

    - by user1502440
    I've just started testing nodejs, and wanted some help in understanding the following behavior.

    Example #1:

        var http = require('http');
        http.createServer(function(req, res){
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.end('foo');
        }).listen(1001, '0.0.0.0');

    Example #2:

        var http = require('http');
        http.createServer(function(req, res){
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.write('foo');
            res.end('bar');
        }).listen(1001, '0.0.0.0');

    When testing response time in Chrome:

        example #1 - 6-10ms
        example #2 - 200-220ms

    But if I test both examples through an nginx proxy_pass:

        server {
            listen 1011;
            location / {
                proxy_pass http://127.0.0.1:1001;
            }
        }

    I get this:

        example #1 - 4-8ms
        example #2 - 4-8ms

    I am not an expert on either nodejs or nginx; can someone explain this? nodejs - v.0.8.1, nginx - v.1.2.2.

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that b-trees have awful performance when data is added to them sequentially (regardless of the direction), while best performance is obtained when data is added randomly. This is easy to demonstrate with the likes of an RB-tree: sequential writes cause a maximum number of tree rebalances to be performed. I know very few databases use binary trees, relying instead on n-order balanced trees, but I logically assume they suffer a similar fate when it comes to sequential inputs. This sparked my curiosity. If this is so, then one could deduce that writing sequential IDs (such as in IDENTITY(1,1)) would cause multiple rebalances of the tree to occur. I have seen many posts argue against GUIDs because "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[T1](
            [ID] [int] NOT NULL CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC),
            [Padding] [char](300) NULL  -- padding column referenced by the inserts below
        )
        GO
        CREATE TABLE [dbo].[T2](
            [ID] [uniqueidentifier] NOT NULL CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC),
            [Padding] [char](300) NULL
        )
        GO
        declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300)
        set @t1 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T2 values (NEWID(), @c)
            set @i = @i + 1
        end
        set @t2 = GETDATE()
        WAITFOR delay '0:0:10'
        set @t3 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T1 values (@i, @c)
            set @i = @i + 1
        end
        select DATEDIFF(ms, @t1, @t2) AS [Int], DATEDIFF(ms, @t3, getdate()) AS [GUID]
        drop table T1
        drop table T2

    Note that I am not subtracting any time for the creation of the GUIDs, nor for the considerably larger size of the rows. The results on my machine were as follows:

        Int: 17,340 ms
        GUID: 6,746 ms

    This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this? P.S. I get that this isn't a question; it's an invitation to discussion, and that is relevant to learning optimum programming.
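
    The RB-tree demonstration mentioned above is easy to reproduce. Here is a minimal C++ sketch timing sequential versus shuffled insertion into a std::map, which is typically implemented as a red-black tree. Note this measures in-memory rebalancing only, where cache locality can actually favor sequential keys, so the numbers will not necessarily mirror a disk-based B-tree's page splits.

        #include <algorithm>
        #include <chrono>
        #include <cstdio>
        #include <map>
        #include <random>
        #include <vector>

        // Insert the keys in the given order and return elapsed milliseconds.
        static double insert_ms(const std::vector<int>& keys) {
            std::map<int, int> tree;
            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t i = 0; i < keys.size(); ++i)
                tree[keys[i]] = 0;
            auto t1 = std::chrono::steady_clock::now();
            return std::chrono::duration<double, std::milli>(t1 - t0).count();
        }

        int main() {
            std::vector<int> keys(200000);
            for (std::size_t i = 0; i < keys.size(); ++i)
                keys[i] = static_cast<int>(i);
            double sequential = insert_ms(keys);       // ascending key order
            std::shuffle(keys.begin(), keys.end(), std::mt19937(42));
            double shuffled = insert_ms(keys);         // same keys, random order
            std::printf("sequential: %.1f ms, random: %.1f ms\n", sequential, shuffled);
            return 0;
        }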

  • wordpress mu image speed problem

    - by InnateDev
    I have a WordPress MU install with the typical blogs.dir folder storing files for each blog. When loading these images, however, they take forever to appear, though they eventually do. It seems that WPMU uses PHP to serve each image, which is ludicrous. When using images from the same domain but in a root folder, the images are displayed quickly. Is there a workaround for blogs.php rendering the files? Or could something else be wrong in the settings of my install?

  • C++ STL Map vs Vector speed

    - by sub
    In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be of type string, int, function, etc.). At first I represented the table with a vector and iterated through the symbols, checking whether the given symbol name fitted. Then I thought that using a map, in my case map<string,symbol>, would be better than iterating through the vector all the time, but: this part is a bit hard to explain, so I'll try. When a variable is retrieved for the first time in a program in my language, its position in the symbol table has to be found (using a vector now). If I iterated through the vector every time the line gets executed (think of a loop), it would be terribly slow (as it currently is, nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable:

        SymbolTable[ myVar.Name ]

    But consider the following: if the variable, still using a vector, is found the first time, I can store its exact integer position in the vector along with it. That means the next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it, but does something like

        SymbolTable.at( myVar.CachedPosition )

    Now my (rather hard?) question: should I use a vector for the symbol table together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?
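
    The two ideas can be combined. A minimal sketch (the type and field names are hypothetical, not from the poster's interpreter): look a symbol up by name once, then cache a pointer to it. Because std::map is node-based, inserting or erasing other entries never invalidates a pointer to an existing element, which a cached vector position cannot promise once elements are inserted or removed before it.

        #include <map>
        #include <string>

        // Hypothetical symbol record standing in for the interpreter's own.
        struct Symbol {
            std::string name;
            int value;  // placeholder for the real variant-like value type
        };

        typedef std::map<std::string, Symbol> SymbolTable;

        // One O(log n) lookup the first time a variable is referenced;
        // returns 0 when the name is unknown.
        Symbol* resolve(SymbolTable& table, const std::string& name) {
            SymbolTable::iterator it = table.find(name);
            return it == table.end() ? 0 : &it->second;
        }

    The caller can stash the returned Symbol* in the parsed instruction, so later executions of the same line skip the lookup entirely.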

  • Speed up calling lots of entities and getting unique values, Google App Engine Python

    - by user291071
    OK, this is a 2-part question. I've seen and searched for several methods to get a list of unique values for a class and haven't been particularly happy with any of them so far. Does anyone have simple example code for getting unique values, for instance for this model? Here is my super slow example:

        class LinkRating2(db.Model):
            user = db.StringProperty()
            link = db.StringProperty()
            rating2 = db.FloatProperty()

        def uniqueLinkGet(tabl):
            start = time.time()
            dic = {}
            query = tabl.all()
            for obj in query:
                dic[obj.link] = 1
            end = time.time()
            print end-start
            return dic

    My second question: is iterating over a query slower than calling fetch? Is there a faster way to do the code below, especially if the number of elements can be larger than 1,000?

        query = LinkRating2.all()
        link1 = 'some random string'
        a = query.filter('link = ', link1)
        adic = {}
        for itema in a:
            adic[itema.user] = itema.rating2

  • Getting up to speed on modern architecture

    - by Matt Thrower
    Hi, I don't have any formal qualifications in computer science; rather, I taught myself classic ASP back in the days of the dotcom boom, managed to get a job, and my career developed from there. I was a confident and, I think, pretty good programmer in ASP 3, but as others have observed, one of the problems with classic ASP was that it did a very good job of hiding the nitty-gritty of HTTP, so you could become quite competent as a programmer on the basis of a relatively poor understanding of the technology you were working with. When I changed over to .NET, at first I treated it like classic ASP, developing stand-alone applications as individual websites simply because I didn't know any better at the time. I moved jobs at this point and spent the next several years working on a single site whose architecture relied heavily on custom objects: in other words, I gained a lot of experience working with .NET as a middle-tier development tool using a quite old-fashioned approach to OO design along the lines of the classic "car" class example that's so often used to teach OO: breaking down programs into blocks of functionality and basing your classes and methods around that. Although we worked under an Agile approach to manage the work, the whole setup was classic client/server stuff. That suited me, and I gradually got to grips with .NET, started using it far more in the manner it should be used, and began to see the power inherent in the technology and precisely why it was so much better than good old ASP 3. In my latest job I have found myself suddenly dropped in at the deep end with two quite young, skilled and very cutting-edge programmers. They've built a site architecture modelled on a lot of ideas that are new to me and which, in truth, I'm having a lot of trouble understanding. The application is built on a cloud computing model with multi-tenancy, and the architecture is all loosely coupled, using a lot of interfaces, factories and the like. They use nHibernate a lot too. Shortly after I joined, both these guys left, and I'm now supposedly the senior developer on a system whose technology and architecture I don't really understand, and I have no one to ask questions of. Except you, the internet. Frankly, I feel like I've been pitched in at the deep end and I'm sinking. I'm not sure if this is because I lack the educational background to understand this stuff, if I'm simply not mathematically minded enough for modern computing (my maths was never great; my approach to design is often to simply debug until it works, then refactor until it looks neat), or whether I've simply been presented with too much of too radical a nature at once. But the only way to find out which it is is to try to learn it. So can anyone suggest some good places to start? Good books, tutorials or blogs? I've found a lot of internet material that simply presupposes a level of understanding I just don't have. Your advice is much appreciated. Help a middle-aged, stuck-in-the-mud developer get enthusiastic again! Please!

  • How can I test my DB speed? (Learning)

    - by acidzombie24
    I have designed a database. There are no columns with indexes, nor any code for optimizing. I am positive I should index certain columns, since I search them a lot. My question is: HOW do I test whether any part of my database will be slow? At the moment I am using SQLite, and I will be switching to either MS SQL or MySQL based on my host provider. Will creating 100,000 records in each table be enough, or will that always be fast in SQLite and I need 1 million? Do I need 10 million before a database becomes slow? Also, how do I time it? I am using C#, so should I use Stopwatch, or is there an ADO.NET/SQLite function I should use?

  • Sun's JVM instruction speed table

    - by Pindatjuh
    Is there a benchmark available of how much relative time each instruction costs in a single-threaded, average-case scenario (either with or without JIT compiler) for Sun's JVM (any version)? If there is no such benchmark already available, how can I get this information? E.g.:

        instruction   TIME
        iload_1          1
        iadd            12
        getfield        40
        ...

    where getfield is equivalent to 40 iload_1 instructions.

  • How to speed up my websites (backoffices)

    - by jmpena
    Hello, I'm developing some backoffices in ASP.NET 2.0. I have put all the images in cache and gzipped my CSS and JS files, everything to speed up the load of each page. The performance is good and I have no problems with the clients, but I want "MORE" speed and am looking for recommendations. It is important to mention that these websites are used only on intranets, so I'm thinking of implementing my next projects using an IFRAME for content; that way (I think) pages will load faster because they won't have to load the entire site. Any help / recommendations? Thanks in advance.

  • C++ Vector at/[] operator speed

    - by sub
    In order to give functions the option to modify the vector, I can't do

        curr = myvec.at( i );
        doThis( curr );
        doThat( curr );
        doStuffWith( curr );

    Instead I have to do:

        doThis( myvec.at( i ) );
        doThat( myvec.at( i ) );
        doStuffWith( myvec.at( i ) );

    (as the answers to my other question pointed out). I'm going to make a hell of a lot of calls to myvec.at() then. How fast is it, compared to the first example using a variable to store the result? Is there a different option for me? Can I somehow use pointers? When it gets serious there will be thousands of calls to myvec.at() per second, so every little performance-eater is important.
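
    There is a third option along the lines the poster asks about: std::vector::at() returns a reference, so the result can be bound to a reference variable, paying for the bounds-checked lookup once while still letting the functions modify the element in place. A minimal sketch, with Element and the three functions as hypothetical stand-ins:

        #include <cstddef>
        #include <vector>

        struct Element { int value; };                   // stand-in for the real type

        void doThis(Element& e)      { e.value += 1; }   // hypothetical helpers that
        void doThat(Element& e)      { e.value *= 2; }   // modify the element they
        void doStuffWith(Element& e) { e.value -= 3; }   // are given

        void process(std::vector<Element>& myvec, std::size_t i) {
            Element& curr = myvec.at(i);  // one checked lookup, no copy
            doThis(curr);                 // all three modify the element
            doThat(curr);                 // inside the vector itself
            doStuffWith(curr);
        }

    When i is already known to be in range, myvec[i] skips the bounds check that at() performs, which is where the small speed difference between the two operators comes from.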

  • Speed up :visible:input selector avoiding filter

    - by macca1
    I have a jQuery selector that is running way too slowly on my unfortunately large page:

        $("#section").find(":visible:input").filter(":first").focus();

    Is there a quicker way to select the first visible input without having to find ALL the visible inputs and then filter THAT selection for the first? I want something like :visible:input:first, but that doesn't seem to work.
