Search Results

Search found 12007 results on 481 pages for 'usb speed'.

Page 114/481 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • Do PHP Frameworks Speed Up the Development Process?

    - by Jacob
    I recently started working for a web firm as a freelancer, taking my hobby of coding in PHP to a career level, and since then I have been overwhelmed by the amount of work that needs to be done within short time frames. The problem isn't being able to do what is asked, but being able to do it all as quickly as is needed of me. I have never used any PHP frameworks, but if I started using one, would that speed up the entire development process? If so, how drastically? Also, which framework would be best for my purpose? If it matters, what I mostly do is build back-end CMSs and tie them to front-end functionality for small-business client sites.

    Read the article

  • PHP, ImageMagick, Google's Page Speed, & JPG File Size Optimization

    - by Sonny
    I have photo gallery code that does image re-sizing and thumbnail creation. I use ImageMagick to do this. I ran a gallery page through Google's Page Speed tool and it revealed that the re-sized images and thumbnails both have about an extra 10KB of data (JPEG files specifically). What can I add to my scripts to optimize the file size? Additional information: I am using the Imagick::FILTER_LANCZOS filter with a blur setting of 0.9 when calling the resizeImage() function. JPEGs have a quality setting of 80.
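
    A hedged guess at the cause, plus a sketch (the variable names below are placeholders, not from the post): the extra ~10KB that Page Speed flags is usually EXIF/ICC metadata that survives the resize, so stripping profiles and setting the output quality explicitly on the resized copy tends to close the gap.

        <?php
        // Sketch: resize as in the question, then strip metadata and set quality.
        $im = new Imagick($sourcePath);
        $im->resizeImage($targetWidth, $targetHeight, Imagick::FILTER_LANCZOS, 0.9);
        $im->setImageCompressionQuality(80);  // the quality already used in the question
        $im->stripImage();                    // drop EXIF/ICC/comment profiles (often ~10KB)
        $im->writeImage($thumbPath);
        $im->destroy();

    Running the output back through Page Speed should show whether the remaining bytes come from quality or from leftover metadata.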

    Read the article

  • Speed improvements for Perl's chameneos-redux in the Computer Language Benchmarks Game

    - by Robert P
    Ever looked at the Computer Language Benchmarks Game (formerly known as the Great Language Shootout)? Perl has some pretty healthy competition there at the moment. It also occurs to me that there's probably some places that Perl's scores could be improved. The biggest one is in the chameneos-redux script right now—the Perl version runs the worst out of any language: 1,626 times slower than the C baseline solution! There are some restrictions on how the programs can be made and optimized, and there is Perl's interpreted runtime penalty, but 1,626 times? There's got to be something that can get the runtime of this program way down. Taking a look at the source code and the challenge, how can the speed be improved?

    Read the article

  • Let system time determine animation speed, not program FPS

    - by Anders
    I'm writing a card game in ActionScript 3. Each card is represented by an instance of a class extending MovieClip, exported from Flash CS4, that contains the card graphics and a flip animation. When I want to flip a card I call gotoAndPlay on this movieclip. When the frame rate slows down, all animations take longer to finish. It seems Flash will by default animate movieclips in a way that makes sure all frames in the clip are drawn. Therefore, when the program frame rate goes below the frame rate of the clip, the animation plays at a slower pace. I would like an animation to always play at the same speed and, as a consequence, always be shown on the screen for the same amount of time. If the frame rate is too slow to show all frames, frames should be dropped. Is it possible to tell Flash to animate in this way? If not, what is the easiest way to program this behavior myself?
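
    One possible sketch of the time-based approach (my own, assuming a clip authored at 24 fps; not from the post): sample getTimer() every frame and jump the clip to whatever frame the elapsed time calls for, so slow rendering drops frames instead of stretching the flip.

        // Sketch (ActionScript 3): drive the flip from elapsed time, not frames played.
        import flash.display.MovieClip;
        import flash.events.Event;
        import flash.utils.getTimer;

        var flipStart:int;
        const CLIP_FPS:Number = 24;                     // assumed authored frame rate of the flip

        function startFlip(card:MovieClip):void {
            flipStart = getTimer();
            card.addEventListener(Event.ENTER_FRAME, stepFlip);
        }

        function stepFlip(e:Event):void {
            var card:MovieClip = MovieClip(e.currentTarget);
            var elapsed:Number = (getTimer() - flipStart) / 1000;   // seconds since flip began
            var frame:int = 1 + int(elapsed * CLIP_FPS);            // frame the clip should be on
            if (frame >= card.totalFrames) {
                card.gotoAndStop(card.totalFrames);
                card.removeEventListener(Event.ENTER_FRAME, stepFlip);
            } else {
                card.gotoAndStop(frame);                            // skips frames when rendering lags
            }
        }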

    Read the article

  • Is Code Completion speed improved in Delphi 2010?

    - by Holgerwa
    I am working with Delphi 2009 Pro and just tried to find out why code completion is so slow in my setup. Whenever code completion is invoked, the IDE locks up for up to 30s, which really interrupts any workflow. When working with BDS 2006, code completion was incredibly fast compared to Delphi 2009. After reading this post it seems to be normal for Delphi 2009, but just turning off automatic code completion is not something I want to do. My question is: if I switch to Delphi 2010, will I have the same slow code completion, or has it been improved to the point of being usable?

    Read the article

  • speed up a sql query to mysql?

    - by fayer
    In my MySQL database I've got the geonames database, containing all countries, states and cities. I am using this to create a cascading menu so the user can select where he is from: country - state - county - city. The main problem is that the query searches through all 7 million rows in that table each time I want to get the list of children rows, and that takes a while, 10-15 seconds. I wonder how I could speed this up: caching? Table views? Reorganizing the table structure somehow? And, most important, how do I do these things? Are there good tutorials you could link me to? I appreciate all help and feedback discussing smart ways of handling this issue!
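
    A sketch of the first thing worth trying (my suggestion; the column names below are assumptions about the geonames import, so adjust them to the real schema): index the columns the cascading query filters on, then confirm with EXPLAIN that fetching the children no longer scans the whole table.

        -- Assumed columns: parent_id links a row to its parent region,
        -- feature_code distinguishes country / state / county / city rows.
        ALTER TABLE geoname ADD INDEX idx_parent_feature (parent_id, feature_code);

        -- The per-level lookup should then be an index range read, not a 7M-row scan:
        EXPLAIN SELECT id, name
        FROM geoname
        WHERE parent_id = 123456
          AND feature_code = 'ADM1'
        ORDER BY name;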

    Read the article

  • Play a beep that loops and change the frequency/speed

    - by Bono
    Hi all, I am creating an iPhone application that uses audio. I want to play a beep sound that loops indefinitely. I found an easy way to do that using the upper-layer AVAudioPlayer with numberOfLoops set to -1, and it works fine. But now I want to play this audio and be able to change the rate/speed. It should work like the sound a car makes when approaching an obstacle: at the beginning the beep has a low frequency, and this frequency accelerates until it becomes a continuous biiiiiiiiiiiip... It seems this is not feasible using the high-level AVAudioPlayer, but even looking at AudioToolbox I found no solution. Does anybody have information about how to do that? Thanks a lot for helping me!
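
    A sketch of one workaround (my own, in Objective-C; beepPlayer is an assumed AVAudioPlayer property loaded with a very short beep file): instead of changing the player's rate, re-schedule a one-shot timer whose interval keeps shrinking, so the beeps blur into a near-continuous tone as the interval approaches the clip length.

        // Sketch: fire short beeps from a timer and shrink the gap between them.
        - (void)scheduleBeepWithInterval:(NSTimeInterval)interval {
            [NSTimer scheduledTimerWithTimeInterval:interval
                                             target:self
                                           selector:@selector(beepFired:)
                                           userInfo:[NSNumber numberWithDouble:interval]
                                            repeats:NO];
        }

        - (void)beepFired:(NSTimer *)timer {
            [self.beepPlayer play];                                  // assumed AVAudioPlayer property
            NSTimeInterval next = MAX(0.05, [[timer userInfo] doubleValue] * 0.9);
            [self scheduleBeepWithInterval:next];                    // accelerate toward a steady tone
        }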

    Read the article

  • Java paint speed relative to color model

    - by Jon
    I have a BufferedImage with an IndexColorModel. I need to paint that image onto the screen, but I've noticed that this is slow when using an IndexColorModel. However, if I run the BufferedImage through an identity affine transform, it creates an image with a DirectColorModel and the painting is significantly faster. Here's the code I'm using:

        AffineTransformOp identityOp = new AffineTransformOp(new AffineTransform(), AffineTransformOp.TYPE_BILINEAR);
        displayImage = identityOp.filter(displayImage, null);

    I have three questions: 1. Why is painting slower with an IndexColorModel? 2. Is there any way to speed up the painting of an IndexColorModel? 3. If the answer to 2 is no, is this the most efficient way to convert from an IndexColorModel to a DirectColorModel? I've noticed that this conversion depends on the size of the image, and I'd like to remove that dependency. Thanks for the help.
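
    A sketch of an alternative conversion (my suggestion, not from the post): indexed pixels have to be looked up and converted on every paint, so copying the image once into a screen-compatible image (usually a DirectColorModel) and painting that copy avoids both the per-paint conversion and the AffineTransformOp.

        // Sketch: one-time conversion to the screen's color model (drop into a helper class).
        import java.awt.Graphics2D;
        import java.awt.GraphicsConfiguration;
        import java.awt.GraphicsEnvironment;
        import java.awt.Transparency;
        import java.awt.image.BufferedImage;

        static BufferedImage toCompatibleImage(BufferedImage src) {
            GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                    .getDefaultScreenDevice()
                    .getDefaultConfiguration();
            if (src.getColorModel().equals(gc.getColorModel())) {
                return src;                                   // already cheap to paint
            }
            BufferedImage dst = gc.createCompatibleImage(src.getWidth(), src.getHeight(),
                                                         Transparency.OPAQUE);
            Graphics2D g = dst.createGraphics();
            g.drawImage(src, 0, 0, null);                     // color conversion happens once, here
            g.dispose();
            return dst;
        }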

    Read the article

  • making use of c++ to speed up php

    - by Ygam
    I saw this post on Sitepoint quoting a statement by Rasmus Lerdorf which goes (according to Sitepoint) as follows: "How can you make PHP fast? Well, you can’t" was his quick answer. PHP is simply not fast enough to scale to Yahoo levels. PHP was never meant for those sorts of tasks. "Any script based language is simply not fast enough". To get the speed that is necessary for truly massive web systems you have to use compiled C++ extensions to get true, scaleable architecture. That is what Yahoo does and so do many other PHP heavyweights. Intrigued by the statement (not to mention the fact that up to now, all I was doing in PHP was small database-based apps), I was wondering how I could "use compiled C++ extensions" with PHP. Any ideas or resources?
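
    For a concrete picture, here is a rough PHP 5-era sketch of what a "compiled extension" looks like (written from memory, so treat the boilerplate as approximate and generate the real skeleton with ext_skel): a C function is registered with the Zend engine and then called from PHP like any built-in.

        /* Sketch: a minimal Zend extension exposing fast_sum() to PHP (PHP 5 API). */
        #include "php.h"

        PHP_FUNCTION(fast_sum)
        {
            long a, b;
            if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "ll", &a, &b) == FAILURE) {
                RETURN_NULL();
            }
            RETURN_LONG(a + b);   /* the heavy lifting would live here, in C/C++ */
        }

        static const zend_function_entry fast_functions[] = {
            PHP_FE(fast_sum, NULL)
            PHP_FE_END
        };

        zend_module_entry fast_module_entry = {
            STANDARD_MODULE_HEADER,
            "fast", fast_functions,
            NULL, NULL, NULL, NULL, NULL,   /* MINIT/MSHUTDOWN/RINIT/RSHUTDOWN/MINFO */
            "0.1", STANDARD_MODULE_PROPERTIES
        };

        ZEND_GET_MODULE(fast)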

    Read the article

  • Can rails test speed be increased?

    - by Sam
    Hi all, I'm a recent convert to TDD, but as my codebase grows in size and complexity, I find myself waiting longer and longer for the framework to load every time I want to run a test. I am aware of RSpec's spec_server, but I'm using Test::Unit with shoulda. I tried Snailgun (http://github.com/candlerb/snailgun) but noticed very little increase in speed. I have also tried spork-testunit (http://github.com/timcharper/spork-testunit), but it's not fully compatible with my existing tests. The delay in running tests is a definite pain point and is putting me off TDD (at least with Rails). Is anyone aware of any other options? Thanks, Sam

    Read the article

  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon", which represents some tiny value that is much less than 1, but greater than 0. One way to think of it is as a very wide fixed point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts) and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following: +, -, +=, <, > and >= are the only heavily used operators. Use of setEpsilon() and getInt() is extremely rare. * is also rare, and does not even need to consider the epsilon values at all. Here is the class:

        #include <limits>

        struct int32Uepsilon {
          typedef int32Uepsilon Self;

          int32Uepsilon ()             { _value = 0; _eps = 0; }
          int32Uepsilon (const int &i) { _value = i; _eps = 0; }

          void setEpsilon() { _eps = 1; }

          Self operator+(const Self &rhs) const {
            Self result = *this;
            result._value += rhs._value;
            result._eps   += rhs._eps;
            return result;
          }

          Self operator-(const Self &rhs) const {
            Self result = *this;
            result._value -= rhs._value;
            result._eps   -= rhs._eps;
            return result;
          }

          Self operator-() const {
            Self result = *this;
            result._value = -result._value;
            result._eps   = -result._eps;
            return result;
          }

          Self operator*(const Self &rhs) const {
            return this->getInt() * rhs.getInt();   // XXX: discards epsilon
          }

          bool operator<(const Self &rhs) const {
            return (_value < rhs._value) || (_value == rhs._value && _eps < rhs._eps);
          }

          bool operator>(const Self &rhs) const {
            return (_value > rhs._value) || (_value == rhs._value && _eps > rhs._eps);
          }

          bool operator>=(const Self &rhs) const {
            return (_value >= rhs._value) || (_value == rhs._value && _eps >= rhs._eps);
          }

          Self &operator+=(const Self &rhs) {
            this->_value += rhs._value;
            this->_eps   += rhs._eps;
            return *this;
          }

          Self &operator-=(const Self &rhs) {
            this->_value -= rhs._value;
            this->_eps   -= rhs._eps;
            return *this;
          }

          int getInt() const { return (_value); }

        private:
          int _value;
          int _eps;
        };

        namespace std {
          template<>
          struct numeric_limits<int32Uepsilon> {
            static const bool is_signed = true;
            static int max() { return 2147483647; }
          };
        }

    The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful:

    - 32 bits are definitely insufficient to hold both _value and _eps. In practice, up to 24 ~ 28 bits of _value are used and up to 20 bits of _eps are used.
    - I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here.
    - Saturating addition/subtraction on _eps would be cool, but isn't really necessary.
    - Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up.
    - Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!
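
    A sketch of one possible direction (my own idea, not the author's eventual fix), leaning on the ranges listed above: because the comparison is lexicographic and _eps only ever needs about 20 bits, the pair can be packed into a single signed 64-bit integer (value in the high 32 bits, epsilon in the low 32). +, -, += and every comparison then collapse into one 64-bit machine operation each.

        #include <cstdint>

        // Sketch: represent the pair as one int64_t equal to value * 2^32 + eps.
        // The packed ordering matches the lexicographic compare above as long as
        // |eps| stays far below 2^31, and +/- carry between the fields correctly.
        struct int32UepsilonPacked {
          typedef int32UepsilonPacked Self;

          int32UepsilonPacked()      : _v(0) {}
          int32UepsilonPacked(int i) : _v(static_cast<int64_t>(i) << 32) {}

          // Assumes the epsilon field is currently non-negative (true right after
          // construction, which is the rare case setEpsilon() is used for).
          void setEpsilon() { _v = (_v & ~int64_t(0xFFFFFFFFu)) | 1; }

          Self  operator+(const Self &rhs) const { Self r; r._v = _v + rhs._v; return r; }
          Self  operator-(const Self &rhs) const { Self r; r._v = _v - rhs._v; return r; }
          Self &operator+=(const Self &rhs)      { _v += rhs._v; return *this; }
          Self &operator-=(const Self &rhs)      { _v -= rhs._v; return *this; }

          bool operator< (const Self &rhs) const { return _v <  rhs._v; }
          bool operator> (const Self &rhs) const { return _v >  rhs._v; }
          bool operator>=(const Self &rhs) const { return _v >= rhs._v; }

          // For negative epsilon this returns value - 1 (the floor), unlike the
          // original getInt(); tolerable only because getInt() is extremely rare.
          int getInt() const { return static_cast<int>(_v >> 32); }

        private:
          int64_t _v;
        };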

    Read the article

  • Massive speed diff in upgrade to Java 7

    - by Brett Rigby
    We use Java within our build process, as it is used to resolve/publish our dependencies via Ivy. It gave us no problems for 2 years, until we tried to upgrade from Java 6 Update 26 to Java 7 Update 7; now a build on a local developer PC (Windows XP) takes 2 hours to complete instead of 10 minutes!! Nothing else has changed on the PC, which makes the Java upgrade the prime suspect. Does anyone know of any reason why version 7 of Java would make such a speed difference?

    Read the article

  • c++ and c# speed compared

    - by Mack
    I was worried about C#'s speed when it deals with heavy calculations, when you need to use raw CPU power. I always thought that C++ is much faster than C# when it comes to calculations. So I did some quick tests. The first test computes prime numbers less than an integer n, the second test computes some pandigital numbers. The idea for the second test comes from here: Pandigital Numbers

    C# prime computation:

        using System;
        using System.Diagnostics;

        class Program
        {
            static int primes(int n)
            {
                uint i, j;
                int countprimes = 0;
                for (i = 1; i <= n; i++)
                {
                    bool isprime = true;
                    for (j = 2; j <= Math.Sqrt(i); j++)
                        if ((i % j) == 0) { isprime = false; break; }
                    if (isprime) countprimes++;
                }
                return countprimes;
            }

            static void Main(string[] args)
            {
                int n = int.Parse(Console.ReadLine());
                Stopwatch sw = new Stopwatch();
                sw.Start();
                int res = primes(n);
                sw.Stop();
                Console.WriteLine("I found {0} prime numbers between 0 and {1} in {2} msecs.",
                                  res, n, sw.ElapsedMilliseconds);
                Console.ReadKey();
            }
        }

    C++ variant:

        #include <iostream>
        #include <ctime>
        using namespace std;

        int primes(unsigned long n)
        {
            unsigned long i, j;
            int countprimes = 0;
            for (i = 1; i <= n; i++)
            {
                int isprime = 1;
                for (j = 2; j < (i^(1/2)); j++)
                    if (!(i%j)) { isprime = 0; break; }
                countprimes += isprime;
            }
            return countprimes;
        }

        int main()
        {
            int n, res;
            cin >> n;
            unsigned int start = clock();
            res = primes(n);
            int tprime = clock() - start;
            cout << "\nI found " << res << " prime numbers between 1 and " << n
                 << " in " << tprime << " msecs.";
            return 0;
        }

    When I ran the test trying to find primes less than 100,000, the C# variant finished in 0.409 seconds and the C++ variant in 5.553 seconds. When I ran them for 1,000,000, C# finished in 6.039 seconds and C++ in about 337 seconds.

    Pandigital test in C#:

        using System;
        using System.Diagnostics;

        class Program
        {
            static bool IsPandigital(int n)
            {
                int digits = 0;
                int count = 0;
                int tmp;
                for (; n > 0; n /= 10, ++count)
                {
                    if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1)))
                        return false;
                }
                return digits == (1 << count) - 1;
            }

            static void Main()
            {
                int pans = 0;
                Stopwatch sw = new Stopwatch();
                sw.Start();
                for (int i = 1; i <= 123456789; i++)
                {
                    if (IsPandigital(i)) { pans++; }
                }
                sw.Stop();
                Console.WriteLine("{0}pcs, {1}ms", pans, sw.ElapsedMilliseconds);
                Console.ReadKey();
            }
        }

    Pandigital test in C++:

        #include <iostream>
        #include <ctime>
        using namespace std;

        int IsPandigital(int n)
        {
            int digits = 0;
            int count = 0;
            int tmp;
            for (; n > 0; n /= 10, ++count)
            {
                if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1)))
                    return 0;
            }
            return digits == (1 << count) - 1;
        }

        int main()
        {
            int pans = 0;
            unsigned int start = clock();
            for (int i = 1; i <= 123456789; i++)
            {
                if (IsPandigital(i)) { pans++; }
            }
            int ptime = clock() - start;
            cout << "\nPans:" << pans << " time:" << ptime;
            return 0;
        }

    The C# variant runs in 29.906 seconds and the C++ one in about 36.298 seconds. I didn't touch any compiler switches, and both the C# and C++ programs were compiled with debug options. Before I attempted to run the test I was worried that C# would lag well behind C++, but now it seems that there is a pretty big speed difference in C#'s favor. Can anybody explain this? C# is JIT-compiled and C++ is compiled to native code, so I expected the C++ variant to be faster than the C# one. Thanks for the answers!
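
    A note from reading the posted code (my observation, not part of the original question): in the C++ prime test, `^` is bitwise XOR and `1/2` is integer division yielding 0, so the inner loop condition `j < (i^(1/2))` is effectively `j < i`. The C++ version therefore trial-divides almost all the way to i, while the C# version stops at Math.Sqrt(i). A like-for-like inner loop would look like this:

        #include <cmath>
        // ...
        for (j = 2; j <= static_cast<unsigned long>(std::sqrt(static_cast<double>(i))); j++)
            if (!(i % j)) { isprime = 0; break; }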

    Read the article

  • java: speed up reading foreign characters

    - by Yang
    My current code needs to read foreign characters from the web. My current solution works, but it is very slow, since it reads char by char using an InputStreamReader. Is there any way to speed it up and still get the job done?

        // Pull content stream from response
        HttpEntity entity = response.getEntity();
        InputStream inputStream = entity.getContent();
        StringBuilder contents = new StringBuilder();
        int ch;
        InputStreamReader isr = new InputStreamReader(inputStream, "gb2312");
        // FileInputStream file = new InputStream(is);
        while ((ch = isr.read()) != -1)
            contents.append((char) ch);
        String encode = isr.getEncoding();
        return contents.toString();
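
    A sketch of the usual fix (my suggestion, mirroring the fragment above): give the reader a buffer and read blocks into a char[], so the gb2312 decoder handles thousands of characters per call instead of one.

        // Sketch: block reads instead of per-character reads (java.io imports assumed).
        HttpEntity entity = response.getEntity();
        InputStream inputStream = entity.getContent();
        StringBuilder contents = new StringBuilder();

        Reader reader = new BufferedReader(new InputStreamReader(inputStream, "gb2312"), 8192);
        char[] buf = new char[8192];
        int n;
        while ((n = reader.read(buf)) != -1) {
            contents.append(buf, 0, n);     // append only the chunk actually read
        }
        reader.close();
        return contents.toString();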

    Read the article

  • Generator speed in python 3

    - by Will
    Hello all, I am going through a link about generators that someone posted. In the beginning he compares the two functions below. On his setup he showed a speed increase of 5% with the generator. I'm running Windows XP, Python 3.1.1, and cannot seem to duplicate the results. I keep seeing the "old way" (logs1) as being slightly faster when tested with the provided logs and up to 1GB of duplicated data. Can someone help me understand what's happening differently? Thanks!

        def logs1():
            wwwlog = open("big-access-log")
            total = 0
            for line in wwwlog:
                bytestr = line.rsplit(None, 1)[1]
                if bytestr != '-':
                    total += int(bytestr)
            return total

        def logs2():
            wwwlog = open("big-access-log")
            bytecolumn = (line.rsplit(None, 1)[1] for line in wwwlog)
            getbytes = (int(x) for x in bytecolumn if x != '-')
            return sum(getbytes)

    Read the article

  • Unable to get Processor Speed in Device

    - by mukesh
    Hi, I am using QueryPerformanceFrequency to get the number of cycles, i.e. the processor speed, but it is showing me the wrong value. The specification says the processor is about 400 MHz, but what we are getting through code is about 16 MHz. Please provide any pointers. The code for the Windows CE device is:

        LARGE_INTEGER FrequencyCounter;
        QueryPerformanceFrequency(&FrequencyCounter);
        CString temp;
        temp.Format(L"%lld", FrequencyCounter.QuadPart);
        AfxMessageBox(temp);

    Thanks, Mukesh

    Read the article

  • How to speed up this kind of for-loop?

    - by wok
    I would like to compute the maximum of translated images along the direction of a given axis. I know about ordfilt2; however, I would like to avoid using the Image Processing Toolbox. So here is the code I have so far:

        imInput = imread('tire.tif');
        n = 10;
        imMax = imInput(:, n:end);
        for i = 1:(n-1)
            imMax = max(imMax, imInput(:, i:end-(n-i)));
        end

    Is it possible to avoid using a for-loop in order to speed the computation up, and, if so, how?

    Read the article

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem with processing a largeish file in Python. All I'm doing is:

        import gzip

        f = gzip.open(pathToLog, 'r')
        counter = 0
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close()

    This takes around 10m25s just to open the file, read the lines and increment the counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_;
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.
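
    A sketch of one way to narrow the gap (my suggestion, not the poster's eventual solution): do what the Perl script does and let an external zcat handle decompression, iterating over the pipe instead of gzip.open, whose line-by-line reads were known to be slow in Python versions of that era.

        import subprocess

        # Sketch: pipe /bin/zcat into Python (matching the Perl version) and count
        # lines from the buffered pipe instead of using gzip.open().
        def count_lines(path_to_log):
            proc = subprocess.Popen(['zcat', path_to_log],
                                    stdout=subprocess.PIPE,
                                    bufsize=1024 * 1024)        # 1 MB pipe buffer
            counter = 0
            for line in proc.stdout:
                counter += 1
                if counter % 1000000 == 0:
                    print counter                               # Python 2 print, as in the original
            proc.stdout.close()
            proc.wait()
            return counter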

    Read the article

  • VB.Net HTTPWebRequest speed is slow compared to Python urlopen

    - by regexhacks
    Hi, I am coding a web crawler which will crawl websites and selectively parse different sections of a site. I am a .Net developer, so the obvious choice was to write it in .Net, but the speed was very slow, for both downloading and parsing the HTML pages. I then tried to just download the contents first using .Net, and then the same domains using Python, and Python was very impressive at downloading data. I have achieved the downloading part using Python, but the later parsing part is not as easy to code in Python, which obviously I don't want to do. The same batch of domains that took 100 seconds in Python took 20 minutes in the .Net-based crawler. I tried downloading http://www.eqlit.com/ and it took 8 seconds in Python, while the same download took 100 seconds in the .Net crawler. Does anyone have any idea why this is slow in .Net but fast in Python?

    Read the article

  • ASP.NET Speed up DataView sorting/paging

    - by rlb.usa
    I have a page in ASP.NET where I'm using a Repeater to display a record listing, but it's slow as molasses, and I've been tasked with speeding it up (sorting, paging). I've got it set up as follows:

    - When the user enters the page, grab all of the data from the database (500 records, up to 4 related records each).
    - Store it all in Application["MyDataView"].
    - On sort or paging, simply use the data view's internal sort/page methods (no db calls) and rebind.

    I understand that databases can take time to query, but simply having the DataView call its sort method (no db calls) takes 10-ish seconds, which is alarmingly slow. Two questions: Why is it taking so long? How can I speed it up? A GridView is not possible.

    Read the article

  • sql query is too slow, how to improve speed

    - by user1289282
    I have run into a bottleneck when trying to update one of my tables. The player table has, among other things, id, skill, school, and weight. What I am trying to do is:

    - SELECT id, skill FROM player WHERE player.school = (current school of 4500) AND player.weight = (current weight of 14)
    - find the highest skill of all players returned from the query
    - UPDATE player SET starter = 'TRUE' WHERE id = (highest skill)
    - move to the next weight and repeat
    - when all weights have been completed, move to the next school and start over
    - when all schools are completed, done

    I have this code implemented and it works, but I have approximately 4500 schools totaling 172,000 players, and the way I have it now it would probably take half an hour or more to complete (I did not wait it out), which is way too slow. How can I speed this up? Short of reducing the scale of the system, I am willing to do anything that gets the intended result. Thanks! *The weights are the standard folkstyle wrestling weights, i.e. 103, 113, 120, 126, 132, 138, 145, 152, 160, 170, 182, 195, 220, and 285 pounds.
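
    A sketch of how the whole school/weight loop can collapse into one statement (my own suggestion, assuming MySQL and the columns named above; note that a tie on skill would mark more than one starter per school/weight):

        -- Sketch: mark the highest-skill player in every (school, weight) group in one pass.
        UPDATE player AS p
        JOIN (
            SELECT school, weight, MAX(skill) AS max_skill
            FROM player
            GROUP BY school, weight
        ) AS best
          ON  best.school = p.school
          AND best.weight = p.weight
          AND best.max_skill = p.skill
        SET p.starter = 'TRUE';

        -- An index on (school, weight, skill) lets the GROUP BY and the join avoid full scans:
        -- ALTER TABLE player ADD INDEX idx_school_weight_skill (school, weight, skill);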

    Read the article

  • Will using FILE_FLAG_NO_BUFFERING give a noticeable speed gain?

    - by 9dan
    I recently noticed the detailed description of the FILE_FLAG_NO_BUFFERING flag in MSDN, and read several Google search results about unbuffered I/O in Windows. http://msdn.microsoft.com/en-us/library/aa363858(v=vs.85).aspx I am wondering now: is it really important to consider the unbuffered option in file I/O programming? Because many programs use plain old C stream I/O or C++ iostreams, I never gave any attention to the FILE_FLAG_NO_BUFFERING flag before. Let's say we are developing a photo explorer program like Picasa. If we implement unbuffered I/O, would thumbnail display speed show a noticeable difference for ordinary users?
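
    For a concrete picture, a minimal sketch of what the flag demands (my illustration, not from the MSDN page): transfers must be whole sectors into sector-aligned buffers, and the OS file cache is bypassed entirely, which is a large part of why it rarely helps an application like a thumbnail browser that re-reads the same files.

        // Sketch: unbuffered sequential read on Win32. FILE_FLAG_NO_BUFFERING requires
        // the buffer address, transfer size and file offset to be sector-aligned.
        #include <windows.h>
        #include <malloc.h>     // _aligned_malloc / _aligned_free

        bool ReadUnbuffered(const wchar_t* path)
        {
            HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                   OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
            if (h == INVALID_HANDLE_VALUE) return false;

            const DWORD chunk = 64 * 1024;                 // a multiple of the sector size
            void* buf = _aligned_malloc(chunk, 4096);      // 4096 covers common sector sizes
            DWORD got = 0;
            while (ReadFile(h, buf, chunk, &got, NULL) && got > 0) {
                // consume `got` bytes here ...
            }
            _aligned_free(buf);
            CloseHandle(h);
            return true;
        }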

    Read the article
