Search Results

Search found 42685 results on 1708 pages for 'page speed'.


  • Applying transformations to NSBitmapImageRep

    - by Adam
    So ... I have an image loaded into an NSBitmapImageRep object, so I am able to examine the contents of specific pixels via a two-dimensional array. Now I want to apply a couple of "transformations" to the image, in preparation for some additional processing. If I were manipulating the image manually in Photoshop, I would (not necessarily in this order):
      - Rotate the image
      - Crop a portion of it and discard the rest
      - Apply a "threshold" transformation (which essentially converts the image to black and white, based on the threshold value I provide)
      - Resample the image to shrink it down a bit (which, although it loses some image quality, will speed up the subsequent processing)
    Are there Objective-C methods available to facilitate these specific image manipulations with the data in the NSBitmapImageRep object? If so, can someone point me to some good examples?

    Read the article

  • How would I go about implementing a stopwatch with different speeds?

    - by James
    Ideally I would like to have something similar to the Stopwatch class but with an extra property called Speed which would determine how quickly the timer advances. I am not quite sure how I would go about implementing this. Edit: since people don't quite seem to understand why I want to do this, consider a soccer game, or any sports game. The halves are measured in minutes, but the time frame in which the game is played is significantly shorter, e.g. a 45-minute half is played in about 2.5 minutes.
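
    A minimal sketch of one way this could work (the ScaledStopwatch class and its Speed property are made-up names, not part of the question): wrap the built-in Stopwatch and scale its elapsed time by a speed factor.

      using System;
      using System.Diagnostics;

      // A stopwatch whose reported elapsed time runs faster (or slower) than real time.
      public class ScaledStopwatch
      {
          private readonly Stopwatch _inner = new Stopwatch();

          // 1.0 = real time; 18.0 makes a 2.5-minute real interval read as a 45-minute half.
          public double Speed { get; set; } = 1.0;

          public void Start() => _inner.Start();
          public void Stop() => _inner.Stop();
          public void Reset() => _inner.Reset();

          // Scale the real elapsed time by the Speed factor.
          public TimeSpan Elapsed =>
              TimeSpan.FromTicks((long)(_inner.Elapsed.Ticks * Speed));
      }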

    Read the article

  • Memory leak in chrome.extension.sendRequest()

    - by jprim
    Chrome Version: 9.0.597.19 (Build 68937) beta & current stable. I have simplified my code as far as possible and ended up with the attached extension.
    content.js (content script run on every site):
      setInterval(function() {
        chrome.extension.sendRequest({ }, function(response) {
          // Do nothing
        });
      }, 1);
    background.js (background page script):
      chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
        sendResponse({ });
      });
    When you install this extension, you can observe it eating up memory extremely fast (I got 90 MB in 1 minute with 9 tabs open). You can speed up the process by opening more tabs. Of course, the extension I am actually developing does not send requests every millisecond, but only every 3 seconds; that just slows the leak down. A user who has run it in the background for a long time with many tabs open has reported 100 MB of memory usage, and I can reproduce it to a less extreme extent, too.

    Read the article

  • What kind of work benefits from OpenCL?

    - by Daniel
    Hey all. First of all: I am well aware that OpenCL does not magically make everything faster, and I am well aware that OpenCL has limitations. So now to my question: I am used to doing various scientific calculations using programming. Some of the things I work with are pretty intense in regards to the complexity and number of calculations. So I was wondering whether maybe I could speed things up by using OpenCL. What I would love to hear from you all is answers to some of the following (bonus for links):
      - What kind of calculations/algorithms/general problems are suitable for OpenCL?
      - What are the general guidelines for determining whether some particular code would benefit from migration to OpenCL?
    Regards

    Read the article

  • Using a single PHP script for an entire site

    - by briggins5
    I had an idea today (that millions of others have probably already had) of putting all the site's scripts into a single file, instead of having multiple, separate ones. When submitting a form, there would also be a hidden field called something like 'action' which would indicate which function in the file should handle it. I know that things like CodeIgniter and CakePHP exist which help separate/organise the code. Is this a good or bad idea in terms of security, speed and maintenance? Do things like this already exist that I am not aware of?

    Read the article

  • How can I load an MP3 or similar music file for display and analysis in wxWidgets?

    - by Jon Cage
    I'm developing a GUI in wxPython which allows a user to generate sequences of colours for some toys I'm building. Part of the program needs to load an MP3 (and potentially other formats further down the line) and display it to the user. That should be sufficient to get started, but later I'd like to add features like identifying beats and some crude frequency analysis. Is there any simple way of loading/understanding an MP3's contents to display a plot of its amplitudes on the screen using wxWidgets? I later intend to port to C++/wxWidgets for speed and to avoid having to distribute wxPython.

    Read the article

  • How to merge branches in Git by "hunk"

    - by user1316464
    Here's the scenario. I made a "dev" branch off the "master" branch and made a few new commits. Some of those changes are only relevant to my local development machine. For example, I changed a URL variable to point to a local Apache server instead of the real URL that's posted online (I did this for speed during the testing phase). Now I'd like to incorporate my changes from the dev branch into the master branch, but NOT those changes which only make sense in my local environment. I'd envisioned something like a merge --patch which would allow me to choose the changes I want to merge line by line. Alternatively, maybe I could check out the "master" branch but keep the files in my working directory as they were in the "dev" branch, then do a git add --patch. Would that work?

    Read the article

  • Develop iPhone application remotely?

    - by ANE
    Four Java developers are new to iPod Touch/iPhone app development. They have an idea for an app. They have never used Xcode or Macs before. Instead of spending money on a new iMac or Mac Mini for each of them, my boss would like to sell them a $999 Apple server, hosted at a facility connected to a single T1 line, and have all 4 people work remotely in Xcode. Is this feasible? Is anyone doing anything like this? Specifically, is one T1 enough for realistic remote app development? Would they have to work in black & white via LogMeIn or GoToMeeting to get decent speed? Can four people work remotely together on an Xcode project at the same time? Do they absolutely need their own Macs to physically connect their iPod Touches or iPhones to, or can they connect to their existing PCs with iTunes and install their in-development apps that way?

    Read the article

  • Getting Factors of a Number

    - by Dave
    Hi. Problem: I'm trying to refactor this algorithm to make it faster. What would be the first refactoring here for speed?
      public int GetHowManyFactors(int numberToCheck)
      {
          // we know 1 and numberToCheck itself are factors
          int factorCount = 2;
          // start from 2, as we know 1 is a factor, and stop below numberToCheck, as it is a factor too
          for (int i = 2; i < numberToCheck; i++)
          {
              if (numberToCheck % i == 0)
                  factorCount++;
          }
          return factorCount;
      }
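
    A common first refactoring for speed (sketched below; not necessarily the one the author settled on) is to test divisors only up to the square root of the number, counting each divisor together with its cofactor:

      public static int GetHowManyFactors(int numberToCheck)
      {
          int factorCount = 0;
          for (int i = 1; i * i <= numberToCheck; i++)
          {
              if (numberToCheck % i == 0)
              {
                  // i and numberToCheck / i are both factors...
                  factorCount += 2;
                  // ...unless they are the same number (perfect square), which was just counted twice.
                  if (i * i == numberToCheck)
                      factorCount--;
              }
          }
          return factorCount;
      }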

    Read the article

  • C#: Spawn multiple threads for work, then wait until all finished

    - by pharoc
    I just want some advice on "best practice" regarding multi-threading tasks. As an example, we have a C# application that upon startup reads data from various "type" tables in our database and stores the information in a collection which we pass around the application. This prevents us from hitting the database each time this information is required. At the moment the application reads data from 10 tables synchronously. I would really like to have the application read from each table in a different thread, all running in parallel, and wait for all the threads to complete before continuing with the startup of the application. I have looked into BackgroundWorker but just want some advice on accomplishing the above:
      - Does the method sound logical in order to speed up the startup time of our application?
      - How can we best handle all the threads, keeping in mind that each thread's work is independent of the others? We just need to wait for all the threads to complete before continuing.
    I look forward to some answers.
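
    A minimal sketch of one way this might look with the Task Parallel Library, where Task.WaitAll blocks until every lookup has finished (LoadTypeTable and the dictionary cache here are hypothetical placeholders for the real per-table query and collection):

      using System.Collections.Concurrent;
      using System.Linq;
      using System.Threading.Tasks;

      public static class StartupCache
      {
          // Loads each "type" table on its own task and waits for all of them before returning.
          public static ConcurrentDictionary<string, object> LoadAll(string[] tableNames)
          {
              var cache = new ConcurrentDictionary<string, object>();

              Task[] loaders = tableNames
                  .Select(name => Task.Run(() => cache[name] = LoadTypeTable(name)))
                  .ToArray();

              // Block startup until every table has been read.
              Task.WaitAll(loaders);
              return cache;
          }

          // Hypothetical placeholder: run the actual query for one table here.
          private static object LoadTypeTable(string tableName) => new object();
      }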

    Read the article

  • TCP: 30 small packets per second pollutes the connection with the server

    - by Denis Ermolin
    I'm testing the connection between a Flash client and a cloud server (boost::asio for the software) over TCP. My connection with the server is already really poor - 120 ms ping on average. I found that when I start to send 2-byte packets (without the TCP header) at a rate of 30 packets/s, the ping grows to 170-200 ms on average. I think that's really bad, and that my bad connection and bad cloud provider are the reason for this high ping without any load. What do you think? (I tested my software and it can handle about 50k packets/s, so the software is not the problem.)

    Read the article

  • Hiding an error message div with setTimeout not working using Smarty

    - by Donald
    Hi guys, I'm trying to hide an error message div using the JavaScript setTimeout function after a specified time, but it gives me errors that it's wrong Smarty syntax. I've never used Smarty before, so I would really appreciate it if anyone could help me get up to speed with this syntax. My code is as follows:
      {if $error_message != ""}
      <script type="text/javascript">
        setTimeout(function(){ $('error').hide(); }, 1000);
      </script>
      <div id="error" class='error_message'>
        {$error_message}
      </div>
      {/if}
    Thanks in advance

    Read the article

  • Single logical SQL Server possible from multiple physical servers?

    - by TuffyIsHere
    Hi, with Microsoft SQL Server 2005, is it possible to combine the processing power of multiple physical servers into a single logical SQL server? Is it possible on SQL Server 2008? I'm thinking that if the database files were located on a SAN and somehow one of the SQL servers acted as a kind of master, then processing could be spread out over multiple physical servers, for instance even allowing simultaneous updates where there was no overlap, and in the case of read-only queries on unlocked tables no limit at all. We have an application that is limited by the speed of our SQL server, and we are probably stuck with SQL Server 2005 for now. Is the only option to get a single, more powerful physical server? Sorry, I'm not an expert; I'm not sure if the question is a stupid one. TIA

    Read the article

  • Rails create_table query

    - by Steve
    Hi, I am a beginner in Rails. In the following code, there is an id which is set to false. What's the meaning of it?
      class CreateCoursesStudents < ActiveRecord::Migration
        def self.up
          create_table :courses_students, :id => false do |t|
            t.integer :course_id, :null => false
            t.integer :student_id, :null => false
          end
          # Add index to speed up looking up the connection, and ensure
          # we only enrol a student into each course once
          add_index :courses_students, [:course_id, :student_id], :unique => true
        end

        def self.down
          remove_index :courses_students, :column => [:course_id, :student_id]
          drop_table :courses_students
        end
      end
    Thanks

    Read the article

  • Store data in Ruby on Rails without Database

    - by snowmaninthesun
    I have a few data values that I need to store in my Rails app and wanted to know if there are any alternatives to creating a database table just for this simple task. Background: I'm writing some analytics and dashboard tools for my Ruby on Rails app and I'm hoping to speed up the dashboard by caching results that will never change. Right now I pull all users for the last 30 days and rearrange them so I can see the number of new users per day. It works great but takes quite a long time; in reality I should only need to calculate the most recent day and just store the rest of the array somewhere else. Where is the best place to store this array? Creating a database table seems a bit overkill, and I'm not sure that global variables are the correct answer. Is there a best practice for persisting data like this? If anyone has done anything like this before, let me know what you did and how it turned out.

    Read the article

  • In game programming, are global variables bad?

    - by Joe.F
    I know my gut reaction to global variables is "bad!", but in the two game development courses I've taken at my college, globals were used extensively, and now in the DirectX 9 game programming tutorial I am using (www.directxtutorial.com) I'm being told globals are okay in game programming...? The site also recommends using only structs, if you can, when doing game programming, to help keep things simple. I'm really confused on this issue, and all the research I've been trying to do is very confusing. I realize there are issues when using global variables (threading issues, they make code harder to maintain, their state is hard to track, etc.), but there is also a cost associated with not using globals: I'd have to pass a lot of information around very often, which can be confusing and I imagine time-costing, although I guess pointers would speed the process up (this is my first time writing a game in C++). Anyway, I realize there is probably no "right" or "wrong" answer here since both ways work, but I want my code to be as proper as I can, so any input would be good. Thank you very much!

    Read the article

  • Getting a MediaFire direct link without using the .NET WebBrowser control?

    - by NVA
    I'm looking for a way to get the direct link from MediaFire. By default, when a user visits a download link, he will be presented with a download page where he has to wait for the download to be processed, and then a link will appear. I googled and found a VB.NET 2008 solution to this using the WebBrowser control: http://www.vbforums.com/showthread.php?t=556681 It works pretty well, but I'm tired of the pop-up windows and the loading speed. So, I wonder if there is another solution to this problem? (a non-WebBrowser solution ^^) Any help is greatly appreciated.

    Read the article

  • Overhead of serving pages - JSPs vs. PHP vs. ASPXs vs. C

    - by John Shedletsky
    I am interested in writing my own internet ad server. I want to serve billions of impressions with as little hardware as possible. Which server-side technologies are best suited for this task? I am asking about the relative overhead of serving my ad pages as either pages rendered by PHP, or Java, or .NET, or coding HTTP responses directly in C and writing some multi-socket IO monster to serve requests (I assume this one wins, but if my assumption is wrong, that would actually be most interesting). Obviously all the most efficient optimizations are done at the algorithm level, but I figure there have got to be some speed differences at the end of the day that make one method of serving ads better than another. How much overhead does something like Apache or IIS introduce? There's got to be a ton of extra junk in there I don't need. At some point I guess this is more a question of which platform/language combo is best suited - please excuse the maladroitly posed question; hopefully you understand what I am trying to get at.

    Read the article

  • Re-sorting a std::vector vs. std::insert

    - by Abruzzo Forte e Gentile
    I have a sorted std::vector of relatively small size (from 5 to 20 elements). I used std::vector since the data is contiguous, so I get speed from the cache. At a specific point I need to remove an element from this vector, and I now have a doubt: which is the fastest way to remove this value between the 2 options below?
      1. Setting that element to 0 and calling sort to reorder: this has the cost of the sort, but the elements stay on the same cache line.
      2. Calling erase, which will copy (or memcpy, who knows?) all the elements after it back by one place (I need to investigate the behind-the-scenes work of erase).
    Do you know which one is faster? I think the same approach could be considered for inserting a new element without hitting the max capacity of the vector. Regards, AFG

    Read the article

  • How to identify deadlock conditions in a third-party application?

    - by Imhotep is Invisible
    I am using a third-party application to handle batch CD audio extraction via multiple FireWire-attached devices, but the application frequently (though non-deterministically) hangs during the extraction. I suspect that the multithreaded application is deadlocking over some shared resource. The developer, however, suspects the problem lies elsewhere but is not addressing it at this time. I would like to be able to do some legwork on my end to a) prove the condition exists and b) ideally point him in the right direction. The problems: while I used to be a programmer, it's been a while and I need to shake off the dust (the last work I did was back in '99 and it was under Solaris, while this application runs under XP). Rather than there being a dearth of information online, there's almost too much to digest. Are there any suggested guides or tutorials that might help me get back up to speed enough to identify and/or diagnose the deadlock, or are there tools or approaches that I should study up on to aid me in my task? Many thanks for all suggestions!

    Read the article

  • Setting parameters after obtaining their values in stored procedures

    - by user1260028
    Right now I have an upload field which uploads files to the server. The prefix is saved so that it can later be obtained for retrieval. For this I need to attach the ID of the form to the prefix. I would like to be able to do this as such: @filePrefix = SCOPE_IDENTITY() + @filePrefix; However, I am not so sure this would work because the record has not been created yet. If anything, I could call an update function which obtains the ID and then injects it into the row after it has been created. To speed things up, I don't want to do this on the server but rather in the database. Regardless of what the approach is, I would still like to know if something like the above is possible (at least for future reference). So if we replace that with @filePrefix = 5 + @filePrefix; would that be possible? SQL doesn't seem to like the current syntax very much...

    Read the article

  • Is it a problem if I query SQL Server 2005 and 2000 again and again?

    - by learner
    The Windows app I am constructing is for very low-end machines (Celeron with at most 128 MB RAM). Of the following two approaches, which one is best? (I don't want the application to become a memory hog on low-end machines.)
      Approach one: query the database with Select GUID from Table1 where DateTime <= @givendate, which returns more than 300 thousand records (but only one field, i.e. GUID - 300 thousand GUIDs). Then run a loop over them to carry out the next step of this software based on each GUID.
      Approach two: query the database with Select Top 1 GUID from Table1 where DateTime <= @givendate again and again until all 300 thousand records are done. It returns only one GUID at a time, and I can do my next step of the operation.
    Which approach do you suggest will use fewer memory resources? (Speed/performance is not the issue here.)
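
    For what it's worth, a minimal sketch of how approach one might look in ADO.NET (the connection string and the processing step are placeholders, and the GUID column is assumed to be a uniqueidentifier); note that SqlDataReader streams rows, so the GUIDs do not all have to be held in memory at once if each one is processed inside the read loop:

      using System;
      using System.Data.SqlClient;

      public static class GuidProcessor
      {
          // Approach one: a single query, processing each GUID as it is read.
          public static void ProcessSince(DateTime givenDate, string connectionString)
          {
              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(
                  "SELECT GUID FROM Table1 WHERE DateTime <= @givendate", connection))
              {
                  command.Parameters.AddWithValue("@givendate", givenDate);
                  connection.Open();

                  using (SqlDataReader reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          Guid id = reader.GetGuid(0);
                          // Hypothetical placeholder for the "next process" the question mentions.
                          ProcessOne(id);
                      }
                  }
              }
          }

          private static void ProcessOne(Guid id) { /* do the next step here */ }
      }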

    Read the article

  • Change Interval Of CADisplayLink

    - by user3679109
    I have replaced an NSTimer with a CADisplayLink. I have it working properly but it is running too slowly. How could I go about speeding this up? This is the code that I'm using:
      ViewController.h (in {} with @interface):
        CADisplayLink *displayLink;
      ViewController.m (viewDidLoad):
        displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(onTimer)];
        [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
    onTimer is the method that it is calling. So my question is: how do I speed up how often this is being called? Thanks

    Read the article

  • Can .NET AppDomains do this?

    - by Eloff
    I've spent hours reading up about AppDomains, but I'm not sure they work quite like I'm hoping. Say I have two classes, Foo in AppDomain #1 and Bar in AppDomain #2:
      - AppDomain #1 is the application.
      - AppDomain #2 is something like a plugin, and can be loaded and unloaded dynamically.
      - AppDomain #2 wants to create Foo and use it. Foo uses lots of classes in AppDomain #1 internally.
    I don't want AppDomain #2 using object foo with reflection; I want it to use Foo foo, with all the static typing and compiled speed that goes with it. Can this be done, considering that AppDomain #1, containing Foo, is never unloaded? If so, does any remoting take place when using Foo? And when I unload AppDomain #2, is the type Foo destroyed?
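
    For context, a minimal sketch of the usual cross-domain pattern (assuming Foo lives in an assembly both domains can load): a type derived from MarshalByRefObject can be used across the boundary through a strongly typed transparent proxy, and every call on the proxy is remoted to the domain that owns the real object. The direction is inverted here for brevity - the default domain uses a Foo created in a second domain - but the same marshalling rules apply either way:

      using System;

      // Lives in a shared assembly that both AppDomains load.
      public class Foo : MarshalByRefObject
      {
          public int DoWork(int x) => x * 2;
      }

      public static class PluginHostSketch
      {
          public static void Run()
          {
              AppDomain otherDomain = AppDomain.CreateDomain("Other");

              // The caller gets a transparent proxy typed as Foo, not a reflection handle.
              var foo = (Foo)otherDomain.CreateInstanceAndUnwrap(
                  typeof(Foo).Assembly.FullName,
                  typeof(Foo).FullName);

              Console.WriteLine(foo.DoWork(21)); // remoted call, prints 42

              // Unloading the domain invalidates proxies into it, not the Foo type itself.
              AppDomain.Unload(otherDomain);
          }
      }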

    Read the article

  • Why is my regex so much slower compiled than interpreted?

    - by miket2e
    I have a large and complex C# regex that runs OK when interpreted, but is a bit slow. I'm trying to speed this up by setting RegexOptions.Compiled, and this seems to take about 30 seconds the first time and run instantly after that. I'm trying to negate this by compiling the regex to an assembly first, so my app can be as fast as possible. My problem is when the compiling delay takes place:
      Regex myComplexRegex = new Regex(regexText, RegexOptions.Compiled);
      MatchCollection matches = myComplexRegex.Matches(searchText);
      foreach (Match match in matches) // <--- when the one-time long delay kicks in
      {
      }
    This is making compiling to an assembly basically useless, as I still get the delay on the first foreach call. What I want is for all the compiling delay to happen in advance, when I compile to the assembly, not when the user runs the app. Where am I going wrong? (The code I'm using to compile to an assembly is similar to http://www.dijksterhuis.org/regular-expressions-advanced/, if that's relevant.)
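
    For reference, a minimal sketch of the CompileToAssembly route (the pattern, class name and namespace below are made up): the regex is baked into MyRegexes.dll by a separate build-time step, and the application then instantiates the generated class directly instead of constructing a new Regex with RegexOptions.Compiled, which would trigger the runtime compilation all over again:

      using System.Reflection;
      using System.Text.RegularExpressions;

      public static class RegexAssemblyBuilder
      {
          // Run this once at build time (e.g. from a small console tool), not in the shipping app.
          public static void Build()
          {
              var info = new RegexCompilationInfo(
                  pattern: @"\b\w+@\w+\.\w+\b",           // hypothetical pattern
                  options: RegexOptions.IgnoreCase,
                  name: "MyComplexRegex",                  // name of the generated class
                  fullnamespace: "MyApp.CompiledRegexes",  // namespace of the generated class
                  ispublic: true);

              // Writes MyRegexes.dll next to the tool; the application references this assembly.
              Regex.CompileToAssembly(
                  new[] { info },
                  new AssemblyName("MyRegexes"));
          }
      }

      // In the application, after referencing MyRegexes.dll:
      //   var regex = new MyApp.CompiledRegexes.MyComplexRegex();
      //   MatchCollection matches = regex.Matches(searchText);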

    Read the article
