Search Results

Search found 21501 results on 861 pages for 'slow connection'.


  • Fastest way to edit alpha of CGImage (or UIImage) with touch and then display?

    - by Pankaj
    I have two image views, one on top of the other, with two different images. As the user touches the image and moves his/her finger, the top image should become transparent along the touch points within a fixed radius (like the PhotoChop app). Currently I am doing it this way, for each touch:
    1. Get a copy of the image buffer from the CGImage of the top image.
    2. Edit the alpha channel of the buffer to create a transparent circle centered at the touch point.
    3. Create a new CGImage from the buffer.
    4. Create a UIImage from the CGImage and use it as the top image view's image.
    This works, but as you can see too many copies and creates are involved, and it is slow. Can somebody please suggest a faster way of doing the same thing?
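
    A minimal sketch of one way to cut the copying, using the CoreGraphics C API: keep a single bitmap context alive for the top image and only snapshot it per touch. The function names, the premultiplied-RGBA layout, and the ignored UIKit/CG coordinate flip are illustrative assumptions, not the poster's code.

        // Create once: a bitmap context holding the top image's pixels.
        CGContextRef CreateEraseContext(CGImageRef topImage) {
            size_t w = CGImageGetWidth(topImage), h = CGImageGetHeight(topImage);
            CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, w * 4, cs,
                                                     kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(cs);
            CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), topImage); // one-time copy
            CGContextSetBlendMode(ctx, kCGBlendModeClear);             // draws now erase
            return ctx;
        }

        // Per touch: punch a transparent circle, then snapshot the bitmap.
        CGImageRef EraseCircle(CGContextRef ctx, CGFloat x, CGFloat y, CGFloat radius) {
            CGContextFillEllipseInRect(ctx, CGRectMake(x - radius, y - radius,
                                                       2 * radius, 2 * radius));
            return CGBitmapContextCreateImage(ctx); // wrap in a UIImage for the view
        }

    The per-touch cost drops to one fill plus one snapshot; the buffer copy and alpha loop of steps 1-2 happen only once.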

    Read the article

  • Codebase for making a Flash-based interactive map with SVG vector data?

    - by Mike
    I'm looking for a way to take SVG path info (basically a string of coordinates) and dynamically draw it with ActionScript. The icing on the cake would be if those shapes could detect mouse events to trigger JS and dynamically change their appearance (fill, stroke, etc.). I'm currently trying something similar to this (http://raphaeljs.com/australia.html) using SVG, but it's just too slow in IE. I've also tried Google's SVG Web (http://code.google.com/p/svgweb/), which basically does exactly what I'm looking for (it converts SVG to Flash in IE), but again, it's sloooooow, which is why I'm considering doing the whole shebang in Flash. Anyone know of some links to point me in the right direction?

    Read the article

  • SQL Server 2000 stored procedure: prevent parallelism, or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does, and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the sp WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has Parallelism operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.
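
    Possibly relevant to the question: one quick way to test whether parallelism itself explains the gap is to force a serial plan on the expensive statement inside the procedure with a query hint (a sketch only; the statement and names below are placeholders, not the actual procedure):

        -- hypothetical expensive statement inside the stored procedure
        SELECT col1, SUM(col2)
        FROM dbo.SomeBigTable
        GROUP BY col1
        OPTION (MAXDOP 1)  -- forces a serial plan for this statement only

    If the serial plan restores the Query Analyzer speed, the cached parallel plan (often a product of parameter sniffing) is the likely culprit rather than the cache alone.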

    Read the article

  • Do you think the AI industry will ever come back?

    - by Isaiah
    I just spent some time reading about the collapse of the AI industry, and realized that a lot of the reason it failed was that technology was slow to catch up with its theories about when it would be available. I also read that it is believed computers able to emulate human synapses may be built around 2015-2025. It's 2010 now, and we're getting pretty close to that time frame. I was wondering: does anyone think the AI industry will return as the technology arrives? And if so, will it change the language market? Could Lisp-like languages suddenly experience a burst of growth? I don't know; I just thought it was interesting to think about.

    Read the article

  • How do I control script execution time in PHP?

    - by mathew
    For example, I have 5 PHP functions on a page which execute when it loads. Each function has its own processing time, and some of them sometimes take more time to complete their task; hence the total loading time of the page is slow. My question is: how do I control the execution time of each script and set a time limit for it? I am aware that there is a built-in function in PHP called set_time_limit(), but it gives a fatal error if the time goes beyond the maximum limit...

    Read the article

  • CSS cross browser compatibility on Ubuntu

    - by bhefny
    Hello, I'm currently working in web development, my default desktop is Ubuntu, and I'm kind of happy with the setup and applications I've got going. But I need to test web pages for cross-browser compatibility while still being on Ubuntu. I have gone through hell trying to get IE7 or IE8 (with Wine) to run on Ubuntu, and when they finally worked they were very buggy and the graphics/scrolling were insanely slow. Of course there is the option of VirtualBox, but again, too many gigabytes just to run a small application! So, to all the CSS gurus out there: how can I continue with my beloved Ubuntu and still deliver good-quality (tested) pages? Thank you.

    Read the article

  • iOS - Application logging in test and production code

    - by Peter Warbo
    I do a bunch of logging when I'm testing my application, which is useful for getting information about variable state and such. However, I have read that you should use logging sparingly in production code (because it can potentially slow down your application). My question is: if my app is in production and people are using it, whenever a crash (God forbid) occurs, how will I be able to interpret the crash information if I have removed the logging statements? I suppose I will then only have a stack trace to interpret? Does this mean I should leave logging in production code only where it's really essential for me to interpret what has happened? Also, how will the logging statements relate to the crash reports? Will they be combined? I'm thinking of using Flurry for analytics and crash reports...
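
    A common pattern for the "only essential logs in production" part (a sketch, not tied to Flurry; DLog is a made-up name, and DEBUG is assumed to be defined only in debug builds): route verbose logging through a preprocessor macro that release builds strip out, and keep plain NSLog for the few essential messages.

        #ifdef DEBUG
        #  define DLog(fmt, ...) NSLog((fmt), ##__VA_ARGS__)  /* verbose, debug builds only */
        #else
        #  define DLog(fmt, ...) do {} while (0)              /* compiled out in production */
        #endif

        /* Usage: DLog(@"state = %d", state); essential messages keep plain NSLog. */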

    Read the article

  • Why do derivatives trading positions always require C++ knowledge?

    - by Jeffrey
    I've never worked in a trading environment before, and I was curious to see that a few of the trading houses seem to use C#, but most of them rely heavily on C++. Why is that? Is it because C++ is better performance-wise? Is it because of legacy code bases? Is it because of cross-platform issues? What about dynamic languages (Ruby, Python)? Are they too slow for this kind of work in terms of performance? Updated: if reliability and performance are important, would Erlang be the "next big thing" in trading platforms?

    Read the article

  • TextMate/Macfusion combo for mounting projects over SSH

    - by Sam Lee
    Here is my workflow: I use Macfusion to mount a server over SSH, and then edit the root directory of the project in TextMate (using mate /Volumes/server/projectdir). I have a plugin installed that disables refreshing on regaining focus. This works ALMOST perfectly; the only thing I have problems with is "Find in Project": it's REALLY slow. Has anyone run into this problem before and been able to find a solution? Currently I go to the terminal when I have to do a search, but it would be great to be able to do it in TextMate. Thanks!

    Read the article

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an applet that makes a request to a servlet. The servlet uses a PrintWriter to write the response back to the applet: out.println("Field1|Field2|Field3|Field4|Field5......|Field10"); There are about 15000 records, so out.println() gets executed about 15000 times. The problem is that when the applet gets the response from the servlet, it takes about 15 minutes to process the records. I placed System.out.println calls, and processing pauses at around 5000 records; then, after 15 minutes, it continues processing and is done. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems the browser/applet is too slow to process the records. Any ideas appreciated. Thanks.

    Read the article

  • Is it possible to generate plain-old XML using Haml?

    - by lsdr
    I've been working on a piece of software where I need to generate a custom XML file to send back to a client application. The current solutions in the Ruby/Rails world for generating XML files are slow, at best. Builder and even Nokogiri, while they have a nice syntax and are maintainable solutions, consume too much time and processing. I could definitely go with ERB, which provides good speed at the expense of building the whole XML by hand. Haml is a great tool: it has a nice, straightforward syntax and is fairly fast. But I'm struggling to build pure XML files with it, which makes me wonder: is it possible at all? Does anyone have pointers to code or docs showing how to build a full, valid XML document from Haml?

    Read the article

  • How can I force Eclipse to use Sun Java?

    - by Dan
    Hi. Before installing Eclipse I had OpenJDK as the default. Now I have changed it to Sun Java. I did so because Eclipse Helios was running really slowly; unfortunately, it still is... Do you have any ideas how to force it to use Sun Java? I could reinstall it, however I already have the Android SDK installed, so I would have to do the whole process again; after all, that's not the correct way of solving the problem, I think. I'm using Ubuntu 10.10.

    java -version
    java version "1.6.0_22"
    Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
    Java HotSpot(TM) 64-Bit Server VM (build 17.1-b03, mixed mode)

    Would be grateful for any help. Best, Daniel
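
    One mechanism worth knowing here (a sketch, independent of the post): Eclipse honors a -vm entry in eclipse.ini, which must appear before -vmargs, with the JVM path on its own line. The Sun JDK path below is an assumption based on Ubuntu's sun-java6 packages.

        -vm
        /usr/lib/jvm/java-6-sun/jre/bin/java
        -vmargs
        -Xms128m
        -Xmx512m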

    Read the article

  • Symfony caching question (caching a partial)

    - by morpheous
    I am using Symfony 1.3.2, and I have a page that uses a partial from another module. I have two modules: 'foo' and 'foobar'. In module 'foo' I have an 'index' action which uses a partial from the 'foobar' module, so foo/indexSuccess.php looks something like this: some data here, then the partial. I want to cache 'part2' of my foo/indexSuccess.php page, because it is very expensive (slow), and I want the cache to have a lifetime of about 10 minutes. In apps/frontend/modules/foo/config/cache.yml, I need to know how to cache 'part2' of the page (i.e. the [very expensive] partial part of the page). Can anyone tell me what entries are required in the cache.yml file?

    Read the article

  • Alternative to 'where col in (list)' for MySQL

    - by user210481
    Hi. I have the following table T:

    | id | col |
    |----|-----|
    | 1  | a   |
    | 2  | b   |
    | 3  | a   |
    | 4  | c   |

    I want to do a select that returns id, col for the rows whose col, when grouped by col, has COUNT(col) > 1. One way of doing it is:

    SELECT id, col FROM T
    WHERE col IN (SELECT col FROM T GROUP BY col HAVING COUNT(col) > 1);

    The inner select returns 'a', and the main one returns 1,a and 3,a. The problem is that the WHERE ... IN clause seems to be extremely slow. In my real case the result of the inner select contains many values of col, around 70000, and the query takes hours. Right now it's much faster to run the inner select and the main select separately, getting all the ids and UPCs, and do the intersection locally. MySQL should be able to handle this kind of query efficiently. Can I substitute the WHERE ... IN with a join or something faster? Thanks
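
    The join substitution being asked about might look like this sketch (same table and column names as above; on older MySQL versions a derived-table join often avoids the slow correlated execution of IN):

        SELECT t.id, t.col
        FROM T AS t
        JOIN (SELECT col FROM T GROUP BY col HAVING COUNT(col) > 1) AS dup
          ON t.col = dup.col;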

    Read the article

  • Retrieving data from a database: retrieve only when needed, or get everything?

    - by RHaguiuda
    I have a simple application to store contacts. It uses a simple relational database to store contact information, like name, address, and other data fields. While designing it, a question came to my mind: when designing programs that use databases, should I retrieve all the database records and store them in objects in my program, so that I get very fast performance, or should I always fetch data only when required? Of course, retrieving all the data can only be done if there isn't too much of it, but do you use this approach when you are sure the database will be small (< 300 records, for example)? I once designed a similar application that fetched data only when needed, but it was slow (using an Access database). Thanks for any help.

    Read the article

  • Windows Azure local development environment speed

    - by Paperjam
    I've started porting an existing ASP.NET web app to Windows Azure and have noticed that the development process is really slow. Each time I make a change to my code and want to view it, I have to effectively redeploy it to the local dev cloud (using Start Debugging (F5) or Start Without Debugging (Ctrl-F5)). The process itself takes over a minute, during which time Visual Studio is completely unresponsive. Am I doing something wrong, or is that simply how things are when developing for Azure? My specs:
    - Visual Studio 2008 9.0.30729.1 SP
    - 5 projects running on .NET 3.5 SP1
    - Azure SDK 1.1 (February 2010)
    - Single instance of a single web role
    - Dual-core AMD64 machine with 8GB RAM, 64-bit Windows 7, fully patched
    The main project itself is quite large (3k files, ~200k lines) but compiles normally in 10-15 seconds.

    Read the article

  • In a graph, how to find the nearest node to a group of nodes?

    - by Nikola
    Hello, I have an undirected, unweighted graph, which doesn't have to be planar. I also have a subset of the graph's nodes (a proper subset), and I need to find the node not belonging to the subset with the minimum sum of distances to all the nodes in the subset. So far, I have implemented breadth-first search starting from each node in the subset, and the intersection that occurs first is the node I am looking for. Unfortunately, it is running too slowly, since the graph contains a large number of nodes. Any advice or comment will be appreciated. Thank you, Nikola
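
    For concreteness, a C sketch of exactly the approach described (one BFS per subset node, accumulating distance sums; the adjacency-list layout is an assumption, and the graph is assumed connected):

        #include <stdlib.h>

        typedef struct {
            int n;        /* number of nodes              */
            int **adj;    /* adj[u] = neighbors of u      */
            int *deg;     /* deg[u] = number of neighbors */
        } Graph;

        /* BFS from src, adding each node's distance into sum[]. */
        static void bfs_accumulate(const Graph *g, int src, long *sum) {
            int *dist  = malloc(g->n * sizeof *dist);
            int *queue = malloc(g->n * sizeof *queue);
            for (int i = 0; i < g->n; i++) dist[i] = -1;
            int head = 0, tail = 0;
            dist[src] = 0;
            queue[tail++] = src;
            while (head < tail) {
                int u = queue[head++];
                sum[u] += dist[u];
                for (int k = 0; k < g->deg[u]; k++) {
                    int v = g->adj[u][k];
                    if (dist[v] < 0) { dist[v] = dist[u] + 1; queue[tail++] = v; }
                }
            }
            free(dist);
            free(queue);
        }

        /* subset[v] != 0 marks membership; returns the non-member node with the
         * minimum distance sum, or -1. O(|subset| * (V + E)) in total. */
        int nearest_to_subset(const Graph *g, const char *subset) {
            long *sum = calloc(g->n, sizeof *sum);
            int best = -1;
            for (int s = 0; s < g->n; s++)
                if (subset[s]) bfs_accumulate(g, s, sum);
            for (int v = 0; v < g->n; v++)
                if (!subset[v] && (best < 0 || sum[v] < sum[best])) best = v;
            free(sum);
            return best;
        }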

    Read the article

  • Is it possible to find out what FlashBuilder is doing during compilation?

    - by justkevin
    I've found that Flash Builder 4 (formerly Flex Builder) has trouble working with large projects. After a certain point, builds seem to take longer and longer. I've tried many different ways of improving build time, including:
    - Moving embedded resources into externally linked projects.
    - Using -incremental.
    - Tweaking the .ini JVM settings, including memory and -server.
    - Turning off automatic build (I'd prefer not to have to do this, because one of the main reasons for using an IDE is to be told about errors as you make them).
    - Deleting the project and re-checking it out from the repository.
    While some of these may help a bit, the performance is still annoyingly slow. I feel that if I knew what was taking so long, I could refactor my projects to build faster. Is there some setting that tells Flash Builder to let me see which parts of the build process take so much time?

    Read the article

  • SQL: simultaneous aggregate from two tables

    - by Ash
    I have two tables: a Files table, which includes the file type, and a File Properties table, which references the Files table via a foreign key. A sample Files table:

    | id | name  | type |
    |----|-------|------|
    | 1  | file1 | zip  |
    | 2  | file2 | zip  |
    | 3  | file3 | zip  |
    | 4  | file4 | jpg  |

    And the Properties table:

    | file_id | property |
    |---------|----------|
    | 1       | x        |
    | 2       | x        |

    I want to make a query which shows the count of each file type and how many files of that type have a property. So in the example the result would be:

    | type | filecount | prop count |
    |------|-----------|------------|
    | zip  | 3         | 2          |
    | jpg  | 1         | 0          |

    I could accomplish this by:

    SELECT f.type,
           (SELECT COUNT(id) FROM files WHERE type = f.type),
           COUNT(fp.id)
    FROM files AS f, file_properties AS fp
    WHERE f.id = fp.file_id
    GROUP BY f.type;

    But this seems very suboptimal and is very slow. Any better way to do this?
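
    One conventional single-pass alternative (a sketch against the sample tables above; the LEFT JOIN keeps property-less types like jpg, and COUNT(DISTINCT fp.file_id) ignores the NULLs it produces):

        SELECT f.type,
               COUNT(DISTINCT f.id)       AS filecount,
               COUNT(DISTINCT fp.file_id) AS propcount
        FROM files AS f
        LEFT JOIN file_properties AS fp ON fp.file_id = f.id
        GROUP BY f.type;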

    Read the article

  • Long IF tree with strings

    - by DalGr
    I have a C program which uses Lua for scripting. In order to keep readability and avoid importing several constants into the individual Lua states, I condense a large number of functions into a single call (such as ObjectSet(id, "ANGLE", 45)) by using an "action" string. To do this I have a large if tree comparing the action string against a list (such as if (stringcompare(action, "ANGLE")) ... else if (stringcompare(action, "X")) ... etc.). This approach works well; within the program it's not really slow, and adding a new action is fairly quick. But I'm feeling a bit perfectionist: is there a better way to do this in C? And with Lua in heavy use, maybe there is a way to use it for this purpose? (Embedded "chunks" making a dictionary?) Although this part is mostly curiosity.
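
    One common C alternative to the if tree (a sketch with hypothetical handler names, not the program's real code): a sorted table of name/function pairs dispatched through bsearch, so each new action is one table row instead of another branch.

        #include <stdlib.h>
        #include <string.h>

        typedef void (*ActionFn)(int id, double value);

        static void set_angle(int id, double value) { /* ... */ }
        static void set_x(int id, double value)     { /* ... */ }

        /* Must stay sorted by name: bsearch depends on it. */
        static const struct Action { const char *name; ActionFn fn; } actions[] = {
            { "ANGLE", set_angle },
            { "X",     set_x     },
        };

        static int cmp_action(const void *key, const void *elem) {
            return strcmp((const char *)key,
                          ((const struct Action *)elem)->name);
        }

        /* Replaces the if tree: returns 0 on success, -1 for unknown actions. */
        int dispatch(const char *action, int id, double value) {
            const struct Action *a =
                bsearch(action, actions, sizeof actions / sizeof actions[0],
                        sizeof actions[0], cmp_action);
            if (!a) return -1;
            a->fn(id, value);
            return 0;
        }

    On the Lua side the same idea is simply a table mapping action names to functions, which would remove the string comparisons altogether.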

    Read the article

  • Extract anything that looks like a link from a large amount of data in Python

    - by Riz
    Hi, I have around 5 GB of HTML data which I want to process to find links to a set of websites and then perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and be malformed in many ways (like a "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc.). The problem is that my script is pretty slow, so I am thinking about ways to speed it up. I am writing a set of tests to check different approaches, but hope to get some advice :) Right now I am thinking about getting all the links without filtering first (maybe using a C module or a standalone app which doesn't use regexps but a simple search to find the start and end of every link) and then using regexps to match the ones I need.
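
    The "C module with a simple search" idea might look like this toy scanner (a sketch under assumptions, not a drop-in module: it grabs anything starting with http(s):// and cuts at characters that rarely belong to a raw link, leaving real validation to the later regexp pass; the embedded-newline links mentioned above would need the input normalized first):

        #include <ctype.h>
        #include <stdio.h>
        #include <string.h>

        /* Print every http(s)://... candidate found in buf, unvalidated. */
        static void scan_links(const char *buf) {
            const char *p = buf;
            while ((p = strstr(p, "http")) != NULL) {
                const char *start = p;
                p += 4;
                if (*p == 's') p++;                 /* allow https */
                if (strncmp(p, "://", 3) != 0) continue;
                p += 3;
                while (*p && !isspace((unsigned char)*p) &&
                       *p != '"' && *p != '\'' && *p != '<' && *p != '>')
                    p++;
                printf("%.*s\n", (int)(p - start), start);
            }
        }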

    Read the article

  • Fulltext for InnoDB? Or a good solution for a PHP app

    - by Joshua
    I have a table I want to run a fulltext search on, but it is currently InnoDB and is using a lot of foreign keys for other kinds of queries. Should I make something like a 1:1 "metadata" table that is MyISAM for fulltext? Also, I am reading some things that say that fulltext corrupts MySQL tables pretty randomly. I don't know; the articles are a couple of years old. Maybe they've fixed that in 5+? If not, what's a good solution for searching? Zend_Lucene seems cool but slow, even with caching, for the client's large tables and autocomplete functionality et al.

    Read the article

  • Scalable (half-million files) version control system

    - by hashable
    We use SVN for our source-code revision control and are experimenting with using it for non-source-code files. We are working with a large set (300-500k) of short (1-4 kB) text files that will be updated on a regular basis and need to be version controlled. We tried using SVN in flat-file mode, and it is struggling to handle the first commit (500k files checked in), taking about 36 hours. On a daily basis, we need the system to be able to handle 10k modified files per commit transaction in a short time (< 5 min). My questions:
    - Is SVN the right solution for my purpose? The initial speed seems too slow for practical use.
    - If yes, is there a particular SVN server implementation that is fast? (We are currently using the GNU/Linux default SVN server and command-line client.)
    - If no, what are the best F/OSS or commercial alternatives?
    Thanks

    Read the article

  • [CA_COLOR_OPAQUE] Things that make a layer non-opaque; scaled CAGradientLayer?

    - by mahal tertin
    I spent some time with the environment variable CA_COLOR_OPAQUE = 1 and have my findings to share. Things that make a CALayer non-opaque (slow, more memory, ...):
    - contents with alpha (like an NSImage with an icon)
    - an NSImage/CGImage from a PDF as contents (even when the PDF does not contain any alpha and opaque = YES)
    - backgroundColor = nil
    - a CATextLayer with text in it (because it is contents with alpha)
    - rounded corners? maybe/sometimes
    - masksToBounds? not necessarily
    As we scale most of the tree with CATransform3DScale on the sublayerTransform, I also found these, rather irritatingly, non-opaque:
    - a CAGradientLayer that is somewhere down in this scaled tree (even when all the gradient colors are set without alpha)
    - edgeAntialiasingMask != 0 on a layer that is somewhere down in this scaled tree
    The last two do not make sense to me. Why should they be non-opaque? What am I seeing? If anyone has any thoughts on these findings, I'm happy to learn, as I couldn't find such a list anywhere yet.

    Read the article

  • Known problems with filemtime() on Windows - files getting touched arbitrarily?

    - by Pekka
    Is there a known issue that leads to the file modification times of cache files on Windows XP SP3 getting arbitrarily updated, without any actual change? Is there some service on a standard Windows XP machine (backup, sync, versioning, virus scanner) known to touch files? They all have a .txt extension. If there isn't, forget it; then I'm getting something wrong in my cache routines, and I'll debug my way through. Background: I'm building a simple caching wrapper around a slow web site on a Windows server. I am comparing the filemtime() timestamp to some columns in the database to determine whether a cached file is stale. I'm having problems with this method because the modification time of the cache files seems to get updated in between operations without me doing anything. This results in stale files being displayed. I'm the only user on the machine. The operating system is Windows XP; the web server is XAMPP Apache 2 with PHP 5.2.
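
    For reference, the staleness test being described is just an mtime comparison; PHP's filemtime() is essentially a stat() call, so in C terms it reduces to a sketch like this (name and return convention are illustrative assumptions):

        #include <sys/stat.h>
        #include <time.h>

        /* 1 = cache file older than the row's timestamp (stale),
           0 = fresh, -1 = file missing. */
        int cache_is_stale(const char *path, time_t db_modified) {
            struct stat st;
            if (stat(path, &st) != 0) return -1;
            return st.st_mtime < db_modified;
        }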

    Read the article
