Search Results

Search found 6355 results on 255 pages for 'slow downs'.

  • java: speed up reading foreign characters

    - by Yang
    My current code needs to read foreign characters from the web. My current solution works, but it is very slow, since it reads char by char through an InputStreamReader. Is there any way to speed it up and still get the job done?

        // Pull content stream from response
        HttpEntity entity = response.getEntity();
        InputStream inputStream = entity.getContent();
        StringBuilder contents = new StringBuilder();
        int ch;
        InputStreamReader isr = new InputStreamReader(inputStream, "gb2312");
        // FileInputStream file = new InputStream(is);
        while ((ch = isr.read()) != -1)
            contents.append((char) ch);
        String encode = isr.getEncoding();
        return contents.toString();
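
    One common fix (a sketch under the question's own setup, not taken from the article): wrap the reader in a BufferedReader and read into a char array, so each read() call returns a block of decoded characters instead of one.

        import java.io.*;

        final class Gb2312Reader {
            static String readAll(InputStream in) throws IOException {
                Reader reader = new BufferedReader(
                        new InputStreamReader(in, "gb2312"));
                StringBuilder contents = new StringBuilder();
                char[] buf = new char[4096];
                int n;
                while ((n = reader.read(buf)) != -1) {
                    contents.append(buf, 0, n); // bulk append, not per-char
                }
                return contents.toString();
            }
        }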

    Read the article

  • ResultSet and aggregation

    - by kachanov
    Ok, I admit my situation is special. There is a data system that supports SQL-92 and a JDBC interface. However, the SQL requests are pretty expensive, and in my application I need to retrieve the same data multiple times and aggregate it ("group by") on different fields to show different dimensions of the same data. For example, on one screen I have three tables that show the same set of records, but aggregated by City (1st grid), by Population (2nd grid) and by number of babies (3rd grid). This amounts to 3 SQL queries (which is very slow), UNLESS any one of you can suggest an idea or a library, from Apache Commons or from Google Code, so that I can select all records into a ResultSet and get 3 arrays of data grouped by different fields from this single ResultSet. Am I missing some obvious or unexpected solution to this problem?
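
    A sketch of the single-query idea (the column names city, population and babies are hypothetical): iterate the ResultSet once and build one count map per dimension.

        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.HashMap;
        import java.util.Map;

        final class ThreeWayAggregator {
            static void aggregate(ResultSet rs) throws SQLException {
                Map<String, Integer> byCity = new HashMap<>();
                Map<Integer, Integer> byPopulation = new HashMap<>();
                Map<Integer, Integer> byBabies = new HashMap<>();
                while (rs.next()) {          // one pass over the data
                    byCity.merge(rs.getString("city"), 1, Integer::sum);
                    byPopulation.merge(rs.getInt("population"), 1, Integer::sum);
                    byBabies.merge(rs.getInt("babies"), 1, Integer::sum);
                }
                // byCity / byPopulation / byBabies now back the three grids
            }
        }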

    Read the article

  • High level audio crossfading library for python

    - by tcoopman
    I am looking for a high-level audio library that supports crossfading for Python (and that works on Linux). In fact, crossfading songs and saving the result is about the only thing I need. I tried pyechonest, but I find it really slow, and working with multiple songs at the same time is hard on memory too (I tried to crossfade about 10 songs into one, but I got out-of-memory errors and my script was using 1.4 GB of memory). So now I'm looking for something else that works with Python. I have no idea if anything like that exists; if not, are there good command-line tools for this? I could write a wrapper for the tool.
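
    One option worth checking (a sketch, assuming pydub plus an ffmpeg install; not from the article):

        # crossfade two files with pydub and save the result
        from pydub import AudioSegment

        a = AudioSegment.from_file("one.mp3")
        b = AudioSegment.from_file("two.mp3")
        mixed = a.append(b, crossfade=5000)  # crossfade length in milliseconds
        mixed.export("mixed.mp3", format="mp3")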

    Read the article

  • C#/WPF FileSystemWatcher on every extension on every path

    - by BlueMan
    I need a FileSystemWatcher that can observe specific paths and specific extensions. But the paths could number dozens, hundreds or maybe thousands (hope not :P), and the same goes for extensions. The paths and extensions are added by the user. Creating hundreds of FileSystemWatchers is not a good idea, is it? So - how to do it? Is it possible to watch/observe every device (HDDs, SD cards, pendrives, etc.)? Will it be efficient? I don't think so... Every change to a Windows log file, every file scanned by an antivirus program - it could really slow down my program with FileSystemWatcher :(
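
    A sketch of one approach (an assumption, not the article's answer): create one watcher per ready drive and do the extension matching in a shared handler, so the watcher count stays small no matter how many rules the user adds.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class WatcherDemo
        {
            // user-supplied extensions (hypothetical examples)
            static readonly HashSet<string> Extensions =
                new HashSet<string>(StringComparer.OrdinalIgnoreCase) { ".txt", ".log" };

            static void Main()
            {
                var watchers = new List<FileSystemWatcher>();
                foreach (var drive in DriveInfo.GetDrives().Where(d => d.IsReady))
                {
                    var w = new FileSystemWatcher(drive.RootDirectory.FullName)
                    {
                        IncludeSubdirectories = true
                        // watching a whole drive is chatty; InternalBufferSize
                        // may need raising to avoid dropped events
                    };
                    w.Changed += OnChanged;   // one shared handler for all drives
                    w.EnableRaisingEvents = true;
                    watchers.Add(w);
                }
                Console.ReadLine();           // keep the demo alive
            }

            static void OnChanged(object sender, FileSystemEventArgs e)
            {
                // cheap in-memory filter instead of hundreds of watchers
                if (Extensions.Contains(Path.GetExtension(e.FullPath)))
                    Console.WriteLine($"{e.ChangeType}: {e.FullPath}");
            }
        }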

    Read the article

  • jQuery FadeIn, FadeOut Div - IE7 bug

    - by user1058223
    I have a div that will fade in and out on hover in FF, but in IE7 it just hides and shows with no animation. Here is my code:

        /* CSS */
        #nav-buttons {
            display: none;
            width: 894px;
            position: relative;
            z-index: 1000;
        }

        <!-- HTML -->
        <div id="contents">
            <div id="nav-buttons">
                <a href="javascript:void(0)" id="left-button"></a>
                <a href="javascript:void(0)" id="right-button"></a>
            </div>
            other html....
        </div>

        // JS
        $(document).ready(function() {
            $("#contents").hover(function() {
                $("#nav-buttons").fadeToggle("slow");
            });
        });
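
    One frequent IE7 culprit (a guess, not confirmed by the article): IE7 applies opacity through its proprietary filter property, which only works on elements that have "layout"; forcing layout with zoom often restores the animation.

        /* give #nav-buttons "layout" so IE7's opacity filter can animate */
        #nav-buttons { zoom: 1; }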

    Read the article

  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which unlike the 'mysql' gem is written in Ruby only. This is pretty slow, actually, as it seems to insert at a rate of almost 1 row per second (SLOOOOOWWWWWW). And I have a lot of inserts to make; it's pretty much what this script ultimately does. I am using just 1 connection (since I am using just one thread). I am hoping to speed things up by creating fibers that will each: create a new DB connection, insert 1-3 records, and close the DB connection. I would imagine launching 20-50 of these would greatly increase DB throughput. Am I correct to go down this route? I feel that this is the best option, as opposed to refactoring all of my DB code :(
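
    A caveat and a sketch (not from the article): fibers are cooperatively scheduled, so by themselves they won't run inserts concurrently; batching many rows into one multi-row INSERT is the usual way to raise throughput. The connection details and the people table below are hypothetical.

        require 'mysql'

        db = Mysql.connect('localhost', 'user', 'password', 'mydb')
        rows = [{ :name => 'a', :age => 1 }, { :name => 'b', :age => 2 }]
        rows.each_slice(500) do |batch|
          values = batch.map do |r|
            "('#{db.escape_string(r[:name])}', #{r[:age].to_i})"
          end.join(', ')
          # one round trip per 500 rows instead of one per row
          db.query("INSERT INTO people (name, age) VALUES #{values}")
        end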

    Read the article

  • How to detect Out Of Memory condition?

    - by Jaromir Hamala
    I have an application running on WebSphere Application Server 6.0, and it crashes nearly every day because of an Out-Of-Memory error. From verbose GC output it is clear there are memory leaks (many of them). Unfortunately the application is provided by an external vendor, and getting things fixed is a slow and painful process. As part of that process I need to gather the logs and heap dumps each time the OOM occurs. Now I'm looking for some way to automate this. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script that periodically searches for new heap dumps; this approach seems kind of dirty to me. Another approach might be to leverage JMX somehow, but I have little or no experience in this area and no clear idea how to do it. Or does WAS have some kind of trigger/hook for this? Thank you very much for any advice!
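
    For reference, a sketch of the generic JVM-level hooks (an assumption to verify: these are HotSpot flags, and WAS 6.0 commonly runs IBM's JVM, which uses -Xdump settings instead; the collect-logs.sh path is hypothetical):

        # dump the heap and run a collection script when OOM is thrown
        java -XX:+HeapDumpOnOutOfMemoryError \
             -XX:HeapDumpPath=/var/dumps \
             -XX:OnOutOfMemoryError="/opt/scripts/collect-logs.sh %p" \
             -jar app.jar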

    Read the article

  • Retrieving data from database. Retrieve only when needed or get everything?

    - by RHaguiuda
    I have a simple application to store contacts. It uses a simple relational database to store contact information, like name, address and other data fields. While designing it, a question came to my mind: when designing programs that use databases, should I retrieve all database records and store them in objects in my program, so I get very fast performance, or should I always fetch data only when required? Of course, retrieving all data is only feasible when there isn't too much of it, but do you use this approach when you are sure the database will stay small (< 300 records, for example)? I once designed a similar application that fetched data only when needed, but it was slow (using an Access database). Thanks for any help.

    Read the article

  • More efficient way to find & tar millions of files

    - by Stu Thompson
    I've had a job running on my server at the command-line prompt for two days now:

        find data/ -name "filepattern-*2009*" -exec tar uf 2008.tar {} \;

    It is taking forever, and then some. Yes, there are millions of files in the target directory. But just running...

        find data/ -name "filepattern-*2009*" -print > filesOfInterest.txt

    ...takes only two hours or so. At the rate my job is running, it won't be finished for a couple of weeks. That seems unreasonable. Is there a more efficient way to do this? Maybe with a more complicated bash script? A secondary question is "why is my current approach so slow?"
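
    A sketch of a common speed-up (not from the article): -exec ... \; spawns one tar process per file, and each invocation reopens and rescans the archive; feeding the whole name list into a single tar avoids that.

        # one find + one tar process in total; --null pairs with -print0
        # so odd filenames survive the pipe
        find data/ -name "filepattern-*2009*" -print0 \
          | tar uf 2008.tar --null -T -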

    Read the article

  • BULK SMS, Long Codes (VMN MSIDN), T-mobile?

    - by John
    Does any US wireless carrier offer individuals or companies a direct connection to the SMSC? The number is 747-772-3101 (replace 7's with 6's). This number is registered to T-Mobile, and T-Mobile verified it as a valid subscriber sending 160,000+ text messages monthly, with nothing but an unlimited text messaging plan on top of the cheapest voice plan. The company behind the number confirmed to me that they don't use GSM modems, as those are too slow. So I know it's possible, but whom would I contact? Sales, or anyone else reachable through a 1-800 number, is ignorant of these services, and developer.t-mobile is worthless and doesn't reply to emails. Any info??

    Read the article

  • Resetting AUTO_INCREMENT on myISAM without rebuilding the table

    - by Artem
    Please help - I am in major trouble with our production database. I accidentally inserted a key with a very large value into an auto-increment column, and now I can't seem to change this value without a huge rebuild time. This is super-slow:

        ALTER TABLE tracks_copy AUTO_INCREMENT = 661482981

    How can I fix this in production? I can't get this to work either (it has no effect):

        myisamchk tracks.MYI --set-auto-increment=661482982

    Any ideas? Basically, no matter what I do I get an overflow:

        SHOW CREATE TABLE tracks
        CREATE TABLE tracks (
            ...
        ) ENGINE=MYISAM AUTO_INCREMENT=2147483648 DEFAULT CHARSET=latin1
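
    A sketch of two things to check (assumptions, not the article's fix): the displayed AUTO_INCREMENT=2147483648 is one past the maximum of a signed INT, so the counter has saturated and will stay pinned while any row holds a value near the ceiling; and myisamchk only takes effect when the server isn't holding the table open.

        -- does the real data actually reach the INT ceiling?
        SELECT MAX(track_id) FROM tracks;   -- column name is hypothetical

        # run myisamchk with the table closed, then let mysqld re-open it
        mysqladmin flush-tables
        myisamchk tracks.MYI --set-auto-increment=661482982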

    Read the article

  • How do I control script execution time in PHP

    - by mathew
    For example, I have 5 PHP functions on a page that execute when it loads. Each function has its own processing time, and some of them sometimes take more time to complete their task; hence the total loading time of the page is slow. My question is: how do I control the execution time of each function and set a time limit for each one? I am aware that there is a built-in function in PHP called set_time_limit(), but it raises a fatal error if the time goes beyond the maximum limit...
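
    A sketch of per-step timing in userland (an assumption about the goal, not the article's answer): PHP cannot preempt a running function without extensions, but you can measure each step against a budget and log, or skip the remaining steps, instead of hitting the fatal set_time_limit() error.

        <?php
        // time one step against a budget; this logs, it does not interrupt
        function run_with_budget(callable $fn, $budget_seconds) {
            $start = microtime(true);
            $result = $fn();
            $elapsed = microtime(true) - $start;
            if ($elapsed > $budget_seconds) {
                error_log(sprintf("step took %.2fs (budget %.2fs)",
                                  $elapsed, $budget_seconds));
            }
            return $result;
        }

        $data = run_with_budget(function () {
            usleep(250000);   // stand-in for one of the five functions
            return "done";
        }, 2.0);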

    Read the article

  • Find gap between start and end dates for multiple data ranges with overlaps

    - by sqlint
    Need to find gaps of more than 20 days between start and end dates, for multiple date ranges with overlaps. One id has multiple start dates and end dates. The following id 1 has two gaps of less than 20 days; it should be considered as one range from 10/01/2012 to 10/30/2014, without any gap:

        1  10/01/2012  02/01/2013
        1  01/01/2013  01/31/2013
        1  02/10/2013  03/31/2013
        1  04/15/2013  10/30/2014

    Id 2 has a gap of more than 20 days, between end date 01/30/2013 and start date 05/01/2013. It has to be captured:

        2  01/01/2013  01/30/2013
        2  05/01/2013  06/30/2014
        2  07/01/2013  02/01/2014

    Id 3 should be considered as one range from 01/01/2012 to 06/01/2014, without any gap. The gap between end date 02/28/2013 and start date 07/01/2013 should be ignored, because the range from 01/01/2012 to 01/01/2014 covers it:

        3  01/01/2012  01/01/2014
        3  01/01/2013  02/28/2013
        3  07/01/2013  06/01/2014

    A cursor can do it, but it works extremely slowly and is not acceptable. SQL fiddle: http://sqlfiddle.com/#!3/27e3f/2/0
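
    A sketch of the set-based gaps-and-islands approach (SQL Server 2012+ window-frame syntax; the table name ranges(id, start_date, end_date) is hypothetical): a gap exists exactly where a row starts more than 20 days after the latest end date seen so far within the same id.

        WITH ordered AS (
            SELECT id, start_date, end_date,
                   -- latest end date among all earlier rows of this id
                   MAX(end_date) OVER (
                       PARTITION BY id
                       ORDER BY start_date, end_date
                       ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
                   ) AS prev_max_end
            FROM ranges
        )
        SELECT id, prev_max_end AS gap_from, start_date AS gap_to
        FROM ordered
        WHERE DATEDIFF(day, prev_max_end, start_date) > 20;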

    Read the article

  • iPhone webapp: my resources don't get cached

    - by Savageman
    Hello! First of all, I'd like to say I'm not using any offline feature from HTML5. I have a web application that runs on the iPhone. When viewing it from Safari, everything works quite well. But when I launch the application from the home screen (to remove the navigation bar), it can be really slow. I checked the logs in Apache, and it appears that Safari does a good job of caching the resources (CSS / JS / images), with Apache answering "304 Not Modified" when needed. However, when the web app runs as a "real" application (navigation bar hidden), those resources don't get cached, and the content has to be transferred over and over again (response code 200 OK + content), resulting in significantly slower page loads. How can I prevent this behavior? Do I need to always run my webapp inside Safari, even when it's launched from the home screen? Thank you!
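
    A sketch of a server-side mitigation (an assumption, not the article's answer): if the home-screen mode skips conditional revalidation, serving static assets with explicit freshness lifetimes lets the client reuse them without asking at all.

        # Apache mod_expires: give static assets an explicit lifetime
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType text/css               "access plus 1 week"
            ExpiresByType application/javascript "access plus 1 week"
            ExpiresByType image/png              "access plus 1 month"
        </IfModule>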

    Read the article

  • Is it possible to use Sphinx search with dynamic conditions?

    - by Fedyashev Nikita
    In my web app I need to perform 3 types of searches on an items table, with the following conditions:

        1. items.is_public = 1 (use the title field for indexing) - a lot of results can be retrieved (cardinality is much higher than in the other cases)
        2. items.category_id = {X} (use title + private_notes fields for indexing) - usually fewer than 100 results
        3. items.user_id = {X} (use title + private_notes fields for indexing) - usually fewer than 100 results

    I can't find a way to make Sphinx work in all these cases, though it works well in the 1st case. Should I use Sphinx just for the 1st case and plain old "slow" FULLTEXT searching in MySQL for the others (at least because of the lower cardinality in cases 2-3)? Or is it just me, and Sphinx can do pretty much everything?
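
    A sketch with the classic PHP sphinxapi client (assumptions: is_public, category_id and user_id are declared as attributes, e.g. sql_attr_uint, in sphinx.conf, and the index named items_index covers title + private_notes):

        <?php
        require 'sphinxapi.php';

        $cl = new SphinxClient();
        $cl->SetServer('localhost', 9312);
        // case 2: full-text match restricted to one category
        $cl->SetFilter('category_id', array(42));
        $result = $cl->Query('search words', 'items_index');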

    Read the article

  • Has Object in VB 2010 received the same optimization as dynamic in C# 4.0?

    - by Abel
    Some people have argued that the C# 4.0 feature introduced with the dynamic keyword is the same as the "everything is an Object" feature of VB. However, any call on a dynamic variable will be translated into a delegate once, and from then on the delegate will be called. In VB, when using Object, no caching is applied, and each call on an untyped method involves a whole lot of under-the-hood reflection, sometimes totaling a whopping 400-fold performance penalty. Have the dynamic-type delegate optimization and caching also been added to VB's untyped method calls, or are VB's untyped Object calls still so slow?

    Read the article

  • low latency data link pc to android

    - by steveh
    Can anyone recommend a method for a low-latency, bi-directional comms link between my PC app and an Android slave app? The app I have works now via WiFi, but the latency is too high (about 300 ms); I'm looking to get it down to 10 ms or so. The Android device is acting like a glorified remote control for the game on the PC. The APK displays a low-res image and sends button presses back to the game, and the round trip needs to be quick. I'm thinking the only option besides the network is to connect a USB cable, but I don't see a lot of support for that path, and I'm not even sure it would be lower latency than WiFi. Any ideas, please?
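
    One thing worth ruling out first (a sketch built on an assumption, not the article's answer): if the link is TCP, Nagle-style buffering can dominate the latency of tiny control packets; sending each button press as a UDP datagram sidesteps it. The address, port and button code below are hypothetical.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        class ButtonSender {
            public static void main(String[] args) throws Exception {
                DatagramSocket sock = new DatagramSocket();
                byte[] press = { 0x01 };   // hypothetical code for button A
                // fire-and-forget datagram; no connection or buffering delay
                sock.send(new DatagramPacket(press, press.length,
                        InetAddress.getByName("192.168.0.10"), 9999));
                sock.close();
            }
        }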

    Read the article

  • Parallelize Bash Script

    - by thelsdj
    Let's say I have a loop in bash:

        for foo in `some-command`
        do
            do-something $foo
        done

    do-something is CPU-bound, and I have a nice shiny 4-core processor. I'd like to be able to run up to 4 do-something's at once. The naive approach seems to be:

        for foo in `some-command`
        do
            do-something $foo &
        done

    This will run all do-somethings at once, but there are a couple of downsides, mainly that do-something may also have some significant I/O, which performing all at once might slow down a bit. The other problem is that this code block returns immediately, so there is no way to do other work once all the do-somethings are finished. How would you write this loop so there are always X do-somethings running at once?
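
    A sketch of the usual answer (assuming do-something takes a single argument): let xargs cap the number of concurrent jobs.

        # at most 4 concurrent do-something processes; xargs exits only
        # after the last one finishes, so work can continue on the next line
        some-command | xargs -n1 -P4 do-something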

    Read the article

  • Fulltext for innoDB? or a good solution for php app

    - by Joshua
    I have a table I want to run a fulltext search on, but it is currently InnoDB and uses a lot of foreign keys for other kinds of queries. Should I make something like a 1:1 "meta-data" table that is MyISAM, just for fulltext? Also, I am reading some things that say that fulltext corrupts MySQL tables pretty randomly. I dunno, the articles are a couple of years old; maybe they've fixed that in 5+? If not, what's a good solution for searching? Zend_Lucene seems cool but slow, even with caching, for the client's large tables and autocomplete functionality, etc.
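
    A sketch of the shadow-table idea the question describes (table and column names are hypothetical): keep a 1:1 MyISAM copy of the searchable text and join back by id, leaving the InnoDB table and its foreign keys untouched.

        CREATE TABLE item_search (
            item_id INT NOT NULL PRIMARY KEY,
            body TEXT,
            FULLTEXT KEY ft_body (body)
        ) ENGINE=MyISAM;

        SELECT i.*
        FROM items i
        JOIN item_search s ON s.item_id = i.id
        WHERE MATCH(s.body) AGAINST ('search terms');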

    Read the article

  • Optimising (My)SQL Query

    - by Simon
    I usually use an ORM instead of SQL, and I am slightly out of touch on the different JOINs...

        SELECT `order_invoice`.*, `client`.*, `order_product`.*, SUM(product.cost) AS net
        FROM `order_invoice`
        LEFT JOIN `client` ON order_invoice.client_id = client.client_id
        LEFT JOIN `order_product` ON order_invoice.invoice_id = order_product.invoice_id
        LEFT JOIN `product` ON order_product.product_id = product.product_id
        WHERE (order_invoice.date_created >= '2009-01-01')
          AND (order_invoice.date_created <= '2009-02-01')
        GROUP BY `order_invoice`.`invoice_id`

    The tables/columns are logically named... it's a shop-type application... the query works... it's just very, very slow... I use the Zend Framework and would usually use Zend_Db_Table_Row::find(Parent|Dependent)Row(set)('TableClass'), but I have to make lots of joins, and I thought performance would improve by doing it all in one query instead of hundreds... Can I improve the above query by using more appropriate JOINs or a different implementation? Many thanks.
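
    A sketch of first things to check (assumptions to verify with EXPLAIN, not the article's answer): the filter and join columns need indexes, otherwise each LEFT JOIN scans its table once per invoice row.

        -- index the date filter and the join keys named in the query
        CREATE INDEX idx_invoice_date  ON order_invoice (date_created);
        CREATE INDEX idx_client_id     ON client (client_id);        -- often the PK already
        CREATE INDEX idx_op_invoice_id ON order_product (invoice_id);
        CREATE INDEX idx_product_id    ON product (product_id);      -- often the PK already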

    Read the article

  • Sql Server 2000 Stored Procedure Prevent Parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does, and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So, I've tried running the sp with "WITH RECOMPILE" after the EXEC, and I've also tried putting "WITH RECOMPILE" inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.
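
    A sketch of a way to test that hypothesis (an assumption; the table and columns are hypothetical stand-ins for the expensive statement inside the proc): SQL Server 2000 accepts a per-query cap on parallelism, so that statement can be pinned to a serial plan.

        -- forbid a parallel plan for this statement only
        SELECT o.customer_id, SUM(o.amount) AS total
        FROM orders o
        GROUP BY o.customer_id
        OPTION (MAXDOP 1);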

    Read the article

  • Alternative to 'where col in (list)' for MySQL

    - by user210481
    Hi, I have the following table T:

        id   col
        1    a
        2    b
        3    a
        4    c

    I want to do a select that returns id, col for the rows whose col, when grouped by col, has COUNT(col) > 1. One way of doing it is:

        SELECT id, col
        FROM T
        WHERE col IN (SELECT col
                      FROM T
                      GROUP BY col
                      HAVING COUNT(col) > 1);

    The inner select returns 'a', and the main (outer) one returns (1, a) and (3, a). The problem is that the WHERE ... IN statement seems to be extremely slow. In my real case, the result of the inner select has many cols, about 70,000, and it's taking hours. Right now it's much faster to do the inner select and the main select separately, getting all the ids and UPCs, and compute the intersection locally. MySQL should be able to handle this kind of query efficiently. Can I substitute the WHERE ... IN with a JOIN or something faster? Thanks
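
    A sketch of the join rewrite (using the same T(id, col) as above): materialize the duplicated cols once as a derived table and join against it, instead of re-evaluating the IN subquery.

        SELECT t.id, t.col
        FROM T t
        JOIN (SELECT col
              FROM T
              GROUP BY col
              HAVING COUNT(*) > 1) dup ON dup.col = t.col;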

    Read the article

  • CSS cross browser compatibility on Ubuntu

    - by bhefny
    Hello, I'm currently working in web development, and my default desktop is Ubuntu; I'm kind of happy with the setup and applications I have going. But I need to test web pages for cross-browser compatibility while still being on Ubuntu. I have gone through hell trying to get IE7 or IE8 (with Wine) to run on Ubuntu, and when they finally worked, they were very buggy and the graphics/scrolling were insanely slow. Of course there is the option of VirtualBox, but again, too many gigabytes just to run a small application! So, to all the CSS gurus out there: how can I continue with my beloved Ubuntu and still deliver a good-quality (tested) page? Thank you.

    Read the article

  • C# - Inserting multiple rows using a stored procedure

    - by user177883
    I have a list of objects; this list contains about 4 million objects. There is a stored proc that takes an object's attributes as params, makes some lookups, and inserts them into tables. What's the most efficient way to insert these 4 million objects into the DB? How I do it now:

        // connect to SQL - SQLConnection ...
        foreach (var item in listofobjects)
        {
            SQLCommand sc = ...;
            // assign params
            sc.ExecuteQuery();
        }

    This has been really slow. Is there a better way to do this?
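
    A common pattern for this volume (a sketch, not the article's answer): bulk-copy the raw attributes into a staging table in one stream, then run the lookup/insert logic set-based on the server. The staging table and column names here are hypothetical.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        class BulkLoader
        {
            static void Load(IEnumerable<(string Name, int Value)> items,
                             string connectionString)
            {
                var table = new DataTable();
                table.Columns.Add("Name", typeof(string));
                table.Columns.Add("Value", typeof(int));
                foreach (var item in items)
                    table.Rows.Add(item.Name, item.Value);

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var bulk = new SqlBulkCopy(conn))
                    {
                        bulk.DestinationTableName = "dbo.ItemStaging"; // hypothetical
                        bulk.BatchSize = 10000;   // stream in 10k-row batches
                        bulk.WriteToServer(table);
                    }
                }
                // afterwards, run a set-based version of the proc over the staging table
            }
        }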

    Read the article

  • git can I speed up committing?

    - by AndreasT
    I have a big repository in a shared folder, and I use git from within a VM on that folder. Everything works nicely, but the repository is big, and git's search through all directories and files when committing is slow. I cannot move this repository out of the shared folder. I tried to git add specific files and directories, but when I do git commit -m "something", it still goes off on its odyssey through the directory tree. Can I do commits that ignore the rest of the tree?
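
    A sketch of knobs that often help on slow or shared filesystems (treat these as assumptions to benchmark, not guarantees):

        git config core.preloadindex true        # parallelize the index-refresh stat() calls
        git config status.showUntrackedFiles no  # skip the untracked-file scan in status output
        git commit -m "something" -- path/file   # commit only the listed paths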

    Read the article
