Search Results

Search found 21501 results on 861 pages for 'slow connection'.


  • C++: Best text accumulator

    - by MInner
    Text gets accumulated piecemeal before being sent to the client. Right now we use our own class that allocates memory for each piece as a char array (effectively it works like char[][] + std::list<char*>). Then we build the whole string, convert it into a std::string, and create a boost::asio::streambuf from it. That's fairly slow, I assume; correct me if I'm wrong. I know that in many cases the plain FILE type from stdio.h is used instead. How does it work? Does it allocate memory on every write? So, is it faster, and is there any way to read into a boost::asio::streambuf from a FILE?
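
    A minimal sketch of one alternative (assuming Boost.Asio is available): because boost::asio::streambuf is itself a std::streambuf, the pieces can be appended straight into it through a std::ostream, skipping the intermediate char buffers and the std::string copy.

        #include <boost/asio.hpp>
        #include <ostream>

        int main() {
            boost::asio::streambuf buf;
            std::ostream os(&buf);      // streambuf models std::streambuf

            os << "piece one, ";        // each piece is appended in place
            os << "piece two";          // no per-piece char[] allocation here

            // buf.data() now exposes the accumulated bytes, ready for
            // boost::asio::write()/async_write() without another copy.
            return 0;
        }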

    Read the article

  • Storing object as a column in LINQ

    - by Alex
    Hello, I have a class which constructs itself from a string, like this: CurrencyVector v = new CurrencyVector("10 WMR / 20 WMZ"); It's actually a class which holds multiple currency values, but that does not matter much. I need to change the type of a column in my LINQ table (in the VS 2010 designer) from String to that class, CurrencyVector. If I do, I get a runtime error when the LINQ runtime tries to cast String to CurrencyVector (while populating the table from the database). Adding IConvertible did not help. I wrapped these columns in properties, but that is an ugly and slow solution. Searching the internet gave no results.

    Read the article

  • Create dummy index.html inside a new mkdir directory

    - by jonnypixel
    Hi, I know this may be a silly question, but I can't seem to find a simple answer. I have a PHP script that makes a directory for me when the user starts a new entry. That directory holds photos for their gallery. What I would like to do is also create one index.html file inside that new directory, with a few lines of HTML in it. How do I do this? I'm guessing the file would be made like so: mkdir('users/'.$id.'/index.html',0755); But how do I add the HTML into that index.html file? Or should I keep one file on the server and copy it over during the mkdir process? A really simple answer would be best, as I am very slow at this learning thing. Thank you, John
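
    A minimal sketch of how that could look (the paths and $id come from the question; the HTML is just a placeholder): mkdir() creates only the directory, and file_put_contents() or copy() writes the index.html into it.

        <?php
        $dir = 'users/' . $id;

        if (!is_dir($dir)) {
            mkdir($dir, 0755, true);   // creates the directory only
        }

        // Write a small placeholder file directly...
        $html = "<html><body><!-- gallery placeholder --></body></html>";
        file_put_contents($dir . '/index.html', $html);

        // ...or copy a template kept elsewhere on the server:
        // copy('templates/index.html', $dir . '/index.html');
        ?>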

    Read the article

  • How to store the result of a JSP in a string?

    - by Spines
    I want to store the result of a JSP in a string. For example, I want to be able to call a function like: String result = ProcessJsp("/jspfile.jsp"); Also, this must be rather efficient; making a URL request to the JSP and then storing the result would definitely be too slow. How could I do this? Here are my thoughts on how to do it, though I'm not sure it would work, and I'm hoping there is something simpler: call RequestDispatcher("/jspfile.jsp").include(hreq, hresp), but instead of passing the real HttpResponse object, pass your own whose getWriter() method returns something that writes to your String or a memory buffer, etc.
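
    A rough sketch of that wrapper idea (class and variable names are illustrative, not an existing API): wrap the response so getWriter() captures the include()'d JSP output in memory.

        import java.io.PrintWriter;
        import java.io.StringWriter;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpServletResponseWrapper;

        class StringCaptureResponse extends HttpServletResponseWrapper {
            private final StringWriter buffer = new StringWriter();

            StringCaptureResponse(HttpServletResponse real) { super(real); }

            @Override
            public PrintWriter getWriter() {
                return new PrintWriter(buffer);   // JSP output lands here
            }

            String getOutput() { return buffer.toString(); }
        }

        // Usage inside a servlet:
        // StringCaptureResponse capture = new StringCaptureResponse(resp);
        // request.getRequestDispatcher("/jspfile.jsp").include(request, capture);
        // String result = capture.getOutput();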

    Read the article

  • Fastest way to edit alpha of CGImage (or UIImage) with touch and then display?

    - by Pankaj
    I have two image views, one on top of the other, with two different images. As the user touches the image and moves his/her finger, the top image should become transparent along the touch points with a fixed radius (like the PhotoChop app). Currently I am doing it this way, for each touch: get a copy of the image buffer from the top image's CGImage; edit the alpha channel of the buffer to create a transparent circle centered at the touch point; create a new CGImage from the buffer; create a UIImage from the CGImage and use it as the top image view's image. This works, but as you can see there are too many copy and create steps involved and it is slow. Can somebody please suggest a faster way of doing the same thing?
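
    A hedged sketch of one common alternative (not the app's actual code): keep a single CGBitmapContext holding the top image's pixels and erase a circle in place with the Clear blend mode, so a touch no longer copies the whole buffer; the CGImage/UIImage is only rebuilt when the screen needs updating.

        #import <CoreGraphics/CoreGraphics.h>

        // ctx is a persistent CGBitmapContext created once from the top image.
        static void PunchHole(CGContextRef ctx, CGPoint pt, CGFloat radius) {
            CGContextSaveGState(ctx);
            CGContextSetBlendMode(ctx, kCGBlendModeClear);   // paints alpha = 0
            CGContextFillEllipseInRect(ctx,
                CGRectMake(pt.x - radius, pt.y - radius, 2 * radius, 2 * radius));
            CGContextRestoreGState(ctx);
            // Rebuild the UIImage from ctx (CGBitmapContextCreateImage) only
            // when the view actually redraws.
        }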

    Read the article

  • How to detect Out Of Memory condition?

    - by Jaromir Hamala
    I have an application running on WebSphere Application Server 6.0 and it crashes nearly every day because of Out-Of-Memory errors. From the verbose GC output it is certain that there are memory leaks (many of them). Unfortunately, the application is provided by an external vendor and getting things fixed is a slow and painful process. As part of the process I need to gather the logs and heap dumps each time an OOM occurs, and I'm now looking for a way to automate it. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script which periodically searches for new heap dumps, but that approach seems kind of dirty to me. Another approach might be to leverage JMX somehow, but I have little or no experience in this area and no clear idea how to do it. Or does WAS have some kind of trigger/hook for this? Thank you very much for any advice!
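
    A hedged sketch of the JMX route (it arms a heap-usage threshold rather than catching the OutOfMemoryError itself, and pool names and behaviour vary by JVM vendor): register a listener on the MemoryMXBean and trigger log/heap-dump collection when usage crosses the threshold.

        import java.lang.management.*;
        import javax.management.*;

        public class HeapWatcher {
            public static void install() {
                MemoryMXBean memBean = ManagementFactory.getMemoryMXBean();
                NotificationEmitter emitter = (NotificationEmitter) memBean;

                emitter.addNotificationListener(new NotificationListener() {
                    public void handleNotification(Notification n, Object handback) {
                        if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                                .equals(n.getType())) {
                            // kick off heap-dump / log collection here
                            System.err.println("Heap usage threshold exceeded");
                        }
                    }
                }, null, null);

                // Arm the threshold (e.g. 90%) on pools that support it.
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    if (pool.isUsageThresholdSupported() && pool.getUsage().getMax() > 0) {
                        pool.setUsageThreshold((long) (pool.getUsage().getMax() * 0.9));
                    }
                }
            }
        }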

    Read the article

  • POS Desktop Application using DB or local files? (using WPF)

    - by Panindra
    I am planning to build a POS application for my shop. I have enough knowledge to build the application using a database and also using local files (System.IO, binary files) to store and access the data. But I have no deployment experience and am confused about choosing a data storage option. A database using an MDF file may be a good option (it could save plenty of coding), but I don't want to have SQL Server on my desktop. As I am using WPF, my concern is that my application may get slow due to server response times on top of WPF's design rendering. So I tried to use only local data (binary files) to store the data and retrieve it using classes and objects, but this coding is taking a lot of time, so in the middle of the process I am stuck in the dilemma of going back to a database. Please help: performance-wise, which one is better? And in the practical world, in professional applications, which one is widely used? Please give suggestions.
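
    One hedged middle ground worth knowing about (class name, table and connection string below are illustrative): an embedded database such as SQL Server Compact keeps everything in a single .sdf file, so no SQL Server service has to run on the POS machine while the data-access code stays plain ADO.NET.

        using System.Data.SqlServerCe;   // assumes the SQL Server Compact runtime is installed

        class SalesStore
        {
            public void AddSale(decimal amount)
            {
                using (var con = new SqlCeConnection("Data Source=pos.sdf"))
                using (var cmd = con.CreateCommand())
                {
                    con.Open();
                    cmd.CommandText = "INSERT INTO Sales (Amount) VALUES (@a)";
                    cmd.Parameters.AddWithValue("@a", amount);
                    cmd.ExecuteNonQuery();
                }
            }
        }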

    Read the article

  • Why is display:inline killing IE 8.0 performance?

    - by monstermensch
    I have an image gallery based on this jQuery plugin: http://jqueryfordesigners.com/demo/slider-gallery.html This works really well in Firefox, Chrome and even IE 7.0, but when I try it with more than 50 images in IE 8.0 the performance is incredibly slow. Just hovering over a thumbnail brings the CPU load to 100%. At first I thought it was a JavaScript problem, so I used the IE profiler, but the results were normal. Next I checked the CSS and finally found the cause: .sliderGallery UL LI { display: inline; } This gets the thumbnails to align horizontally. If I change it to display:block, performance is fine and the scroller still works, but obviously it looks funny, because the thumbs are aligned vertically. My questions: why does IE 8 have this problem with many display:inline elements, and what can I do to solve it? I'll gladly provide more information if necessary.
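
    One workaround that is commonly suggested for this symptom (a sketch, not verified against this particular plugin): float the items instead of relying on display:inline, which keeps the horizontal layout while letting IE 8 treat each thumbnail as a cheap block box.

        /* keeps the thumbnails on one line without display:inline */
        .sliderGallery UL LI {
            float: left;
            display: block;
        }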

    Read the article

  • GVim highlighting with matchadd eventually slows down?

    - by Kyle MacFarlane
    I have the following in ~/.vim/ftplugin/python.vim to highlight long lines, accidental tabs and extra whitespace in Python files:

        hi CustomPythonErrors ctermbg=red ctermfg=white guibg=#592929
        au BufWinEnter *.py call matchadd('CustomPythonErrors', '\%>80v.\+', -1)
        au BufWinEnter *.py call matchadd('CustomPythonErrors', '/^\t\+/', -1)
        au BufWinEnter *.py call matchadd('CustomPythonErrors', '\s\+$', -1)
        au BufWinLeave *.py call clearmatches()

    The BufWinLeave is so that the matches are cleared when I switch to another file, in case that file isn't a .py file. It's an essential feature for me when working with something like Django. It all works fine for random amounts of time, from ten minutes to hours (my guess is it depends on how many files I open/close). But eventually, when any line over 80 characters is displayed, GVim slows to a halt and requires a restart. Does anyone have any ideas why this would eventually slow down?
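
    One thing worth ruling out (a hedged guess, not a confirmed fix): BufWinEnter can fire many times for the same buffer over a session, and each firing adds three more matches, so they pile up until redraws of long lines crawl. Clearing before adding keeps the match list bounded (the stray slashes around the tab pattern are also dropped in this sketch):

        au BufWinEnter *.py call clearmatches()
        au BufWinEnter *.py call matchadd('CustomPythonErrors', '\%>80v.\+', -1)
        au BufWinEnter *.py call matchadd('CustomPythonErrors', '^\t\+', -1)
        au BufWinEnter *.py call matchadd('CustomPythonErrors', '\s\+$', -1)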

    Read the article

  • Programming language for fast calculations with big integers

    - by sub
    I'm doing Project Euler problems at the moment and I can solve most of them using my own programming language, which uses plain C++ integers (so they are bound to 2^32 on my machine). However, at times there are problems which require me to work with very large numbers, and I can't do that with native integers. So I implemented a BigInt library in my language, which unfortunately gets extremely slow at times. Is there a programming language suitable for very efficient handling of big numbers? I mean that I want to do the things I could do in other programming languages (variables, loops, etc.), but in a faster way. If you have tips for working around the 2^32 limit in my language/C++/other languages, please tell me too!
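
    As a point of comparison (not a fix for the asker's own language): some languages have arbitrary-precision integers built in, Python being the usual example, so this class of Project Euler problem needs no separate BigInt library at all.

        # Python ints grow as needed; there is no 2**32 ceiling.
        n = 2 ** 1000                       # a 302-digit number
        print(n)
        print(sum(int(d) for d in str(n)))  # digit sum, Project Euler style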

    Read the article

  • Do you think the AI industry will ever come back?

    - by Isaiah
    I just spent some time reading about the collapse of the AI industry and realized that a lot of the reason it failed was that technology was slow to catch up with the theories about when it would be available. I also read that computers believed capable of emulating human synapses may be built around 2015-2025. It's 2010 now and we're getting pretty close to that time frame. I was wondering if anyone thinks that the AI industry will return as the technology lands? And if so, will it change the language market? Could Lisp-like languages suddenly experience a burst of growth if it does? I don't know, I just thought it was interesting to think about.

    Read the article

  • High-level audio crossfading library for Python

    - by tcoopman
    I am looking for a high-level audio library for Python that supports crossfading (and works on Linux). In fact, crossfading songs and saving the result is about the only thing I need. I tried pyechonest but I find it really slow, and working with multiple songs at the same time is hard on memory too (I tried to crossfade about 10 songs into one, but I got out-of-memory errors and my script was using 1.4 GB of memory). So now I'm looking for something else that works with Python. I have no idea whether anything like that exists; if not, are there good command-line tools for this? I could write a wrapper around one.
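
    A hedged sketch with pydub (one library that fits the description; whether it is fast enough or light enough on memory for ten songs is untested here): AudioSegment.append() takes a crossfade length in milliseconds, and export() saves the result.

        from pydub import AudioSegment   # pip install pydub (relies on ffmpeg/avlib)

        a = AudioSegment.from_mp3("song_a.mp3")
        b = AudioSegment.from_mp3("song_b.mp3")

        mixed = a.append(b, crossfade=5000)           # 5-second crossfade
        mixed.export("crossfaded.mp3", format="mp3")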

    Read the article

  • One big call vs. multiple smaller TSQL calls

    - by BrokeMyLegBiking
    I have an ADO.NET/T-SQL performance question. We have two options in our application: 1) One big database call with multiple result sets, then in code stepping through each result set and populating my objects; this results in one round trip to the database. 2) Multiple small database calls. There is much more code reuse with option 2, which is an advantage of that option, but I would like some input on what the performance cost is. Are two small round trips twice as slow as one big round trip to the database, or is it just a small, say 10%, performance loss? We are using C# 3.5 and SQL Server 2008 with stored procedures and ADO.NET.
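
    For reference, a small sketch of option 1 (procedure and class names are illustrative): one command returns several result sets and SqlDataReader.NextResult() steps through them on a single round trip.

        using System.Data;
        using System.Data.SqlClient;

        class DashboardLoader
        {
            public void Load(string connectionString)
            {
                using (var con = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.GetDashboardData", con))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    con.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read()) { /* populate the first set of objects */ }
                        reader.NextResult();
                        while (reader.Read()) { /* populate the second set of objects */ }
                    }
                }
            }
        }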

    Read the article

  • Optimizing BeautifulSoup (Python) code

    - by user283405
    I have code that uses the BeautifulSoup library for parsing, but it is very slow. The code is written in such a way that threads cannot be used. Can anyone help me with this? I am using BeautifulSoup for parsing and then saving into a DB. If I comment out the save statement, it still takes a long time, so there is no problem with the database.

        def parse(self, text):
            soup = BeautifulSoup(text)
            arr = soup.findAll('tbody')
            for i in range(0, len(arr) - 1):
                data = Data()
                soup2 = BeautifulSoup(str(arr[i]))
                arr2 = soup2.findAll('td')
                c = 0
                for j in arr2:
                    if str(j).find("<a href=") > 0:
                        data.sourceURL = self.getAttributeValue(str(j), '<a href="')
                    else:
                        if c == 2:
                            data.Hits = j.renderContents()
                        # and a few others...
                    c = c + 1
                data.save()

    Any suggestions? Note: I already asked this question here, but it was closed due to incomplete information.
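
    One hedged observation: much of the time here likely goes into re-parsing str(arr[i]) with a second BeautifulSoup and into the str(j).find(...) string scans, both of which can be replaced by navigating the tags the first parse already produced. A sketch of the same loop without the second parse (behaviour assumed equivalent; the original range(0, len(arr) - 1) is kept):

        def parse(self, text):
            soup = BeautifulSoup(text)
            arr = soup.findAll('tbody')
            for i in range(0, len(arr) - 1):
                data = Data()
                c = 0
                for j in arr[i].findAll('td'):      # reuse the already-parsed tags
                    link = j.find('a')
                    if link is not None and link.get('href'):
                        data.sourceURL = link['href']
                    elif c == 2:
                        data.Hits = j.renderContents()
                    # ...and the few other fields...
                    c = c + 1
                data.save()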

    Read the article

  • Long IF tree with strings

    - by DalGr
    I have a C program which uses Lua for scripting. In order to keep things readable and avoid importing several constants into the individual Lua states, I condense a large number of functions into a simple call (such as ObjectSet(id, "ANGLE", 45)) by using an "action" string. To do this I have a large if tree comparing the action string to a list (such as if (stringcompare(action, "ANGLE")) ... else if (stringcompare(action, "X")) ... etc.). This approach works well, and within the program it's not really slow, and it is fairly quick to add a new action. But I feel a bit perfectionist about it. Is there a better way to do this in C? And since Lua is in heavy use, maybe there is a way to use it for this purpose (embedded "chunks" building a dictionary?). Although this part is mostly curiosity.
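
    On the C side, the usual alternative to a long if/else chain is a table of name/handler pairs searched with strcmp() (or kept sorted and searched with bsearch() once it grows). A hedged sketch with illustrative names:

        #include <string.h>   /* strcmp */
        #include <stddef.h>   /* size_t */

        typedef void (*action_fn)(int id, double value);

        static void set_angle(int id, double v) { /* ... */ }
        static void set_x(int id, double v)     { /* ... */ }

        static const struct { const char *name; action_fn fn; } actions[] = {
            { "ANGLE", set_angle },
            { "X",     set_x     },
        };

        static int dispatch(const char *action, int id, double value) {
            for (size_t i = 0; i < sizeof(actions) / sizeof(actions[0]); ++i) {
                if (strcmp(actions[i].name, action) == 0) {
                    actions[i].fn(id, value);
                    return 1;
                }
            }
            return 0;   /* unknown action */
        }

    On the Lua side the equivalent is simply a table mapping action names to functions, which sidesteps the string comparison in C entirely.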

    Read the article

  • Alternative to 'where col in (list)' for MySQL

    - by user210481
    Hi, I have the following table T:

        id | col
        ---+----
         1 | a
         2 | b
         3 | a
         4 | c

    I want a select that returns id, col for the rows whose col value appears more than once (GROUP BY col HAVING COUNT(col) > 1). One way of doing it is:

        SELECT id, col FROM T
        WHERE col IN (SELECT col FROM T GROUP BY col HAVING COUNT(col) > 1);

    The inner select returns 'a', and the main (outer) one returns 1,a and 3,a. The problem is that the WHERE ... IN statement seems to be extremely slow. In my real case the inner select returns many col values, around 70,000, and the query takes hours. Right now it's much faster to run the inner select and the main select separately, fetch all the ids and UPCs, and do the intersection locally. MySQL should be able to handle this kind of query efficiently. Can I substitute the WHERE ... IN with a join or something faster? Thanks
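
    A sketch of the join rewrite (same result as the IN version above): materialize the duplicated col values once in a derived table and join back to T, which MySQL of that era usually plans far better than IN (subquery).

        SELECT t.id, t.col
        FROM T AS t
        INNER JOIN (
            SELECT col
            FROM T
            GROUP BY col
            HAVING COUNT(col) > 1
        ) AS dup ON dup.col = t.col;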

    Read the article

  • Is it possible to use Sphinx search with dynamic conditions?

    - by Fedyashev Nikita
    In my web app I need to perform three types of searches on the items table, with the following conditions:

    1. items.is_public = 1 (use the title field for indexing) - a lot of results can be retrieved (cardinality is much higher than in the other cases)
    2. items.category_id = {X} (use the title + private_notes fields for indexing) - usually fewer than 100 results
    3. items.user_id = {X} (use the title + private_notes fields for indexing) - usually fewer than 100 results

    I can't find a way to make Sphinx work in all these cases, but it works well in the 1st case. Should I use Sphinx just for the 1st case and use plain old "slow" FULLTEXT searching in MySQL for the others (at least because of the lower cardinality in cases 2-3)? Or is it just me, and Sphinx can do pretty much everything?
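
    A hedged sphinx.conf fragment (column names are taken from the question; the rest is illustrative and incomplete): index title and private_notes as full-text fields and expose is_public, category_id and user_id as attributes, so each of the three cases becomes the same full-text query plus a filter on the matching attribute (SetFilter() in the client API).

        source items
        {
            # full-text fields come from the SELECT list...
            sql_query     = SELECT id, title, private_notes, \
                                   is_public, category_id, user_id FROM items
            # ...while these columns become filterable attributes
            sql_attr_uint = is_public
            sql_attr_uint = category_id
            sql_attr_uint = user_id
        }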

    Read the article

  • Is use of LEAKS instrument still common on 3G iPhone?

    - by gordonmcdowell
    I'm working with an iPhone 3G, and when I try to investigate memory leaks using the Leaks instrument, my app crashes. It does not crash when Leaks is not used. I'm making no claim to having a bug-free or non-memory-intensive app here, but I'd like to investigate leaks on an actual device. When I'm running Leaks it is incredibly slow. Are there still developers working on the iPhone 3G? I don't want to be the whiny guy blaming his tools, but I'd also like to be sure the whole dev world hasn't moved on to the iPhone 3GS and I'm the only one trying to run both my app and Leaks on a 3G. Currently running iOS 4.0 "gold", on a Snow Leopard dev environment with the latest Xcode.

    Read the article

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an applet that makes a request to a servlet. On the servlet side it uses a PrintWriter to write the response back to the applet: out.println("Field1|Field2|Field3|Field4|Field5......|Field10"); There are about 15000 records, so out.println() gets executed about 15000 times. The problem is that when the applet gets the response from the servlet it takes about 15 minutes to process the records. I added System.out.println calls and saw that processing pauses at around record 5000, then after 15 minutes it continues and finishes. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems the browser/applet is too slow to process the records. Any ideas appreciated. Thanks.
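
    A hedged sketch of the applet side (URL, class and field handling are illustrative): read the servlet's response line by line through a BufferedReader and build each record as it arrives. Accumulating the whole response in a String with += or re-scanning it per record is a frequent cause of exactly this kind of stall once a few thousand rows have arrived, though without the applet code that is only a guess.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;

        class RecordReader {
            void readRecords(String servletUrl) throws Exception {
                BufferedReader in = new BufferedReader(new InputStreamReader(
                        new URL(servletUrl).openConnection().getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split("\\|");   // Field1|Field2|...|Field10
                    // build one record object from fields here
                }
                in.close();
            }
        }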

    Read the article

  • Resetting AUTO_INCREMENT on MyISAM without rebuilding the table

    - by Artem
    Please help, I am in major trouble with our production database. I accidentally inserted a key with a very large value into an auto-increment column, and now I can't seem to change this value without a huge rebuild time.

        ALTER TABLE tracks_copy AUTO_INCREMENT = 661482981

    is super slow. How can I fix this in production? I can't get this to work either (it has no effect):

        myisamchk tracks.MYI --set-auto-increment=661482982

    Any ideas? Basically, no matter what I do I get an overflow:

        SHOW CREATE TABLE tracks
        CREATE TABLE tracks (
          ...
        ) ENGINE=MYISAM AUTO_INCREMENT=2147483648 DEFAULT CHARSET=latin1

    Read the article

  • TextMate/Macfusion combo for mounting projects over SSH

    - by Sam Lee
    Here is my workflow: I use Macfusion to mount a server over SSH, and then edit the root directory of the project in TextMate (using mate /Volumes/server/projectdir). I have a plug-in installed that disables refreshing on refresh. This works ALMOST perfectly; the only thing I have problems with is "Find in Project": it's REALLY slow. Has anyone run into this problem before and been able to find any solutions? Currently I switch to the terminal when I have to do a search, but it would be great to be able to do it in TextMate. Thanks!

    Read the article

  • How does an interpreter switch scope?

    - by Dox
    I'm asking this because I'm relatively new to interpreter development and I wanted to know some basic concepts before reinventing the wheel. I imagined the values of all variables being stored in an array that makes up the current scope; upon entering a function the array is swapped out and the original array put on some sort of stack, and when leaving the function the top element of the "scope stack" is popped off and used again. Is this basically right? Isn't swapping arrays (which means moving around a lot of data) very slow, and therefore not used by modern interpreters?
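
    A small sketch of the usual approach (names are illustrative): keep a stack of environments and push/pop references on call/return, so nothing is copied or swapped - only which frame is "current" changes, and lookups walk outward through the stack.

        class Interpreter:
            def __init__(self):
                self.scopes = [{}]                 # globals at the bottom

            def push_scope(self):
                self.scopes.append({})             # new frame on call: O(1), no copying

            def pop_scope(self):
                self.scopes.pop()                  # discard the frame on return

            def lookup(self, name):
                for scope in reversed(self.scopes):    # innermost first
                    if name in scope:
                        return scope[name]
                raise NameError(name)

            def assign(self, name, value):
                self.scopes[-1][name] = value      # bind in the current frame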

    Read the article

  • SQL Server 2000 stored procedure: prevent parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the sp WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.
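
    Two hedged things that are often tried in this situation (table, column and parameter names are illustrative; neither is guaranteed to be the culprit here): copy the parameters into local variables inside the procedure to sidestep parameter sniffing, which is a common reason the same body runs faster in Query Analyzer, and add a MAXDOP 1 hint to the expensive statement to see whether the parallel plan itself is the problem.

        -- inside the stored procedure body
        DECLARE @LocalDate datetime
        SET @LocalDate = @ParamDate          -- @ParamDate is the real parameter

        SELECT SomeColumn
        FROM dbo.BigTable
        WHERE SomeDate >= @LocalDate
        OPTION (MAXDOP 1)                    -- forces a serial plan for this query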

    Read the article

  • C#/WPF FileSystemWatcher on every extension on every path

    - by BlueMan
    I need a FileSystemWatcher that can observe specific paths and specific extensions. But there could be dozens, hundreds or maybe thousands of paths (I hope not :P), and the same goes for extensions. The paths and extensions are added by the user. Creating hundreds of FileSystemWatcher instances is not a good idea, is it? So, how should I do it? Is it possible to watch every device (HDDs, SD cards, pen drives, etc.)? Would it be efficient? I don't think so... Every change to a Windows log file, every file scanned by an antivirus program - it could really slow down my program with a watcher like that :(
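
    A hedged sketch of one common pattern (names are illustrative): one recursive watcher per ready drive, with the user's path and extension lists applied as an in-process filter on each event, instead of one FileSystemWatcher per path/extension combination. Whether this is efficient enough depends on how chatty the watched drives are, which is exactly the concern raised above.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class WatchService
        {
            readonly HashSet<string> extensions = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
            readonly List<string> paths = new List<string>();
            readonly List<FileSystemWatcher> watchers = new List<FileSystemWatcher>();

            public void Start()
            {
                foreach (var drive in DriveInfo.GetDrives().Where(d => d.IsReady))
                {
                    var w = new FileSystemWatcher(drive.RootDirectory.FullName);
                    w.IncludeSubdirectories = true;
                    w.Changed += OnChanged;
                    w.Created += OnChanged;
                    w.EnableRaisingEvents = true;
                    watchers.Add(w);           // keep a reference so it stays alive
                }
            }

            void OnChanged(object sender, FileSystemEventArgs e)
            {
                bool pathMatch = paths.Any(p => e.FullPath.StartsWith(p, StringComparison.OrdinalIgnoreCase));
                bool extMatch = extensions.Contains(Path.GetExtension(e.FullPath));
                if (pathMatch && extMatch)
                {
                    // handle the event
                }
            }
        }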

    Read the article

  • iPhone webapp: my resources don't get cached

    - by Savageman
    Hello, first of all I'd like to say I'm not using any offline features from HTML5. I have a web application which runs on the iPhone. When viewing it from Safari, everything works quite well. But when I launch the application from the home screen (to remove the navigation bar), it can be really slow. I checked the Apache logs and it appears that Safari does a good job of caching the resources (CSS / JS / images), with Apache answering "304 Not Modified" when needed. However, when the web app runs as a "real" application (navigation bar hidden), those resources don't get cached and the content has to be transferred over and over again (response code 200 OK + content), resulting in significantly slower page loads. How can I prevent this behavior? Do I need to always run my webapp inside Safari, even when it's launched from the home screen? Thank you!
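
    One hedged thing to try on the server (requires mod_expires; the types and lifetimes are illustrative): send explicit expiry headers so the standalone (home-screen) web-app view has a reason to reuse its local copies instead of revalidating or refetching every CSS/JS/image on launch.

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType text/css               "access plus 1 week"
            ExpiresByType application/javascript "access plus 1 week"
            ExpiresByType image/png              "access plus 1 month"
            ExpiresByType image/jpeg             "access plus 1 month"
        </IfModule>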

    Read the article
