Search Results

Search found 10033 results on 402 pages for 'execution speed'.

Page 337/402 | < Previous Page | 333 334 335 336 337 338 339 340 341 342 343 344  | Next Page >

  • How to use setTimeout / .delay() to wait for typing between characters

    - by Darcy
    Hi all, I am creating a simple listbox filter that takes the user input and returns the matching results in a listbox via JavaScript/jQuery (roughly 5000+ items in the listbox). Here is the code snippet: var Listbox1 = $('#Listbox1'); var commands = document.getElementById('DatabaseCommandsHidden'); //using js for speed $('#CommandsFilter').bind('keyup', function() { Listbox1.children().remove(); for (var i = 0; i < commands.options.length; i++) { if (commands.options[i].text.toLowerCase().match($(this).val().toLowerCase())) { Listbox1.append($('<option></option>').val(i).html(commands.options[i].text)); } } }); This works pretty well, but slows down somewhat when the first or second characters are being typed since there are so many items. I thought a solution I could use would be to add a delay to the textbox that prevents the 'keyup' event from being handled until the user stops typing. The problem is, I'm not sure how to do that, or if it's even a good idea. Any suggestions/help is greatly appreciated.

    Read the article
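
    One way to implement the delay described above is a debounce: reset a timer on every keyup and only run the filter once the user has paused. A hedged sketch reusing the Listbox1 and commands variables from the question (the 300 ms delay is an arbitrary choice, and indexOf replaces the original .match() so the typed text is not treated as a regular expression):

```javascript
var filterTimer = null;

$('#CommandsFilter').bind('keyup', function () {
    var textbox = this;
    // Cancel any pending filter run; only the last keyup within 300 ms fires.
    clearTimeout(filterTimer);
    filterTimer = setTimeout(function () {
        var needle = $(textbox).val().toLowerCase();
        Listbox1.children().remove();
        for (var i = 0; i < commands.options.length; i++) {
            if (commands.options[i].text.toLowerCase().indexOf(needle) !== -1) {
                Listbox1.append($('<option></option>').val(i).html(commands.options[i].text));
            }
        }
    }, 300);
});
```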

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8s to 1s. I also use Fluent NHibernate, so the path is more like: ClassMap class definitions in code -> FluentConfiguration -> XML -> NHibernate Configuration -> Configuration serialized to disk. The problem with doing this is that one runs the risk of out-of-date mappings - if I change the mappings but forget to rebuild the serialized configuration, then I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix. Does anybody have any idea how I would be able to detect if my classmaps have changed, so that I could either issue an immediate warning/error or rebuild the configuration on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This will pick up mapping changes, but unfortunately it generates a massive false positive rate, as ANY change to the code results in an out-of-date flag. I can't move the classmaps to another assembly as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?

    Read the article
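
    One idea, not from the original post, is to fingerprint the mappings themselves instead of the whole assembly: export the fluent mappings to XML during the configuration build and keep a hash of that XML next to the serialized Configuration. A hedged C# sketch of just the hash-comparison part; mappingDir, hashFile and the *.hbm.xml naming are assumptions:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class MappingFingerprint
{
    // Combine all exported .hbm.xml files into one SHA-1 fingerprint.
    public static string Compute(string mappingDir)
    {
        var combined = new StringBuilder();
        foreach (var file in Directory.GetFiles(mappingDir, "*.hbm.xml").OrderBy(f => f))
            combined.Append(File.ReadAllText(file));

        using (var sha = SHA1.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(combined.ToString()));
            return BitConverter.ToString(hash);
        }
    }

    // True if the serialized Configuration was built from the same mappings.
    public static bool IsCurrent(string mappingDir, string hashFile)
    {
        return File.Exists(hashFile) && File.ReadAllText(hashFile) == Compute(mappingDir);
    }
}
```

    At startup, a mismatch between Compute(mappingDir) and the stored hash would then trigger a rebuild of the serialized configuration (and a rewrite of hashFile), instead of relying on assembly timestamps.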

  • SQL Server full text query across multiple tables - why so slow?

    - by Mikey Cee
    Hi. I'm trying to understand the performance of an SQL Server 2008 full-text query I am constructing. The following query, using a full-text index, returns the correct results immediately: SELECT O.ID, O.Name FROM dbo.EventOccurrence O WHERE FREETEXT(O.Name, 'query') i.e. all EventOccurrences with the word 'query' in their name. And the following query, using a full-text index from a different table, also returns straight away: SELECT V.ID, V.Name FROM dbo.Venue V WHERE FREETEXT(V.Name, 'query') i.e. all Venues with the word 'query' in their name. But if I try to join the tables and do both full-text queries at once, it takes 12 seconds to return: SELECT O.ID, O.Name FROM dbo.EventOccurrence O INNER JOIN dbo.Event E ON O.EventID = E.ID INNER JOIN dbo.Venue V ON E.VenueID = V.ID WHERE FREETEXT(E.Name, 'search') OR FREETEXT(V.Name, 'search') Here is the execution plan: http://uploadpad.com/files/query.PNG From my reading, I didn't think it was even possible to make a free-text query across multiple tables in this way, so I'm not sure I am understanding this correctly. Note that if I remove the WHERE clause from this last query then it returns all results within a second, so it's definitely the full-text search that is causing the issue here. Can someone explain (i) why this is so slow and (ii) whether this is even supported / whether I am understanding this correctly? Thanks in advance for your help.

    Read the article
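
    One rewrite that is often suggested for OR-ed FREETEXT predicates across joined tables is to evaluate each full-text filter in its own SELECT and combine the results, so each full-text index is used on its own. A hedged sketch of that shape (not a guaranteed fix; comparing the new execution plan against the original is still needed):

```sql
SELECT O.ID, O.Name
FROM dbo.EventOccurrence O
INNER JOIN dbo.Event E ON O.EventID = E.ID
WHERE FREETEXT(E.Name, 'search')

UNION

SELECT O.ID, O.Name
FROM dbo.EventOccurrence O
INNER JOIN dbo.Event E ON O.EventID = E.ID
INNER JOIN dbo.Venue V ON E.VenueID = V.ID
WHERE FREETEXT(V.Name, 'search');
```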

  • Gridview Paging via ObjectDataSource: Why is maximumRows being set to -1?

    - by Bryan
    So before I tried custom gridview paging via ObjectDataSource... I think I read every tutorial known to man just to be sure I got it. It didn't look like rocket science. I've set the AllowPaging = True on my gridview. I've specified PageSize="10" on my gridview. I've set EnablePaging="True" on the ObjectDataSource. I've added the 2 paging parms (maximumRows & startRowIndex) to my business object's select method. I've created an analogous "count" method with the same signature as the select method. The only problem I seem to have is during execution... the ObjectDataSource is supplying my business object with a maximumRows value of -1 and I can't for the life of me figure out why. I've searched to the end of the web for anyone else having this problem and apparently I'm the only one. The StartRowIndex parameter seems to be working just fine. Any ideas?

    Read the article
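
    One thing worth double-checking for the question above is the ObjectDataSource wiring: with EnablePaging="true" the select method's parameter names must match MaximumRowsParameterName and StartRowIndexParameterName (the defaults are maximumRows and startRowIndex), and a SelectCountMethod is needed so the grid knows the total row count; that count method is usually written with the select method's parameters minus the two paging parameters. A hedged markup sketch with hypothetical type and method names:

```aspx
<asp:ObjectDataSource ID="ods" runat="server"
    TypeName="MyApp.ProductRepository"
    SelectMethod="GetProducts"
    SelectCountMethod="GetProductCount"
    EnablePaging="true"
    StartRowIndexParameterName="startRowIndex"
    MaximumRowsParameterName="maximumRows" />

<asp:GridView ID="grid" runat="server"
    DataSourceID="ods" AllowPaging="true" PageSize="10" />
```

    The matching business-object signatures would then be something like GetProducts(int startRowIndex, int maximumRows) and a parameterless GetProductCount() returning the total.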

  • Corrupt UTF-8 Characters with PHP 5.2.10 and MySQL 5.0.81

    - by jkndrkn
    We have an application hosted on both a local development server and a live site. We are experiencing UTF-8 corruption issues and are looking to figure out how to resolve them. The system is run using symfony 1.0 with Propel. On our development server, we are running PHP 5.2.0 and MySQL 5.0.32. We do not experience corrupted UTF-8 characters there. On our live site, PHP 5.2.10 and MySQL 5.0.81 is running. On that server, certain characters such as ô´ and S are corrupted once they are stored in the database. The corrupted characters are showing up as either question marks or approximations of the original character with adjacent question marks. Examples of corruption: Uncorrupted: ô´ Corrupted: ô? Uncorrupted: S Corrupted: ? We are currently using the following techniques on both development and live servers: Executing the following queries prior to execution of any other queries: SET NAMES 'utf8' COLLATE 'utf8_unicode_ci' SET CHARSET 'utf8' Setting the <meta> Content-Type value to: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> Adding the following to our .htaccess file: AddDefaultCharset utf-8 Using mb_* (multibyte) PHP functions where necessary. Being sure to set database columns to use utf8_unicode_ci collation. These techniques are sufficient for our development site, but do not work on the live site. On the live site I've also tried adding mysql_set_encoding('ut8', $mysql_connection) but this does not help either. I have found some evidence that newer versions of PHP and MySQL are mishandling UTF-8 character encodings.

    Read the article
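
    For reference, the mysql extension's call for this is mysql_set_charset() (available from PHP 5.2.3), which sets the connection charset at the client-library level in a way SET NAMES alone does not. A minimal sketch against the classic mysql_* API used in the post; the connection details are placeholders:

```php
<?php
$link = mysql_connect('localhost', 'db_user', 'db_pass');
mysql_select_db('app_db', $link);

// Set the connection charset at the client-library level.
if (!mysql_set_charset('utf8', $link)) {
    die('Unable to set connection charset to utf8');
}

// Verify what the server actually negotiated for this connection.
$res = mysql_query("SHOW VARIABLES LIKE 'character_set%'", $link);
while ($row = mysql_fetch_assoc($res)) {
    echo $row['Variable_name'] . ' = ' . $row['Value'] . "\n";
}
```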

  • JavaScript and rendering pause and stay paused on scroll in the Android browser

    - by user357303
    Hi. I've found some weird behaviour related to scrolling, rendering and JavaScript. How to make it happen: on any webpage that is long enough to scroll on, start to scroll pretty fast (fling the page), then release the touch. Now, while the page is still scrolling because of the momentum, tap the screen to stop the scroll. This makes the browser enter a weird mode. On the Nexus One it behaves like this: the updating of what's shown on the screen stops; you can still click on links and they go to where they are supposed to, but what's shown on the screen stays the same. If you then scroll the screen a bit, the updating of the screen kicks in again and what you were supposed to see all along is shown. On all phones with HTC Sense I've tried (Hero, Desire, Legend) this happens: the updating of the screen is stopped just like on the Nexus One, but the execution of any JavaScript is stopped as well. If you click on a link that takes you to another page, however, things return to normal again. The way I tested this was to create a page like this: http://pastebin.ca/1881620 The changeColor function simply changes the background color of 'container' to a few different colors. So before the error, clicking any link changes the color. After the error this happens: Nexus One: when you click on the links nothing happens (except the "orange link selected rounded corner box thing" is shown as if the link is clicked). Then when you scroll a bit, you can see the color has changed (an equal number of times to the number of times I clicked the link). On Sense: the links take me to google.com Has anyone else noticed this problem? Is there any way to work around it? Thanks.

    Read the article

  • net send command program debugging in C#

    - by riad
    Dear all, I wrote a program to execute the Windows net send command and send messages with it. It works fine for a single recipient, but the problem happens when I want to send a message to many recipients. I use a for loop to take each recipient name from a list box, yet all messages go to the first recipient. The problem happens during process execution; my guess so far is that the process is not cleaned up or has died by that point. Can anybody guide me on how I can send the message to multiple users at a time? My code is below: string sendingMessage = messageRichTextBox.Text; string[] recepentAddressArray = new string[recepentAddressListBox.Items.Count]; for (int j = 0; j < recepentAddressListBox.Items.Count; j++) // Getting address from list box { recepentAddressArray[j] = recepentAddressListBox.Items[j].ToString(); string recepantAddress = recepentAddressArray[j]; try { string strLine = "net send " + recepantAddress + " " + sendingMessage + " >C:netsend.log"; FileStream fs = new FileStream("c:netsend.bat", FileMode.Create, FileAccess.Write); StreamWriter streamWriter = new StreamWriter(fs); streamWriter.BaseStream.Seek(0, SeekOrigin.End); streamWriter.Write(strLine); streamWriter.Flush(); streamWriter.Close(); fs.Close(); Process p = new Process(); p.StartInfo.FileName = "C:netsend.bat"; p.StartInfo.WindowStyle = ProcessWindowStyle.Hidden; p.Start(); p.WaitForExit(); p.Close(); FileStream fsOutput = new FileStream("C:netsend.log", FileMode.Open, FileAccess.Read); StreamReader reader = new StreamReader(fsOutput); reader.BaseStream.Seek(0, SeekOrigin.Begin); string strOut = reader.ReadLine(); reader.Close(); fsOutput.Close(); } catch (Exception) { MessageBox.Show("ERROR"); } }

    Read the article
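
    One simplification worth trying for the loop above is to drop the intermediate batch file and pass the recipient and message straight to net.exe as process arguments, reading its output directly; this also avoids every iteration rewriting the same C:\netsend.bat. A hedged sketch with error handling trimmed:

```csharp
using System.Diagnostics;

foreach (var item in recepentAddressListBox.Items)
{
    string recipient = item.ToString();

    var psi = new ProcessStartInfo("net",
        "send " + recipient + " \"" + messageRichTextBox.Text + "\"")
    {
        UseShellExecute = false,
        RedirectStandardOutput = true,
        CreateNoWindow = true
    };

    using (Process p = Process.Start(psi))
    {
        // Read before WaitForExit to avoid blocking on a full output buffer.
        string output = p.StandardOutput.ReadToEnd();
        p.WaitForExit();
    }
}
```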

  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on a Windows Server 2008 machine. The server has 48 GB RAM and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app beside SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server the performance is great. The server grabs the 40 GB RAM which it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower; the execution time of statements (seen in the profiler) rises slowly. Yet I cannot see anything going wrong on the server: CPU usage is at about 20%, I/O also seems to be no problem, the process monitor does not show any strange apps or anything like that, the event log has no interesting messages, and there are no open transactions or blockings to see. We have already tried the following things without effect: dropped the caches using the statements DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS; restarted the app server we are using; restarted the SQL Server service. But nothing helped except restarting the whole server. Any ideas?

    Read the article
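
    When statement durations creep up while CPU and I/O look healthy, one common next step is to look at wait statistics. A hedged diagnostic sketch for SQL Server 2005; sys.dm_os_wait_stats is cumulative since the last restart, so snapshots taken days apart need to be compared rather than read in isolation:

```sql
-- Top waits accumulated since the last restart (or since the stats were cleared)
SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```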

  • Do you know a date picker to quickly pick one day of the current week?

    - by Murmelschlurmel
    Most date pickers allow you to pick the date from a tiny calendar or enter the date by hand. For example: http://jqueryui.com/demos/datepicker/ This requires - two clicks (one to display the calendar and one to select the correct date) - good eyesight (usually the pop-up calendar is very small) - and good hand-eye coordination to pick the correct date in the tiny calendar with your mouse. That's no problem for power users, but a hassle for older people and computer beginners. I found a website with a different approach. It seems like their users mostly select dates in the current week, so they listed all the days of the week in a bar together with the weekday. The current day is marked in another color. There is a tiny calendar icon on the right-hand side that opens up a regular date picker, which gives you access to all the regular date picker functionality. Here is a screenshot: http://mite.yo.lk/assets/img/tour/de/zeiten-erfassen.png Do you know of any jQuery plugin which has a similar feature? If not, do you know of any other plugin or widget which would help me speed up development? Thank you!

    Read the article
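
    If no existing plugin fits, the week bar itself is small enough to hand-roll. A hedged jQuery sketch that renders the current (Monday-first) week as one-click buttons; the container element, labels and the click handler are placeholders, and the calendar-icon fallback to a normal datepicker is left out:

```javascript
function renderWeekBar($container) {
    var today = new Date();
    // Shift back to Monday of the current week (getDay(): 0 = Sunday).
    var monday = new Date(today);
    monday.setDate(today.getDate() - ((today.getDay() + 6) % 7));

    var names = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'];
    for (var i = 0; i < 7; i++) {
        var day = new Date(monday);
        day.setDate(monday.getDate() + i);
        var label = names[i] + ' ' + day.getDate() + '.' + (day.getMonth() + 1) + '.';
        var $btn = $('<button type="button"></button>').text(label).data('date', day);
        if (day.toDateString() === today.toDateString()) {
            $btn.addClass('today'); // highlight the current day
        }
        $btn.click(function () {
            alert('picked: ' + $(this).data('date')); // replace with real selection logic
        });
        $container.append($btn);
    }
}

// usage: renderWeekBar($('#weekbar'));
```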

  • How can I set a Java applet transparency to show contents behind it?

    - by yuri
    I've got a web page with an applet inside. This applet is a drop target for a drag and drop action from the OS: I simply take an image from a folder, drag it onto the applet and something happens. I gave this webpage to a graphic designer and he asked if he can put an image behind the Java applet so he can simulate changing the background using CSS (it is a skinned app and the graphic design can change during execution). In practice I am supposed to do: <div> <applet width="50" height="50" /> </div> with this CSS: div { width:50px; height:50px; background-image: url(image.jpg) center center no-repeat; } But it doesn't work (the background is opaque). Is it possible to make the applet transparent without losing the drag and drop capabilities? I'm searching for something similar to the Flash wmode parameter. Better solutions imply only changes to the CSS/HTML without recompiling the Java class, so the design team can change the page structure without changing the Java.

    Read the article

  • Is it safe to read regular expressions from a file?

    - by Zilk
    Assuming a Perl script that allows users to specify several text filter expressions in a config file, is there a safe way to let them enter regular expressions as well, without the possibility of unintended side effects or code execution? Without actually parsing the regexes and checking them for problematic constructs, that is. There won't be any substitution, only matching. As an aside, is there a way to test if the specified regex is valid before actually using it? I'd like to issue warnings if something like /foo (bar/ was entered. Thanks, Z. EDIT: Thanks for the very interesting answers. I've since found out that the following dangerous constructs will only be evaluated in regexes if the use re 'eval' pragma is used: (?{code}) (??{code}) ${code} @{code} The default is no re 'eval'; so unless I'm missing something, it should be safe to read regular expressions from a file, with the only check being the eval/catch posted by Axeman. At least I haven't been able to hide anything evil in them in my tests. Thanks again. Z.

    Read the article
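
    For the validity check asked about at the end, the usual idiom is to compile the pattern inside an eval block and inspect $@; this reports syntax errors such as /foo (bar/ without running any match. A minimal sketch:

```perl
use strict;
use warnings;

sub compile_filter {
    my ($pattern) = @_;
    my $rx = eval { qr/$pattern/ };   # compile only; dies inside eval on syntax errors
    if (!defined $rx) {
        warn "Skipping invalid filter '$pattern': $@";
        return;
    }
    return $rx;
}

# usage
my $bad = compile_filter('foo (bar');   # warns about the unmatched parenthesis
my $ok  = compile_filter('foo\s+bar');
print "matched\n" if defined $ok && 'foo  bar' =~ $ok;
```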

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and websockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP child processes. So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP) I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL but in RAM, as an array or object, for best speed. As far as I know there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; on an incoming message, write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but as far as I know there are single-threaded IRC servers (written in C++ or whatever) which are capable of handling thousands of users, so maybe it's also possible in PHP. I would prefer PHP over node.js because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously I'd probably choose it.

    Read the article
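
    As a rough feel for the node.js side, a raw TCP broadcast loop is only a handful of lines with the built-in net module; a real Comet/websocket front end would sit on an http server instead, and there is no authentication or room handling here. A hedged sketch:

```javascript
var net = require('net');
var clients = [];

var server = net.createServer(function (socket) {
    clients.push(socket);

    socket.on('data', function (chunk) {
        // Relay the incoming message to every other connected client.
        clients.forEach(function (other) {
            if (other !== socket) other.write(chunk);
        });
    });

    socket.on('end', function () {
        clients.splice(clients.indexOf(socket), 1);
    });

    socket.on('error', function () { /* drop broken connections silently */ });
});

server.listen(8020); // node multiplexes the connections; callbacks run one at a time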

  • Personalized UIView created with Interface Builder

    - by Malox
    I need to design a personalized UIView with a UIImageView and 3 UILabels. I need to allocate several instances of this view because I want to put them into a UIScrollView. I would like to avoid generating the view programmatically because it's difficult and tedious to design it that way. My idea is to create a new class that extends UIView and design it with Interface Builder. For example, my personalized view code looks like this: #import <UIKit/UIKit.h> @interface PersonalizedPreview : UIView { IBOutlet UIImageView *image; IBOutlet UILabel *first_label; IBOutlet UILabel *second_label; IBOutlet UILabel *third_label; } -(void) setImage:(UIImage *)image; @property (nonatomic, retain) IBOutlet UIImageView *image; @property (nonatomic, retain) IBOutlet UILabel *label; .... @end I would create an associated xib file for this view and initialize it simply by specifying the xib file. Note that I don't want to create a specific ViewController for this view, and PersonalizedView is instantiated at runtime, not when the app launches; moreover, I don't know how many PersonalizedView instances I will create, as it depends on runtime execution. Can anyone help me? Thank you very much.

    Read the article
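
    A view laid out in Interface Builder can be loaded without a dedicated view controller by setting the class of the xib's top-level view to PersonalizedPreview (with the outlets wired to that view) and pulling it out of the nib with NSBundle. A hedged sketch; the xib name PersonalizedPreview.xib, someImage, rowIndex and scrollView are assumptions:

```objc
// Somewhere in the scroll-view setup code, once per row:
NSArray *nibObjects = [[NSBundle mainBundle] loadNibNamed:@"PersonalizedPreview"
                                                    owner:nil
                                                  options:nil];
PersonalizedPreview *preview = [nibObjects objectAtIndex:0];

[preview setImage:someImage];
preview.frame = CGRectMake(0, rowIndex * preview.frame.size.height,
                           preview.frame.size.width, preview.frame.size.height);
[scrollView addSubview:preview];
```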

  • real time stock quotes, StreamReader performance optimization

    - by sean717
    I am working on a program that extracts real-time quotes for 900+ stocks from a website. I use HttpWebRequest to send the HTTP request to the site, store the response in a stream and open that stream using the following code: HttpWebResponse response = (HttpWebResponse)request.GetResponse(); Stream stream = response.GetResponseStream (); StreamReader reader = new StreamReader( stream ) The size of the received HTML is large (5000+ lines), so it takes a long time to parse it and extract the price. For 900 files, it takes about 6 minutes for parsing and extracting, which my boss isn't happy with; he told me he wants the whole process done in TWO minutes. I've identified that the part of the program that takes most of the time is parsing and extracting. I've tried to optimize the code to make it faster; the following is what I have now after some optimization: // skip lines at the top for(int i=0;i<1500;++i) reader.ReadLine(); // read the line that contains the price string theLine = reader.ReadLine(); // ... extract the price from the line Now it takes about 4 minutes to process all the files, so there is still a significant gap to what my boss is expecting. So I am wondering, is there another way I can further speed up the parsing and extracting and have everything done within 2 minutes?

    Read the article
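
    Since most of the six minutes is likely spent waiting on the network rather than in ReadLine, one common way to close the gap is to fetch several quotes concurrently instead of one after another. A hedged .NET 2.0-style sketch using the thread pool; the URL format and the "last_price" marker are placeholders for however the real page identifies the price line:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Threading;

static class QuoteFetcher
{
    // Fetch many quote pages concurrently; the result maps symbol -> raw price line.
    public static Dictionary<string, string> FetchAll(IList<string> symbols)
    {
        var prices = new Dictionary<string, string>();
        int remaining = symbols.Count;
        var allDone = new ManualResetEvent(symbols.Count == 0);

        foreach (string symbol in symbols)
        {
            string s = symbol; // copy for the closure
            ThreadPool.QueueUserWorkItem(delegate
            {
                try
                {
                    var request = WebRequest.Create("http://example.com/quote/" + s);
                    using (var response = request.GetResponse())
                    using (var reader = new StreamReader(response.GetResponseStream()))
                    {
                        string line;
                        while ((line = reader.ReadLine()) != null)
                        {
                            if (line.Contains("last_price")) // stop at the first line holding the price
                            {
                                lock (prices) { prices[s] = line; }
                                break;
                            }
                        }
                    }
                }
                catch (WebException) { /* skip symbols that fail; retry logic omitted */ }
                finally
                {
                    if (Interlocked.Decrement(ref remaining) == 0) allDone.Set();
                }
            });
        }

        allDone.WaitOne();
        return prices;
    }
}
```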

  • Scheduling algorithm optimized to execute during low usage periods.

    - by The Rook
    Let's say there is a web application serving mostly one country. Because of normal sleep habits, website traffic follows a sine wave, where one period lasts 24 hours and the lowest part of the wave is at about midnight. Is there a scheduling algorithm optimized to execute during low usage periods? I am thinking of this as a liquid that is "poured into" this sine wave to flatten out resource usage. An ideal algorithm would take the integral of this empty space. If the same tasks need to be run daily, the amount of resources consumed by previous executions could be used to predict future usage by looking at the rate at which resource usage is increasing. By knowing the amount of resources required, this algorithm could fill in this empty space while leaving as much buffer as possible on either side, such that its interference is reduced as much as possible. It would also be possible to detect that there aren't enough resources before execution begins, which opens the door for a cloud to help out. Does anything like this exist? Or should I build it into an existing scheduler like Quartz and make it open source?

    Read the article
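
    Nothing off-the-shelf seems to do exactly this, but since schedulers like Quartz accept a concrete fire time, a thin layer on top can choose one: keep a per-hour load histogram built from recent traffic, subtract what already-scheduled jobs have claimed, and hand the scheduler the emptiest hour. A hedged Java sketch of just the slot-picking step; the histogram values and the per-job load estimate are assumed to come from elsewhere:

```java
import java.util.Calendar;
import java.util.Date;

public class LowUsageSlotPicker {
    private final double[] expectedLoad = new double[24]; // filled from historical traffic
    private final double[] reservedLoad = new double[24]; // claimed by already-scheduled jobs

    /** Pick the start hour (tomorrow) with the most free capacity and reserve it. */
    public Date pickSlot(double estimatedJobLoad, double capacityPerHour) {
        int bestHour = 0;
        double bestHeadroom = Double.NEGATIVE_INFINITY;
        for (int hour = 0; hour < 24; hour++) {
            double headroom = capacityPerHour - expectedLoad[hour] - reservedLoad[hour];
            if (headroom > bestHeadroom) {
                bestHeadroom = headroom;
                bestHour = hour;
            }
        }
        if (bestHeadroom < estimatedJobLoad) {
            System.out.println("No slot with enough slack; consider extra (cloud) capacity");
        }
        reservedLoad[bestHour] += estimatedJobLoad;

        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.DAY_OF_MONTH, 1);
        cal.set(Calendar.HOUR_OF_DAY, bestHour);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        return cal.getTime();
    }
}
```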

  • How to scale JPEG images with a non-standard sampling factor in Java?

    - by HRJ
    I am using Java AWT for scaling a JPEG image, to create thumbnails. The code works fine when the image has a normal sampling factor ( 2x2,1x1,1x1 ) However, an image which has this sampling factor ( 1x1, 1x1, 1x1 ) creates problem when scaled. The colors get corrupted though the features are recognizable. The original and the thumbnail: The code I am using is roughly equivalent to: static BufferedImage awtScaleImage(BufferedImage image, int maxSize, int hint) { // We use AWT Image scaling because it has far superior quality // compared to JAI scaling. It also performs better (speed)! System.out.println("AWT Scaling image to: " + maxSize); int w = image.getWidth(); int h = image.getHeight(); float scaleFactor = 1.0f; if (w > h) scaleFactor = ((float) maxSize / (float) w); else scaleFactor = ((float) maxSize / (float) h); w = (int)(w * scaleFactor); h = (int)(h * scaleFactor); // since this code can run both headless and in a graphics context // we will just create a standard rgb image here and take the // performance hit in a non-compatible image format if any Image i = image.getScaledInstance(w, h, hint); image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB); Graphics2D g = image.createGraphics(); g.drawImage(i, null, null); g.dispose(); i.flush(); return image; } (Code courtesy of this page ) Is there a better way to do this? Here's a test image with sampling factor of [ 1x1, 1x1, 1x1 ].

    Read the article
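
    One workaround often suggested for JPEGs with unusual sampling is to skip getScaledInstance() and let Graphics2D resample directly into a fresh TYPE_INT_RGB image with an explicit interpolation hint, which also normalizes whatever color model the decoder produced. A hedged sketch using the same scaled width and height the existing code computes:

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

static BufferedImage scaleWithGraphics2D(BufferedImage src, int w, int h) {
    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    Graphics2D g = dst.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                       RenderingHints.VALUE_INTERPOLATION_BILINEAR);
    // Scale straight from the source image; no intermediate getScaledInstance() step.
    g.drawImage(src, 0, 0, w, h, null);
    g.dispose();
    return dst;
}
```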

  • Will Learning C++ Help for Building Fast/No-Additional-Requirements Desktop Applications?

    - by vito
    Will learning C++ help me build native applications with good speed? Will it help me as a programmer, and what are the other benefits? The reason why I want to learn C++ is because I'm disappointed with the UI performances of applications built on top of JVM and .NET. They feel slow, and start slow too. Of course, a really bad programmer can create a slower and sluggish application using C++ too, but I'm not considering that case. One of my favorite Windows utility application is Launchy. And in the Readme.pdf file, the author of the program wrote this: 0.6 This is the first C++ release. As I became frustrated with C#’s large .NET framework requirements and users lack of desire to install it, I decided to switch back to the faster language. I totally agree with the author of Launchy about the .NET framework requirement or even a JRE requirement for desktop applications. Let alone the specific version of them. And some of the best and my favorite desktop applications don't need .NET or Java to run. They just run after installing. Are they mostly built using C++? Is C++ the only option for good and fast GUI based applications? And, I'm also very interested in hearing the other benefits of learning C++.

    Read the article

  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application which has been performing the imports, due to the missing manageddts DLL in SQL Server 2008. It's several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (5 min import time compared to 30+ for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, preferably with minimal SQL features installed. Options I've considered: creating an Access application to connect to both databases (SQL Server and Access) and perform the import (ugh!); revisiting ADO.NET to see if the original implementation was poorly written; or updated SSIS packages. What other technologies should I be considering for this job?

    Read the article
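
    One option that stays in plain ADO.NET but is usually far faster than row-by-row inserts is SqlBulkCopy fed by an OleDb reader over the Access file; it runs fine from a machine with nothing but the .NET Framework installed. A hedged sketch; the connection strings, table names and the Jet provider choice are assumptions to adapt:

```csharp
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

static class AccessImporter
{
    public static void ImportFromAccess(string accessFile, string sqlConnectionString)
    {
        string accessConn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + accessFile;

        using (var source = new OleDbConnection(accessConn))
        using (var destination = new SqlConnection(sqlConnectionString))
        {
            source.Open();
            destination.Open();

            using (var cmd = new OleDbCommand("SELECT * FROM ImportTable", source))
            using (IDataReader reader = cmd.ExecuteReader())
            using (var bulk = new SqlBulkCopy(destination))
            {
                bulk.DestinationTableName = "dbo.ImportTable";
                bulk.BatchSize = 10000;
                bulk.BulkCopyTimeout = 0;   // no timeout for the nightly run
                bulk.WriteToServer(reader); // streams rows straight across
            }
        }
    }
}
```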

  • optimize 2D array in C++

    - by Hristo
    I'm dealing with a 2D array with the following characteristics: const int cols = 500; const int rows = 100; int arr[rows][cols]; I access array arr in the following manner to do some work: for(int k = 0; k < T; ++k) { // for each trainee myscore[k] = 0; for(int i = 0; i < N; ++i) { // for each sample for(int j = 0; j < E[i]; ++j) { // for each expert myscore[k] += delta(i, anotherArray[k][i], arr[j][i]); } } } So I am worried about the array 'arr' and not the other one. I need to make this more cache-friendly and also boost the speed. I was thinking perhaps transposing the array but I wasn't sure how to do that. My implementation turns out to only work for square matrices. How would I make it work for non-square matrices? Also, would mapping the 2D array into a 1D array boost the performance? If so, how would I do that? Finally, any other advice on how else I can optimize this... I've run out of ideas, but I know that arr[j][i] is the place where I need to make changes because I'm accessing columns by columns instead of rows by rows so that is not cache friendly at all. Thanks, Hristo

    Read the article
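
    A sketch of the transpose/1D-mapping idea from the question: store the matrix in a flat vector laid out so that, for a fixed sample i, successive experts j are adjacent in memory; the inner j loop then reads contiguous ints instead of jumping a whole 500-int row per step, and the mapping works for any rectangular shape. This is a hedged illustration of the access pattern, not a drop-in replacement for the full scoring loop:

```cpp
#include <vector>

const int cols = 500;   // i: column index (samples), as in the question
const int rows = 100;   // j: row index (experts)

// One-off transpose of arr into a flat buffer so that, for a fixed i,
// successive j values sit next to each other in memory.
std::vector<int> transpose_flat(const int arr[rows][cols]) {
    std::vector<int> trans(rows * cols);
    for (int j = 0; j < rows; ++j)
        for (int i = 0; i < cols; ++i)
            trans[i * rows + j] = arr[j][i];
    return trans;
}

// In the scoring loop, replace arr[j][i] with flat(trans, i, j):
// the inner j loop now walks contiguous ints instead of striding by cols.
inline int flat(const std::vector<int>& trans, int i, int j) {
    return trans[i * rows + j];
}
```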

  • JavaScript error handling

    - by pankaj
    I have a JavaScript function for checking errors which I am calling on the OnClientClick event of a button. Once it catches an error I want to stop execution of the click event, but in my case it always executes the onclick event. Following is my function: function DisplayError() { if (document.getElementById('<%=txtPassword.ClientID %>').value.length < 6 || document.getElementById('<%=txtPassword.ClientID %>').value.length > 12) { document.getElementById('<%=lblError.ClientID %>').innerText = "Password length must be between 6 to 12 characters"; return false; } var str = <%=PhoneNumber()%>; if(str.length <10) { alert('<%=phoneNum%>'.length); document.getElementById('<%=lblError.ClientID %>').innerText = "Phone Number not in correct format"; return false; } } Button HTML code: <asp:Button runat="server" Text="Submit" ID="btnSubmit" ValidationGroup="submit" onclick="btnSubmit_Click" OnClientClick="DisplayError()"/> It should not execute the button click event once it satisfies any of the IF conditions in the JavaScript function.

    Read the article
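
    The usual reason for this behaviour is that the boolean returned by DisplayError() never reaches the button's client-side onclick, so the postback proceeds regardless. A commonly used pattern is to return the function's result from OnClientClick; a hedged markup sketch:

```aspx
<asp:Button runat="server" Text="Submit" ID="btnSubmit"
    ValidationGroup="submit"
    OnClick="btnSubmit_Click"
    OnClientClick="return DisplayError();" />
```

    With this wiring, DisplayError() also needs an explicit return true; after the last check so that valid input still triggers the server-side click.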

  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors. Context: At about the same time desktop processors introduced out-of-order execution that made performance hard to predict, they, perhaps not coincidentally, also introduced special instructions to get very precise timings. Example of these instructions are rdtsc on x86 and rftb on PowerPC. These instructions gave timings that were more precise than could ever be allowed by a system call, allowed programmers to micro-benchmark their hearts out, for better or for worse. On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when we were explained the alternative solutions. Question: Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow to turn off cores, leaving only one running. I know Mac OS X Leopard does when the right Preference Pane is installed from the Developers Tools. Do you think that this make rdtsc safe to use again? More context: Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that if an optimization's gains cannot be measured by timing the whole application, it's not worth optimizing, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.

    Read the article

  • Problem when getting the pageContent of a URL in Java

    - by tiendv
    Hi all! I have some code to get the page content from a URL; here is the code: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.net.URL; import java.net.URLConnection; public class GetPageFromURLAction extends Thread { public String stringPageContent; public String targerURL; public String getPageContent(String targetURL) throws IOException { String returnString=""; URL urlString = new URL(targetURL); URLConnection openConnection = urlString.openConnection(); String temp; BufferedReader in = new BufferedReader(new InputStreamReader(openConnection.getInputStream())); while ((temp = in.readLine()) != null) { returnString += temp + "\n"; } in.close(); // String nohtml = sb.toString().replaceAll("\\<.*?>",""); return returnString; } public String getStringPageContent() { return stringPageContent; } public void setStringPageContent(String stringPageContent) { this.stringPageContent = stringPageContent; } public String getTargerURL() { return targerURL; } public void setTargerURL(String targerURL) { this.targerURL = targerURL; } @Override public void run() { try { this.stringPageContent=this.getPageContent(targerURL); } catch (IOException e) { e.printStackTrace(); } } } The problems are: 1. Sometimes I receive an error like a 405 or 403 HTTP error and the result string is null. To investigate, I check the permission to connect to the URL, but it usually returns null: URLConnection openConnection = urlString.openConnection(); openConnection.getPermission( ) Does that mean I don't have permission to access the link? 2. To get the result string without HTML tags I do this: String nohtml = sb.toString().replaceAll("\\<.*?>",""); The parameter sb is a StringBuilder, but it can't remove all the HTML tags in the returned string. 3. I use a thread here because I must fetch a lot of URLs, so how can I create multiple threads to improve the speed of the program? Thanks

    Read the article
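
    Two hedged notes on the questions above: 403/405 responses often disappear once the request sends an ordinary User-Agent header (some servers reject Java's default), and rather than one Thread subclass per URL, a fixed-size pool from java.util.concurrent keeps the number of simultaneous connections bounded. A sketch; the pool size, header value and tag-stripping regex are arbitrary choices:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PageFetcher {
    public static void fetchAll(List<String> urls) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(10); // bounded concurrency
        for (final String target : urls) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        URLConnection conn = new URL(target).openConnection();
                        // A browser-like User-Agent avoids some 403/405 responses.
                        conn.setRequestProperty("User-Agent",
                                "Mozilla/5.0 (compatible; PageFetcher/1.0)");
                        StringBuilder page = new StringBuilder();
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(conn.getInputStream(), "UTF-8"));
                        String line;
                        while ((line = in.readLine()) != null) {
                            page.append(line).append('\n');
                        }
                        in.close();
                        // Crude tag stripping, as in the question; a real HTML parser is more robust.
                        String noHtml = page.toString().replaceAll("<[^>]*>", "");
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}
```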

  • Using memcache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now: using memcache for all major queries and leaving it alone to do what it does best; or using memcache for the most commonly retrieved data and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes. What approach should we use? Is it stupid to compromise and combine the two methods? Or should we instead focus on utilizing memcache only and upgrade the memory as the load increases with the number of users? Thanks a lot!

    Read the article
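
    A two-tier lookup of the kind described is only a few lines: check memcache first, fall back to the disk cache, and promote a disk hit back into memcache. A hedged PHP sketch using the pecl Memcache class; the file layout, serialization and TTLs are arbitrary:

```php
<?php
function cache_get($key, Memcache $memcache, $cacheDir, $ttl = 300) {
    // Tier 1: memcache
    $value = $memcache->get($key);
    if ($value !== false) {
        return $value;
    }

    // Tier 2: disk cache
    $file = $cacheDir . '/' . md5($key) . '.cache';
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        $value = unserialize(file_get_contents($file));
        $memcache->set($key, $value, 0, $ttl); // promote back into memcache
        return $value;
    }

    return false; // miss in both tiers: caller rebuilds from MySQL and writes both caches
}
```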

  • What would you write in a Constitution (law book) for a programmers' country? [closed]

    - by Developer Art
    After we have got our great place to talk about professional matters and socialize online - on SO, I believe the next logical step would be to found our own country! I invite you all to participate and bring together your available resources. We will buy an island, or better yet a group of islands, so that we could establish states like .NET territories, Java land, Linux republic etc. We will build a society of programmers (girls - we need you too). On the organizational side, we're going to need some sort of a Constitution or a law book. I suggest we write it together. I'm making it a wiki as it should be a cooperative effort. I'll open the work. Section 1. Programmers' rights. Every citizen has a right to an Internet connection 24/7. Every citizen can freely choose their field of interest. Section 2. Programmers' obligations. Every citizen must embrace the changing nature of the profession and constantly educate himself. Section 3. Law enforcement. Code duplication, when it can be avoided, is punished by limiting the bandwidth speed to 64Kbit for a period of one week. Using ugly hacks instead of refactoring code is punished by cutting the Internet connection for a period of one month. Usage of technologies older than 5/10 years is punished by restricting web access to sites last updated 5/10 years ago for a period of one month. Please feel free to modify and extend the list. We'll need to have it ready before we proceed formally with the country foundation. A purchase fund will be established shortly. Everyone is invited to participate.

    Read the article

  • Beginner Question: To extract a large subset of a table from MySQL, how does Indexing, order of tab

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has 5 columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has 3 columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely: SELECT a.colA, a.colB, a.colC, a.mydata FROM tblA as a INNER JOIN tblB as b ON a.colA=b.colA AND a.colB=b.colB ; It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. ** Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions: 1) What indexes should I create to optimize this query? I think I just need a multi-column index on (colA, colB) for both tables. I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the multi-column index. 2) Is INNER JOIN correct? I just want results where a match is found. 3) Is it faster if I join tblA to tblB, or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that. 4) Does the order of the part after ON matter? This previous answer says that the optimizer also takes care of the execution order.

    Read the article
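
    For question 1 above, the composite indexes would look like the sketch below. With tblB so much smaller, the plan to hope for is a scan of tblB driving index lookups into tblA, so the composite index on tblA(colA, colB) is the one that matters most; the one on tblB is cheap to build and gives the optimizer the reverse join order as an option:

```sql
-- The important one: lets MySQL scan small tblB and probe huge tblA by (colA, colB).
ALTER TABLE tblA ADD INDEX idx_a_colA_colB (colA, colB);

-- Cheap, and keeps the reverse join order available to the optimizer.
ALTER TABLE tblB ADD INDEX idx_b_colA_colB (colA, colB);

-- Check which join order and indexes MySQL actually picks:
EXPLAIN
SELECT a.colA, a.colB, a.colC, a.mydata
FROM tblA AS a
INNER JOIN tblB AS b
        ON a.colA = b.colA
       AND a.colB = b.colB;
```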
