Search Results

Search found 287 results on 12 pages for 'crawling pasta hellion'.


  • SwingWorker in Java (beginner question)

    - by Malachi
    I am relatively new to multi-threading and want to execute a background task using a SwingWorker thread - the method that is called does not actually return anything, but I would like to be notified when it has completed. The code I have so far doesn't appear to be working:

        private void crawl(ActionEvent evt) {
            try {
                SwingWorker<Void, Void> crawler = new SwingWorker<Void, Void>() {
                    @Override
                    protected Void doInBackground() throws Exception {
                        Discoverer discover = new Discoverer();
                        discover.crawl();
                        return null;
                    }

                    @Override
                    protected void done() {
                        JOptionPane.showMessageDialog(jfThis, "Finished Crawling",
                                "Success", JOptionPane.INFORMATION_MESSAGE);
                    }
                };
                crawler.execute();
            } catch (Exception ex) {
                JOptionPane.showMessageDialog(this, ex.getMessage(),
                        "Exception", JOptionPane.ERROR_MESSAGE);
            }
        }

    Any feedback/advice would be greatly appreciated, as multi-threading is a big area of programming that I am weak in.


  • Should a given URI in a RESTful architecture always return the same response?

    - by keithjgrant
    This is kind of a follow-up question to this one. So is having a unique response for any given URI a core tenet of RESTful architecture? A lot of discussion here tends that direction, but I haven't seen it anywhere as a "hard and fast" rule. I understand the value of it (for caching, crawling, passing links, etc.), but I also see things like the Twitter API violate it (a request to http://api.twitter.com/1/statuses/friends_timeline.xml will vary based on the username given), and I understand there are times when it may be necessary--not to mention that a chronologically paged resource will also change as new elements are added. Should I strive to eliminate varied responses from the same URI altogether, or do I just accept that sometimes it isn't practical, and as long as I minimize its occurrence, I'll be in decent shape?
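
    A minimal Python sketch of why this matters for caching (illustrative only, not from the original question; the cache and URL handling are deliberately naive):

        from urllib.request import urlopen

        _cache = {}  # keyed only by the URL, like most shared HTTP caches

        def cached_get(url):
            # If the same URI can return different representations (per user,
            # per request), whatever happened to be fetched first is what every
            # later caller gets -- stale or wrong data, silently.
            if url not in _cache:
                _cache[url] = urlopen(url).read()
            return _cache[url]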


  • Streamlining my javascript with a function

    - by liz
    I have a series of select lists that I am using to populate text boxes with IDs: you click a select option and another text box is filled with its ID. With just one select/ID pair this works fine, but I have multiples, and the only thing that changes is the ID of the select and input. In fact just the ending changes: the inputs all start with featuredproductid and the select IDs all start with recipesproduct, and then both end with the category. I know that listing this over and over for each category is not the way to do it. I think I need to make an array of the categories, var cats = ['olive oil', 'grains', 'pasta'], and then use a forEach function? Maybe? Here is the clunky code:

        window.addEvent('domready', function() {
            $('recipesproductoliveoil').addEvent('change', function(e) {
                pidselected = this.options[this.selectedIndex].getProperty('value');
                $("featuredproductidoliveoil").setProperties({ value: pidselected });
            });
            $('recipesproductgrains').addEvent('change', function(e) {
                pidselected = this.options[this.selectedIndex].getProperty('value');
                $("featuredproductidgrains").setProperties({ value: pidselected });
            });
            $('recipesproductpasta').addEvent('change', function(e) {
                pidselected = this.options[this.selectedIndex].getProperty('value');
                $("featuredproductidpasta").setProperties({ value: pidselected });
            });
            $('recipesproductpantry').addEvent('change', function(e) {
                pidselected = this.options[this.selectedIndex].getProperty('value');
                $("featuredproductidpantry").setProperties({ value: pidselected });
            });
        });

    Keep in mind this is MooTools 1.1 (no, I can't update it, sorry). I am sure this is kind of basic, something I seem to have trouble wrapping my brain around, but I am quite sure doing it as above is not really good...


  • Issues with SharePoint 2010 Development

    - by Rahul Soni
    I am planning to use my laptop for SharePoint 2010 development and I have only 4 GB RAM, which is not even upgradable. Just because of the RAM constraint, my VS 2010 keeps crawling if I try to run it along with SharePoint 2010 on the same machine. Hence, I've reformatted my machine and am looking for alternate solutions until I get a new laptop. Currently, I have installed VS 2010 ONLY on my laptop and wanted to create an empty SharePoint project. Once done with my project, I want to deploy it on a different machine (which is a 4 GB RAM machine as well, but contains only SharePoint 2010). I thought this would work and give me a bit of a breather if everything is configured well. Unfortunately, when I tried creating a new SharePoint Empty Project in VS 2010, it says: "A SharePoint server is not installed on this computer. A SharePoint server must be installed to work with SharePoint projects." Is there a way out?


  • Python urllib3 and how to handle cookie support?

    - by bigredbob
    So I'm looking into urllib3 because it has connection pooling and is thread safe (so performance is better, especially for crawling), but the documentation is... minimal to say the least. urllib2 has build_opener so something like:

        #!/usr/bin/python
        import cookielib, urllib2
        cj = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
        r = opener.open("http://example.com/")

    But urllib3 has no build_opener method, so the only way I have figured out so far is to manually put it in the header:

        #!/usr/bin/python
        import urllib3
        http_pool = urllib3.connection_from_url("http://example.com")
        myheaders = {'Cookie': 'some cookie data'}
        r = http_pool.get_url("http://example.org/", headers=myheaders)

    But I am hoping there is a better way and that one of you can tell me what it is. Also can someone tag this with "urllib3" please.
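
    For what it's worth, a rough sketch of carrying cookies by hand with urllib3 (this uses the newer PoolManager interface rather than the connection_from_url style above, and simply echoes Set-Cookie back, which ignores expiry/domain/path semantics):

        import urllib3

        http = urllib3.PoolManager()

        r1 = http.request("GET", "http://example.com/")
        cookie = r1.headers.get("Set-Cookie")  # naive: keeps attributes like Path and Expires

        headers = {"Cookie": cookie} if cookie else {}
        r2 = http.request("GET", "http://example.com/next-page", headers=headers)

    For full cookie semantics (multiple cookies, expiry, domains), the cookielib/urllib2 pairing shown above is still the simpler route.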


  • Is it better to store serialized data or raw HTML in MySQL?

    - by Yegor
    I de-normalized my database, since the application was crawling otherwise, and I'm storing the list of categories for each item in the DB as a raw HTML version, simply echoing it out in my design. Each category is actually a link, i.e. it includes an anchor tag. Naturally, this is a bit of a pain, especially if I want to change the look of how the category links are displayed, since I have to update all the old cached entries. What if I were to store this data as a serialized array instead, simply unserialize it, and then apply formatting to it in PHP? Would there be a significant performance decrease compared to simply echoing out the raw HTML?
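
    A small Python illustration of the "store data, render at display time" alternative the question is weighing (the original site is PHP; the category values here are made up):

        import json
        import html

        # Store only the category ids/names as structured data...
        stored = json.dumps([{"id": 3, "name": "pasta"}, {"id": 7, "name": "grains"}])

        # ...and build the markup at render time, so restyling the links never
        # means rewriting old cached rows.
        def render_categories(raw):
            return " ".join(
                '<a href="/category/{0}">{1}</a>'.format(c["id"], html.escape(c["name"]))
                for c in json.loads(raw)
            )

        print(render_categories(stored))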


  • Special characters stripped by mySQL/PHP JSON

    - by Will Gill
    Hi, I have a simple PHP script to extract data from a MySQL database and encode it as JSON. The problem is that special characters (for example the German ä or ß characters) are stripped from the JSON response. Everything after the first special character for any single field is just stripped. The fields are set to utf8_bin, and in phpMyAdmin the characters display correctly. The PHP script looks like this:

        <?php
        header("Content-type: application/json; charset=utf-8");

        $con = mysql_connect('database', 'username', 'password');
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("sql01_5789willgil", $con);

        $sql = "SELECT * FROM weightevent";
        $result = mysql_query($sql);
        $row = mysql_fetch_array($result); // note: this fetch discards the first row before the loop below

        $events = array();
        while ($row = mysql_fetch_array($result)) {
            $eventid = $row['eventid'];
            $userid = $row['userid'];
            $weight = $row['weight'];
            $sins = $row['sins'];
            $gooddeeds = $row['gooddeeds'];
            $date = $row['date'];

            $event = array("eventid"=>$eventid, "userid"=>$userid, "weight"=>$weight,
                "sins"=>$sins, "gooddeeds"=>$gooddeeds, "date"=>$date);
            array_push($events, $event);
        }

        $myJSON = json_encode($events);
        echo $myJSON;

        mysql_close($con);
        ?>

    Sample output:

        [{"eventid":"2","userid":"1","weight":"70.1","sins":"Weihnachtspl","gooddeeds":"situps! lots and lots of situps!","date":"2011-01-02"},{"eventid":"3","userid":"2","weight":"69.9","sins":"A second helping of pasta...","gooddeeds":"I ate lots of salad","date":"2011-01-01"}]

    -- in the first record, the value of the 'sins' field should be "Weihnachtsplätzchen". Thanks very much!


  • Does Wicket hamper SEO or search engines ability to crawl?

    - by Nick
    We're coming from GWT projects, and because of SEO problems with GWT we're going to steer clear of it for our next project (mainly because SEO is a high priority this time). In choosing a new framework, I'm looking at Wicket and liking what I've seen so far. I've only done a few tutorials, but looking at the war layout (from these tutorials), it appears that most of the HTML pages are in the WEB-INF folder. Is this going to cause problems for SEO and for search engines crawling through the site's files? Ideally, I'd like to use Wicket with some AJAX and deploy to Google App Engine.


  • How to display recently installed programs and when they were installed?

    - by salvationishere
    I have a Windows XP laptop and I just installed about 12 new programs. Big mistake! Before I installed these programs, my internet connection was running great. But now, after installing them and restarting my laptop, the internet is crawling. How can I see what was changed? Hint: prior to installing these 12 programs, I installed IE version 8. So removing that would probably fix it, but the problem is I need IE in order for my SQL/C# web application to work properly.



  • WebClient.DownloadString() Not Producing Exact HTML

    - by Ryan Fuentes
    So here's the deal. I'm creating a spider bot for a website that scans all the product pages and records the product data. I'm using C# and the WebClient class to download the HTML string. The site I'm crawling must be specially made, because the HTML received from WebClient.DownloadString() is different from the HTML I see when I view the page source in a browser. This seems intentional, because the only info I can't get is the price. Does anyone know a workaround for this problem, or can anyone explain what is happening? Thanks.
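
    A quick way to test one common cause is to compare what the server returns for different User-Agent strings; a rough Python sketch (the URL is a placeholder, and the original post is in C#):

        from urllib.request import Request, urlopen

        URL = "http://example.com/product/123"  # placeholder, not from the post

        def fetch(user_agent):
            req = Request(URL, headers={"User-Agent": user_agent})
            return urlopen(req).read()

        plain = fetch("Python-urllib")
        browserish = fetch("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")

        # If these differ, the site varies its markup by User-Agent; if they match,
        # the price is probably injected by JavaScript after page load, which
        # neither WebClient.DownloadString() nor a plain HTTP fetch will ever see.
        print(len(plain), len(browserish), plain == browserish)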


  • Rewrite this function as DB query?

    - by aLk
    I'm cleaning up my code; should I change the following function to a MySQL query? If so, what would be a nice MySQL query to achieve this functionality?

        public ArrayList getNewTitles(ArrayList candidateTitles, ArrayList existingTitles) {
            ArrayList newTitles = new ArrayList();
            Movie movie = new Movie();
            boolean isNew = true;

            for (int i = 0; i < candidateTitles.size(); i++) {
                for (int j = 0; j < existingTitles.size(); j++) {
                    movie = (Movie) existingTitles.get(j);
                    if (((String) candidateTitles.get(i)).equals(movie.getRawTitle())) {
                        isNew = false;
                    }
                }
                if (isNew == true) {
                    System.out.println("newTitle for crawling: " + (String) candidateTitles.get(i));
                    newTitles.add((String) candidateTitles.get(i));
                } else {
                    System.out.println("candidate binned: " + (String) candidateTitles.get(i));
                }
                isNew = true;
            }
            return newTitles;
        }


  • Ruby execute code in class getting inherited to

    - by AdamB
    I'm trying to be able to have a global exception capture where I can add extra information when an error happens. I have two classes, "crawler" and "amazon". What I want to do is be able to call "crawl", execute a function in amazon, and use the exception handling in the crawl function. Here are the two classes I have:

        require 'mechanize'

        class Crawler
          Mechanize.html_parser = Nokogiri::HTML

          def initialize
            @agent = Mechanize.new
          end

          def crawl
            puts "crawling"
            begin
              # execute code in Amazon class here?
            rescue Exception => e
              puts "Exception: #{e.message}"
              puts "On url: #{@current_url}"
              puts e.backtrace
            end
          end

          def get(url)
            @current_url = url
            @agent.get(url)
          end
        end

        class Amazon < Crawler
          # some code with errors
          def stuff
            page = get("http://www.amazon.com")
            puts page.parser.xpath("//asldkfjasdlkj").first['href']
          end
        end

        a = Amazon.new
        a.crawl

    Is there a way I can call "stuff" inside of "crawl" so I can use that exception handling over the entire stuff function? Is there a better way to accomplish this?


  • Anonymous users support vs Google bot

    - by Andy
    I have a User class in my web app that represents the user currently logged in. Every time a user visits a page, a User instance is populated based on authentication data supplied in cookies. A User instance is created even for an anonymous visitor - and a corresponding new record is created in the User table in the database. This approach allows me to save some state info for the current user regardless of its type. The problem with this approach, however, is the Google bot and other non-human web organisms crawling my pages. Every time a bot starts to walk around the site, thousands of useless records are created in the database, each of them used for only a single page. Question: what is the best trade-off? How do I support anonymous users and save their state, without taking on too much overhead because of cookieless bots?
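
    One common compromise is to skip persisting state for visitors that look like crawlers; a rough Python sketch (the token list, request object, and load_or_create_user helper are all illustrative, not from the original app):

        BOT_TOKENS = ("googlebot", "bingbot", "slurp", "duckduckbot",
                      "baiduspider", "crawler", "spider", "bot/")

        def is_probable_bot(user_agent):
            ua = (user_agent or "").lower()
            return any(token in ua for token in BOT_TOKENS)

        def get_current_user(request, db):
            # Crawlers get a throwaway, in-memory state object; nothing is
            # written to the User table for them.
            if is_probable_bot(request.headers.get("User-Agent", "")):
                return {"anonymous": True, "persisted": False}
            # Real visitors (including anonymous humans) keep the existing
            # cookie-backed behaviour.
            return db.load_or_create_user(request.cookies)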


  • Create an seo and web accessibility analyzer

    - by rebellion
    I'm thinking of making a little web tool for analyzing the search engine optimization and web accessibility of a whole website. First of all, this is just a private tool for now. Crawling a whole website takes up a lot of resources and time. I've found that wget is the best option for downloading the markup for a whole site. I plan on using PHP/MySQL (maybe even CodeIgniter), but I'm not quite sure if that's the right way to do it. There's always someone who recommends Python, Ruby or Perl. I only know PHP and a little bit of Rails. I've also found a great HTML DOM parser class in PHP on SourceForge. But the thing is, I need some feedback on what I should and should not do - everything from how I should run the crawl process to what I should be checking for with regard to SEO and WCAG. So, what comes to your mind when you hear this?
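
    To make the "what should I check" part concrete, here is a minimal Python sketch of a per-page check (the question's stack is PHP/CodeIgniter, so this is only an illustration); it flags a missing <title> and images without alt text, two of the most basic SEO/WCAG items:

        from html.parser import HTMLParser
        from urllib.request import urlopen

        class SeoCheck(HTMLParser):
            def __init__(self):
                super().__init__()
                self.in_title = False
                self.title = ""
                self.images_missing_alt = 0

            def handle_starttag(self, tag, attrs):
                if tag == "title":
                    self.in_title = True
                elif tag == "img" and "alt" not in dict(attrs):
                    self.images_missing_alt += 1

            def handle_endtag(self, tag):
                if tag == "title":
                    self.in_title = False

            def handle_data(self, data):
                if self.in_title:
                    self.title += data

        page = urlopen("http://example.com/").read().decode("utf-8", "replace")
        check = SeoCheck()
        check.feed(page)
        print("title:", check.title.strip() or "(missing)")
        print("images without alt:", check.images_missing_alt)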


  • Symfony: Pre filtering submitted values before/after validation

    - by Rob
    Hi all, I've been scouring the net and I have found nothing! I am using Symfony's form framework to build a simple 'Create' form. Validation is fine. However, I'd like to pre-filter my submitted values, applying ucfirst, strtoupper, and the like. I'm not sure if I'm missing something crucial here, but the way I see it, the only way to do this would be to create my own custom validators and use the doClean method, which seems daft since I'd have hundreds of validators, one for each PHP function! Hope you guys can help; I've been crawling through source code, APIs, numerous books and blogs and I haven't found a thing :( Either it's impossible or it's really easy; I hope it's the latter!


  • How do I run multiple ruby scripts sequentially on my local machine?

    - by marcamillion
    I have about 5 or 6 ruby scripts I want to run, right after each other. These are all on my local machine (OS X) and won't be run on a server. Each takes about 15 minutes to run, and I don't want to have to wait for each one to finish before running the others manually. Without using something as heavy as delayed_job or some other queueing gem, how can I achieve this? Or should I go through the hassle of setting up sidekiq or something else? Thanks. P.S. It would be nice to restart the script if one of them times out (I am doing web crawling, so keeping an HTTP connection open sometimes gives me issues) - which happens occasionally.
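
    A minimal sketch of the "no queueing gem" approach, shown here in Python with the standard subprocess module (the script names and the 30-minute limit are placeholders): each script runs in order, gets a time limit, and is retried once if it hangs.

        import subprocess

        scripts = ["crawl_a.rb", "crawl_b.rb", "crawl_c.rb"]

        for script in scripts:
            for attempt in (1, 2):
                try:
                    subprocess.run(["ruby", script], check=True, timeout=30 * 60)
                    break                      # finished cleanly, move on
                except subprocess.TimeoutExpired:
                    print(script, "timed out on attempt", attempt)
                except subprocess.CalledProcessError as err:
                    print(script, "failed:", err)
                    break                      # a real error; don't blindly retry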


  • Hiding part of a page from Search Server 2010 Express

    - by Jonathan
    I'm working on a soon-to-be-public-facing site, and we want to have our search live on day 1, and want it to be searchable but non-public during testing, so we're planning to use something whose crawling we can control -- Search Server 2010 Express. However, if I search for something in my top navigation bar, I get nearly every page as a hit. It kinda makes sense, as every page has that content, but it's completely irrelevant on most pages. I want it to crawl through my navigation, but ignore the text within the navigation for search results. I was hoping that it'd just figure that out on its own (the HTML for the top nav is static), but apparently it doesn't. Is there some standard thing I can put in my HTML that will achieve the effect I'm going for? On a side note: when I go live, will I have the same problem with public search engines, or do they tend to be smarter?


  • IE Hanging on jQuery code

    - by OrangeRind
    Here's another clichéd problem, but I couldn't find an exact match to this. I haven't posted any source here, as you can freely see all that is there on the link. :-)
    Statement: I have a web page at http://agrimgupta.com/antaragni/
    Disclaimer: Pardon me for the pathetic coding on that page. ;-) It was done in a very short interval. Improvements will be done at a later stage.
    Observation: This page functions normally on my localhost in all browsers.
    Problem: IE 8 is crawling (nearly hanging) while loading this page from the website, although it works fine on localhost. When on the website, it fails to render the mouseover effects, taking what seems like almost a minute to do them.
    Question: How do I resolve this hang in IE? It is necessary to resolve this. Thanks in advance.


  • Archlinux/atheros WLAN configuration troubles

    - by GrinReaper
    I'm trying to configure Arch Linux to use my wireless network adapter. It's quite troublesome. From what I've gathered, it's an Atheros network adapter, using the ath5k driver/module... I can't get it to work; any ideas? Here's some of the output from my tinkering:

        # lspci | grep -i net
        00:0a.0 Ethernet controller: nVidia Corporation MCP67 Ethernet (reva2)
        03:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev01)

        # lsusb
        ...
        Bus 004 Device 003: ID 03f0:17d Hewlett Packard Wireless (Bluetooth + WLAN Interface [Integrated Module]

        # ping -c 3 www.google.com
        ping: unknown host www.google.com

        # ping -c 3 8.8.8.8
        ping: network is unreachable

        # lspci -v
        03:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev01)
        ...
        Kernel driver in use: ath5k
        Kernel modules: ath5k

        # dmesg | grep ath5k
        registered as phy0
        registered led device
        ath5k: atheros chip found
        PCI INT A disabled
        registered led device
        registered as phy1

        # ip addr | sed '/^[0-9]/!d;s/: <.*$//'
        1: lo
        2: eth1
        3: eth0

        # ip link set <interface> up/down
        RNETLINK answers: Operation not possible due to RF-kill

    Also, is there a way to dump text from the command line to a text file so I can just copy pasta? Sorry, first time using a Linux distro...

    EDIT: So I just tried this (I actually did it twice; I can't tell which setting is on/off for my wireless adapter - the lights are blue all the time now):

        # rfkill list
        0: hp-wifi: wireless lan        softblocked: no    hardblocked: yes
        1: hp-bluetooth: bluetooth      softblocked: no    hardblocked: yes
        3: phy1: wireless lan           softblocked: no    hardblocked: yes

        # rfkill list
        0: hp-wifi: wireless lan        softblocked: no    hardblocked: no
        1: hp-bluetooth: bluetooth      softblocked: no    hardblocked: no
        3: phy1: wireless lan           softblocked: no    hardblocked: yes
        7: hci0: bluetooth
        0: hp-wifi: wireless lan        softblocked: no    hardblocked: no

    I've dug around some other articles and it seems like ath5k is supposed to be preferable to madwifi, so should I be using madwifi? I'm 99% sure I disabled the hardblock (by turning it ON) but, as shown above, phy1 wireless lan is STILL hardblocked. What gives? Maybe I've made some more fundamental error in a basic config file?

    EDIT: I've fixed the hardblock. I've tried pinging www.google.com, but to no avail. I get:

        ping: unknown host www.google.com

    In the Arch wiki: Edit /etc/hosts and add the same HOSTNAME you entered in /etc/rc.conf:

        127.0.0.1 archlinux.domain.org localhost.localdomain localhost archlinux

    To my understanding, the hostname is just user-specified and based on preference(?) My /etc/rc.conf:

        HOSTNAME="gestalt"

    My /etc/hosts:

        127.0.0.1 localhost.localdomain localhost gestalt

    but should it be the following?

        120.0.0.1 localhost.domain.org localhost.localdomain localhost gestalt


  • The Sitemap Paradox

    - by Jeff Atwood
    We use a sitemap on Stack Overflow, but I have mixed feelings about it. Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site. Based on our two years' experience with sitemaps, there's something fundamentally paradoxical about the sitemap: Sitemaps are intended for sites that are hard to crawl properly. If Google can't successfully crawl your site to find a link, but is able to find it in the sitemap it gives the sitemap link no weight and will not index it! That's the sitemap paradox -- if your site isn't being properly crawled (for whatever reason), using a sitemap will not help you! Google goes out of their way to make no sitemap guarantees: "We cannot make any predictions or guarantees about when or if your URLs will be crawled or added to our index" citation "We don't guarantee that we'll crawl or index all of your URLs. For example, we won't crawl or index image URLs contained in your Sitemap." citation "submitting a Sitemap doesn't guarantee that all pages of your site will be crawled or included in our search results" citation Given that links found in sitemaps are merely recommendations, whereas links found on your own website proper are considered canonical ... it seems the only logical thing to do is avoid having a sitemap and make damn sure that Google and any other search engine can properly spider your site using the plain old standard web pages everyone else sees. By the time you have done that, and are getting spidered nice and thoroughly so Google can see that your own site links to these pages, and would be willing to crawl the links -- uh, why do we need a sitemap, again? The sitemap can be actively harmful, because it distracts you from ensuring that search engine spiders are able to successfully crawl your whole site. "Oh, it doesn't matter if the crawler can see it, we'll just slap those links in the sitemap!" Reality is quite the opposite in our experience. That seems more than a little ironic considering sitemaps were intended for sites that have a very deep collection of links or complex UI that may be hard to spider. In our experience, the sitemap does not help, because if Google can't find the link on your site proper, it won't index it from the sitemap anyway. We've seen this proven time and time again with Stack Overflow questions. Am I wrong? Do sitemaps make sense, and we're somehow just using them incorrectly?
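
    For reference, the protocol being discussed is tiny; a minimal Python sketch that emits a urlset in the sitemaps.org 0.9 format (URLs and dates are made up, not from Stack Overflow):

        urls = [
            ("https://example.com/questions/1", "2010-01-15"),
            ("https://example.com/questions/2", "2010-02-03"),
        ]

        entries = "".join(
            "<url><loc>{0}</loc><lastmod>{1}</lastmod></url>".format(loc, lastmod)
            for loc, lastmod in urls
        )
        sitemap = (
            '<?xml version="1.0" encoding="UTF-8"?>'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
            + entries + "</urlset>"
        )
        print(sitemap)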


  • Site Search Engine for 1,000 page website

    - by Ian
    I manage a website with about 1,000 articles that need to be searchable by my members. The site search engines I've tried all had their own problems: Fluid Dynamics Search Engine Since it's written in perl, it was a bit hacky to integrate with my PHP-based CMS. I basically had to file_get_contents the search results page. However, FDSE had the best search results. Google CSE Ugh, the search results SUCK. It can't find documents even using unique strings. I'm so surprised that a Google search product is this bad. Nor can I get any answers on their 'help' forums, and I am a paying user. Boo, Google. Boo. Sphider Again, bad search results. Unable to locate some phrases used in link text. Better results than Google CSE though. Shame on Google that a free PHP script has better search results than their paid application. IndexTank This one looked really promising. I got all set up with their PHP API client. But it would only randomly add articles that I submitted. Out of 700+ articles I pushed to the index through their API, only 8 made it in. Unable to find any help on this subject. Update for IndexTank -- Got the above issue fixed, so this looks most promising so far. The site itself runs on php/mysql and FreeBSD, though this shouldn't matter for a web crawling indexer. I've looked at Lucene, but I don't know anything about Java or installing Java programs on my web server. I also do not have root access on my web server, if this would be required for installation. I really don't need a lot of fancy features. It just needs to be able to crawl my web site and return great (even decent!) search results. I don't need any crazy search operators. It doesn't need to index off my primary domain. It just needs to work! Thanks, Hive Mind!
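
    For a site of this size (about 1,000 articles), the core of a search engine is small; a toy Python sketch of an inverted index built from crawled pages (URLs and text are made up), just to show what the products above do under the hood:

        import re
        from collections import defaultdict

        index = defaultdict(set)          # term -> set of page URLs
        pages = {
            "/articles/pasta": "Fresh pasta keeps for two days",
            "/articles/grains": "Whole grains and pasta compared",
        }

        for url, text in pages.items():
            for term in re.findall(r"[a-z0-9]+", text.lower()):
                index[term].add(url)

        def search(query):
            terms = re.findall(r"[a-z0-9]+", query.lower())
            if not terms:
                return set()
            results = index[terms[0]].copy()
            for term in terms[1:]:
                results &= index[term]     # pages containing every query term
            return results

        print(search("pasta grains"))      # {'/articles/grains'}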


  • Live CD / Live USB much faster than full install

    - by user29347
    I've observed it on both laptops I own! HP Compaq nx6125 and Ubuntu 11.04 x64 - somewhat solved Lenovo Thinkpad T500 and Ubuntu 11.10 x64 - help needed! I'm still struggling with the Thinkpad to get performance level similar to that of 10 y.o. laptops... All in all a really serious issue with multiple versions of Ubuntu that renders computers with perfectly compatible hardware unusable, as far as out of the box experience is concerned. Troubleshooting resultant issues seems to be a hard case even for users with some experience with installing graphics drivers. EDIT: I can't really post additional details. Two different ubuntu versions, two laptops, two different set of graph. drivers (OS vs ATI prop.) - all with the same symptoms. Also I can't stress enough how massive the performance degradation is compared to a healthy system. For that reason I ask for input from people who may know roughly what are we dealing with here. I can post more details if we were to focus on my current Thinkpad T500. In that case my current system details: Lenovo Thinkpad T500 Ubuntu 11.10 x64 ATI Mobility Radeon HD 3650 (also see the "What I have already tried" section about Intel graphics tested) ATI Catalyst 11.10 drivers OCZ Agility 3 SSD but! same with the default driver for ATI the card same with the prop. driver for the ATI card from Jockey (Additional drivers applet) What I have already tried: 0. Switching to Intel integrated card (Intel GMA 4500M HD) with the default driver - same effects = may indicate not driver related problem but a problem with something of global influence like e.g. nomodeset or other I don't even know about. (What you can read above) ATI Catalyst 11.10 and radeon.modeset=0 boot parameter + disabled Wait for VBlank. Unity 2D Ubuntu 10.04 LTS tested (ubuntu-10.04.3-desktop-i386.iso): Both live USB and installed version blazing fast! (on the default drivers - without even installing the proprietary fglrx drivers). re2 a) seems to give me the only significant results (still poor) - perfect Unity elements performance with the same crawling stuttering/lagging when dragging windows around. re2 b) this happens often http://i17.photobucket.com/albums/b68/Bucic/ubuntuforumsorg/Screenshotat2011-10-28083140.png re2 c) Sometimes I am able to witness a normal performance when dragging a window around but only for a second or two. When I try to shake it longer it starts to lag and it will keep lagging like that with an increased probability of what you see in the sshot in point re2 b). re2 d) I can't establish the radeon.modeset=0 influence though. Once it seems to work be smooth with it, the other time - without it. Really can't tell.


  • How do multiple displays work on a AMD 785G / ATI HD 4200 motherboard?

    - by aireq
    I just ordered a ASUS M4A785TD-V EVO which has the AMD 785G chipset and HD4200 integrated graphic. The board has VGA, DVI, and HDMI outputs. I'm wondering how many outputs I can run at once, and from what connectors? My guess is that I can only use the VGA, and either the DVI or the HDMI in a dual setup. But not the HDMI and the DVI at the same time. Is this correct? If I have devices plugged into both the HDMI and the DVI ports is there a way to choose between which port I want to use? I have a dual 19" monitor setup, as well as a LCD TV. I'd like to run the VGA and the DVI into my two monitors, and then the HDMI to my TV. Then when I want to watch something on the TV I'd like to be able to switch over from the DVI to the HDMI. Is this possible with out crawling under my desk and unplugging/plugging things in? Update I found the following in the manual off ASUS's website, which confirms my original suspicion that HDMI and DVI can't be used at the same time. But I'd still like to know if it's possible through software to switch between using the HDMI and DVI.


  • Googlebot repeatedly looks for files that aren't on my server

    - by John at CashCommons
    I'm hosting a site for a volunteer organization. I've moved the site to WordPress, but it wasn't always that way. I suspect at one point it was hacked badly. My Apache error log file has grown to 122 kB in just the past 18 hours. The large majority of the errors logged are of this form -- it's repeated hundreds of times today alone in my log files: [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/calendar.php [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/404.shtml (I verified that xx.xxx.xx.xxx was a Google server.) I suspect there was a security hole somewhere before, likely in calendar.php, that was exploited. The files don't exist anymore, but there may be many backlinks that exist that reference here, hence why googlebot is so interested in crawling them. How do I fix this gracefully? I still would like Google to index the site. I just want to tell it somehow not to look for these files anymore.
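
    A small Python sketch for the first step: confirm exactly which missing paths crawlers keep requesting before blocking them in robots.txt or answering them with 410 Gone (the log filename and Apache error format here are generic, not taken from the actual server):

        import re
        from collections import Counter

        pattern = re.compile(r"File does not exist: (\S+)")
        missing = Counter()

        with open("error_log") as log:
            for line in log:
                match = pattern.search(line)
                if match:
                    missing[match.group(1)] += 1

        # The ten most-requested missing files, with hit counts.
        for path, hits in missing.most_common(10):
            print(hits, path)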

