Search Results

Search found 446 results on 18 pages for 'crawl'.

Page 13/18 | < Previous Page | 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • How will closures in Java impact the Java Community?

    - by Ryan Delucchi
    It is one of the most talked-about features planned for Java: closures. Many of us have been longing for them. Some of us (including me) have grown a bit impatient and have turned to scripting languages to fill the void. But once closures have finally arrived in Java, how will they affect the Java community? Will the advancement of VM-targeted scripting languages slow to a crawl, stay the same, or accelerate? Will people flock to the new closure syntax, turning Java code-bases everywhere into more functionally structured implementations? Or will we only see closures sprinkled throughout Java code? What will be the effect on tool/IDE support? How about performance? And finally, what will it mean for Java's continued adoption, as a language, compared with other languages that are rising in popularity? To give an example from one of the latest proposed Java closure syntax specs:

        public interface StringOperation {
            String invoke(String s);
        }

        // ...
        (new StringOperation() {
            public String invoke(String s) {
                return new StringBuilder(s).reverse().toString();
            }
        }).invoke("abcd");

    would become:

        String reversed = { String s => new StringBuilder(s).reverse().toString() }.invoke("abcd");

    [source: http://tronicek.blogspot.com/2007/12/closures-closure-is-form-of-anonymous_28.html]
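
    (For reference, closures did eventually land in Java 8 as lambda expressions, and a single-method interface like the StringOperation above is exactly what a lambda binds to. A minimal sketch of the same example in that syntax:)

        // Java 8+: StringOperation qualifies as a functional interface,
        // so a lambda can stand in for the anonymous class above.
        StringOperation reverse = s -> new StringBuilder(s).reverse().toString();
        String reversed = reverse.invoke("abcd");  // "dcba"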

    Read the article

  • How do I stop Ubuntu waiting for me to "Press S to skip or C to continue..." when it fails to mount on startup?

    - by Jon Cage
    I've been having some issues with my RAID setup recently on my headless Ubuntu 10.04 server, which means one of my mount requests is failing on bootup. Clearly I need to fix the RAID issue, but this machine is in my loft, and having to crawl up there with a keyboard just so I can hit S a few times is extremely irritating. The exact message is as follows:

        The disk drive for /drv/photos is not ready yet or not present
        Continue to wait; or Press S to skip mounting or M for manual recovery

    I'd still like Ubuntu to try and mount it, but is there any way to tell it not to wait for the device?
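
    (For what it's worth: on Ubuntu 10.04 the boot-time wait can usually be suppressed per filesystem with the nobootwait mount option in /etc/fstab; the device name and filesystem type below are assumptions.)

        # /etc/fstab -- still mount /drv/photos when present,
        # but don't block booting if the array isn't ready
        /dev/md0  /drv/photos  ext3  defaults,nobootwait  0  2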

    Read the article

  • How to switch easily between a 5.1 headset and 2.1 speakers?

    - by Wookai
    I have a 5.1 headset that I mostly use for gaming and watching movies. It is connected using 3 jacks to the rear panel of my motherboard (plus one for the microphone). I also have some small 2.1 speakers that I sometimes use to listen to music or to share a video/movie with someone. For now, what I do is crawl under my desk and switch the cables. As my motherboard has 7.1 capabilities, I am wondering if there could be a way of having all 4 jack cables always plugged in and switching which device the sound is output to programmatically. Has anybody had a similar problem, and how did you solve it?

    Read the article

  • Recommend web file sharing software, please

    - by Baczek
    I'm looking for a web platform to put company files on. My requirements: it should be accessible via a browser; it should be open source; it must be installable (Dropbox is a no-go); it must support putting an access time limit on a file; it must garbage-collect files automatically after they expire; it must be able to mark files as public or private; and an option to protect a file with a PIN code, for users without accounts in the system, would be nice to have. The problem is I don't even know what to search for: all my googling results in either complete groupware solutions or P2P file sharing software. If such a thing doesn't exist, please don't hesitate to say so, so I can crawl to a corner and cry myself to sleep. TIA

    Read the article

  • Routing / binding 128 IPs to one server

    - by Andrew
    I have an Ubuntu server with 128 IPs (static external IPs, 86.xx.xx.16), and I want to crawl pages through different IPs. The gateway is xx.xxx.xxx.1, the main IP is xx.xxx.xxx.16, and the other 128 IPs are xx.xxx.xxx.129/255. I tried this configuration in /etc/network/interfaces but it doesn't work. It works if I remove the gateway from the aliases eth0:0 and eth0:1, so I think this is a routing problem:

        auto lo
        iface lo inet loopback

        auto eth0
        auto eth0:0
        auto eth0:1

        iface eth0 inet static
            address xx.xxx.xxx.16
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:0 inet static
            address xx.xxx.xxx.129
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:1 inet static
            address xx.xxx.xxx.130
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

    Also, please tell me how to "reset" all the changes that I made to networking and routing. Thank you
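
    (A hedged note on the likely cause: in a Debian/Ubuntu interfaces file the default gateway is normally declared only once, on the primary interface; aliases on the same subnet don't need, and shouldn't repeat, a gateway line. A sketch of that shape, using the addresses from the question:)

        auto eth0
        iface eth0 inet static
            address xx.xxx.xxx.16
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        auto eth0:0
        iface eth0:0 inet static
            address xx.xxx.xxx.129
            netmask 255.255.255.128

        auto eth0:1
        iface eth0:1 inet static
            address xx.xxx.xxx.130
            netmask 255.255.255.128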

    Read the article

  • Why won't Windows use the other CPU cores?

    - by revloc02
    In Windows Task Manager the Performance tab shows the first CPU maxed out while the other 7 just idle along with the occasional spike. What gives? More info: I've got 8 GB of RAM and only 4.5 GB are being used. The Processes tab has no indication of any process hogging processing power; in fact, System Idle Process is at 98-99. When I program stuff and have 8 to 12 applications going (several directly unrelated to programming, of course), my computer slows to a crawl. System info: Intel Core i7-2600K processor (quad-core with hyper-threading), 8 GB RAM, Intel BOXDZ68BC LGA 1155 motherboard, 500 GB HDD

    Read the article

  • Is full partition encryption the only sure way to make Ubuntu safe from external access?

    - by fred.bear
    (By "external access", I mean eg. via a Live CD, or another OS on the same dual-boot machine) A friend wants to try Ubuntu. He's fed up with Vista grinding to a crawl (the kids? :), so he likes the "potential" security offered by Ubuntu, but because the computer will be multi-booting Ubuntu (primary) and 2 Vistas (one for him, if he ever needs it again, and the other one for the kids to screw up (again). However, he is concerned about any non-Ubuntu access to the Ubuntu partitions (and also to his Vista partition)... I believe TrueCrypt will do the job for his Vista, but I'd like to know what the best encryption system for Ubuntu is... If TrueCrypt works for Ubuntu, it may be the best option for him, as it would be the same look and feel for both. Ubuntu will be installed with 3 partitions; 1) root 2) home 3) swap.. Will Ubuntu's boot loader clash with TrueCrypt's encrypted partition? PS.. Is encryption a suitable solution?

    Read the article

  • My router is fast when I reset it but slows down seconds later

    - by hglocke
    I have a Belkin N wireless router which until recently worked perfectly fine. Now I have to reset the router every few minutes, otherwise it slows down to a crawl. What can I do? I have tried turning the router's firewall off, but it does not make any difference. As far as I'm aware there have been no recent firmware updates. EDIT: The other devices on my network (laptop and iPhone) do not have this problem. I connect to the router using a TP-Link wireless network card and I have already tried uninstalling and reinstalling the driver. Hopefully this narrows down the problem significantly.

    Read the article

  • Synergy locks up Windows 7 when Visual Studio is debugging

    - by EdK
    I love Synergy, but as a developer this is driving me crazy. I use Synergy across two x64 Windows 7 machines (with all flavors of Visual Studio from 2003 to 2010 Professional) and a Mac OS 10.6 (?) desktop, and most of the time it works flawlessly. However, if I happen to be in the middle of a transition from one Windows 7 machine to the other (it's never happened with the Mac, but I don't flip to it that often) when Visual Studio hits a breakpoint, the mouse and keyboard both completely lock up, and the only way I can seem to do anything is to physically unplug the mouse and keyboard and plug them back in. Unfortunately I have to crawl under my desk to do that, so you can see where it'd be annoying. Anybody have any idea how I can get around this? I did note that it was much more frequent with the previous version of Synergy+ I had, before I upgraded/sidegraded to the current version of Synergy. But it's still happening. Thanks a lot,

    Read the article

  • Google Chrome not using local cache

    - by Steve
    Hi. I've been using Google Chrome as a substitute for Firefox, which can't handle having lots of tabs open at the same time. Unfortunately, it looks like Chrome is having the same problem. Freakin' useless. I had to end Chrome as my whole system had slowed to a crawl. When I restarted it, I opted to restore the tabs that were last open. At this stage, every one of the 20+ tabs started downloading the pages they had previously had open. My question is: why can't they open a locally stored/saved copy of the web page from cache? Does Google Chrome store pages in a cache? Also: after most of the pages had completed their downloading, I clicked on each tab to view the page. Half of them only display a white page, and I have to reload the page manually. What is causing this? Thanks for your help.

    Read the article

  • 2-year CIS degree and in school for computer science: what can I do?

    - by chame1eon
    Hi, I am 29 and have a recent 2-year CIS degree from a community college, an A+ certification, and meager experience with web stuff (Java, JavaScript, PHP) from my 1-year help desk internship. In all the programming classes I was able to blow through the homework easily, even while other students were panicking and dropping. I think I have managed to avoid the most atrocious noob/self-taught mistakes (spaghetti code, etc.) by just doing research before starting something and trying to keep good design in mind. Even so, I'd have to make heavy use of references to crawl through even simple projects that would result in fully finished, useful applications. I need a job now, and I am tired of the slow pace of the classes and would love to get any kind of practical experience I could. The problem is that I am not sure what I should be trying to do. I have a very strong preference for application programming, or at least anything light on design and preferably pretty low level. If I can't do that, then anything technology related, for example help desk, would be better than nothing. I live near Raleigh, NC. Am I qualified for anything that could contribute to coding (C++ or Java) experience, or even web development, though I don't really like it? Would web development experience help? If not, is there anything I could read or do that could help? Is the help desk my only choice? If it is, are there any relatively quick certifications or anything similar that would help while I am waiting? Sorry about the long multi-part question. Thanks for reading.

    Read the article

  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content in the old page?

    - by user2178914
    We redirected all the old URLs to new ones properly using .htaccess. The problem is that Google, somehow, is still finding content in the old page (which it shouldn't) and stores it in the cache rather than under the new URL. For example:

        Old page: http://www.natures-energies.com/iching.htm
        New page: http://www.natures-energies.com/index.php?option=com_content&view=article&id=760

    If you type the old URL into the browser, it redirects. If you fetch the old URL as Googlebot in Webmaster Tools, the header says 301/permanently redirected. If I crawl as any other bot it still says 301 redirected. Even if you click the old link in Google it redirects to the new URL. Only in its cache does it show the old URL, and moreover it shows the new content in it! I am stumped as to how Google manages to grab the new content and put it under the old URL instead of the new one! One more interesting thing: if I try a cache for the new page, it shows the cache of the new content with the old URL! Any help would be appreciated; I am at my wits' end. I think I have tried almost everything. Is there anything that I'm missing? You can use this search to find the old URLs; maybe you'll spot some patterns that I missed: site:www.natures-energies.com inurl:htm -inurl:https|index
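
    (For reference, a 301 of the kind described is typically written like this in .htaccess; this is a sketch using the URLs from the question, not the actual rules in use:)

        RewriteEngine On
        # permanently redirect the old static page to the new article URL
        RewriteRule ^iching\.htm$ /index.php?option=com_content&view=article&id=760 [R=301,L]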

    Read the article

  • php-cgi.exe processes on IIS

    - by HYP
    The production server runs a PHP application on IIS 6.0. During peak hours we have had a few issues where the php-cgi.exe processes increase in number and approach around 200. The server comes to a crawl and we have to restart it multiple times to restore normal behavior. When the server is running normally, I have noticed that there are only 10-15 php-cgi.exe processes in Task Manager. What could be causing the php-cgi.exe processes to increase in number from 10-15 to around 200 during peak hours? Where should I look for a cause?
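
    (One hedged place to look, assuming the FastCGI extension for IIS 6 is in use: the process pool is capped in fcgiext.ini. The section and setting names below are from memory and should be verified against the installed extension's documentation:)

        [Types]
        php=PHP

        [PHP]
        ExePath=C:\PHP\php-cgi.exe
        ; cap the number of concurrent php-cgi.exe processes in the pool
        MaxInstances=16
        ; recycle each process after this many requests
        InstanceMaxRequests=10000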

    Read the article

  • How do I justify to my management that we need a bandwidth upgrade?

    - by Sandeep
    I work in an office with an 8 Mbps line and about 100 people. Our internet has slowed to a crawl over the past few months as we added headcount. However, speedtest.net and other sites still show our bandwidth as 8 Mbps. Now, how do I justify to management that we indeed need to upgrade our bandwidth? Please note that I don't have access to our main routers or any network equipment. I can only use my system (Windows + Linux dual boot) to make a case for a reasonable justification. Help!

    Read the article

  • Building a Student Storage server

    - by DobotJr
    I work for a school district. I've been put in charge of building a storage server for students: a place for them to work off of from school and home. My challenge is getting this to work from home. At school they log in, authenticate, and get a mapped drive to their folder on the server (S:\fileserver\studentname). My question is: how can I make this available to students at home? The server is running Windows Server 2003 R1. I've got PHP, Apache, and MySQL working together. My idea is to write a script that will "crawl" through the directory containing all of the student folders, then create a record of every file and folder in a MySQL DB. Then I'd create a login page that uses LDAP for authentication, and once they log in to the server from home, they get a page with the folders and files tied to their username. Has anyone out there ever put something like this together?
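
    (A minimal sketch of the directory-crawl half of that idea, using PHP's built-in recursive iterators; the share path, table layout, and credentials are all assumptions:)

        <?php
        // Walk the student share and record every file/folder in MySQL.
        $base = 'S:\\fileserver';
        $db = new mysqli('localhost', 'user', 'pass', 'storage');
        $stmt = $db->prepare('INSERT INTO files (student, path, is_dir) VALUES (?, ?, ?)');
        $it = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator($base, FilesystemIterator::SKIP_DOTS),
            RecursiveIteratorIterator::SELF_FIRST
        );
        foreach ($it as $item) {
            $rel = substr($item->getPathname(), strlen($base) + 1);
            $student = strtok($rel, '\\');   // first path segment is the student folder
            $isDir = $item->isDir() ? 1 : 0;
            $stmt->bind_param('ssi', $student, $rel, $isDir);
            $stmt->execute();
        }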

    Read the article

  • Apache Connection vs. Request

    - by user101570
    I apologize in advance if this is a basic question, but I am quite confused after reading the Apache documentation and other tutorials. Does a single Apache prefork process serve all HTTP requests for a given client? That's what I thought, but when I reduce MaxClients down to a low number, my page load times slow to a crawl, despite the fact that I'm the only client on the server in question. This suggests each process serves a single HTTP request at a time, rather than serving all requests within the TimeOut window. So if a single webpage requires 15 HTTP requests to load fully, do I need 15 prefork Apache processes to serve it optimally?
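
    (A hedged note for context: each prefork child handles one connection at a time, and browsers open several parallel connections per page, so a low MaxClients starves parallel fetches even for a single visitor. These are the knobs involved, with illustrative values only:)

        # httpd.conf (Apache 2.2 prefork MPM) -- values are illustrative
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           15
            MaxRequestsPerChild   0
        </IfModule>
        # With KeepAlive on, one child can serve a client's successive requests
        # over a single persistent connection instead of one child per request:
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5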

    Read the article

  • How should I clean up hung grandchild processes when an alarm trips in Perl?

    - by brian d foy
    I have a parallelized automation script which needs to call many other scripts, some of which hang because they (incorrectly) wait for standard input. That's not a big deal because I catch those with alarm. The trick is to shut down those hung grandchild processes when the child shuts down. I thought various incantations of SIGCHLD, waiting, and process groups could do the trick, but they all block and the grandchildren aren't reaped. My solution, which works, just doesn't seem like it is the right solution. I'm not especially interested in the Windows solution just yet, but I'll eventually need that too. Mine only works for Unix, which is fine for now. I wrote a small script that takes the number of simultaneous parallel children to run and the total number of forks:

        $ fork_bomb <parallel jobs> <number of forks>
        $ fork_bomb 8 500

    This will probably hit the per-user process limit within a couple of minutes. Many solutions I've found just tell you to increase the per-user process limit, but I need this to run about 300,000 times, so that isn't going to work. Similarly, suggestions to re-exec and so on to clear the process table aren't what I need. I'd like to actually fix the problem instead of slapping duct tape over it. I crawl the process table looking for the child processes and shut down the hung processes individually in the SIGALRM handler, which needs to die because the rest of real code has no hope of success after that. The kludgey crawl through the process table doesn't bother me from a performance perspective, but I wouldn't mind not doing it:

        use Parallel::ForkManager;
        use Proc::ProcessTable;

        my $pm = Parallel::ForkManager->new( $ARGV[0] );

        my $alarm_sub = sub {
            kill 9,
                map  { $_->{pid} }
                grep { $_->{ppid} == $$ }
                @{ Proc::ProcessTable->new->table };
            die "Alarm rang for $$!\n";
        };

        foreach ( 0 .. $ARGV[1] ) {
            print ".";
            print "\n" unless $count++ % 50;

            my $pid = $pm->start and next;

            local $SIG{ALRM} = $alarm_sub;
            eval {
                alarm( 2 );
                system "$^X -le '<STDIN>'"; # this will hang
                alarm( 0 );
            };

            $pm->finish;
        }

    If you want to run out of processes, take out the kill. I thought that setting a process group would work so I could kill everything together, but that blocks:

        my $alarm_sub = sub {
            kill 9, -$$; # blocks here
            die "Alarm rang for $$!\n";
        };

        foreach ( 0 .. $ARGV[1] ) {
            print ".";
            print "\n" unless $count++ % 50;

            my $pid = $pm->start and next;
            setpgrp(0, 0);

            local $SIG{ALRM} = $alarm_sub;
            eval {
                alarm( 2 );
                system "$^X -le '<STDIN>'"; # this will hang
                alarm( 0 );
            };

            $pm->finish;
        }

    The same thing with POSIX's setsid didn't work either, and I think that actually broke things in a different way since I'm not really daemonizing this. Curiously, Parallel::ForkManager's run_on_finish happens too late for the same clean-up code: the grandchildren are apparently already disassociated from the child processes at that point.

    Read the article

  • CSS regression tool?

    - by ronaldwidha
    I'm looking for a visual regression testing tool for CSS refactoring, to see whether there is any unintended cascading behavior in a website. Ideally, a tool that can crawl a website (even locally), grab snapshots of each page, and store them in a single repository. When run a second time, it would show the pages that are visually different since the last run. Even better if it can show an overlaid XOR view of the two versions of a page, or compare rendering results across different browsers (almost like an automated Microsoft Expression Web compare feature). Thanks

    Read the article

  • How do web crawlers affect site statistics?

    - by LM
    What are ways in which web crawlers (both from search engines and non-search engines) could affect site statistics (e.g., when A/B testing different page variations)? And what are ways to take care of these problems? For example: do people writing web crawlers often delete their cookies and mask their IPs, so that web crawlers show up as different users each time they crawl the site? What are heuristics to use to recognize that something is a bot? (I'm guessing any sufficiently sophisticated bot can be indistinguishable from a real user, if it wants to; is this correct?)
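
    (A sketch of the easy end of those heuristics, in Python; the patterns are illustrative, and by the question's own caveat this only catches bots that aren't trying to hide:)

        import re

        # Self-identifying crawlers advertise themselves in the User-Agent;
        # well-behaved bots also fetch /robots.txt, which browsers never do.
        BOT_PATTERN = re.compile(r'bot|crawler|spider|slurp|archiver', re.I)

        def looks_like_bot(user_agent, fetched_robots_txt):
            if BOT_PATTERN.search(user_agent or ''):
                return True
            return fetched_robots_txt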

    Read the article

  • What is a good Java crawler library?

    - by DrDee
    Hi, I am about to develop a crawler in Java but don't feel like reinventing the wheel. A quick Google search gives a whole bunch of Java libraries for building a web crawler. Nutch is of course a very robust package, but it seems a bit too advanced for my needs: I only need to crawl a handful of websites a week, each containing a couple of thousand pages. Which open source Java library would you recommend, considering: speed; multithreading (or even distributed operation); extensibility with new functionality; active maintenance and documentation?
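
    (For the scale described, a hand-rolled crawl over an HTML parser such as jsoup can also be enough; a minimal single-threaded sketch, with the seed URL, page cap, and same-site rule as assumptions:)

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;
        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashSet;
        import java.util.Set;

        public class MiniCrawler {
            public static void main(String[] args) throws Exception {
                String seed = "http://example.com/";
                Deque<String> frontier = new ArrayDeque<String>();
                Set<String> seen = new HashSet<String>();
                frontier.add(seed);
                seen.add(seed);
                while (!frontier.isEmpty() && seen.size() <= 1000) {
                    String url = frontier.poll();
                    Document doc = Jsoup.connect(url).get();
                    System.out.println(url + " -> " + doc.title());
                    for (Element link : doc.select("a[href]")) {
                        String next = link.attr("abs:href"); // resolve relative links
                        if (next.startsWith(seed) && seen.add(next)) {
                            frontier.add(next);
                        }
                    }
                }
            }
        }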

    Read the article

  • MOSS 2007 BDC Profile Import fails for a few users

    - by Hobber
    We have set up a BDC profile import for SharePoint 2007 and it works well for 99% of the users. A handful fail with the "Exception occured when calling into BIL connector for import from non master data source" message in the crawl log, though. The ULS logs reveal the following information:

        Exception Profile Import: Exception occured when importing user: '[redacted]\[redacted]'. Microsoft.Office.Server.UserProfiles.PropertyInvalidValueException: Invalid Property Value: Could not find SID corresponding to input account name

    I have confirmed that the user has a SharePoint profile matching the username, is a valid domain user, and exists in AD. Can anyone help me troubleshoot this?

    Read the article

  • Scrapy spider is not working

    - by Zeynel
    Since nothing so far is working, I started a new project with

        python scrapy-ctl.py startproject Nu

    I followed the tutorial exactly, created the folders, and made a new spider:

        from scrapy.contrib.spiders import CrawlSpider, Rule
        from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
        from scrapy.selector import HtmlXPathSelector
        from scrapy.item import Item
        from Nu.items import NuItem
        from urls import u

        class NuSpider(CrawlSpider):
            domain_name = "wcase"
            start_urls = ['http://www.whitecase.com/aabbas/']

            names = hxs.select('//td[@class="altRow"][1]/a/@href').re('/.a\w+')
            u = names.pop()

            rules = (Rule(SgmlLinkExtractor(allow=(u, )), callback='parse_item'),)

            def parse(self, response):
                self.log('Hi, this is an item page! %s' % response.url)
                hxs = HtmlXPathSelector(response)
                item = Item()
                item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re('(?<=(JD,\s))(.*?)(\d+)')
                return item

        SPIDER = NuSpider()

    and when I run

        C:\Python26\Scripts\Nu>python scrapy-ctl.py crawl wcase

    I get

        [Nu] ERROR: Could not find spider for domain: wcase

    The other spiders at least are recognized by Scrapy; this one is not. What am I doing wrong? Thanks for your help!

    Read the article

  • Using .htaccess to force either HTTP or HTTPS

    - by ILMV
    I have already got this code to force these URLs to HTTPS:

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/my/?.*$
        RewriteCond %{REQUEST_URI} !^/my/basket/add/$
        RewriteCond %{REQUEST_URI} ^/login/?.*$
        RewriteCond %{REQUEST_URI} ^/logout/?.*$
        RewriteCond %{REQUEST_URI} ^/register/?.*$
        RewriteCond %{REQUEST_URI} ^/newsletter/?.*$
        RewriteCond %{REQUEST_URI} ^/reset-password/?.*$
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    And this works really well, but what I want to do is force any URL that does not comply with the above conditions to HTTP. Any thoughts on how I could do this? It has to be done using .htaccess; I had been achieving it through our PHP framework, but that was messing up our Google crawl. Cheers!
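
    (One hedged sketch of the inverse direction: send anything already on HTTPS that is not in the protected set back to HTTP. The path list mirrors the question's conditions, collapsed into one alternation:)

        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(my|login|logout|register|newsletter|reset-password)(/|$)
        RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]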

    Read the article

  • How to use Pisa on Google App Engine to generate PDF from HTML/CSS

    - by systempuntoout
    I'm developing a simple GAE application that crawls some data from a given site and presents it formatted in HTML/CSS. What I would like to do now is offer an "Export to PDF" feature, transforming the formatted HTML/CSS to PDF. I've imported the ReportLab toolkit and it works well, but it's not what I need, since it forces me to create the PDF manually, like:

        pcanvas.drawString(10, 10, 'This is the title Blah blah blah')

    What I really need is a library like Pisa that transforms HTML/CSS to PDF. Has anyone managed to successfully integrate and use Pisa on Google App Engine? Any hints?
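
    (For reference, the Pisa call itself is small; a sketch using the old ho.pisa module path, though whether this import graph actually runs inside the App Engine sandbox is exactly the open question here:)

        from StringIO import StringIO
        import ho.pisa as pisa

        def html_to_pdf(html):
            out = StringIO()
            ctx = pisa.CreatePDF(StringIO(html), out)  # returns a pisa context
            return None if ctx.err else out.getvalue()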

    Read the article
