Search Results

Search found 1838 results on 74 pages for 'miss spider'.

Page 16 of 74

  • (Ubuntu - LAMP) phpMyAdmin install error: 404 Not Found

    - by meyosef
    Hi, I installed Apache, MySQL, PHP and then phpMyAdmin on Ubuntu Desktop 10.04. I tested Apache by opening localhost in a browser, and it works. I tested MySQL from the console, and it works. I tested phpMyAdmin at http://localhost/phpmyadmin/ and get a 404 Not Found error. I have seen a suggested solution for this problem: copy /phpmyadmin/ into /www/. But I don't want to mix anything in with my website files in /www/ (maybe I'm wrong or missing something; please tell me if so). What is the best way to solve this problem? Thanks
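
    A possible fix, instead of copying phpMyAdmin into /www/, is to point Apache at the install directory with an alias. A minimal sketch, assuming the files landed in /usr/share/phpmyadmin (the usual location for the Ubuntu package):

        Alias /phpmyadmin /usr/share/phpmyadmin
        <Directory /usr/share/phpmyadmin>
            Options FollowSymLinks
            AllowOverride None
        </Directory>

    Saved as e.g. /etc/apache2/conf.d/phpmyadmin.conf and followed by sudo /etc/init.d/apache2 reload, this serves http://localhost/phpmyadmin/ without mixing anything into the site files under /www/.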

    Read the article

  • Is a negative text-indent considered cloaking?

    - by John Isaacks
    I am using the negative text-indent technique I learned to show a text image to the user while hiding the corresponding actual text. This way the user sees the fancy styled text while search engines can still index it. However, I am starting to think this sounds like cloaking, since I am serving different content to the user than to the spider. That said, I am not using it in a deceitful way, and it seems to be a popular technique. So is it SEO-safe, or is it cloaking? Thanks!
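
    For reference, the technique in question usually looks something like this sketch (class name and image are illustrative):

        <h1 class="logo">Acme Widgets</h1>

        .logo {
            background: url(fancy-logo.png) no-repeat;
            width: 200px;
            height: 50px;
            text-indent: -9999px;  /* pushes the real text off-screen */
            overflow: hidden;
        }

    The text stays in the DOM for spiders and screen readers; only its rendered position moves.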

    Read the article

  • Pass HAProxy healthcheck requests as User-agent "LB-Check" to the backend webservers(apache)

    - by Joseph
    I have HAProxy set up in front of webservers (Apache) for load balancing. Health checks for these webservers are also configured in HAProxy: option httpchk HEAD /healthcheck.txt HTTP/1.0. Is it possible to send these health-check requests to the backend webservers with an "LB-Check" User-Agent (or mark them some other way), so that I can distinguish them from other log entries? I don't want to use the "dontlog" option, as I don't want to miss these entries.
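
    HAProxy allows extra headers to be appended to the health-check request line, so a sketch like the following should tag the checks (the escaped \r\n and backslash-space are required by HAProxy's configuration syntax):

        # send health checks with a distinguishable User-Agent
        option httpchk HEAD /healthcheck.txt HTTP/1.0\r\nUser-Agent:\ LB-Check

    The Apache access logs can then be filtered on the LB-Check agent string.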

    Read the article

  • Puppet master and resources graph

    - by jrottenberg
    Hello! I've set up RRD reporting and graphs on my puppet master. My nodes report as expected and I can see the 'changes' and 'time' graphs, but I am missing the 'resources' elements (the HTML page and the daily/weekly/monthly/yearly graphs). Note that the resources.rrd files are there; the puppetmaster just does not generate the HTML and PNG.

    Read the article

  • git push not updating the cloned-from repo

    - by dhaval
    I did the following: cloned from another repo, say Release1; made changes to the cloned repo; committed the changes; pushed the changes to both master and Release1; pulled the changes from the cloned folder in Release1. status/log shows my changes in both places, yet the update is not reflected at Release1. What did I miss in the above steps? Both repos are on the same server.
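
    If Release1 is a non-bare repository, this is expected behaviour: pushing into its currently checked-out branch updates the branch ref but not the working tree, so status/log show the commits while the files stay old. A sketch of the two usual ways out (paths illustrative):

        # Option 1: on the server, refresh Release1's working tree after the push
        cd /path/to/Release1
        git reset --hard HEAD

        # Option 2: push to a bare repository and pull from it instead
        git clone --bare /path/to/Release1 /path/to/Release1.git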

    Read the article

  • Is there a HyperCam for Mac?

    - by mrpwnscroner
    I don't care if you have to buy it (I use BitTorrent). My cousin and I switched from PC to Mac, and I kind of miss HyperCam for recording our best moments in games (such as Halo, Combat Arms, StarCraft, etc.) and for showing hacks. I tried CrossOver 8 and 9, both Pro versions, and it hasn't worked. If anyone has a solution, let me know, thanks!

    Read the article

  • APC File Cache not working but user cache is fine

    - by danishgoel
    I have just got a VPS (with cPanel/WHM) to test what gains I could get in my application by using the APC file cache AND user cache. First I got PHP 5.3 compiled as a DSO (Apache module), then installed APC via PECL over SSH. (I first tried the WHM module installer; it had the same problem, so I tried via SSH.) All seemed fine: phpinfo showed APC loaded and enabled, and apc.php looked OK. But as I started testing my PHP application, the File Cache Information stats in APC read:

        Cached Files                 0 (0.0 Bytes)
        Hits                         1
        Misses                       0
        Request Rate (hits, misses)  0.00 cache requests/second
        Hit Rate                     0.00 cache requests/second
        Miss Rate                    0.00 cache requests/second
        Insert Rate                  0.00 cache requests/second
        Cache full count             0

    This means no PHP files were being cached, even though I had browsed through over 10 PHP files having multiple includes, so there should have been some cached files. The user cache, however, is functioning fine:

        User Cache Information
        Cached Variables             0 (0.0 Bytes)
        Hits                         1000
        Misses                       1000
        Request Rate (hits, misses)  0.84 cache requests/second
        Hit Rate                     0.42 cache requests/second
        Miss Rate                    0.42 cache requests/second
        Insert Rate                  0.84 cache requests/second
        Cache full count             0

    Those numbers are actually from an APC caching test script which tries to store and retrieve 1000 entries and gives me the times; a sort of simple benchmark. Can anyone help me here? Even though apc.cache_by_default = 1, no PHP files are being cached. These are my APC runtime settings:

        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           1M
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.serializer              default
        apc.shm_segments            1
        apc.shm_size                32M
        apc.slam_defense            1
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     0
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                0
        apc.write_lock              1

    Also, most PHP files are under 20KB, so apc.max_file_size = 1M is not the cause. I have tried using apc_compile_file to force some files into the opcode cache, with no luck. I have re-installed APC with debugging enabled, but nothing shows in the error_log. I have tried setting mmap_file_mask to /dev/zero and to /tmp/apc.xxxxxx, and I have set /tmp permissions to 777, all to no avail. Any clue, anyone?

    Update: I have tried the following, and none of them cause the APC file cache to populate:

    1. Set apc.enable_cli = 1 and run a script from the CLI
    2. Set apc.max_file_size = 5M (just in case)
    3. Switched the PHP handler from DSO to FastCGI in WHM (then switched back to DSO, as it did not solve the problem)
    4. Even tried restarting the container
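
    One way to narrow this down is to ask APC directly what each cache contains. A minimal diagnostic sketch using apc_cache_info() (called with no argument it returns the file/opcode cache, with 'user' the user cache):

        <?php
        $file_cache = apc_cache_info();        // file/opcode cache
        $user_cache = apc_cache_info('user');  // user variable cache
        echo 'files cached: ' . count($file_cache['cache_list']) . "\n";
        echo 'user entries: ' . count($user_cache['cache_list']) . "\n";

    If cache_list stays empty here too, the opcode cache really is never being written, which points at the SAPI/handler setup rather than the configuration values.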

    Read the article

  • How Do I Programmatically Check if a WordPress Plugin Is Already Activated?

    - by Volomike
    I know I can use activate_plugin() from inside a given active plugin in WordPress to activate another plugin. But what I want to know is: how do I programmatically check whether that plugin is already active? For instance, this snippet of code can be temporarily added to an existing plugin's initial file to activate a partner plugin: add_action('wp','activatePlugins'); function activatePlugins() { if( is_single() || is_page() || is_home() || is_archive() || is_category() || is_tag()) { @ activate_plugin('../mypartnerplugin/thepluginsmainfile.php'); } } Then use a Linux command-line tool to spider all your sites that have this code, forcing a page view; that page view causes the above code to fire and activate the other plugin. That's how to programmatically activate another plugin from a given plugin, as far as I can tell. The problem is that it gets activated over and over again. What would be great is an if/then condition with some WordPress function I could call to see if the plugin is already activated, and only activate it once if it's not.
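
    WordPress ships a helper for exactly this check: is_plugin_active(), defined in wp-admin/includes/plugin.php, which is not loaded on the front end and so must be included first. A sketch using the plugin path from the snippet above:

        <?php
        if ( ! function_exists( 'is_plugin_active' ) ) {
            include_once ABSPATH . 'wp-admin/includes/plugin.php';
        }
        // only activate the partner plugin if it is not already active
        if ( ! is_plugin_active( 'mypartnerplugin/thepluginsmainfile.php' ) ) {
            activate_plugin( 'mypartnerplugin/thepluginsmainfile.php' );
        }

    Note that is_plugin_active() takes the path relative to the plugins directory, without the leading "../".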

    Read the article

  • How to create an alternative Linux boot entry without starting some services

    - by rics
    I would like to create an alternative boot entry in my GRUB menu that does not start some of the services (listed by chkconfig), like cups. I would use this boot entry while travelling, where I surely do not need these services and a shorter boot-up time is preferable. Permanently removing these services is not an option, because I cannot do without them during normal daily work. I use Mandriva 2010 with the latest updates.
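
    On a SysV-init system like Mandriva 2010, one way to get this (a sketch; service and device names are illustrative) is to dedicate a spare runlevel, e.g. 4, disable the unwanted services only in that runlevel, and add a GRUB entry that boots into it:

        # disable services in runlevel 4 only; normal runlevels are untouched
        chkconfig --level 4 cups off
        chkconfig --level 4 bluetooth off

    Then, in /boot/grub/menu.lst, duplicate the normal entry and append the runlevel to the kernel line:

        title Linux (travel, fewer services)
            kernel (hd0,0)/boot/vmlinuz root=/dev/sda1 4
            initrd (hd0,0)/boot/initrd.img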

    Read the article

  • Scraping paginated items from a website using scrapy

    - by Mridang Agarwalla
    I'm using Scrapy to scrape items from a site, but I'm not able to implement the following scraping pattern. The site I'm scraping is a forum, and I scrape it once a day. Each page has a table containing posts. New posts are added to the top of the table, and as more posts are made, older posts move further back through the pages due to pagination. This is a very simple scenario, and we can assume that the order of the posts never changes. I would like to scrape all the "new" records until the last scraped post from yesterday is encountered. I have configured my spider to paginate endlessly, and when it encounters yesterday's last scraped post, it should stop. How can I implement this? (My Scrapy installation works with my Django installation using django-dynamic-scraper.)
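
    A sketch of one way to express this in a spider (selectors and names are illustrative; CloseSpider is Scrapy's standard way to abort a crawl from a callback):

        import scrapy
        from scrapy.exceptions import CloseSpider

        class ForumSpider(scrapy.Spider):
            name = 'forum'
            start_urls = ['http://example.com/forum?page=1']
            last_seen_id = '12345'  # id of yesterday's newest post, loaded from storage

            def parse(self, response):
                for row in response.css('table.posts tr'):
                    post_id = row.css('::attr(data-post-id)').get()
                    if post_id == self.last_seen_id:
                        raise CloseSpider('reached last scraped post')
                    yield {'id': post_id, 'title': row.css('a::text').get()}
                # keep paginating until the sentinel post is found
                next_page = response.css('a.next::attr(href)').get()
                if next_page:
                    yield response.follow(next_page, self.parse)

    Because new posts only ever appear at the top, everything seen before the sentinel is guaranteed to be new.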

    Read the article

  • Nginx Reverse Proxy: post_action on proxy cache hit - Possible?

    - by anonymous-one
    We have recently found out about nginx's post_action. We were wondering if there is a way to use this directive only when a proxy cache hit is made. The flow we are hoping for is as follows:

    1. User request comes in
    2. On a cache HIT go to A; on a cache MISS go to B

    A1. Serve the cached result
    A2. post_action to another URL on the backend

    B1. Serve the request from the backend
    B2. Store the result from the backend

    Any idea if this is possible via post_action? Thanks!
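
    nginx records the cache outcome in $upstream_cache_status, and post_action can point at an internal location that inspects it. A hedged sketch (untested; post_action is an undocumented and only lightly supported directive):

        location / {
            proxy_pass  http://backend;
            proxy_cache my_cache;
            post_action /cache_hit_ping;
        }

        location = /cache_hit_ping {
            internal;
            # $upstream_cache_status is HIT, MISS, EXPIRED, ...
            if ($upstream_cache_status != HIT) {
                return 204;
            }
            proxy_pass http://backend/cache-hit-ping;
        }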

    Read the article

  • Repeating a List in Scala

    - by Ralph
    I am a Scala noob. I have decided to write a spider solitaire solver as a first exercise to learn the language, and functional programming in general. I would like to generate a randomly shuffled deck of cards containing 1, 2, or 4 suits. Here is what I came up with: val numberOfSuits = 1; (List("clubs", "diamonds", "hearts", "spades").take(numberOfSuits) * 4).take(4) which should return List("clubs", "clubs", "clubs", "clubs"), List("clubs", "diamonds", "clubs", "diamonds"), or List("clubs", "diamonds", "hearts", "spades") depending on the value of numberOfSuits, except there is no List "multiply" operation that I can find. Did I miss it? Is there a better way to generate the complete deck before shuffling? BTW, I plan on using an Enumeration for the suits, but it was easier to type my question with strings. I will take the List generated above and, using a for comprehension, iterate over the suits and a similar List of card "ranks" to generate a complete deck.
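
    There is no built-in List multiply, but the standard library offers equivalent idioms; a sketch in Scala 2.8+:

        val numberOfSuits = 2
        val suits = List("clubs", "diamonds", "hearts", "spades").take(numberOfSuits)

        // repeat the list indefinitely, then keep the first four elements
        val firstFour = Stream.continually(suits).flatten.take(4).toList
        // -> List(clubs, diamonds, clubs, diamonds)

        // alternatively: List.fill(4)(suits).flatten.take(4)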

    Read the article

  • ASP.NET cached objects staying in memory

    - by GordonB
    I have an ASP.NET Web Forms app that uses System.Web.Caching.Cache to cache XML data from a number of web services for 2 hours: webCacheObj.Remove(dataCacheKey) followed by webCacheObj.Insert(dataCacheKey, dataToCache, Nothing, DateTime.Now.AddHours(2), Nothing). Every 90 minutes a Microsoft Search Server hits a particular (spider) page, which calls the code that puts the objects into the cache. The issue I have is that the memory usage of the application keeps growing over time; within a week, the application pool grows to over 1 GB. I'm using IIS7, and no application pool recycling is currently enabled.

    Read the article

  • Combine Windows 8 app and settings search?

    - by askvictor
    While I've adapted to most things in Windows 8 quite easily, I miss the 'combined' search feature of Windows 7, where pressing Win and then typing would bring up all of the applications, settings and files (not that I ever really used the files part). Now, if I want to search settings, I press Win, start typing, then have to press the down arrow twice, then Enter, then find the setting I want. (I know I could press Win+W, but that's just another thing to remember.) Is there any way to bring back the unified search?

    Read the article

  • When is a secondary nameserver hit?

    - by Evan Carroll
    Take this scenario: domain: foobar.com; ns1: 2.2.2.2; ns2: 3.3.3.3. My question: is ns2 hit only in the event that ns1 is down? Or is ns2 hit any time ns1 returns a miss / doesn't resolve the query? I know ns2 would be hit if ns1 ever went down, but what if ns1 is up and just doesn't have the data?
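
    One way to observe the behaviour directly is to query each listed server yourself (a sketch using dig and the addresses from the question):

        # ask each nameserver directly, without recursion
        dig @2.2.2.2 foobar.com A +norecurse
        dig @3.3.3.3 foobar.com A +norecurse

    Comparing the answers and status codes from the two servers shows whether ns1 actually holds the data a resolver would be asking for.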

    Read the article

  • Windows 7 on Laptop: Show 3 power profiles instead of just 2

    - by josecortesp
    The only thing I miss from Vista in Windows 7 is that you could click the battery icon and select one of three power profiles, without having to open the power options window. Can I achieve this in Windows 7 without installing a third-party tool? I have tried one, but it had many bugs and problems; I guess there may be a tool out there that works fine. I'm using Windows 7 Professional x64. Thanks in advance

    Read the article

  • How to reset Scrapy parameters? (always runs with the same parameters)

    - by Jean Ventura
    I've been running my Scrapy project with a couple of accounts (the project scrapes a specific site that requires login credentials), but no matter what parameters I set, it always runs with the same ones (the same credentials). I'm running under virtualenv. Is there a variable or setting I'm missing? Edit: it seems that this problem is Twisted-related. Even when I run: scrapy crawl -a user='user' -a password='pass' -o items.json -t json SpiderName I still get an error saying: ERROR: twisted.internet.error.ReactorNotRestartable And all the information I get is from the last 'successful' run of the spider.
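
    For what it's worth, -a arguments arrive as keyword arguments to the spider's constructor, so they only take effect if the spider stores and uses them; a minimal sketch (names taken from the command line above, login URL illustrative). The ReactorNotRestartable error itself usually means a second crawl is being started in a process whose Twisted reactor has already run once; running each crawl in a fresh process avoids it:

        import scrapy

        class SpiderName(scrapy.Spider):
            name = 'SpiderName'

            def __init__(self, user=None, password=None, *args, **kwargs):
                super(SpiderName, self).__init__(*args, **kwargs)
                self.user = user          # filled from -a user=...
                self.password = password  # filled from -a password=...

            def start_requests(self):
                # log in with the credentials passed on the command line
                yield scrapy.FormRequest(
                    'http://example.com/login',
                    formdata={'user': self.user, 'pass': self.password},
                    callback=self.parse)

            def parse(self, response):
                pass  # scraping logic goes here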

    Read the article

  • WebClient.DownloadString() Not Producing Exact HTML

    - by Ryan Fuentes
    So here's the deal: I'm creating a spider bot for a website that scans all the product pages and records the product data. I'm using C# and the WebClient class to download the HTML string. The site I'm crawling must be specially made, because the HTML received from WebClient.DownloadString() is different from the HTML I see when I view the page source in a browser. This seems intentional, because the only info I can't get is the price. Does anyone know a workaround for this problem, or can anyone explain what is happening? Thanks.
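
    Two common causes are worth ruling out: the server varying its HTML on request headers (User-Agent, Accept, cookies), and the price being filled in client-side by JavaScript, which WebClient never executes. A sketch of mimicking a browser's headers in C# (URL illustrative):

        using System;
        using System.Net;

        class Scraper
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // present ourselves as a regular browser
                    client.Headers[HttpRequestHeader.UserAgent] =
                        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36";
                    client.Headers[HttpRequestHeader.Accept] = "text/html";
                    string html = client.DownloadString("http://example.com/product/1");
                    Console.WriteLine(html.Length);
                }
            }
        }

    If the headers make no difference, the missing price is almost certainly rendered by JavaScript, and a headless browser is needed instead.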

    Read the article

  • Are there Adaptive Replacement Cache patent-free alternatives?

    - by aleccolocco
    An open-source high-performance project I'm working on needs to keep a cache of parsed/compiled files. A plain LRU or a plain LFU wouldn't fit: plain LRU wouldn't work because remote batch/spider processes hit the service regularly, and plain LFU wouldn't work because content ages. ARC seems like the perfect solution, but since IBM holds patents on it, at least one open source project has dropped it. Are there any (good enough) alternatives? EDIT: I'm not looking for exactly the same thing, just something that can handle those two situations. Perhaps some simple strategy with timestamps and sources. Many programmers must have faced this situation before; that's why the "good enough" bit.
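
    As a sketch of the "simple strategy with timestamps" idea: an eviction score that multiplies a hit count by an exponential age decay keeps hot entries alive through a spider burst while still letting stale content age out (capacity and half-life values are illustrative):

        import time

        class DecayingLFUCache:
            """Sketch: LFU whose hit counts decay with age."""

            def __init__(self, capacity=1000, half_life=3600.0):
                self.capacity = capacity
                self.half_life = half_life
                self.entries = {}  # key -> [value, hits, last_access]

            def _score(self, key, now):
                value, hits, last = self.entries[key]
                # old hits count for exponentially less
                return hits * 0.5 ** ((now - last) / self.half_life)

            def get(self, key):
                entry = self.entries.get(key)
                if entry is None:
                    return None
                entry[1] += 1
                entry[2] = time.time()
                return entry[0]

            def put(self, key, value):
                now = time.time()
                if key not in self.entries and len(self.entries) >= self.capacity:
                    victim = min(self.entries, key=lambda k: self._score(k, now))
                    del self.entries[victim]
                self.entries[key] = [value, 1, now]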

    Read the article

  • Catch access to undefined property in JavaScript

    - by avri
    The SpiderMonkey JavaScript engine implements the __noSuchMethod__ callback for JavaScript objects. This function is called whenever JavaScript tries to execute an undefined method of an object. I would like to set a callback function on an object that is called whenever an undefined property of the object is accessed or assigned to. I haven't found a noSuchProperty function implemented for JavaScript objects, and I am curious whether there is any workaround that achieves the same result. Consider the following code: var a = {}; a.__defineGetter__("bla", function(){alert(1);return 2;}); alert(a.bla); It is equivalent to [alert(1); alert(2)], even though a.bla is undefined. I would like to achieve the same result for unknown properties (i.e. without knowing in advance that a.bla will be the property accessed).
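
    Nowadays the closest general answer is an ES6 Proxy, whose get/set traps fire for every property, defined or not; a sketch:

        var a = new Proxy({}, {
            get: function (target, prop) {
                if (!(prop in target)) {
                    alert('undefined property read: ' + String(prop));
                    return 2;  // illustrative fallback value
                }
                return target[prop];
            },
            set: function (target, prop, value) {
                if (!(prop in target)) {
                    alert('undefined property assigned: ' + String(prop));
                }
                target[prop] = value;
                return true;
            }
        });

        alert(a.bla);  // alerts the "read" message, then 2

    At the time of the question, SpiderMonkey's non-standard hooks either covered only method calls (__noSuchMethod__) or required knowing the property name in advance (__defineGetter__), as noted.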

    Read the article

  • How to crawl a file hosting website with Scrapy in Python?

    - by Veryel Hua
    Can anyone help me figure out how to crawl a file hosting website like filefactory.com? I don't want to download all the hosted files, just to index all the available files with Scrapy. I have read the tutorial and the docs on Scrapy's spider class. If I only give the website's main page as the beginning URL, it won't crawl the whole site, because the crawling depends on links, and the beginning page doesn't seem to point to any file pages. That's the problem I am thinking about, and any help would be appreciated!
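
    A CrawlSpider with two link-extraction rules is the usual shape for this kind of index-only crawl; a sketch (domains, URL patterns, and selectors are illustrative):

        from scrapy.spiders import CrawlSpider, Rule
        from scrapy.linkextractors import LinkExtractor

        class FileIndexSpider(CrawlSpider):
            name = 'fileindex'
            allowed_domains = ['filefactory.com']
            start_urls = ['http://www.filefactory.com/']

            rules = (
                # index pages that look like file pages (do not download the files)
                Rule(LinkExtractor(allow=(r'/file/',)), callback='parse_file'),
                # follow every other internal link to discover them
                Rule(LinkExtractor(), follow=True),
            )

            def parse_file(self, response):
                yield {
                    'url': response.url,
                    'title': response.css('title::text').get(),
                }

    If the front page really links to no file pages at all, seeding start_urls with listing or search-result pages gives the crawler something to follow.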

    Read the article
