Search Results

Search found 325 results on 13 pages for 'tasty spider men'.

Page 10/13 | < Previous Page | 6 7 8 9 10 11 12 13  | Next Page >

  • XPath to find ancestor node containing CSS class

    - by Juan Mendes
    I am writing some Selenium tests and I need to find an ancestor of a WebElement that I have already found. This is what I'm trying, but it returns no results:

        // checkbox is also a WebElement
        WebElement container = checkbox.findElement(By.xpath(
            "current()/ancestor-or-self::div[contains(@class, 'x-grid-view')]"));

    The image below shows the div I have selected highlighted in dark blue and the one I want to find with an arrow pointing at it. UPDATE: Tried prestomanifesto's suggestion and got the following error: [cucumber] org.openqa.selenium.InvalidSelectorException: The given selector ./ancestor::div[contains(@class, 'x-grid-view']) is either invalid or does not result in a WebElement. The following error occurred: [cucumber] [InvalidSelectorError] Unable to locate an element with the xpath expression ./ancestor::div[contains(@class, 'x-grid-view']) because of the following error: [cucumber] [Exception... "The expression is not a legal expression." code: "51" nsresult: "0x805b0033 (NS_ERROR_DOM_INVALID_EXPRESSION_ERR)" location: "file:///C:/Users/JUAN~1.MEN/AppData/Local/Temp/anonymous849245187842385828webdriver-profile/extensions/fxdriv UPDATE 2: Really weird, even by ID it is not working: [cucumber] org.openqa.selenium.NoSuchElementException: Unable to locate element: {"method":"xpath","selector":"./ancestor::div[@id='gridview-1252']"}
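
    Two details stand out: current() is an XSLT function, not plain XPath (use . instead), and in the quoted errors the closing ']' has slipped inside the contains() call. A minimal sketch of the corrected locator, using Selenium's Python bindings for illustration (the page URL and checkbox locator are hypothetical):

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Firefox()
        driver.get("http://example.com/grid")  # hypothetical page

        checkbox = driver.find_element(By.ID, "my-checkbox")  # hypothetical locator
        # ')' must close contains() before ']' closes the predicate;
        # contains(@class, 'x-grid-view']) is the malformed variant in the error.
        container = checkbox.find_element(
            By.XPATH, "./ancestor::div[contains(@class, 'x-grid-view')]")
        print(container.get_attribute("id"))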

    Read the article

  • Run a PHP-script from a PHP-script without blocking

    - by Nicklas Ansman
    I'm building a spider which will traverse various sites and mine data from them. Since I need to fetch each page separately, this could take a VERY long time (maybe 100 pages). I've already set set_time_limit to 2 minutes per page, but it seems like Apache will kill the script after 5 minutes no matter what. This isn't usually a problem since this will run from cron or something similar, which does not have this time limit. However, I would also like the admins to be able to start a fetch manually via an HTTP interface. It is not important that Apache is kept alive for the full duration; I'm going to use AJAX to trigger a fetch and check back once in a while with AJAX. My problem is how to start the fetch from within a PHP script without the fetch being terminated when the script calling it dies. Maybe I could use system('script.php &'), but I'm not sure it will do the trick. Any other ideas?
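
    A common trick is to launch the worker fully detached, with its output redirected so the parent does not wait on it; with PHP's system()/exec(), redirecting output is what stops the call from blocking. The same pattern sketched in Python for illustration (the worker script path is hypothetical):

        import subprocess

        # Fire-and-forget: discard output and start a new session so the
        # child keeps running after the parent (the web request) exits.
        subprocess.Popen(
            ["php", "script.php"],            # hypothetical worker script
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            start_new_session=True,
        )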

    Read the article

  • Prevent bots from crawling certain areas of a site

    - by Skoder
    Hey, I don't know much about SEO or how web spiders work, so forgive my ignorance here. I'm creating a site (using ASP.NET MVC) which has areas that display information retrieved from the database. The data is unique to the user, so there's no real server-side output caching going on. However, since the data can contain things the user may not wish to have displayed in search engine results, I'd like to prevent any spiders from accessing the search results page. Are there any special actions I should take to ensure that the search results directory isn't crawled? Also, would a spider even crawl a page that's dynamically generated, and would any actions preventing certain directories from being searched mess up my search engine rankings? Edit: I should add, I'm reading up on the robots.txt protocol, but it relies on cooperation from the web crawler. However, I'd also like to prevent any data-mining users who will ignore robots.txt. I appreciate any help!
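
    For the cooperating crawlers, a robots.txt at the site root is the standard mechanism (the directory name here is hypothetical):

        User-agent: *
        Disallow: /SearchResults/

    Crawlers that ignore robots.txt will ignore meta tags too, so the only reliable protection for per-user data is requiring authentication on those pages rather than hiding them.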

    Read the article

  • Subscription payment processing with ASP.NET

    - by bobsmith123
    I would like to create a subscription-based website with users getting charged every month. I know I have to create an account with PayPal or Authorize.Net. Do these payment providers automatically bill the user every month? How would I take care of offering the service free for 30 days and starting billing after that? Also, I've heard of middleman services like Spreedly and Chargify. Where do they fit into the equation? Can someone help me wrap my head around these concepts? I am based in the U.S. but would like to have the payment provider accept any kind of currency. Which payment provider do you suggest?

    Read the article

  • Is a negative text-indent considered cloaking?

    - by John Isaacks
    I am using the negative text-indent technique I learned to show a styled text image to the user while hiding the corresponding actual text. This way the user sees the fancy styled text while search engines can still index it. However, I am starting to think this sounds like cloaking, since I am serving different content to the user than to the spider. That said, I am not using it in a deceitful way, and it seems to be a popular technique. So is it SEO-safe, or is it cloaking? Thanks!
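
    For reference, the technique being described is usually a variant of this CSS image-replacement pattern (the selector and image are hypothetical):

        h1.logo {
            background: url(logo.png) no-repeat; /* the styled text, as an image */
            text-indent: -9999px;                /* pushes the real text off-screen */
            overflow: hidden;
        }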

    Read the article

  • iPad title bars. Navbars or toolbars?

    - by Squeegy
    I see a bunch of apps for iPad with really cool title bars. These seem to be a combination of a navigation bar and a toolbar. They usually have a back button and a title as well as many other buttons. But a navbar only supports a left item, a right item, and a title view, and a toolbar does not really support back buttons or titles. So how do I implement these rich navbars with many buttons in my UINavigationController-driven application?

    Read the article

  • How Do I Programmatically Check if a WordPress Plugin Is Already Activated?

    - by Volomike
    I know I can use activate_plugin() from inside a given active plugin in WordPress to activate another plugin. But what I want to know is how to programmatically check whether that plugin is already active. For instance, this snippet of code can be temporarily added to an existing plugin's main file to activate a partner plugin: add_action('wp','activatePlugins'); function activatePlugins() { if( is_single() || is_page() || is_home() || is_archive() || is_category() || is_tag()) { @activate_plugin('../mypartnerplugin/thepluginsmainfile.php'); } } Then use a Linux command-line tool to spider all your sites that have this code, forcing a page view. That page view causes the above code to fire and activate the other plugin. That's how to programmatically activate another plugin from a given plugin, as far as I can tell. The problem is that it gets activated over and over again. What would be great is an if/then condition and some function I could call in WordPress to see if that plugin is already active, and only activate it if not.

    Read the article

  • Good memory profiling, leak and error detection for Windows

    - by Fernando
    I'm currently looking for a good memory/leak detection tool for Windows. A few years ago I used NuMega's BoundsChecker, which was VERY good. It has since been sold to Compuware, which apparently sold it again to some other company. Trying to evaluate a demo of the current version has so far been very frustrating, in the best "enterprisy" tradition: (a) no advertised prices on their website (Great Red Flashing Lights of Warning); (b) the contact form asked for number of employees and other private information; (c) no response to my emails asking for an evaluation copy and pricing. I had to conclude that BoundsChecker is now one of "those" products. Y'know, the type where you innocently call and tomorrow 3 men in black suits turn up at your building wanting to talk to you about "partnerships" and not-so-secretly gauge the size of your company, and therefore how much they can get away with charging you. SO, rant aside, can anyone recommend an excellent memory checking/leak detection tool, how much it costs, and where to buy it?

    Read the article

  • Scraping paginated items from a website using scrapy

    - by Mridang Agarwalla
    I'm using Scrapy to scrape items from a site, but I can't work out how to implement this scraping pattern. The site I'm scraping is a forum, and I scrape it once a day. Each page has a table containing posts. New posts are added to the top of the table, and as more posts are made, older posts move further back through the pages due to pagination. This is a very simple scenario, and we will assume that the order of the posts never changes. I would like to scrape all the "new" records until the last post scraped yesterday is encountered. I have configured my spider to paginate endlessly; when it encounters yesterday's last scraped post, it should stop. How can I implement this? (My Scrapy installation works with my Django installation using django-dynamic-scraper.)
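
    A minimal sketch of that stopping rule with a plain Scrapy spider (the URL, selectors, and last_seen_id plumbing are hypothetical, and django-dynamic-scraper wires things differently):

        import scrapy

        class ForumSpider(scrapy.Spider):
            name = "forum"
            start_urls = ["http://example.com/forum?page=1"]  # hypothetical

            def __init__(self, last_seen_id=None, *args, **kwargs):
                super().__init__(*args, **kwargs)
                # id of yesterday's newest post, e.g. -a last_seen_id=12345
                self.last_seen_id = last_seen_id

            def parse(self, response):
                for row in response.css("table.posts tr"):  # hypothetical selector
                    post_id = row.attrib.get("data-post-id")
                    if post_id == self.last_seen_id:
                        return  # reached yesterday's frontier: stop paginating
                    yield {"id": post_id, "title": row.css("a::text").get()}
                next_page = response.css("a.next::attr(href)").get()
                if next_page:
                    yield response.follow(next_page, callback=self.parse)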

    Read the article

  • Repeating a List in Scala

    - by Ralph
    I am a Scala noob. I have decided to write a spider solitaire solver as a first exercise to learn the language and functional programming in general. I would like to generate a randomly shuffled deck of cards containing 1, 2, or 4 suits. Here is what I came up with: val numberOfSuits = 1 (List("clubs", "diamonds", "hearts", "spades").take(numberOfSuits) * 4).take(4) which should return List("clubs", "clubs", "clubs", "clubs") List("clubs", "diamonds", "clubs", "diamonds") List("clubs", "diamonds", "hearts", "spades") depending on the value of numberOfSuits, except there is no List "multiply" operation that I can find. Did I miss it? Is there a better way to generate the complete deck before shuffling? BTW, I plan on using an Enumeration for the suits, but it was easier to type my question with strings. I will take the List generated above and, using a for comprehension, iterate over the suits and a similar List of card "ranks" to generate a complete deck.
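
    There is no * on List, but "cycle the suits and take four" gives the same result; in Scala 2.8 something like List.fill(4)(suits).flatten.take(4) should do it. The same idea sketched in Python for illustration:

        from itertools import cycle, islice

        def deck_suits(number_of_suits):
            suits = ["clubs", "diamonds", "hearts", "spades"][:number_of_suits]
            # repeat the chosen suits until four slots are filled
            return list(islice(cycle(suits), 4))

        print(deck_suits(1))  # ['clubs', 'clubs', 'clubs', 'clubs']
        print(deck_suits(2))  # ['clubs', 'diamonds', 'clubs', 'diamonds']
        print(deck_suits(4))  # ['clubs', 'diamonds', 'hearts', 'spades']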

    Read the article

  • Who wrote 250k tests for WebKit?

    - by amwinter
    Assuming a yield of 3 tests per hour, that's roughly 83,000 hours. 8 hours a day makes about 10,400 days; divide by thirty to get around 347 mythical man months. I call them mythical because writing 120 tests per person per week is unreal. Can any wise soul out there on SO shed some light on what sort of mythical men write unreal quantities of tests for large software projects? Thank you. Update: chrisw thinks there are only 20k tests (check out his explanation below). PS: I'd really like to hear from folks who have worked on projects with large test bases.
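
    For what it's worth, the back-of-the-envelope numbers above check out:

        tests = 250_000
        hours = tests / 3         # ~83,333 hours at 3 tests per hour
        days = hours / 8          # ~10,417 eight-hour days
        man_months = days / 30    # ~347 "mythical man months"
        per_week = 3 * 8 * 5      # 120 tests per person per week
        print(round(hours), round(days), round(man_months), per_week)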

    Read the article

  • ASP.NET cached objects staying in memory

    - by GordonB
    I have an ASP.NET Web Forms app that uses System.Web.Caching.Cache to cache XML data from a number of web services for 2 hours: webCacheObj.Remove(dataCacheKey) webCacheObj.Insert(dataCacheKey, dataToCache, Nothing, DateTime.Now.AddHours(2), Nothing) Every 90 minutes a Microsoft Search Server hits a particular (spider) page which calls the code that puts the objects into the cache. The issue I have is that over time the memory usage of the application keeps growing: within a week, the memory usage of the application pool grows to over 1 GB. I'm using IIS 7, and no application pool recycling is currently enabled.

    Read the article

  • How to get all the fields of a row using the SQL MAX function?

    - by Yiannis Mpourkelis
    Consider this table (from http://www.tizag.com/mysqlTutorial/mysqlmax.php):

        Id      name               type      price
        123451  Park's Great Hits  Music     19.99
        123452  Silly Puddy        Toy        3.99
        123453  Playstation        Toy       89.95
        123454  Men's T-Shirt      Clothing  32.50
        123455  Blouse             Clothing  34.97
        123456  Electronica 2002   Music      3.99
        123457  Country Tunes      Music     21.55
        123458  Watermelon         Food       8.73

    This SQL query returns the most expensive item from each type:

        SELECT type, MAX(price) FROM products GROUP BY type

        Clothing  34.97
        Food       8.73
        Music     21.55
        Toy       89.95

    I also want the id and name fields that belong to each type's max price. What SQL query will return a table like this?

        Id      name           type      price
        123455  Blouse         Clothing  34.97
        123458  Watermelon     Food       8.73
        123457  Country Tunes  Music     21.55
        123453  Playstation    Toy       89.95
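
    The standard trick is to join the table back onto its own per-type maxima; a runnable sketch using SQLite through Python (the same join works in MySQL). Note that a tie on price returns one row per tied product:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE products (id INTEGER, name TEXT, type TEXT, price REAL)")
        con.executemany("INSERT INTO products VALUES (?, ?, ?, ?)", [
            (123454, "Men's T-Shirt", "Clothing", 32.50),
            (123455, "Blouse", "Clothing", 34.97),
            (123458, "Watermelon", "Food", 8.73),
        ])
        # Groupwise max: join each row onto its type's maximum price.
        rows = con.execute("""
            SELECT p.id, p.name, p.type, p.price
            FROM products p
            JOIN (SELECT type, MAX(price) AS max_price
                  FROM products GROUP BY type) m
              ON p.type = m.type AND p.price = m.max_price
        """).fetchall()
        print(rows)  # the Blouse and Watermelon rows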

    Read the article

  • How to reset Scrapy parameters? (always running under same parameters)

    - by Jean Ventura
    I've been running my Scrapy project with a couple of accounts (the project scrapes a specific site that requires login credentials), but no matter what parameters I set, it always runs with the same ones (the same credentials). I'm running under virtualenv. Is there a variable or setting I'm missing? Edit: This problem seems to be Twisted-related. Even when I run: scrapy crawl -a user='user' -a password='pass' -o items.json -t json SpiderName I still get an error saying: ERROR: twisted.internet.error.ReactorNotRestartable And all the information I get is from the last 'successful' run of the spider.
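
    For reference, -a arguments reach the spider as constructor keyword arguments on each fresh crawl process; a minimal sketch (names hypothetical):

        import scrapy

        class SpiderName(scrapy.Spider):
            name = "SpiderName"

            def __init__(self, user=None, password=None, *args, **kwargs):
                super().__init__(*args, **kwargs)
                # A fresh `scrapy crawl` process picks these up on every run.
                # Twisted's reactor cannot be restarted within one process,
                # which is what ReactorNotRestartable signals when a script
                # tries to run a second crawl in the same process.
                self.user = user
                self.password = password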

    Read the article

  • How to use named_scope to incrementally construct a complex query?

    - by wbharding
    I want to use Rails named_scope to incrementally build complex queries that are returned from external methods. I believe I can get the behavior I want with anonymous scopes, like so: def filter_turkeys Farm.scoped(:conditions => {:animal => 'turkey'}) end def filter_tasty Farm.scoped(:conditions => {:tasty => true}) end turkeys = filter_turkeys is_tasty = filter_tasty tasty_turkeys = filter_turkeys.filter_tasty But say I already have a named_scope that does what these anonymous scopes do; can I just use that, rather than having to declare anonymous scopes? Obviously one solution would be to pass my growing query to each of the filter methods, a la def filter_turkey(existing_query) existing_query.turkey # turkey is a named_scope that filters for turkeys end But that feels like an un-DRY way to solve the problem. Oh, and if you're wondering why I would want to return named_scope pieces from methods rather than just build all the named scopes I need and concatenate them together: my queries are too complex to be handled nicely by named_scopes themselves, and they get used in multiple places, so I don't want to manually build the same big hairy concatenation multiple times.

    Read the article

  • WebClient.DownloadString() Not Producing Exact HTML

    - by Ryan Fuentes
    So here's the deal. I'm creating a spider bot for a website that scans all the product pages and records the product data. I'm using C# and the WebClient class to download the HTML string. The HTML I receive from WebClient.DownloadString() is different from the HTML I get when I view the page's source in a browser. This seems intentional, because the only info I can't get is the price. Does anyone know a workaround for this problem, or can anyone explain what is happening? Thanks.
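
    Two common causes are worth ruling out: user-agent sniffing (WebClient sends no browser-like User-Agent header unless you set one) and prices filled in client-side by JavaScript, which no plain HTTP client executes. A quick test for the first, sketched with Python's third-party requests package (the URL is hypothetical):

        import requests

        # If the price appears with a browser User-Agent but not without one,
        # the site is varying its HTML by User-Agent rather than using JS.
        resp = requests.get(
            "http://example.com/product/123",  # hypothetical product page
            headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
        )
        print(resp.text[:500])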

    Read the article

  • Are there Adaptive Replacement Cache patent-free alternatives?

    - by aleccolocco
    An open source, high-performance project I'm working on needs to keep a cache of parsed/compiled files. A plain LRU or a plain LFU wouldn't fit: plain LRU wouldn't work because remote batch/spider processes hit the service regularly, and plain LFU wouldn't work because content ages. ARC seems like the perfect solution, but since IBM holds patents on it, at least one open source project has dropped it. Are there any (good enough) alternatives? EDIT: I'm not looking for exactly the same thing, just something that can handle those two situations. Perhaps some simple strategy with timestamps and sources. Many programmers must have faced this situation before; that's why the "good enough" bit.
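
    One "good enough" shape for exactly those two constraints is an LFU whose counts decay over time: frequency resists the periodic batch/spider stampedes, and decay lets once-popular content age out. A toy sketch of the idea (illustrative only, not ARC and not tuned):

        import time

        class AgingLFUCache:
            """Evicts the entry whose time-decayed hit count is lowest."""

            def __init__(self, capacity, half_life=3600.0):
                self.capacity = capacity
                self.half_life = half_life
                self.data = {}  # key -> (value, score, last_touch)

            def _decayed(self, score, last_touch, now):
                # A hit's weight halves every `half_life` seconds.
                return score * 0.5 ** ((now - last_touch) / self.half_life)

            def get(self, key):
                if key not in self.data:
                    return None
                now = time.time()
                value, score, last = self.data[key]
                self.data[key] = (value, self._decayed(score, last, now) + 1.0, now)
                return value

            def put(self, key, value):
                now = time.time()
                if len(self.data) >= self.capacity and key not in self.data:
                    victim = min(self.data, key=lambda k: self._decayed(
                        self.data[k][1], self.data[k][2], now))
                    del self.data[victim]
                self.data[key] = (value, 1.0, now)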

    Read the article

  • Java MySQL multiple-word search

    - by user1703849
    I have a MySQL database with a name column that can contain several words (and an info column holding a description). I am connected to the database from Java through Eclipse. My search only returns results if the name field contains one word:

        id  name           info                 type
        1   balloon        big red balloon      big
        2   house          expensive beautiful  luxury
        3   chicken wings  deep fried wings     tasty

    (These are just random words as an example.) My search can find, say, balloon and then show its info, but if I type chicken wings, it does nothing. So is it possible somehow to search columns containing multiple words? This is my search code:

        import java.io.*;
        import java.sql.*;
        import java.util.*;

        class Search {
            public static void main(String[] args) {
                Scanner input = new Scanner(System.in);
                try {
                    Connection con = DriverManager.getConnection(
                        "jdbc:mysql://example/mydb", "user", "password");
                    Statement stmt = (Statement) con.createStatement();
                    System.out.print("enter search: ");
                    String name = input.next();
                    String SQL = "SELECT * FROM menu where name LIKE '" + name + "'";
                    ResultSet rs = stmt.executeQuery(SQL);
                    while (rs.next()) {
                        System.out.println("Name: " + rs.getString("name"));
                        System.out.println("Description: " + rs.getString("info"));
                        System.out.println("Price: " + rs.getString("Price"));
                    }
                } catch (Exception e) {
                    System.out.println("ERROR: " + e.getMessage());
                }
            }
        }
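
    Two things conspire here: Scanner.next() reads only one whitespace-delimited token (so "chicken wings" becomes "chicken"), and LIKE without wildcards is an exact match. A runnable sketch of the fix, using SQLite through Python to stand in for MySQL (the sample row is hypothetical; the parameterized query also avoids the SQL injection in the concatenated version above):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE menu (name TEXT, info TEXT, price TEXT)")
        con.execute("INSERT INTO menu VALUES "
                    "('chicken wings', 'deep fried wings tasty', '5.99')")

        term = input("enter search: ")  # unlike Scanner.next(), reads the whole line
        # Wildcard LIKE matches names containing the term, spaces included.
        for name, info, price in con.execute(
                "SELECT * FROM menu WHERE name LIKE ?", ("%" + term + "%",)):
            print("Name:", name, "| Description:", info, "| Price:", price)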

    Read the article

  • How to crawl a file hosting website with Scrapy in Python?

    - by Veryel Hua
    Can anyone help me figure out how to crawl a file hosting website like filefactory.com? I don't want to download all the hosted files, just to index all available files with Scrapy. I have read the tutorial and the docs on Scrapy's spider classes. If I only give the site's main page as the beginning URL, I won't crawl the whole site, because crawling depends on links and the beginning page doesn't seem to point to any file pages. That's the problem I am thinking about, and any help would be appreciated!
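
    A CrawlSpider with two link-extraction rules fits this: follow everything, but only parse URLs that look like file pages. A sketch (the /file/ pattern and the selector are guesses):

        import scrapy
        from scrapy.linkextractors import LinkExtractor
        from scrapy.spiders import CrawlSpider, Rule

        class FileIndexSpider(CrawlSpider):
            name = "fileindex"
            allowed_domains = ["filefactory.com"]
            start_urls = ["http://www.filefactory.com/"]

            rules = (
                # Rules are tried in order: URLs that look like file pages
                # (the /file/ pattern is a guess) get parsed and followed.
                Rule(LinkExtractor(allow=r"/file/"), callback="parse_file",
                     follow=True),
                # Everything else is only followed, so the crawl can reach
                # file pages that the front page does not link to directly.
                Rule(LinkExtractor(), follow=True),
            )

            def parse_file(self, response):
                yield {"url": response.url,
                       "title": response.css("h1::text").get()}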

    Read the article

  • Catch access to undefined property in JavaScript

    - by avri
    The SpiderMonkey JavaScript engine implements the __noSuchMethod__ callback for JavaScript objects. This function is called whenever JavaScript tries to execute an undefined method of an object. I would like to set a callback function on an object that will be called whenever an undefined property of the object is accessed or assigned to. I haven't found a noSuchProperty function implemented for JavaScript objects, and I am curious whether any workaround achieves the same result. Consider the following code: var a = {}; a.__defineGetter__("bla", function(){alert(1);return 2;}); alert(a.bla); It is equivalent to [alert(1); alert(2)], even though a.bla was never directly assigned. I would like to achieve the same result for unknown properties (i.e. without knowing in advance that "bla" will be the property accessed).
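
    As far as I know SpiderMonkey never grew a __noSuchProperty__ analogue; the getter trick above only works for names known in advance (in modern JavaScript, Proxy's get/set traps cover exactly this). As a conceptual illustration of the catch-all hooks being asked for, here is the equivalent pair in Python, where __getattr__ fires only when normal lookup fails:

        class Watched:
            def __getattr__(self, name):
                # Called only for attributes normal lookup cannot find.
                print(f"undefined property accessed: {name}")
                return None

            def __setattr__(self, name, value):
                # Called on every assignment.
                print(f"property assigned: {name} = {value!r}")
                super().__setattr__(name, value)

        w = Watched()
        w.bla       # prints: undefined property accessed: bla
        w.foo = 2   # prints: property assigned: foo = 2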

    Read the article

  • ASP.NET MVC - how to modify requested URLs?

    - by Marek
    The Baidu spider seems to be appending ¤ to the end of some crawled URLs (it seems to happen with URLs whose last character is a single non-ASCII character). The Baidu-requested URL looks like this: site.com/abc/ä¤, while site.com/abc/ä is the valid URL, linked from many places on my site. The internal problem is that a different route is matched for this kind of URL and an unhandled exception occurs. I would not like to lose Baidu because of too many 500 errors on the site. I would like to change the requested URL to a different URL by removing the added character before any ASP.NET MVC processing of the request starts. Can I write a request filter/HTTP module or something similar in ASP.NET MVC to remove the trailing ¤ from the URLs? I would not like to alter my routes to counter-hack this behavior.

    Read the article

  • What's the difference between the Path and Polygon controls? + issues when designing a custom drawing user control

    - by tomo
    I'm starting to design a custom control with rather complex drawing. It will be a kind of chart (a kind of radar chart), composed of a few axes with labels, line regions (like a spider web), and filled shapes. The main question is: what's the difference between using the Path control and the Polygon control, and which is better here? The goal is to build the control with a minimal amount of C# and do as much as possible in XAML and bindings. The next important requirement is that the control should resize to its parent container's width, if possible without any long recalculations in C#.

    Read the article

  • What is considered bleeding edge in programming these days?

    - by iestyn
    What is "bleeding edge" these days? has it all been done before us, and we are just discovering new ways of implementing mathematical constructs within programming? Functional Programming seems to be making inroads in all areas, but is this just marketing to create interest in a programming arena where it appears that the state of the art has climaxed too soon. have the sales men got hold of the script, and selling ideas that can be sold, dumbing down the future? I see very old ideas making their way into the market place....what are the truly new things that should be considered fresh and new in 2010 onwards, and not some 1960-1980 idea being refocused.

    Read the article

  • What's new in RadChart for 2010 Q1 (Silverlight / WPF)

    Greetings, RadChart fans! It is with great pleasure that I present this short highlight of our accomplishments for the Q1 release :). We've worked very hard to make the best Silverlight and WPF charting product even better. Here is some of what we did during the past few months. 1) Zooming & scrolling and the new sampling engine: Without a doubt one of the most important things we did. This new feature allows you to bind your chart to a very large set of data with blazing performance. Don't take my word for it: give it a try! 2) New smart label positioning and spider-like labels: This new feature really helps with very busy graphs. You can play with the different settings we offer in this example. 3) Sorting and filtering: Much like our RadGridView control, the chart now allows you to sort and filter your data out of the box with a single line of code! 4) Legend improvements: We've also been paying attention to those of you who wanted a much improved legend. It is now possible to customize the look and feel of legend items and the legend position with a single click. 5) Custom palette brushes: You have told us that you want to easily customize all palette colors using a single clean API from both XAML and code-behind. The new custom palette brushes API does exactly that. There are numerous other improvements as well, such as much improved themes, performance optimizations, and other features. If you want to dig in further, check the release notes and the changes and backwards compatibility topics. Feel free to share the pains and gains of working with RadChart. Our team is always open to receiving constructive feedback and beer :-)

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13  | Next Page >