Search Results

Search found 101 results on 5 pages for 'mechanize'.

  • No form with the requested fields?

    - by FRESHTER
        #!/usr/bin/perl
        use WWW::Mechanize;
        use Compress::Zlib;

        my $mech = WWW::Mechanize->new();
        my $username = ""; # fill in username here
        my $keyword = "";  # fill in password here
        my $mobile = $ARGV[0];
        my $text = $ARGV[1];
        $deb = 1;
        print length($text)."\n" if($deb);
        $text = $text."\n\n\n\n\n" if(length($text) < 135);

        $mech->get("http://wwwl.way2sms.com/content/index.html");
        unless($mech->success()) { exit; }
        $dest = $mech->response->content;
        print "Fetching...\n" if($deb);
        if($mech->response->header("Content-Encoding") eq "gzip") {
            $dest = Compress::Zlib::memGunzip($dest);
            $mech->update_html($dest);
        }
        $dest =~ s/<form name="loginForm"/<form action='..\/auth.cl' name="loginForm"/g;
        $mech->update_html($dest);
        $mech->form_with_fields(("username","password"));
        $mech->field("username",$username);
        $mech->field("password",$keyword);
        print "Loggin...\n" if($deb);
        $mech->submit_form();
        $dest = $mech->response->content;
        if($mech->response->header("Content-Encoding") eq "gzip") {
            $dest = Compress::Zlib::memGunzip($dest);
            $mech->update_html($dest);
        }

        $mech->get("http://wwwl.way2sms.com//jsp/InstantSMS.jsp?val=0");
        $dest = $mech->response->content;
        if($mech->response->header("Content-Encoding") eq "gzip") {
            $dest = Compress::Zlib::memGunzip($dest);
            $mech->update_html($dest);
        }

        print "Sending ... \n" if($deb);
        $mech->form_with_fields(("MobNo","textArea"));
        $mech->field("MobNo",$mobile);
        $mech->field("textArea",$text);
        $mech->submit_form();
        if($mech->success()) {
            print "Done \n" if($deb);
        } else {
            print "Failed \n" if($deb);
            exit;
        }
        $dest = $mech->response->content;
        if($mech->response->header("Content-Encoding") eq "gzip") {
            $dest = Compress::Zlib::memGunzip($dest);
            #print $dest if($deb);
        }
        if($dest =~ m/successfully/sig) {
            print "Message sent successfully" if($deb);
        }
        exit;

    With this code I run into an error: "There is no form with the requested fields at ./sms.pl line 65. Can't call method "value" on an undefined value at /usr/share/perl5/vendor_perl/WWW/Mechanize.pm line 1348." Could anyone guide me, please?
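    A note on that failure mode: form_with_fields() finds nothing when no parsed form actually contains both fields, and the follow-up field() call then dies on the undefined form. The same lookup can be written so it fails loudly at the right spot; here is a sketch of that defensive pattern in Python's mechanize for comparison (the credentials are placeholders, and set_handle_gzip is an experimental handler that may not exist in every mechanize version):

        import mechanize

        br = mechanize.Browser()
        br.set_handle_robots(False)
        br.set_handle_gzip(True)   # assumption: available; decompresses gzip bodies for you
        br.open("http://wwwl.way2sms.com/content/index.html")

        def has_login_fields(form):
            names = [c.name for c in form.controls]
            return "username" in names and "password" in names

        try:
            # Select the form only if it really has both fields.
            br.select_form(predicate=has_login_fields)
        except mechanize.FormNotFoundError:
            raise SystemExit("no form with username/password -- inspect br.forms()")

        br["username"] = "me"      # placeholder
        br["password"] = "secret"  # placeholder
        br.submit()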

    Read the article

  • method __getattr__ is not inherited from parent class

    - by ??????
    Trying to subclass the mechanize.Browser class:

        from mechanize import Browser

        class LLManager(Browser, object):
            IS_AUTHORIZED = False

            def __init__(self, login = "", passw = "", *args, **kwargs):
                super(LLManager, self).__init__(*args, **kwargs)
                self.set_handle_robots(False)

    But when I do something like this:

        lm["Widget[LinksList]_link_1_title"] = anc

    then I get an error:

        Traceback (most recent call last):
          File "<pyshell#8>", line 1, in <module>
            lm["Widget[LinksList]_link_1_title"] = anc
        TypeError: 'LLManager' object does not support item assignment

    The Browser class overrides __getattr__ as shown:

        def __getattr__(self, name):
            # pass through _form.HTMLForm methods and attributes
            form = self.__dict__.get("form")
            if form is None:
                raise AttributeError(
                    "%s instance has no attribute %s (perhaps you forgot to "
                    ".select_form()?)" % (self.__class__, name))
            return getattr(form, name)

    Why doesn't my class or instance get this method from the parent class?
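    The failure is not really about inheritance: item assignment invokes __setitem__, and because mixing in object makes LLManager a new-style class, Python looks special methods up on the type itself, bypassing the instance-level __getattr__ fallback that the old-style Browser relied on. A minimal sketch of explicit delegation that restores the behavior (assuming a form has already been chosen with select_form()):

        from mechanize import Browser

        class LLManager(Browser, object):
            # New-style classes resolve special methods on the type, so Browser's
            # __getattr__ is never consulted for __setitem__. Delegate explicitly:
            def __getitem__(self, name):
                return self.form[name]

            def __setitem__(self, name, value):
                self.form[name] = value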

    Read the article

  • BeautifulSoup Parser Confusion - HTML

    - by lyngbym
    I'm trying to scrape some content off another site and I'm not sure why BeautifulSoup is producing this output. It is only finding a blank space inside the match, but the real HTML contains a large amount of markup. I apologize if this is something stupid on my part. I'm new to python. Here's my code:

        import sys
        import os
        import mechanize
        import re
        from BeautifulSoup import BeautifulSoup

        def scrape_trails(BASE_URL, data):
            #Get the trail names
            soup = BeautifulSoup(data)
            sitesDiv = soup.findAll("div", attrs={"id" : "sitesDiv"})
            print sitesDiv

        def main():
            BASE_URL = "http://www.dnr.state.mn.us/skiing/skipass/list.html"
            br = mechanize.Browser()
            data = br.open(BASE_URL).get_data()
            links = scrape_trails(BASE_URL, data)

        if __name__ == '__main__':
            main()

    If you follow that URL you can see the sitesDiv contains a lot of markup. I'm not sure if I'm doing something wrong or if this is just malformed markup that the script can't handle. Thanks!
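    Before blaming the parser, it is worth confirming the markup is actually in the downloaded HTML: if the site fills sitesDiv with JavaScript after the page loads (which is what the browser shows), the fetched source genuinely contains a near-empty div, and no parser can recover markup that was never sent. A quick hedged check against the raw string:

        import re

        # Inspect the raw bytes around sitesDiv, bypassing any HTML parser.
        # The non-greedy match stops at the first </div>, so treat the size
        # as a rough lower bound rather than an exact measurement.
        m = re.search(r'<div[^>]*id="sitesDiv"(.*?)</div>', data, re.DOTALL)
        if m:
            print "raw sitesDiv holds %d bytes" % len(m.group(1))
        else:
            print "sitesDiv is not in the raw HTML at all"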

    Read the article

  • How to automate IE/Firefox to download some files from an https website with JavaScript links?

    - by Horace Ho
    Some of my users regularly download several PDF files from an internet website. They'd like to automate the process to save a few minutes every day and, most importantly, to minimize errors. I tried mechanize but failed, as mechanize does not process JavaScript, and the download links on the remote site are all triggered by JavaScript. So I am looking for solutions to automate the browser itself. Any recommendations? The remote server is https; login and search are FORM POSTs; the file download links are JavaScript; the client is win32 IE or Firefox. Thanks!
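    Driving a real browser with Selenium sidesteps the JavaScript problem entirely, since the page's own scripts run as normal. A rough sketch with Selenium's classic Python bindings (the URL, field names, and link text are placeholders, and newer Selenium releases spell these calls find_element(By.NAME, ...)):

        from selenium import webdriver

        driver = webdriver.Firefox()                   # or webdriver.Ie()
        driver.get("https://example.com/login")        # placeholder login URL

        driver.find_element_by_name("user").send_keys("me")        # placeholder fields
        driver.find_element_by_name("pass").send_keys("secret")
        driver.find_element_by_name("pass").submit()

        # JavaScript-triggered links work because a real browser executes them.
        driver.find_element_by_link_text("Download PDF").click()   # placeholder text
        driver.quit()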

    Read the article

  • how to use nokogiri methods .xpath & .at_xpath

    - by Radek
    I'm learning how to use nokogiri and a few questions came to me based on the code below:

        require 'rubygems'
        require 'mechanize'

        post_agent = WWW::Mechanize.new
        post_page = post_agent.get('http://www.vbulletin.org/forum/showthread.php?t=230708')

        puts "\nabsolute path with tbody gives nil"
        puts post_page.parser.xpath('/html/body/div/div/div/div/div/table/tbody/tr/td/div[2]').xpath('text()').to_s.strip.inspect

        puts "\n.at_xpath gives an empty string"
        puts post_page.parser.at_xpath("//div[@id='posts']/div/table/tr/td/div[2]").at_xpath('text()').to_s.strip.inspect

        puts "\ntwo lines solution with .at_xpath gives an empty string"
        rows = post_page.parser.xpath("//div[@id='posts']/div/table/tr/td/div[2]")
        puts rows[0].at_xpath('text()').to_s.strip.inspect

        puts
        puts "two lines working code"
        rows = post_page.parser.xpath("//div[@id='posts']/div/table/tr/td/div[2]")
        puts rows[0].xpath('text()').to_s.strip

        puts "\none line working code"
        puts post_page.parser.xpath("//div[@id='posts']/div/table/tr/td/div[2]")[0].xpath('text()').to_s.strip

        puts "\nanother one line code"
        puts post_page.parser.at_xpath("//div[@id='posts']/div/table/tr/td/div[2]").xpath('text()').to_s.strip

        puts "\none line code with full path"
        puts post_page.parser.xpath("/html/body/div/div/div/div/div/table/tr/td/div[2]")[0].xpath('text()').to_s.strip

    My questions: Is it better to use // or / in XPath? (@AnthonyWJones says that the use of an unprefixed // is not such a good idea.) I had to remove tbody from any working XPath, otherwise I got a nil result -- how can removing an element from the XPath make things work? Do I have to use .xpath twice to extract data when not using the full path? And why can't I make .at_xpath extract the data? It works nicely here, so what is the difference?

    Read the article

  • Does anyone know a way to interact with HP OV(NNM) with python, perl or bash?

    - by marc.riera
    Does anyone know if there is any API/library out there to access the NNM database from Perl or Python? We have NNM 7.53, which gives us access to its data through its Java-based applet over HTTP, and of course through the 'ovw' GUI interface. I've tried to use Mechanize and Selenium 2 (WebDriver) to automate some checks. The purpose is to integrate it with our other monitoring services on our "general master console". Many thanks. Marc

    Read the article

  • Can't start rails server after 3.0.1 upgrade

    - by Alberto
    Followed the instructions on Railscasts but can't get the server to start. It fails with the following error:

        $ rails s
        script/rails:6:in `require': no such file to load -- rails/commands (LoadError)
        	from script/rails:6:in `<main>'

    Saw the answer on this related question, but my Gemfile has no reference to any Rails 2.x version, and in the "bundle install" output I get "Using rails (3.0.1)".

    EDIT (adding Gemfile.lock details):

        GEM
          remote: http://rubygems.org/
          specs:
            abstract (1.0.0)
            actionmailer (3.0.1)
              actionpack (= 3.0.1)
              mail (~> 2.2.5)
            actionpack (3.0.1)
              activemodel (= 3.0.1)
              activesupport (= 3.0.1)
              builder (~> 2.1.2)
              erubis (~> 2.6.6)
              i18n (~> 0.4.1)
              rack (~> 1.2.1)
              rack-mount (~> 0.6.12)
              rack-test (~> 0.5.4)
              tzinfo (~> 0.3.23)
            activemodel (3.0.1)
              activesupport (= 3.0.1)
              builder (~> 2.1.2)
              i18n (~> 0.4.1)
            activerecord (3.0.1)
              activemodel (= 3.0.1)
              activesupport (= 3.0.1)
              arel (~> 1.0.0)
              tzinfo (~> 0.3.23)
            activeresource (3.0.1)
              activemodel (= 3.0.1)
              activesupport (= 3.0.1)
            activesupport (3.0.1)
            arel (1.0.1)
              activesupport (~> 3.0.0)
            builder (2.1.2)
            calendar_date_select (1.16.1)
            erubis (2.6.6)
              abstract (>= 1.0.0)
            googlecharts (1.6.0)
            i18n (0.4.2)
            mail (2.2.9)
              activesupport (>= 2.3.6)
              i18n (~> 0.4.1)
              mime-types (~> 1.16)
              treetop (~> 1.4.8)
            mechanize (1.0.0)
              nokogiri (>= 1.2.1)
            mime-types (1.16)
            nokogiri (1.4.3.1)
            pg (0.9.0)
            polyglot (0.3.1)
            rack (1.2.1)
            rack-mount (0.6.13)
              rack (>= 1.0.0)
            rack-test (0.5.6)
              rack (>= 1.0)
            rails (3.0.1)
              actionmailer (= 3.0.1)
              actionpack (= 3.0.1)
              activerecord (= 3.0.1)
              activeresource (= 3.0.1)
              activesupport (= 3.0.1)
              bundler (~> 1.0.0)
              railties (= 3.0.1)
            railties (3.0.1)
              actionpack (= 3.0.1)
              activesupport (= 3.0.1)
              rake (>= 0.8.4)
              thor (~> 0.14.0)
            rake (0.8.7)
            sparklines (0.5.2)
            thor (0.14.4)
            treetop (1.4.8)
              polyglot (>= 0.3.1)
            tzinfo (0.3.23)

        PLATFORMS
          ruby

        DEPENDENCIES
          calendar_date_select
          googlecharts
          mechanize
          pg
          rails (= 3.0.1)
          sparklines

    EDIT (adding boot.rb details):

        require 'rubygems'

        # Set up gems listed in the Gemfile.
        gemfile = File.expand_path('../../Gemfile', __FILE__)
        begin
          ENV['BUNDLE_GEMFILE'] = gemfile
          require 'bundler'
          Bundler.setup
        rescue Bundler::GemNotFound => e
          STDERR.puts e.message
          STDERR.puts "Try running `bundle install`."
          exit!
        end if File.exist?(gemfile)

    Read the article

  • What is the best way of testing XML responses?

    - by user303396
    I'm using Selenium IDE to test my web pages, but unfortunately I cannot use it to test the pages that return an XML response. Some people use Selenium Remote Control; others use modules like WWW::Mechanize together with Test::XML or Test::XPath. What is the best way to test the XML responses?
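    If a browser isn't strictly needed for these endpoints, one low-tech option is to fetch them directly and assert on the parsed XML rather than on raw text. A sketch in Python (the URL, status check, and element names are placeholders for whatever your app actually returns):

        import urllib2
        import xml.etree.ElementTree as ET

        resp = urllib2.urlopen("http://localhost/app/endpoint.xml")  # placeholder URL
        assert resp.getcode() == 200

        root = ET.fromstring(resp.read())
        # Assert on structure, not raw bytes, so whitespace changes don't break tests.
        assert root.tag == "results"                   # placeholder element names
        assert root.find("status").text == "ok"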

    Read the article

  • How to remove package from apt-get autoremove "queue"

    - by Darth
    I just installed Calibre for ebook management via apt-get on Ubuntu 10.04, but found out that it's one major version behind the current release, so I decided to reinstall it directly from source. When I uninstalled the packaged version, apt added a bunch of dependencies to the autoremove queue, and since I installed the newer version of Calibre from source, apt has no knowledge of it depending on those packages. Now I basically have all the libraries that I want, but they are still in the autoremove queue:

        The following packages were automatically installed and are no longer required:
          libqt4-script libqt4-designer libqt4-dbus python-lxml python-cherrypy3
          python-encutils libqt4-xmlpatterns libqt4-help python-qt4 python-clientform
          python-sip python-django python-mechanize libqt4-svg python-django-tagging
          libphonon4 libqt4-xml libqt4-assistant libqt4-webkit libqt4-scripttools
          python-beautifulsoup python-pypdf python-dateutil python-cssutils
        Use 'apt-get autoremove' to remove them.

    How do I tell apt that I want to keep these packages installed, without reinstalling them manually?

    Read the article

  • Emacs can't find my installed modules

    - by XVirtusX
    I can run Perl/Tk or WWW::Mechanize just fine from a terminal, but when I try to invoke the interpreter through the Emacs shell (M-!), it dies with this message:

        Can't locate tk.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.12.4 /usr/local/share/perl/5.12.4 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.12 /usr/share/perl/5.12 /usr/local/lib/site_perl .) at perl-tk.pl line 2.

    Obviously this @INC array doesn't hold the path to my modules; the question is, how can I add it? I didn't find anything in the Emacs manual. I also googled it, but to no avail.

    Read the article

  • Cucumber response object -- PHP environment

    - by trisignia
    Hi, I'm using Cucumber to test a PHP application, and while almost everything works without issue, I haven't yet figured out how to retrieve a response object for a request -- I'm looking to test whether a response is successful (code 200) and also to perform some Hpricot parsing of the response body. Right now my env.rb file is pretty simple:

        require 'webrat'

        include Webrat::Methods
        include Webrat::Matchers

        Webrat.configure do |config|
          config.mode = :mechanize
        end

    And if I put something like this in my step definitions:

        Given /Debug/ do
          puts response.to_yaml
        end

    I get this error:

        undefined method `response' for nil:NilClass (NoMethodError)
        ./features/step_definitions/webrat_steps.rb:11:in `/Debug/'
        features/versions.feature:4:in `Given Debug'

    Is anyone familiar with this type of situation? best, Jacob

    Read the article

  • How can Perl interact with an ajax form

    - by Jeff
    I'm writing a Perl program that was doing a simple GET to retrieve results and process them. But the site has been updated and now has a JavaScript component that handles the results (so the actual data is not in the source code anymore). This is the site: http://wro.westchesterclerk.com/legalsearch.aspx -- try putting in Index Number: 11103, Year: 2009. I want to be able to programmatically enter the "index number" and "year" at the bottom of the form where it says "search by number" and then retrieve the results listed next to it. I've written many programs in Perl that simply pass variables via the URL, where the results are in the source code and easy to parse (using LWP::Simple), like:

        $html = get("http://www.url.com?id=$somenum&year=$someyear");

    But this is totally new to me and I don't know where to begin. I'm somewhat familiar with LWP::UserAgent and Mechanize. I'd really appreciate any help. Thanks!
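    Since the target is an ASP.NET page, the usual trick is to submit the page's real form, letting the library carry along the hidden __VIEWSTATE fields, rather than building a GET URL by hand. A sketch of the idea using Python's mechanize, since both mechanize ports work the same way here (the control names are guesses, not read from the live page):

        import mechanize

        br = mechanize.Browser()
        br.set_handle_robots(False)
        br.open("http://wro.westchesterclerk.com/legalsearch.aspx")

        br.select_form(nr=0)              # ASP.NET pages typically have one big form
        # Hidden __VIEWSTATE/__EVENTVALIDATION inputs ride along automatically,
        # because we submit the parsed form instead of a hand-built URL.
        br["txtIndexNumber"] = "11103"    # guessed control names -- list the real
        br["txtYear"] = "2009"            # ones by iterating over br.forms()
        response = br.submit()
        print response.read()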

    Read the article

  • how to install python-spidermonkey on windows

    - by paul
    Hello all, I'm making a script with Python mechanize. One problem is that it's really hard to find a web client library for scraping or crawling that supports JavaScript. I did find some, such as python-spidermonkey and pykhtml, but most of them only work on Linux. I want to package my Python script as an exe file, so I definitely have to install it on the Windows platform. My question is: is there any way to install python-spidermonkey or pykhtml on Windows? I really need Windows support. If anyone can give a hint or help, I'd really appreciate it! Thanks in advance, Paul

    Read the article

  • Why Shouldn't I Programmatically Submit Username/Password to Facebook/Twitter/Amazon/etc?

    - by viatropos
    I wish there was a central, fully customizable, open source, universal login system that allowed you to login and manage all of your online accounts (maybe there is?)... I just found RPXNow today after starting to build a Sinatra app to login to Google, Facebook, Twitter, Amazon, OpenID, and EventBrite, and it looks like it might save some time. But I keep wondering, not being an authentication guru, why couldn't I just have a sleek login page saying "Enter username and password, and check your login service", and then in the background either scrape the login page from say EventBrite and programmatically submit the form with Mechanize, or use an API if there was one? It would be so much cleaner and such a better user experience if they didn't have to go through popups and redirects and they could use any previously existing accounts. My question is: What are the reasons why I shouldn't do something like that? I don't know much about the serious details of cookies/sessions/security, so if you could be descriptive or point me to some helpful links that would be awesome. Thanks!

    Read the article

  • BeautifulSoup: Get the contents of a specific table

    - by Adam Matan
    Hi, my local airport disgracefully blocks users without IE, and looks awful. I want to write a Python script that would get the contents of the Arrival and Departures pages every few minutes and show them in a more readable manner. My tools of choice are mechanize, for tricking the site into believing I use IE, and BeautifulSoup, for parsing the page to get the flights data table. Quite honestly, I got lost in the BeautifulSoup documentation and can't understand how to get the table (whose title I know) from the entire document, nor how to get a list of rows from that table. Any ideas? Adam
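    A minimal sketch of the usual BeautifulSoup 3 pattern, assuming the table can be located by its title text (the title string here is a placeholder, and html is the page fetched via mechanize):

        from BeautifulSoup import BeautifulSoup

        soup = BeautifulSoup(html)

        # Find the known title text, then walk up to its enclosing table.
        title = soup.find(text="Arrivals")            # placeholder title text
        table = title.findParent("table")
        for row in table.findAll("tr"):
            cells = [''.join(td.findAll(text=True)).strip()
                     for td in row.findAll("td")]
            print cells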

    Read the article

  • Python Scraper for Javascript?

    - by Diego
    Hey all, can anyone direct me to a good Python screen-scraping library for JavaScript-driven pages (hopefully one with good documentation/tutorials)? I'd like to see what options are out there, but most of all the easiest to learn with the fastest results... wondering if anyone has experience. I've heard some things about spidermonkey, but maybe there are better ones out there? Specifically, I use BeautifulSoup and Mechanize to get to here, but need a way to open the JavaScript popup, submit data, and download/parse the results in the popup:

        <a href="javascript:openFindItem(12510109)" onclick="s_objectID=&quot;javascript:openFindItem(12510109)_1&quot;;return this.s_oc?this.s_oc(e):true">Find Item</a>

    I'd like to implement this with Google App Engine and Django. Thanks!
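    Often such a popup is just a URL built from that numeric ID, so a first hedged step is to pull the ID out of the href and fetch the underlying page directly; whether this works depends on what openFindItem() actually does, and the URLs below are purely placeholders:

        import re
        import mechanize

        br = mechanize.Browser()
        page = br.open("http://example.com/listing").read()    # placeholder listing URL

        # Grab the numeric argument out of javascript:openFindItem(12510109)
        for item_id in re.findall(r'openFindItem\((\d+)\)', page):
            # Placeholder popup URL -- discover the real one by watching the
            # browser's network traffic or reading the openFindItem() source.
            popup = br.open("http://example.com/findItem?id=%s" % item_id).read()
            print item_id, len(popup)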

    Read the article

  • Warning that users of the function must handle

    - by Hagai
    I am looking for a way to raise a warning that the user will have to respond to. In a sense, I would like a late exception mechanism that fires after the function has already finished executing and returned the wanted value.

        SomeObject Foo(int input)
        {
            SomeObject result;
            // do something. oh, we need to warn the user.
            return result;
        }

        void Main()
        {
            SomeObject object;
            object = Foo(1);
            // after the copy constructor is done I would like an exception to be thrown
        }

    EDIT: Changed the title to refer to users of the function.

    Read the article

  • Scraping HTML WITHOUT unique identifiers using Python

    - by Nicholas Law
    I would like to design an algorithm, using Python, that scrapes thousands of pages like this one and this one, gathers all the data, and inserts it into a MySQL database. The script will be run on a weekly or bi-weekly basis to update the database with any new information added to each individual page. Ideally I would like a scraper that is easy to work with for table-structured data, but also for data that does not have unique identifiers (i.e. id and class attributes). Which scraper add-on should I use: BeautifulSoup, Scrapy, or Mechanize? Are there any particular tutorials/books I should be looking at for this desired result? In the long run I will be implementing a mobile app that works with all this data by querying the database.
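    For table-structured pages without ids or classes, position-based selection is usually enough. A hedged BeautifulSoup sketch (the URL, table index, and column layout are assumptions to adapt per page):

        import urllib2
        from BeautifulSoup import BeautifulSoup

        html = urllib2.urlopen("http://example.com/page").read()   # placeholder URL
        soup = BeautifulSoup(html)

        # No id/class to hook onto, so select the table by its position on the page.
        table = soup.findAll("table")[2]      # assumption: data lives in the 3rd table
        for row in table.findAll("tr")[1:]:   # skip the header row
            cells = [''.join(td.findAll(text=True)).strip()
                     for td in row.findAll("td")]
            print cells                        # e.g. feed these into a MySQL INSERT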

    Read the article

  • How to architect Rails site that can be edited while running?

    - by Chris Kimpton
    Hi, I am writing a Rails app that "scrapes/navigates" some other websites and web services for content. I am using Mechanize and Savon to do the heavy lifting. But given the dynamic nature of the web, I'd like to make my calls to these editable by the admin users of the site, rather than requiring me to release a new version of the site. The actual scraping thread happens asynchronously to the website, using the daemons gem. My requirements are:

    - Thinking that the scraping/webservice-calling code is quite simple, the easiest route is to make the whole class editable by the admins.
    - Keep a history of the scraping code, so that we can fairly easily revert if we introduce a problem.
    - Initially use the code from the file system, but as soon as it has been edited and stored somewhere, use that code instead.

    I am thinking my options are:

    - Store the code in the db (with a history table for the old versions).
    - Store the code in a private git repo somewhere and access that for the history/latest versions.

    I am thinking the git route might be easiest, given its raison d'etre is to track file history... But perhaps there is a gem/plugin that does all this for me, out of the box? Thanks in advance for any tips/advice. ~chris

    Read the article

  • The proper way to script periodically pulling a page from an https site

    - by DarthShader
    I want to create a command-line script for Cygwin/Bash that logs into a site, navigates to a specific page and compares it with the results of the last run. So far, I have it working with Lynx like so:

        ----snipped, just setting variables----
        echo "# Command logfile created by Lynx 2.8.5rel.5 (29 Oct 2005)
        ----snipped the recorded keystrokes-------
        key Right Arrow
        key p
        key Right Arrow
        key ^U" >> $tmp1

        #p, right arrow initiate the page saving
        #"type" the filename inside the "where to save" dialog
        for i in $(seq 0 $((${#tmp2} - 1)))
        do
            echo "key ${tmp2:$i:1}" >> $tmp1
        done

        #hit enter and quit
        echo "key ^J
        key y
        key q
        key y
        " >> $tmp1

        lynx -accept_all_cookies -cmd_script=$tmp1 https://thewebpage.com/login

        diff $tmp2 $oldComp
        mv $tmp2 $oldComp

    It definitely does not feel "right": the cmd_script consists of relative user actions instead of specifying exact link names and actions. So, if anything on the site ever changes, switches places, or a new link is added, I will have to re-create the actions. Also, I can't check for any errors, so I can't abort the script if something goes wrong (login failed, etc.). Another alternative I have been looking at is Mechanize with Ruby (as a note, I have 0 experience with Ruby). What would be the best way to improve or rewrite this?
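    For comparison, here is roughly what the same job looks like in Python's mechanize (the asker mentions Ruby's Mechanize, which is near-identical in spirit); the form fields and link text are placeholders for whatever the real site uses:

        import os
        import mechanize

        br = mechanize.Browser()
        br.set_handle_robots(False)
        br.open("https://thewebpage.com/login")

        br.select_form(nr=0)        # assumption: the login form is the first on the page
        br["username"] = "me"       # placeholder field names
        br["password"] = "secret"
        resp = br.submit()
        if resp.code != 200:        # unlike the Lynx keystroke script, failures are checkable
            raise SystemExit("login failed")

        page = br.follow_link(text="Reports").read()   # placeholder link text

        old = open("last_run.html").read() if os.path.exists("last_run.html") else ""
        if page != old:
            print "page changed since last run"
        open("last_run.html", "w").write(page)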

    Read the article

  • Is Storing Cookies in a Database Safe?

    - by viatropos
    If I use mechanize, I can, for instance, create a new Google Analytics profile for a website. I do this by programmatically filling out the login form and storing the cookies in the database. Then, at least until the cookie expires, I can access my Analytics admin panel without having to enter my username and password again. Assuming you can't create a new Analytics profile any other way (I don't think OpenAuth or any of that works for actually creating a new Google Analytics profile; the Analytics API is for viewing the data, but I need to create a new profile), is storing the cookie in the database a bad thing? If I do store the cookie in the database, it makes it super easy to programmatically log in to Google Analytics without the user ever having to go to the browser (maybe the app has functionality that says "user, you can schedule a hook that creates a new analytics profile for each new domain you create; just enter your credentials once and we'll keep you logged in and safe"). Otherwise I have to keep passing around emails and passwords, which seems worse. So is storing cookies in the database safe?
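    On the mechanics side, mechanize keeps its cookies in a cookiejar object that can be serialized, so "storing the cookie in the database" usually means persisting that jar's contents. A sketch of the pattern (the filename is illustrative; the same text could live in a database column, and either way it should be treated as a credential-grade secret, e.g. stored encrypted):

        import mechanize

        cj = mechanize.LWPCookieJar()
        br = mechanize.Browser()
        br.set_cookiejar(cj)

        # ... log in with br here, as usual ...

        # Persist cookies -- LWPCookieJar writes a plain-text, reloadable format.
        cj.save("session_cookies.txt", ignore_discard=True, ignore_expires=True)

        # Later, restore the session without logging in again:
        cj2 = mechanize.LWPCookieJar()
        cj2.load("session_cookies.txt", ignore_discard=True, ignore_expires=True)
        br2 = mechanize.Browser()
        br2.set_cookiejar(cj2)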

    Read the article

  • train wreck. Rails requires RubyGems >= 1.3.2

    - by JZ
    I recently upgraded Ruby to ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10] and I think I broke Rails. When I attempt to load Rails I get an odd message. Please help!

        $ ruby script/server
        Rails requires RubyGems >= 1.3.2. Please install RubyGems and try again: http://rubygems.rubyforge.org
        $ rails -v
        Rails 3.0.0.beta
        $ gem -v
        1.3.6
        $ which gem
        /usr/bin/gem
        $ whereis gem
        /usr/bin/gem
        $ which rails
        /usr/bin/rails
        $ whereis rails
        /usr/bin/rails
        $ /usr/bin/gem -v
        1.3.6
        $ /usr/bin/rails -v
        Rails 3.0.0.beta
        $ ruby script/console
        Rails requires RubyGems >= 1.3.2. Please install RubyGems and try again: http://rubygems.rubyforge.org
        $ gem list rails

        *** LOCAL GEMS ***

        rails (3.0.0.beta, 2.3.5, 2.2.2, 1.2.6)

        $ gem list

        *** LOCAL GEMS ***

        abstract (1.0.0) actionmailer (3.0.0.beta, 2.3.5, 2.2.2, 1.3.6)
        actionpack (3.0.0.beta, 2.3.5, 2.2.2, 1.13.6) actionwebservice (1.2.6)
        activemerchant (1.4.1) activemodel (3.0.0.beta)
        activerecord (3.0.0.beta, 2.3.5, 2.2.2, 1.15.6) activerecord-tableless (0.1.0)
        activeresource (3.0.0.beta, 2.3.5, 2.2.2)
        activesupport (3.0.0.beta, 2.3.5, 2.2.2, 1.4.4) acts_as_ferret (0.4.3)
        arel (0.2.pre) authlogic (2.1.3) builder (2.1.2) bundler (0.9.3)
        calendar_date_select (1.15) capistrano (2.5.2) cgi_multipart_eof_fix (2.5.0)
        chronic (0.2.3) columnize (0.3.1) compass (0.8.17) daemons (1.0.10)
        dnssd (0.6.0) erubis (2.6.5) fastercsv (1.5.0) fastthread (1.0.1) fcgi (0.8.7)
        ferret (0.11.6) flay (1.4.0) flog (2.4.0) gbarcode (0.98.16) gem_plugin (0.2.3)
        git (1.2.5) haml (2.2.15) haml-edge (2.3.100) highline (1.5.0) hoe (2.4.0)
        hpricot (0.6.164) i18n (0.3.3) javan-whenever (0.3.7) jeweler (1.4.0)
        jscruggs-metric_fu (1.1.5) json_pure (1.2.0) libxml-ruby (1.1.2)
        linecache (0.43) mail (2.1.2) mechanize (0.9.3) memcache-client (1.7.8)
        mime-types (1.16) mislav-will_paginate (2.3.11) mocha (0.9.7)
        mojombo-chronic (0.3.0) mongrel (1.1.5) needle (1.3.0) net-scp (1.0.1)
        net-sftp (2.0.1, 1.1.1) net-ssh (2.0.4, 1.1.4) net-ssh-gateway (1.0.0)
        nifty-generators (0.3.0) nokogiri (1.4.0) openrain-action_mailer_tls (1.1.3)
        passenger (2.2.5) polyglot (0.2.9) prawn (0.6.3) prawn-core (0.6.3)
        prawn-format (0.2.3) prawn-layout (0.3.2) prawn-security (0.1.1)
        rack (1.1.0, 1.0.1) rack-mount (0.4.5) rack-test (0.5.3)
        rails (3.0.0.beta, 2.3.5, 2.2.2, 1.2.6) railties (3.0.0.beta)
        rake (0.8.7, 0.8.3) rake-compiler (0.6.0) RedCloth (4.1.1) reek (1.2.6)
        relevance-rcov (0.9.2.1) rmagick (2.12.2) roodi (2.1.0) rsl-stringex (1.0.3)
        rspec (1.2.9) rspec-rails (1.2.9) ruby-debug (0.10.3) ruby-debug-base (0.10.3)
        ruby-openid (2.1.2) ruby-yadis (0.3.4) ruby2ruby (1.2.4) ruby_parser (2.0.4)
        rubyforge (2.0.3) rubygems-update (1.3.6, 1.3.5) rubynode (0.1.5)
        searchlogic (2.3.9) sexp_processor (3.0.3) spree (0.9.4)
        sqlite3-ruby (1.2.5, 1.2.4) termios (0.9.4) test-unit (2.0.5)
        text-format (1.0.0) text-hyphen (1.0.0) thor (0.13.0) tlsmail (0.0.1)
        topfunky-gruff (0.3.5) treetop (1.4.3) tzinfo (0.3.16) xmpp4r (0.4)

    Read the article
