Search Results

Search found 613 results on 25 pages for 'spawn fcgi'.

Page 3/25 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Ruby PTY.spawn is Hanging - How to fill out Email and Password in simple example

    - by viatropos
After asking this question, it looks like I need to use Ruby's PTY module, of which there is no documentation. I have written this code to try to push content to Google App Engine because the python command sometimes asks me for my username and password. But when I run this code, it just hangs:

        cmd = "appcfg.py update cdn"
        PTY.spawn("#{cmd} 2>&1") do |input, output, pid|
          begin
            input.expect("Email:") do
              output.write("#{credentials[:username]}\n")
            end
            input.expect("Password:") do
              output.write("#{credentials[:password]}\n")
            end
          rescue Exception => e
            puts "GAE Error..."
          end
        end

    What am I missing here? How can I get this to work?
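    For comparison, a minimal sketch of the same interaction in Python using pexpect (the prompt strings, timeout, and credentials below are placeholders, not taken from the post):

        # Hedged sketch: drive appcfg.py's Email/Password prompts with pexpect.
        # Prompt strings and credential values are illustrative placeholders.
        import pexpect

        child = pexpect.spawn("appcfg.py update cdn", timeout=120)
        child.expect("Email:")
        child.sendline("user@example.com")
        child.expect("Password:")
        child.sendline("secret")
        child.expect(pexpect.EOF)       # block until the upload finishes
        print(child.before.decode())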

    Read the article

  • Cryptic Erlang Errors

    - by Jim
Okay, so I started learning Erlang recently but am baffled by the errors it keeps returning. I made a bunch of changes but I keep getting errors. The syntax is correct as far as I can tell, but clearly I'm doing something wrong. Have a look:

        -module(pidprint).
        -export([start/0]).

        dostuff([]) ->
            receive
                begin ->
                    io:format("~p~n", ["This is a Success"])
            end.

        sender([N]) ->
            N ! begin,
            io:format("~p~n", [N]).

        start() ->
            StuffPid = spawn(pidprint, dostuff, []),
            spawn(pidprint, sender, [StuffPid]).

    Basically I want to compile the script, call start, spawn the "dostuff" process, pass its process identifier to the "sender" process, which then prints it out. Finally I want to send the atom "begin" to the "dostuff" process using the process identifier initially passed into sender when I spawned it. The errors I keep getting occur when I try to use c() to compile the script. Here they are:

        ./pidprint.erl:6: syntax error before: '->'
        ./pidprint.erl:11: syntax error before: ','

    What am I doing wrong?
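    As an aside, the message flow the poster is after - spawn a receiver, hand its identifier to a sender, have the sender post it a message - translates into a rough Python sketch like this (an illustration of the flow only, not a fix for the Erlang syntax errors; one thing worth checking is that begin is a reserved word in Erlang, so it cannot be used as an atom unquoted):

        # Illustrative sketch of the intended flow: spawn "dostuff",
        # hand its mailbox to "sender", sender posts a message to it.
        import multiprocessing as mp

        def dostuff(mailbox):
            msg = mailbox.get()           # like Erlang's receive ... end
            print("This is a Success:", msg)

        def sender(mailbox):
            mailbox.put("begin")          # like N ! begin

        if __name__ == "__main__":
            mailbox = mp.Queue()          # stands in for the pid handed to sender
            receiver = mp.Process(target=dostuff, args=(mailbox,))
            poster = mp.Process(target=sender, args=(mailbox,))
            receiver.start()
            poster.start()
            receiver.join()
            poster.join()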

    Read the article

  • In Django, why do I get a 500 server error when browsing, but "python mysite.fcgi" from SSH works fine?

    - by Jim
If I browse to my site, I get a 500 "internal server error." However, if I SSH into my server and go to my site's folder and run "python mysite.fcgi", I see the HTML rendered fine. Obviously, something is wrong, but I'm not sure what. Here is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteRule ^(media/.*)$ - [L]
        RewriteRule ^(static/.*)$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]

    Here is my mysite.fcgi file:

        #!/usr/bin/python2.5
        import sys, os
        sys.path.insert(0, "/kunden/homepages/34/[mydir]/htdocs/projects/django")
        sys.path.insert(1, "/kunden/homepages/34/[mydir]/lib/python/site-packages")
        os.chdir("/kunden/homepages/34/[mydir]/htdocs/projects/django/mysite")
        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(["method=threaded", "daemonize=false"])

    I'm setting this up on 1and1. It has been a pain, but I think I'm close.

    Read the article

  • socket() failed (No buffer space available) while connecting to upstream

    - by alfish
On my Ubuntu 10.04 VPS, I get regular 500 errors on an nginx (0.7.??) + fcgi web server running a Drupal site, and when I trace the nginx error log I see plenty of these:

        socket() failed (No buffer space available) while connecting to upstream ...

    I have tried different combinations of configs, but none fixed the problem. Currently I have 3 nginx workers, a keepalive timeout of 15 seconds, and

        PHP_FCGI_CHILDREN=5
        PHP_FCGI_MAX_REQUESTS=1000

    I'd really appreciate it if you could suggest a solution to this annoying problem.

    Read the article

  • passenger won't spawn more than 6 instances despite passenger_max_pool_size = 30

    - by mrD
I have some problems with passenger + nginx and hope someone might be able to help me and direct me in the right direction. I've set passenger_max_pool_size to 30, but passenger never spawns more than 6 instances. I'm loading a webpage that uses ajax to load 30 sub pages from the server, but because passenger only spawns 6 instances they are queued. What makes me confused is that "Waiting on global queue" is 0, but I can see in my browser that everything gets queued. When the first 6 ajax requests are done the next 6 start loading. What am I missing? :) This is the output from passenger-status (I had about 24 requests in the browser waiting for a response from the server when I checked this status):

        ----------- General information -----------
        max      = 30
        count    = 6
        active   = 6
        inactive = 0
        Waiting on global queue: 0

        ----------- Domains -----------
        /srv/rails/production/current:
          PID: 28428   Sessions: 1   Processed: 42   Uptime: 5m 43s
          PID: 28424   Sessions: 1   Processed: 23   Uptime: 5m 43s
          PID: 28422   Sessions: 1   Processed: 7    Uptime: 5m 43s
          PID: 28420   Sessions: 1   Processed: 22   Uptime: 6m 0s
          PID: 28426   Sessions: 1   Processed: 39   Uptime: 5m 43s
          PID: 28430   Sessions: 1   Processed: 7    Uptime: 5m 43s

    These are my passenger-related settings in nginx.conf:

        http {
            passenger_root /opt/ruby/lib/ruby/gems/1.8/gems/passenger-2.2.11;
            passenger_ruby /opt/ruby/bin/ruby;
            passenger_max_pool_size 30;

    Read the article

  • How to run several fastcgi processes for nginx (spawn-fcgi)

    - by SPnova
I want to run several FastCGI processes with different users and different php.ini files, but unfortunately I have only found information about running a single process: http://www.howtoforge.com/installing-nginx-with-php5-and-mysql-support-on-ubuntu-9.04 There is mod_suexec for Apache, which allows this. Does anyone know of something like that for FastCGI PHP on nginx?
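    One common shape for this (a sketch under assumptions, not a tested recipe): start one spawn-fcgi pool per site, each with its own socket, user, and PHPRC, then point each nginx server block's fastcgi_pass at the matching socket. The flags follow spawn-fcgi's documented -s/-u/-g/-C/-f options; every path, user, and pool size below is a made-up placeholder:

        # Sketch: one php-cgi pool per site via spawn-fcgi, each under its
        # own user with its own php.ini directory (PHPRC).
        import subprocess

        POOLS = [
            {"user": "site1", "sock": "/var/run/php-site1.sock", "phprc": "/etc/php5/site1"},
            {"user": "site2", "sock": "/var/run/php-site2.sock", "phprc": "/etc/php5/site2"},
        ]

        for pool in POOLS:
            subprocess.check_call(
                ["spawn-fcgi",
                 "-s", pool["sock"],    # per-site unix socket for nginx's fastcgi_pass
                 "-u", pool["user"],    # run the pool as this user (needs root)
                 "-g", pool["user"],
                 "-C", "4",             # number of php-cgi children
                 "-f", "/usr/bin/php-cgi"],
                env={"PHPRC": pool["phprc"], "PATH": "/usr/sbin:/usr/bin:/bin"},
            )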

    Read the article

  • C# Spawn Multiple Threads for work then wait until all finished

    - by pharoc
Just want some advice on best practice regarding multi-threading tasks. As an example, we have a C# application that, upon startup, reads data from various "type" tables in our database and stores the information in a collection which we pass around the application. This prevents us from hitting the database each time the information is required. At the moment the application reads data from 10 tables synchronously. I would really like to have the application read from each table in a different thread, all running in parallel, and have the application wait for all the threads to complete before continuing with startup. I have looked into BackgroundWorker, but just want some advice on accomplishing the above: Does the method sound logical in order to speed up the startup time of our application? How can we best handle all the threads, keeping in mind that each thread's work is independent of the others; we just need to wait for all the threads to complete before continuing. I look forward to some answers.
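    The question is C#-specific, but the spawn-then-join pattern itself is language-neutral. A minimal Python sketch, with load_table standing in for the real per-table query (the table names are made up):

        # Language-neutral sketch of "spawn N loaders, wait for all".
        # load_table is a hypothetical stand-in for the real database query.
        from concurrent.futures import ThreadPoolExecutor

        TABLES = ["status_types", "user_types", "order_types"]  # placeholders

        def load_table(name):
            # real code would query the database here
            return name, [f"{name}-row-{i}" for i in range(3)]

        with ThreadPoolExecutor(max_workers=len(TABLES)) as pool:
            # map() blocks until every submitted call has finished,
            # which is exactly the "wait for all threads" requirement
            cache = dict(pool.map(load_table, TABLES))

        print(sorted(cache))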

    Read the article

  • Fork a process and send data to it inside Rails

    - by taro
I'm making a Rails application. In one action I need to spawn a long-running process. This is not a problem: I can fork a new process using the spawn gem or some other. But some time after the process has been spawned, the user must be able to pass additional data to that process. Sure, I can fork a process which listens on a UNIX socket, store the socket address in the HTTP session, and communicate with that process over the DRb protocol when the user needs to pass new data to it. But I don't think that is the best solution, and it would be a problem to deploy the application to hosting. What is the easy way to do this?
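    For what it's worth, the shape of this pattern as a Python sketch (the queue stands in for the UNIX-socket/DRb channel the poster describes; nothing here is Rails-specific): spawn the worker once, keep a handle to it, and push data to it whenever the user supplies more.

        # Hedged sketch: long-running child that accepts data after spawn.
        import multiprocessing as mp

        def worker(inbox):
            while True:
                item = inbox.get()      # blocks until the parent sends data
                if item is None:        # sentinel: shut down
                    break
                print("worker got:", item)

        if __name__ == "__main__":
            inbox = mp.Queue()
            child = mp.Process(target=worker, args=(inbox,))
            child.start()
            inbox.put({"job": 42})      # "additional data", any time later
            inbox.put(None)
            child.join()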

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
We run a learning platform for primary schools here in the UK and it's all been running extremely well. However, at around 4PM Monday to Friday we see the same issue arise -- 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS; these come into our Layer 7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header, and forwards the data on to an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, but at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This is repeated infinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process that is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem - especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), and our servers are running CentOS release 5.7 (Final); they have Intel i3 540s at 3.07GHz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find it here. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!

    Read the article

  • Flask and lighttpd with FastCGI: can't get it to work

    - by kurojishi
I'm trying to deploy a simple Flask script to a lighttpd server with FastCGI. This is the configuration file for lighttpd, built using the Flask documentation http://flask.pocoo.org/docs/deploying/fastcgi/#configuring-lighttpd :

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
        )
        server.document-root = "/var/www"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/var/log/lighttpd/error.log"
        server.pid-file = "/var/run/lighttpd.pid"
        server.username = "www-data"
        server.groupname = "www-data"
        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        var.home_dir = "/var/lib/lighttpd"
        var.socket_dir = home_dir + "sockets/"
        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"
        dir-listing.encoding = "utf-8"
        server.dir-listing = "enable"
        compress.cache-dir = "/var/cache/lighttpd/compress/"
        compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" )
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
        fastcgi.server = ("weibo/callback.fcgi" =>
            ((
                "socket" => "/tmp/weibocrawler-fcgi.sock",
                "bin-path" => "/var/www/weibo/callback.fcgi",
                "check-local" => "disable",
                "max-procs" => 1
            ))
        )
        url.rewrite-once = (
            "^(/weibo($|/.*))$" => "$1",
            "^(/.*)$" => "weibo/callback.fcgi$1"

    and this is the script I'm trying to run:

        #!/home/nrl/kuro/weiboenv/bin/python
        from flup.server.fcgi import WSGIServer
        from callback import app

        if __name__ == '__main__':
            WSGIServer(application, bindAddress='/tmp/weibocrawler-fcgi.sock').run()

    When I test the configuration file I get this error:

        2013-07-02 17:15:42: (configfile.c.912) source: lighttpd.conf.new line: 52 pos: 1 parser failed somehow near here: weibo/callback.fcgi$1

    and when I remove the url rewrite I get these errors in the log, even though the daemon starts:

        2013-07-02 16:25:53: (log.c.166) server started
        2013-07-02 16:25:53: (mod_fastcgi.c.1104) the fastcgi-backend fcgi.py failed to start:
        2013-07-02 16:25:53: (mod_fastcgi.c.1108) child exited with status 2 fcgi.py
        2013-07-02 16:25:53: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2013-07-02 16:25:53: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
        2013-07-02 16:25:53: (server.c.938) Configuration of plugins failed. Going down.
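    One detail worth flagging in the script above (an observation, not something the thread confirms as the fix): it imports app but hands an undefined name, application, to WSGIServer, which would make the backend die with a NameError as soon as lighttpd spawns it. A corrected minimal version:

        #!/home/nrl/kuro/weiboenv/bin/python
        # Corrected sketch: pass the imported Flask app object itself.
        # The socket path and import are the poster's own.
        from flup.server.fcgi import WSGIServer
        from callback import app

        if __name__ == '__main__':
            WSGIServer(app, bindAddress='/tmp/weibocrawler-fcgi.sock').run()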

    Read the article

  • Fast CGI, Lighttpd, Ubuntu

    - by Gosh
Is this log file familiar to any Ubuntu users? Lighttpd log file:

        2009-08-30 21:37:45: (log.c.75) server started
        2009-08-30 21:37:45: (mod_fastcgi.c.1029) the fastcgi-backend php5-cgi failed to start:
        2009-08-30 21:37:45: (mod_fastcgi.c.1033) child exited with status 9 php5-cgi
        2009-08-30 21:37:45: (mod_fastcgi.c.1036) If you're trying to run PHP as a FastCGI backend, make sure you're using the FastCGI-enabled version. You can find out if it is the right one by executing 'php -v'; it should display '(cgi-fcgi)' in the output, NOT '(cgi)' NOR '(cli)'. For more information, check http://trac.lighttpd.net/trac/wiki/Docs%3AModFastCGI#preparing-php-as-a-fastcgi-program If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2009-08-30 21:37:45: (mod_fastcgi.c.1340) [ERROR]: spawning fcgi failed.
        2009-08-30 21:37:45: (server.c.908) Configuration of plugins failed. Going down.

    Please share how you solved the fcgi problem and got lighttpd to start, if you did. Thx, Gosh.

    Read the article

  • Django: Gracefully restart nginx + fastcgi sites to reflect code changes?

    - by Bartek
Hi, common situation: I have a client on my server who may update some of the code in his Python project. He can SSH into his shell and pull from his repository and all is fine -- but the running code is held in memory (as far as I know), so I need to actually kill the FastCGI process and restart it to pick up the code change. I know I can gracefully restart fcgi, but I don't want to have to do this manually. I want my client to update the code and, within 5 minutes or whatever, have the new code running under the fcgi process. Thanks
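    A sketch of one way to automate this, stated entirely as an assumption (the post doesn't prescribe it): poll the project tree's mtimes every so often and restart the FastCGI backend when they change. The restart command is a placeholder for however the process is actually managed (init script, supervisor, kill + respawn):

        # Hedged sketch: restart the FastCGI backend when source files change.
        import os, subprocess, time

        PROJECT_DIR = "/home/client/project"                  # assumed path
        RESTART_CMD = ["/etc/init.d/mysite-fcgi", "restart"]  # placeholder

        def tree_mtime(root):
            latest = 0.0
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if name.endswith(".py"):
                        try:
                            latest = max(latest, os.path.getmtime(os.path.join(dirpath, name)))
                        except OSError:
                            pass        # file vanished mid-walk
            return latest

        last = tree_mtime(PROJECT_DIR)
        while True:
            time.sleep(30)              # poll twice a minute
            now = tree_mtime(PROJECT_DIR)
            if now > last:
                last = now
                subprocess.call(RESTART_CMD)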

    Read the article

  • Spawning an interactive process

    - by notnoop
How can a Java application spawn a new interactive application (e.g. a command-line editor) from Java/Scala? When I use Runtime.getRuntime().exec("vim test"), I only get a Process instance, while vim runs in the background rather than appearing to the user.
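    For contrast, a minimal Python sketch of the underlying mechanism (not from the post): when the child's standard streams are left unredirected, it inherits the parent's terminal, which is exactly what an interactive program needs. Runtime.exec connects the child's streams to pipes instead, which is why vim never appears.

        # Minimal sketch: run an interactive editor in the foreground.
        # With no stdin/stdout/stderr redirection, the child inherits
        # the parent's terminal, appears to the user, and blocks until exit.
        import subprocess

        status = subprocess.call(["vim", "test"])
        print("editor exited with status", status)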

    Read the article

  • apache with php fastcgi keeps going down

    - by Josh Nankin
I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70MB on average). Here's my Apache configuration for MPM worker:

        KeepAlive On
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers           5
            MinSpareThreads       10
            MaxSpareThreads       10
            ThreadLimit           64
            ThreadsPerChild       10
            MaxClients            20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very much appreciated!

    Read the article

  • git on HTTP with gitolite and nginx

    - by Arnaud
I am trying to set up a server where my git repos would be accessible over HTTP(S). I am using gitolite and nginx (and gitlab for the web interface, but I doubt it makes any difference). I have searched the whole afternoon and I think I'm stuck. I think I have understood that nginx needs fcgiwrap to work with gitolite, so I tried several configurations, but none of them work. My repositories are at /home/git/repositories. Here are the three nginx configurations I have tried.

    1:

        location ~ /git(/.*) {
            gzip off;
            root /usr/lib/git-core;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            include /etc/nginx/fcgiwrap.conf;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/;
            fastcgi_param SCRIPT_NAME git-http-backend;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            fastcgi_param PATH_INFO $1;
            #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        }

    Result:

        > git clone http://myservername/projectname.git test/
        Cloning into test...
        fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server?

    and

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    2:

        location ~ /git(/.*) {
            fastcgi_pass localhost:9001;
            include /etc/nginx/fcgiwrap.conf;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            fastcgi_param PATH_INFO $1;
        }

    Result:

        > git clone http://myservername/projectname.git test/
        Cloning into test...
        fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server?

    and

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    3:

        location ~ ^.*\.git/objects/([0-9a-f]+/[0-9a-f]+|pack/pack-[0-9a-f]+.(pack|idx))$ {
            root /home/git/repositories/;
        }
        location ~ ^.*\.git/(HEAD|info/refs|objects/info/.*|git-(upload|receive)-pack)$ {
            root /home/git/repositories;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param PATH_INFO $uri;
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            include /etc/nginx/fcgiwrap.conf;
        }

    Result:

        > git clone http://myservername/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/projectname.git/info/refs
        fatal: HTTP request failed

    and

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    Also note that with any of those configurations, when I try to clone with a project name that actually doesn't exist, I get a 502 error. Has anyone already succeeded in doing this? What am I doing wrong? Thanks.

    UPDATE: the nginx error log file said:

        2012/04/05 17:34:50 [crit] 21335#0: *50 connect() to unix:/var/run/fcgiwrap.socket failed (13: Permission denied) while connecting to upstream, client: 192.168.12.201, server: myservername, request: "GET /git/oct_editor.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername"

    So I changed the permissions on /var/run/fcgiwrap.socket, and now I have:

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 403 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    Here is the error.log entry I have now:

        2012/04/05 17:36:52 [error] 21335#0: *78 FastCGI sent in stderr: "Cannot chdir to script directory (/usr/lib/git-core/git/projectname.git/info)" while reading response header from upstream, client: 192.168.12.201, server: myservername, request: "GET /git/projectname.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername"

    I'll keep investigating.

    Read the article

  • Why is Perl Cgiwrap Sock refusing connection to nginx?

    - by Emmanuel
Could anyone shed some light on the following line in my nginx error log? I'm trying to get Perl and nginx talking to each other, but so far no success.

        2011/11/20 09:18:34 [error] 24054#0: *1186 connect() to unix:/var/run/nginx/cgiwrap-dispatch.sock failed (111: Connection refused) while connecting to upstream, client: 150.101.221.75, server: example.com, request: "GET /dspam.cgi HTTP/1.1", upstream: "fastcgi://unix:/var/run/nginx/cgiwrap-dispatch.sock:", host: "example.com"

    The relevant nginx config:

        location ~ \.cgi$ {
            gzip off;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            fastcgi_index index.pl;
            fastcgi_param SCRIPT_FILENAME /var/www/dspam$fastcgi_script_name;
        }

    Read the article

  • apache 2.4, mod_proxy_fcgi not honouring .htaccess, workaround needed

    - by user229874
I am using Apache 2.4.7 with mod_proxy_fcgi for the purpose of passing PHP through to php-fpm (this will be used for a shared hosting environment). The .htaccess works fine for non-PHP files, but once a request hits the rewrite rule that proxies the PHP requests through, the .htaccess is ignored. I know why it is happening. The question is: how do I work around it? How do I force Apache to treat the request to a PHP file as a request to a local file, and then proxy it through? I have spent substantial time researching this problem, and the following "answers" were given as solutions:

    1) "Use Apache configuration instead of .htaccess" - a valid solution, but not for a shared hosting environment (I am not going to give shared hosting customers access to the Apache configuration ;)).

    2) "Don't use .htaccess, as it has performance/security/other issues" - well, how else would shared hosting customers control access and URL rewriting on their sites? Besides, if .htaccess were not a requirement, I would simply use nginx.

    3) "Put the rewrite rule for the proxy inside of " - this is incorrect, and it does not work.

    This behaviour appears to be not a bug but a "feature", as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887

    Read the article

  • How can I spawn a sprite based on a condition after the game starts?

    - by teddyweedy
How do I spawn a sprite at any time after the game starts? I'm using FlashDevelop with Flixel. I did try it in override public function create():void, but that works only at the beginning. I tried gating it with if (FlxG.score == 1), but that doesn't work. I also tried it in override public function update():void; the sprite appears and moves, but update() runs every frame, so it keeps spawning and leaves multiple sprites behind. I also tried FlxGroup, but to no avail.

    Read the article

  • How/is data shared between fastCGI processes?

    - by Josh the Goods
I've written a simple Perl script that I'm running via FastCGI on Apache. The application loads a set of XML data files which are used to look up values based upon the query parameters of an incoming request. As I understand it, if I want to increase the number of concurrent requests my application can handle, I need to allow FastCGI to spawn multiple processes. Will each of these processes have to hold a duplicate copy of the XML data in memory? Is there a way to set things up so that I can have one copy of the XML data loaded in memory while increasing the capacity to handle concurrent requests?
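    One classic answer to this shape of problem (stated here as a general pattern, not something from the post) is preload-then-fork: parse the XML once in the parent, then fork the workers, so the read-only data is shared copy-on-write rather than duplicated. A Python sketch with an assumed data file:

        # Sketch: preload-then-fork so read-only data is shared
        # copy-on-write between workers. "data.xml" is an assumed file.
        # (Caveat: CPython's reference counting writes to object headers,
        # which erodes the sharing over time.)
        import os
        import xml.etree.ElementTree as ET

        LOOKUP = ET.parse("data.xml").getroot()   # parsed once, in the parent

        def serve(lookup):
            # a real worker would answer FastCGI requests from here
            print(os.getpid(), "sees", len(lookup), "top-level elements")

        for _ in range(4):                        # spawn 4 workers
            if os.fork() == 0:                    # child: one physical copy shared
                serve(LOOKUP)
                os._exit(0)

        for _ in range(4):                        # parent: reap the workers
            os.wait()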

    Read the article

  • Apache2 on Ubuntu Server w/ CGI, FastCGI, mod_php

    - by illegal3alien
I've looked at various websites on configuring Apache with CGI and can't get mod_fcgid to work. It works fine using mod_php5, but I wanted to compare performance using CGI and FastCGI. I tried methods using FCGIWrapper among various other techniques, and the only one that didn't result in an unlogged 403 or a download of the file was using Action application/x-httpd-php /usr/bin/php-cgi. When trying to configure mod_fcgid, requests normally just started a download of the unprocessed file. I used wget to check the headers and the type was "application/x-httpd-php". At one point I was able to get to the page, but it resulted in a 403, which was listed in access.log but not in error.log (I was told it should be in there too). I tried to get it working on a fresh install of Ubuntu Server 10.04 LTS and 10.10 and had the same results on both, so I'm not doing something correctly in the configuration. I tried Virtualmin and could only get mod_php to work; the page just prompted a download when selecting cgi or fcgi from the control panel.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >