Search Results

Search found 12017 results on 481 pages for 'no root'.


  • File access with hostname or IP only - no domain?

    - by Jonathon
    It seems likely that this is an obvious question, but I'm having trouble tracking down any useful information. Normally, when accessing files in a particular directory on a server, I'm able to create a virtual host, assign a domain, set the root directory location, and so on; however, I'm in a situation where I have server space and need to access files with only a hostname. Is this possible? For example, say the hostname is 123hostname.com and the file I want access to is /home/sub-directory/filename.php. How do I get at it via a browser? I've tried http://123hostname.com/home/sub-directory/filename.php and some other variations on that theme (which I can't post because new users are restricted to one link per message), but I'm generally stuck. Any help, even if it's just to let me know that this isn't possible without additional configuration, would be great. Thank you!
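    A hedged sketch of one server-side approach, assuming Apache is in use and its configuration can be edited: URLs resolve relative to the DocumentRoot, not the filesystem root, so a directory outside it has to be mapped in explicitly (the /files alias name here is illustrative):

      # maps http://123hostname.com/files/filename.php to the directory
      Alias /files /home/sub-directory
      <Directory /home/sub-directory>
          Order allow,deny
          Allow from all
      </Directory>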

    Read the article

  • runas without asking for a password

    - by Gregory MOUSSAT
    On a Windows server that is in a domain, I have a script I run from Scheduled Tasks. I want this script to run under the mydomain\peter user account. That is simple to set up in Scheduled Tasks if you know Peter's password, but once done, the script breaks whenever Peter decides to (or has to) change his password. On Linux, a cron job can run as any user account without knowing the corresponding password, and root can run anything on behalf of another user (with su and sudo). Is there any way to do this with Windows? My need is for an old Windows 2003 server, but I can manage to run it from another computer.
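    For reference, the Linux behaviour the question points to looks like this (paths illustrative): the sixth field of a system crontab entry names the account the job runs as, with no password involved, and root can run a command as any user with su:

      # /etc/crontab
      0 2 * * * peter /home/peter/backup.sh
      # root, interactively:
      su - peter -c '/home/peter/backup.sh'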

    Read the article

  • How to exclude one subfolder from my RewriteRule Htaccess rules?

    - by tomaszs
    I have a .htaccess in the root of my website that looks like this:

      RewriteEngine On

      RewriteCond %{HTTP_HOST} !^www\.mydomain\.pl [NC]
      RewriteCond %{HTTP_HOST} ^(?:www\.)?([a-z0-9_-]+)\.mydomain\.pl [NC]
      RewriteRule ^/?$ /index.php?run=places/%1 [L,QSA]

      RewriteCond %{REQUEST_URI} !^/index.php$
      RewriteCond %{REQUEST_URI} !^/images/
      RewriteCond %{REQUEST_URI} !^/upload/
      RewriteCond %{REQUEST_URI} !^/javascript/
      RewriteRule ^(.*)$ /index.php?runit=$1 [L,QSA]

    I've installed a custom guest book in the folder guests, and now I would like to disable the rules above for this one specific folder, so that when I type mydomain.pl/guests I go straight to the actual guests folder. I understand that I need to somehow exempt the guests subfolder from the rules above, but how do I do that?
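    One approach that usually works with rules like these is to exempt the folder before the catch-all runs; a sketch (untested against this exact ruleset), using either a pass-through rule or an extra exclusion condition:

      # pass requests for guests through untouched, before the other rules:
      RewriteRule ^guests(/.*)?$ - [L]
      # ...or add it to the existing list of exclusions:
      RewriteCond %{REQUEST_URI} !^/guests/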

    Read the article

  • How to install pecl uploadprogress on Debian Lenny

    - by kidrobot
    I am getting this output/error for "pecl install uploadprogress":

      downloading uploadprogress-1.0.1.tgz ...
      Starting to download uploadprogress-1.0.1.tgz (8,536 bytes)
      .....done: 8,536 bytes
      4 source files, building
      running: phpize
      Configuring for:
      PHP Api Version:         20041225
      Zend Module Api No:      20060613
      Zend Extension Api No:   220060519
      building in /var/tmp/pear-build-root/uploadprogress-1.0.1
      running: /tmp/pear/temp/uploadprogress/configure
      checking for grep that handles long lines and -e... /bin/grep
      checking for egrep... /bin/grep -E
      checking for a sed that does not truncate output... /bin/sed
      checking for gcc... no
      checking for cc... no
      checking for cl.exe... no
      configure: error: no acceptable C compiler found in $PATH
      See `config.log' for more details.
      ERROR: `/tmp/pear/temp/uploadprogress/configure' failed

    php-pear is installed. I'm stumped.
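    The configure step is failing simply because no C compiler is installed. On Lenny the usual fix (assuming root) is:

      apt-get install build-essential php5-dev
      pecl install uploadprogress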

    Read the article

  • FreeBSD high load loopback interface

    - by user1740915
    I have a problem with a FreeBSD server: FreeBSD 9.0 amd64 with two network cards, em1 (Internet) and em0 (local network), configured with an ipfw firewall, natd, and squid (not transparent). The server acts as a gateway for Internet access. The problem: upload speed through squid is very low. At the moment I see natd and dhcpd loading the CPU whenever something is uploaded through squid, and a lot of traffic on the loopback interface.

    ipfw show output:

      00100 655389684 36707144666 allow ip from any to any via lo0
      00200 0 0 deny ip from any to 127.0.0.0/8
      00300 0 0 deny ip from 127.0.0.0/8 to any
      00400 0 0 deny ip from any to ::1
      00500 0 0 deny ip from ::1 to any
      00600 4 292 allow ipv6-icmp from :: to ff02::/16
      00700 0 0 allow ipv6-icmp from fe80::/10 to fe80::/10
      00800 1 76 allow ipv6-icmp from fe80::/10 to ff02::/16
      00900 0 0 allow ipv6-icmp from any to any ip6 icmp6types 1
      01000 0 0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136
      01100 1615 76160 deny ip from 192.168.1.1 to any in via em1
      01200 0 0 deny ip from 199.69.99.11 to any in via em0
      01300 46652 3705426 deny ip from any to 172.16.0.0/12 via em1
      01400 3936404 345618870 deny ip from any to 192.168.0.0/16 via em1
      01500 4 336 deny ip from any to 0.0.0.0/8 via em1
      01600 4129 387621 deny ip from any to 169.254.0.0/16 via em1
      01700 0 0 deny ip from any to 192.0.2.0/24 via em1
      01800 917566 33777571 deny ip from any to 224.0.0.0/4 via em1
      01900 147872 22029252 deny ip from any to 240.0.0.0/4 via em1
      02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1
      02100 3 248 deny ip from 172.16.0.0/12 to any via em1
      02200 35925 2281289 deny ip from 192.168.0.0/16 to any via em1
      02300 1808 122494 deny ip from 0.0.0.0/8 to any via em1
      02400 3 174 deny ip from 169.254.0.0/16 to any via em1
      02500 0 0 deny ip from 192.0.2.0/24 to any via em1
      02600 0 0 deny ip from 224.0.0.0/4 to any via em1
      02700 0 0 deny ip from 240.0.0.0/4 to any via em1
      02800 960156249 1095316736582 allow tcp from any to any established
      02900 64236062 8243196577 allow ip from any to any frag
      03000 34 1756 allow tcp from any to me dst-port 25 setup
      03100 193 11580 allow tcp from any to me dst-port 53 setup
      03200 63 4222 allow udp from any to me dst-port 53
      03300 64 8350 allow udp from me 53 to any
      03400 417 24140 allow tcp from any to me dst-port 80 setup
      03500 211 10472 allow ip from any to me dst-port 3389 setup
      05300 77 4488 allow ip from any to me dst-port 1723 setup
      05400 3 156 allow ip from any to me dst-port 8443 setup
      05500 9882 590596 allow tcp from any to me dst-port 22 setup
      05600 1 60 allow ip from any to me dst-port 2000 setup
      05700 0 0 allow ip from any to me dst-port 2201 setup
      07400 4241779 216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp
      07500 21135656 1048824936 allow tcp from any to any setup
      07600 474447 35298081 allow udp from me to any dst-port 53 keep-state
      07700 532 40612 allow udp from me to any dst-port 123 keep-state
      65535 1990638432 1122305322718 allow ip from any to any

    systat -ifstat while uploading via squid:

      Load Average   |||
      Interface    Traffic          Peak            Total
      tun0   in     79.507 KB/s    232.479 KB/s    42.314 GB
             out     2.022 MB/s      2.424 MB/s    59.662 GB
      lo0    in      4.450 MB/s      4.450 MB/s    43.723 GB
             out     4.450 MB/s      4.450 MB/s    43.723 GB
      em1    in      2.629 MB/s      2.982 MB/s   464.533 GB
             out     2.493 MB/s      2.875 MB/s   484.673 GB
      em0    in    240.458 KB/s    296.941 KB/s   442.368 GB
             out   512.508 KB/s    850.857 KB/s   416.122 GB

    top output:

        PID USERNAME THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
      66885 root       1  92    0 26672K  2784K CPU3   3 528:43 65.48% natd
       9160 dhcpd      1  45    0 31032K  9280K CPU1   1   7:40 32.96% dhcpd
      66455 root       1  20    0 18344K  2856K select 1 119:27  1.37% openvpn
      16043 squid      1  20    0 44404K 17884K kqread 2   0:22  0.29% squid

    squid.conf (cat /usr/local/etc/squid/squid.conf):

      # Recommended minimum configuration:
      acl manager proto cache_object
      acl localhost src 127.0.0.1/32 ::1
      acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

      # Example rule allowing access from your local networks.
      # Adapt to list your (internal) IP networks from where browsing
      # should be allowed
      acl localnet src 10.0.0.0/8       # RFC1918 possible internal network
      acl localnet src 172.16.0.0/12    # RFC1918 possible internal network
      acl localnet src 192.168.0.0/16   # RFC1918 possible internal network
      acl localnet src fc00::/7         # RFC 4193 local private network range
      acl localnet src fe80::/10        # RFC 4291 link-local (directly plugged) machines

      acl SSL_ports port 443
      acl Safe_ports port 80            # http
      acl Safe_ports port 21            # ftp
      acl Safe_ports port 443           # https
      acl Safe_ports port 70            # gopher
      acl Safe_ports port 210           # wais
      acl Safe_ports port 1025-65535    # unregistered ports
      acl Safe_ports port 280           # http-mgmt
      acl Safe_ports port 488           # gss-http
      acl Safe_ports port 591           # filemaker
      acl Safe_ports port 777           # multiling http
      acl CONNECT method CONNECT

      # Recommended minimum Access Permission configuration:
      # Only allow cachemgr access from localhost
      http_access allow manager localhost
      http_access deny manager
      # Deny requests to certain unsafe ports
      http_access deny !Safe_ports
      # Deny CONNECT to other than secure SSL ports
      http_access deny CONNECT !SSL_ports
      # We strongly recommend the following be uncommented to protect innocent
      # web applications running on the proxy server who think the only
      # one who can access services on "localhost" is a local user
      http_access deny to_localhost

      # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
      # Example rule allowing access from your local networks.
      # Adapt localnet in the ACL section to list your (internal) IP networks
      # from where browsing should be allowed
      http_access allow localnet
      http_access allow localhost
      # And finally deny all other access to this proxy
      http_access deny all

      # Squid normally listens to port 3128
      http_port 192.168.1.1:3128

      # Uncomment and adjust the following to add a disk cache directory.
      #cache_dir ufs /var/squid/cache 100 16 256

      # Leave coredumps in the first cache dir
      coredump_dir /var/squid/cache

    I understand that the traffic passes through squid several times, but I cannot find out why.
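    Two quick checks that may help attribute the loopback traffic, run while an upload is in progress (illustrative):

      sockstat -4 | grep 127.0.0.1   # which processes hold loopback sockets
      tcpdump -ni lo0 -c 50          # what the looped traffic actually is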

    Read the article

  • /dev/fuse "permission denied" even when member of fuse group

    - by steeef
    I have a backup script scheduled on a Debian 5.0 x86 server that mounts a remote directory via sshfs. However, when I attempt to mount the remote directory, I receive:

      failed to open /dev/fuse: Permission denied

    "ls -l /dev/fuse" returns:

      crwxrwxr-x 1 root fuse 10, 229 2010-11-12 09:08 /dev/fuse

    "id backup" returns:

      uid=501(backup) gid=501(backup) groups=501(backup),46(plugdev),108(fuse)

    The only way I can get the directory to mount is to run "chmod a+w /dev/fuse", but this is reset at some point during the day. It's a kludge anyway; I'd rather figure out why the group permissions aren't working.
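    One thing worth ruling out: supplementary groups are only picked up at login, so a process started from a session (or daemon) that predates the group change won't have the fuse group yet. A minimal test, assuming the sg utility is available (remote path and mount point illustrative):

      sg fuse -c "sshfs backup@remote:/data /home/backup/mnt"
      # if this works, restarting the session or daemon that launches
      # the backup script should fix the plain invocation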

    Read the article

  • Using NOPASSWD for specific commands in sudoers file, PASSWD for all others

    - by jberryman
    I would like to configure sudo so that users can run a few specific commands without entering a password (for convenience) and must enter a password for all other commands. This is what I have, but it does not work; a password is always required:

      Defaults env_reset
      Defaults timestamp_timeout = 1

      root ALL=(ALL:ALL) ALL

      # Allow members of group sudo to execute any command
      %sudo ALL=(ALL:ALL) NOPASSWD: /usr/sbin/pm-suspend, /usr/bin/apt-get, PASSWD: ALL

      #includedir /etc/sudoers.d

    Note that this is a Debian system, which uses the convention of adding users to the "sudo" group. Thanks.
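    In sudoers the last matching entry wins, so a common fix is to state the password-protected catch-all first and the NOPASSWD exceptions after it; a sketch with the same group and commands:

      %sudo ALL=(ALL:ALL) ALL
      %sudo ALL=(ALL:ALL) NOPASSWD: /usr/sbin/pm-suspend, /usr/bin/apt-get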

    Read the article

  • permission denied when trying to execute a binary I burned to a CD-R

    - by user16654
    On an Ubuntu Karmic machine, I burned a CD from the command prompt using:

      cdrecord -v speed=16 dev=0,1,0 /FPS.iso

    The CD contains an executable and some files. I tested the CD by loading it on another machine (Red Hat 5.3), and when I try to run the program I get the following message:

      bash: ./FPS1_1: Permission denied

    I can open other files, like text documents (the executable also comes with shared libraries). I realized I had burned the CD as root, so I burned another one as a regular user, but I still got the same problem. How can I remove this restriction, or what is the actual problem? P.S. The image was in /, if that helps.
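    Two common causes worth checking here: the CD being mounted noexec, and the image having been built without Rock Ridge extensions (plain ISO9660 carries no Unix permission bits, so the execute bit is lost). A sketch:

      mount | grep -i cd          # look for noexec among the mount options
      # rebuilding the image with Rock Ridge preserves permissions:
      mkisofs -R -o FPS.iso /path/to/files/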

    Read the article

  • Free DNS software with failover support?

    - by Lin
    I'm looking for DNS software that can accomplish the following: check the health of all A records at set intervals; if a server is unresponsive after multiple successive checks, replace its A record with a working server; while a server is down, keep checking it periodically, and once it's back up, restore the normal A records. Here's an equivalent I thought of: run DNS servers with a very low TTL (minutes), use a cron job to periodically query all webservers, and use sed to replace A records if need be, then restart the DNS server. I have a hard time believing there isn't already something that can accomplish the above. I'm not looking for a paid service, and I'm restricted to anything I can run with root access to a VPS. Any suggestions would be great. Thanks!
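    For what it's worth, the cron-based equivalent described above can be sketched in a few lines (BIND, the zone path, and the addresses are all illustrative assumptions; a real version would also bump the zone serial):

      #!/bin/sh
      # swap the A record to the standby if the primary fails a health check
      if ! curl -fsS --max-time 5 http://192.0.2.10/ >/dev/null; then
          sed -i 's/^www.*IN A .*/www IN A 192.0.2.11/' /etc/bind/db.example.com
          rndc reload example.com
      fi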

    Read the article

  • Using rsync to back up a folder

    - by Ali
    Hi, I have a Linux server with a NAS that is mounted as the folder "mount". My website lives in the "public_html" folder. I want to back up the website into the mount folder automatically at certain intervals, e.g. every hour. I read that there is something called "rsync" which keeps two folders in sync, and that it doesn't copy every file every time; instead it checks whether a file has changed and only updates changed files. How do I use it to make automatic backups? I have root access to the server. Thanks.
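    A minimal sketch of the hourly job, assuming public_html lives under /home/user (adjust both paths); the trailing slash on the source makes rsync copy the directory's contents rather than the directory itself:

      # crontab entry (crontab -e):
      0 * * * * rsync -a --delete /home/user/public_html/ /mount/website-backup/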

    Read the article

  • nginx: php-fastcgi running but php files not executing

    - by Daniel
    I have recently set up an nginx server with PHP running as a FastCGI process. The server serves HTML files fine, but PHP files are downloaded instead of being executed. This is what I have in nginx.conf:

      server {
          listen 80;
          server_name pubserver;

          location ~ \.php$ {
              root /usr/share/nginx/html;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
              include fastcgi_params;
          }
      }

    The command "netstat -tulpn | grep :9000" displays the following, which indicates php-fastcgi is running and listening on port 9000:

      tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 2663/php-cgi

    In case it's of any importance, my server runs CentOS 6 and I installed nginx and PHP from The Fedora Project repositories.
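    For comparison, a more conventional layout puts root at the server level and derives SCRIPT_FILENAME from $document_root; a sketch, not necessarily the fix, since it is also worth confirming that this server block is the one actually answering the request:

      server {
          listen 80;
          server_name pubserver;
          root /usr/share/nginx/html;

          location ~ \.php$ {
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              include fastcgi_params;
          }
      }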

    Read the article

  • suPHP: how to disable ls /

    - by Pol Hallen
    Using suPHP, I set a php.ini for every virtual host. In php.ini I also set:

      open_basedir = /home/site1

    PHP scripts run, but if I have a script that executes ls /, I can still see the whole root directory. How can I close this security hole?

      <VirtualHost *:80>
          ServerName site1
          ServerAlias www.site1.com
          DirectoryIndex index.html index.htm
          DocumentRoot /home/site1/
          suPHP_Engine on
          AddHandler x-httpd-php .php .php3 .php4 .php5
          suPHP_AddHandler x-httpd-php
          # THIS READS php.ini
          suPHP_ConfigPath /home/site1/
          <Directory /home/site1/>
              Options -Includes -Indexes -FollowSymLinks -ExecCGI -MultiViews
              AllowOverride none
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>
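    Note that open_basedir confines PHP's own file functions, not commands spawned through the shell, which is why ls / still works. A common complement is to disable the exec family in that per-vhost php.ini (a sketch):

      ; in /home/site1/php.ini
      disable_functions = exec,passthru,shell_exec,system,proc_open,popen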

    Read the article

  • Import emails from Claws IMAP cache

    - by calandoa
    I am trying to import an IMAP account composed of many folders from Claws Mail's internal cache. Claws is unfortunately unable to export all the folders at once by selecting the root of the account. Looking inside the Claws cache folder, each mail is a plain text file named as follows:

      base_path/My Account/Folder ABC/1
      base_path/My Account/Folder ABC/2
      base_path/My Account/Folder ABC/3
      base_path/My Account/Folder ABC/4
      base_path/My Account/Folder DEF/1
      base_path/My Account/Folder DEF/2
      base_path/My Account/Folder DEF/3
      base_path/My Account/Folder X/etc...

    I tried to import this structure with different mail readers such as KMail and Balsa, but each import failed. I would just like all these mails to be easily accessible and readable. Which tool on Linux can I use to import such a structure?
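    The layout described is essentially MH style, one message per numbered file. One commonly used recipe packs each folder into an mbox file, which most mail readers can import; a sketch, assuming formail from the procmail package is available:

      for f in "base_path/My Account/Folder ABC"/[0-9]*; do
          formail < "$f"    # prepends the From_ separator mbox requires
      done > Folder-ABC.mbox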

    Read the article

  • Cron won't use msmtpd to send emails in case of a failed cronjob

    - by Glister
    I'm trying to configure a machine so that it will send me an email if one of the cronjobs outputs something in case of an error. I'm using Debian Wheezy. Cron is working normally (without the email functionality), and msmtp is installed and configured. I have already symlinked /usr/{bin|sbin}/sendmail to /usr/bin/msmtpd. I can send email with:

      echo "test" | mail -s "subject" [email protected]

    or with:

      echo "test" | /usr/sbin/sendmail

    Without the symlink (/usr/sbin/sendmail), cron tells me:

      (CRON) info (No MTA installed, discarding output)

    With the symlinks I get:

      (root) MAIL (mailed 1 byte of output; but got status 0x004e, #012)

    Can you suggest how to configure the cron/msmtp pair? Thanks!
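    One possible snag, offered as an assumption to verify: cron invokes sendmail with several flags (such as -FCronDaemon -oem -i -t) that a bare msmtp binary may reject, which could surface as an odd exit status like the one above. A sketch of a wrapper to try in place of the symlink; Debian's msmtp-mta package, if available for Wheezy, is the cleaner route since it installs a sendmail-compatible front end:

      #!/bin/sh
      # hypothetical /usr/sbin/sendmail wrapper: ignore cron's extra
      # flags and let msmtp read recipients from the headers (-t)
      exec /usr/bin/msmtp -t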

    Read the article

  • Screen multiuser - Permission denied

    - by Zlug
    I'm trying to send input to a screen session from PHP. So far I have followed the steps explained in "Is running GNU Screen suid root the only way to make multiuser mode work?", and I have set "multiuser on" and "acladd www-data" in the screenrc file (or rather, in another file that I pass with the -c option, but the effect is the same). My problem now is that whenever I try to access screen from PHP with:

      exec('screen -S user/session -p 0 -X stuff "test"'."\n", $ret);

    I get the error:

      Cannot opendir /var/run/screen/S-user: Permission denied
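    Per the post referenced above, attaching to another user's session generally requires screen to be setuid root and the socket directory to be searchable; the usual steps look like this (verify the paths on your distribution):

      chmod u+s /usr/bin/screen
      chmod 755 /var/run/screen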

    Read the article

  • High Load - Low IO - Low CPU usage

    - by devup
    I have a system whose load is rather high. As you can see from the top output below, CPU usage and I/O are negligible:

      top - 17:31:59 up 4 days, 2:34, 2 users, load average: 1.00, 0.99, 1.00
      Tasks: 71 total, 1 running, 70 sleeping, 0 stopped, 0 zombie
      Cpu(s): 2.0%us, 2.0%sy, 0.0%ni, 95.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 960720k total, 707288k used, 253432k free, 67328k buffers
      Swap: 2811896k total, 2644k used, 2809252k free, 528928k cached

        PID USER  PR NI VIRT RES  SHR S %CPU %MEM TIME+   COMMAND
      15310 root  20  0 2512 1128 888 R  2.1  0.1 0:00.05 top

    I would appreciate any assistance in isolating the cause(s) of high load when I/O and CPU are not factors.
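    Worth knowing: the Linux load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, so a flat 1.00 with an idle CPU often means one process is stuck in D state. A quick check:

      ps -eo state,pid,cmd | awk '$1 == "D"'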

    Read the article

  • In Linux, when I use the ps command, why do I see multiple lines for www-data?

    - by johnlai2004
    I have a LAMP server running Ubuntu 9.10, Apache 2, MySQL 5 and PHP 5. When I log in as root through the shell and run "ps aux", I see something like the following:

      www-data 3151 0.1 4.3 220024 31032 ? S 12:22 0:00 /usr/sbin/apache2 -k start
      www-data 3153 0.2 3.6 214776 26020 ? S 12:22 0:01 /usr/sbin/apache2 -k start
      www-data 3162 0.3 5.1 225060 36920 ? S 12:26 0:01 /usr/sbin/apache2 -k start
      www-data 3163 0.1 4.1 218872 29664 ? S 12:26 0:00 /usr/sbin/apache2 -k start

    Why do I see multiple lines for www-data? Does each line represent an actual user on my website? I run into memory issues at times, so I'm trying to determine whether these www-data entries are related.
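    Each of those lines is an Apache worker process (prefork keeps a pool of them ready), not a visitor to the site; the pool size is governed by the prefork settings in the Apache configuration, roughly like this (values illustrative):

      <IfModule mpm_prefork_module>
          StartServers          5
          MinSpareServers       5
          MaxSpareServers      10
          MaxClients          150
      </IfModule>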

    Read the article

  • Unable to bring back the terminal launcher in Fedora

    - by Sandeepan Nath
    Our office housekeeping guy sat down at my Linux (Fedora 7) system and somehow removed the main panel; by panel I mean the bar that holds all the menus (Applications, Places, System, etc.) and a tab for every running application. I brought the panel back, but now the terminal launcher is missing and I can't find it anywhere. It used to be under Applications - System Tools - Terminal, but now the System Tools entry itself is not there. Under System - Preferences - Look and Feel - Main Menu, there is a System option that is unchecked, but I can't check it and apply. I am logged in as root. How do I bring it back? Please help; this is very irritating and weird. The guy doesn't even know what he did, so there's no use asking him. :)
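    On the GNOME 2 desktop that Fedora 7 shipped, one commonly cited recipe for restoring the default panel layout is to reset the panel's GConf tree and let the session respawn it; a sketch to try from the affected account (treat as an assumption to verify):

      gconftool-2 --recursive-unset /apps/panel
      pkill gnome-panel   # the session restarts it with default contents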

    Read the article

  • Simulating a low-bandwidth, high-latency network connection on Linux

    - by Justin L.
    I'd like to simulate a high-latency, low-bandwidth network connection on my Linux machine. Limiting bandwidth has been discussed before, e.g. here, but I can't find any posts that address limiting both bandwidth and latency. I can get either high latency or low bandwidth using tc, but I haven't been able to combine the two on a single connection. In particular, the example rate-control script here doesn't work for me:

      # tc qdisc add dev lo root handle 1:0 netem delay 100ms
      # tc qdisc add dev lo parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000
      RTNETLINK answers: Operation not supported

    How can I create a low-bandwidth, high-latency connection, using tc or any other readily available tool?
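    One workaround that has been reported to work is nesting the two the other way round, with tbf at the root and netem as its child; a sketch using the same numbers:

      tc qdisc add dev lo root handle 1: tbf rate 256kbit buffer 1600 limit 3000
      tc qdisc add dev lo parent 1:1 handle 10: netem delay 100ms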

    Read the article

  • Why can't windows see mmcblk0p3? [closed]

    - by jacknad
    The partition is created on the embedded Linux target like this:

      # n - new
      # p - partition
      # 3 - partition 3
      # 66 - starting cylinder
      # <blank> - maximum size for the ending cylinder
      # t - set file system type
      # 3 - partition 3
      # c - set to windows vfat
      # w - write partition table and exit
      echo -e "n\np\n3\n66\n\nt\n3\nc\nw" | fdisk /dev/mmcblk0

    The file system is then formatted on the target as MS-DOS like this:

      # -n volume-name
      # -F FAT-size
      mkfs.vfat -n DB -F 32 /dev/mmcblk0p3

    A Linux host can mount and access files in mmcblk0p3 without issue. Why can't Windows? Edit: Although the default number of FATs is 2, I tried adding "-f 2" (number of FATs) since this is actually being done by busybox on an embedded platform, but that didn't help. I understand the Linux MS-DOS file system does not support more than 2 FATs, but there are only 2 on this target (the boot partition, which is also FAT and is visible), along with an ext3 partition (p2) for the root file system.
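    One likely blocker, offered as an assumption to verify: Windows XP-era systems expose only the first primary partition on media they flag as removable, regardless of the partition table. From the Linux side, these checks confirm what Windows should see:

      fdisk -l /dev/mmcblk0   # partition table and type codes (0x0c = W95 FAT32 LBA)
      blkid /dev/mmcblk0p3    # confirms a vfat filesystem was created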

    Read the article

  • Unix domain socket firewall

    - by lagab
    Hello, everyone. I've got a problem with my Debian server. There is probably a vulnerable script on my web server, which runs as the www-data user. I also have Samba with winbind installed, and Samba is joined to a Windows domain. So this vulnerable script probably allows an attacker to brute-force our domain controller through winbind's Unix domain socket. I have lots of lines like this in my netstat -a output:

      unix 3 [ ] STREAM CONNECTED 509027 /var/run/samba/winbindd_privileged/pipe

    And our DC logs contain lots of recorded authentication attempts for the root and guest accounts. How can I restrict Apache's access to winbind? I had the idea of using some kind of firewall for IPC sockets. Is that possible?
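    Unix domain sockets are not subject to a packet filter; access is controlled by filesystem permissions on the socket path. One hedged approach is to tighten the privileged pipe directory so only the accounts that need it can enter (the group name is illustrative; test that legitimate winbind clients still work):

      chgrp sambaclients /var/run/samba/winbindd_privileged
      chmod 750 /var/run/samba/winbindd_privileged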

    Read the article

  • Website not available everywhere

    - by Cedric Reichenbach
    Today I noticed my website http://mint-nachhilfe.ch/ was down, but other people (on different networks) said it looked up from there. When I came home, I double-checked, and I can indeed reach it from here. Also, this website considers it down. Some facts: it's a Tomcat webapp, connected to an Apache2 server; I restarted both, with no change. Another (Ruby on Rails) application is connected to the same Apache2; I couldn't reach it either, but the check website above considers it online. All the while, I could connect directly to Tomcat at http://mint-nachhilfe.ch:8080! I don't know how to go about finding the root cause. I assume it's related to the Apache2 server, but how could that be?
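    A few checks that help localize a partial outage like this, run from a network where the site appears down:

      dig +short mint-nachhilfe.ch            # is DNS answering consistently?
      curl -sI http://mint-nachhilfe.ch/      # the Apache front end on port 80
      curl -sI http://mint-nachhilfe.ch:8080/ # Tomcat directly
      traceroute mint-nachhilfe.ch            # where does the path stop?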

    Read the article

  • How to execute programs on a mounted partition

    - by DevNoob
    This is the application I want to run:

      -rwxr-xr-x 1 manuel manuel 582841 Nov 22 09:51 PromServerMain

    This is the fstab entry:

      /dev/sda8 /media/data0 ext4 defaults,user 0 2

    This is the mount point:

      lrwxrwxrwx 1 manuel manuel    5 Nov 16 14:23 data -> data0
      drwxrwxr-x 9 manuel manuel 4096 Nov 22 09:26 data0

    This is what I get:

      manuel@P5KC /media/data/Projekte/PromServer/src $ ./PromServerMain
      bash: ./PromServerMain: Keine Berechtigung   [Permission denied]
      manuel@P5KC /media/data/Projekte/PromServer/src $ sudo ./PromServerMain
      sudo: unable to execute ./PromServerMain: Permission denied

    Even as root. I have no clue what's wrong. Any suggestions? The system is Debian Wheezy with Xfce.
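    Likely relevant: in fstab the user option implies noexec (along with nosuid and nodev), and options are applied left to right, so appending exec restores execute permission. A sketch of the adjusted line:

      /dev/sda8 /media/data0 ext4 defaults,user,exec 0 2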

    Read the article

  • CentOS - dual boot from new partition

    - by Dima
    I need to install two copies of CentOS 5.5 (bank A and bank B) on different partitions of the same hard disk, and install the GRUB boot loader on another partition (visible from both banks). The boot loader should redirect the boot menu to bank A or bank B, according to the configuration. The new partition is mounted at /common_partition, and GRUB is installed on it using the following command:

      grub-install /dev/hda

    On the new partition I created the following menu.lst file:

      title BOOTCONTROL REDIRECT : PLEASE WAIT
      root (hd0,1)
      configfile /boot/menu.lst
      boot

    On my setup both partitions (bank A and bank B) are primary, and GRUB is installed in the MBR. The problem: the new boot loader (on common_partition) does not load. What is wrong with my configuration?
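    With GRUB legacy, the stage1 in the MBR loads its menu.lst from the partition recorded at install time, so one thing to try (treat as an assumption; it expects the file at boot/grub/menu.lst under the given directory) is pointing grub-install at the shared partition explicitly:

      grub-install --root-directory=/common_partition /dev/hda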

    Read the article

  • Intraforest user account merge with Active Directory

    - by Neobyte
    I have a scenario where there is a root domain (RD) and two child domains (CD1 and CD2). Users have accounts on both CD1 and CD2, with identical samAccountNames, names etc, and various applications either use the CD1 or CD2 account for authentication to resources. I need to collapse CD2 into CD1, so I want to merge the accounts together. However ADMT does not allow me this option (merge options are greyed out), I think because it does not support intraforest merge of accounts (although it does not explicitly state this anywhere in the documentation). My question is - what is the easiest way for me to merge these accounts? Ultimately all I really need (I think) is for the SID of CD2\user1 to be added to the SIDHistory of CD1\user1 - is there a tool that supports this? Computer accounts and profiles are not a concern for this scenario. Group migration is unlikely to be an issue either - CD2\user1 is usually granted resource access through membership of a group on CD1.

    Read the article
