Search Results

Search found 3548 results on 142 pages for 'unix'.

Page 35 of 142

  • What's the best way to clean up after a fork bomb?

    - by raldi
        $ ls
        bash: no more processes

    Uh oh. Looks like someone made a fork bomb. Where I used to work, this pretty much meant that the shared server would need to be power-cycled, since even the sysadmins with root often couldn't get the problem cleaned up. Often, they couldn't even get a prompt. I've heard a few tricks (notably, to send STOP signals rather than KILL signals, since the latter would allow the remaining processes to immediately replace the killed ones), but I've never seen a comprehensive guide entitled "So, You Have Yourself a Fork Bomb?" Let's make one.
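    A sketch of the STOP-then-KILL approach mentioned above (hedged: it assumes you can still get a root prompt, and that the bomb runs as a single non-root user, here the hypothetical "bomber"):

        # Freeze every process owned by the offending user first. SIGSTOP
        # cannot be caught, and a stopped process cannot fork, so nothing
        # can respawn while we work.
        pkill -STOP -u bomber
        # Now reap the frozen processes at leisure.
        pkill -KILL -u bomber

    The ordering is the whole trick: kill the processes only once all of them are stopped, otherwise survivors fork replacements faster than you can kill them.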

    Read the article

  • Setfacl configuration issue in Linux

    - by Balualways
    I am configuring a Linux server with ACLs (Access Control Lists). It is not allowing me to perform a setfacl operation on one of the directories, /xfiles. I am able to perform setfacl on other directories such as /tmp and /opt/applocal/. I am getting this error:

        root@asifdl01devv # setfacl -m user:eqtrd:rw-,user:feedmgr:r--,user::---,group::r--,mask:rw-,other:--- /xfiles/change1/testfile
        setfacl: /xfiles/change1/testfile: Operation not supported

    I have defined my /etc/fstab as:

        /dev/ROOTVG/rootlv    /            ext3    defaults        1 1
        /dev/ROOTVG/varlv     /var         ext3    defaults        1 2
        /dev/ROOTVG/optlv     /opt         ext3    defaults        1 2
        /dev/ROOTVG/crashlv   /var/crash   ext3    defaults        1 2
        /dev/ROOTVG/tmplv     /tmp         ext3    defaults        1 2
        LABEL=/boot           /boot        ext3    defaults        1 2
        tmpfs                 /dev/shm     tmpfs   defaults        0 0
        devpts                /dev/pts     devpts  gid=5,mode=620  0 0
        sysfs                 /sys         sysfs   defaults        0 0
        proc                  /proc        proc    defaults        0 0
        /dev/ROOTVG/swaplv    swap         swap    defaults        0 0
        /dev/APPVG/home       /home        ext3    defaults        1 2
        /dev/APPVG/archives   /archives    ext3    defaults        1 2
        /dev/APPVG/test       /test        ext3    defaults        1 2
        /dev/APPVG/oracle     /opt/oracle  ext3    defaults        1 2
        /dev/APPVG/ifeeds     /xfiles      ext3    defaults        1 2

    I have a Solaris server where the vfstab is defined as:

        #device                      device                       mount        FS     fsck  mount    mount
        #to mount                    to fsck                      point        type   pass  at boot  options
        #
        fd                           -                            /dev/fd      fd     -     no       -
        /proc                        -                            /proc        proc   -     no       -
        /dev/vx/dsk/bootdg/swapvol   -                            -            swap   -     no       -
        swap                         -                            /tmp         tmpfs  -     yes      size=1024m
        /dev/vx/dsk/bootdg/rootvol   /dev/vx/rdsk/bootdg/rootvol  /            ufs    1     no       logging
        /dev/vx/dsk/bootdg/var       /dev/vx/rdsk/bootdg/var      /var         ufs    1     no       logging
        /dev/vx/dsk/bootdg/home      /dev/vx/rdsk/bootdg/home     /home        ufs    2     yes      logging
        /dev/vx/dsk/APP/test         /dev/vx/rdsk/APP/test        /test        vxfs   3     yes      -
        /dev/vx/dsk/APP/archives     /dev/vx/rdsk/APP/archives    /archives    vxfs   3     yes      -
        /dev/vx/dsk/APP/oracle       /dev/vx/rdsk/APP/oracle      /opt/oracle  vxfs   3     yes      -
        /dev/vx/dsk/APP/xfiles       /dev/vx/rdsk/APP/xfiles      /xfiles      vxfs   3     yes      -

    I am not able to find the issue. Any help would be appreciated.
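    A hedged guess at the cause, not part of the original question: ext3 only honors ACLs when the filesystem is mounted with the acl option, and /xfiles is mounted with plain "defaults". A quick test:

        # Remount /xfiles with ACL support enabled, then retry.
        mount -o remount,acl /xfiles
        setfacl -m user:eqtrd:rw- /xfiles/change1/testfile

    If that works, adding "acl" to the options column of the /xfiles line in /etc/fstab makes it stick across reboots.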

    Read the article

  • Redirect input from one terminal to another

    - by Niki Yoshiuchi
    I have sshed into a Linux box and I'm using dvtm and bash (although I have also tried this with GNU screen and bash). I have two terminals, currently /dev/pts/29 and /dev/pts/130. I want to redirect the input from one to the other. From what I understand, in /dev/pts/130 I can type:

        cat </dev/pts/29

    And then when I type in /dev/pts/29, the characters I type should show up in /dev/pts/130. However, what ends up happening is that every other character I type gets redirected. For example, if I type "hello" I get this:

        /dev/pts/29   | /dev/pts/130
        $             | $ cat </dev/pts/29
        $ el          | hlo

    This is really frustrating, as I need to do this in order to redirect the I/O of a process running in gdb (I've tried both run /dev/pts/# and set inferior-tty /dev/pts/#, and both resulted in the aforementioned behavior). Am I doing something wrong, or is this a bug in bash/screen/dvtm?
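    An aside on the likely mechanism (my reading, not from the post): the alternating characters happen because two processes, the shell attached to /dev/pts/29 and the remote cat, are both reading the same tty, and the kernel hands each keystroke to whichever reader asks first. A common workaround is to park the local shell in a long-running foreground command so cat becomes the only reader:

        # On /dev/pts/29: block the shell so it stops competing for input.
        sleep 1000000

    While sleep runs in the foreground, the shell is not reading the tty, so every keystroke on /dev/pts/29 goes to the cat on /dev/pts/130.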

    Read the article

  • How to set the font with mailx?

    - by Bob
    Solaris, Korn shell. I am writing SQL reports against an Oracle database, spooling them to a file, and emailing them with mailx, using the syntax below. The reports do not format properly unless viewed in the Courier New font. How do I set this with mailx?

        mailx -s "MY REPORT, `date +'%D %r'`" -r "REPORTING SYSTEM" [email protected] < /mydir/mysql.log > /dev/null
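    A hedged sketch of one common answer (mine, not from the post): mailx sends plain text and has no notion of fonts; the font is chosen by the recipient's mail client. One workaround is to send the report as HTML wrapped in a <pre> block, which most clients render in a monospace font. Using sendmail directly, since stock Solaris mailx cannot set arbitrary headers:

        ( echo "Subject: MY REPORT $(date +'%D %r')"
          echo "Content-Type: text/html"
          echo ""
          echo "<pre>"
          cat /mydir/mysql.log
          echo "</pre>" ) | /usr/lib/sendmail recipient@example.com

    The recipient address here is a placeholder.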

    Read the article

  • Automatically wake up notebooks not on the ethernet

    - by gletscher
    I am looking for an automated backup system, and I like Bacula. I have three notebooks and a desktop computer that need regular backups. Now, I don't want to leave them running all night just to do the backups, so I was thinking I could use wake-on-LAN to have Bacula wake up the machines, do the backups, and shut them down afterwards. While this may work with devices on the ethernet, it won't work with the notebooks on the wifi. So is it possible to have the notebooks scheduled to automatically wake up from suspend or shutdown? Or is it possible to interject a shutdown command if it is after a certain hour, and call the Bacula director to start the backup right away? I'm new to controlling a Linux system with scripts, so any hints on how and where to start are greatly appreciated. Thanks a lot for your help, input and ideas.
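    A sketch of the scheduled-wake idea (an assumption: the notebooks run Linux with a working RTC, and the 03:00 backup window is made up):

        # Suspend to RAM now and program the real-time clock to wake the
        # machine at 3 AM, in time for the backup job.
        rtcwake -m mem -t "$(date -d 'tomorrow 03:00' +%s)"

    After the backup completes, a Run After Job script in the Bacula job could suspend the machine again the same way.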

    Read the article

  • Bash Shell Hangs on ?+Tab-complete

    - by michaelmichael
    I often use tab completion in Bash when completing directories, but I find that it hangs for an unacceptable amount of time if I accidentally include a question mark in the path. I'd like to know why, and how to prevent it if possible. Here's the scenario. I start a command, using ~ to represent home:

        ls ~?Desktop/co

    Oops! I held down Shift for a split-second too long. I had intended for ? to be /. But (oh no!) muscle memory had already kicked in. I'd hit Tab before I noticed the mistake. Now I'm stuck waiting for the shell to beep angrily at me, usually for a minute or two. What happened? Why did the question mark cause it to hang and eventually beep? Any way to stop it from hanging?

    Read the article

  • Call to daemon in a /etc/init.d script is blocking, not running in background

    - by tony
    I have a Perl script that I want to daemonize. Basically, this Perl script will read a directory every 30 seconds, read the files that it finds and then process the data. To keep it simple here, consider the following Perl script (called synpipe_server; there is a symbolic link to this script in /usr/sbin/):

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $continue = 1;
        $SIG{'TERM'} = sub { $continue = 0; print "Caught TERM signal\n"; };
        $SIG{'INT'}  = sub { $continue = 0; print "Caught INT signal\n"; };

        my $i = 0;
        while ($continue) {
            # do stuff
            print "Hello, I am running " . ++$i . "\n";
            sleep 3;
        }

    So this script basically prints something every 3 seconds. Then, as I want to daemonize this script, I've also put this bash script (also called synpipe_server) in /etc/init.d/:

        #!/bin/bash
        # synpipe_server : This starts and stops synpipe_server
        #
        # chkconfig: 12345 12 88
        # description: Monitors all production pipelines
        # processname: synpipe_server
        # pidfile: /var/run/synpipe_server.pid

        # Source function library.
        . /etc/rc.d/init.d/functions

        pname="synpipe_server"
        exe="/usr/sbin/synpipe_server"
        pidfile="/var/run/${pname}.pid"
        lockfile="/var/lock/subsys/${pname}"

        [ -x $exe ] || exit 0

        RETVAL=0

        start() {
            echo -n "Starting $pname : "
            daemon ${exe}
            RETVAL=$?
            PID=$!
            echo
            [ $RETVAL -eq 0 ] && touch ${lockfile}
            echo $PID > ${pidfile}
        }

        stop() {
            echo -n "Shutting down $pname : "
            killproc ${exe}
            RETVAL=$?
            echo
            if [ $RETVAL -eq 0 ]; then
                rm -f ${lockfile}
                rm -f ${pidfile}
            fi
        }

        restart() {
            echo -n "Restarting $pname : "
            stop
            sleep 2
            start
        }

        case "$1" in
            start)   start ;;
            stop)    stop ;;
            status)  status ${pname} ;;
            restart) restart ;;
            *)       echo "Usage: $0 {start|stop|status|restart}" ;;
        esac

        exit 0

    So (if I have understood the doc for daemon correctly) the Perl script should run in the background and the output should be redirected to /dev/null if I execute:

        service synpipe_server start

    But here is what I get instead:

        [root@master init.d]# service synpipe_server start
        Starting synpipe_server : Hello, I am running 1
        Hello, I am running 2
        Hello, I am running 3
        Hello, I am running 4
        Caught INT signal
        [  OK  ]
        [root@master init.d]#

    So it starts the Perl script, but runs it without detaching it from the current terminal session, and I can see the output printed in my console, which is not really what I was expecting. Moreover, the PID file is empty (or contains a line feed only; no pid returned by daemon). Does anyone have any idea of what I am doing wrong?

    EDIT: maybe I should say that I am on a Red Hat machine:

        Scientific Linux SL release 5.4 (Boron)

    Would it do the job if, instead of using the daemon function, I used something like

        nohup ${exe} >/dev/null 2>&1 &

    in the init script?
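    For what it's worth, a hedged note on the likely cause: the daemon() helper in RHEL 5's /etc/rc.d/init.d/functions does not detach the program; it only sets up the environment and expects the program to fork into the background itself. Since the Perl script never forks, it stays attached to the terminal, and $! is empty because the shell spawned no background job. The nohup variant the poster suggests at the end does work as a minimal fix, sketched here:

        start() {
            echo -n "Starting $pname : "
            # Detach explicitly, since the Perl script does not fork itself:
            # ignore hangups, silence output, and background the process.
            nohup ${exe} >/dev/null 2>&1 &
            RETVAL=$?
            PID=$!          # now $! really is the PID of the background job
            echo
            [ $RETVAL -eq 0 ] && touch ${lockfile}
            echo $PID > ${pidfile}
        }

    A cleaner long-term fix would be for the Perl script to daemonize itself (for example with the CPAN module Proc::Daemon), but that is beyond the original question.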

    Read the article

  • bash starts replacing the characters on the current line instead of moving over to the next line

    - by Lazer
    I use the bash shell:

        $ bash --version
        GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
        Copyright (C) 2005 Free Software Foundation, Inc.
        $

    Sometimes, when I type a command at the prompt that is pretty lengthy and does not fit in the current line, instead of displaying the extra characters on the next line, bash starts again at the beginning of the current line, replacing the characters that were there and making a mess.

    What should happen:

        |---------------------------------------------|
        | $ my big long command takes a lot of argumen|
        | s and does not fit in a single line         |
        |                                             |
        |---------------------------------------------|

    What happens instead:

        |---------------------------------------------|
        | s and does not fit in a single linef argumen|
        |                                             |
        |                                             |
        |---------------------------------------------|

    The issue is intermittent. If I resize my shell window to a really small width, normal behaviour is restored. Does anyone have any idea what is happening here?

        $ echo $TERM
        xterm
        $ echo $PS1
        \[\e[30m\][\t]\[\e[0m\]\[\e]0;\w\a\]\[\e[30m\][\W]$
        $
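    A hedged suggestion (an assumption about the cause, not a confirmed diagnosis): this is the classic symptom of bash's idea of the terminal width going stale, typically after a window resize. Two things worth trying interactively before committing either to ~/.bashrc:

        # Make bash re-check the window size after each command.
        shopt -s checkwinsize

        # If the prompt is suspected instead, test with a bare prompt for a
        # while; unbalanced \[ \] around non-printing sequences in PS1
        # produce exactly this kind of mess.
        PS1='\$ '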

    Read the article

  • Break all hardlinks within a folder

    - by Georges Dupéron
    I have a folder which contains a certain number of files which have hard links (in the same folder or somewhere else), and I want to de-hardlink these files, so they become independent and changes to their contents won't affect any other file (their link count becomes 1). Below, I give a solution which basically copies each hard link to another location, then moves it back in place. However, this method seems rather crude and error-prone, so I'd like to know if there is some command which will de-hardlink a file for me.

    Crude answer: find files which have hard links (Edit: to also find sockets, etc. that have hardlinks, use find -not -type d -links +1):

        find -type f -links +1

    A crude method to de-hardlink a file (copy it to another location, and move it back). Edit: as Celada said, it's best to do a cp -p below, to avoid losing timestamps and permissions. Edit: create a temporary directory and copy to a file under it, instead of overwriting a temp file; it minimizes the risk of overwriting some data, though the mv command is still risky (thanks @Tobu).

        #!/bin/sh
        # This is unhardlink.sh
        set -e
        for i in "$@"; do
            temp="$(mktemp -d ./hardlnk-XXXXXXXX)"
            [ -e "$temp" ] && cp -ip "$i" "$temp/tempcopy" && mv "$temp/tempcopy" "$i" && rmdir "$temp"
        done

    So, to un-hardlink all hard links (Edit: changed -type f to -not -type d, see above):

        find -not -type d -links +1 -print0 | xargs -0 unhardlink.sh

    Read the article

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile:

        { command1 && command2 && command3 ; } > logfile.log 2>&1

    Here is what I want to do with the output of these commands:

    - STDERR and STDOUT for all commands goes to a logfile, in case I need it later. I usually won't look in here unless there are problems.
    - Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
    - It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:

        { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected]

    The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors alone if I do '2> error.log'.

    Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag:

        { ./configure && make --keep-going && make install ; } > build.log 2>&1

    Or, here's a simple (and perhaps sloppy) build-and-deploy script, which will keep going in the event of an error:

        { ./configure && make --keep-going && make install && rsync -av /foo devhost:/foo ; } > build-and-deploy.log 2>&1

    I think what I want involves some sort of Bash I/O redirection, but I can't figure this out.
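    One well-known way to get this (a sketch of a common answer, not text from the original post) is process substitution: send STDERR through tee, which appends it to the logfile and also passes it through to the real STDERR on the screen.

        # STDOUT goes only to the logfile; STDERR goes to the logfile AND
        # the screen. The exit status of the {} group is preserved, so
        # "|| mailx" still works for error handling.
        { command1 && command2 && command3 ; } > logfile.log 2> >(tee -a logfile.log >&2) || mailx -s "There was an error" [email protected]

    Process substitution (the >(...) form) is a bashism, so this needs #!/bin/bash rather than #!/bin/sh.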

    Read the article

  • How do I connect to the serial console port of a Sunfire 280R?

    - by DrStalker
    We have a Sunfire 280R (old SPARC/Solaris server) that is refusing to come up after being relocated. We're trying to connect to the serial console port, but all we get is random gibberish on the screen. We've tried both connecting with a DB25-to-DB9 adapter, and using a DB25-to-RJ45 adapter with a Cisco RJ45-to-DB9 adapter, to a Windows laptop. We're configuring the laptop to 9600 baud, 8 data bits, 1 stop bit, no parity. We've tried both no flow control and XON/XOFF. We get the same results hooking up to the serial port on a working SPARC server, so it's probably something in our setup rather than a fault with the server. How do we get access to the serial console so we can work out what is stopping this box from getting to the network? Is there a special Sun adapter we need to get or make to get the serial link working?

    Read the article

  • *Simple* way to block DDoS by number of requests

    - by Eduard Luca
    I have 3 Varnish 3.0.2 servers with Apache 2 as backends, which are being load-balanced through a separate HAProxy server. I need to find a very simple program (I'm not much of a sysadmin) which blocks requests from an IP if that IP has made more than X requests in Y seconds. Would something like this be achievable with a simple solution? Right now I have to block all requests manually with iptables.
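    Since iptables is already in the picture, a minimal sketch using its "recent" match (the port and thresholds are made-up examples): drop new connections from any IP that has opened more than 20 in the last 10 seconds.

        # Track every new connection to port 80 per source IP.
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
        # Drop the connection if this IP has hit us 20+ times in 10 seconds.
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent \
            --update --seconds 10 --hitcount 20 -j DROP

    For something with more knobs, fail2ban watching the HAProxy or Varnish logs is the usual next step up.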

    Read the article

  • What is the difference between bash and sh?

    - by Saif Bechan
    In scripts I see two kinds of shebang line, #!/usr/bin/sh and #!/usr/bin/bash. I have Googled this and the opinions vary a lot. The explanation I have seen on most websites is that sh is older than bash, and that there is no real difference. Does someone know the difference between these, and can you give a practical example of when to use either one of them? I highly doubt that there is no real difference, because then having two things that do the exact same thing would be just redundant.
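    A quick demonstration of a practical difference (my example; on many Linux systems /bin/sh is a stripped-down POSIX shell such as dash, while on others it is bash running in POSIX mode): the [[ ]] test is a bash extension that a plain POSIX sh does not understand.

        $ bash -c '[[ hello == h* ]] && echo "this is bash"'
        this is bash
        $ sh -c '[[ hello == h* ]] && echo "this is sh"'
        sh: 1: [[: not found

    So a script that starts with #!/bin/sh should stick to POSIX constructs, even if it happens to work under bash today.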

    Read the article

  • Mac OS X: at command not working

    - by David.Chu.ca
    I want to schedule a job by using the at command. Here's what I tried:

        $ at now + 1 minute
        echo 'Test at command'
        <EOD>

    I saw the job was scheduled by using at -l. However, I saw no echoed output. I guess that I may need to add my user to the at.allow file, but I cannot find at.allow on my Mac (Snow Leopard). Not sure what I need to do to test this at command?
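    A hedged pointer (the usual cause on OS X, though not confirmed by the post): at jobs are accepted but never run on Mac OS X because the atrun daemon is disabled by default. Enabling it is enough to make queued jobs fire:

        sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.atrun.plist

    Also note that at jobs run without a terminal, so the echo output arrives as local mail rather than appearing on screen.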

    Read the article

  • Specify default group and permissions for new files in a certain directory

    - by mislav
    I have a certain directory in which there is a project shared by multiple users. These users use SSH to gain access to this directory and modify/create files. This project should only be writeable to a certain group of users: let's call it "mygroup". During an SSH session, all files/directories created by the current user should by default be owned by group "mygroup" and have group-writeable permissions.

    I can solve the permissions problem with umask:

        $ cd project
        $ umask 002
        $ touch test.txt

    File "test.txt" is now group-writeable, but it still belongs to my default group ("mislav", same as my username) and not to "mygroup". I can chgrp recursively to set the desired group, but I wanted to know if there is a way to set some group implicitly, like umask changes default permissions during a session. This specific directory is a shared git repo with a working copy, and I want git checkout and git reset operations to set the correct mask and group for new files created in the working copy. The OS is Ubuntu Linux.

    Update: a colleague suggests I should look into getfacl/setfacl of POSIX ACLs, but the solution below combined with umask 002 in the current session is good enough for me and is much simpler.
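    The standard mechanism for this (a sketch; the accepted answer itself isn't shown in this excerpt) is the setgid bit on the directory, which makes new files and subdirectories inherit the directory's group instead of the creator's default group:

        chgrp -R mygroup project
        chmod g+s project                            # new entries inherit group "mygroup"
        find project -type d -exec chmod g+s {} +    # apply to existing subdirectories too

    Combined with umask 002, this gives group-writeable files with the right group for every user in mygroup.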

    Read the article

  • Why is the /home folder missing from my backup archive created by tar?

    - by Konstantin Pereyaslov
    So I'm doing a full backup of my VPS using the following command (as root, of course):

        tar czvf 20120604.tar.gz /

    Everything seems to be fine; all files seem to appear in the list. The size of the archive is 6 GB and the gunzipped version is 11 GB, which includes /home, because I have 11 GB of data on the VPS in total. But when I actually try to unpack the archive, or open it using mc or WinRAR, there's no /home folder. And WinRAR says:

        20120604.tar.gz - TAR+GZIP archive, unpacked size 894 841 346 bytes

    It can't be WinRAR's bug, because when I type tar xzvf 20120604.tar.gz, the /home folder isn't unpacked either. Why is /home missing from my archive? And what can I do to include it there? tar --version outputs the following:

        tar (GNU tar) 1.15.1
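    A couple of hedged diagnostics (mine, not from the post) that narrow this down: check whether /home actually made it into the archive index, and whether /home lives on a separate filesystem that a future run should capture explicitly.

        tar tzf 20120604.tar.gz | grep '^home/' | head   # GNU tar stores paths without the leading /
        df -h /home                                      # is /home its own mount?

    If /home shows up in the index but not on extraction, the archive is likely truncated or corrupt; the mismatch between the 11 GB expected and WinRAR's roughly 0.9 GB "unpacked size" points the same way.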

    Read the article

  • Safely get rid of "You have new mail in /var/mail" on a Mac?

    - by viatropos
    I was messing around with sendmail in Rails a year ago, and have had this message popping up in the terminal after every command ever since:

        You have new mail in /var/mail/Lance

    How do I properly get rid of that, so the message goes away? I never use any of that functionality and don't read mail on my computer. There's one file in /var/mail called Lance, and it's huge. Can I just remove it?
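    Two common cleanups, offered as suggestions rather than anything from the post:

        sudo sh -c '> /var/mail/Lance'   # empty the mailbox; the prompt stops announcing it
        unset MAILCHECK                  # or: stop the current shell from checking mail at all

    Deleting the file outright also works, but some local mailer setups will simply recreate it, so truncating it (or putting "unset MAILCHECK" in ~/.bash_profile) tends to be the more durable fix.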

    Read the article

  • init never reaping zombie/defunct processes

    - by st9
    Hi, on my Fedora Core 9 webserver with kernel 2.6.18.8, init isn't reaping zombie processes. This would be bearable if it weren't for the process table eventually reaching an upper limit where no new processes can be allocated. Sample output of ps -el | grep 'Z':

        F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
        5 Z     0  2648     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        1 Z    51  2656     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        1 Z     0  2670     1  0  75   0 -     0 exit   ?        00:00:02 crond <defunct>
        4 Z     0  2874     1  0  82   0 -     0 exit   ?        00:00:00 mysqld_safe <defunct>
        5 Z     0 28104     1  0  76   0 -     0 exit   ?        00:00:00 httpd <defunct>
        5 Z     0 28716     1  0  76   0 -     0 exit   ?        00:00:06 lfd <defunct>
        5 Z    74 10172     1  0  75   0 -     0 exit   ?        00:00:00 sshd <defunct>
        5 Z     0 11199     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11202     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11205     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11208     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11211     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11240     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11246     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11249     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11252     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        1 Z     0 14106     1  0  80   0 -     0 exit   ?        00:00:00 anacron <defunct>
        5 Z     0 14631     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>

    Is this an OS bug? A misconfiguration? I'm looking for inspiration as to the source of this problem. Thanks.

    Read the article

  • tcsh: path of sourced file

    - by Charles
    I am sourcing a file under tcsh. This file could be anywhere on the filesystem. How can I retrieve the path of my sourced file? $0 won't work: I don't execute the file, I source it. Many thanks!
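    A frequently cited tcsh trick (hedged: it only works on the very first line of the sourced file, before $_ gets overwritten by the next command): $_ holds the command line that invoked the source, so its second word is the file's path.

        # First line of the sourced file:
        set sourced = ($_)
        if ( "$sourced" != "" ) then
            echo "this file is: $sourced[2]"
        endif

    Note the path is exactly what the caller typed, so it may be relative to the caller's working directory.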

    Read the article

  • Use Apache authentication to Segregate access to Subversion subdirectories

    - by Stefan Lasiewski
    I've inherited a Subversion repository, running on FreeBSD and using Apache 2.2. Currently, we have one project, which looks like this. We use both local files and LDAP for authentication.

        <Location />
            DAV svn
            SVNParentPath /var/svn
            AuthName "Staff only"
            AuthType Basic
            # Authentication through local file (mod_authn_file), then LDAP (mod_authnz_ldap)
            AuthBasicProvider file ldap
            # Allow some automated programs to check content into the repo
            # mod_authn_file
            AuthUserFile /usr/local/etc/apache22/htpasswd
            Require user robotA robotB
            # Allow any staff to access the repo
            # mod_authnz_ldap
            Require ldap-group cn=staff,ou=PosixGroup,ou=foo,ou=Host,o=ldapsvc,dc=example,dc=com
        </Location>

    We would like to allow customers access to certain subdirectories without giving them global access to the entire repository. We would prefer to do this without migrating these subdirectories to their own repositories. Staff also need access to these subdirectories. Here's what I tried:

        <Location /www.customerA.com>
            DAV svn
            SVNParentPath /var/svn
            # mod_authn_file
            AuthType Basic
            AuthBasicProvider file
            AuthUserFile /usr/local/etc/apache22/htpasswd-customerA
            Require user customerA
        </Location>

        <Location /www.customerB.com>
            DAV svn
            SVNParentPath /var/svn
            # mod_authn_file
            AuthType Basic
            AuthBasicProvider file
            AuthUserFile /usr/local/etc/apache22/htpasswd-customerB
            Require user customerB
        </Location>

    I've tried the above. Access to '/' works for staff. However, access to /www.customerA.com and /www.customerB.com does not work. It looks like Apache is trying to authenticate 'customerB' against LDAP, and doesn't try the local password file. The error is:

        [Mon May 03 15:27:45 2010] [warn] [client 192.168.8.13] [1595] auth_ldap authenticate: user stefantest authentication failed; URI /www.customerB.com [User not found][No such object]
        [Mon May 03 15:27:45 2010] [error] [client 192.168.8.13] user stefantest not found: /www.customerB.com

    What am I missing?
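    One hedged guess at the missing piece (an assumption, not a confirmed answer): in Apache 2.2 the LDAP authorization layer is authoritative by default, so a Require it cannot satisfy lets it reject the request before the file-based check can succeed. Marking it non-authoritative in the customer locations lets the htpasswd lookup proceed:

        <Location /www.customerB.com>
            ...
            AuthzLDAPAuthoritative off
        </Location>

    Worth verifying against the mod_authnz_ldap 2.2 docs, since the merging of nested <Location> sections is exactly the sort of thing that behaves unintuitively here.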

    Read the article

  • Compiling PHP with GD and libjpeg support

    - by Robin Winslow
    I compile my own PHP, partly to learn more about how PHP is put together, and partly because I'm always finding I need modules that aren't available by default, and this way I have control over that. My problem is that I can't get JPEG support in PHP. I'm using CentOS 5.6. Here are my configuration options when compiling PHP 5.3.8:

        './configure' '--enable-fpm' '--enable-mbstring' '--with-mysql' '--with-mysqli' '--with-gd' '--with-curl' '--with-mcrypt' '--with-zlib' '--with-pear' '--with-gmp' '--with-xsl' '--enable-zip' '--disable-fileinfo' '--with-jpeg-dir=/usr/lib/'

    The ./configure output says:

        checking for GD support... yes
        checking for the location of libjpeg... no
        checking for the location of libpng... no
        checking for the location of libXpm... no

    And then we can see that GD is installed, but that JPEG support isn't there:

        # php -r 'print_r(gd_info());'
        Array
        (
            [GD Version] => bundled (2.0.34 compatible)
            [FreeType Support] =>
            [T1Lib Support] =>
            [GIF Read Support] => 1
            [GIF Create Support] => 1
            [JPEG Support] =>
            [PNG Support] => 1
            [WBMP Support] => 1
            [XPM Support] =>
            [XBM Support] => 1
            [JIS-mapped Japanese Font Support] =>
        )

    I know that PHP needs to be able to find libjpeg, and it obviously can't find a version it's happy with. I would have thought /usr/lib/libjpeg.so or /usr/lib/libjpeg.so.62 would be what it needs, but I supplied it with the correct lib directory (--with-jpeg-dir=/usr/lib/) and it doesn't pick them up, so I guess they can't be the right versions. rpm says libjpeg is installed. Should I yum remove and reinstall it, and all its dependent packages? Might that fix the problem?

    Here's a paste bin with a collection of hopefully useful system information: http://pastebin.com/ied0kPR6
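    A hedged pointer (consistent with the symptoms, though not confirmed here): configure links a small test program against libjpeg, so it needs the development headers, not just the runtime library, and it expects an installation prefix rather than the lib directory itself. On CentOS that usually means:

        yum install libjpeg-devel        # provides jpeglib.h and the .so symlink
        ./configure ... --with-jpeg-dir=/usr ...

    On x86_64 boxes the 64-bit library lives in /usr/lib64, another reason to pass the /usr prefix and let configure probe both lib directories.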

    Read the article

  • Using mod_wsgi with mpm_itk: socket permission issue

    - by djechelon
    I'm using mpm-itk as my MPM for increased security in a shared environment. I also have a Firefox Sync Server within one of the vhosts I host. That vhost is restricted to a certain user via AssignUserId user group. The problem is that the socket /var/run/wsgi...whatever.sock is chmodded srwx------ and owned by Apache's wwwrun. While I configured the vhost with

        WSGIProcessGroup sync
        WSGIDaemonProcess sync user=djechelon group=djechelon processes=1 threads=5

    I still get the error that Apache wants to access a socket that is not accessible, and because of this gets an error. Is it possible to configure mod_wsgi to create different sockets with different owners for different applications, or to chmod its socket in a different way (less secure)? Currently, I'm running Firefox Sync as the only WSGI application. Moving it to a vhost that doesn't AssignUserId could solve this problem, but would force me to change the URL (and buy an additional SSL certificate), so I'd rather not consider that option.
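    A hedged suggestion, worth checking against the mod_wsgi documentation for the installed version: newer mod_wsgi releases added a socket-user option to WSGIDaemonProcess precisely for mpm-itk setups, letting the per-vhost uid connect to the daemon socket:

        WSGIDaemonProcess sync user=djechelon group=djechelon processes=1 threads=5 socket-user=djechelon

    On versions without that option, WSGISocketPrefix can at least move the sockets to a differently owned directory, though it does not change the socket's own mode.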

    Read the article

  • Which filename encoding should I use to be compatible with Windows, Mac and Linux?

    - by jverdeyen
    I'm using a Linux server as a fileshare. These files are accessed by Windows computers with a Samba server, accessed by Macs with a netatalk server (afpd), and also through ssh and sftp for Windows, Mac and Linux systems. It seems like some of these systems care about the use of characters in filenames and some don't. There is a tool called 'convmv' to convert filenames from one encoding to another, but which one should I use? Should I set up the Samba server for a defined file encoding? Same for netatalk?
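    The safe answer is UTF-8 on the server with matching settings in each daemon. A sketch with convmv (the source encoding and share path are assumptions for the example):

        convmv -f iso-8859-1 -t utf-8 -r /srv/share            # dry run: convmv only prints by default
        convmv -f iso-8859-1 -t utf-8 -r --notest /srv/share   # actually rename

    Samba 3.x already defaults to UTF-8 for its unix charset, and netatalk can be matched with a UTF-8 volume charset, so after conversion all three access paths see the same names.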

    Read the article
