Search Results

Search found 1964 results on 79 pages for 'kill ring'.

Page 15/79 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Question about network topology and routing performance

    - by algorithms
    Hello, I am currently working on a university project about routing protocols and network performance. One of the criteria I was going to test was the effect of LAN topology, i.e. workstations arranged in a mesh, star, ring, etc. However, I am having doubts as to whether that would have any effect on routing performance at all, which would make the test pointless. Instead, I am thinking it would be better to test the topology of the routers themselves, i.e. routers arranged in a star, mesh, ring, etc. I would appreciate some feedback on this as I am rather confused. Thank you.

    Read the article

  • Toshiba laptop cd drive read causes OS to totally freeze

    - by Fujishiro
    Okay, I'll try to write an understandable summary; forgive me if I fail at that. There is a Toshiba Satellite notebook with Windows 7 x86 Professional (OEM) installed on it, and everything is otherwise fine (okay... somewhat). The problem: if you put an audio disc, or any other kind of disc, into the drive, something starts to eat the PC. Back when the owner first told me about this, he had put an audio disc into the laptop. Winamp caused the IO load, 100%. I tried taskkill, taskkill /T, PowerShell, EVERYTHING. You simply can NOT kill Winamp, or whatever becomes the blocker at that time. Even if you kill almost everything else, the laptop won't do a clean shutdown. I also tried the force switch of 'shutdown' from cmd, but no use. (So: at these times you can still use the laptop, but the blocker/Explorer/disc is greyed out as a non-responding app. You can try to kill them, but that won't work, nor can you shut down the machine.) (I also tried killing by PID, but no use. To find the highest IO I used "Select columns" in Task Manager and enabled the IO columns.) My first hunch was a problematic disc: autoplay kicks in and the drive tries and tries to read it (which still shouldn't kill the PC). I disabled autoplay and removed Winamp, tried other software, etc., and everything was OK. A few days later the owner put a disc into the machine and it started to reproduce the same symptoms, but with a totally different disc. What else to note: a virus is not an option, as the machine is protected by BitDefender (valid license) and Spybot. Thanks if you have ANY idea about this strange problem. PS: For now, the owner uses Daemon Tools + BlindWrite as an alternative for those apps which wouldn't start without the disc.

    Read the article

  • Cannot terminate process, "already terminated"

    - by felix-freiberger
    On Windows 8, I regularly get processes into a state where I can't terminate them. Skypekit.exe seems to be the process most likely to trigger the issue, but other processes can do it too. When I try to terminate these processes, I sometimes get an "access denied" message and sometimes nothing happens, but every following attempt to kill the process also results in an "access denied" message, even though I have administrative rights (and ran the task manager with them), own that process, and have the right to terminate it. "Process Hacker 2" shows a more detailed error message, stating that I couldn't terminate the process because it has already terminated. Still, the process is most definitely still there, because every task manager I tested can still see it. Process Hacker's "Terminator" is unable to kill such a process, and when running the "Close the process' handles" tactic, Process Hacker gets stuck itself, leaving its windows "not responding". In that state, other task managers are in turn unable to kill Process Hacker. The only way I have found to actually end these processes is to shut down (which works without any problems). Why is this happening? How can I kill these processes?

    Read the article

  • Issues with signal handling [closed]

    - by user34790
    I am trying to actually study the signal handling behavior in multiprocess system. I have a system where there are three signal generating processes generating signals of type SIGUSR1 and SIGUSR1. I have two handler processes that handle a particular type of signal. I have another monitoring process that also receives the signals and then does its work. I have a certain issue. Whenever my signal handling processes generate a signal of a particular type, it is sent to the process group so it is received by the signal handling processes as well as the monitoring processes. Whenever the signal handlers of monitoring and signal handling processes are called, I have printed to indicate the signal handling. I was expecting a uniform series of calls for the signal handlers of the monitoring and handling processes. However, looking at the output I could see like at the beginning the monitoring and signal handling processes's signal handlers are called uniformly. However, after I could see like signal handler processes handlers being called in a burst followed by the signal handler of monitoring process being called in a burst. Here is my code and output #include <iostream> #include <sys/types.h> #include <sys/wait.h> #include <sys/time.h> #include <signal.h> #include <cstdio> #include <stdlib.h> #include <sys/ipc.h> #include <sys/shm.h> #define NUM_SENDER_PROCESSES 3 #define NUM_HANDLER_PROCESSES 4 #define NUM_SIGNAL_REPORT 10 #define MAX_SIGNAL_COUNT 100000 using namespace std; volatile int *usrsig1_handler_count; volatile int *usrsig2_handler_count; volatile int *usrsig1_sender_count; volatile int *usrsig2_sender_count; volatile int *lock_1; volatile int *lock_2; volatile int *lock_3; volatile int *lock_4; volatile int *lock_5; volatile int *lock_6; //Used only by the monitoring process volatile int monitor_count; volatile int usrsig1_monitor_count; volatile int usrsig2_monitor_count; double time_1[NUM_SIGNAL_REPORT]; double time_2[NUM_SIGNAL_REPORT]; //Used only by the main process int total_signal_count; //For shared memory int shmid; const int shareSize = sizeof(int) * (10); double timestamp() { struct timeval tp; gettimeofday(&tp, NULL); return (double)tp.tv_sec + tp.tv_usec / 1000000.; } pid_t senders[NUM_SENDER_PROCESSES]; pid_t handlers[NUM_HANDLER_PROCESSES]; pid_t reporter; void signal_catcher_1(int); void signal_catcher_2(int); void signal_catcher_int(int); void signal_catcher_monitor(int); void signal_catcher_main(int); void terminate_processes() { //Kill the child processes int status; cout << "Time up terminating the child processes" << endl; for(int i=0; i<NUM_SENDER_PROCESSES; i++) { kill(senders[i],SIGKILL); } for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i],SIGKILL); } kill(reporter,SIGKILL); //Wait for the child processes to finish for(int i=0; i<NUM_SENDER_PROCESSES; i++) { waitpid(senders[i], &status, 0); } for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { waitpid(handlers[i], &status, 0); } waitpid(reporter, &status, 0); } int main(int argc, char *argv[]) { if(argc != 2) { cout << "Required parameters missing. 
" << endl; cout << "Option 1 = 1 which means run for 30 seconds" << endl; cout << "Option 2 = 2 which means run until 100000 signals" << endl; exit(0); } int option = atoi(argv[1]); pid_t pid; if(option == 2) { if(signal(SIGUSR1, signal_catcher_main) == SIG_ERR) { perror("1"); exit(1); } if(signal(SIGUSR2, signal_catcher_main) == SIG_ERR) { perror("2"); exit(1); } } else { if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); } if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); } } if(signal(SIGINT, signal_catcher_int) == SIG_ERR) { perror("3"); exit(1); } /////////////////////////////////////////////////////////////////////////////////////// ////////////////////// Initializing the shared memory ///////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////// cout << "Initializing the shared memory" << endl; if ((shmid=shmget(IPC_PRIVATE,shareSize,IPC_CREAT|0660))< 0) { perror("shmget fail"); exit(1); } usrsig1_handler_count = (int *) shmat(shmid, NULL, 0); usrsig2_handler_count = usrsig1_handler_count + 1; usrsig1_sender_count = usrsig2_handler_count + 1; usrsig2_sender_count = usrsig1_sender_count + 1; lock_1 = usrsig2_sender_count + 1; lock_2 = lock_1 + 1; lock_3 = lock_2 + 1; lock_4 = lock_3 + 1; lock_5 = lock_4 + 1; lock_6 = lock_5 + 1; //Initialize them to be zero *usrsig1_handler_count = 0; *usrsig2_handler_count = 0; *usrsig1_sender_count = 0; *usrsig2_sender_count = 0; *lock_1 = 0; *lock_2 = 0; *lock_3 = 0; *lock_4 = 0; *lock_5 = 0; *lock_6 = 0; cout << "End of initializing the shared memory" << endl; ///////////////////////////////////////////////////////////////////////////////////////////// /////////////////// End of initializing the shared memory /////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////// /////////////////////////////Registering the signal handlers/////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////// cout << "Registering the signal handlers" << endl; for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { if((pid = fork()) == 0) { if(i%2 == 0) { struct sigaction action; action.sa_handler = signal_catcher_1; sigset_t block_mask; action.sa_flags = 0; sigaction(SIGUSR1,&action,NULL); if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); } } else { if(signal(SIGUSR1 ,SIG_IGN) == SIG_ERR) { perror("1"); exit(1); } struct sigaction action; action.sa_handler = signal_catcher_2; action.sa_flags = 0; sigaction(SIGUSR2,&action,NULL); } if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); } while(true) { pause(); } exit(0); } else { //cout << "Registerd the handler " << pid << endl; handlers[i] = pid; } } cout << "End of registering the signal handlers" << endl; ///////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////End of registering the signal handlers ////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////////////////////////////// ///////////////////////////Registering the monitoring process ////////////////////////////////////// 
//////////////////////////////////////////////////////////////////////////////////////////////////// cout << "Registering the monitoring process" << endl; if((pid = fork()) == 0) { struct sigaction action; action.sa_handler = signal_catcher_monitor; sigemptyset(&action.sa_mask); sigset_t block_mask; sigemptyset(&block_mask); sigaddset(&block_mask,SIGUSR1); sigaddset(&block_mask,SIGUSR2); action.sa_flags = 0; action.sa_mask = block_mask; sigaction(SIGUSR1,&action,NULL); sigaction(SIGUSR2,&action,NULL); if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); } while(true) { pause(); } exit(0); } else { cout << "Monitor's pid is " << pid << endl; reporter = pid; } cout << "End of registering the monitoring process" << endl; ///////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////End of registering the monitoring process//////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// //Sleep to make sure that the monitor and handler processes are well initialized and ready to handle signals sleep(5); ////////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////Registering the signal generators/////////////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// cout << "Registering the signal generators" << endl; for(int i=0; i<NUM_SENDER_PROCESSES; i++) { if((pid = fork()) == 0) { if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); } if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); } if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); } srand(i); while(true) { int signal_id = rand()%2 + 1; if(signal_id == 1) { killpg(getpgid(getpid()), SIGUSR1); while(__sync_lock_test_and_set(lock_4,1) != 0) { } (*usrsig1_sender_count)++; *lock_4 = 0; } else { killpg(getpgid(getpid()), SIGUSR2); while(__sync_lock_test_and_set(lock_5,1) != 0) { } (*usrsig2_sender_count)++; *lock_5=0; } int r = rand()%10 + 1; double s = (double)r/100; sleep(s); } exit(0); } else { //cout << "Registered the sender " << pid << endl; senders[i] = pid; } } //cout << "End of registering the signal generators" << endl; ///////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////End of registering the signal generators/////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// //Either sleep for 30 seconds and terminate the program or if the number of signals generated reaches 10000, terminate the program if(option = 1) { sleep(90); terminate_processes(); } else { while(true) { if(total_signal_count >= MAX_SIGNAL_COUNT) { terminate_processes(); } else { sleep(0.001); } } } } void signal_catcher_1(int the_sig) { while(__sync_lock_test_and_set(lock_1,1) != 0) { } (*usrsig1_handler_count) = (*usrsig1_handler_count) + 1; cout << "Signal Handler 1 " << *usrsig1_handler_count << endl; __sync_lock_release(lock_1); } void signal_catcher_2(int the_sig) { while(__sync_lock_test_and_set(lock_2,1) != 0) { } (*usrsig2_handler_count) = (*usrsig2_handler_count) + 1; __sync_lock_release(lock_2); } void signal_catcher_main(int the_sig) { while(__sync_lock_test_and_set(lock_6,1) != 0) { } total_signal_count++; *lock_6 = 0; } void signal_catcher_int(int the_sig) { for(int i=0; 
i<NUM_SENDER_PROCESSES; i++) { kill(senders[i],SIGKILL); } for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i],SIGKILL); } kill(reporter,SIGKILL); exit(3); } void signal_catcher_monitor(int the_sig) { cout << "Monitoring process " << *usrsig1_handler_count << endl; } Here is the initial segment of output Monitoring process 0 Monitoring process 0 Monitoring process 0 Monitoring process 0 Signal Handler 1 1 Monitoring process 2 Signal Handler 1 2 Signal Handler 1 3 Signal Handler 1 4 Monitoring process 4 Monitoring process Signal Handler 1 6 Signal Handler 1 7 Monitoring process 7 Monitoring process 8 Monitoring process 8 Signal Handler 1 9 Monitoring process 9 Monitoring process 9 Monitoring process 10 Signal Handler 1 11 Monitoring process 11 Monitoring process 12 Signal Handler 1 13 Signal Handler 1 14 Signal Handler 1 15 Signal Handler 1 16 Signal Handler 1 17 Signal Handler 1 18 Monitoring process 19 Signal Handler 1 20 Monitoring process 20 Signal Handler 1 21 Monitoring process 21 Monitoring process 21 Monitoring process 22 Monitoring process 22 Monitoring process 23 Signal Handler 1 24 Signal Handler 1 25 Monitoring process 25 Signal Handler 1 27 Signal Handler 1 28 Signal Handler 1 29 Here is the segment when the signal handler processes signal handlers are called in a burst Signal Handler 1 456 Signal Handler 1 457 Signal Handler 1 458 Signal Handler 1 459 Signal Handler 1 460 Signal Handler 1 461 Signal Handler 1 462 Signal Handler 1 463 Signal Handler 1 464 Signal Handler 1 465 Signal Handler 1 466 Signal Handler 1 467 Signal Handler 1 468 Signal Handler 1 469 Signal Handler 1 470 Signal Handler 1 471 Signal Handler 1 472 Signal Handler 1 473 Signal Handler 1 474 Signal Handler 1 475 Signal Handler 1 476 Signal Handler 1 477 Signal Handler 1 478 Signal Handler 1 479 Signal Handler 1 480 Signal Handler 1 481 Signal Handler 1 482 Signal Handler 1 483 Signal Handler 1 484 Signal Handler 1 485 Signal Handler 1 486 Signal Handler 1 487 Signal Handler 1 488 Signal Handler 1 489 Signal Handler 1 490 Signal Handler 1 491 Signal Handler 1 492 Signal Handler 1 493 Signal Handler 1 494 Signal Handler 1 495 Signal Handler 1 496 Signal Handler 1 497 Signal Handler 1 498 Signal Handler 1 499 Signal Handler 1 500 Signal Handler 1 501 Signal Handler 1 502 Signal Handler 1 503 Signal Handler 1 504 Signal Handler 1 505 Signal Handler 1 506 Here is the segment when the monitoring processes signal handlers are called in a burst Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 
    Monitoring process 140
    Why isn't the interleaving uniform after the initial segment? Why are the handlers called in bursts?

    Read the article

  • Linux C: "Interactive session" with separate read and write named pipes?

    - by ~sd-imi
    Hi all, I am trying to work with "Introduction to Interprocess Communication Using Named Pipes - Full-Duplex Communication Using Named Pipes", http://developers.sun.com/solaris/articles/named_pipes.html#5 ; in particular fd_server.c (included below for reference) Here is my info and compile line: :~$ cat /etc/issue Ubuntu 10.04 LTS \n \l :~$ gcc --version gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3 :~$ gcc fd_server.c -o fd_server fd_server.c creates two named pipes, one for reading and one for writing. What one can do, is: in one terminal, run the server and read (through cat) its write pipe: :~$ ./fd_server & 2/dev/null [1] 11354 :~$ cat /tmp/np2 and in another, write (using echo) to server's read pipe: :~$ echo "heeellloooo" /tmp/np1 going back to first terminal, one can see: :~$ cat /tmp/np2 HEEELLLOOOO 0[1]+ Exit 13 ./fd_server 2 /dev/null What I would like to do, is make sort of a "interactive" (or "shell"-like) session; that is, the server is run as usual, but instead of running "cat" and "echo", I'd like to use something akin to screen. What I mean by that, is that screen can be called like screen /dev/ttyS0 38400, and then it makes a sort of a interactive session, where what is typed in terminal is passed to /dev/ttyS0, and its response is written to terminal. Now, of course, I cannot use screen, because in my case the program has two separate nodes, and as far as I can tell, screen can refer to only one. How would one go about to achieve this sort of "interactive" session in this context (with two separate read/write pipes)? Thanks, Cheers! Code below: #include <stdio.h> #include <errno.h> #include <ctype.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> //#include <fullduplex.h> /* For name of the named-pipe */ #define NP1 "/tmp/np1" #define NP2 "/tmp/np2" #define MAX_BUF_SIZE 255 #include <stdlib.h> //exit #include <string.h> //strlen int main(int argc, char *argv[]) { int rdfd, wrfd, ret_val, count, numread; char buf[MAX_BUF_SIZE]; /* Create the first named - pipe */ ret_val = mkfifo(NP1, 0666); if ((ret_val == -1) && (errno != EEXIST)) { perror("Error creating the named pipe"); exit (1); } ret_val = mkfifo(NP2, 0666); if ((ret_val == -1) && (errno != EEXIST)) { perror("Error creating the named pipe"); exit (1); } /* Open the first named pipe for reading */ rdfd = open(NP1, O_RDONLY); /* Open the second named pipe for writing */ wrfd = open(NP2, O_WRONLY); /* Read from the first pipe */ numread = read(rdfd, buf, MAX_BUF_SIZE); buf[numread] = '0'; fprintf(stderr, "Full Duplex Server : Read From the pipe : %sn", buf); /* Convert to the string to upper case */ count = 0; while (count < numread) { buf[count] = toupper(buf[count]); count++; } /* * Write the converted string back to the second * pipe */ write(wrfd, buf, strlen(buf)); } Edit: Right, just to clarify - it seems I found a document discussing something very similar, it is http://en.wikibooks.org/wiki/Serial_Programming/Serial_Linux#Configuration_with_stty - a modification of the script there ("For example, the following script configures the device and starts a background process for copying all received data from the serial device to standard output...") for the above program is below: # stty raw # ( ./fd_server 2>/dev/null; )& bgPidS=$! ( cat < /tmp/np2 ; )& bgPid=$! # Read commands from user, send them to device echo $(kill -0 $bgPidS 2>/dev/null ; echo $?) 
while [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] && read cmd; do # redirect debug msgs to stderr, as here we're redirected to /tmp/np1 echo "$? - $bgPidS - $bgPid" >&2 echo "$cmd" echo -e "\nproc: $(kill -0 $bgPidS 2>/dev/null ; echo $?)" >&2 done >/tmp/np1 echo OUT # Terminate background read process - if they still exist if [ "$(kill -0 $bgPid 2>/dev/null ; echo $?)" -eq "0" ] ; then kill $bgPid fi if [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] ; then kill $bgPidS fi # stty cooked So, saving the script as say starter.sh and calling it, results with the following session: $ ./starter.sh 0 i'm typing here and pressing [enter] at end 0 - 13496 - 13497 I'M TYPING HERE AND PRESSING [ENTER] AT END 0~?.N=?(?~? ?????}????@??????~? [garble] proc: 0 OUT which is what I'd call for "interactive session" (ignoring the debug statements) - server waits for me to enter a command; it gives its output after it receives a command (and as in this case it exits after first command, so does the starter script as well). Except that, I'd like to not have buffered input, but sent character by character (meaning the above session should exit after first key press, and print out a single letter only - which is what I expected stty raw would help with, but it doesn't: it just kills reaction to both Enter and Ctrl-C :) ) I was just wandering if there already is an existing command (akin to screen in respect to serial devices, I guess) that would accept two such named pipes as arguments, and establish a "terminal" or "shell" like session through them; or would I have to use scripts as above and/or program own 'client' that will behave as a terminal..

    Read the article

  • Querying Networking Statistics: dlstat(1M)

    - by user12612042
    Oracle Solaris 11 took another big leap forward in networking technologies, providing a reliable, secure and scalable infrastructure to meet the growing needs of today's datacenter implementations. Oracle Solaris 11 introduced a new and powerful network stack architecture, also known as Project Crossbow. From Solaris 11 onwards, we introduced a command line tool, dlstat(1M), to query network statistics. dlstat (for datalink statistics) is the statistics-querying counterpart of dladm(1M), the datalink administration tool. The tool is very easy to get started with: just type dlstat at a shell prompt on Solaris 11 (or later). For example:
        # dlstat
        LINK    IPKTS   RBYTES    OPKTS   OBYTES
        net0  834.11K  145.91M  575.19K  104.24M
        net1    7.87K    2.04M        0        0
    In this example, the system has two datalinks, net0 and net1. The output columns denote input packets/bytes as well as output packets/bytes. The numbers are abbreviated in xxx.xxUnit format. However, one can get the actual counts by simply running dlstat -u R (R for raw):
        # dlstat -u R
        LINK   IPKTS     RBYTES   OPKTS     OBYTES
        net0  834271  145931244  575246  104242934
        net1    7869    2036958       0          0
    In addition, dlstat also supports various subcommands:
        dlstat help
        The following subcommands are supported:
        Stats : show-aggr show-ether show-link show-phys show-bridge
        For more info, run: dlstat help {default|}
    I will only describe a couple of interesting subcommands/options here; for a comprehensive description of all the dlstat subcommands, refer to dlstat's official manual. For NICs that support multiple rings (e.g. ixgbe), dlstat show-phys -r allows us to query per-Rx-ring statistics. For example:
        dlstat show-phys -r net4
        LINK  TYPE  INDEX  IPKTS  RBYTES
        net4    rx      0      0       0
        net4    rx      1      0       0
        net4    rx      2      0       0
        net4    rx      3      0       0
        net4    rx      4      0       0
        net4    rx      5      0       0
        net4    rx      6      0       0
        net4    rx      7      0       0
    In this case, net4 is just a vanity name for an ixgbe datalink. This view is especially useful if one wants to see how the network traffic is spread across all the available rings. Furthermore, any of the dlstat commands can be run with the -i option to periodically query and display stats. For example, running dlstat show-phys -r net4 -i 5 will emit per-Rx-ring stats every 5 seconds, which is especially useful while analyzing a live system. Similarly, dlstat show-phys -t can be used to query per-Tx-ring stats. -r and -t can also be combined, as dlstat show-phys -rt, to query both Rx and Tx stats at the same time. Finally, there is also a quick way to dump ALL the stats: just run dlstat -A. You probably want to redirect this output to a file because you are going to get a whole load of stats :-).

    Read the article

  • That Physics of Coffee Rings [Video]

    - by Jason Fitzpatrick
    The rings left behind by coffee cups are distinctly uniform in their distribution–the stain is always around the edge. This video from the University of Pennsylvania’s Physics Department demonstrates why. Check out the above video to see the physics behind the ring-shaped stains and how altering the shape of the particulate in the liquid completely changes the shape of the stain. The Coffee Ring Effect [via Neatorama]

    Read the article

  • RHEL Cluster FAIL after changing time on system

    - by Eugene S
    I've encountered a strange issue. I had to change the time on my Linux RHEL cluster system. I've done it using the following command from the root user: date +%T -s "10:13:13" After doing this, some message appeared relating to <emerg> #1: Quorum Dissolved however I didn't capture it completely. In order to investigate the issue I looked at /var/log/messages and I've discovered the following: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering GATHER state from 0. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Creating commit token because I am the rep. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Storing new sequence id for ring 354 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering COMMIT state. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering RECOVERY state. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] position [0] member 192.168.1.49: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] previous ring seq 848 rep 192.168.1.49 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] aru 61 high delivered 61 received flag 1 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Did not need to originate any messages in recovery. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Sending initial ORF token Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] CLM CONFIGURATION CHANGE Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] New Configuration: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.49) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Left: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.51) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Joined: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CMAN ] quorum lost, blocking activity Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] CLM CONFIGURATION CHANGE Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] New Configuration: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.49) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Left: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Joined: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [SYNC ] This node is within the primary component and will provide service. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering OPERATIONAL state. Mar 22 16:40:42 hsmsc50sfe1a kernel: dlm: closing connection to node 2 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] got nodejoin message 192.168.1.49 Mar 22 16:40:42 hsmsc50sfe1a clurgmgrd[25809]: <emerg> #1: Quorum Dissolved Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CPG ] got joinlist message from node 1 Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Cluster is not quorate. Refusing connection. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing connect: Connection refused Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Invalid descriptor specified (-21). Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Someone may be attempting something evil. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing disconnect: Invalid request descriptor Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering GATHER state from 9. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Creating commit token because I am the rep. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Storing new sequence id for ring 358 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering COMMIT state. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering RECOVERY state. 
Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] position [0] member 192.168.1.49: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] previous ring seq 852 rep 192.168.1.49 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] aru f high delivered f received flag 1 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] position [1] member 192.168.1.51: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] previous ring seq 852 rep 192.168.1.51 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] aru f high delivered f received flag 1 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Did not need to originate any messages in recovery. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Sending initial ORF token Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] CLM CONFIGURATION CHANGE Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] New Configuration: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.49) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Left: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Joined: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] CLM CONFIGURATION CHANGE Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] New Configuration: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.49) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.51) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Left: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Joined: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.51) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [SYNC ] This node is within the primary component and will provide service. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering OPERATIONAL state. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [MAIN ] Node chb_sfe2a not joined to cman because it has existing state Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] got nodejoin message 192.168.1.49 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] got nodejoin message 192.168.1.51 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CPG ] got joinlist message from node 1 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CPG ] got joinlist message from node 2 Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Cluster is not quorate. Refusing connection. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing connect: Connection refused Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Invalid descriptor specified (-111). Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Someone may be attempting something evil. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing get: Invalid request descriptor Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Invalid descriptor specified (-21). Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Someone may be attempting something evil. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing disconnect: Invalid request descriptor How could this be related to the time change procedure I performed?

    Read the article

  • Emacs editor copy and deletion

    - by Null pointer
    I am a huge fan of Emacs, but even after a year of practising on Emacs I am unable to solve these three annoying issues. Please help me!
    1. Whenever I want to copy content from Emacs into something else, such as a website textarea, I have to use the GUI copy button because Ctrl+W doesn't work. Is there any way to do this from the Emacs command line?
    2. Whenever I delete something using Ctrl+Shift+SPC or Ctrl+K etc., I don't want it to be stored in the kill ring. How do I do that? (I know Ctrl+D does this, but it deletes only one character at a time.)
    3. Whenever I select text with the mouse and press Backspace, the text goes into the kill ring (which I want to change, as mentioned), but the same doesn't happen when I select text with Ctrl+SPC (set mark) and then Ctrl+F/Ctrl+B etc.
    Thanks in advance!

    Read the article

  • What are best practices for testing programs with stochastic behavior?

    - by John Doucette
    Doing R&D work, I often find myself writing programs that have some large degree of randomness in their behavior. For example, when I work in Genetic Programming, I often write programs that generate and execute arbitrary random source code. A problem with testing such code is that bugs are often intermittent and can be very hard to reproduce. This goes beyond just setting a random seed to the same value and starting execution over. For instance, code might read a message from the kernel ring buffer and then make conditional jumps on the message contents. Naturally, the ring buffer's state will have changed when one later attempts to reproduce the issue. Even though this behavior is a feature, it can trigger other code in unexpected ways, and thus often reveals bugs that unit tests (or human testers) don't find. Are there established best practices for testing systems of this sort? If so, some references would be very helpful. If not, any other suggestions are welcome!
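    The baseline the question mentions (pinning the random seed) only yields reproducible runs once the external inputs are pinned as well. Purely as an illustration of that idea, here is a minimal Java sketch; the RandomProgramGenerator class and its message source are hypothetical stand-ins, not code from the question:
        import java.util.Random;

        // Hypothetical generator whose only sources of nondeterminism are injected.
        class RandomProgramGenerator {
            private final Random rng;
            private final Iterable<String> messages; // stands in for external input such as a kernel ring buffer

            RandomProgramGenerator(Random rng, Iterable<String> messages) {
                this.rng = rng;
                this.messages = messages;
            }

            String generate() {
                StringBuilder code = new StringBuilder();
                for (String msg : messages) {
                    // Branch on the external input, as the question describes.
                    code.append(msg.hashCode() % 2 == 0 ? "op_a();" : "op_b();");
                    code.append("x=").append(rng.nextInt(100)).append(";");
                }
                return code.toString();
            }
        }

        public class ReproducibilityTest {
            public static void main(String[] args) {
                Iterable<String> replayedLog = java.util.Arrays.asList("msg one", "msg two");
                String first = new RandomProgramGenerator(new Random(42L), replayedLog).generate();
                String second = new RandomProgramGenerator(new Random(42L), replayedLog).generate();
                // With both the seed and the external input pinned, the two runs are identical.
                System.out.println(first.equals(second)); // expected: true
            }
        }
    The point of the sketch is simply that replayability requires capturing the external input alongside the seed; it does not address the harder cases the question raises.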

    Read the article

  • How do I turn off all the password prompts?

    - by Barkerto
    I've been using Ubuntu 12.04 LTS since release and am trying to figure out a couple of things about all of these password and key-ring prompts that I've just been living with for a while. Ever since install, it seems that every time I boot up my computer and want to do anything (i.e. use the internet, use an internet browser, install something, delete something, pick my nose) I'm always prompted for either a normal password entry or a key-ring password entry. Is there any way to turn off all of this "security" and tell my Ubuntu that it can trust what I'm doing and go take a shower? Thank you in advance, barkerto

    Read the article

  • Dual 7950, freezes during startup

    - by Hiro
    I have two identical video cards right now. Xubuntu will boot up no problem if I only have one card installed. If I install the 2nd card, it will just hang midway through the loading screen. Regardless of which card I use and which position on the motherboard, it will freeze if I have both installed. I've tried installing the proprietary drivers, both fglrx and fglrx-updates, but it still freezes in the same spot. I uninstalled those and went back to the default drivers. The only way I can get a terminal is by booting into failsafe mode. Booting 13.10 i386 from a USB stick gives me some errors related to what I think is the 2nd video card:
        [drm:r600_ring_test] ERROR radeon: ring 1 test failed (scratch(0x850C)=0xCAFEDEAD)
        [drm:r600_dma_ring_test] ERROR radeon: ring 3 test failed (0x02FFFF6B)
        radeon 0000:0200.2 disabling GPU acceleration

    Read the article

  • Why a SIGHUP signal to httpd kills the tomcat process?

    - by Geo
    I have a server with a tomcat process binding to port 80 and an httpd process binding to port 5000. For some reason, every time any process sends a SIGHUP signal to the httpd process, my tomcat process disappears without an error or anything. I fixed the issue on the server by adding an explicit ServerName directive to httpd.conf, but I still don't understand why the SIGHUP to httpd killed the tomcat process.
    NOTE 1: I replicated the kill signal with the following commands. First find out what the httpd pid is:
        cat /etc/httpd/run/httpd.pid
        4056
    Then kill with a SIGHUP signal:
        kill -s SIGHUP 4056
    NOTE 2: We troubleshot the issue and found that logrotate, running every morning at 4am, was sending a SIGHUP signal to release the logs so they could be rotated, thus killing tomcat as well.

    Read the article

  • How to close all background processes in unix?

    - by Gabi Purcaru
    I have something like:
        cd project && python manage.py runserver &
        cd utilities && ./coffee_auto_compiler.py
    And I want both of them to close on Ctrl-C (or some other command). How can I accomplish that?
    EDIT: I tried using jobs -x kill and kill `jobs -p`, but it doesn't seem to kill what I need. Here is what I mean:
        moon 8119 0.0 0.0  7556  3008 pts/0 S 13:17 0:00 /bin/bash
        moon 8120 6.8 0.4 24568 18928 pts/0 S 13:17 0:00 python manage.py runserver
    jobs -p gives me just process 8119, but I also need to close 8120, since it's the process that the first command opened. If it helps, the commands are actually in a Makefile, and I want it to run two daemons at the same time (and somehow close them at the same time). And yes, I'm using Ubuntu, with bash.

    Read the article

  • On mobile is there a reason why processes are often short lived and must persist their state explicitly?

    - by Alexandre Jasmin
    Most mobile platforms (such as Android, iOS, Windows Phone 7 and, I believe, the new WinRT) can kill inactive application processes under memory pressure. To prevent this from affecting the user experience, applications are expected to save and restore their state as their process is killed and restarted. Having application processes killed in this way makes the developer's job harder. On various occasions I've seen a mobile app that would:
    - Return to the welcome screen each time I switch back to it.
    - Crash when I switch back to it (possibly accessing some state that no longer exists after the process was killed).
    - Misbehave when I switch back to it (sometimes requiring a restart or a task killer to fix).
    - Otherwise misbehave in some hard-to-reproduce way (e.g. an Android service killed and restarted at the wrong time).
    I don't really understand why these mobile operating systems are designed to kill tasks in this way, especially since it makes application development more difficult and error prone. Desktop operating systems don't kill processes like that; they swap unused pages of memory out to mass storage. Is there a reason why the same approach isn't used on mobile? Mobile hardware is only a few years behind PC hardware in terms of performance. I'm sure there are very good reasons why mobile operating systems are designed this way. If you can point me to a paper or blog post that explains these reasons, or can give me some insight, I'd very much appreciate it.
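    For concreteness, here is a minimal sketch of the save/restore contract described above, using Android's Activity callbacks; the layout, view id and bundle key are hypothetical placeholders, not taken from the question:
        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.EditText;

        public class DraftActivity extends Activity {
            private EditText draftField; // some UI state we want to survive the process being killed

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.draft);                        // hypothetical layout
                draftField = (EditText) findViewById(R.id.draft_text); // hypothetical view id
                if (savedInstanceState != null) {
                    // The process may have been killed; restore what we saved earlier.
                    draftField.setText(savedInstanceState.getString("draft_text"));
                }
            }

            @Override
            protected void onSaveInstanceState(Bundle outState) {
                super.onSaveInstanceState(outState);
                // Called before the activity may be destroyed; persist the state we need.
                outState.putString("draft_text", draftField.getText().toString());
            }
        }
    An app that skips the onSaveInstanceState half of this contract is exactly the kind that returns to the welcome screen, or crashes, when switched back to after its process was reclaimed.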

    Read the article

  • Detecting walls or floors in pygame

    - by Serial
    I am trying to make bullets bounce off walls, but I can't figure out how to correctly do the collision detection. What I am currently doing is iterating through all the solid blocks, and if the bullet hits the bottom, top or sides, its vector is adjusted accordingly. However, sometimes when I shoot, the bullet doesn't bounce; I think it's when I shoot at a border between two blocks. Here is the update method for my Bullet class:
        def update(self, dt):
            if self.can_bounce:
                # if the bullet hasn't bounced, find its vector using the mouse-click pos and player pos
                speed = -10.
                range = 200
                distance = [self.mouse_x - self.player[0], self.mouse_y - self.player[1]]
                norm = math.sqrt(distance[0] ** 2 + distance[1] ** 2)
                direction = [distance[0] / norm, distance[1] / norm]
                bullet_vector = [direction[0] * speed, direction[1] * speed]
                self.dx = bullet_vector[0]
                self.dy = bullet_vector[1]
            # check each block for collision
            for block in self.game.solid_blocks:
                last = self.rect.copy()
                if self.rect.colliderect(block):
                    # each test checks whether the bullet hit the top, bottom, left or right side of the block it is colliding with
                    topcheck = self.rect.top < block.rect.bottom and self.rect.top > block.rect.top
                    bottomcheck = self.rect.bottom > block.rect.top and self.rect.bottom < block.rect.bottom
                    rightcheck = self.rect.right > block.rect.left and self.rect.right < block.rect.right
                    leftcheck = self.rect.left < block.rect.right and self.rect.left > block.rect.left
                    if self.can_bounce:
                        if topcheck:
                            self.rect = last
                            self.dy *= -1
                            self.can_bounce = False
                            print "top"
                        if bottomcheck:
                            self.rect = last
                            self.dy *= -1  # bottom check
                            self.can_bounce = False
                            print "bottom"
                        if rightcheck:
                            self.rect = last
                            self.dx *= -1  # right check
                            self.can_bounce = False
                            print "right"
                        if leftcheck:
                            self.rect = last
                            self.dx *= -1  # left check
                            self.can_bounce = False
                            print "left"
                    else:
                        # if it has already bounced and is colliding again, kill it
                        self.kill()
            for enemy in self.game.enemies_list:
                if self.rect.colliderect(enemy):
                    self.kill()
            # update position
            self.rect.x -= self.dx
            self.rect.y -= self.dy
    This definitely isn't the best way to do it, but I can't think of another way. If anyone has done this or can help, that would be awesome!

    Read the article

  • Glassfish v2 alternatedocroot - will DAS sync it?

    - by ring bearer
    Using Sun GlassFish Enterprise Server v2.1.1, I am using "alternatedocroot" via sun-web.xml for my web application, to separate static content from the actual deployable code (EAR/WAR). What I have is a cluster of two server instances distributed across two physical hosts, HOST1 and HOST2. "alternatedocroot" points to /data/static-content/ on both HOST1 and HOST2. Would the DAS (Domain Administration Server) take care of syncing /data/static-content between HOST1 and HOST2 if I use the syncinstances=true option while starting up the cluster? Thanks!

    Read the article

  • java time check API

    - by ring bearer
    I am sure this has been done 1000 times in 1000 different places. The question is: I want to know if there is a better/standard/faster way to check whether the current "time" is between two time values given in hh:mm:ss format. For example, my big business logic should not run between 18:00:00 and 18:30:00. So here is what I had in mind:
        public static boolean isCurrentTimeBetween(String starthhmmss, String endhhmmss) throws ParseException {
            DateFormat hhmmssFormat = new SimpleDateFormat("yyyyMMddhh:mm:ss");
            Date now = new Date();
            String yyyMMdd = hhmmssFormat.format(now).substring(0, 8);
            return (hhmmssFormat.parse(yyyMMdd + starthhmmss).before(now)
                    && hhmmssFormat.parse(yyyMMdd + endhhmmss).after(now));
        }
    Example test case:
        String doNotRunBetween = "18:00:00,18:30:00"; // read from props file
        String[] hhmmss = doNotRunBetween.split(",");
        if (isCurrentTimeBetween(hhmmss[0], hhmmss[1])) {
            System.out.println("NOT OK TO RUN");
        } else {
            System.out.println("OK TO RUN");
        }
    What I am looking for is code that is better:
        - in performance
        - in looks
        - in correctness
    What I am not looking for:
        - third-party libraries
        - exception handling debate
        - variable naming conventions
        - method modifier issues
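    For comparison only (this is not code from the question), here is a minimal sketch of the same check done by converting each hh:mm:ss value to a seconds-of-day integer, which avoids the format/parse round-trip; like the original, it assumes the window does not cross midnight:
        import java.util.Calendar;

        public class TimeWindowCheck {
            // Convert "hh:mm:ss" to seconds since midnight.
            private static int toSecondsOfDay(String hhmmss) {
                String[] p = hhmmss.split(":");
                return Integer.parseInt(p[0]) * 3600 + Integer.parseInt(p[1]) * 60 + Integer.parseInt(p[2]);
            }

            public static boolean isCurrentTimeBetween(String starthhmmss, String endhhmmss) {
                Calendar now = Calendar.getInstance();
                int nowSec = now.get(Calendar.HOUR_OF_DAY) * 3600
                           + now.get(Calendar.MINUTE) * 60
                           + now.get(Calendar.SECOND);
                return nowSec > toSecondsOfDay(starthhmmss) && nowSec < toSecondsOfDay(endhhmmss);
            }

            public static void main(String[] args) {
                System.out.println(isCurrentTimeBetween("18:00:00", "18:30:00") ? "NOT OK TO RUN" : "OK TO RUN");
            }
        }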

    Read the article

  • java Properties - to expose or not to expose?

    - by ring bearer
    This might be an age-old problem, and I am sure everyone has their own way. Suppose I have some properties defined such as:
        secret.user.id=user
        secret.password=password
        website.url=http://stackoverflow.com
    Suppose I have 100 different classes and places where I need to use these properties. Which one is better?
    (1) I create a Util class that will load all properties and serve them using a key constant. Util is a singleton that loads all properties and serves them up on a getInstance() call:
        Util myUtil = Util.getInstance();
        String user = myUtil.getConfigByKey(Constants.SECRET_USER_ID);
        String password = myUtil.getConfigByKey(Constants.SECRET_PASSWORD);
        // getConfigByKey() in turn invokes properties.get(..)
        doSomething(user, password);
    So wherever I need these properties, I can do the steps above.
    (2) I create a meaningful class to represent these properties, say ApplicationConfig, and provide getters for specific properties. The above code would then look like:
        ApplicationConfig config = ApplicationConfig.getInstance();
        doSomething(config.getSecretUserId(), config.getPassword());
        // ApplicationConfig would have instance variables that are initialized during
        // getInstance() after loading from the properties file.
    Note: the properties file as such will have only minor changes in the future. My personal choice is (2) - let me hear some comments?
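    A minimal sketch of what option (2) could look like; the file name app.properties and the loading details are assumptions for illustration, not part of the question:
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        public final class ApplicationConfig {
            private static final ApplicationConfig INSTANCE = new ApplicationConfig();

            private final String secretUserId;
            private final String password;
            private final String websiteUrl;

            private ApplicationConfig() {
                Properties props = new Properties();
                try (InputStream in = ApplicationConfig.class.getResourceAsStream("/app.properties")) {
                    if (in == null) {
                        throw new IllegalStateException("app.properties not found on classpath");
                    }
                    props.load(in);
                } catch (IOException e) {
                    throw new IllegalStateException("Could not load configuration", e);
                }
                // Keys taken from the properties shown in the question.
                this.secretUserId = props.getProperty("secret.user.id");
                this.password = props.getProperty("secret.password");
                this.websiteUrl = props.getProperty("website.url");
            }

            public static ApplicationConfig getInstance() {
                return INSTANCE;
            }

            public String getSecretUserId() { return secretUserId; }
            public String getPassword()     { return password; }
            public String getWebsiteUrl()   { return websiteUrl; }
        }
    One design note: with typed getters, a renamed or missing property fails in one place at load time instead of at 100 scattered call sites.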

    Read the article

  • Agile and code release

    - by ring bearer
    Do you know of any agile process that is designed for code releases? One of the main themes of agile is frequent releases, yet each company/client has their own test/approval processes that control code releases, and most of the time these slow down the pace of "frequent releases". Currently we have a proprietary, tool-based workflow: the team that needs a code promotion creates a promotion request to one of the final UAT servers; once this is complete, and once tests are done, certain customers and technical/non-technical managers need to approve; then it goes into the production deploy stage. Meanwhile there is no sprint planning meeting or anything of that sort. What code release process (which is still agile) has worked for you?

    Read the article

  • Accessing an HTTPS web service from Glassfish based web-ap

    - by ring bearer
    Hi, I'm trying to access an HTTPS-based web service URL from a web/EAR application deployed on a GlassFish application server domain. We have obtained the certificate from the vendor that exposes the HTTPS URL. What are the steps required for installing the SSL certificate in order to access the web service? (Though I know the outline, let me pretend I am a layman.) Thanks

    Read the article

  • Sell me Distributed revision control

    - by ring bearer
    I know there are 1000s of similar topics floating around. I have read at least 5 threads here on SO, but why am I still not convinced about DVCS? I have only the following questions (note that I am selfishly worried only about Java projects):
    - What is the advantage or value of committing locally? What, really? All modern IDEs allow you to keep track of your changes, and if required you can restore a particular change. They even have a feature to label your changes/versions at the IDE level.
    - What if I crash my hard drive? Where did my local repository go? (So how is that cool compared to checking in to a central repo?)
    - Working offline or on an airplane: what is the big deal? In order for me to build a release with my changes, I must eventually connect to the central repository. Until then it does not matter how I track my changes locally.
    - OK, Linus Torvalds gives his life to Git and hates everything else. Is that enough to blindly sing its praises? Linus lives in a different world compared to the offshore developers on my mid-sized project.
    Pitch me!

    Read the article

  • Java application design question

    - by ring bearer
    I have a hobby project, which is basically to maintain 'todo' tasks in the way I like. One task can be described as:
        public class TodoItem {
            private String subject;
            private Date dueBy;
            private Date startBy;
            private Priority priority;
            private String category;
            private Status status;
            private String notes;
        }
    As you can imagine, I would have 1000s of todo items at a given time.
    What is the best strategy to store a todo item (currently in an XML file) such that all the items are loaded quickly at app start-up (the application shows a kind of dashboard of all the items at start-up)?
    What is the best way to design its back-end so that it can be ported to an Android or J2ME based phone? Currently this is done using Java Swing. What should I concentrate on so that it works efficiently on a device where memory is limited?
    The application opens a form to enter a new todo task. For now, I would like to save the newly added task to my-todos.xml once the user presses the "save" button. What are the common ways to append such a change to an existing XML file? (Note that I don't want to read the whole file again and then persist.)

    Read the article

  • Will the client JVM for a web service (HTTPS) throw an SSL exception when the server has a valid certificate?

    - by ring bearer
    I have a web service deployed on Tomcat, hosted on a remote server. I have set it up such that it can be accessed only via HTTPS. For this, I generated a Certificate Signing Request (CSR) and used it to get a temporary certificate from VeriSign. My web service client is on my local machine. If I try to access the service, it throws a javax.net.ssl.SSLHandshakeException: unable to find valid certification path to requested target. If I install the certificate into the local Java keystore, the issue is resolved. My question is: if I install a valid SSL certificate from a CA on my Tomcat server, will I still get this client-side error even if I do not import the certificate into the local keystore?
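    As an illustration of the client-side trust lookup involved here, the client JVM can also be pointed at an explicit trust store instead of the JRE's default cacerts; a minimal sketch, where the trust store path, password and URL are placeholders:
        public class SecureClientBootstrap {
            public static void main(String[] args) throws Exception {
                // Tell the JVM which trust store to consult during the TLS handshake.
                System.setProperty("javax.net.ssl.trustStore", "client-truststore.jks"); // placeholder path
                System.setProperty("javax.net.ssl.trustStorePassword", "changeit");      // placeholder password

                // Any HTTPS call made after this point uses the configured trust store.
                java.net.URL url = new java.net.URL("https://example.com/service");      // placeholder endpoint
                try (java.io.InputStream in = url.openStream()) {
                    System.out.println("Handshake succeeded; " + in.available() + " bytes immediately available");
                }
            }
        }
    The handshake succeeds only when the server's certificate chains up to something in whichever trust store the client JVM consults, which is the crux of the question above.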

    Read the article
