Search Results

Search found 6355 results on 255 pages for 'unix socket'.

Page 85/255

  • aumix problem - how can I make the changes permanent?

    - by thillai-selvan
    Hi everyone. How can I make the changes in aumix permanent? I launch aumix with the command thinapplaunch aumix, raise the volume manually, then save and quit. If I launch it again with the same command (thinapplaunch aumix), all the changes I made are still there and all the volumes are full. But when I log out and log back in, the changes are gone and the volumes are no longer full. How can I make the changes permanent? Any help is much appreciated. Thanks in advance!
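
    A minimal sketch of one way to persist the levels, assuming this aumix build supports the -S (save to ~/.aumixrc) and -L (load) options and that thinapplaunch passes arguments through:

        # save the current mixer levels once, after setting them the way you want
        aumix -S                    # writes the settings to ~/.aumixrc

        # then restore them on every login, e.g. from ~/.profile or the session startup file
        if command -v aumix >/dev/null 2>&1; then
            aumix -L                # reload the saved settings
        fi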

    Read the article

  • Exit SSH from the script

    - by Kimi
    I want to exit ssh. Does the line below work:

        ssh -f -T ${USAGE_2_USER}@${USAGE_2_HOST}

    Or do I need to write it some other way? Please tell me whether I should use exit with ssh, and how.
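
    A small sketch of two common patterns, assuming ${USAGE_2_USER} and ${USAGE_2_HOST} are set as in the question; /path/to/remote_job.sh is a placeholder for whatever should run remotely:

        # option 1: give ssh the command to run; the session exits when the command finishes
        ssh -T "${USAGE_2_USER}@${USAGE_2_HOST}" '/path/to/remote_job.sh'

        # option 2: pipe the commands over stdin and end with an explicit exit;
        # ssh closes the session once stdin is exhausted
        echo '/path/to/remote_job.sh; exit' | ssh -T "${USAGE_2_USER}@${USAGE_2_HOST}"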

    Read the article

  • How to use parallel execution in a shell script?

    - by eSKay
    I have a C shell script that does something like this:

        #!/bin/csh
        gcc example.c -o ex
        gcc combine.c -o combine
        ex file1 r1        <-- 1
        ex file2 r2        <-- 2
        ex file3 r3        <-- 3
        # ... many more like the above
        combine r1 r2 r3 final
        \rm r1 r2 r3

    Is there some way I can make lines 1, 2 and 3 run in parallel instead of one after another?
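
    A minimal sketch of the usual approach, assuming the three runs are independent of each other: background them with & and let csh's wait builtin block until all background jobs have finished:

        #!/bin/csh
        gcc example.c -o ex
        gcc combine.c -o combine
        ex file1 r1 &
        ex file2 r2 &
        ex file3 r3 &
        # ... background the remaining runs the same way
        wait                        # blocks until every background job has completed
        combine r1 r2 r3 final
        \rm r1 r2 r3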

    Read the article

  • Bash: using commands as parameters (specifically cd, dirname and find)

    - by sixtyfootersdude
    This command gives this output:

        % find . -name file.xml 2> /dev/null
        ./a/d/file.xml
        %

    So does this command:

        % dirname `find . -name file.xml 2> /dev/null`
        ./a/d
        %

    So you would expect that this command:

        % cd `dirname `find . -name file.xml 2> /dev/null``

    would change the current directory to ./a/d. Strangely, this does not work. When I type cd ./a/d, the directory change works. However, I cannot figure out why the command above does not...
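
    A sketch of what is likely going on, for reference: backticks do not nest as written, because the second backtick closes the first one. Using $(...) (which nests cleanly) or escaping the inner backticks avoids the problem:

        # $(...) nests without any escaping
        cd "$(dirname "$(find . -name file.xml 2> /dev/null)")"

        # the equivalent with backticks, escaping the inner pair
        cd `dirname \`find . -name file.xml 2> /dev/null\``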

    Read the article

  • Shell script task status monitoring

    - by Bikram Agarwal
    I'm running an ANT task in the background and checking at 60-second intervals whether that task is complete or not. If it is not, a message should be displayed on screen every 60 seconds - "Deploy process is still running. $slept seconds since deploy started", where $slept is 60, 120, 180 and so on. There is a limit of 1200 seconds, after which the script shows the log via the 'ant log' command and asks the user whether to continue. If the user chooses to continue, 300 seconds are added to the time limit and the process repeats. The code that I am using for this task is:

        ant deploy &
        limit=1200
        deploy_check() {
            while [ ${slept:-0} -le $limit ]; do
                sleep 60 && slept=`expr ${slept:-0} + 60`
                if [ $$ = "`ps -o ppid= -p $!`" ]; then
                    echo "Deploy process is still running. $slept seconds since deploy started."
                else
                    wait $! && echo "Application ${New_App_Name} deployed successfully" || echo "Deployment of ${New_App_Name} failed"
                    break
                fi
            done
        }
        deploy_check
        if [ $$ = "`ps -o ppid= -p $!`" ]; then
            echo "Deploy process did not finish in $slept seconds. Here's the log."
            ant log
            echo "Do you want to kill the process? Press Ctrl+C to kill. Press Enter to continue."
            read log
            limit=`expr ${limit} + 300`
            deploy_check
        fi

    Now, the problem is that this code is not working. It looks correct to me, and yet it does not work. Can anyone point out what is wrong with it, please?
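
    Not the poster's fix, but a minimal sketch of a commonly used polling pattern for comparison, assuming the only goal is to watch a single background job: kill -0 succeeds for as long as the process is still alive.

        ant deploy &
        pid=$!
        slept=0
        while kill -0 "$pid" 2>/dev/null; do
            sleep 60
            slept=`expr $slept + 60`
            echo "Deploy process is still running. $slept seconds since deploy started."
        done
        wait "$pid" && echo "Deploy finished successfully" || echo "Deploy failed"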

    Read the article

  • iteratively creating graphs

    - by Andrei
    I have a bunch of files containing x and y coordinates, representing time and value (space-separated, but the format can be amended). For example:

        15:06:59 0.0140
        .......

    I want to create a Word file (or some equivalent) to show all these graphs. Right now I am using Excel. It is a pretty daunting task, as I have to paste the numbers into two rows for each graph, and I have many of them. Thanks
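
    One way to automate this from the shell is gnuplot, producing one image per data file that can then be dropped into a document. A minimal sketch, assuming gnuplot is installed, the files end in .dat (adjust the glob as needed), and each line is "HH:MM:SS value":

        for f in *.dat; do
            # one PNG per data file, with column 1 parsed as a time axis
            gnuplot -e "set terminal png; set output '${f%.dat}.png'; set xdata time; set timefmt '%H:%M:%S'; plot '$f' using 1:2 with lines title '$f'"
        done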

    Read the article

  • Why Does Piping Binary Text to the Screen Often Hork a Terminal?

    - by Alan Storm
    Imaginary situation: you've used mysqldump to create a backup of a MySQL database. This database has columns that are blobs, which means your "text" dump file contains both strings and binary data (binary data stored as strings?). If you cat this file to the screen

        $ cat dump.mysql

    you'll often get unexpected results. The terminal will start beeping, and after the output finishes scrolling by you'll often have garbage characters entered on your terminal as though you'd typed them; sometimes your prompts and anything you type will be garbage characters too. Why does this happen? Put another way, I think I'm looking for an overview of what's actually happening when you store binary strings in a file, when you cat that file, when the results of the cat are reported to the terminal, and any other steps I'm missing.
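
    A small illustration of the usual explanation (hedged, since the details vary by terminal): control bytes in the binary data are interpreted rather than printed, so a stray BEL rings the bell and escape sequences can switch character sets or modes, which is what leaves the display garbled. reset or stty sane puts the terminal back:

        printf '\a'          # BEL (0x07): the terminal beeps instead of printing anything
        printf '\033(0'      # ESC ( 0: switch to the VT100 line-drawing character set
        echo "hello"         # now renders as line-drawing glyphs rather than letters
        printf '\033(B'      # ESC ( B: switch back to the normal character set

        # after an accidental cat of a binary file, this usually restores sanity:
        reset                # or: stty sane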

    Read the article

  • In terminal, merging multiple folders into one.

    - by Josh Pinter
    I have a backup directory created by WDBackup (the Western Digital external HD backup utility) that contains a directory for each day that it backed up, holding the incremental contents of just what was backed up. So the hierarchy looks like this:

        20100101
            My Documents
                Letter1.doc
            My Music
                Best Songs Every
                    First Songs.mp3
                My song.mp3        # modified 20100101
        20100102
            My Documents
                Important Docs
                    Taxes.doc
            My Music
                My Song.mp3        # modified 20100102
        ...etc...

    Only what has changed is backed up, and the first backup that was ever made contains all the files selected for backup. What I'm trying to do now is incrementally copy, while keeping the folder structure, from oldest to newest, each of these dated folders into a 'merged' folder so that it overrides the older content and keeps the new stuff. As an example, using just these two example folders, the final merged folder would look like this:

        Merged
            My Documents
                Important Docs
                    Taxes.doc
                Letter1.doc
            My Music
                Best Songs Every
                    First Songs.mp3
                My Song.mp3        # modified 20100102

    Hope that makes sense. Thanks, Josh
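
    A minimal sketch of one way to do the merge, run from the dated folders' parent directory, assuming the folders are named YYYYMMDD so that lexical glob order is also date order:

        mkdir -p Merged
        for d in [0-9]*/ ; do         # glob order is lexical, i.e. oldest to newest here
            cp -Rp "$d". Merged/      # copy contents; later (newer) runs overwrite older files
        done

    rsync -a "$d"/ Merged/ inside the same loop does the same job and has a dry-run flag (-n) that makes it easy to preview the result first.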

    Read the article

  • How to feed data over STDIN to multiple external commands in ruby.

    - by Erik
    This question is a bit like my previous (answered) question: How to run multiple external commands in the background in ruby. But, in this case I am looking for a way to feed ruby strings over STDIN to external processes, something like this (the code below is not valid but illustrates my goal):

        #!/usr/bin/ruby
        str1 = 'In reality a relatively large string.....'
        str2 = 'Another large string'
        str3 = 'etc..'
        spawn 'some_command.sh', :stdin => str1
        spawn 'some_command.sh', :stdin => str2
        spawn 'some_command.sh', :stdin => str3
        Process.waitall
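
    For reference, a hedged shell analogue of the goal (not the Ruby answer itself): feed a different string to each background copy of some_command.sh over stdin, then wait for all of them:

        str1='In reality a relatively large string.....'
        str2='Another large string'
        str3='etc..'
        printf '%s' "$str1" | ./some_command.sh &
        printf '%s' "$str2" | ./some_command.sh &
        printf '%s' "$str3" | ./some_command.sh &
        wait                        # blocks until all three background pipelines finish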

    Read the article

  • HPUX setacl leaves uid behind

    - by Woot4Moo
    I have a shell script that I execute after uninstalling a web application. The script is meant to clean up permissions that were needed while the application was running:

        find /opt/path -exec setacl -d user:myUser{} ';'

    After this executes and the ACL entry is removed, I am left with an ACL that looks as follows:

        user:101:--- /opt/path

    How can I properly call setacl to remove the user without leaving a uid behind?

    Read the article

  • What do programs see when ZFS can't deliver uncorrupted data?

    - by Jay Kominek
    Say my program attempts a read of a byte in a file on a ZFS filesystem. ZFS can locate a copy of the necessary block, but cannot locate any copy with a valid checksum (they're all corrupted, or the only disks present have corrupted copies). What does my program see, in terms of the return value from the read, and the byte it tried to read? And is there a way to influence the behavior (under Solaris, or any other ZFS-implementing OS), that is, force failure, or force success, with potentially corrupt data?
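
    Not an authoritative answer, but for reference: the commonly reported behaviour is that the read() fails with an I/O error (EIO) rather than returning corrupt bytes, and the affected files are listed by zpool status. A quick way to observe this on a given system, assuming a pool named tank:

        dd if=/tank/data/somefile of=/dev/null bs=128k    # typically fails with an I/O error at the bad block
        zpool status -v tank                              # lists "Permanent errors" with the affected file paths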

    Read the article

  • Unknown error in Producer/Consumer program, believe it to be an infinite loop.

    - by ray2k
    Hello, I am writing a program that solves the producer/consumer problem, specifically the bounded-buffer version (I believe they mean the same thing). The producer will generate x random numbers, where x is a command line parameter to my program. At the moment I believe my program is entering an infinite loop, but I'm not sure why. I believe I am using the semaphores correctly. You compile it like this:

        gcc -o prodcon prodcon.cpp -lpthread -lrt

    Then, to run: ./prodcon 100 (the number of random numbers to produce). This is my code.

        typedef int buffer_item;

        #include <stdlib.h>
        #include <stdio.h>
        #include <pthread.h>
        #include <semaphore.h>
        #include <unistd.h>
        #include <time.h>   /* added: declares time(), used in returnRandom() */

        #define BUFF_SIZE 10
        #define RAND_DIVISOR 100000000
        #define TRUE 1

        //two threads
        void *Producer(void *param);
        void *Consumer(void *param);
        int insert_item(buffer_item item);
        int remove_item(buffer_item *item);
        int returnRandom();

        //the global semaphores
        sem_t empty, full, mutex;

        //the buffer
        buffer_item buf[BUFF_SIZE];

        //buffer counter
        int counter;

        //number of random numbers to produce
        int numRand;

        int main(int argc, char** argv) {
            /* thread ids and attributes */
            pthread_t pid, cid;
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

            numRand = atoi(argv[1]);

            sem_init(&empty, 0, BUFF_SIZE);
            sem_init(&full, 0, 0);
            sem_init(&mutex, 0, 0);

            printf("main started\n");
            pthread_create(&pid, &attr, Producer, NULL);
            pthread_create(&cid, &attr, Consumer, NULL);
            printf("main gets here");

            pthread_join(pid, NULL);
            pthread_join(cid, NULL);
            printf("main done\n");
            return 0;
        }

        //generates a randum number between 1 and 100
        int returnRandom() {
            int num;
            srand(time(NULL));
            num = rand() % 100 + 1;
            return num;
        }

        //begin producing items
        void *Producer(void *param) {
            buffer_item item;
            int i;
            for(i = 0; i < numRand; i++) {
                //sleep for a random period of time
                int rNum = rand() / RAND_DIVISOR;
                sleep(rNum);

                //generate a random number
                item = returnRandom();

                //acquire the empty lock
                sem_wait(&empty);
                //acquire the mutex lock
                sem_wait(&mutex);

                if(insert_item(item)) {
                    fprintf(stderr, " Producer report error condition\n");
                } else {
                    printf("producer produced %d\n", item);
                }

                /* release the mutex lock */
                sem_post(&mutex);
                /* signal full */
                sem_post(&full);
            }
            return NULL;
        }

        /* Consumer Thread */
        void *Consumer(void *param) {
            buffer_item item;
            int i;
            for(i = 0; i < numRand; i++) {
                /* sleep for a random period of time */
                int rNum = rand() / RAND_DIVISOR;
                sleep(rNum);

                /* aquire the full lock */
                sem_wait(&full);
                /* aquire the mutex lock */
                sem_wait(&mutex);

                if(remove_item(&item)) {
                    fprintf(stderr, "Consumer report error condition\n");
                } else {
                    printf("consumer consumed %d\n", item);
                }

                /* release the mutex lock */
                sem_post(&mutex);
                /* signal empty */
                sem_post(&empty);
            }
            return NULL;
        }

        /* Add an item to the buffer */
        int insert_item(buffer_item item) {
            /* When the buffer is not full add the item and increment the counter */
            if(counter < BUFF_SIZE) {
                buf[counter] = item;
                counter++;
                return 0;
            } else {
                /* Error the buffer is full */
                return -1;
            }
        }

        /* Remove an item from the buffer */
        int remove_item(buffer_item *item) {
            /* When the buffer is not empty remove the item and decrement the counter */
            if(counter > 0) {
                *item = buf[(counter-1)];
                counter--;
                return 0;
            } else {
                /* Error buffer empty */
                return -1;
            }
        }

    Read the article

  • Stopping httpd causes a process started from perl CGI script to receive SIGTERM

    - by Pranav Pal
    I am running a shell script from a Perl CGI script:

        #!/usr/bin/perl
        my $command = "./script.sh &";
        my $pid = fork();
        if (defined($pid) && $pid==0) {
            # background process
            system( $command );
        }

    The shell script looks like this:

        #!/bin/sh
        trap 'echo trapped' 15
        tail -f test.log

    When I run the CGI script from the browser and then stop httpd using /etc/init.d/httpd stop, the script receives a SIGTERM signal. I was expecting the script to run as a separate process and not be tied in any way to httpd. Though I can trap the SIGTERM, I would like to understand why the script is receiving SIGTERM at all. What am I doing wrong here? I am running RHEL 5.8 and Apache HTTP server 2.4. Thanks, Pranav
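
    A hedged sketch of one common workaround, on the assumption that the forked child is still in httpd's process group/session and so receives the signal sent to the whole group at shutdown: detach the script into its own session before backgrounding it (the log path is just a placeholder).

        # from the CGI, the wrapper command could become something like:
        #   my $command = "setsid ./script.sh < /dev/null > /tmp/script.log 2>&1 &";

        # the same idea expressed directly in shell, with nohup for good measure:
        setsid nohup ./script.sh < /dev/null > /tmp/script.log 2>&1 &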

    Read the article

  • ImageMagick - how to enforce min/max heights/widths?

    - by Henryh
    Using ImageMagick, how can I resize an image to have a minimum height of 150px and width of 200px, and also a maximum height of 225px and width of 275px?

    UPDATE: In case it helps, here's a further explanation of what I'm experiencing. I have a bunch of images with all different aspect ratios. Some images have 1:5 height/width ratios; some have 5:1 height/width ratios. So what I want to do is set a minimum height/width for the image, but I also don't want the image to be larger than a particular size. If I need to apply white padding to an image to make it fit within my constraints so that I don't have to distort the image, I'd like to do so.
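
    A hedged sketch of one possible recipe using ImageMagick's convert and identify (exact behaviour may vary by version): first shrink anything over the maximum, then pad up to the minimum with white instead of distorting.

        # step 1: "275x225>" shrinks only images larger than the maximum, keeping the aspect ratio
        convert input.jpg -resize '275x225>' resized.jpg

        # step 2: pad with white up to the 200x150 minimum; the extent is never smaller
        # than the current size in either direction, so nothing gets cropped
        w=$(identify -format '%w' resized.jpg)
        h=$(identify -format '%h' resized.jpg)
        [ "$w" -lt 200 ] && w=200
        [ "$h" -lt 150 ] && h=150
        convert resized.jpg -background white -gravity center -extent "${w}x${h}" output.jpg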

    Read the article

  • Editing Multiple files in vi with Wildcards

    - by Alan Storm
    When using the programmer's text editor vi, I'll often use a wildcard search to be lazy about the file I want to edit:

        vi ThisIsAReallLongFi*.txt

    When this matches a single file it works great. However, if it matches multiple files, vi does something weird. First, it opens the first file for editing. Second, when I :wq out of the file, I get a message at the bottom of the terminal that looks like this:

        E173: 4 more files to edit
        Hit ENTER or type command to continue

    When I hit enter, it returns me to edit mode in the file I was just in. The behavior I'd expect here would be that vi would move on to the next file to edit. So, what's the logic behind vi's behavior here? Is there a way to move on and edit the next file that's been matched? And yes, I know about tab completion; this question is based on curiosity and wanting to understand the shell better.
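
    A short sketch of the relevant pieces (hedged; exact messages differ between vi and vim): the shell expands the glob, vi just receives an argument list, and the argument-list commands move through it:

        vi ThisIsAReallLongFi*.txt    # the shell expands the wildcard; vi gets the whole file list
        # then, inside vi/vim:
        #   :args    show the files that were matched
        #   :n       edit the next file in the argument list
        #   :wn      write the current file, then edit the next one
        #   :rew     go back ("rewind") to the first file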

    Read the article

  • Inserting text in a file from a variable

    - by user754905
    I have a file that looks something like this: ABC DEF GHI. I have a shell variable that looks something like this: var="MRD". What I want to do is make my file look like this: ABC MRD DEF GHI. I was trying to do this:

        sed -i -e 's/ABC/&$var/g' text.txt

    but it only inserts the literal text $var instead of the value. I also tried this:

        sed -i -e 's/ABC/&"$var"/g' text.txt

    but that didn't work either. Any thoughts? Thanks!
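
    A minimal sketch of the usual cause: inside single quotes the shell never expands $var, so sed sees the literal characters. Double-quoting the sed program (or closing and reopening the quotes around the variable) lets the value through:

        var="MRD"
        sed -i -e "s/ABC/&$var/g" text.txt           # double quotes: $var expands to MRD
        # or, mixing quote styles around the variable:
        sed -i -e 's/ABC/&'"$var"'/g' text.txt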

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment and keep a central, bare repository where I pull and push most of my stuff. This central repository runs on Linux, and I check out on Windows XP/7, Mac and Linux. In all repositories I put the following in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. The first time I modify stuff on one Windows machine (XP) there is no problem, and when I look at the diff it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though the local line endings are \r\n as expected (checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems are wrestling over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't use to have autocrlf set, but set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.
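
    One widely circulated recipe, offered here as a hedged sketch rather than a guaranteed fix: after changing autocrlf (or adding a .gitattributes rule), the existing checkouts still carry the old line endings, and re-normalizing the repository usually clears the "every line shows as changed" symptom.

        echo '* text=auto' >> .gitattributes     # let git decide text vs binary per file
        git rm --cached -r .                     # drop the index, keep the working tree
        git reset --hard                         # re-check-out with the current eol settings
        git add .                                # newer git (2.16+) can use: git add --renormalize .
        git commit -m "Normalize line endings"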

    Read the article
