Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.


  • file_operations question: how do I know if a process that opened a file for writing has decided to close it

    - by djTeller
    Hi Kernel Gurus, I'm currently writing a simple "multicaster" module. Only one process can open a proc filesystem file for writing, and the rest can open it for reading. To do so I use the inode_operations .permission callback: I check the operation, and when I detect that someone has opened the file for writing I set a flag ON. I need a way to detect that the process which opened the file for writing has closed it, so I can set the flag OFF and someone else can open it for writing. Currently, when someone opens the file for writing, I save the current->pid of that process, and when the .close callback is called I check whether the closing process is the one I saved earlier. Is there a better way to do this, without saving the pid? Perhaps by checking the files the current process has open and their permissions... Thanks!
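
    A sketch of one possible approach (not the poster's module; identifiers such as mcast_open and writer_present are made up for illustration): instead of tracking a pid, give the file a file_operations table whose .release callback receives the same struct file that .open did, so its f_mode can be tested for FMODE_WRITE when the writer goes away.

        /* Minimal fragment to drop into a module; not tied to any particular kernel version. */
        #include <linux/fs.h>
        #include <linux/atomic.h>
        #include <linux/module.h>

        static atomic_t writer_present = ATOMIC_INIT(0);

        static int mcast_open(struct inode *inode, struct file *file)
        {
            if (file->f_mode & FMODE_WRITE) {
                /* allow only one writer at a time */
                if (atomic_cmpxchg(&writer_present, 0, 1) != 0)
                    return -EBUSY;
            }
            return 0;
        }

        static int mcast_release(struct inode *inode, struct file *file)
        {
            /* same struct file that was opened, so the open mode is still visible here */
            if (file->f_mode & FMODE_WRITE)
                atomic_set(&writer_present, 0);
            return 0;
        }

        static const struct file_operations mcast_fops = {
            .owner   = THIS_MODULE,
            .open    = mcast_open,
            .release = mcast_release,
        };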

    Read the article

  • Use the output of a command as input of the next command

    - by r2b2
    So I call this PHP script from the command line:

        /usr/bin/php /var/www/bims/index.php "projects/output"

    and its output is:

        file1
        file2
        file3

    What I would like to do is feed this output to the "rm" command, but I don't think I'm doing it right:

        /usr/bin/php /var/www/bims/index.php "projects/output" | rm

    My goal is to delete whatever file names the PHP script outputs. What would be the proper way to do this? Thanks!

    Read the article

  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation but I cannot manage to make it work. Here's a sample:

        #!/bin/bash
        TRUE=1
        FALSE=0
        ############### Added for testing
        log4bash_LOG_ENABLED=$TRUE
        log4bash_rootLogger=$TRACE,f,s
        log4bash_appender_f=file
        log4bash_appender_f_dir=$(pwd)
        log4bash_appender_f_file=test.log
        log4bash_appender_f_roll_format=%Y%m
        log4bash_appender_f_roll=$TRUE
        log4bash_appender_f_maxBackupIndex=10
        ####################################

        log4bash_abs(){
          if [ "${1:0:1}" == "." ]; then
            builtin echo ${rootDir}/${1}
          else
            builtin echo ${1}
          fi
        }

        log4bash_check_app_dir(){
          if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then
            dir=$(log4bash_abs $1)
            if [ ! -d ${dir} ]; then
              # log a separation line
              mkdir $dir
            fi
          fi
        }

        # Delete old log files
        # $1 Log directory
        # $2 Log filename
        # $3 Log filename suffix
        # $4 Max backup index
        log4bash_delete_old_files(){
          ##### Added for testing
          builtin echo "Running log4bash_delete_old_files $@" >&2
          #####
          if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then
            local directory=$(log4bash_abs $1)
            local filename=$2
            local maxBackupIndex=$4
            local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g')
            local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt)
            local fileCnt=$(builtin echo -e "${logFileList}" | wc -l)
            local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex))
            local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g'))
            ##### Added for testing
            builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" >&2
            #####
            if [ ${fileToDeleteCnt} -gt 0 ]; then
              for f in "${fileToDelete[@]}"; do
                #### Added for testing
                builtin echo "Removing file ${f}" >&2
                ####
                builtin eval rm -f ${f}
              done
            fi
          fi
        }

        # Appender
        # $1 Log directory
        # $2 Log file
        # $3 Log file roll ?
        # $4 Appender Name
        log4bash_filename(){
          builtin echo "Running log4bash_filename $@" >&2
          local format
          local filename
          log4bash_check_app_dir "${1}"
          if [ ${3} -eq 1 ]; then
            local formatProp=${4}_roll_format
            format=${!formatProp}
            if [ -z ${format} ]; then
              format=$log4bash_appender_file_format
            fi
            local suffix=.`date "+${format}"`
            filename=${1}/${2}${suffix}
            # Old log files deletion
            local previousFilenameVar=int_${4}_file_previous
            local maxBackupIndexVar=${4}_maxBackupIndex
            if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then
              builtin eval export $previousFilenameVar=$filename
              log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}"
            else
              builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}"
            fi
          else
            filename=${1}/${2}
          fi
          builtin echo $filename
        }

        ######################## Added for testing
        filename_caller(){
          builtin echo "filename_caller Call $1"
          output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" ))
          builtin echo ${output}
        }

        #### Previous logs generation
        for i in {1101..1120}; do
          file="${log4bash_appender_f_file}.2012${i:2:3}"
          builtin echo "${file} $i"
          touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file
        done

        for i in {1..4}; do
          filename_caller $i
        done

    I expect the log4bash_filename function to step into the following if only when the calculated log filename differs from the previous one:

        if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then

    For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but that is not the case, so log4bash_filename steps into this if every time, which is really not necessary... It looks like the issue is due to the following line not working properly:

        builtin eval export $previousFilenameVar=$filename

    I have some theories to explain why. In the original code the functions are declared and exported as readonly, which would make them unable to modify global variables; I removed the readonly declarations in the sample above, but the problem persists. Also, the function calls are performed inside $(), which makes them run in separate shell instances, so variables modified there are not visible to the main shell. But I cannot manage to find a workaround to this issue... Any help is appreciated, thanks in advance!

    Read the article

  • Are runtime-linked library globals shared among plugins loaded with dlopen?

    - by conejoroy
    I have a C++ program that links at runtime with, let's say, mylib.so. The same program then uses dlopen()/dlsym() to load a function from myplugin.so, a dynamic library that in turn depends on mylib.so. My question is: will the program AND the function in the plugin access the same globals defined in mylib.so, in the same memory reserved for the process, or will each be given its own unrelated copy in its own memory space? If the latter is the default behaviour, is it possible to change that? Thanks in advance =)!
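
    For illustration, a minimal sketch of the loading pattern described above, with made-up file and symbol names (myplugin.so, plugin_entry). In the usual case the dynamic loader reference-counts already-loaded libraries, so the plugin's dependency on mylib.so is normally satisfied by the copy the main program already mapped, and that library's globals exist once per process:

        /* Build with: cc -o host host.c -ldl */
        #include <dlfcn.h>
        #include <stdio.h>

        int main(void)
        {
            /* RTLD_NOW resolves the plugin's references immediately; its
               DT_NEEDED entry for mylib.so is satisfied by the copy the
               main program already has mapped, not by a second copy. */
            void *plugin = dlopen("./myplugin.so", RTLD_NOW);
            if (!plugin) {
                fprintf(stderr, "dlopen: %s\n", dlerror());
                return 1;
            }

            /* hypothetical plugin entry point */
            void (*entry)(void) = (void (*)(void))dlsym(plugin, "plugin_entry");
            if (entry)
                entry();

            dlclose(plugin);
            return 0;
        }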

    Read the article

  • Simple POSIX threads question

    - by Andy
    Hi, I have this POSIX thread:

        void subthread(void)
        {
            while (!quit_thread) {
                // do something
                ...

                // don't waste cpu cycles
                if (!quit_thread) usleep(500);
            }

            // free resources
            ...

            // tell main thread we're done
            quit_thread = FALSE;
        }

    Now I want to terminate subthread() from my main thread. I've tried the following:

        quit_thread = TRUE;

        // wait until subthread() has cleaned its resources
        while (quit_thread);

    But it does not work! The while() clause never exits, although my subthread clearly sets quit_thread to FALSE after having freed its resources! If I modify my shutdown code like this:

        quit_thread = TRUE;

        // wait until subthread() has cleaned its resources
        while (quit_thread) usleep(10);

    then everything works fine! Could someone explain to me why the first solution does not work, and why the version with usleep(10) suddenly does? I know that this is not a pretty solution. I could use semaphores/signals for this, but I'd like to learn something about multithreading, so I'd like to know why my first solution doesn't work. Thanks!
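
    The plain-flag busy-wait most likely fails because nothing tells the compiler that quit_thread can change from another thread, so the read inside while (quit_thread); may legally be hoisted out of the loop. A hedged sketch of one race-free alternative (illustrative names, not the poster's code): make the flag a C11 atomic and let the main thread wait with pthread_join():

        /* Build with: cc -std=c11 -pthread -o demo demo.c */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>
        #include <unistd.h>

        static atomic_bool quit_thread = false;   /* shared shutdown flag, race-free */

        static void *subthread(void *arg)
        {
            (void)arg;
            while (!atomic_load(&quit_thread)) {
                /* do the periodic work here */
                usleep(500);
            }
            /* free resources here */
            return NULL;
        }

        int main(void)
        {
            pthread_t tid;
            pthread_create(&tid, NULL, subthread, NULL);

            sleep(1);                              /* let the worker run for a while */
            atomic_store(&quit_thread, true);      /* request shutdown */
            pthread_join(tid, NULL);               /* blocks until the worker has cleaned up */

            puts("worker finished");
            return 0;
        }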

    Read the article

  • How to install a program that depends on the libstdc++ library

    - by Alex Farber
    My program is written in C++, using GCC on Ubuntu 9.10 64 bit. It depends on /usr/lib64/libstdc++.so.6, which actually points to /usr/lib64/libstdc++.so.6.0.13. Now I copy this program to a virgin Ubuntu 7.04 system and try to run it. It doesn't run, as expected. Then I add the following files to the program directory: libstdc++.so.6.0.13 and libstdc++.so.6 (a link to libstdc++.so.6.0.13), and execute the command: LD_LIBRARY_PATH=. ./myprogram Now everything is OK. The question: how can I write an installation script for such a program? The myprogram file itself should be placed in /usr/local/bin. What can I do with the dependencies? For example, on the destination computer, the /usr/lib64/libstdc++.so.6 link points to /usr/lib64/libstdc++.so.6.0.8. What can I do about this? Note: the program is closed-source; I cannot provide the source code and makefile.

    Read the article

  • Unmanaged Process in Mono

    - by Residuum
    I want to start a quite expensive process (jackd) from a Mono application, and do not need full access to the process from the application itself. As the process is so expensive in terms of CPU usage, a Glib.IdleHandler for polling the process will not work: it is never executed, and the GUI becomes unresponsive. Is there any way to have my cake and eat it too in Mono? EDIT: I only need to be able to start and stop the process from Mono; I do not need information about the state of the process or whether it has exited, as my application will register itself as a client to jackd. Basically I need a "replacement" for bash's jackd &>/dev/null 2>&1 & for the System.Diagnostics.Process ;). Here is what I have so far for starting and stopping the process:

        public void StartJackd()
        {
            _jackd = new Process ();
            _jackd.StartInfo = _jackdStartup;
            if (_jackd.Start ()) {
                _jackd.EnableRaisingEvents = true;
                _jackd.Exited += JackdExited;
            }
        }

        public void StopJackd()
        {
            if (_jackd != null && !_jackd.HasExited) {
                _jackd.CloseMainWindow ();
            }
        }

    And somewhere else I have this code for registering the IdleHandler:

        GLib.Idle.Add(new GLib.IdleHandler(UpdateJackdConnections));

    This handler fires all the time while the process is not running, but never when jackd is running.

    Read the article

  • debugfs_create_file doesn't create file

    - by bala1486
    Hello, I am trying to create a debugfs file using debugfs_create_file(...). I have written sample code for this:

        static int __init mmapexample_module_init(void)
        {
            file1 = debugfs_create_file("mmap_example", 0644, NULL, NULL, &my_fops);
            printk(KERN_ALERT "Hello, World\n");
            if (file1 == NULL) {
                printk(KERN_ALERT "Error occurred\n");
            }
            if (file1 == ERR_PTR(-ENODEV)) {
                printk(KERN_ALERT "ENODEV occurred\n");
            }
            return 0;
        }

    When I ran insmod I got the Hello, World message but not the error messages, so I think debugfs_create_file worked fine. However, I couldn't find any file in /sys/kernel/debug. The folder is there, but it is empty. Can anyone help me with this? Thank you... Thanks, Bala

    Read the article

  • Shortening large CSV on debian

    - by Unkwntech
    I have a very large CSV file and I need to write an app that will parse it, but using the 6GB file to test against is painful. Is there a simple way to extract the first hundred or two lines without having to load the entire file into memory? The file resides on a Debian server.

    Read the article

  • Replacing a website on a Tomcat Server with a static HTML website

    - by Ashin Mandal
    I made a small static website for my client, and now they want me to replace their present dynamic website with the static one. They have Ubuntu with SSH installed at the remote location. Their existing website is running on a Tomcat6 server and the site root is in "/var/lib/tomcat6/webapps/ROOT/". My website consists of just static HTML pages. How can I reconfigure/replace the present website with the one I made? Should I just stop the server and replace the files in the site root with my files?

    Read the article

  • Checking if an SSH tunnel is up and running

    - by Jarmund
    I have a perl script which, when distilled a bit, looks like this:

        my $randport = int(10000 + rand(1000)); # Random port, as other scripts like this run at the same time
        my $localip = '192.168.100.' . ($port - 4000); # Don't ask... backwards compatibility
        system("ssh -NL $randport:$localip:23 root\@$ip -o ConnectTimeout=60 -i somekey &"); # create the tunnel in the background
        sleep 10; # Give the tunnel some time to come up

        # Create the telnet object
        my $telnet = new Net::Telnet(
            Timeout    => 10,
            Host       => 'localhost',
            Port       => $randport,
            Telnetmode => 0,
            Errmode    => \&fail,
        );

        # SNIPPED... a bunch of parsing data from $telnet

    The thing is that the target $ip is on a link with very unpredictable bandwidth, so the tunnel might come up right away, it might take a while, or it might not come up at all. So a sleep is necessary to give the tunnel some time to get up and running. So the question is: how can I test whether the tunnel is up and running? 10 seconds is a really undesirable delay if the tunnel comes up straight away. Ideally, I would like to check whether it is up and continue with creating the telnet object once it is, up to a maximum of, say, 30 seconds.

    Read the article

  • Shell Script- each unique user

    - by Dinis Monteiro
    Hi guys, I need to "for each unique user, report which group they are a member of and when they last logged in", so I have:

        #!/bin/sh
        echo "Your initial login:"
        who | cut -d' ' -f1 | sort | uniq
        echo "Now is logged:"
        whoami
        echo "Group ID:"
        id -G $whoami
        case $1 in
            "-l") last -Fn 10 | tr -s " " ;;
            *) last -Fn 10 | tr -s " " | egrep -v '(^reboot)|(^$)|(^wtmp a)|(^ftp)' | cut -d" " -f1,5,7 | sort -uM | uniq -c
        esac

    My question is: how can I show each unique user? The script above only shows the most recently logged-in users, but I need all unique users. Can anyone help? Thanks

    Read the article

  • Kernel error causing cpu to go into shutdown state

    - by EpsilonVector
    What kind of kernel error can cause the CPU to go into a shutdown state? I'm doing a homework assignment in OS, and we made changes in sched.c (adding a new scheduling policy, which involved adding another prio_array to the queue and switching between them when needed). Processes using this policy cause the CPU to enter a shutdown state when they finish. Any suggestions where to look?

    Read the article

  • Perl standard input with argument inside Bash

    - by neversaint
    I want to have a pipe like this in bash:

        #! /usr/bin/bash
        cut -f1,2 file1.txt | myperl.pl foo | sort -u

    Now myperl.pl has content like this:

        my $argv = $ARGV[0] || "foo";
        while (<>) {
            chomp;
            if ($argv eq "foo") {
                # do something with $_
            }
            else {
                # do another
            }
        }

    But why can't the Perl script recognize the parameter passed through bash? Namely, the code breaks with this message:

        Can't open foo: No such file or directory at myperl.pl line 15.

    What is the right way to do this so that my Perl script can receive standard input and a parameter at the same time?

    Read the article

  • How to Implement Web Based Find File Database Text Search

    - by neversaint
    I have a series of files like this: foo1.txt.gz foo2.txt.gz bar1.txt.gz ..etc.. and a tabular file that describes them:

        foo1 - Explain foo1
        foo2 - Explain foo2
        bar1 - Explain bar1
        ..etc..

    What I want to do is have a website with a simple search bar that allows people to type foo1, or just foo, and finally returns the gzipped file(s) and the explanation of the file(s). What's the best way to implement this? Sorry, I am totally new to this area.

    Read the article

  • Ubuntu 10.04: Unable to Start RabbitMQ Server Post-Installation

    - by Garland W. Binns
    After installing RabbitMQ on Ubuntu 10.04 I receive a failure message that the service was unable to start. Any insight into the issue would be greatly appreciated! Below are the contents of startup_log and startup_err.

    startup_log:

        {error_logger,{{2012,7,7},{15,50,31}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,etimedout}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[#Port<0.100>,<0.17.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,512}],[]]}
        {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[[rabbitmqprelaunch877,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
        {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}

    startup_err:

        Crash dump was written to: erl_crash.dump
        Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

    Read the article

  • sudo changes PATH - why?

    - by Michiel de Mare
    This is the PATH variable without sudo:

        $ echo 'echo $PATH' | sh
        /opt/local/ruby/bin:/usr/bin:/bin

    This is the PATH variable with sudo:

        $ echo 'echo $PATH' | sudo sh
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6/bin

    As far as I can tell, sudo is supposed to leave PATH untouched. What's going on? How do I change this? (This is on Ubuntu 8.04.) UPDATE: as far as I can see, none of the scripts started as root change PATH in any way. From man sudo: To prevent command spoofing, sudo checks ``.'' and ``'' (both denoting current directory) last when searching for a command in the user's PATH (if one or both are in the PATH). Note, however, that the actual PATH environment variable is not modified and is passed unchanged to the program that sudo executes.

    Read the article

  • Question about using awk to print all columns greater than the nth

    - by Andy
    Right now I have this line, and it worked until I had whitespace in the second field:

        svn status | grep '\!' | gawk '{print $2;}' > removedProjs

    Is there a way to have awk print everything in $2 or greater? ($3, $4... until we don't have any more columns?) I suppose I should add that I'm doing this in a Windows environment with Cygwin.

    Read the article

  • rsyncing - ignoring checksums

    - by paulj3000
    Trying to copy a bunch of data over, and time, unfortunately, is of the essence. I'd like to do an rsync of all the data in one direction; pretty much, rsync should just clobber whatever is on the destination server. Is there a way to do an rsync and just say "overwrite all files"? Is there a better way of doing this? We're talking 500GB of data that only has to go in one direction.

    Read the article

  • Easily measure elapsed time

    - by hap497
    I am trying to use time() to measure various points of my program. What I don't understand is why the before and after values are the same. I understand this is not the best way to profile my program; I just want to see how long something takes.

        printf("**MyProgram::before time= %ld\n", time(NULL));

        doSomthing();
        doSomthingLong();

        printf("**MyProgram::after time= %ld\n", time(NULL));

    I have also tried:

        struct timeval diff, startTV, endTV;

        gettimeofday(&startTV, NULL);

        doSomething();
        doSomethingLong();

        gettimeofday(&endTV, NULL);

        timersub(&endTV, &startTV, &diff);

        printf("**time taken = %ld %ld\n", diff.tv_sec, diff.tv_usec);

    How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec? What about **time taken = 4 45025? Does that mean 4 seconds and 25 msec?
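
    For reference, struct timeval's tv_usec field holds microseconds, so a result of 0 26339 is roughly 26.3 milliseconds rather than nanoseconds. A minimal sketch (made-up workload, not the poster's program) that times a region with clock_gettime(CLOCK_MONOTONIC) and reports milliseconds:

        /* Build with: cc -o elapsed elapsed.c   (older glibc may also need -lrt) */
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        int main(void)
        {
            struct timespec start, end;

            clock_gettime(CLOCK_MONOTONIC, &start);
            usleep(26339);                            /* stand-in for the work being timed */
            clock_gettime(CLOCK_MONOTONIC, &end);

            /* tv_nsec holds nanoseconds; convert the difference to milliseconds */
            double elapsed_ms = (end.tv_sec - start.tv_sec) * 1000.0
                              + (end.tv_nsec - start.tv_nsec) / 1e6;
            printf("elapsed: %.3f ms\n", elapsed_ms); /* roughly 26.3 ms here */
            return 0;
        }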

    Read the article
