Search Results

Search found 26947 results on 1078 pages for 'util linux'.

Page 521/1078

  • Problem with .release behavior in file_operations

    - by Yannick
    Hello, I'm dealing with a problem in a kernel module that gets data from userspace through a /proc entry. I set open/write/release entries for my own /proc entry and can use it to get data from userspace without trouble. I handle errors in the open/write functions, and they are visible to the user as open/fopen or write/fwrite/fprintf errors. But some errors can only be checked at close time (because that is when all the data is available). In those cases I return something other than 0, which I assumed would in some way become the value that 'close' or 'fclose' returns to the user. But whatever value I return, my close behaves as if everything is fine. To be sure, I replaced all the release() code with a simple 'return(-1);' and wrote a program that opens/writes/closes the /proc entry and prints the close return value (and errno). It always returns '0' whatever value I give. The behavior is the same with 'fclose', or when using the shell (echo "..." > /proc/my/entry). Any clue about this strange behavior, which is not the one described in many tutorials I found? BTW I'm using a RHEL5 kernel (2.6.18, Red Hat modified), on a 64-bit system. Thanks. Regards, Yannick

    Read the article

  • PHP mail() function not delivering mail

    - by Ryan Jones
    Hi there, I have a slight problem. I am using a working script (it works on my testing account on a shared server) to send mail through PHP using the mail() function. I just got a dedicated server, and I haven't been able to get the function to work. I've spent the last 10 or so hours reading various documentation on BIND (for the SPF record), dovecot, sendmail and postfix, trying various things to get this to work. There is clearly something that I am missing. So we know the PHP code works fine; the headers and everything are fine. We know this as it's a direct copy from my testing account, so the problem must arise somewhere in the server config. The path to sendmail is correct, and sendmail is (apparently) working fine. I've set up the script to print "Sent" or "Error" based on the boolean result from the PHP mail() function. That is: if(mail($blah,$blah,$blah,$blah,$blah)) { echo "Sent"; } else { echo "Error";} And the result ALWAYS comes up as "Sent" - however, the email never arrives. Can someone suggest things to check, as I'm completely new to this (24 hours or so!)? Thanks in advance. Ryan
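    Since mail() returning true only means the message was handed off to the local MTA, the next place to look is the MTA's own queue and log. A rough shell checklist (log paths and the mailx test assume a typical sendmail/postfix setup; adjust for your distribution):

      mailq                                    # is the message stuck in the outgoing queue?
      tail -n 50 /var/log/maillog              # RHEL/CentOS-style mail log location
      tail -n 50 /var/log/mail.log             # Debian/Ubuntu-style mail log location
      # Bypass PHP entirely; if this does not arrive either, the MTA/DNS setup is the problem:
      echo "test body" | mail -s "php bypass test" you@example.com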

    Read the article

  • Running ssh-keygen without human interaction?

    - by Marco
    Would it be possible to run ssh-keygen without human interaction? I have a shell script that takes care of server deployment from start to finish, but ssh-keygen is the only remaining piece that still requires my input. Would it be possible to feed the parameters to it? Or is there something similar to debconf-set-selections that could be used for this? *running Debian
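    For reference, ssh-keygen stops prompting once the passphrase and output file are supplied on the command line, so a deployment script can generate a key in one shot. A minimal sketch (key type, path and comment are placeholders):

      # -N "" : empty passphrase (no prompt)   -f : output file (no prompt)   -q : quiet
      # Remove any existing key first, otherwise ssh-keygen still asks before overwriting.
      rm -f /root/.ssh/deploy_key /root/.ssh/deploy_key.pub
      ssh-keygen -q -t rsa -b 4096 -N "" -C "deploy key" -f /root/.ssh/deploy_key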

    Read the article

  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation but I cannot manage to make it work. Here's a sample: #!/bin/bash TRUE=1 FALSE=0 ############### Added for testing log4bash_LOG_ENABLED=$TRUE log4bash_rootLogger=$TRACE,f,s log4bash_appender_f=file log4bash_appender_f_dir=$(pwd) log4bash_appender_f_file=test.log log4bash_appender_f_roll_format=%Y%m log4bash_appender_f_roll=$TRUE log4bash_appender_f_maxBackupIndex=10 #################################### log4bash_abs(){ if [ "${1:0:1}" == "." ]; then builtin echo ${rootDir}/${1} else builtin echo ${1} fi } log4bash_check_app_dir(){ if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then dir=$(log4bash_abs $1) if [ ! -d ${dir} ]; then #log a separation line mkdir $dir fi fi } # Delete old log files # $1 Log directory # $2 Log filename # $3 Log filename suffix # $4 Max backup index log4bash_delete_old_files(){ ##### Added for testing builtin echo "Running log4bash_delete_old_files $@" >&2 ##### if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then local directory=$(log4bash_abs $1) local filename=$2 local maxBackupIndex=$4 local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g') local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt) local fileCnt=$(builtin echo -e "${logFileList}" | wc -l) local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex)) local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g')) ##### Added for testing builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" >&2 ##### if [ ${fileToDeleteCnt} -gt 0 ]; then for f in "${fileToDelete[@]}"; do #### Added for testing builtin echo "Removing file ${f}" >&2 #### builtin eval rm -f ${f} done fi fi } #Appender # $1 Log directory # $2 Log file # $3 Log file roll ? # $4 Appender Name log4bash_filename(){ builtin echo "Running log4bash_filename $@" >&2 local format local filename log4bash_check_app_dir "${1}" if [ ${3} -eq 1 ];then local formatProp=${4}_roll_format format=${!formatProp} if [ -z ${format} ]; then format=$log4bash_appender_file_format fi local suffix=.`date "+${format}"` filename=${1}/${2}${suffix} # Old log files deletion local previousFilenameVar=int_${4}_file_previous local maxBackupIndexVar=${4}_maxBackupIndex if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then builtin eval export $previousFilenameVar=$filename log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}" else builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}" fi else filename=${1}/${2} fi builtin echo $filename } ######################## Added for testing filename_caller(){ builtin echo "filename_caller Call $1" output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" )) builtin echo ${output} } #### Previous logs generation for i in {1101..1120}; do file="${log4bash_appender_f_file}.2012${i:2:3}" builtin echo "${file} $i" touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file done for i in {1..4}; do filename_caller $i done I expect the log4bash_filename function to step into the following if only when the calculated log filename is different from the previous one: if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but that is not the case, so log4bash_filename steps into this if all the time, which is really not necessary... It looks like the issue is due to the following line not working properly: builtin eval export $previousFilenameVar=$filename I have some theories to explain why: in the original code, functions are declared and exported as readonly, which makes them unable to modify global variables. I removed the readonly declarations in the above sample, but the problem persists. Function calls are performed in $(), which should make them run in separate shell instances (subshells), so variables modified there are not exported to the main shell. But I cannot manage to find a workaround for this issue... Any help is appreciated, thanks in advance!
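    The second theory is the important one: anything assigned inside a $(...) command substitution happens in a subshell and never reaches the calling shell. A minimal illustration (not part of log4bash) of why the exported variable is lost:

      #!/bin/bash
      # Assignments made inside a command substitution run in a subshell,
      # so the parent shell never sees them.
      counter=0
      bump() { counter=$((counter + 1)); echo "inside: $counter"; }
      result=$(bump)            # bump runs in a subshell here
      echo "$result"            # prints "inside: 1"
      echo "outside: $counter"  # still prints "outside: 0"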

    Read the article

  • MySQL nested CASE error I need help with?

    - by AK
    What I am trying to do here is: IF the records in table todo, as identified by $done, have a value in the column recurinterval, THEN reset the date_scheduled column, ELSE just set the status_id column to 6 for those records. This is the error I get from mysql_error() ... You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CASE recurinterval != 0 AND recurinterval IS NOT NULL THEN SET date_sche' at line 2 How can I make this statement work? UPDATE todo CASE recurinterval != 0 AND recurinterval IS NOT NULL THEN SET date_scheduled = CASE recurunit WHEN 'DAY' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY) WHEN 'WEEK' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK) WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH) WHEN 'YEAR' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR) END WHERE todo_id IN ($done) ELSE SET status_id = 6 WHERE todo_id IN ($done) END The following MySQL statement worked just fine before I revised it as above. UPDATE todo SET date_scheduled = CASE recurunit WHEN 'DAY' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY) WHEN 'WEEK' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK) WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH) WHEN 'YEAR' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR) END WHERE todo_id IN ($done) AND recurinterval != 0 AND recurinterval IS NOT NULL
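    UPDATE has no CASE ... SET ... form; CASE is only an expression, so the usual fix is to fold the condition into each assignment in the SET list and run a single statement. A hypothetical sketch driven from the shell via the mysql client (the database name and the $DONE id list are placeholders standing in for the PHP-side $done; <> is the same comparison as !=):

      DONE="1,2,3"   # placeholder for the comma-separated todo_id list
      mysql my_database -e "
          UPDATE todo
          SET date_scheduled = CASE
                  WHEN recurinterval <> 0 AND recurinterval IS NOT NULL THEN
                      CASE recurunit
                          WHEN 'DAY'   THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY)
                          WHEN 'WEEK'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK)
                          WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH)
                          WHEN 'YEAR'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR)
                      END
                  ELSE date_scheduled   -- non-recurring rows keep their date
              END,
              status_id = CASE
                  WHEN recurinterval <> 0 AND recurinterval IS NOT NULL THEN status_id
                  ELSE 6                -- non-recurring rows are simply closed
              END
          WHERE todo_id IN ($DONE);"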

    Read the article

  • How to run 64-bit apps on a 32-bit OS

    - by Sirish Kumar
    Hi, I am using a 32-bit openSUSE OS, and I am using a cross compiler to build a 64-bit application (it does not support building 32-bit apps), as our software will be deployed on a machine running a 64-bit OS. As testing on the target is not always possible, is there any way to run these applications on my 32-bit OS?
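    A 32-bit kernel cannot execute 64-bit ELF binaries natively, so short of a real 64-bit machine the usual fallback is emulation. A rough sketch using QEMU's full-system emulator (the disk image name and memory size are placeholders; expect it to be slow, since there is no hardware virtualization on a 32-bit host):

      # Boot a 64-bit guest (e.g. a minimal 64-bit openSUSE install) under emulation
      # and test the application inside it.
      qemu-system-x86_64 -m 1024 -hda opensuse64-test.img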

    Read the article

  • Is there a reasonable way to attach new path to PATH in bashrc?

    - by Ripley
    Guys, I constantly need to append new paths to the PATH environment variable in .bashrc, like below: export PATH=/usr/local/bin:$PATH Then, to make it take effect, I always do 'source ~/.bashrc' or '. ~/.bashrc', but I found one shortcoming of doing so that makes me uncomfortable. If I keep doing this, PATH gets longer and longer with many duplicated entries; for example, with the previous command, if I source it twice, the value of PATH will be PATH=/usr/local/bin:/usr/local/bin:/usr/local/bin:$PATH (<- the original path). Is there a more decent way to add a new path to PATH in bashrc without making it ugly?
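    A common guard (a sketch for .bashrc) is to test whether the directory is already present before prepending, so re-sourcing the file stays idempotent:

      # Prepend $1 to PATH only if it is not already there.
      add_to_path() {
          case ":$PATH:" in
              *":$1:"*) ;;               # already present, do nothing
              *) PATH="$1:$PATH" ;;
          esac
      }
      add_to_path /usr/local/bin
      export PATH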

    Read the article

  • Android playing Video data from a custom network stream?

    - by Cinar
    Can Android MediaPlayer only work with file sources? I would like to play media (video) from a network stream, but the stream comes in over a non-standard protocol, so I have to somehow feed Android MediaPlayer the data directly. Is there any way to do that? I found a few web pages suggesting using a temporary file for the buffered media data etc., but I would like to minimize I/O usage as much as I can, so I'm looking for an API-only solution if there is one. How about JNI? But it looks like permissions are going to be an issue with that as well.

    Read the article

  • writing to an ioport resulting in segfaults...

    - by Sniperchild
    I'm writing for an Atmel AT91SAM9260 ARM9-cored single-board computer [Glomation GESBC-9260]. Using request_mem_region(0xFFFFFC00,0x100,"name"); // port range runs from fc00 to fcff That works fine and shows up in /proc/iomem. Then I try to write to the last bit of the port at fc20 with writel(0x1, 0xFFFFFC20); and I segfault... specifically "unable to handle kernel paging request at virtual address fffffc20". I'm of the mind that I'm not allocating the right memory space... any helpful insight would be great...

    Read the article

  • BUG - ProteaAudio with Lua does not work

    - by Stackfan
    Any idea why I can't use or build proteaAudio in Lua? 1) lua-devel exists: [root@example ~]# yum install lua-devel Loaded plugins: presto, refresh-packagekit Setting up Install Process Package lua-devel-5.1.4-4.fc12.i686 already installed and latest version Nothing to do 2) Building RtAudio fails: [sun@example proteaAudio_src_090204]$ make g++ -O2 -Wall -DHAVE_GETTIMEOFDAY -D__LINUX_ALSA__ -Irtaudio -Irtaudio/include -I../lua/src -I../archive/baseCode/include -c rtaudio/RtAudio.cpp -o rtaudio/RtAudio.o rtaudio/RtAudio.cpp:365: error: no ‘unsigned int RtApi::getStreamSampleRate()’ member function declared in class ‘RtApi’ rtaudio/RtAudio.cpp: In member function ‘virtual bool RtApiAlsa::probeDeviceOpen(unsigned int, RtApi::StreamMode, unsigned int, unsigned int, unsigned int, RtAudioFormat, unsigned int*, RtAudio::StreamOptions*)’: rtaudio/RtAudio.cpp:5835: error: ‘RTAUDIO_SCHEDULE_REALTIME’ was not declared in this scope rtaudio/RtAudio.cpp:5837: error: ‘struct RtAudio::StreamOptions’ has no member named ‘priority’ make: *** [rtaudio/RtAudio.o] Error 1 [sun@example proteaAudio_src_090204]$ Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio > require("proAudioRt"); stdin:1: module 'proAudioRt' not found: no field package.preload['proAudioRt'] no file './proAudioRt.lua' no file '/usr/share/lua/5.1/proAudioRt.lua' no file '/usr/share/lua/5.1/proAudioRt/init.lua' no file '/usr/lib/lua/5.1/proAudioRt.lua' no file '/usr/lib/lua/5.1/proAudioRt/init.lua' no file './proAudioRt.so' no file '/usr/lib/lua/5.1/proAudioRt.so' no file '/usr/lib/lua/5.1/loadall.so' stack traceback: [C]: in function 'require' stdin:1: in main chunk [C]: ?

    Read the article

  • Shell Script- each unique user

    - by Dinis Monteiro
    Hi guys, I need to "for each unique user, report which group they are a member of and when they last logged in", so I have: #!/bin/sh echo "Your initial login:" who | cut -d' ' -f1 | sort | uniq echo "Now is logged:" whoami echo "Group ID:" id -G $whoami case $1 in "-l") last -Fn 10 | tr -s " " ;; *) last -Fn 10 | tr -s " " | egrep -v '(^reboot)|(^$)|(^wtmp a)|(^ftp)' | cut -d" " -f1,5,7 | sort -uM | uniq -c esac My question is: how can I show each unique user? The script above only shows the most recently logged-in users, but I need all unique users. Can anyone help? Thanks
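    One rough way to cover every user seen in the login history rather than only the recent entries (a sketch; the last/id output format varies a little between systems, so the filtering may need adjusting):

      #!/bin/sh
      # For every user name appearing in the wtmp history, print the groups the
      # user belongs to and the most recent login record. The "reboot"/"wtmp"
      # header and footer lines from `last` are filtered out first.
      for user in $(last | awk '{print $1}' | grep -vE '^(reboot|wtmp|ftp)$' | grep -v '^$' | sort -u); do
          echo "== $user =="
          id -Gn "$user" 2>/dev/null          # group membership
          last -n 1 "$user" | head -n 1       # most recent login line
      done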

    Read the article

  • gcc architecture question

    - by Andy
    Hi, I'm compiling my program with the architecture set to -mtune=i386. However, I'm also linking statically against several libs (libpng, zlib, jpeglib, vorbisfile, libogg). I've built these libs on my own using configure and make, so I guess they were built for my system's architecture, which would be i686. But I don't want that! I want my program to run on i386 too, so I need to make sure that all the libs I'm statically linking against are built for i386 as well. So my question: is there a convenient way to build libpng/zlib/jpeglib/vorbisfile/libogg etc. for i386, or do I have to modify all of their makefiles manually and make sure that -mtune is set to i386? Thanks for the help! Andy
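    For autoconf-based libraries there is usually no need to edit makefiles by hand; exporting the flags before running ./configure is enough. Note that -mtune only tunes instruction scheduling, while -march is what actually restricts code generation to the i386 instruction set. A sketch (source and install paths are placeholders):

      export CFLAGS="-march=i386 -mtune=i386"
      export CXXFLAGS="$CFLAGS"
      cd ~/src/libpng              # hypothetical unpacked source tree; repeat per library
      ./configure --prefix=$HOME/i386-libs
      make clean && make && make install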

    Read the article

  • valgrind - ignore glibc functions?

    - by Jack
    Is it possible to tell valgrind to ignore some set of libraries? Specifically the glibc libraries. Actual problem: I have some code that runs fine in normal execution - no leaks etc. When I try to run it through valgrind, I get core dumps and program restarts/stops. The core usually points to glibc functions (usually fseek, mutex etc). I understand that there might be some issue with an incompatible glibc/valgrind combination. I tried various valgrind releases and glibc versions but no luck. Any suggestions?
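    Valgrind cannot skip instrumenting a library entirely, but reports originating in it can be silenced with a suppression file. A sketch of the usual workflow (binary and file names are placeholders):

      # 1. Have valgrind emit ready-made suppression stanzas for everything it reports:
      valgrind --gen-suppressions=all --log-file=vg.log ./myprog
      # 2. Copy the { ... } stanzas that mention glibc into a file, e.g. glibc.supp
      # 3. Re-run with those reports suppressed:
      valgrind --suppressions=glibc.supp ./myprog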

    Read the article

  • Checking if an SSH tunnel is up and running

    - by Jarmund
    I have a perl script which, when distilled a bit, looks like this: my $randport = int(10000 + rand(1000)); # Random port as other scripts like this run at the same time my $localip = '192.168.100.' . ($port - 4000); # Don't ask... backwards compatibility system("ssh -NL $randport:$localip:23 root\@$ip -o ConnectTimeout=60 -i somekey &"); # create the tunnel in the background sleep 10; # Give the tunnel some time to come up # Create the telnet object my $telnet = new Net::Telnet( Timeout => 10, Host => 'localhost', Port => $randport, Telnetmode => 0, Errmode => \&fail, ); # SNIPPED... a bunch of parsing data from $telnet The thing is that the target $ip is on a link with very unpredictable bandwidth, so the tunnel might come up right away, it might take a while, or it might not come up at all. So a sleep is necessary to give the tunnel some time to get up and running. So the question is: How can I test if the tunnel is up and running? 10 seconds is a really undesirable delay if the tunnel comes up straight away. Ideally, I would like to check if it's up and continue with creating the telnet object once it is, up to a maximum of, say, 30 seconds.
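    The basic idea is to poll the forwarded local port until it accepts a connection, up to some ceiling. A shell sketch of that check (the same loop could be written in Perl with IO::Socket::INET; $RANDPORT stands for the randomly chosen port, and nc/netcat is assumed to be installed):

      # -z just checks that something is listening, without sending data.
      for i in $(seq 1 30); do
          if nc -z localhost "$RANDPORT" 2>/dev/null; then
              echo "tunnel came up after ${i}s"
              break
          fi
          sleep 1
      done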

    Read the article

  • Sockets and multithreading

    - by V0idExp
    Hi to all! I have an interesting (to me) problem... There are two threads: one captures data from standard input and sends it through a socket to the server, and another receives data from a blocking socket. So, when there's no reply from the server, the recv() call waits indefinitely, right? But instead of blocking only its calling thread, it blocks the whole process! Why does this happen?

    Read the article

  • Bash: Is it ok to use same input file as output of a piped command?

    - by Amro
    Consider something like: cat file | command > file Is this good practice? Could this overwrite the input file at the same time as we are reading it, or is it always read into memory first and then piped to the second command? Obviously I can use a temp file as an intermediary step, but I'm just wondering... t=$(mktemp) cat file | command > ${t} && mv ${t} file
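    For reference, the shell opens (and truncates) the output redirection before command ever reads the input, so the pattern is unsafe. Two common alternatives (the second assumes the moreutils package is installed):

      command < file > file.tmp && mv file.tmp file   # explicit temporary file
      command < file | sponge file                    # sponge buffers all input, then writes the file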

    Read the article

  • comparing two files and merging the data

    - by Ganz Ricanz
    I have the files below: total.txt order1,5,item1 order2,6,item2 order3,7,item3 order4,6,item4 order8,9,item8 changed.txt order3,8,item3 order8,12,item8 total.txt is the total order data and changed.txt is the recently changed data. I want to merge the recent changes with the total, and I want the output as: Output.txt order1,5,item1 order2,6,item2 order3,8,item3 order4,6,item4 order8,12,item8 Note: the 2nd column of the 3rd and 5th rows of total.txt is updated from the changed.txt file. I have used the nawk below to compare the first column, but I am not able to print the result to the output file. Please help me complete the command: nawk -F"," 'NR==FNR {a[$1]=$2;next} ($1 in a) "print??"' total.txt changed.txt
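    A working variant of that nawk idea (a sketch): read changed.txt first into an array keyed on column 1, then print each total.txt line, substituting the updated record when the key matches:

      nawk -F',' 'NR==FNR {upd[$1]=$0; next} ($1 in upd) {print upd[$1]; next} {print}' \
          changed.txt total.txt > Output.txt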

    Read the article

  • per process configurable core dump directory

    - by Hanno Stock
    Is there a way to configure the directory where core dump files are placed for a specific process? I have a daemon process written in C++ for which I would like to configure the core dump directory. Optionally, the filename pattern should be configurable too. I know about /proc/sys/kernel/core_name_format; however, this would change the pattern and directory structure globally. Apache has the CoreDumpDirectory directive, so it seems to be possible.
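    One per-process workaround relies on the default behaviour that a relative core pattern is resolved against the dumping process's current working directory, so the daemon can simply be started from (or chdir() into) the directory you want. A sketch of the launcher side (the dump directory is a placeholder):

      ulimit -c unlimited               # allow core dumps for this shell's children
      cd /var/crash/mydaemon || exit 1  # hypothetical per-daemon dump directory
      exec ./mydaemon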

    Read the article

  • How communicate with pty via minicom or screen?

    - by gscott2112
    I am trying to provide an AT/modem-like interface around some hardware. Following this post, I have the server set up a pty using openpty(). Now I can communicate with the server as expected with a client app that opens the slave and communicates via read() and write() calls. However, I would also like to be able to use either the screen command or minicom to issue commands by hand to the slave, but the server never seems to receive any data when I try to do this. Is there something significant I am missing with this approach?

    Read the article

  • How to Implement Web Based Find File Database Text Search

    - by neversaint
    I have a series of files like this: foo1.txt.gz foo2.txt.gz bar1.txt.gz ..etc.. and a tabular-format file that describes them: foo1 - Explain foo1 foo2 - Explain foo2 bar1 - Explain bar1 ..etc.. What I want to do is have a website with a simple search bar that allows people to type foo1, or just foo, and finally returns the gzipped file(s) and the explanation of the file(s). What's the best way to implement this? Sorry, I am totally new to this area.

    Read the article

  • Unmanaged Process in Mono

    - by Residuum
    I want to start a quite expensive process (jackd) from a Mono application, and do not need full access to the process from the application itself. As the process is so expensive in terms of CPU usage, a Glib.IdleHandler for polling the process will not work, as it is never executed, and the GUI becomes unresponsive. Is there any way to have my cake and eat it too in Mono? EDIT: I only need to be able to start and stop the process from Mono; I do not need information about the state of the process or whether it has exited, as my application will register itself as a client to jackd. Basically I need a "replacement" for bash's jackd &>/dev/null 2>&1 & for the System.Diagnostics.Process ;). Here is what I have so far for starting and stopping the process: public void StartJackd() { _jackd = new Process (); _jackd.StartInfo = _jackdStartup; if (_jackd.Start ()) { _jackd.EnableRaisingEvents = true; _jackd.Exited += JackdExited; } } public void StopJackd() { if (_jackd != null && !_jackd.HasExited) { _jackd.CloseMainWindow (); } } And somewhere else I have this code for registering the IdleHandler: GLib.Idle.Add(new GLib.IdleHandler(UpdateJackdConnections)); This handler fires all the time while the process is not running, but never when jackd is running.

    Read the article
