Search Results

Search found 3993 results on 160 pages for 'broken pipe'.

Page 74/160

  • adding remote ssh printer as local printer

    - by guest
    I have SSH access to a remote host (FreeBSD) that has a printer set up. I do not have root access on that host or any other special user rights. Now I want to print on that printer directly from my laptop (Ubuntu 10.10). The problem is that I don't know how to "import" the printer, as it needs authentication from my user account (print quota limitations). E-mailing myself the files I want to print or scp'ing them over every time is a pain. At the moment I pipe the PostScript output manually into an ssh command, but that's also a huge overhead. E.g. when I want to print foo.pdf:

        pdftops '/path/to/foo.pdf' - | ssh user@remotehost 'lpr -P printername'

    So, does anyone know of a smooth way to shorten this procedure? Ideally I would just want to use a printer name instead of the whole ssh command.
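
    A minimal sketch of how the existing pipeline could be wrapped so that only a short command is typed (untested; the script name, remote user/host and printer name are placeholders for whatever is already in use):

        #!/bin/sh
        # sshprint: send a PDF or PostScript file to the remote printer over SSH.
        # Usage: sshprint /path/to/file.pdf
        # Assumes key-based SSH to user@remotehost and pdftops installed locally.
        set -e
        case "$1" in
            *.pdf) pdftops "$1" - | ssh user@remotehost 'lpr -P printername' ;;
            *)     ssh user@remotehost 'lpr -P printername' < "$1" ;;
        esac

    A local CUPS queue whose backend runs the same ssh/lpr command would go one step further (printing from any application), but the script above only packages the command the asker is already using.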

    Read the article

  • How to stop basic Postfix after-queue script from BCC-ing sender?

    - by mjbraun
    I'm building a content filter for Postfix (2.9.3 package installed via apt on an Ubuntu 12.04 test VM) and I'm starting with a very basic Ruby (1.9.3) template and building up functionality. Strangely, when the script is enabled, messages sent are being forwarded on as normal, but also sent back to the sender, which is not normal. Disabling the script disables this behavior. Any suggestions about what I have to change to stop that from happening? Thanks for any advice!

    /etc/postfix/master.cf (only the lines changed from the default):

        smtp      inet  n       -       -       -       -       smtpd
          -o content_filter=dumper:dummy
        ...
        dumper    unix  -       n       n       -       10      pipe
          flags=RF user=mailuser argv=/home/mailuser/mailfilter/dumper.rb ${sender} ${recipient}

    /home/mailuser/mailfilter/dumper.rb:

        #!/usr/bin/env ruby
        require 'open3'
        dir="/home/mailuser/emails"
        logfile="maillog.log"
        message = $stdin.read
        cmd = "/usr/sbin/sendmail -G -i #{ARGV[0]} #{ARGV[1]}"
        stdin, stdouterr, wait_thr = Open3.popen2e(cmd)
        stdin.print(message)
        logfile = File.open("#{dir}/#{logfile}", 'a')
        logfile.write(stdouterr)
        stdin.close
        stdouterr.close
        exit(0)
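
    A hedged guess at the cause, based only on the command shown above: the script passes both ${sender} and ${recipient} to sendmail as positional arguments, and sendmail treats every positional argument as a recipient, so the original sender gets a copy of each message. If that is what is happening, the reinjection would need the sender as the envelope-from (-f) rather than as a recipient, roughly:

        # hypothetical corrected reinjection command (shell form of the Ruby cmd string)
        /usr/sbin/sendmail -G -i -f "$SENDER" "$RECIPIENT"

    In the Ruby template that would be cmd = "/usr/sbin/sendmail -G -i -f #{ARGV[0]} #{ARGV[1]}". This is untested against the asker's setup.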

    Read the article

  • Recursive Batch File

    - by MCZ
    I have a file that looks like this:

        head1,head2,head3,head4,head5,head6
        a11,a12,keyA,a14,a15,a16
        a21,a22,keyB,a24,a25
        a31,a32,keyC,a34
        a41,a42,keyB,a44,a44
        a51,a52,keyA,a54,a55,a56
        a61,a62,keyA,a64,a65,a66
        a71,a72,keyC,a74
        some message

    Objective: write the list of unique keys to a text file. For example, the result for the file described above should be: keyA, keyB, keyC. Here's the pseudocode I would like to implement in a batch file recur.bat:

        Read second line of inputfile
        If no key exists on the second line, return; else continue
        Append keyX to list
        FINDSTR /v keyX inputfile
        Pipe results to recur.bat

    I don't know if this is the most efficient way to do this without using an actual programming language. Any suggestions for actual batch file code?

    Read the article

  • tar - exclude certain files

    - by Alan
    I wish to tar all files in a directory and its subdirectories that do NOT end in .jpg, .bmp, .gif, or .png. So, given the following folders and files:

        foo/file.txt
        foo/file.gif
        foo/bar/file
        foo/bar/image.jpg

    I want to tar only the files file.txt and file; file.gif and image.jpg should be ignored. I would also like to maintain the folder structure. My first thought was to pipe the results of the find command, in conjunction with grep -v ".jpg|.gif|.bmp|.png", to a text file, and then use tar's include argument to feed it that list of files. However, the results of the grepped find command also contain directories (in the example above, "foo" and "foo/bar"), and when a directory is fed to tar, it includes all files in that directory, so I would end up with a tar file containing all of the files--not what I want. Is there any way to prevent find from outputting directories? Is there a far easier way to approach this?
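
    A minimal sketch of the approach described, restricting find to regular files so directories never reach tar (untested; the extensions and archive name are illustrative):

        find foo -type f ! \( -iname '*.jpg' -o -iname '*.bmp' -o -iname '*.gif' -o -iname '*.png' \) -print \
            | tar -cf archive.tar -T -

    -type f keeps directories out of the listing, and GNU tar's -T - reads the file list from stdin, so the intermediate text file isn't needed. For filenames containing spaces or newlines, find -print0 together with tar --null -T - is the safer variant; GNU tar's --exclude='*.jpg' options could also replace find entirely.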

    Read the article

  • Advice for migrating email server

    - by Chris Adams
    Hi there, I'm planning to migrate a Zimbra server with about 200 GB of data from a server hosted in an office into a datacentre, to increase uptime (we've had a couple of outages when our network here started flaking out, and we have people in other countries relying on this server too). However, I'm not sure how best to migrate the data into the data centre without rendering the connection unusable during office hours, because there's far too much to send overnight over the two-meg upstream connection we have here. I'm familiar with using tools like nice to stop a long-running process degrading machine performance - is there a simple way to throttle a connection during office hours, so the long-running transfer doesn't block the pipe, but then opens up outside of office hours to make the most of the bandwidth? I'm aware the alternative here is to simply mail a hard drive to the data centre, but I'd like to avoid doing that if I could. We're using CentOS Linux for our servers, in the office and the datacentre, so extra points for an open source Linux answer.
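
    One way to get time-based throttling with stock tools (a sketch only; the paths, host and limits are made up, and it assumes the Zimbra data can be staged with rsync): run the transfer through rsync with --bwlimit and let cron swap between a throttled daytime job and an unthrottled evening one.

        # daytime: heavily throttled (KB/s); --partial lets an interrupted run resume
        rsync -az --partial --bwlimit=50 /opt/zimbra/backup/ user@datacentre:/srv/zimbra-staging/

        # illustrative crontab entries: kill the transfer at 08:00, restart it unthrottled at 19:00
        # 0 8  * * 1-5  pkill -f 'rsync.*zimbra-staging'
        # 0 19 * * 1-5  rsync -az --partial /opt/zimbra/backup/ user@datacentre:/srv/zimbra-staging/

    trickle or tc-based traffic shaping are alternatives if the bulk of the data can't be moved with rsync.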

    Read the article

  • VMware and AltGr key results in missing characters

    - by donat
    For some odd reason VMware products hijack the AltGr key, even though I make sure that other keys are used as hotkeys to release the mouse and keyboard from the virtual machine. While this is not a problem for US keyboards, Europeans extensively use AltGr for characters such as pipe (|), at sign (@), left brace ({) and right brace (}). This seems to happen both in Windows and Linux, and I cannot seem to find a solution that works for both. :( Anyone have an idea how to fix this without the need to modify the guest OS every time? Thank you.

    Read the article

  • php-common causing conflict

    - by Cornwell
    I'm trying to install php-pdo, but it always fails because of php-common:

        bash-3.2# yum install php-pdo
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * epel: fedora-epel.mirror.lstn.net
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated
        --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo
        --> Finished Dependency Resolution
        php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems
          --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base)
        Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.

    Installing php-common I get:

        bash-3.2# yum install php-common
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * epel: fedora-epel.mirror.lstn.net
        Setting up Install Process
        Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update.
        Nothing to do

    Searched here and Google but couldn't find anything that worked.

    EDIT: Added new repositories:

        bash-3.2# yum install php-common
        Loaded plugins: fastestmirror
        Repository base is listed more than once in the configuration
        Repository addons is listed more than once in the configuration
        Repository extras is listed more than once in the configuration
        Repository centosplus is listed more than once in the configuration
        Repository contrib is listed more than once in the configuration
        Loading mirror speeds from cached hostfile
         * addons: mirror.raystedman.net
         * base: mirror.5ninesolutions.com
         * centosplus: mirror.anl.gov
         * contrib: yum.singlehop.com
         * epel: fedora-epel.mirror.lstn.net
         * extras: centos.unmeteredvps.net
         * update: mirror.team-cymru.org
        addons                  | 1.9 kB     00:00
        base                    | 1.1 kB     00:00
        centosplus              | 1.9 kB     00:00
        centosplus/primary_db   |  53 kB     00:01
        contrib                 | 1.9 kB     00:00
        contrib/primary_db      | 1.1 kB     00:00
        epel                    | 3.6 kB     00:00
        extras                  | 2.1 kB     00:00
        nginx                   | 2.5 kB     00:00
        update                  | 1.9 kB     00:00
        update/primary_db       |  84 kB     00:04
        updates                 | 1.9 kB     00:00
        Setting up Install Process
        Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update.
        Nothing to do

    And then...

        bash-3.2# yum install php-pdo
        Loaded plugins: fastestmirror
        Repository base is listed more than once in the configuration
        Repository addons is listed more than once in the configuration
        Repository extras is listed more than once in the configuration
        Repository centosplus is listed more than once in the configuration
        Repository contrib is listed more than once in the configuration
        Loading mirror speeds from cached hostfile
         * addons: mirror.raystedman.net
         * base: centos.mirror.lstn.net
         * centosplus: mirror.anl.gov
         * contrib: yum.singlehop.com
         * epel: fedora-epel.mirror.lstn.net
         * extras: centos.unmeteredvps.net
         * update: mirrors.loosefoot.com
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated
        --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo
        --> Finished Dependency Resolution
        php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems
          --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base)
        Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.
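
    A hedged diagnostic sketch (not a fix): the error says yum can see php-pdo 5.1.6-40.el5_9 in base but cannot find a php-common of exactly that version, even though some php-common is installed - which usually points at an installed php-common of a different version or from a different repository. Comparing what is installed with what the repos offer would narrow it down:

        rpm -q php-common                                  # exact installed version/release
        yum list installed php-common                      # which repo it came from
        yum list available php-common --showduplicates     # versions the enabled repos offer

    If the installed version differs from 5.1.6-40.el5_9, the "Repository ... listed more than once" warnings in the output are a likely place to start looking.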

    Read the article

  • HDMI Sound for HTPC

    - by Brent Arias
    I have (perhaps) the same problem as stated in this other "HDMI sound to HTPC" question. I tried the advice of clicking on the speaker in the system tray. I can see the HDMI audio device I want to use. That device claims to be functioning properly. But there is no sound, and it won't let me select it as the active audio device. When I click on the troubleshooter, it says that there are no speakers connected. I would think this is because my computer is unable to pipe sound through the video card (preventing the HDMI from carrying it), except that it truly claims that it has an HDMI sound device that is working correctly. So I'm not sure what is wrong at this point. Thoughts? My system is Windows 7 x64. In case it makes a difference, the video card I'm using is a GeForce GTX 560.

    Read the article

  • ntbackup workalike for ad hoc full backups in Windows 7 that's free and preferably open source

    - by Justin Dearing
    On Windows 2000 and XP machines I used to be able to do the following:

        ntbackup backup systemstate c: /f e:\backups\machineName\machineName-full+systemstate_200101206.bkf

    This gave me a full backup of the system that I could use to do a system restore, after doing a barebones OS install. Windows 7 has a great utility for regular backups with alerting and all that stuff, but it does not seem to have command line support. I'd like a backup solution for my Windows 7 systems that has the following features:

        - Is free
        - Is open source (preferably)
        - Works while the system is booted and leaves the system functional (Clonezilla is great for offline backups, and I use that too)
        - Gives me a backup that is suited for a full or partial system restore (ruling out most imaging software, even if it could work while the system is booted via some sort of shadow copy voodoo)
        - Can work via the command line

    Compression would be nice; the ability to pipe output would be better.

    Read the article

  • Is there an encrypted write-only file system for Linux?

    - by Grumbel
    I am searching for an encrypted filesystem for Linux that can be mounted in a write-only mode, by that I mean you should be able to mount it without supplying a password, yet still be able to write/append files, but neither should you be able to read the files you have written nor read the files already on the filesystem. Access to the files should only be given when the filesystem is mounted via the password. The purpose of this is to write log files or similar data that is only written, but never modified, without having the files themselves be exposed. File permissions don't help here as I want the data to be inaccessible even when the system is fully compromised. Does such a thing exist on Linux? Or if not, what would be the best alternative to create encrypted log files? My current workaround consists of simply piping the data through gpg --encrypt, which works, but is very cumbersome, as you can't easily get access to the filesystem as a whole, you have to pipe each file through gpg --decrypt manually.
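
    The gpg workaround described can at least be made asymmetric, which gives the write-only property without a mountable filesystem (a sketch; the key ID and paths are placeholders): encrypt each log to a public key whose private half never lives on the machine, so even a fully compromised host can append but not read back.

        # append-only encrypted logging via a public key; reading requires the offline private key
        some_daemon 2>&1 | gpg --encrypt --recipient logs@example.org --output /var/log/secure/app-$(date +%Y%m%d).log.gpg

        # later, on a trusted machine that holds the private key:
        gpg --decrypt app-20130101.log.gpg | less

    Mountable encrypted filesystems (eCryptfs, EncFS and the like) generally need the key just to mount, so they don't give this asymmetry; anything built on public-key crypto does, at the cost of the per-file decryption step the asker already finds cumbersome.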

    Read the article

  • Use procmail to deliver to stdout and a second server

    - by Halfgaar
    I would like a Postfix server to deliver each message to a certain transport as well as relay it to a second server. In master.cf, I have the following transport:

        zarafa    unix  -       n       n       -       10      pipe
          flags= user=vmail argv=/usr/bin/zarafa-dagent ${user}

    Because I can't get Postfix to deliver to two transports, what I probably need is a wrapper transport, using procmail maybe, that delivers to zarafa-dagent and relays to a second server (not just forwarding to an address; relaying to a second server). It can also be a script that calls sendmail or whatever, but at the moment I don't know how to proceed.
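
    A minimal wrapper-script sketch of that idea (hypothetical names throughout; the zarafa-dagent arguments and the relay path would need checking against the real setup): the pipe transport calls one script, which reads the message once and hands it to both destinations.

        #!/bin/sh
        # deliver-both.sh "$USER" - called from a master.cf pipe transport instead of zarafa-dagent
        # Reads the message from stdin, delivers it locally, then reinjects a copy for relaying.
        set -e
        msg=$(mktemp)
        trap 'rm -f "$msg"' EXIT
        cat > "$msg"
        /usr/bin/zarafa-dagent "$1" < "$msg"
        /usr/sbin/sendmail -i "$1" < "$msg"

    Note the sendmail line only reinjects into the local Postfix, which then relays according to its own routing (e.g. a relayhost or transport_maps entry pointing at the second server); sending straight to the other server would need a real SMTP client instead.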

    Read the article

  • Check if folders exist in Git repository... testing if a sub-string exists in bash with NULL as a separator

    - by Craig Francis
    I have a common git "post-receive" script for several projects, and it needs to perform different actions if an /app/ or /public/ folder exists in the root. Using:

        FOLDERS=`git ls-tree -d --name-only -z master`;

    I can see the directory listing, and I would like to use the RegExp support in bash to run something like:

        if [[ "$FOLDERS" =~ app ]]; then
            ...
        fi

    But that won't work if there was something like an "app lication" folder... I specified the "-z" option in the git "ls-tree" command so I could use the \0 (null) character as a separator, but I'm not sure how to test for that in the bash RegExp. Likewise I know there is support for specifying a particular path in the ls-tree command, and could then pipe that to "wc -l", but I'd have thought it was quicker to get a full directory listing of the root (not recursive) and then test for the 2 (or more) folders with the returned output. Possibly related to: http://stackoverflow.com/questions/7938094/git-how-to-check-which-files-exist-and-their-content-in-a-shared-bare-repos
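
    One way to let the NUL separator do the work instead of a regex (a sketch, assuming the post-receive hook runs under bash so read -d '' and process substitution are available):

        has_app=0; has_public=0
        while IFS= read -r -d '' name; do
            case "$name" in
                app)    has_app=1 ;;
                public) has_public=1 ;;
            esac
        done < <(git ls-tree -d --name-only -z master)

        if [ "$has_app" -eq 1 ]; then
            : # app-specific actions here
        fi

    Because each NUL-delimited entry is compared as a whole string, a directory called "app lication" no longer produces a false match.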

    Read the article

  • Proper way to rotate Nginx logs

    - by depesz
    I would like to achieve rotation of nginx logs that:

        - would work without any extra software (i.e. best without "logrotate")
        - would create rotated files with names based on the date

    The best approach is something like PostgreSQL has - i.e. in its log_filename config variable I can specify a strftime-style %Y-%m-%d, and it will automatically change the log on date (or time) change. Another approach, from Apache, is sending logs via a pipe to the rotatelogs program. As far as I was able to search, no such approach exists for nginx. All I can do is use logrotate with the dateext option, but it has its own set of drawbacks, and I'd rather use something that works like |rotatelogs or log_filename in PostgreSQL.
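
    A sketch of the usual no-extra-software workaround (assuming a stock nginx, which reopens its log files when it receives SIGUSR1; the paths are illustrative): rename the log to a dated name and ask nginx to reopen it, e.g. from cron at the top of every hour.

        # run from cron, as root
        mv /var/log/nginx/access.log /var/log/nginx/access.log.$(date +%Y%m%d%H)
        kill -USR1 "$(cat /var/run/nginx.pid)"    # nginx reopens access.log

    This still isn't log_filename-style naming inside nginx itself, but it needs nothing beyond mv, date and cron, and no log lines are lost because nginx keeps writing to the renamed file until the USR1 arrives.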

    Read the article

  • Piping the output of a program to Preview.app

    - by Abhay Buch
    I'm using an application (the dot program of the graphviz library) that generates a wide variety of file formats including PostScript and PDF. It can send the result to stdout or to a file. I'm currently sending it to a file and opening it with Preview. Is there any way to pipe the output and have it be read by Preview, so that I don't have to generate a file and have it lying around? This is going to be used by a number of people who won't know the internal structure of the generating script, and I don't want to clutter their folders or complicate their lives. More generally, is there any way to take a program that sends its output to stdout and pass that output to a program that usually takes its input from a file, without actually creating a file?
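
    A sketch that at least keeps files out of everyone's working folders (assuming macOS's mktemp and open, and dot producing PDF; the input filename is illustrative): write to a throwaway location under /tmp and open that.

        tmpdir=$(mktemp -d /tmp/dotpreview-XXXXXX)
        dot -Tpdf -o "$tmpdir/graph.pdf" input.dot && open -a Preview "$tmpdir/graph.pdf"

    For the general stdout-to-file-consumer case, bash process substitution (consumer <(producer)) works for many command-line consumers, but GUI applications like Preview generally expect a real, seekable file path, which is why the temp-file route is used here.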

    Read the article

  • looping .mpeg dump

    - by Matt Cook
    Need to dump an MPEG2 file in a loop, either to stdout or a named pipe. This works:

        $ { while : ; do cat myLoop.mpg; done; } | vlc -

    This works on a text file containing "1234\n":

        $ mkfifo myPipe
        $ cat test.txt > myPipe & < myPipe tee -a myPipe | cat -

    (it correctly loops, outputting "1234" on every line). Why does the following NOT work?

        $ cat myLoop.mpg > myPipe & < myPipe tee -a myPipe | vlc myPipe

    I'm primarily interested in re-writing the first statement to remove the improper "cat myLoop.mpg" statement. Will be inputting into VLC, or into FFmpeg and then piped into VLC.

    Read the article

  • Problem accessing MICROSOFT##SSEE database (Error: 18456, Severity: 14, State: 16.)

    - by Philipp Schmid
    After an unexpected server shutdown due to a power failure, I can no longer connect to the internal Windows database MICROSOFT##SSEE which is hosting Central Admin for my SBS 2008 server. The log shows:

        Error: 18456, Severity: 14, State: 16.
        Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. [CLIENT: <named pipe>]

    I've tried to connect using SQL Server Management Studio (connecting to \\.\pipe\mssql$microsoft##ssee\sql\query) but no luck. The SQL Server Configuration Manager doesn't show an entry for 'Protocols for MICROSOFT##SSEE' (but shows it for 2 other databases hosted on the same SQL Server 2005 Express edition). I have tried to restore the master.mdf and mastlog.ldf files from a backup, but the issue persists.

    Read the article

  • trigger script on postfix delivery errors

    - by edovino
    I'm trying to get Postfix to run a script on soft (4xx) and hard (5xx) delivery errors, but I'm not sure where to start. If I understand things correctly, I could insert (pipe-based) filters in the master.cf file; there's a whole 'milter' infrastructure available; and finally I suppose I could simply grep through the mail.info logs. So - any advice? Should I go the 'handle it via master.cf' route, and if so, what daemon should I intercept? 'bounce'? The grep-the-logs route is probably simplest, but I can't help but feel that there is a better way. Any advice appreciated!

    Read the article

  • Syntax for piping varnish logs to rotatelogs

    - by jetboy
    Ubuntu 12.04 Server x64, Varnish 3.0.2. I'm trying to pipe varnishncsa's logs through Apache's rotatelogs and, running from the shell, things work fine:

        sudo varnishncsa -a -P /var/run/varnishncsa/varnishncsa.pid | /usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600

    creates a new logfile in /var/log/varnish, with rotation every hour (3600 seconds). However, I'm struggling to get things working the same way inside /etc/init.d/varnishncsa:

        PATH=/sbin:/bin:/usr/sbin:/usr/bin
        DAEMON=/usr/bin/$NAME
        PIDFILE=/var/run/$NAME/$NAME.pid
        LOGFILE=/var/log/varnish/varnishncsa.log
        USER=varnishlog
        DAEMON_OPTS="-a -P ${PIDFILE}"
        DAEMON_PIPE="|/usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600"
        ...
        start_varnishncsa() {
            output=$(/bin/tempfile -s.varnish)
            log_daemon_msg "Starting $DESC" "$NAME"
            create_pid_directory
            if start-stop-daemon --start --verbose --pidfile ${PIDFILE} \
                --chuid $USER --exec ${DAEMON} -- ${DAEMON_OPTS} \
                > ${output} 2>&1; then
                log_end_msg 0
            else
                log_end_msg 1
                cat $output
                exit 1
            fi
            rm $output
        }

    Where should I put DAEMON_PIPE in the above code? I've tried at the end of:

        if start-stop-daemon --start --verbose --pidfile ${PIDFILE}

    which is where additional command line parameters usually go, but it isn't creating a logfile.
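
    A hedged sketch of one common workaround (untested here): start-stop-daemon executes a single program and never hands its arguments to a shell, so a pipeline stored in DAEMON_PIPE is passed to varnishncsa as literal arguments rather than interpreted. Pointing --exec at a tiny wrapper that contains the pipeline sidesteps that.

        #!/bin/sh
        # /usr/local/sbin/varnishncsa-piped (hypothetical path), started by start-stop-daemon
        /usr/bin/varnishncsa -a -P /var/run/varnishncsa/varnishncsa.pid \
            | /usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600

    DAEMON would then point at the wrapper and DAEMON_PIPE would disappear; PID-file handling still needs care, since the PID written by varnishncsa no longer matches the process start-stop-daemon launched.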

    Read the article

  • dpkg -S not showing all files in package

    - by dimadima
    I've been using dpkg -S <package_name> to list the contents of a package. Sometimes I pipe to grep bin to quickly scan for executables. I just ran into a case where this didn't work out for me:

        $ which virtualenv
        $ sudo apt-get install python-virtualenv
        Reading package lists... Done
        ...
        Setting up python-virtualenv (1.7.1.2-1) ...
        $ which virtualenv
        /usr/bin/virtualenv
        $ dpkg -S /usr/bin/virtualenv
        python-virtualenv: /usr/bin/virtualenv
        $ dpkg -S python-virtualenv | grep bin
        $

    /usr/bin/virtualenv seems to be provided by python-virtualenv, but isn't listed in the package contents provided by dpkg -S. All the while, passing /usr/bin/virtualenv to dpkg -S returns that the file comes from python-virtualenv. Can you all explain this?
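
    For what it's worth, a note on the two dpkg modes involved (worth double-checking against the dpkg man page): -S searches all installed packages for paths matching a pattern, while -L is the option that lists a named package's contents.

        dpkg -L python-virtualenv | grep bin     # files installed by the package
        dpkg -S /usr/bin/virtualenv              # which package owns this path

    On that reading, dpkg -S python-virtualenv only returns installed paths that happen to contain the string "python-virtualenv", which is why the binary in /usr/bin never shows up.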

    Read the article

  • Is there an encrypted write-only file system for Linux?

    - by Grumbel
    I am searching for an encrypted file system for Linux that can be mounted in a write-only mode, by that I mean you should be able to write/append files, but not be able to read the files you have written. Access to the files should only be given when the filesystem is mounted via a password. The purpose of this is to write log files and such, without having the log files themselves be accessible. Does such a thing exist on Linux? Or if not, what would be the best alternative to create encrypted log files? My current workaround consists of simply piping the data through gpg --encrypt, which works, but is very cumbersome, as you can't get easy access to the file system as a whole, you have to pipe each file through gpg --decrypt manually.

    Read the article

  • Failing SSHFS connection drags down the system

    - by skerit
    From time to time my sshfs mount fails. All programs using the mount freeze when it happens. I can't even ls anything or use Nautilus. Is there a way to find out what the cause is and how to handle it? I've noticed regular SSH sessions to the server get their fair share of "Write failed: broken pipe" disconnects, too. If I wait long enough (and I'm talking about 20-ish minutes here) it will auto-reconnect and things start working again.
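
    A sketch of the mount options usually suggested for this situation (worth verifying against the installed sshfs version): keep the underlying SSH session alive and let sshfs reconnect instead of hanging on a dead connection.

        sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 user@server:/remote/path /mnt/point

    ServerAliveInterval and ServerAliveCountMax are passed through to ssh and make it declare the connection dead after roughly 45 seconds of silence rather than waiting on long TCP timeouts, and -o reconnect lets sshfs re-establish the session, so blocked processes should unblock far sooner than the 20-ish minutes described.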

    Read the article

  • Checking version of Applications installed in ~/Applications with unknown username

    - by ridogi
    I'd like to check the version of Firefox on all managed computers through Apple Remote Desktop. I have written this, but it only checks for Firefox in /Applications:

        /bin/cat /Applications/Firefox.app/Contents/Info.plist | grep -A 1 CFBundleShortVersionString | grep string | sed 's/[/]//' | sed 's/<string>//g'

    For standard users, Firefox auto-update breaks if it is in /Applications, so I instead have it installed in ~/Applications. I'd like to check that copy (if it exists), but I can't specify the path in the command since it is unique to each computer. For example:

        /Users/jon/Applications/Firefox.app
        /Users/arya/Applications/Firefox.app

    Presumably I want to use find and pipe the result to my command. This should work for 10.6 through 10.8.
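
    A sketch of the find-based variant (assuming the ARD task runs as root so every user's home is readable; defaults read is used here instead of the cat/grep pipeline, which I believe behaves the same on 10.6 through 10.8 but is worth testing):

        find /Users/*/Applications /Applications -maxdepth 1 -name Firefox.app 2>/dev/null | while read -r app; do
            ver=$(defaults read "$app/Contents/Info" CFBundleShortVersionString 2>/dev/null)
            echo "$app: ${ver:-unreadable}"
        done

    defaults read takes the plist path without the .plist extension; if that misbehaves on some release, the original grep/sed pipeline can be dropped back in on the ver= line.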

    Read the article

  • Database for heat tolerances of various cables?

    - by I. F.
    Is there any kind of unified database of heat tolerances for networking cables? I've been setting up a number of home/small office networks lately and as a mostly-amateur I could really use some information on what is safe to run behind a radiator, next to a steam pipe, etc. The question I'm up against at the moment is: Can I run normal RJ11 phone line cable (from DSL modem to phone jack) behind a steam radiator without risking a fire? Unlike cat5, I could not find published standards for these, so I'm turning to experts with more experience. This is a cut-rate show. Do I go out and buy more cabling, and if so which, or use the spare that I have?

    Read the article

  • Pass parameters to a script securely

    - by codeholic
    What is the best way to pass parameters to a forked script securely? E.g. passing parameters through command line operands is not secure, since someone who has an account on the host can run ps and see them. An unnamed pipe is quite secure, as far as I understand, isn't it? I mean, passing parameters to STDIN of the forked process. What about passing parameters in environment vars - is that secure? What about passing parameters by other means I didn't mention?
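
    A minimal sketch of the stdin approach mentioned (names are illustrative): the parent writes the secret to the child's standard input, so it never appears in the child's argv and never shows up in ps output.

        # parent
        printf '%s\n' "$secret_value" | ./child.sh

        # child.sh
        IFS= read -r secret    # read one line from stdin rather than from $1

    Environment variables sit in between: on Linux, /proc/<pid>/environ is readable only by the process owner and root, so other unprivileged users cannot list them the way they can list command line arguments, but the variables are inherited by everything the child later spawns unless it scrubs its environment.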

    Read the article

  • I/O redirection using cygwin and mingw

    - by KLee1
    I have written a program in C and have compiled it using MinGW. When I try to run that program in Cygwin, it seems to behave normally (i.e. prints correct output etc.). However, I'm trying to pipe output to a program so that I can parse information from the program's output. The piping does not seem to be working, in that I am not getting any input into the second program. I have confirmed this by using the following commands. This command seems to work fine:

        ./prog

    Performing this command returns nothing:

        ./prog | cat

    This command verifies the first:

        ./prog | wc

    which returns:

        0       0       0

    I know that the script (including the piping from the program) works perfectly fine in an all-Linux environment. Does anyone have any idea why the piping isn't working in Cygwin? Thanks!

    Read the article
