Search Results

Search found 2047 results on 82 pages for 'the dude man'.


  • ffmpeg conversion problem

    - by user33126
    I installed ffmpeg and it reports its version correctly, but even the info command, ffmpeg -i Alice_In_Wonderland.mp4, gives a message like this:

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
        configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --extra-cflags=-fPIC --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab
        libavutil     49.15. 0 / 49.15. 0
        libavcodec    52.20. 0 / 52.20. 0
        libavformat   52.31. 0 / 52.31. 0
        libavdevice   52. 1. 0 / 52. 1. 0
        libswscale     0. 7. 1 /  0. 7. 1
        libpostproc   51. 2. 0 / 51. 2. 0
        built on Nov 6 2009 19:11:04, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46)
        Seems stream 1 codec frame rate differs from container frame rate: 49.93 (9986/200) -> 49.92 (599/12)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Alice_In_Wonderland.mp4':
          Duration: 00:01:39.65, start: 0.000000, bitrate: 542 kb/s
            Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16
            Stream #0.1(und): Video: h264, yuv420p, 480x270, 49.92 tbr, 24.96 tbn, 49.93 tbc
        At least one output file must be specified

    Please tell me what the problem is.
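    For what it's worth, the last line of that output is ffmpeg's way of saying that only an input was given. A minimal sketch of a conversion follows; the output name and container are just an example, not taken from the question:

        # ffmpeg prints the banner and stream info, then stops because no output file
        # was named; naming one makes it actually convert
        ffmpeg -i Alice_In_Wonderland.mp4 Alice_In_Wonderland.avi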

    Read the article

  • mount error 5 = Input/output error

    - by alharaka
    I am running out of ideas. After a long period of testing this morning, I cannot seem to get this to work, and I have no idea why. I want to mount a Windows SMB/CIFS share from a Debian 5.0.4 VM, and it is not cooperating. This is the command I am using:

        debianvm:/home/me# whoami
        root
        debianvm:/home/me# smbclient --version
        Version 3.2.5
        debianvm:/home/me# mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share --verbose -o user=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD/username
        mount.cifs kernel mount options: unc=//hostname.domain.tld\share,ip=10.212.15.53,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,ver=1,rw,user=username,pass=*********
        mount error 5 = Input/output error
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        debianvm:/home/me#

    The word on the nets has not been very specific, and unfortunately it is almost always environment-specific. I receive no authentication errors. I have tried mount -t smbfs and mount -t cifs, along with smbmount and such, and I get the same error each time. I doubt it is a problem with DNS resolution, because logging shows the correct IP address. dmesg | tail -f no longer shows authentication errors once I format the domain and username accordingly. I have played a little with iocharset=utf8, file_mode, and dir_mode as described here. That did not help either. I have also tried ntlm and ntlmv2, assuming it might be a minimum-auth-method problem, but even without forcing sec=ntlmv2 it authenticates without errors. smbclient -L hostname.domain.tld -W SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD -U username correctly lists all the shares and shows the following:

        Domain=[SUBADDOMAIN] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]

            Sharename       Type      Comment
            ---------       ----      -------
            IPC$            IPC       Remote IPC
            ETC$            Disk      Remote Administration
            C$              Disk      Remote Administration
            Share           Disk

        Connection to hostname.domain.tld failed (Error NT_STATUS_CONNECTION_REFUSED)
        NetBIOS over TCP disabled -- no workgroup available

    I find the last line intriguing/alarming. Does anyone have any pointers!? Maybe I misread the effin manual.
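    For comparison, a hedged sketch of the same mount with the credentials moved into a file; the file path is made up, and sec=ntlmv2 is only a guess since the question suggests authentication itself is fine:

        # a credentials file keeps the DOMAIN/username quoting out of the option string
        cat > /root/.smbcred <<'EOF'
        username=username
        password=secret
        domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD
        EOF
        chmod 600 /root/.smbcred
        mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share \
            -o credentials=/root/.smbcred,sec=ntlmv2 --verbose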

    Read the article

  • Configuring dnsmasq to handle mx records on pfsense 2.0.1

    - by Bob B.
    I know from dnsmasq's man page that it is capable of handling MX records, but I can't seem to find anything in pfsense's web GUI, or anywhere online, that talks about how to include MX records. I'm running pfsense 2.0.1 on a turnkey hardware appliance and I have root shell access. I would prefer not to move away from the DNS Forwarder/dnsmasq if I can help it. I've searched for a dnsmasq.conf file, but none exists: pfsense handles everything through a centralized XML config file. That file merely opens the dnsmasq section with its tag, then drops immediately into listings for each host override you define. My understanding of pfsense's implementation: in the GUI, you can only define an override using the host, domain, IP and description. In the XML that translates to:

        <hosts>
          <host>foo</host>
          <domain>foo.com</domain>
          <ip>127.0.0.1</ip>
          <descr/>
        </hosts>

    The above example results in foo.foo.com resolving to 127.0.0.1, for instance. But that's it. There is no ability to select a record type with which to define things like MX. Has anyone had any luck with this? Thank you for any insights you might have.
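    For reference, dnsmasq's own syntax for an MX record is the mx-host option. A sketch of what the raw directive would look like, assuming pfsense gave you somewhere to drop custom dnsmasq options (the host names and preference below are invented):

        # mx-host=<mail domain>,<mail server>,<preference>
        mx-host=foo.com,mail.foo.com,10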

    Read the article

  • Upstart multiple instances of service not working

    - by Dax
    I started playing with MongoDB on Lucid. Now I would like to run a DB server and a config server on the same box. They both use the same binary to launch, but with different config files and running on different ports. All directories for log and lib are split, so one set goes to mongodb and the other to mongoconf. Each process can be started without any problems on its own:

        start mongodb
        stop mongodb
        start mongoconf
        stop mongoconf

    But if I try to start both, the second one just starts and exits. Using 'initctl log-priority debug' I got the following in the logs:

        Jan 6 12:44:12 mongo4 init: event_finished: Finished started event
        Jan 6 12:44:12 mongo4 init: job_process_handler: Ignored event 1 (1) for process 5690
        Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) main process (5690) terminated with status 1
        Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) goal changed from start to stop
        Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) state changed from running to stopping

    man 5 init shows that you can use instance names to differentiate the two. I tried using 'instance mongoconf' in one upstart script and 'instance mongodb' in the other, and it still fails. I can manually start the other process, so there is definitely no conflict on port numbers or directories. Any ideas on what to try, or how to get output on why it is 'terminated with status 1'? Thanks
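    For what it's worth, the instance stanza is normally used inside a single job file rather than spread across two separate ones. A hedged sketch, with the job name, variable and paths all made up:

        # /etc/init/mongo.conf -- one job definition, started once per role
        description "mongod instances"
        instance $ROLE
        respawn
        exec /usr/bin/mongod --config /etc/mongodb-$ROLE.conf

    which would then be driven with start mongo ROLE=db and start mongo ROLE=conf.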

    Read the article

  • Postfix misconfigured? 550 Sender rejected from receiving server

    - by wnstnsmth
    We use Postfix on our CentOS 6 machine with the following configuration. We use PHP's mail() function to send rudimentary password-reset emails, but there is a problem. As you will see, mydomain and myhostname are set correctly, afaik.

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = ***.ch
        myhostname = test.***.ch
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550

    This is what shows up in Postfix's /var/log/maillog upon sending an email to ***.***@***.ch, with ***.ch being the same domain our sending server test.***.ch is on:

        Dec 13 16:55:06 R12X0210 postfix/pickup[6831]: E6D6311406AB: uid=48 from=<apache>
        Dec 13 16:55:06 R12X0210 postfix/cleanup[6839]: E6D6311406AB: message-id=<20121213155506.E6D6311406AB@test.***.ch>
        Dec 13 16:55:07 R12X0210 postfix/qmgr[6832]: E6D6311406AB: from=<apache@test.***.ch>, size=1276, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/smtp[6841]: E6D6311406AB: to=<***.***@***.ch>, relay=mail.***.ch[**.**.249.3]:25, delay=46, delays=0.18/0/21/24, dsn=5.0.0, status=bounced (host mail.***.ch[**.**.249.3] said: 550 Sender Rejected (in reply to RCPT TO command))
        Dec 13 16:55:52 R12X0210 postfix/cleanup[6839]: 8562C11406AC: message-id=<20121213155552.8562C11406AC@test.***.ch>
        Dec 13 16:55:52 R12X0210 postfix/bounce[6848]: E6D6311406AB: sender non-delivery notification: 8562C11406AC
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: from=<>, size=3065, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: E6D6311406AB: removed
        Dec 13 16:55:52 R12X0210 postfix/local[6850]: 8562C11406AC: to=<root@test.***.ch>, orig_to=<apache@test.***.ch>, relay=local, delay=0.13, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: removed

    So the receiving server rejects the sender (the fourth log line). We have tested with one other recipient and it worked, so this problem might be completely unrelated to our settings and related to the recipient instead. Still, with this question, I want to make sure we're not making an obvious misconfiguration on our side.
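    One low-effort check worth noting here, with the sender address below being a placeholder: send the same message with an explicit, routable envelope sender instead of the default apache@test.***.ch and see whether mail.***.ch still answers 550. That separates a sender-address policy on the receiving side from anything in the Postfix config above.

        # -f sets the envelope sender the remote server sees at MAIL FROM time
        printf 'Subject: test\n\ntest body\n' | sendmail -f noreply@***.ch ***.***@***.ch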

    Read the article

  • Postfix SMTP-relay server against Gmail on CentOS 6.4

    - by Alex
    I'm currently trying to set up an SMTP relay to Gmail with Postfix on a CentOS 6.4 machine, so I can send e-mails from my PHP scripts. I followed this tutorial, but I get this error output when trying to do a sendmail [email protected]:

        tail -f /var/log/maillog
        Apr 16 01:25:54 ext-server-dev01 postfix/cleanup[3646]: 86C2D3C05B0: message-id=<[email protected]>
        Apr 16 01:25:54 ext-server-dev01 postfix/qmgr[3643]: 86C2D3C05B0: from=<[email protected]>, size=297, nrcpt=1 (queue active)
        Apr 16 01:25:56 ext-server-dev01 postfix/smtp[3648]: 86C2D3C05B0: to=<[email protected]>, relay=smtp.gmail.com[173.194.79.108]:587, delay=4.8, delays=3.1/0.04/1.5/0.23, dsn=5.5.1, status=bounced (host smtp.gmail.com[173.194.79.108] said: 530-5.5.1 Authentication Required. Learn more at 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257 qh4sm3305629pac.8 - gsmtp (in reply to MAIL FROM command))

    Here is my main.cf configuration; I tried a number of different options but nothing seems to work:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = ipv4
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = host.local.domain
        myorigin = $myhostname
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relayhost = [smtp.gmail.com]:587
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_type = cyrus
        smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
        smtp_use_tls = yes
        smtpd_sasl_path = smtpd
        unknown_local_recipient_reject_code = 550

    In the /etc/postfix/sasl_passwd files (sasl_passwd & sasl_passwd.db) I have the following (I removed the real password and replaced it with "password"):

        [smtp.google.com]:587 [email protected]:password

    To create the sasl_passwd.db file, I ran this command:

        postmap hash:/etc/postfix/sasl_passwd

    Does anybody have an idea why I can't seem to send an e-mail from the server? Kind regards, Alex
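    Two details in the question seem worth lining up (values are copied from above, with the password obviously a placeholder): the sasl_passwd entry shown is keyed on smtp.google.com while relayhost points at smtp.gmail.com, and the password lookup is keyed on the relayhost destination. A sketch of the usual pairing:

        # /etc/postfix/sasl_passwd -- the key should match relayhost
        # [smtp.gmail.com]:587    [email protected]:password

        # rebuild the hash and pick up the change
        postmap hash:/etc/postfix/sasl_passwd
        postfix reload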

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to back up to an appliance located in a secure office far away from the server room. Offsite backups to external hard drives could be managed from the appliance. The benefits of such a setup would be:

    1) It is very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance.
    2) Offsite backups could be managed by multiple trusted people without granting access to the server room.
    3) Windows Imaging provides poor man's deduplication, so each backup volume can contain a decent backup history.

    I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low- to medium-cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article

  • Copy UNC network path (not drive letter) for paths on mapped drives from Windows Explorer

    - by Ernest Mueller
    I frequently want to share network paths to files with other folks on my team via email or chat. We have a lot of mapped drives here, both ones we set up ourselves and ones set up by our IT overlords. What I'd like to be able to do is to copy the full real path (not the drive letter) from Windows Explorer to send to folks. Example: I have a file in my "Q:" drive, \\cartman\users\emueller, and I want to send a link to the file foo.doc therein to coworkers. When I copy the file path (shift+right click, "copy as path") it gets the file name "Q:\foo.doc". This is unhelpful to others, who would need to see \\cartman\users\emueller\foo.doc to be able to consume the link. In Explorer it clearly knows it - in the address bar I see "Computer - emueller (\\cartman\users) (Q:) -". Is there a way to say "hey man copy that path as text with the \\cartman\users\emueller not the Q: in it?" I know I could just set up mapped network locations instead of the mapped drives for the ones that I set up personally and avoid this problem, but most of the mapped drives like the "users" share come from our IT policy. I could just make a separate network location and then ignore my Q: drive but that's inconvenient (and they do it so they can move accounts across servers). Sure my emailed path might eventually break because I'm losing the drive letter indirection but that's OK with me.

    Read the article

  • Selecting Interface for SSH Port Forwarding

    - by Eric Pruitt
    I have a server that we'll call hub-server.tld with three IP addresses: 100.200.130.121, 100.200.130.122, and 100.200.130.123. I have three different machines that are behind a firewall, and I want to use SSH to port forward one machine to each IP address. For example: machine-one should listen for SSH on port 22 on 100.200.130.121, while machine-two should do the same on 100.200.130.122, and so on for different services on ports that may be the same across all of the machines. The SSH man page lists -R [bind_address:]port:host:hostport, and I have gateway ports enabled, but when using -R with a specific IP address, the server still listens on the port across all interfaces:

        machine-one:
        # ssh -NR 100.200.130.121:22:localhost:22 [email protected]

        hub-server.tld (listens for SSH on port 2222):
        # netstat -tan | grep LISTEN
        tcp        0      0 100.200.130.121:2222        0.0.0.0:*          LISTEN
        tcp        0      0 :::22                       :::*               LISTEN
        tcp        0      0 :::80                       :::*               LISTEN

    Is there a way to make SSH forward only connections on a specific IP address to machine-one, so I can listen on port 22 of the other IP addresses at the same time, or will I have to do something with iptables? Here are all the lines in my sshd config that are not comments/defaults:

        Port 2222
        Protocol 2
        SyslogFacility AUTHPRIV
        PasswordAuthentication yes
        ChallengeResponseAuthentication no
        GSSAPIAuthentication no
        GSSAPICleanupCredentials no
        UsePAM yes
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL
        AllowTcpForwarding yes
        GatewayPorts yes
        X11Forwarding yes
        ClientAliveInterval 30
        ClientAliveCountMax 1000000
        UseDNS no
        Subsystem sftp /usr/libexec/openssh/sftp-server
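    A hedged sketch of what usually makes the bind address in -R stick: sshd's GatewayPorts setting has a third value, clientspecified, which lets the client pick the listen address, whereas plain yes binds remote forwards to the wildcard address.

        # on hub-server.tld, in sshd_config (then reload sshd):
        #   GatewayPorts clientspecified
        # from machine-one, the same forward as in the question should then bind
        # only to 100.200.130.121:
        ssh -NR 100.200.130.121:22:localhost:22 [email protected]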

    Read the article

  • How do I install php 5.3 on CentOS?

    - by fivelitresofsoda
    Hi, I have to install PHP 5.3 on my CentOS server. If I do yum install php, the base repo installs 5.1.6, which is too old for the apps I need to install. So I've been trying to use the IUS repository, following the official instructions from IUS:

        root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-2.ius.el5.noarch.rpm
        root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        root@linuxbox ~]# rpm -Uvh ius-release*.rpm epel-release*.rpm

    OK. Now I simply do yum install php53, etc., for everything I need... but I get this error:

        Running rpm_check_debug
        Running Transaction Test
        Finished Transaction Test
        Transaction Check Error:
          file /usr/bin/php from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/bin/php-cgi from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/share/man/man1/php.1.gz from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /etc/php.ini from install of php53u-common-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-common-5.1.6-27.el5_5.3.x86_64

        Error Summary
        -------------

    I have no idea how to solve this. I think I have to remove the base packages, but as a Linux noob I don't know how to do that. Please help. Thank you.
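    Since the error is just a file conflict between the stock php-* packages and the IUS php53u-* ones, the usual way out is to let yum remove the old set first. A hedged sketch using the package names from the error message; check what the remove wants to drag out with it before confirming:

        # remove the conflicting 5.1.6 packages, then install the IUS equivalents
        yum remove php php-cli php-common
        yum install php53u php53u-cli php53u-common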

    Read the article

  • How do I batch-downsize images on linux, while keeping small images small?

    - by Gabriel
    I have a whole lot of photos and it's time to clean up the mess and free some disk space. I know mogrify is great for batch-resizing things down. The problem is, in some directories I have small images mixed in with the big ones. I'd like to batch-downsize all the big ones but not upsize the small ones. As an example, I have a directory with tens-of-MB pictures around 3000x2000. Some of them I have already downsized so I could email them; they may be 1024x768. I'd like to downsize the big ones to 1600x1200, a disk-space-to-quality tradeoff I like. But then, with mogrify or convert, the small ones will be upsized, which would be a waste of disk space. I found some tricky ways to use identify with cut and some scripting to filter the small pics out and mogrify the others, but man, there's got to be a way to tell mogrify not to upsize my pics... How? Is there some other tool better suited?
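    For the record, ImageMagick's geometry syntax already covers this: a trailing > means "shrink only if larger", so no identify/cut filtering is needed. A small sketch, with the *.jpg pattern being just an example; quote the geometry so the shell doesn't treat > as a redirect:

        # downsizes anything larger than 1600x1200, leaves smaller images untouched
        mogrify -resize '1600x1200>' *.jpg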

    Read the article

  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: secure LDAP queries via the command line and PHP to an AD domain controller with a self-signed certificate.

    Background: I am working on a project where I need to enable LDAP look-ups from a PHP web application to an MS AD domain controller that is using a self-signed certificate. This self-signed certificate is also using a domain name that is not a FQDN - think of something like people.campus as the domain name. The web application would take the user's credentials and pass them on to the AD domain controller to verify whether they match or not. This seems simple, but I am having problems trying to get PHP and the self-signed certificate to work. Some people have suggested that I change the TLS_REQCERT variable from "request" to "never" within the OpenLDAP configuration. I am concerned that this might have larger implications, such as a man-in-the-middle attack, and I am not comfortable changing this setting to never. I have also read in some places online that one can take a certificate and place it as a trusted source within the openldap configuration file. I am curious whether that is something I could do for the situation that I have. Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have openldap use that file for the trust that it needs, so that I do not need to adjust the variable from request to never? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need to use? Is there an alternate method of handling this issue that would be better in the long run? I have some CentOS servers and some Ubuntu servers that I am working with to try and get this going. Thanks in advance for your help and ideas.
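    A hedged sketch of the grab-it-over-the-wire approach; the controller's hostname and the output path are placeholders, and this trusts whatever certificate the server presents at the moment you fetch it, so verify it out of band if you can:

        # pull the certificate the DC presents on the LDAPS port and save it as PEM
        openssl s_client -connect dc.people.campus:636 -showcerts </dev/null \
            | openssl x509 > /etc/openldap/cacerts/ad-dc.pem
        # then point OpenLDAP at it in /etc/openldap/ldap.conf:
        #   TLS_CACERT /etc/openldap/cacerts/ad-dc.pem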

    Read the article

  • Is there any way to force my Linux box to always boot up with a self-assigned IP address?

    - by Jeremy Friesner
    This is perhaps an unusual request: I'm trying to get a Debian Linux box to always give itself a self-assigned IP address (i.e. 169.254.x.y) on boot. In particular, I want it to do that even when there is a DHCP server present on the LAN. That is, it should not request an IP address from the DHCP server. From what I can see in the "man interfaces" text, there is an option for "manual", and an option for "dhcp". Manual assignment won't do, since I need multiple boxes to work on the same LAN without requiring any manual configuration... and "dhcp" does what I want, but only if there is no DHCP server on the LAN. (A requirement is that the functionality of these boxes should not be affected by the presence or absence of a DHCP server). Is there a trick that I can use to get this behavior? EDIT: By "no manual configuration", I mean that I should be able to take this box (headless) to any LAN anywhere, plug in the Ethernet cable, and have it do its thing. I shouldn't have to ssh to the box and edit files to get it working each time it is moved to a different LAN.
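    A hedged sketch of one way this is commonly done on Debian, assuming the avahi-autoipd package is installed (it supplies a link-local "ipv4ll" method for /etc/network/interfaces; the interface name is an example):

        # /etc/network/interfaces -- link-local address only, no DHCP request at all
        auto eth0
        iface eth0 inet ipv4ll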

    Read the article

  • Make reading more comfortable for the eyes

    - by Shiki
    First, I read the topics about displays. Sadly the "BenQ FP241WZ" is a no-go; at ~715 EUR it's way too much. I need some ideas about how I could make reading less tiring. Basically I didn't have this problem back then, but now I'm reading some books and also have to read a lot each day. (A LOT.) I look like some hardcore 0-24 gamer when I "finish" :). Think about things like... background color (I've read that a 'dark yellow' background color + black tint helps), font size, fonts (!), ClearType settings (should it be off?), and so on. Display: BenQ E2200 HD (yeah, cheap, eek, etc. A poor man's LCD. :)) My CRT display is far away at the minute, so that is out of the question. Also, my ThinkPad is here (T500), but I don't know about its display. It comes with a 1280x800 resolution and that's all I know (you can search back from that to the FRU number, but I couldn't find it now). What could I do? (Or basically, what should everyone in such a situation do?)

    Read the article

  • 3 simple questions about file permissions

    - by Camran
    1- I wonder, is this a good setup of permissions in the /var directory?

        drwxr-xr-x  2 root root  4096 2010-05-30 03:34 backups
        drwxr-xr-x  7 root root  4096 2010-05-29 17:55 cache
        drwxr-xr-x 29 root root  4096 2010-05-29 17:55 lib
        drwxrwsr-x  2 root staff 4096 2009-07-14 04:36 local
        drwxrwxrwt  3 root root    60 2010-06-02 03:34 lock
        drwxr-xr-x  9 root root  4096 2010-06-02 03:34 log
        drwxrwsr-x  2 root man   4096 2009-09-20 20:36 mail
        drwxr-xr-x  2 root root  4096 2009-09-20 20:36 opt
        drwxrwxrwt 12 root root   420 2010-06-02 12:12 run
        drwxr-xr-x  4 root root  4096 2009-09-20 20:37 spool
        drwxrwxrwt  2 root root  4096 2009-07-14 04:36 tmp
        drwxr-xr-x 14 user root  4096 2010-05-30 22:21 www

    2- Could you give me a brief explanation of the columns above? The first one is the permissions. The second is a number. The third and fourth say "root root", for example. The fifth is another number (4096, for example), and the others are obvious.

    3- Could you give me a brief explanation of the folders above? Especially the "lock" and "tmp" folders. Lock contains an apache2 folder which seems empty. Thanks

    Read the article

  • Mail not piping in postfix

    - by user220912
    I have set up a postfix server and wanted to test piping mail to my script, where I can make use of it and filter the mails. I wrote a test script that just logs the information to a txt file, but I don't see any changes when sending mail. My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here is my transport map:

        [email protected] email_route

    My main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    My master.cf declaration:

        email_route unix - n n - - pipe
          flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    And my php script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending mails through telnet on localhost.
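    Two mechanical things a pipe transport like the one above depends on, written out as a hedged checklist; the paths are the ones from the question:

        # pipe(8) execs the script directly, so it must be executable by the configured user
        chmod 755 /etc/postfix/test.php
        # transport(5) lookups use the compiled .db, so rebuild it after every edit
        postmap /etc/postfix/transport
        postfix reload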

    Read the article

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's an Ubuntu-specific StackExchange website, but I thought I'd ask here because it's a server-specific question. If I'm wrong in my logic... well, you people are better at this than I am! O=) On with the show! I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format     crypt(3) salt format
          -g            generate random password
          -h hash       password scheme
          -n            omit trailing newline
          -s secret     new password
          -u            generate RFC2307 values (default)
          -v            increase verbosity
          -T file       read file for new password
        Creating initial configuration...
        Loading the initial configuration from the ldif file () failed with
        the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After much Google searching and forum trawling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
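    The slappasswd usage text in the middle of that output suggests the postinst's attempt to hash the admin password produced nothing, leaving olcRootPW empty. A hedged retry that often clears up a half-configured slapd (it wipes any existing slapd data, which on a fresh install is none):

        # purge the broken half-install, then reinstall and answer the password prompts again
        apt-get purge slapd
        apt-get install slapd
        # dpkg-reconfigure slapd re-runs the same initial configuration if needed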

    Read the article

  • How can I keep SSH's know_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front, so I am not told not to do this:

    - The machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network).
    - Everyone who has the ability to set up a man-in-the-middle attack already has root on the machines.
    - The machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react); I am only trying to make my machine nicer to use.

    I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine and blowing away the old key and adding the new key. This is a massive pain in the tuckus, so I have started considering ways to automate it. I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from a machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
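    A hedged sketch of the whitelist-daemon idea, boiled down to a cron-able loop; the whitelist path is made up, and ssh-keyscan trusts whatever key the host presents at scan time, which matches the threat model described above:

        #!/bin/sh
        # refresh the known_hosts entry for every host listed in the whitelist
        while read -r host; do
            ssh-keygen -R "$host"                      # drop the stale entry
            ssh-keyscan "$host" >> ~/.ssh/known_hosts  # record the current key
        done < ~/.ssh/reinstall-whitelist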

    Read the article

  • Package pinning in Debian lenny

    - by bronto
    I need your advice, as I don't know whether I hit a bug or am misunderstanding something. On Debian Lenny, I am trying to prevent the installation of two particular packages when they are requested as dependencies from other packages. I am using the same syntax I successfully used in Squeeze, but with no success at all. On Squeeze, the following works as expected:

        # cat /etc/apt/preferences.d/local-no-pike.pref
        Package: pike7.6-core
        Pin: version *
        Pin-Priority: -1000

    If I try to install pike7.6, which depends on pike7.6-core, apt and aptitude refuse to do so. On Lenny, the only difference is that there is no support for "fragments" in /etc/apt/preferences.d, and all preferences must be in the /etc/apt/preferences file. But it's not working. E.g., if the file contains:

        Package: grub-common
        Pin: version *
        Pin-Priority: -1000

    apt doesn't stop me from installing grub, which depends on grub-common. I used strace to see if the file is being read, and it is. It was suggested that I use some Debug:: options, but they didn't help to pinpoint the problem either. I have googled a lot with combinations of "lenny", "prevent", "package", "installation", "pinning" and the like, but nothing useful came out. And of course I read man apt_preferences. What am I missing here?
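    A hedged first check on Lenny that costs nothing: ask apt what priority it actually assigned, since apt-cache policy prints the pin a package ended up with and makes a silently ignored stanza obvious.

        # the "package pin" and candidate lines show whether the -1000 priority took effect
        apt-cache policy grub-common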

    Read the article

  • daily rsync backups with hard links, checksums, and a new computer

    - by user75058
    I backup my laptop to a Fedora desktop daily using rsync with hard links. This has worked great for almost a year. I recently purchased a new computer, transferred over my data, and would like to continue backing up this computer daily. However, due to the data transfer from the old laptop to the new laptop, the timestamps have obviously changed, and will thus cause my daily rsync backup to re-transfer all of the data. I thought that by adding the -c (checksum) switch to my rsync backup it would match files based on checksum, instead of timestamp and size, and only transfer those files that are different or not present. This appeared to work, but upon examining the new backup, hard links are not being created, and it appears the files that should be hard linked are simply being copied to the new backup directory from the previous backup directory on the backup server. This is very peculiar behavior to me, and I am having trouble figuring out why this is occurring. Checksums match for files that I think should be hard linked. I have looked through the rsync man page and Google'd around a bit and have been unable to find anything for me to better understand this behavior.
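    For context, a hedged sketch of the kind of nightly command being described (paths and date are placeholders), together with the explanation that seems to fit the symptom: as I understand --link-dest, rsync only hard-links a file when its preserved attributes, including the modification time, match the copy in the link-dest directory, so files whose timestamps changed during the laptop migration get stored as fresh copies even when -c reports identical data.

        # nightly run: new dated directory, hard links against the previous snapshot
        rsync -ac --link-dest=/backups/laptop/latest \
            /home/me/ /backups/laptop/2012-06-02/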

    Read the article

  • DHCPD (Slackware) - Disabling auto-generation of gateway as DNS server

    - by Dogbert
    Good day, I am using a Linux workstation on Slackware 13.37. One "problem" I have had to deal with ever since 11.0 is the following:

    - DNS servers are queried and determined at startup by the DHCP daemon (DHCPD), which is invoked at startup by a script located at /etc/rc.d/rc.dhcpd.
    - My ISP's DNS servers are resolved correctly and are stored in a list located at /etc/resolv.conf.
    - However, the one annoying problem is that my gateway IP (i.e. 192.168.1.1) is always automatically put at the top of the list in resolv.conf, meaning I always have to wait for a timeout before a valid DNS server is used to resolve an address (i.e. a timeout on 192.168.1.1, because it is not actually a DNS server, before the next server in the list is used).

    I could lower my DNS resolution timeout so the gateway query times out quicker, but that's not what I want, as I don't want to degrade the abilities of legitimate DNS servers. What I would like to do is change how DHCPD operates so that it does NOT put my gateway IP address at the beginning of this list. I've searched via "man dhcpd", etc., and haven't found the exact answer yet. Any help on this issue is appreciated. Thank you all in advance for your time and assistance.
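    A hedged sketch of the two usual approaches, depending on which DHCP client the rc script actually runs; neither is Slackware-specific, and the directives below come from the respective man pages rather than from testing on 13.37 (the IPs are placeholders):

        # If the client is ISC dhclient: /etc/dhclient.conf can override what the lease returns
        #   supersede domain-name-servers 203.0.113.1, 203.0.113.2;
        # If the client is dhcpcd (5.x and later): keep it away from resolv.conf entirely
        #   echo 'nohook resolv.conf' >> /etc/dhcpcd.conf
        # and then manage /etc/resolv.conf by hand.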

    Read the article

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like:

        tar c . | pv | nc blah blah blah

    And it works great, the network stays fairly saturated, life is good. Until the source machine starts swapping. The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the destination machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4GB of physical RAM and run 64-bit Ubuntu 9.04 server, with a GigE link between them. How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
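    Since pv is already sitting in the pipe, its rate limit is the obvious knob. A hedged sketch; the 30m figure is arbitrary and should sit a little below the destination's sustained write speed:

        # -L caps the throughput of the pipe in bytes per second (k/m/g suffixes accepted)
        tar c . | pv -L 30m | nc blah blah blah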

    Read the article

  • Equivalent of scp -l bandwidth_cap for .ssh/config?

    - by Mark Bennett
    Short form: you can limit the bandwidth scp uses with the -l switch, passing a number in kbit/s. I'd rather set this in my .ssh/config file for certain named machines. What's the equivalent named setting for -l? I haven't been able to find it.

    Follow-up question: generally, I'm not sure how to map back and forth between ssh command-line options and config names, short of doing Google searches or manually comparing man pages on a case-by-case basis. Is there a table that directly equates the two?

    Longer form of the first question, with context: I've started using ssh config quite a bit, especially now that I need to go through a proxy and do lots of port mappings. I even define the same machine more than once depending on what type of tunneling I need. However, when uploading a large file, it's difficult to do anything else on my machine. Even though I have more download bandwidth than upload, I think scp saturates the link so that even my small requests can't reach the Internet. There's a fix for this, using the -l bandwidth command-line switch for scp:

        scp -l 1000 bigfile.zip titan:

    I'd like to use this in my config instead, so I'd create an additional named entry called "titan-upload" and use that as the target whenever I upload. So instead of:

        scp bigfile.zip titan:

    I'd say:

        scp bigfile.zip titan-upload

    Or even set different caps depending on where I am:

        scp bigfile.zip titan-upload-from-home
        vs.
        scp bigfile.zip titan-upload-from-work

    I'm generally on Mac and Linux.
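    As far as I can tell there is no ssh_config keyword corresponding to scp's -l, so a hedged workaround in the spirit of the per-destination entries above is a tiny alias or wrapper; the name and the 1000 kbit/s figure simply mirror the examples in the question:

        # in ~/.bashrc or similar
        alias scp-slow='scp -l 1000'
        # usage:
        #   scp-slow bigfile.zip titan: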

    Read the article
