Search Results

Search found 1903 results on 77 pages for 'man'.


  • How do I use novnc on CentOS 6?

    - by hyphen this
    I came across novnc in a yum search and wanted to use it, so I installed it. However, there is no information on how to actually use it. The novnc_server command exits with "Could not find vnc.html". The man page and --help output are of no help. The README on GitHub says: "Use the launch script to start a mini-webserver and the WebSockets proxy (websockify)." Which is also of no help. The Fedora and CentOS wikis have no info.
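
    One thing that may be worth trying; the web-root path below is an assumption about where the CentOS/EPEL package puts noVNC's HTML files, and a git checkout works the same way via its launch script:

        # from a git checkout of noVNC
        ./utils/launch.sh --vnc localhost:5901
        # or point websockify at the installed web files directly
        websockify --web /usr/share/novnc 6080 localhost:5901

    Either way, you then browse to http://<host>:6080/vnc.html and connect through the proxy.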

    Read the article

  • How do I capture my second monitor using avconv?

    - by Codemonkey
    With this command: avconv -f x11grab -s 2560x1440 -i :0.0 I can stream video from my main monitor. I also have a second, 1080p monitor on which I do my gaming. I want to stream from that monitor. This doesn't work: avconv -f x11grab -s 1920x1080 -i :0.1 I assume I have to use -i :0.0 and somehow specify that it should capture 1920x1080 pixels from X position 2560 and Y position 0. My gaming monitor is placed to the right of my main monitor. Unfortunately the man page for avconv is miles long, so I haven't had any luck figuring this out on my own. I have tried using -vf with crop like this: -vcodec libx264 ... -vf "crop=$IN_WIDTH:$IN_HEIGHT:2560:360" But that only displayed 1080p video from the top left corner of my main display.
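
    For what it's worth, x11grab lets you append a pixel offset to the display name, so -vf crop should not be needed at all; a minimal sketch (output options kept deliberately sparse):

        avconv -f x11grab -s 1920x1080 -i :0.0+2560,0 -vcodec libx264 output.mkv

    The +2560,0 is the X,Y position of the grab region's top-left corner, i.e. the left edge of the second monitor in this layout.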

    Read the article

  • How can I keep a file in Windows 7's cache?

    - by netvope
    Sometimes you know better than Windows what files will be re-used later. Suppose you have 8GB of memory, and you use the same 1GB file every hour in an I/O-bound application (which takes 1 second to finish if the file is cached, and 1 minute if not.) Now you process some other 16GB of data that are not going to be re-used. Naturally the frequently used 1GB file will be pushed out of the cache. It would be beneficial if one can tell Windows to keep that 1GB file in memory. (Better yet, it would be great if I can tell Windows not to cache those 16GB of data, but I'm not optimistic that this can be done.) The poor-man's way to keep a file in the cache would be to keep reading the file. Are there any better ways? Are you aware of any programs that do this? (If this can be easily done under Linux, please let me know too.)
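
    On the Linux side (which the question explicitly invites), one hedged option is vmtouch, assuming it is installed; the file path is just a placeholder:

        vmtouch -t /data/big-file.bin     # fault every page of the file into the page cache
        vmtouch -dl /data/big-file.bin    # daemonize and mlock the pages so they stay resident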

    Read the article

  • Using windowmaker with quartz-wm in proxy mode on Snow Leopard

    - by Graham Lee
    I can modify my .xinitrc file to exec /opt/local/bin/wmaker, and get WindowMaker 0.90.2 as my window manager in X11.app. I'd like to use quartz-wm not as a window manager, but to provide the pasteboard integration with Aqua using the --only-proxy flag (see the man page). If I add the following line to .xinitrc: exec /usr/bin/quartz-wm --only-proxy & then WindowMaker never starts, complaining that there's already a window manager running. Is it possible to get the two to play nicely together, or is the proxy feature part of the Xquartz server now? It seems that the Xquartz manpage has a number of pasteboard-to-clipboard synchronisation settings, but it's not clear whether quartz-wm needs to be running for those to work.
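
    A minimal .xinitrc sketch worth trying, on the assumption that only the final window manager should be exec'd and the pasteboard proxy merely backgrounded:

        # ~/.xinitrc
        /usr/bin/quartz-wm --only-proxy &
        exec /opt/local/bin/wmaker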

    Read the article

  • Postfix TLS issue

    - by HTF
    I'm trying to enable TLS on Postfix but the daemon is crashing:

        Sep 16 16:00:38 core postfix/master[1689]: warning: process /usr/libexec/postfix/smtpd pid 1694 killed by signal 11
        Sep 16 16:00:38 core postfix/master[1689]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    CentOS 6.3 x86_64

        # postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        disable_vrfy_command = yes
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        local_recipient_maps =
        mail_owner = postfix
        mailbox_command =
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_tls_note_starttls_offer = yes
        smtp_tls_session_cache_database = btree:/var/lib/postfix/smtpd_tls_cache.db
        smtp_use_tls = yes
        smtpd_delay_reject = yes
        smtpd_error_sleep_time = 1s
        smtpd_hard_error_limit = 20
        smtpd_helo_required = yes
        smtpd_helo_restrictions = permit_mynetworks, reject_non_fqdn_hostname, reject_invalid_hostname, permit
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_pipelining, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_destination reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit
        smtpd_soft_error_limit = 10
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 550
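
    Not a diagnosis, but a way to reproduce the crash on demand and sanity-check the certificate/key pair while watching the log:

        tail -f /var/log/maillog &
        openssl s_client -connect localhost:25 -starttls smtp
        # the two digests below should match if smtpd.crt and smtpd.key belong together
        openssl x509 -noout -modulus -in /etc/postfix/ssl/smtpd.crt | openssl md5
        openssl rsa -noout -modulus -in /etc/postfix/ssl/smtpd.key | openssl md5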

    Read the article

  • Linux bash: when to use egrep instead of grep?

    - by Michael Mao
    Hi all: I am preparing for a Linux terminal assessment now. I tried to Google and found most resources refer to the basic "grep" rather than the more powerful "egrep" -- well, that is at least what the professor said in lecture. I am always working with small samples, so performance tuning is a thing too far away. So basically I'd like to know: are there any areas where I must switch to egrep to do the job in a better way? Is it safe to work with the basic "grep" for now? Will there be potential risks? Sorry about my limited knowledge of Linux shell commands; the man page looks like a maze to me, and honestly I haven't put much time into understanding all the features both commands provide.
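
    For reference, egrep is simply grep using extended regular expressions (the same as grep -E); in practice the difference is which metacharacters need backslashes:

        grep 'error\|warning' logfile      # basic REs: |, +, ?, () are literal unless escaped
        grep 'ab\+c' logfile
        egrep 'error|warning' logfile      # extended REs: the same operators work unescaped
        grep -E 'ab+c' logfile             # identical to egrep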

    Read the article

  • knife server create - finding lists of flavors

    - by JohnMetta
    I'm new to Chef and I think I'm missing something in reading the docs. I want to create servers using knife server create (options) but can't seem to find fully complete documentation on the options. Specifically, how do I find a mapping of server flavors to whatever knife is looking for? Given the official wiki entry for "Launch Cloud Instances with Knife," the following is an example server creation on Rackspace:

        knife rackspace server create 'role[webserver]' --server-name server01 --image 49 --flavor 2

    Likewise, on the Knife man page, there are commands for EC2 server images (using -d/--distro DISTRO) and for Slicehost servers (using -f/--flavor FLAVOR). However, none of the documentation I've found describes how to translate what I want to build on Rackspace ("I want Ubuntu 10.04 LTS") into the integer that knife is expecting. It strikes me that, given the lack of a description in the documentation of how to find the flavor, this should be obvious. Thus, I think I'm missing something.
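
    If the installed knife-rackspace plugin is recent enough (an assumption), it can list the valid integers itself, which is the usual way to map "Ubuntu 10.04 LTS" to an image ID:

        knife rackspace image list     # image IDs <-> OS names
        knife rackspace flavor list    # flavor IDs <-> RAM/disk sizes
        knife rackspace server create 'role[webserver]' --server-name server01 \
            --image <id from image list> --flavor <id from flavor list>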

    Read the article

  • How to secure Firefox traffic (+DNS) through SOCKS proxy under Ubuntu 10.04?

    - by Maarx
    I'm using Ubuntu 10.04, and starting a SOCKS proxy with 'ssh -D', and setting Ubuntu to use it with "System - Preferences - Network Proxy". Firefox uses the proxy, and the proxy's IP appears when I visit a site like http://www.whatismyip.com/. My question is, is Firefox resolving DNS requests through this proxy? Is my web-browsing truly secure? (That is, until I exit the other end of the proxy. I know it's insecure after that.) (And I've verified the keys, I'm not being man-in-the-middled) (And--screw it. You know what I mean. Is it resolving DNS requests through the proxy?) I don't know how I would go about verifying such a thing for myself. Using additional hardware such as another debugging proxy is not an option. If Firefox isn't resolving my DNS requests through the SOCKS proxy, how do I go about fixing it?
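
    Two things that may help, and that you can verify without extra hardware: Firefox only pushes DNS lookups through a SOCKS proxy when network.proxy.socks_remote_dns is set to true in about:config, and a packet capture shows whether port 53 traffic still leaves the box:

        # while browsing through the ssh -D tunnel:
        sudo tcpdump -ni any port 53    # no queries for the sites you visit => DNS goes via the proxy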

    Read the article

  • Why can't MacPorts find make?

    - by GeoffreyF67
    I am trying to run MacPorts like this: port install php5. When I do so, however, I get this error:

        Error: Unable to open port: can't read "build.cmd": Failed to locate 'make' in path: '/opt/local/bin:/opt/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin' or at its MacPorts configuration time location, did you move it?

    So I looked at my path:

        declare -x PATH="/Developer/usr/bin:/opt/subversion/bin:/opt/local/bin:/opt/local/sbin:/usr/local/php5/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin"

    and then looked to make sure make was in one of those dirs:

        $ ls -l /Developer/usr/bin/make
        lrwxr-xr-x 1 root admin 7 Aug 7 16:47 /Developer/usr/bin/make -> gnumake

    And typing make produces:

        make: *** No targets specified and no makefile found. Stop.

    So I know that it's there. But MacPorts can't find it. Any ideas? G-Man
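
    A couple of hedged checks: MacPorts searches its own configured binpath rather than your shell PATH, and on the path shown in the error it expects make in /usr/bin; the config file location below is the usual MacPorts default but may differ on your install:

        ls -l /usr/bin/make                                      # is make present where MacPorts actually looks?
        grep -i binpath /opt/local/etc/macports/macports.conf    # which path does MacPorts search?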

    Read the article

  • How to backup a remote VPS machine?

    - by morpheous
    I am considering opting for a VPS solution, with the server running Ubuntu Server. I am pretty new to this, and I need to come up with a backup policy for my server data. Initial data is likely to be about 80 MB, and I expect the data to grow at approximately 5 MB to 10 MB a day. Can anyone recommend: A backup/restore policy (best practices for a small startup)? Which tools to use for backup? Another thing that is not clear to me is where the files are normally backed up to (in the case of remote servers). If the files are backed up to the same machine (or even to another machine but with the same host), there is, potentially, a single point of failure. How do people normally back up their server data, and is the probability of machine meltdown or the host company's server farm "catching fire" so remote as not to be worth worrying about - especially for a small (read: one-man) startup like me?
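
    As one common pattern (the host name and paths below are placeholders): push the data nightly to a machine outside the VPS provider, so the provider itself is not a single point of failure:

        rsync -az --delete -e ssh /srv/app/ backup@offsite.example.com:vps-backups/app/
        # crontab entry for a 03:00 nightly run
        0 3 * * * rsync -az --delete -e ssh /srv/app/ backup@offsite.example.com:vps-backups/app/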

    Read the article

  • xkb layouts not working (in KDE?) after upgrade from Ubuntu 9.10 to 10.04

    - by Alan
    I customised my keyboard layout in 9.10 by editing the appropriate /usr/share/X11/xkb/symbols/ file. After upgrading to 10.04 I noticed it had overwritten all my modifications, so I recovered the layout and overwrote the symbol file's base entry. Sadly KDE (and, presumably, the entire OS) seems to ignore the files altogether. The help files don't mention anything about modifying layouts anyway (and the layout switcher seems to be using setxkbmap, which uses the above path according to its man page), so I'm at a bit of a loss. Do I need to compile this into some other format somehow or how do I get it to work?
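
    One guess worth testing: the X server caches compiled keymaps, so edits to the symbols files may not be picked up until the cache is cleared (the path below is where Ubuntu keeps it, as far as I recall):

        sudo rm /var/lib/xkb/*.xkm    # drop the compiled keymap cache, then restart X or log back in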

    Read the article

  • Apt-Get "sources.list needs at least 1 source-URI" error when building dependecies for Xen

    - by Entity_Razer
    What I'm trying to do is install Xen in a test environment. I am trying to run the apt-get build-dep xen-3.3 command, but it keeps throwing an error which, literally translated from Dutch (I installed the Debian OS in Dutch), says: "E: your sourcelist (/etc/apt/sources.list) has to contain at least 1 source-URI". I've googled it but I can't seem to find a definitive, solid answer on how to fix this. According to the apt-get man page, a source-URI needs to be something along the lines of deb ftp://ftp.debian.org/debian stable contrib. Now I've got 2 HTTP sources (the default Debian ones) up and running so far, and they've been working flawlessly for the better part of a few days now. Only now it's starting to act up. Anyone able to help me out? Much obliged!
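
    apt-get build-dep needs deb-src entries, not just deb entries; a sketch using the stable suite from the man-page example already quoted above:

        echo 'deb-src http://ftp.debian.org/debian stable main contrib' >> /etc/apt/sources.list
        apt-get update
        apt-get build-dep xen-3.3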

    Read the article

  • dhcrelay running as both DHCP and DHCPv6 relay agent on CentOS 6.2

    - by Tibor
    I am trying to set up a DHCP relay agent that would relay DHCP requests for both IPv4 and IPv6. I am using CentOS 6.2 and the dhcrelay from the ISC DHCP implementation. I would like to set it up as a service, but the man page for dhcrelay states:

        -6   Run dhcrelay as a DHCPv6 relay agent. Incompatible with the -4 option.
        -4   Run dhcrelay as a DHCPv4/BOOTP relay agent. This is the default mode of operation, so the argument is not necessary, but may be specified for clarity. Incompatible with -6.

    It seems that the -6 and -4 options are incompatible. How would I still make it work for both protocols without rolling my own service wrapper for both cases?
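
    Since one process cannot do both, the usual workaround is simply two dhcrelay processes (e.g. two init/service entries). The flag names below are from the dhcrelay man page as I remember it, and the address/interfaces are placeholders, so verify locally:

        dhcrelay 192.0.2.10              # IPv4/BOOTP relay towards the DHCPv4 server
        dhcrelay -6 -l eth0 -u eth1      # separate DHCPv6 relay: -l client-facing link, -u upstream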

    Read the article

  • What is the difference between sar -B and sar -W?

    - by Mark
    I am trying to understand why my system is running slowly. I found the sar command, but wanted to know the difference between sar -B and sar -W. I read the man page, and I understand that -B gives me the paging statistics and -W gives me the swapping statistics. What I would like to understand is the following: What is the correlation between the two sets of statistics? When should I be concerned about -B and when about -W? I.e., what values from each command should I be concerned with? Which statistic is more closely related to system performance? Thanks
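
    As a quick orientation (column names taken from sysstat's sar output):

        sar -B 1 5    # paging: watch majflt/s, the major faults that actually hit the disk
        sar -W 1 5    # swapping: pswpin/s and pswpout/s should stay near zero on a healthy box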

    Read the article

  • What is the impact of Windows 8 with UEFI on normal users?

    - by Sam
    I am a normal man-in-the-street computer user and so do not really understand what this is about, but I want to. Can someone please explain to me if:

    1. The Windows 8/UEFI secure boot thing will make it impossible to run normal/legacy applications in Windows 8 (as they will be unsigned)?
    2. It will turn Windows into an Apple-like system where only Microsoft-approved applications can be run?

    As I say, I'm a normal user, and that is the overall impression I have from reading all the blogs, etc. about it. If, on the other hand, all it does is make sure the system is booting a signed OS, how does this prevent malware (which is what at least two Microsoft blogs that I read seemed to be saying), given that most malware is not part of the boot process? The only way I can see this making sense is if it is ensuring that all OS components are signed. Is that it? Like I say, I'm a mortal, so please don't get technical on me, but rather explain how it will affect me, the user.

    Read the article

  • Closing a telnet connection gracefully from session mode itself, without going to the telnet prompt

    - by Kumar Alok
    A normal telnet connection looks like this:

        telnet localhost 22
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_4.2
        ^]
        telnet> close
        Connection closed.

    I want to close it from the telnet session itself, without dropping to the telnet prompt by pressing the escape character. My requirement is that if I press some control character from the telnet session itself, like CTRL+A, it will leave the session and close it automatically. Something like this:

        $ telnet localhost 22
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_4.2
        ^A
        Connection closed.
        $

    I tried all the options given in the man page and did some $HOME/.telnetrc tests, but couldn't achieve it, as telnetrc executes all the commands written in it for the given host whenever a telnet to that host is done. Can anyone help me with this, i.e. how it can be achieved?
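
    A partial workaround, hedged: stock telnet always needs the escape character to reach its prompt, but you can at least pick a more convenient one up front with -e:

        telnet -e '^A' localhost 22
        # CTRL+A now drops to the telnet> prompt, where "close" (or "quit") ends the session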

    Read the article

  • Why do I get duplicated entries in my $PATH?

    - by reprogrammer
    I'm using Ubuntu 9.10 (karmic), and my ~/.pam_environment looks like the following:

        PATH DEFAULT=${PATH}:~/Adobe/Reader9/bin:~/texlive/2009/bin/x86_64-linux
        GIT_EDITOR DEFAULT=vim
        MANPATH DEFAULT=${MANPATH}:~/texlive/2009/texmf/doc/man
        INFOPATH DEFAULT=${INFOPATH}:~/texlive/2009/texmf/doc/info

    But echo $PATH returns duplicated entries, as follows:

        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:~/apps/Adobe/Reader9/bin:~/apps/texlive/2009/bin/x86_64-linux:~/apps/Adobe/Reader9/bin:~/apps/texlive/2009/bin/x86_64-linux

    I've tried replacing DEFAULT with OVERRIDE in my ~/.pam_environment file, but that didn't help. Does anyone know what's wrong with my ~/.pam_environment?
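
    Not a root-cause fix, but as a stopgap you can strip the duplicates in a login script while tracking down where the file is evaluated twice:

        PATH=$(echo "$PATH" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd: -)
        export PATH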

    Read the article

  • Script or Utility to convert .nab to .csv without importing double entries in Outlook.

    - by Chris
    Currently our environment is migrating from GroupWise 7 to Outlook 2003, and we have multiple users with mission-critical outside contacts in their frequent contacts that will have to be imported into Outlook. Currently our only solution is to export GW contacts to a .nab, import into Excel to scrub out the contacts in our own domain (to avoid double entry), and convert to .csv. This solution will require a lot of man-hours for hand-holding, because most of our users are not technically savvy AT ALL and are frankly too busy to do this themselves. Does anyone know of any kind of tool or script to assist with this?
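
    If the export is already in CSV form and a Unix-ish toolchain is available (an assumption), the in-domain scrub itself can be scripted rather than done by hand in Excel; the domain and file names below are placeholders:

        grep -vi '@example.com' frequent-contacts.csv > external-contacts.csv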

    Read the article

  • customized xkb layouts not working (in KDE?) after upgrade from Ubuntu 9.10 to 10.04

    - by Alan
    I customised my keyboard layout in 9.10 by editing the appropriate /usr/share/X11/xkb/symbols/ file. After upgrading to 10.04 I noticed it had overwritten all my modifications, so I recovered the layout and overwrote the symbol file's base entry. Sadly KDE (and, presumably, the entire OS) seems to ignore the files altogether. The help files don't mention anything about modifying layouts anyway (and the layout switcher seems to be using setxkbmap, which uses the above path according to its man page), so I'm at a bit of a loss. Do I need to compile this into some other format somehow or how do I get it to work?
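
    To rule out KDE's layout switcher, you can compile and load the edited symbols into the running server directly; the variant name here is hypothetical:

        setxkbmap -layout us -variant myvariant -print | xkbcomp - "$DISPLAY"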

    Read the article

  • Suppress log messages about 3ware disk temperature changes on CentOS?

    - by Stefan Lasiewski
    I have a number of CentOS 5 servers which use 3ware RAID controllers. These servers are bugging my team with messages about minor temperature changes, like this:

        Jun 8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_01], SMART Usage Attribute: 194 Temperature_Celsius changed from 119 to 118
        Jun 8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_03], SMART Usage Attribute: 194 Temperature_Celsius changed from 122 to 121

    How can I suppress these messages? According to man smartd.conf: "To disable any of the 3 reports, set the corresponding limit to 0. Trailing zero arguments may be omitted. By default, all temperature reports are disabled ('-W 0')." On my systems, smartd is reporting temperature changes by default. I tried a manual approach. In /etc/smartd.conf, I have the following:

        /dev/twa0 -d 3ware,1 -a -W 0
        /dev/twa0 -d 3ware,3 -a -W 0

    But this still does not suppress the messages. Since these messages show up in /var/log/messages, LogWatch is sending unnecessary emails every night.
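
    One guess, based on my reading of smartd.conf(5): smartd ignores every entry that follows a DEVICESCAN line, so if the distro's default DEVICESCAN entry sits above the /dev/twa0 lines they never take effect. A sketch (the DEVICESCAN line shown is only an illustration of whatever your distro shipped):

        # /etc/smartd.conf
        #DEVICESCAN -H -m root          # comment out, or append -W 0 to it instead
        /dev/twa0 -d 3ware,1 -a -W 0
        /dev/twa0 -d 3ware,3 -a -W 0

        service smartd restart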

    Read the article

  • How can I fix this configure error?

    - by balor123
    I'm trying to build mosh from source on a SUSE10 machine and am getting the following error: checking for protobuf... no configure: error: Package requirements (protobuf) were not met: No package 'protobuf' found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables protobuf_CFLAGS and protobuf_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. I downloaded the source to protobuf and installed it in a custom path as well. I'm not using a package manager for any of this and cannot for various reasons outside the scope of the question. I added that custom path to my PATH and rehashed. Typically, this is enough for configure but in this case its not doing the trick. I added the prefix for protobuf to PKG_CONFIG_PATH but am still hitting this error. What should I do next to get past this error?

    Read the article

  • Reconnect Attempts for CIFS share

    - by Davin
    I have a CIFS share mounted via fstab on an Ubuntu server, which connects to our SAN and works without issue. Last night we had an issue with the SAN for about 12 hours. We corrected the problem and the Windows boxes restored their mappings. The Ubuntu box did not, but we were able to restore it with [mount -a]. I saw options to specify retries in the man page for NFS but not for CIFS. Any ideas on ensuring a reconnect if the SAN goes down again?
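
    One option to try, hedged: the cifs "hard" mount option makes processes block and retry against a dead server instead of erroring out, so combined with _netdev the mount should pick up again when the SAN returns; the share and paths below are placeholders:

        mount -t cifs //san/share /mnt/share -o credentials=/etc/cifs.cred,hard,_netdev
        # or, in /etc/fstab, append hard,_netdev to the existing options column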

    Read the article

  • How to run "mongodb --repair" if it's an Upstart job?

    - by Wolfram Arnold
    My MongoDB server died. The log says something about an unclean shutdown and an existing mongodb.lock file. It recommends removing the lock file and then restarting the mongodb server with --repair. However, on my system (Ubuntu 10.10) I installed MongoDB via an apt-get package, and it's set up as an Upstart job. If I run mongodb from the command line, it won't find the data; none of the paths are set correctly. Surely I could read the man page, try to emulate what Upstart would do, and give it all the correct parameters plus --repair, but that seems like a lot of trouble. There must be a simpler way that doesn't fight Upstart. What is it?
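
    A sketch of doing it by hand without fighting Upstart, assuming the Ubuntu package defaults (user mongodb, dbpath /var/lib/mongodb, config /etc/mongodb.conf); adjust to what your install actually uses:

        sudo stop mongodb
        sudo rm /var/lib/mongodb/mongod.lock
        sudo -u mongodb mongod --config /etc/mongodb.conf --repair
        sudo start mongodb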

    Read the article

  • Vacation sends autoreply message to the recipient as well

    - by elitalon
    Hi, I have configured my Postfix server with vacation for a domain. Sending a message to [email protected] causes two events: (1) the message is delivered to the recipient ([email protected]), and (2) an auto-reply message is sent to the sender, alerting that [email protected] should be used instead. Everything works well except for one particular drawback: the auto-reply is also sent to the recipient, so it receives two messages in the end. What can I do to avoid that? I'm only using the $TO variable in the custom vacation.msg message. And here is Postfix's master.cf vacation entry:

        autoreply unix - n n - - pipe
          flags=Rhu user=vacation argv=/usr/bin/vacation -j -m /home/vacation/.vacation.msg -f /home/vacation/.vacation.db vacation

    I know using -j is a little bit risky according to the man page, but I'm kind of testing here.

    Read the article

  • QR vcard with a photo

    - by Cayetano Gonçalves
    I am about to get a ton of business cards printed for my new corporation, and I am allowed to have a QR code on them. I would really like to attach a photo to the vCard. I know that in a raw vCard you can add a photo like this:

        BEGIN:VCARD
        VERSION:4.0
        N:Gump;Forrest;;;
        FN:Forrest Gump
        ORG:Bubba Gump Shrimp Co.
        TITLE:Shrimp Man
        PHOTO:http://www.example.com/dir_photos/my_photo.gif
        TEL;TYPE=work,voice;VALUE=uri:tel:+1-111-555-1212
        TEL;TYPE=home,voice;VALUE=uri:tel:+1-404-555-1212
        ADR;TYPE=work;LABEL="42 Plantation St.\nBaytown, LA 30314\nUnited States of America":;;42 Plantation St.;Baytown;LA;30314;United States of America
        EMAIL:[email protected]
        REV:20080424T195243Z
        END:VCARD

    But I can't find any way to include the photo field in a QR code. Any suggestions would be greatly appreciated.
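
    Since a QR code simply encodes the vCard text, the usual trick is to keep PHOTO as a URL (embedding the image data itself would blow past a scannable QR size) and feed the whole .vcf to a generator such as qrencode; the file names are placeholders:

        qrencode -o business-card.png < card.vcf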

    Read the article
