Search Results

Search found 1903 results on 77 pages for 's man'.


  • Does ssh-copy-id overwrite previous keys?

    - by decker
    I haven't yet found any definitive answer on this using google. It seems like the answer is no, but I need to know for sure before I go ahead and do it. Does ssh-copy-id append the key to authorized_keys or does it overwrite the previous keys? Thanks. Addendum: So the answer is right there in the man page. Go figure. I guess the question can at least help fellow Google-jockeys like me who get a little too used to googling and finding tutorials (that often explain things in layman's terms for us poor folks who have only used Windows our whole lives).
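
    For reference, ssh-copy-id appends rather than overwrites; a minimal sketch of roughly what it does under the hood (user and host names are placeholders):

      cat ~/.ssh/id_rsa.pub | ssh user@remote-host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'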

    Read the article

  • Finding optimal ddrescue command line options where Accuracy > Speed

    - by gav
    I've read up a bit about this tool and obviously looked at the man pages. The trouble is that ddrescue takes so long that I need to get the command right the first time. I wasn't sure how to improve on the vanilla: $ sudo ./ddrescue -v /dev/disk0s5 MyVolImage.dmg MyVolRescue.log $ sudo ./ddrescue -v MyVolImage.dmg /dev/disk1s3 MyVolRestore.log From HFS+ to HFS+ drives. The source (broken) HDD is connected via USB 2.0; the destination HDD is inside the MacBook. I would choose accuracy over speed. There seem to be a lot of options, but I'm not sure how they impact the quality and speed of recovery. Thanks, Gav
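
    A hedged sketch of a common "accuracy first" two-pass approach, reusing the same devices and log file as above (-n and -r are standard ddrescue options, but check your version's man page):

      # pass 1: copy everything that reads cleanly, skip over the bad areas quickly
      sudo ./ddrescue -v -n /dev/disk0s5 MyVolImage.dmg MyVolRescue.log
      # pass 2: the log file lets ddrescue resume and retry only the bad areas
      sudo ./ddrescue -v -r 3 /dev/disk0s5 MyVolImage.dmg MyVolRescue.log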

    Read the article

  • HTTP traffic through PIX VPN from outside site

    - by fwrawx
    I have a remote site with a website that only allows access from the outside IP assigned to our local PIX. I have users connecting to the local network using a VPN who need to be able to view this remote site. I don't think this works because the packets want to come in and go out over the same (ext) interface. So I'm looking for a way to make this work using the PIX, or to set up a service on a server on the local network to act as a middle-man for the HTTP requests. The remote site doesn't support setting up a VPN to our PIX. The remote website is dishing out pages over a non-standard port. Can I use squid or something similar to proxy just one site?
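
    One way to sketch the "middle-man" idea is a reverse proxy with Apache's mod_proxy on an internal server (hostname and port below are placeholders, not taken from the question; requires mod_proxy and mod_proxy_http to be loaded):

      <VirtualHost *:80>
          ServerName remote-proxy.internal
          ProxyRequests Off
          ProxyPass        / http://remote.example.com:8081/
          ProxyPassReverse / http://remote.example.com:8081/
      </VirtualHost>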

    Read the article

  • Where are ghostscript options / switches documented?

    - by sdaau
    I know there is a Ghostscript option, for instance -dPDFSETTINGS=/screen - where is that documented? How can I see what other options it accepts, apart from screen? Also, -dMaxSubsetPct=100 - what does it do? I open man gs, search for PDFSET, I get "Pattern not found". I type it into a search engine, I get a myriad of personal webpages, no documentation hits. Can anyone help with a link? Many thanks in advance, Cheers! EDIT: also see this related post: Querying Ghostscript for the default options/settings of an output device (such as 'pdfwrite' or 'tiffg4') - Stack Overflow ... for getting a list of supported options for a given device.
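
    For context, the pdfwrite presets are described in Ghostscript's own HTML documentation (e.g. doc/Ps2pdf.htm in the source tree) rather than in man gs; a minimal usage sketch (file names are placeholders):

      # accepted -dPDFSETTINGS values are /screen, /ebook, /printer, /prepress and /default
      gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -sOutputFile=out.pdf in.pdf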

    Read the article

  • Why isn't 'ether proto \ip host host' a legal tcpdump expression?

    - by Ezequiel Garzon
    In its description of valid tcpdump expressions, the pcap-filter man page states: The filter expression consists of one or more primitives. Primitives usually consist of an id (name or number) preceded by one or more qualifiers. In turn, these qualifiers are type, dir and proto. So far so good, but further down we find this: ip host host which is equivalent to: ether proto \ip and host host In the first case, ip and host are, respectively, proto and type. What pattern does ether proto \ip follow? Isn't that, as a whole, a proto qualifier? If so, why isn't (a properly escaped) 'ether proto \ip host host' legal (no and)?
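
    For illustration, both forms from the man page written as complete commands (the address is a placeholder; the backslash must survive the shell, hence the single quotes):

      tcpdump -n 'ip host 192.0.2.10'
      tcpdump -n 'ether proto \ip and host 192.0.2.10'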

    Read the article

  • CIFS mounts aren't case-sensitive

    - by Asi
    I mounted a few drives from Linux boxes in my network, but those mounts aren't case-sensitive. The mount command I used (according to man mount.cifs, case-sensitive should be the default): mount //10.0.1.10/remote_folder /local_folder -t cifs -o username=xxxx,password=xxxx but those mounts aren't case-sensitive. For example, doing: ls -l /local_folder/testfile.txt ls -l /local_folder/TESTFILE.TXT gives the same result instead of 'file not found'. A couple of important points: All drives are running on Linux machines. My local machine is running Fedora 18 and it is case-sensitive for ANY folder/file except the mounted drives. All drives/mounts are case-sensitive when doing SSH. So if I SSH from my local machine to a remote machine, doing ls -l /local_folder/TESTFILE.TXT will say 'file not found' as it should. So I believe the issue is in my local machine and not in the way I did the mount, but I'm not sure where to look next (I'm new to Linux).
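
    Worth noting: mount.cifs only requests case-sensitive matching; whether it is honoured depends on the server. A hedged sketch of the server-side knob in the Samba share definition (share name and path are placeholders):

      [remote_folder]
          path = /srv/remote_folder
          case sensitive = yes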

    Read the article

  • Unix tool for splitting archives

    - by Richo
    I'm dumping an svn repository to a giant USB disk that is formatted FAT out of necessity (treat this as unchangeable). It conks out when you try to create a file larger than 4 GB. I need a tool that I can pipe data to that will create files of arbitrary size which, when catted together, will be the original file. I can write a tool to do this, but if one already exists I'd rather use it. Cheers. EDIT: A second look at the split man page suggests it might work.
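
    A minimal sketch with split, keeping each piece safely under FAT's 4 GB file-size limit (repository path and file names are placeholders):

      svnadmin dump /path/to/repo | split -b 2048m - svn-dump.part.
      # later, on a filesystem without the limit:
      cat svn-dump.part.* > svn-dump.full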

    Read the article

  • How does one check whether the OS X "disabled" flag for launchd services is set?

    - by Charles Duffy
    According to the man page for launchctl (emphasis mine):    -w   Overrides the Disabled key and sets it to false. In previous versions, this option would modify the configuration file. Now the state of the Disabled key is stored elsewhere on-disk. Because the current state of the disabled flag is no longer set in the .plist file itself, checking for the Disabled key is no longer an accurate way to tell if the service will run on next boot. Where is this "elsewhere on-disk"? More to the point (and more importantly), how does one check whether this flag is set? Also, is it possible to set a service to run on next boot without forcing it to start immediately (as with launchctl load -w /Library/LaunchDaemons/my-service.plist)?
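
    A hedged way to look at the on-disk override state, depending on the OS X release (both the path and the launchctl subcommand vary between versions):

      # older launchd kept per-domain overrides in a plist:
      sudo plutil -p /var/db/launchd.db/com.apple.launchd/overrides.plist
      # newer releases expose the same information through launchctl:
      sudo launchctl print-disabled system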

    Read the article

  • Creating a FAT file system and saving it to a file in GNU/Linux?

    - by RubenT
    I'll tell you my problem: I want to create a FAT file system and save it into a file so I can mount it in Linux using something like: sudo mount -t msdos <file> <dest_folder> Maybe I'm wrong and this cannot be done. Anyway, the problem is this: I'm trying to create the file containing a FAT file system, and I'm running this command: sudo mkfs.vfat -F 32 -r 112 -S 512 -v -C "test.fat" 100 That, according to the mkfs man page, will create a FAT32 file system with 112 rootdir entries, a logical sector size of 512 bytes and 100 blocks in total, and save it into "test.fat". But it fails, and bash tells me: mkfs.vfat: unable to create test.fat What is going on? I think I am misunderstanding how mkfs works and how to use it. Is it possible to write a filesystem into a file?
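
    It is possible; a minimal sketch that first creates a backing file of a workable size (FAT32 needs tens of megabytes at a 512-byte sector size), then formats and loop-mounts it (sizes and the mount point are arbitrary):

      dd if=/dev/zero of=test.fat bs=1M count=64
      mkfs.vfat -F 32 -S 512 -v test.fat
      sudo mkdir -p /mnt/testfat
      sudo mount -o loop -t vfat test.fat /mnt/testfat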

    Read the article

  • iptables: limiting bytes downloaded per IP per day?

    - by Miles
    On a public-facing web server, I'd like to limit the total bytes downloaded per IP address per day. For example, after a visitor has downloaded 100 MB, any additional requests would be dropped or rejected for the next 24 hours. Is it possible to accomplish this using iptables alone? The connbytes, connlimit, hashlimit, quota, and recent options all look promising, but the man page plays its cards close to the vest (e.g., "quota - Implements network quotas by decrementing a byte counter with each packet. --quota bytes The quota in bytes."). I'd like to avoid using a proxy (like Squid) if possible.
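
    For a single, known client address the quota match can express the idea, though the counter never resets by itself, so a daily cron job would have to flush and re-add the rules, and one pair of rules is needed per IP (the address below is a placeholder):

      iptables -A OUTPUT -d 203.0.113.25 -p tcp --sport 80 -m quota --quota 104857600 -j ACCEPT
      iptables -A OUTPUT -d 203.0.113.25 -p tcp --sport 80 -j REJECT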

    Read the article

  • Encrypt tar file asymmetrically

    - by DerMike
    I want to achieve something like tar -c directory | openssl foo > encrypted_tarfile.dat I need the openssl tool to use public-key encryption. I found an earlier question about symmetric encryption at the command promt (sic!), which does not suffice. I did take a look at the openssl(1) man page and only found symmetric encryption. Does openssl really not support asymmetric encryption? Basically, many users are supposed to create their encrypted tar files and store them in a central location, but only a few are allowed to read them.
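
    openssl can do this through its S/MIME mode, which encrypts a symmetric session key to the recipient's public key; a hedged sketch (certificate and key file names are placeholders):

      tar -cf - directory | openssl smime -encrypt -aes256 -binary -outform DER -out encrypted_tarfile.dat recipient_cert.pem
      # only the holder of the matching private key can unpack it:
      openssl smime -decrypt -inform DER -in encrypted_tarfile.dat -inkey recipient_key.pem | tar -xf -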

    Read the article

  • IIS permissions issue pointing docroot to Samba share

    - by lalalalalalalambda
    I have an IIS project which is stored on a Samba share, network-mounted with the following line: X: \\my-samba-server\dev /user:freddie Connectivity is fine; I can read/write files from X:. In IIS, I'm trying to set it as the physical path via \\my-samba-server\dev\folder\to\my\files, which results in the following 500.19 error: Config Error | Cannot read configuration file due to insufficient permissions By default it tries to use pass-through authentication. If I try to set it to connect as the specific user freddie, I receive: The specified user does not exist What is the correct way to connect to a path which has been set up as described above? *The Samba man pages indicate version 3.6 is on the Debian host
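
    One hedged possibility, assuming IIS 7+ and that the Samba account must be qualified with the server's name, is to store fixed credentials on the virtual directory with appcmd (site name is the IIS default; everything else is from the question):

      appcmd set vdir "Default Web Site/" -physicalPath:\\my-samba-server\dev\folder\to\my\files -userName:my-samba-server\freddie -password:xxxx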

    Read the article

  • Folder permissions, red x on user object

    - by Matt Bear
    This question was asked before, but there was no answer. On shared folders on the file server, the icon for the domain user name object under the security tab has a red X. There are no symptoms; the users have full access. There is just a red X on the icon for their name. Why is this? For clarification: logged into the Windows 2008 R2 file server, browse to a user's shared folder, right-click on the folder, hit Properties, click the Security tab. The object representing the user's domain name has a little red X on the lower right-hand corner of the icon that looks like a single man. There are no symptoms beyond me wondering why the red X is there.

    Read the article

  • Moving only the contents of a folder and not the folder itself on Linux

    - by WebDevHobo
    Using the cp command, one can copy files and folders on Linux. I want to make a new user and copy the contents of the skeleton folder to their home directory. I use this command: cp -r /etc/skel/ /home/testuser/ However, this only creates a skel folder inside testuser. The idea is that the contents of the /etc/skel folder be copied to /home/testuser, not that a folder be made in /home/testuser with those contents. I've checked the man page: Link, but nothing on there really seemed like the solution to me. Is there a way to do this, or do the files really need to be moved manually, one by one?
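
    A minimal sketch of the usual trick: the trailing "/." makes cp copy the directory's contents, dotfiles included, instead of the directory itself (user name as in the question):

      cp -r /etc/skel/. /home/testuser/
      # alternatively, useradd -m copies /etc/skel automatically when creating the user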

    Read the article

  • Slackware 12 - installed cairo but it cannot be seen

    - by piro
    Hi. I wanted to install gtk+ 2.16.5, so I also installed glib, pango and cairo. All seemed to work well, except for cairo. At first I got an error while configuring: Requested 'cairo >= 1.6' but version of cairo is 1.4.12 I installed the newest version of cairo without any problems, I rebooted the computer, and when I ran configure again the same thing happened and it showed me the same error. I can also see this: Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables BASE_DEPENDENCIES_CFLAGS and BASE_DEPENDENCIES_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. Can someone help me? Thanks.
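
    A hedged way to check which cairo pkg-config actually sees, assuming the newer build was installed under /usr/local (adjust the path to wherever it really went):

      pkg-config --modversion cairo
      export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
      pkg-config --modversion cairo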

    Read the article

  • samba - join domain - automatically set workgroup

    - by ftiaronsem
    Hello all. Since I have to do this often, I want to automate joining a Windows domain as much as possible. While joining a domain one has to specify realm = in /etc/smb.conf, along with some other settings like security = ads. Among these settings there is workgroup = My question is: is it possible for samba to fill this field automatically while joining a domain? Normally I would just have said never, but when I tried leaving this field blank while joining a domain, I got: Failed to join domain: Invalid configuration ("workgroup" set to '', should be 'BLABLA') and configuration modification was not requested This has made me wonder whether an automatic modification is possible, and if so, how? A search on the internet and the man page brought no results. It would be really great if someone could answer this. Thanks in advance, ftiaronsem
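
    For what it's worth, the short domain name can reportedly be queried from AD itself, which could feed an automated setup; a hedged sketch (realm and credentials are placeholders, and the subcommand depends on the installed Samba version):

      # ask AD for the NetBIOS/workgroup name of the configured realm
      net ads workgroup
      # write workgroup = <result> into smb.conf, then join
      net ads join -U Administrator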

    Read the article

  • Need to set mailx variable to specify the From address

    - by user256817
    Running Oracle Linux 5.8 (which is just re-branded RedHat EL 5.8). I must change the From address, but we have scripts that use mailx which cannot be rewritten to use any extra flags, so I'd like to use internal variables instead, which, per the linux.die.net man page for mailx, are an alternative to the -r flag: -r address Sets the From address. Overrides any from variable specified in environment or startup files. Tilde escapes are disabled. The -r address options are passed to the mail transfer agent unless SMTP is used. This option exists for compatibility only; it is recommended to set the from variable directly instead. (Source: http://linux.die.net/man/1/mailx) How can we use these mailx variables? I tried adding this to /root/.mailrc, no go: set [email protected] I also added that to /etc/mail.rc with no luck. So I am turning to you, SuperUsers...
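
    A hedged way to check whether the installed mailx is the Heirloom implementation that the linux.die.net page documents - that one honours internal variables and can also take them on the command line; the older BSD mailx shipped on some EL 5 systems ignores them (addresses below are placeholders):

      echo "test body" | mailx -S from="reports@example.com" -s "test subject" admin@example.com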

    Read the article

  • Apache Subversion and Sudo - Why can't I resolve this hostname?

    - by Hollowsteps
    Okay, I made a mistake, and I'll be the first to admit I'm new at this setup. I built a bare-bones kit, installed Ubuntu on it, and attempted to set up a source control server for a project some friends and I were going to work on. Unfortunately, I screwed up. I followed a dodgy tutorial from 2005, and when it didn't work, I started mixing and matching, trying to get to the source of my problem. So now I sit before you, a broken and miserable man. Desperate to escape this annoying echo of 'Unable to resolve host computer.repositoryname.com', I uninstalled apache and subversion. That did not fix it. Next I tried to edit my /etc/hosts, going so far as to remove the reference to '127.0.1.1 computername'. Still I'm plagued. I know I messed up; is there any way to track down this wayward bug?
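
    For reference, sudo prints "unable to resolve host <name>" when the machine's own hostname has no entry in /etc/hosts; a minimal sketch of the usual layout, assuming the box's hostname really is computer.repositoryname.com:

      127.0.0.1   localhost
      127.0.1.1   computer.repositoryname.com computer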

    Read the article

  • Why change net.inet.tcp.tcbhashsize in FreeBSD?

    - by sh-beta
    In virtually every FreeBSD network tuning document I can find: # /boot/loader.conf net.inet.tcp.tcbhashsize=4096 This is usually paired with some unhelpful statement like "TCP control-block hash table tuning" or "Set this to a reasonable value." man 4 tcp isn't much help either: tcbhashsize Size of the TCP control-block hash table (read-only). This may be tuned using the kernel option TCBHASHSIZE or by setting net.inet.tcp.tcbhashsize in the loader(8). The only document I can find that touches on this mysterious thing is the Protocol Control Block Lookup subsection beneath Transport Layer in Optimizing the FreeBSD IP and TCP Stack, but its description is more about potential bottlenecks in using it. It seems tied to matching new TCP segments to their listening sockets, but I'm not sure how. What exactly is the TCP Control Block used for? Why would you want to set its hash size to 4096 or any other particular number?

    Read the article

  • Is there a software package that safely allows SSH via web on simple web host?

    - by spoulson
    I want to be able to use a secured web page on my shared web host to make SSH connections out to any destination. A shared web host is cheap and easy to maintain, and usually allows ssh to the web server. There are times I'd like to ssh into my web server, but don't have direct ssh connectivity. I'm aware of consoleFISH, Ajaxterm, and Anyterm. The problem is consoleFISH is a man-in-the-middle by design, and Ajaxterm/Anyterm require running a daemon process on the hosting server. Web hosts can usually support cron jobs, but not continuously running daemon processes. Additional Apache modules are usually out, too, as they require reconfiguration of the server and affects all other customers. Are there any software packages out there I can run on my shared web hosting account that provide a true ssh experience with these limitations?

    Read the article

  • How to migrate KVM-based VMs running in an LVM setup to VMDK images

    - by Bond
    I am using KVM on Ubuntu Server 10.04, and the virtual machines running on it are stored in LVM. I have to migrate some of them to VMware Server. How can I achieve this? I searched and came across some links, but they all talked about converting vmdk images to qcow or the like. In this case I have the OS in LVM. I also looked at the man page of qemu-img, and as I understand it, it should do what I am asking in this thread. But how exactly should I proceed in this case? Since it is not a file-based image (the OS runs in an LVM volume which has a filesystem in it), I am not able to understand what I should be doing to achieve this. Can I achieve the above with snapshots of the LVM volumes rather than shutting down the VM itself?
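
    A hedged sketch with qemu-img reading the logical volume directly as a raw source (volume group, LV and output names are placeholders; convert from an LVM snapshot or with the guest shut down to get a consistent image):

      qemu-img convert -f raw -O vmdk /dev/vg0/vm-disk /var/tmp/vm-disk.vmdk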

    Read the article

  • How should I capture Linux kernel panic stack traces?

    - by Alnitak
    What's the current best practice for capturing full kernel stack traces on a Linux system (RHEL 5.x, kernel 2.6.18) that occasionally panics in a device driver? I'm used to the "old" SunOS way of doing things - crash dumps get written to swap, and on reboot the dump gets retrieved into the local file system. man 8 crash refers to diskdump, but that appears to be unsupported and/or deprecated. I've played with kdump, but it's unclear whether I can get a stack trace from that. Triggering a panic via Magic SysRq didn't create one. It also seems wasteful to reserve so much memory (128MB) just for a kexec crash recovery kernel.
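
    A hedged outline of the kdump route on RHEL 5, which does yield a backtrace once the vmcore is loaded into crash (requires a crashkernel= reservation at boot, the kdump service enabled, and kernel-debuginfo installed; paths are placeholders):

      # /etc/kdump.conf
      path /var/crash
      core_collector makedumpfile -c -d 31

      # after the panic and reboot:
      crash /usr/lib/debug/lib/modules/`uname -r`/vmlinux /var/crash/<timestamp>/vmcore
      crash> bt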

    Read the article

  • When to use delaycompress option in logrotate?

    - by Anand Chitipothu
    The man page of logrotate says: It can be used when some program cannot be told to close its logfile and thus might continue writing to the previous log file for some time. I'm confused by this. If a program cannot be told to close its logfile, it will continue to write to it forever, not just for some time. If the compression is postponed to the next rotation cycle, the program continues to write to that file even after the next rotation cycle. How does postponing solve the problem? My understanding is that copytruncate should be used when a program cannot be told to close its logfile. I'm aware that some data written to the logfile gets lost while the copy is in progress. I was looking at the logrotate file for couchdb, and it had both the copytruncate and delaycompress options: /usr/local/couchdb-1.0.1/var/log/couchdb/*.log { weekly rotate 10 copytruncate delaycompress compress notifempty missingok } It looks like there is no point in using delaycompress when copytruncate is already there. What am I missing?

    Read the article

  • dhclient and dhcpcd: the real difference

    - by rubixibuc
    I can't figure out the difference from just the man pages. I can see that one is a daemon and one is a client, but what does that mean practically when using the commands? Also, what is the difference between the client and the daemon in this case, not just in terms of the words (client and daemon) but functionally? EDIT: How are the tasks divided? If the client updates the information on the client, what is the purpose of the daemon? I'm talking about the client daemon in this case, dhcpcd, not dhcpd. Both come installed by default with some versions of Linux and seem to share the duties of the DHCP client. NAME dhcpcd - DHCP client daemon Name dhclient - Dynamic Host Configuration Protocol Client
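
    In day-to-day use the two overlap almost completely; a minimal sketch (the interface name is a placeholder), and despite the names both keep running in the background to renew the lease:

      sudo dhclient eth0     # ISC DHCP client: configure eth0 via DHCP
      sudo dhcpcd eth0       # dhcpcd ("DHCP client daemon"): same job, different implementation
      sudo dhclient -r eth0  # release the lease (dhclient)
      sudo dhcpcd -k eth0    # release the lease (dhcpcd)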

    Read the article

  • Security and encryption with OpenVPN

    - by Chris Tenet
    The UK government is trying to implement man-in-the-middle attack systems in order to capture header data in all packets. They are also equipping the "black boxes" they will use with technology to see encrypted data (see the Communications Data Bill). I use a VPN to increase my privacy. It uses OpenVPN, which in turn uses the OpenSSL libraries for encrypting data. Will the government be able to see all the data going through the VPN connection? Note: the VPN server is located in Sweden, if that makes a difference.

    Read the article
