Search Results

Search found 24630 results on 986 pages for 'kali linux'.

Page 413/986 | < Previous Page | 409 410 411 412 413 414 415 416 417 418 419 420  | Next Page >

  • How can I automatically convert all source code files in a folder (recursively) to a single PDF with syntax highlighting?

    - by Bentley4
    I would like to convert the source code of a few projects into one printable file to save on a USB drive and print out easily later. How can I do that? Edit: First off, I want to clarify that I only want to print the non-hidden files and directories (so no contents of .git, for example). To get a list of all non-hidden files in non-hidden directories in the current directory you can run the find . -type f ! -regex ".*/\..*" ! -name ".*" command, as seen in the answer in this thread. As suggested in that same thread, I tried making a PDF of the files with the command find . -type f ! -regex ".*/\..*" ! -name ".*" ! -empty -print0 | xargs -0 a2ps -1 --delegate no -P pdf but unfortunately the resulting PDF file is a complete mess.
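    One approach that may work better (a sketch, not tested against your tree): reuse the same find expression, let enscript do the highlighting and page layout, and convert its PostScript output to a single PDF with ps2pdf. The output names and the --highlight language are assumptions to adjust per project, and for a very large tree xargs may split the invocation, in which case the .ps file would be overwritten.

      # same non-hidden file list as above, rendered by enscript and converted to one PDF
      find . -type f ! -regex ".*/\..*" ! -name ".*" ! -empty -print0 \
        | xargs -0 enscript --color=1 --highlight=cpp --line-numbers -o sources.ps   # one --highlight language at a time
      ps2pdf sources.ps sources.pdf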

    Read the article

  • Model M Keyboard inputs incorrect characters after logging in to Fedora

    - by mickburkejnr
    I recently bought a 24-year-old IBM Model M keyboard. From what I gather, it'd been left on a shelf for the last 5 years, so you can imagine the amount of dust, dirt and crap that was on it. Before cleaning it, I plugged it in to my laptop (running Fedora 17) using a PS/2 to USB adapter. What I found was, while it still works, the keys I press don't correspond to what is displayed on the screen. So for example, when I type S on the keyboard, I get ß displayed on the screen instead. At the time, I put this down to the adapter not working properly. Since then, I stripped the keys off the keyboard and cleaned the whole thing. It looks like it's just come out of a box! I then plugged it in to my computer (also running Fedora 17) via a standard PS/2 plug. The computer loaded up to the login screen, and I typed in my password, pressed Enter, and logged straight in to my machine. At this point, I opened up a text editor and started typing some stuff. To my horror, the keystrokes I was entering weren't coming up as intended. What came up instead were characters that would map to the pressed keys only under a different keyboard language setting. I opened up a program to see what keyboard language had been selected, and the correct one for the keyboard was selected (which is UK in my case). I opened up a window that would show what characters mapped to what keys, and I pressed every single key on the keyboard, and every corresponding block representing each key lit up. I went back to the text editor to try again, but I was still getting these random characters. What's more is that the backspace key would not work, although in the other utility it would flash when pressed. What I know is that at the login screen the keyboard must have entered the correct characters, otherwise I wouldn't have been able to log in. Furthermore, keys that don't respond while using a text editor are still sending signals to the computer, as illustrated in that keyboard utility. The question is: why are random characters displayed when they really shouldn't be? Would this be a hardware fault or a software issue?

    Read the article

  • How do I change the .bash_history file location?

    - by Brian Graham
    I'm running CentOS 6.x and want to move the .bash_history file to a different location. The home directories of my users are (because I run a VPS) in /var/www/vhost/<domain>.<tld>, which is FTP-accessible (and it should be). Because of this, I have changed the AuthorizedKeysFile for SSH connections away from the normal ~/.ssh/authorized_keys, since FTP connections would easily be able to locate them. At the same time I want to move the .bash_history file to /home/%u/.bash_history where %u is the current user.
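    Assuming the goal is simply to point every login shell somewhere else, a minimal sketch of the usual mechanism is to export HISTFILE from a system-wide profile script; the file name below is hypothetical and assumes /home/<user> already exists and is writable.

      # hypothetical /etc/profile.d/histfile.sh (sourced by login shells on CentOS)
      export HISTFILE="/home/$(id -un)/.bash_history"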

    Read the article

  • What to do when Ctrl-C can't kill a process?

    - by Dustin Boswell
    Ctrl-C doesn't always work to kill the current process (for instance, if that process is busy in certain network operations). In that case, you just see "^C" by your cursor, and can't do much else. What's the easiest way to force that process to die now without losing my terminal? Summary of answers below: Usually, you can press Ctrl-Z to put the process to sleep, and then do "kill -9 process-pid", where you find the process's pid with 'ps' and other tools. On Bash (and possibly other shells) you can do "kill -9 %1" (or '%N' in general), which is easier. If Ctrl-Z doesn't work, you'll have to open another terminal and kill from there.
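    A compressed version of that summary, assuming a Bash shell:

      # the foreground process ignores Ctrl-C: suspend it with Ctrl-Z, then kill the job
      jobs -l        # list suspended/background jobs with their PIDs
      kill -9 %1     # %1 is the job number shown by jobs
      # if Ctrl-Z is also ignored, from another terminal:
      ps aux | grep <name>
      kill -9 <pid>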

    Read the article

  • (monit) What does the failure status "Changed" mean?

    - by bresc
    Hi, I installed monit on my server and tried to monitor nginx with: check process nginx with pidfile /var/run/nginx.pid start program = "/etc/init.d/nginx start" stop program = "/etc/init.d/nginx stop" group server. And I get: Process 'nginx', status: Changed, monitoring status: monitored, data collected: Wed Mar 24 00:37:49 2010. What does "Changed" mean? I couldn't find anything. Thx

    Read the article

  • Unable to start apache after changes to rc.conf and resolv.conf

    - by shupru
    I had a working configuration this morning with the following simple /etc/rc.conf: ifconfig_rl0="DHCP" ifconfig_xl="inet 192.168.1.11 netmask 255.255.255." defaultrouter="192.168.1.1". I added the following lines: firewall_enable="YES" firewall_type="SIMPLE" firewall_logging="YES" sshd_enable="YES" apache_enable="YES" mysql_enable="YES". My httpd.conf includes: NameVirtualHost 192.168.1.11 <VirtualHost 192.168.1.11> ... </VirtualHost>. Now apache and the ssh server are down. I changed rc.conf back to the last working configuration and still have no ssh or apache. apachectl start #--> /usr/local/sbin/apachectl start: httpd could not be started; apachectl status #--> Looking up localhost, Making http connection to localhost, Alert!: Unable to connect to remote host.
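    When httpd refuses to start like this, a config syntax check and the error log usually say why; the log path below is a guess for a stock FreeBSD apache22 port install, so check the ErrorLog directive if it differs.

      apachectl configtest                  # or: httpd -t
      tail -n 50 /var/log/httpd-error.log   # path is an assumption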

    Read the article

  • Explanation of nodev and nosuid in fstab

    - by Ivan Kovacevic
    I see those two options constantly suggested on the web when someone describes how to mount a tmpfs or ramfs, often also with noexec, but I'm specifically interested in nodev and nosuid. I basically hate just blindly repeating what somebody suggested without real understanding, and since I only see copy/paste instructions on the net regarding this, I ask here. This is from the documentation: nodev - Don't interpret block special devices on the filesystem. nosuid - Block the operation of suid, and sgid bits. But I would like a practical explanation of what could happen if I leave those two out. Let's say that I have configured tmpfs or ramfs (without these two mentioned options set) that is accessible (read+write) by a specific (non-root) user on the system. What can that user do to harm the system? Excluding the case of consuming all available system memory in the case of ramfs.
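    For reference, this is the kind of entry those suggestions end up as (a sketch; mount point and size are placeholders):

      # /etc/fstab: world-writable scratch space that cannot carry device nodes or setuid binaries
      tmpfs  /tmp  tmpfs  rw,nodev,nosuid,noexec,size=512m  0 0

    After mounting, mount | grep /tmp should show the options in effect.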

    Read the article

  • Unable to set initcwnd on a Hetzner server

    - by Sergi
    We just ordered a bunch of Hetzner EX40SSD servers with the minimal Debian install image that they provide, and everything is just fine except that, looking at tcpdumps from various locations while fine-tuning the network, the initcwnd param seems to be stuck at 6 no matter how we change it. By default Debian 3.2 kernels should have that setting at 10, so it's pretty strange. Is it possible that the NIC driver or a custom setting in the Hetzner Debian image is limiting this param? Even if we set it to 4, like the old kernel default, it doesn't work. Any ideas would be much appreciated! Does anyone know if the NIC drivers provided by default by Debian have some kind of limitation? In a long thread at http://www.webhostingtalk.com/showthread.php?t=1200617&highlight=hetzner they talk about a page http://wiki.hetzner.de/index.php/Installation_des_r8168-Treibers/en where Hetzner states that the included Realtek r8168 driver is not working properly, but nowhere do they say that the initcwnd could be affected. Tomorrow I will try to install a CentOS image and see if Debian is the problem... Last resort would be to install a custom Debian image, but that is a pain in the ass! Thanks!
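    For what it's worth, initcwnd is a per-route attribute in iproute2 rather than a sysctl, so it can be forced and verified explicitly; the gateway and interface below are placeholders.

      ip route show                                                  # note the current default route
      ip route change default via 192.0.2.1 dev eth0 initcwnd 10    # placeholder gateway/interface
      ip route show                                                  # "initcwnd 10" should now appear on the route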

    Read the article

  • Using mongodump with an auth enabled mongodb server

    - by bb-generation
    I'm trying to do a daily backup of my mongodb server (auth enabled) using the mongodump tool. mongodump provides two parameters to set the credentials: -u [ --username ] arg username -p [ --password ] arg password Unfortunately they don't provide any parameter to read the password from stdin. Therefore every time I run this command, everyone on the server can read the password (e.g. by using ps aux). The only workaround I have found is stopping the database and directly accessing the database files using the --dbpath parameter. Is there any other solution which allows me to back up the mongodb database without stopping the server and without "publishing" my password? I am using Debian squeeze 6.0.5 amd64 with mongodb 1.4.4-3.
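    One mitigation that is independent of mongodump itself, assuming a kernel new enough for hidepid (3.3+, so probably not a stock squeeze kernel): remount /proc so unprivileged users can no longer see other users' command lines at all.

      # hide other users' /proc entries (and therefore their argv) from non-root users
      mount -o remount,hidepid=2 /proc
      # to make it persistent, the matching /etc/fstab line would be:
      # proc  /proc  proc  defaults,hidepid=2  0 0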

    Read the article

  • Does lshw list the "factory" speed of a memory module or the effective speed, and how do I find the former?

    - by Panayiotis Karabassis
    I hope I phrased this correctly. lshw gives: description: DIMM Synchronous 400 MHz (2.5 ns) product: M378B5773CH0-CH9 vendor: Samsung physical id: 0 slot: DIMM0 size: 2GiB width: 64 bits clock: 400MHz (2.5ns) And indeed the memory speed is set to 800MHz in the BIOS, which I think makes sense since it is a double data rate. On the other hand, Googling strongly suggests that this product number corresponds to the PC3-10600 type, which is 1333MHz, not 800MHz. And this seems to be confirmed in the BIOS, where if I select Auto for memory bus speed, 1333MHz is selected "based on SPD settings". However, in the latter case the computer does not boot, i.e. the kernel panics, complaining that something attempted to kill the Idle process. So I am beginning to suspect that I have been given defective memory, that the technician who installed it saw this, and lowered the bus speed. Is this a possibility?

    Read the article

  • What can impact the throughput rate at the TCP or OS level?

    - by Jimm
    I am facing a problem where running the same application on different servers yields unexpected performance results. For example, running the application on a particular faster server (faster CPU, more memory), with no load, yields slower performance than running on a less powerful server on the same network. I suspect that either the OS or TCP is causing the slowness on the faster server. I cannot use iperf, unless I modify it, because "performance" in my application is defined as: Component A sends a message to Component B; Component B sends an ACK to Component A, and only then does Component A send the next message. So it is different from what iperf does, which, to my knowledge, simply tries to push as many messages as possible. Is there a tool that can look at OS and TCP configuration and suggest the cause of slowness?
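    Not a full answer, but a starting point for comparing the two hosts; these commands only read standard TCP settings and per-socket state.

      # compare the TCP stack configuration of both servers
      sysctl net.ipv4.tcp_congestion_control net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.rmem_max net.core.wmem_max
      # for a request/ACK ping-pong pattern, per-message latency dominates, so check RTT and socket internals
      ping -c 20 <peer-host>     # baseline round-trip time to the other component
      ss -ti                     # rtt, cwnd and negotiated options of established connections

    In a strict send-then-wait-for-ACK protocol, the interaction between Nagle's algorithm and delayed ACKs is a classic cause of host-dependent slowness like this; setting TCP_NODELAY on the application's sockets is the usual test.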

    Read the article

  • Input/output (read) errors in Bacula while setting up a Tape Drive + Autochanger

    - by Kyle Brandt
    When running the label barcode command in bacula I am getting Input/output errors. I am just getting started in trying to set this up: Connecting to Storage daemon TapeDevice at ny-back01.ny.stackoverflow.com:9103 ... Sending label command for Volume "ACJ332" Slot 1 ... 3307 Issuing autochanger "unload slot 8, drive 0" command. 3304 Issuing autochanger "load slot 1, drive 0" command. 3305 Autochanger "load slot 1, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ332" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ332", Slot 1 successfully created. Sending label command for Volume "ACJ331" Slot 2 ... 3307 Issuing autochanger "unload slot 1, drive 0" command. 3304 Issuing autochanger "load slot 2, drive 0" command. 3305 Autochanger "load slot 2, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ331" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ331", Slot 2 successfully created. Sending label command for Volume "ACJ328" Slot 3 ... 3307 Issuing autochanger "unload slot 2, drive 0" command. 3304 Issuing autochanger "load slot 3, drive 0" command. 3305 Autochanger "load slot 3, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ328" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ328", Slot 3 successfully created. Sending label command for Volume "ACJ329" Slot 4 ... 3307 Issuing autochanger "unload slot 3, drive 0" command. 3304 Issuing autochanger "load slot 4, drive 0" command. 3305 Autochanger "load slot 4, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ329" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ329", Slot 4 successfully created. Sending label command for Volume "ACJ335" Slot 5 ... 3307 Issuing autochanger "unload slot 4, drive 0" command. 3304 Issuing autochanger "load slot 5, drive 0" command. 3305 Autochanger "load slot 5, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ335" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ335", Slot 5 successfully created. Sending label command for Volume "ACJ334" Slot 6 ... 3307 Issuing autochanger "unload slot 5, drive 0" command. 3304 Issuing autochanger "load slot 6, drive 0" command. 3305 Autochanger "load slot 6, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ334" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ334", Slot 6 successfully created. Sending label command for Volume "ACJ333" Slot 7 ... 3307 Issuing autochanger "unload slot 6, drive 0" command. 3304 Issuing autochanger "load slot 7, drive 0" command. 3305 Autochanger "load slot 7, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. 
VolBytes=64512 DVD=0 Volume="ACJ333" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ333", Slot 7 successfully created. Sending label command for Volume "ACJ330" Slot 8 ... 3307 Issuing autochanger "unload slot 7, drive 0" command. Bacula-dir: # Definition of file storage device Storage { Name = TapeDevice # Do not use "localhost" here Address = ny-back01.... # N.B. Use a fully qualified name here SDPort = 9103 Password = "..." Device = ULTRIUM-HH4 Media Type = LTO-4 Media Type = File Autochanger = Yes } Bacula-sd: Autochanger { Name = StorageLoader1U Device = ULTRIUM-HH4 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg5 } Device { Name = ULTRIUM-HH4 Media Type = LTO-4 Archive Device = /dev/st0 AutomaticMount = yes; AlwaysOpen = yes; RemovableMedia = yes; RandomAccess = no; AutoChanger = yes; RandomAccess = no; } Does anyone know what this means / why I am getting this?
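    Two things may be worth checking before digging deeper: the Storage resource declares Media Type twice (LTO-4 and File), and the device points at the rewinding node /dev/st0 where Bacula's documentation generally expects the non-rewinding one (/dev/nst0). Bacula also ships btape for exercising the drive outside the daemons; a sketch:

      mt -f /dev/nst0 status                            # is the drive itself readable and positioned sanely?
      btape -c /etc/bacula/bacula-sd.conf /dev/nst0     # then run "test" at the btape prompt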

    Read the article

  • What is the cleanest way to upgrade Fedora and also my individual installs while keeping /home?

    - by Don
    I am a professional programmer, using Fedora 10 (and a host of other packages individually installed). I use my system to telecommute. Every year or so, I go through the ritual dance, usually with a second computer and a KVM switch as I don't have office space for two monitors, to build the next version of Fedora and install all my favorite apps. Is there a better way? At least a nice way to keep track of what I need to 'add on' so that I don't have to manually install my app collection? Also, I keep /home on a separate raid-ed drive set so I can also fall prey to 'old-config-file-itis'.
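    For the "what did I add on" part, one low-tech approach (a sketch) is to snapshot the installed package list before the rebuild and replay it afterwards; package names that changed between releases will still need manual fixing.

      # on the old install
      rpm -qa --qf '%{NAME}\n' | sort > ~/packages.txt
      # on the new install
      yum -y install $(cat ~/packages.txt)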

    Read the article

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to be git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following: Checking Environment ... Git configured for git user? ... yes Has python2? ... yes python2 is supported version? ... yes Checking Environment ... Finished Checking GitLab Shell ... GitLab Shell version >= 1.7.4 ? ... OK (1.7.4) Repo base directory exists? ... yes Repo base directory is a symlink? ... no Repo base owned by git:git? ... yes Repo base access is drwxrws---? ... yes update hook up-to-date? ... yes update hooks in repos are links: ... can't check, you have no projects Running /home/git/gitlab-shell/bin/check Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED) from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open' from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect' from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout' from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect' from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start' from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start' from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get' from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check' from /home/git/gitlab-shell/bin/check:11:in `<main>' gitlab-shell self-check failed Try fixing it: Make sure GitLab is running; Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml Please fix the error above and rerun the checks. Checking GitLab Shell ... Finished Checking Sidekiq ... Running? ... yes Number of Sidekiq processes ... 1 Checking Sidekiq ... Finished Checking GitLab ... Database config exists? ... yes Database is SQLite ... no All migrations up? ... yes GitLab config exists? ... yes GitLab config outdated? ... no Log directory writable? ... yes Tmp directory writable? ... yes Init script exists? ... yes Init script up-to-date? ... yes projects have namespace: ... can't check, you have no projects Projects have satellites? ... can't check, you have no projects Redis version >= 2.0.0? ... yes Your git bin path is "/usr/bin/git" Git version >= 1.7.10 ? ... yes (1.8.3) Checking GitLab ... Finished It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
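    The failing step is gitlab-shell calling the GitLab API over HTTP and getting ECONNREFUSED, which usually means nothing is listening at the URL in gitlab-shell/config.yml (here, port 80 on git.mydomain.com) rather than a GitLab configuration problem as such. A quick sketch of what to confirm on the box itself:

      sudo service nginx status                 # the installation guide fronts GitLab with nginx
      sudo service gitlab status                # the unicorn/sidekiq init script from the guide
      sudo netstat -tlnp | grep ':80 '          # is anything listening on port 80 at all?
      curl -sI http://git.mydomain.com/         # the same URL gitlab-shell uses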

    Read the article

  • btrfs won't run from cron

    - by Mikkel
    I'm trying to set up a cron job to create a btrfs subvolume snapshot of my root partition. The command works perfectly if I run it from the command line, but nothing happens at the scheduled cron time. I've tried piping to logger and redirecting stdout/stderr to file, and not only is there no content, the file I'm logging to isn't even created. The cron command I have is as follows: 0 0 * * * /sbin/btrfs subvolume snapshot / "/snapshots/$(date +%Y-%m-%d)" I've tried prefixing it with /bin/bash, but that makes no difference. What am I missing?
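    One detail that bites exactly this kind of job: in a crontab, an unescaped % is special (it terminates the command and feeds the rest to stdin, per crontab(5)), so the $(date +%Y-%m-%d) part never reaches the shell intact and the line can fail before any logging or redirection happens. Escaping the percent signs is worth trying; the log path below is just an example.

      0 0 * * * /sbin/btrfs subvolume snapshot / "/snapshots/$(date +\%Y-\%m-\%d)" >> /var/log/btrfs-snap.log 2>&1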

    Read the article

  • Why is my ethernet interface in promiscuous mode?

    - by nhed
    I read that seeing a flag of M in netstat -i is the way to tell which of your interfaces is in promiscuous mode. I run it and I see that eth1 is in promiscuous mode: $ netstat -i Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth1 1500 0 1770161198 0 0 0 57446481 0 0 0 BMRU lo 16436 0 97501566 0 0 0 97501566 0 0 0 LRU This seems to be the case on all the machines I checked (all CentOS 6.0, both virtual and physical). Any idea why ethernet devices would be in such a mode unless someone was running a pcap-based app (sudo lsof | grep pcap shows nothing)? I did not see any mention of promiscuous in any of the config files (sudo grep -r promis /etc). Any idea what puts the interface into that mode and why? P.S. Most of the posts I see seem to be security-related; this is not that.
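    Independent of how netstat renders its flags, the kernel's own view settles it; two read-only checks:

      ip link show eth1              # PROMISC appears in the bracketed flag list if the mode is really on
      dmesg | grep -i promiscuous    # the kernel logs "device eth1 entered promiscuous mode" when something enables it

    Bridging is a common non-security reason: when an interface is enslaved to a bridge (as virtualization hosts often do), the kernel itself puts the port into promiscuous mode.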

    Read the article

  • Decrease in disk performance after partitioning and encryption, is this much of a drop normal?

    - by Biohazard
    I have a server that I only have remote access to. Earlier in the week I repartitioned the 2-disk RAID as follows: Filesystem Size Used Avail Use% Mounted on /dev/mapper/sda1_crypt 363G 1.8G 343G 1% / tmpfs 2.0G 0 2.0G 0% /lib/init/rw udev 2.0G 140K 2.0G 1% /dev tmpfs 2.0G 0 2.0G 0% /dev/shm /dev/sda5 461M 26M 412M 6% /boot /dev/sda7 179G 8.6G 162G 6% /data The RAID consists of 2 x 300GB 15k SAS disks. Prior to the changes I made, it was being used as a single unencrypted root partition, and hdparm -t /dev/sda was giving readings around 240MB/s, which I still get if I do it now: /dev/sda: Timing buffered disk reads: 730 MB in 3.00 seconds = 243.06 MB/sec Since the repartition and encryption, I get the following on the separate partitions: Unencrypted /dev/sda7: /dev/sda7: Timing buffered disk reads: 540 MB in 3.00 seconds = 179.78 MB/sec Unencrypted /dev/sda5: /dev/sda5: Timing buffered disk reads: 476 MB in 2.55 seconds = 186.86 MB/sec Encrypted /dev/mapper/sda1_crypt: /dev/mapper/sda1_crypt: Timing buffered disk reads: 150 MB in 3.03 seconds = 49.54 MB/sec I expected a drop in performance on the encrypted partition, but not that much, and I didn't expect any drop in performance on the other partitions at all. The other hardware in the server is: 2 x Quad Core Intel(R) Xeon(R) CPU E5405 @ 2.00GHz and 4GB RAM. $ cat /proc/scsi/scsi Attached devices: Host: scsi0 Channel: 00 Id: 32 Lun: 00 Vendor: DP Model: BACKPLANE Rev: 1.05 Type: Enclosure ANSI SCSI revision: 05 Host: scsi0 Channel: 02 Id: 00 Lun: 00 Vendor: DELL Model: PERC 6/i Rev: 1.11 Type: Direct-Access ANSI SCSI revision: 05 Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: HL-DT-ST Model: CD-ROM GCR-8240N Rev: 1.10 Type: CD-ROM ANSI SCSI revision: 05 I'm guessing this means the server has a PERC 6/i RAID controller? The encryption was done with default settings during the Debian 6 installation. I can't recall the exact specifics and am not sure how to go about finding them. Thanks
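    Two measurements that may help separate the CPU cost of the cipher from a real disk regression; cryptsetup benchmark needs a newer cryptsetup (1.6+) than squeeze ships, so treat it as optional.

      cryptsetup benchmark                                                        # raw cipher throughput of this CPU
      dd if=/dev/mapper/sda1_crypt of=/dev/null bs=1M count=512 iflag=direct      # uncached read through the mapping
      dd if=/dev/sda7 of=/dev/null bs=1M count=512 iflag=direct                   # uncached read of a plain partition

    Also note that hdparm -t /dev/sda always reads from the very start of the disk (the fastest outer tracks), while /dev/sda5 and /dev/sda7 sit further in, so some drop on the partitions is expected even without encryption.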

    Read the article

  • Screen flicker during content update, especially in Firefox

    - by Denis Malinovsky
    I'm using the Nouveau video driver for my NVIDIA GeForce 6150SE nForce 430 video card with Ubuntu 10.04. The screen flickers frequently, especially when I'm loading pages with many images/banners in Firefox. I tried to use the proprietary NVIDIA driver, but it behaves even worse. The nv driver doesn't work at all. I also filed a bug report on Launchpad if you need any additional information.

    Read the article

  • Background process and SIGHUP

    - by Charles Salvia
    My understanding is that a program that is associated with a terminal will receive the SIGHUP signal if that terminal is closed, which usually terminates the program. I also know that you can use the nohup command along with the & symbol to run the program in the background and disassociate it from the terminal, so that the program is not terminated when the terminal closes (on logout). However, suppose a program is run normally without nohup, but is then suspended using Ctrl-Z. If the program is then resumed in the background using the bg command, will it receive the SIGHUP signal on logout? Or to put it another way: if I have a program which is already running, and I don't want to stop it but I'd like to log out, can I suspend it using Ctrl-Z and run it in the background using bg? Or will the program be terminated when I log out?
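    In Bash, the usual escape hatch for a job that is already running is disown, which settles the SIGHUP question by taking the job out of the shell's SIGHUP list; a sketch:

      # press Ctrl-Z first, then:
      bg %1            # resume job 1 in the background
      disown -h %1     # keep it in the job table, but do not send it SIGHUP when the shell exits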

    Read the article

  • How to limit reverse SSH tunneling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel using the ssh -R command from their web server at port 80 to our public server. The destination port (at the client side) of the reverse SSH tunnel is 80 and the source port (at the public server side) depends on the user. We are planning on maintaining a map of port addresses for each user. For example, client A would tunnel their web server at port 80 to our port 8000; client B from 80 to 8001; client C from 80 to 8002. Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver Basically, what we are trying to do is bind each user to a port and not allow them to tunnel to any other ports. If we were using the forward tunneling feature of SSH with ssh -L, we could control which port may be tunneled by using the permitopen=host:port configuration. However, there is no equivalent for reverse SSH tunnels. Is there a way of restricting reverse tunneling ports per user?
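    Later OpenSSH releases (7.8 and newer) did grow a reverse-tunnel counterpart to permitopen, so if upgrading is an option the mapping can be enforced server-side; a sketch using the example ports from the question:

      # /etc/ssh/sshd_config on the public server
      Match User clienta
          PermitListen 8000
      Match User clientb
          PermitListen 8001
      Match User clientc
          PermitListen 8002

      # or per key, in the matching ~/.ssh/authorized_keys entry
      permitlisten="8000" ssh-rsa AAAA... clienta-key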

    Read the article

  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a Fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with: with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30): sudo('apt-get -qy update') sudo('apt-get -qy install --no-install-recommends mdadm') # don't install postfix #etc... The apt-get update appears to run fine and gives no errors; however (2/3 of the time or so) installing mdadm throws a "no installation candidate" error. When I ssh into the server and run apt-get install mdadm I get the same error. After running apt-get update by hand, the package installs fine. Any ideas on what might be happening, or ideas for debugging?
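    A common cause on fresh EC2 Ubuntu instances is cloud-init still running its own apt work when the script connects, so the first update can silently lose the race. One hedged workaround is to wait for the apt/dpkg locks before updating; these are the shell commands that would go inside the existing sudo() calls.

      # wait until nothing else holds the dpkg/apt locks (cloud-init, unattended-upgrades, ...)
      while fuser /var/lib/dpkg/lock /var/lib/apt/lists/lock >/dev/null 2>&1; do sleep 5; done
      apt-get -qy update
      apt-get -qy install --no-install-recommends mdadm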

    Read the article

  • push commits to git (gitolite) repository messes up file permissions (no more trac access)

    - by klemens
    I already posted this here, so feel free to answer there. Every time I commit/push something to the git server, the file permissions change (all added/edited files in the repository have no read and execute access for the group), and thus Trac can't access the repository. Do I need to change the permissions of the folder differently (chmod u=rwx,g=rx,o= -R /home/git/repositories), or do I need to set up gitolite somehow to write files with different permissions? Regards, klemens
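    Assuming a v2-style gitolite install, the umask it writes new objects with is set in ~git/.gitolite.rc ($REPO_UMASK; gitolite v3 calls it UMASK), and the repositories that already exist can be fixed up once by hand. The paths below are from a default install and are assumptions for this box.

      # in /home/git/.gitolite.rc:  $REPO_UMASK = 0027;   (lets the group read what git writes from now on)
      # then repair what is already there:
      find /home/git/repositories -type d -exec chmod g+rx {} +
      find /home/git/repositories -type f -exec chmod g+r {} +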

    Read the article

  • How can I change the flow through this PAM (Pluggable Authentication Modules) file?

    - by Jamie
    I'd like the PAM stack to skip the pam_mount.so line when a Unix login succeeds. I've tried various things including: auth [success=2 default=ignore] pam_unix.so nullok_secure auth [success=2 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass auth requisite pam_deny.so auth requisite pam_permit.so auth required pam_permit.so auth optional pam_mount.so But I can't get it to work. Conversely, when a session shuts down, how can I modify the following so that an unmount command (via pam_mount.so) is avoided during a Unix login? session [default=1] pam_permit.so session requisite pam_deny.so session required pam_permit.so session required pam_unix.so session optional pam_winbind.so session optional pam_mount.so
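    For the auth half: the success=N control jumps over the next N modules, so one way to express "a pam_unix success skips pam_mount" is to count the jump so it lands past the mount line. An untested sketch (the winbind options are unchanged from above):

      auth  [success=3 default=ignore]  pam_unix.so nullok_secure
      auth  [success=1 default=ignore]  pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
      auth  requisite                   pam_deny.so
      auth  optional                    pam_mount.so
      auth  required                    pam_permit.so

    Here a pam_unix success skips winbind, deny and mount; a winbind success skips only deny, so pam_mount still runs; if both fail, pam_deny ends the stack.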

    Read the article

  • How to ask Debian not to check the last mount time of its file system?

    - by Landy
    I'm using Debian 6.0.5. To test a feature of my product, I need to change the system date and time back and forth frequently. One time I set the system date back to one month ago, then rebooted the system, and it reported that the last mount time of the file system was in the future and entered maintenance mode automatically. I had to run fsck to make sure the file system was not broken before I could boot into Debian. Is there any way to ask Debian to stop checking the last mount time of its file system when booting? Thanks.
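    If the clock jumping around is a permanent feature of this test box, e2fsck can be told to stop trusting time-based checks via /etc/e2fsck.conf; the broken_system_clock option exists in recent e2fsprogs (squeeze ships 1.41.12, which should have it), but treat this as a sketch.

      # append to /etc/e2fsck.conf (create the file if it does not exist)
      [options]
      broken_system_clock = 1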

    Read the article
