Search Results

Search found 6171 results on 247 pages for 'debian installer'.

Page 71/247 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • Unable to view users and groups

    - by Ewr Xcerq
    I am using CentOS 5 running on VMware, but whenever I open the User Manager from System > Administration, an error message appears: "The user database cannot be read. This problem is most likely caused by a mismatch between /etc/passwd and /etc/shadow or /etc/group and /etc/gshadow. The program will now exit." I am a Linux novice and have no idea how to fix this issue. Any help is appreciated. Thank you.

    Read the article
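
    For the question above, a minimal sketch of checking (and optionally repairing) the mismatch from a root shell; pwck and grpck are the standard shadow-utils tools on CentOS:

        # Report inconsistencies between /etc/passwd and /etc/shadow (read-only)
        pwck -r
        # Report inconsistencies between /etc/group and /etc/gshadow (read-only)
        grpck -r
        # Re-run both without -r to fix entries interactively, then retry User Manager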

  • Linux servers in a (primarily) Windows (AD) environment

    - by HannesFostie
    When I arrived at my current position, our environment consisted almost exclusively of Windows servers. However, I am a big fan of using Linux for certain applications, like the web gallery I was asked to set up, a simple SFTP server, Nagios for monitoring, etc. I do fine setting these up, but not being a Linux expert, I am not sure how to properly join these servers to the domain and was therefore wondering what procedures or guidelines other people follow. We often use ping -a to quickly figure out the hostname of a given server, but this does not seem to work for the Linux machines, most likely because of the whole WINS/NetBIOS thing, I assume. I just joined one server to the domain, but probably missed something, because it's not working even after a DNS flush. Apart from that, the couple of procedures I've found so far are pretty extensive and most of the time don't seem worth the effort. Best-case scenario: I download some kind of client (smbclient?), enter the domain name and maybe the server to use, supply an administrator password, and that's it. Is that possible at all? Thanks

    Read the article
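
    A minimal sketch of the usual Samba/winbind join on a Debian/Ubuntu-style host, assuming an AD domain called EXAMPLE.LOCAL and that /etc/krb5.conf and /etc/samba/smb.conf already point at it; registering the host in AD's DNS is what makes tools like ping -a resolve it:

        # Install the pieces (package names as on Debian/Ubuntu)
        apt-get install samba winbind krb5-user
        # Get a Kerberos ticket as a domain admin, then create the machine account
        kinit Administrator@EXAMPLE.LOCAL
        net ads join -U Administrator
        # Optionally register the host in AD's dynamic DNS so Windows-side lookups work
        net ads dns register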

  • How can I use sudo when I've logged in with an SSH key in PuTTY?

    - by Alex
    I know the title probably doesn't even make sense, but anyway. I downloaded PuTTY and set it up, and followed this tutorial to set up SSH keys so I don't have to enter a username or password when logging in over SSH. When I created the new user I used the --disabled-password option, since I wouldn't be needing a password... but now that I have given the user sudo rights I can't proceed, because sudo asks for the user's password and I don't have one. What do I do?

    Read the article
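
    A minimal sketch of the two usual ways out, assuming the account is called alex (a placeholder) and you still have access to a root shell or another admin account:

        # Option 1: simply give the account a password from a root shell
        passwd alex

        # Option 2: allow the account to sudo without a password.
        # Always edit sudoers with visudo, then add the line:
        #   alex ALL=(ALL) NOPASSWD:ALL
        visudo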

  • Postfix filter messages and pass to PHP script

    - by John Magnolia
    Each time a user signs up to our website through an external provider, we get a basic email whose body contains the user details. I want to send a personalised automatic reply to this user. The actual parsing of the email body and the reply via PHP I have already written, but how do I go about configuring this in Postfix? At the moment it is handled by a Roundcube Sieve plugin that moves the email into a folder called "Subscribe". Is it possible to create a custom action here? Debian Squeeze, Postfix and Dovecot.

    Read the article
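
    A minimal sketch of one common way to hand such messages to a script: deliver the address to a pipe via the local alias table (the address and script path are assumptions); Postfix passes the full message on the script's stdin:

        # /etc/aliases
        subscribe: "|/usr/bin/php /usr/local/bin/handle_signup.php"

        # Rebuild the alias database and reload Postfix
        newaliases
        postfix reload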

  • Someone used my Postfix SMTP server (port 25) to send spam to me

    - by Andreas
    This week, someone started sending spam through my Postfix SMTP server (I verified it by logging in over telnet from an arbitrary PC and sending mail with arbitrary sender addresses myself); both sender and recipient were [email protected]. Since I have a catch-all and mail forwarding to my Google account, I received all of those (many) mails. After a lot of configuration changes (I lost track of which change did what, going through dozens of topics here and around the net) that hole seems fixed. Still, what happened? Does port 25 need to be open and accepting mail for my catch-all to work? What did I configure wrong? I remember the first thing I changed (that had an effect) was the inet_interfaces setting in main.cf, only to find out later that if this does not say "all", my mail to mydomain.com does not get forwarded any more.

    Read the article
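
    A minimal sketch of the main.cf settings that normally keep Postfix from relaying for strangers while still accepting mail for your own domain (so the catch-all keeps working); the domain and values are assumptions:

        # /etc/postfix/main.cf
        inet_interfaces = all
        mydestination = mydomain.com, localhost
        # Only trust the local machine; never list whole public ranges here
        mynetworks = 127.0.0.0/8
        # Accept mail addressed to our own domains, require auth for relaying
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination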

  • How to version lock packages in Ubuntu?

    - by Sandra
    On CentOS there is the yum versionlock plugin, with which you can lock a package to a specific version so it is never upgraded past it. I would like puppet-server-2.7.19-1 and puppet-2.7.19-1 to stay on 2.7 and never be upgraded to 3.0. Puppet Labs have released 3.0 and put it into the stable repo, so 2.7 will otherwise get upgraded to 3.0, which is not backwards compatible. Does Ubuntu have something similar to yum versionlock?

    Read the article
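
    A minimal sketch of the two usual equivalents on Debian/Ubuntu; the package name and pin pattern are assumptions (on Ubuntu the relevant packages are typically puppet, puppet-common and puppetmaster):

        # Option 1: freeze the currently installed version entirely
        echo "puppet hold" | sudo dpkg --set-selections

        # Option 2: APT pinning - still allows 2.7.x point releases, blocks 3.x
        # /etc/apt/preferences.d/puppet (repeat the stanza per package)
        Package: puppet
        Pin: version 2.7.*
        Pin-Priority: 1001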

  • LMDE detects a wireless card, but can't use it

    - by Davidos
    LMDE can see my wireless card and correctly identifies it, but it refuses to let me turn it on through the shiny graphical interface. Is there a way to use the non-shiny terminal to turn my wireless on? One small tidbit I noticed is the stated version: it says the version is 00. I believe that's hexadecimal for 0, which may indicate something screwy with the software. It would be nice if someone could tell me how to figure out how to solve this kind of problem in the future. *-network DISABLED description: Wireless interface product: Centrino Wireless-N 1000 vendor: Intel Corporation physical id: 0 bus info: pci@0000:09:00.0 logical name: wlan0 version: 00 serial: 91:e3:7b:0d:a3:a9 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-1-amd64 firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:43 memory:e1d00000-e1d01fff I've tested multiple other network managers, and none of them work. The wireless switch is on, so I'm fairly sure it's a driver problem.

    Read the article
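
    A minimal sketch of checking for a soft or hard radio block from a terminal (the DISABLED flag in the lshw output usually means exactly that); rfkill ships with most Debian-based distributions:

        # Show the kill-switch state of every radio
        rfkill list
        # Clear a software block, then bring the interface up
        rfkill unblock wifi
        ip link set wlan0 up
        # Watch for iwlagn/firmware errors while doing so
        dmesg | tail -n 20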

  • Server down "without reason"

    - by Nick
    I have a Lenny dedicated server at Hivelocity. My server went down today. They don't know why. I don't know why. MRTG shows 7 Mbps before the server went offline, so a DDoS is not likely. Hardware failure? Maybe, but it is running fine now. Hacked? Maybe, but lastlog, md5sum, rkhunter, syslog and auth.log seem OK. My load is always between 0.02 and 0.3; the server runs a small website, but with 2 million pageviews/day, and it has never failed before. Where can I find more information in my logs? Where do I start looking?

    Read the article
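
    A minimal sketch of the first places to look after an unexplained outage; paths are the Debian Lenny defaults:

        # Was it a clean reboot, a power cycle, or a crash?
        last -x | head
        # Kernel messages around the outage: OOM killer, disk or RAID errors, panics
        grep -iE 'oom|panic|error|fail' /var/log/kern.log /var/log/syslog
        # Out-of-memory kills are a common "silent" cause on busy web servers
        grep -i 'out of memory' /var/log/messages
        # Disk health, if smartmontools is installed
        smartctl -a /dev/sda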

  • How to convert an OpenVZ OS template to a bootable image file?

    - by Medi
    My question is how to convert a pre-created OpenVZ OS template, which comes in tar.gz format (such as these), to an image file, in order to be able to boot it with other virtualization solutions such as QEMU or VirtualBox. To achieve this, I made an empty image file and partitioned it into two partitions: a primary partition and an extended partition for swap. I formatted the first partition as ext3 (0x83) and the other one as swap (0x82). Then I made the first one bootable and copied the contents of the tar.gz to the first partition. But when I try to boot, it hangs at the first stage of booting.

    Read the article
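
    A likely explanation, with a rough sketch: OpenVZ templates contain no kernel and no boot loader (containers run on the host's kernel), so the copied filesystem has nothing to boot. Assuming a raw image and the usual device names created by kpartx, the missing pieces have to be installed inside the image:

        # Map the image's partitions to /dev/mapper/loop0p1, loop0p2, ...
        kpartx -av disk.img
        mount /dev/mapper/loop0p1 /mnt/img
        tar -xzf template.tar.gz -C /mnt/img
        # Enter the image and install the missing pieces
        mount --bind /dev  /mnt/img/dev
        mount --bind /proc /mnt/img/proc
        chroot /mnt/img /bin/bash    # then install a distro kernel and GRUB, write /etc/fstab
        # Clean up
        umount /mnt/img/proc /mnt/img/dev /mnt/img
        kpartx -d disk.img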

  • Work around broken sudo?

    - by perreal
    I managed to break sudo by deleting the libc.so.6 symlink in /lib. I copied the actual file and created a symbolic link with the same name under my home directory, and I am working around the breakage with LD_PRELOAD=/lib/libc-2.11.3.so. At this point, all binaries linking libc work through the preload except sudo. For sudo, I need to run (and don't know why): $ /lib/ld-linux-x86-64.so.2 --library-path . /usr/bin/sudo but this gives me: $ sudo: must be setuid root Checking the permissions: $ ls -l /usr/bin/sudo $ -rwsr-xr-x 2 root root 166120 So the setuid bit is actually set. Question: I need to create a symbolic link named /lib/libc.so.6 over my active SSH connection without using sudo, or make sudo work somehow. I don't have the root password and I can't connect through ssh any more. Is there any other way I can get authorization?

    Read the article

  • How to get a list of Dovecot IMAP users

    - by Colt McCormack
    How do you get a list of users on a Dovecot email server who connect via IMAP (as opposed to POP)? Our server authenticates via LDAP/PAM. Is there an easy way to get a list of the users who access their mail via IMAP rather than POP? I am about to migrate our server to Google Apps and want to migrate all of the mail for my IMAP users only (a couple of hundred out of several hundred total users). POP mail will be migrated separately from the client end, obviously. I would much rather migrate only the IMAP users than the whole domain, which would mean migrating a bunch of useless leftover POP mail on the server that has already been read, sorted or deleted in the clients' email programs. Migrating all of that extra mail could waste weeks of migration time. I suppose parsing some logs to see who has connected on an IMAP port (143 or 993) would give me such a list, if someone could help me do that. I know I have the raw Dovecot logs, but am hoping for a cleaner solution.

    Read the article
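
    A minimal sketch of pulling the user names out of Dovecot's login lines, assuming syslog writes them to /var/log/mail.log and the usual "imap-login: Login: user=<name>" format:

        # Unique users who have logged in over IMAP
        grep 'imap-login: Login' /var/log/mail.log | grep -o 'user=<[^>]*>' | sort -u
        # The same for POP3, for comparison
        grep 'pop3-login: Login' /var/log/mail.log | grep -o 'user=<[^>]*>' | sort -u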

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to fetch pictures from a folder – with subfolders that are updated automatically – keeping their extensions. These files have to be copied into a folder where a PHP-based website will edit them (renaming them and creating an XML file) so that they can be downloaded and included in an XML feed. Because the script renames the files, when I run the copy again all the files are duplicated, since the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't track copied files with an external "history". #!/bin/bash find '/home/name/picture' -name '*.jpg' | while read FILE ; do rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ; done wget --spider 'http://myscript.php' ; #exit 0 PS: As a small addition, I'd like to replace '.' with a space just after the *.jpg copy. My PHP script has trouble working out the extension when the file name contains extra dots. I'm thinking about a find command – like the one above – combined with sed? Is that a good idea?

    Read the article
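
    A minimal sketch of the follow-up renaming step, replacing the dots in the base name with spaces while keeping the .jpg extension; the target directory follows the script in the question:

        #!/bin/bash
        # Turn "my.holiday.photo.jpg" into "my holiday photo.jpg"
        find /var/www/media -maxdepth 1 -name '*.jpg' | while read -r FILE ; do
            dir=$(dirname "$FILE")
            base=$(basename "$FILE" .jpg)      # name without the .jpg suffix
            new="${base//./ }.jpg"             # bash substitution: '.' -> ' '
            [ "$base.jpg" != "$new" ] && mv -n "$FILE" "$dir/$new"
        done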

  • Should I update the kernel on a Linux machine?

    - by Legate
    As I understand it, updating to a new kernel (with the normal linux-image... package, not by rolling my own) requires a server restart. However, one of our servers (Ubuntu 10.04) is running several extensive screen sessions. Restarting kills those, which is always a major hassle for their owners (mostly because of lost session history). What should I do? I see several possibilities: Not doing anything, that is, update only non-kernel packages (perhaps using apt pinning?). Update the kernel, but not restart (is that smart? I seem to remember there can be problems loading kernel modules). Update the kernel and restart; is there perhaps some way to preserve the screen sessions? I guess it ultimately boils down to this question: how important is it to update the kernel? I posted this question here instead of askubuntu.com as I think this is not an Ubuntu-specific issue, even though this server is running Ubuntu.

    Read the article
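
    A minimal sketch of the first option (keep everything else current, hold the kernel back) on Ubuntu 10.04; the exact image package names are assumptions and can be checked with dpkg -l 'linux-image*':

        # Put the kernel packages on hold so apt-get upgrade skips them
        echo "linux-image-server hold" | sudo dpkg --set-selections
        echo "linux-image-2.6.32-38-server hold" | sudo dpkg --set-selections
        # Everything else updates as usual
        sudo apt-get update && sudo apt-get upgrade
        # At the next maintenance window, release the hold, upgrade and reboot
        echo "linux-image-server install" | sudo dpkg --set-selections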

  • I lost /dev/md2 on my server

    - by sten
    Hi, my two hard drives apparently fried at the same moment. My hosting company rebooted my server in rescue mode and I am trying to recover my data. They told me to mount /dev/sda2 to recover the data I need but, looking at a similar server that I have in the pool, the data I'm looking for should instead be in /dev/md2. I can find /dev/md0 but not /dev/md2 (nor /dev/md1). I've looked in several places on the web and could only find posts explaining how to create new partitions. I just need to recover some of the data, not all of it, and I'd be glad if anyone could help me mount /dev/md2 (or suggest any other trick that would allow me to recover the data that was stored there). Thanks in advance, Sten

    Read the article
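
    A minimal sketch of reassembling the missing arrays from the rescue environment; the member device names are assumptions based on the question:

        # What does the rescue kernel currently know about?
        cat /proc/mdstat
        # Look for RAID superblocks on the likely member partitions
        mdadm --examine /dev/sda2 /dev/sdb2
        # Assemble every array whose members can be found
        mdadm --assemble --scan
        # If md2 appears (even degraded), mount it read-only and copy the data off
        mkdir -p /mnt/md2 && mount -o ro /dev/md2 /mnt/md2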

  • Xen virtual machine uses too high a percentage of CPU

    - by ki0
    Hi everyone, this is my question. I have a Xen server with 8 CPUs and 6 virtual machines running; each virtual hard disk lives on a different physical hard disk. Everything worked fine, but sometimes one virtual machine grabs almost all the CPU: while Domain-0 at 90% is normal, that virtual machine sits at 500% CPU usage. I have verified that it does not depend on who is working on the VM; even when nobody is using the server this still happens. I don't know what is happening. Does anyone have any idea, or has the same thing happened to anyone else?

    Read the article
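
    A minimal sketch of inspecting and capping a guest with the Xen credit scheduler; the domain name is a placeholder:

        # Which guest is burning CPU, and on which physical cores?
        xm top
        xm vcpu-list
        # Show the guest's current weight and cap (cap 0 = unlimited)
        xm sched-credit -d myguest
        # Cap the guest at 200, i.e. at most two physical CPUs' worth of time
        xm sched-credit -d myguest -c 200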

  • Run preseed commands as specific user / switching users

    - by pduersteler
    Besides the usual setup where I create a normal user foo, I want to run a few d-i preseed/late_command commands as that foo user. My initial thought was simply to call those commands with sudo, e.g.: d-i preseed/late_command in-target echo "<pwd>" | sudo -Si <command>. This works for some kinds of commands. However, the problem is that some of the commands load shell scripts which must not be run with sudo. Issuing su -c "<command>" would be an alternative, but su does not offer the possibility of reading the password from stdin. Is it safe to jump around between users using su (and if yes, how do I provide the stdin, and does it work at all or just result in "su: must be run from a terminal"), or would this cause issues?

    Read the article
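
    A minimal sketch of the usual trick: late_command itself runs as root inside the target, and su invoked by root never asks for a password, so no stdin gymnastics are needed (the user name and command are placeholders):

        # in the preseed file
        d-i preseed/late_command string \
            in-target su - foo -s /bin/bash -c 'touch /home/foo/ran-as-foo'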

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold: reduce storage utilization, and reduce the bandwidth needed to sync the library with cloud storage. Many of these files are of the same music tracks but ripped from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports a dedup ratio of 1.00. If I copy all of the files (making exact duplicates of the three), the ratio climbs, so I know dedup is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedup filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedup for this project; perhaps it's simply not capable of working well with this sort of data, though. Any feedback on tuning that might improve dedup performance for this sort of dataset, or confirmation that ZFS dedup is not the right tool for this job, would be appreciated.

    Read the article
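
    A minimal sketch of measuring whether block-level dedup can ever pay off here, assuming a pool named tank and a dataset tank/flac; zdb -S simulates dedup against the existing data without turning it on:

        # Simulate dedup over the data already in the pool and print the projected ratio
        zdb -S tank
        # Dedup and the checksum it relies on are per-dataset properties
        zfs set dedup=on tank/flac
        zfs get recordsize,checksum,dedup tank/flac
        # Note: dedup only matches identical, identically-aligned records (128K by
        # default), so FLAC files whose headers differ by even a few bytes shift
        # every subsequent block and rarely dedup at all.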

  • Why doesn't the value in /proc/meminfo seem to map exactly to the system RAM?

    - by Eric Asberry
    The MemTotal value in /proc/meminfo doesn't make sense to me. Eyeballing it, it roughly corresponds to the installed RAM, but for an automated utility that reports the installed RAM it is inexact and inconsistent. For a system with 1G of RAM, I would expect the MemTotal line to have a value of 1048576 (1024*1024). But instead, I'm seeing 1029392. On another 4G box, I'm seeing 3870172, which is not a multiple of 1024 and is not even close to 1029392*4. On an 8G box, I get 8128204, which again seems to have no correlation to the other values, nor is it a multiple of 1024. I'm trying to use this information to report the RAM on a status web page. My workaround is to round it to the nearest 1G multiple, but I'd like to understand why these values seem inconsistent and don't match my expectations. Can somebody fill me in on what I'm missing here? EDIT: To expand on the accepted answer below... The reference can be found here. Also of interest to me from that page, which explains the inconsistency, is this bit: meminfo: Provides information about distribution and utilization of memory. This varies by architecture and compile options. ...

    Read the article
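
    A minimal sketch of seeing where the "missing" memory goes: MemTotal is what remains after the kernel reserves space for its own code and data structures and after firmware carves out regions, which is why it is neither a round power of two nor consistent across machines:

        # Physical modules as reported by the BIOS (the round number you expect)
        sudo dmidecode -t memory | grep -i '^[[:space:]]*Size:'
        # What the kernel actually has available to hand out
        grep MemTotal /proc/meminfo
        # The boot log itemizes the reservations (kernel code, reserved, data, init)
        dmesg | grep -i 'Memory:'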

  • Question marks showing in ls of a directory, and I/O errors too

    - by jaymoo
    Has anyone seen this before? I've got a RAID 5 array mounted on my server and for whatever reason it started showing this: jason@box2:/mnt/raid1/cra$ ls -alh ls: cannot access e6eacc985fea729b2d5bc74078632738: Input/output error ls: cannot access 257ad35ee0b12a714530c30dccf9210f: Input/output error total 0 drwxr-xr-x 5 root root 123 2009-08-19 16:33 . drwxr-xr-x 3 root root 16 2009-08-14 17:15 .. ?????????? ? ? ? ? ? 257ad35ee0b12a714530c30dccf9210f drwxr-xr-x 3 root root 57 2009-08-19 16:58 9c89a78e93ae6738e01136db9153361b ?????????? ? ? ? ? ? e6eacc985fea729b2d5bc74078632738 The md5-like strings are actual directory names, not part of the error. The question marks are odd, and any directory with a question mark throws an I/O error when you attempt to use or delete it. I was unable to umount the drive because it was "busy". Rebooting the server "fixed" it, but it threw some RAID errors on shutdown. I have configured two RAID 5 arrays and both started doing this on random files. Both are using the following config: mkfs.xfs -l size=128m -d agcount=32 mount -t xfs -o noatime,logbufs=8 Nothing too fancy, but part of an optimized config for this box. We're not partitioning the drives, and that was suggested as a possible issue. Could this be the culprit?

    Read the article
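
    A minimal sketch of the usual follow-up: confirm the RAID itself is healthy, then check the XFS filesystem offline; the md device name is an assumption, and xfs_repair must only be run on an unmounted filesystem:

        # A degraded or failing array is the classic cause of these I/O errors
        cat /proc/mdstat
        mdadm --detail /dev/md0
        # Filesystem check: dry run first, then repair once you have a backup
        umount /mnt/raid1
        xfs_repair -n /dev/md0     # -n: report problems, change nothing
        xfs_repair /dev/md0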

  • BackupPC backup has not finished after 12 hours(!)

    - by chronoz
    I installed BackupPC today on a server and set it to do a backup 12 hours ago... it has been backing up ever since, but it seems very, very slow and is not complete yet. It is just backing up a test server with a total disk usage of 1.8 GB. What could cause the backup process to be so slow? rsnapshot always worked wonderfully fast, but I want to improve my backup solution. df shows that the used space on the backup disk is actually still increasing.

    Read the article

  • Routing Traffic With OpenVPN

    - by user224277
    A few minutes ago I configured my VPN server, and I can now connect to my VPN, but all traffic is still going through my normal home network. The OpenVPN client shows: Server IP: **.185.***.*10 Client IP: 10.8.0.6 Traffic: 7.3 KB in, 5.6 KB out Connected: 10 June 2014 19:21:59 So everything is connected, but how can I set things up on Windows 7 so that all traffic goes through the OpenVPN network adapter? Client config: client dev tun proto udp # enter the server's hostname # or IP address here, and port number remote **.185.***.*10 1194 resolv-retry infinite nobind persist-key persist-tun # Use the full filepaths to your # certificates and keys ca ca.crt cert user1.crt key user1.key ns-cert-type server comp-lzo verb 6 Server config: port 1194 proto udp dev tun # the full paths to your server keys and certs ca /etc/openvpn/keys/ca.crt cert /etc/openvpn/keys/server.crt key /etc/openvpn/keys/server.key dh /etc/openvpn/keys/dh2048.pem cipher BF-CBC # Set server mode, and define a virtual pool of IP # addresses for clients to use. Use any subnet # that does not collide with your existing subnets. # In this example, the server can be pinged at 10.8.0.1 server 10.8.0.0 255.255.255.0 # Set up route(s) to subnet(s) behind # OpenVPN server push "dhcp-option DNS 8.8.8.8" push "dhcp-option DNS 8.8.4.4" ifconfig-pool-persist /etc/openvpn/ipp.txt keepalive 10 120 status openvpn-status.log verb 6 and sysctl: net.ipv4.ip_forward=1 Thank you for your time and help.

    Read the article
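
    A minimal sketch of the standard way to push all client traffic through the tunnel: have the server push a default-route override and NAT the VPN subnet on its way out (the subnet matches the server config above; the outbound interface eth0 is an assumption):

        # server config - add this line, then restart OpenVPN
        push "redirect-gateway def1 bypass-dhcp"

        # on the server: masquerade traffic arriving from the tunnel subnet
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        # (net.ipv4.ip_forward=1 is already set via sysctl in the question)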

  • In Ubuntu Linux, how do I list packages installed from the “universe” repository?

    - by Nate
    On an Ubuntu 10.04 LTS server, I want to list installed packages and see what repository they come from. It’s easy to list installed packages, but it does not include the name of the repository (such as “main” or “universe”). And this information isn’t in /var/lib/dpkg/status, so dpkg-query doesn’t show it either. I want to get a list of “unsupported” software—that is, software that doesn’t come from the “main” repository, and for which Ubuntu does not guarantee security updates. Note: This is a server. It does not have X, GNOME or KDE installed.

    Read the article
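
    A minimal sketch of one way to answer this with the stock tools on 10.04: ask apt-cache policy where the installed version of each package comes from and keep only those whose origin is not "main" (assumes the standard archive.ubuntu.com repository layout):

        #!/bin/bash
        # List installed packages whose installed version does not come from "main"
        dpkg -l | awk '/^ii/{print $2}' | while read -r pkg ; do
            origin=$(apt-cache policy "$pkg" | awk '/\*\*\*/{getline; print; exit}')
            case "$origin" in
                *"/main "*) ;;                      # supported: skip
                *) echo "$pkg -> $origin" ;;
            esac
        done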

  • How can I start new window in the same screen session automatically?

    - by Mato
    I read "How can I start multiple screen sessions automatically?", but I don't understand the first accepted reply: screen -dmS "$SESSION_NAME" "$COMMAND" "$ARGUMENTS". In my case I need to automatically create one screen session for one script, and afterwards I need to create a new window in the same session for another script. Manually, I would: run screen, enter the command, press CTRL+A CTRL+C, enter the other command, press CTRL+A CTRL+D. How can I do this automatically from a script? A simple example would help me a lot. Thank you for your replies.

    Read the article
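
    A minimal sketch: -dmS starts a detached, named session, and -X screen asks that running session to open an additional window running a command (the session name and script paths are placeholders):

        #!/bin/bash
        # Start a detached session whose window 0 runs the first script
        screen -dmS mysession /path/to/first.sh
        # Open a new window in the same session for the second script
        screen -S mysession -X screen /path/to/second.sh
        # Later: attach with "screen -r mysession", detach again with CTRL+A d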

  • Availability of big files on multiple servers

    - by Imises
    I have to handle many (1'000 - 30'000) big files ranging from 200 MB up to 2 GB. The demand for these files is variable (0 - 300 downloads per file). This is why each file must be stored on 2 or more servers. My servers are placed in different datacenters (in France), with different-sized HDDs (750 GB to 4 TB). Currently I distribute the files using PHP and ncftpget/ncftpput, but it's very slow. I need a solution for balancing these files across 7+ servers.

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >