Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

  • CentOS 6 init script doesn't work properly

    - by user711643
    I'm setting up my Ruby production server based on CentOS 6. I need a process called god (a process-monitoring tool) to start at boot, using an init script that I found here. Just as stated in the guide, I ran "chkconfig --add god" and then "chkconfig --level 345 god on". After this, if I run "service god start|restart" everything works: it loads the available configurations and brings up the related processes (if they are not running). The problem is that it doesn't work at boot. If I reboot the system and then run "ps aux | grep god", god is running but apparently didn't load the configuration files. If I run "service god restart" again, it loads everything without problems. What am I doing wrong?
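
    A pattern like this - running but unconfigured at boot, fine when restarted by hand - usually means the init script fires before something it needs is ready: the filesystem holding the god config isn't mounted yet, the network isn't up, or the boot environment lacks the PATH/HOME the config assumes. One hedged thing to check is the chkconfig header at the top of /etc/init.d/god; the priorities below are only an illustration:

      # top of /etc/init.d/god -- a start priority of 99 runs god near the
      # end of the boot sequence, after networking and local filesystems
      # chkconfig: 345 99 01
      # description: god process-monitoring daemon

      # re-register so the new priorities take effect
      chkconfig --del god && chkconfig --add god

    Using absolute paths inside the god config rules out the environment issue as well.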

  • Using a remote station as the original

    - by Neka
    I have two computers with exactly the same Debian, config, apps, and other stuff - one at work and one at home. It's inconvenient to maintain the same configuration on both stations: upgrading the OS, syncing configuration, and so on. Is there a way to use my home station as the "host" and the work machine as a "terminal"? As if I had one HDD shared between two computers, with each machine still using its own resources such as the video card. It sounds like I need a remote tool such as VNC, but this is not a one-off session - I need to use the "terminal" machine as if it were the original all of the time.

  • How do I properly edit hosts, hostname, and resolv.conf?

    - by Firewall
    I've been searching the internet for a real noob tutorial on this subject but couldn't find any direct information on how to edit these files the proper way. I have a Debian internet server that hosts some personal domains and runs Squid and rTorrent. The server is up and running with no problems, but I'm confused about a few things. Let's say I named my server "foo", my domain is "example.com", and my public IP is 95.211.133.200. Now: should /etc/hostname contain "foo.example.com", or just "foo" (the server name alone)? Should /etc/hosts contain "127.0.0.1 localhost.localdomain localhost" and "95.211.133.200 foo.example.com foo"? Should /etc/resolv.conf contain (along with the nameservers) both "domain example.com" and "search example.com", or just the first one? Are there any other files I should edit to make things right? Last thing: the command domainname returns "(none)"; I believe it should return "example.com". What should I do to correct that?
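
    For reference, a conventional Debian layout would be roughly the following sketch (names taken from the question; the nameserver value is a placeholder):

      # /etc/hostname -- the short name only
      foo

      # /etc/hosts
      127.0.0.1       localhost.localdomain localhost
      95.211.133.200  foo.example.com foo

      # /etc/resolv.conf -- "search" and "domain" are mutually exclusive;
      # if both appear, the last one read wins, so keep just one
      search example.com
      nameserver 95.211.x.x

    Also note that domainname(1) reports the NIS/YP domain, which is legitimately "(none)" on most servers; "hostname -d" or dnsdomainname(1) is what should print example.com.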

  • dhclient.conf: Send two host-names to the DHCP server?

    - by RobM
    Already working:
      - a Debian box using DHCP, with "send host-name me.company.com" in dhclient.conf
      - DNS updates automatically with an entry for me.company.com
    What I want to add:
      - a second host-name sent along, so both are automatically registered with DNS
    In other words: I want a DHCP client to register with DNS twice under different names, preferably without having to maintain the DNS records manually. Is this even possible with DHCP?
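
    dhclient only sends a single host-name (or FQDN) per interface, so the protocol itself won't register two names for one lease. A hedged workaround: let DHCP/DDNS register the first name as usual, and add the second as a CNAME from a dhclient exit hook using nsupdate. The hook path is Debian's convention; the names and the DNS server's willingness to accept these updates are assumptions:

      # /etc/dhcp/dhclient-exit-hooks.d/second-name
      # after a lease is bound, point the second name at the first
      case "$reason" in
        BOUND|RENEW|REBIND)
          nsupdate <<'EOF'
      server dns.company.com
      update delete me2.company.com. CNAME
      update add me2.company.com. 3600 CNAME me.company.com.
      send
      EOF
          ;;
      esac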

  • VPS stops responding every now and again

    - by Or W
    I have a Linode VPS that I use to host some of my websites. It's Ubuntu-based and up to date in terms of packages. I don't have any cron jobs scheduled or automatic processes. I host a few (up-to-date) WordPress blogs there that get very little traffic altogether. Every day, at a different time, the server stops responding: I can't SSH to it, web access times out, and it just stays dead until I reboot it through the Linode manager. On the Linode dashboard I can see that CPU is not very high (2-3%), incoming/outgoing traffic is at 0, and the IO count spikes just before the server stops responding (swap IO at 2k and IO rate at 5k). After I reboot, everything is fine again. I'm trying to figure out a way to analyze what's going on at these random times when the server freezes up. How can I determine the problem?
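
    The swap-IO spike right before each freeze points at memory exhaustion: an OOM/swap thrash would explain SSH and HTTP dying while the Linode itself still counts as "running". A hedged first step is to check for OOM kills after the next freeze and leave some lightweight logging running (log paths are Ubuntu's):

      # after rebooting from a freeze, look for out-of-memory kills
      grep -i -E 'oom|out of memory' /var/log/syslog /var/log/kern.log

      # record memory/swap once a minute so the run-up to the next
      # freeze is captured on disk
      while true; do
        { date; vmstat 1 5; } >> /var/log/vmstat.log
        sleep 60
      done &

    If the OOM killer shows up in the logs, shrinking Apache/PHP worker counts (or adding swap) is the usual next move on a small VPS.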

  • CSF Unresolved issue

    - by josephmarhee
    I began receiving service failures for CSF/LFD once the limit was reached in iptables, preventing the service from working properly. I flushed all iptables rules and redid my rules using CIDR blocks rather than the individual IPs that were listed, but the issue persists. The error is: "The VPS iptables rule limit (numiptent) is too low (1527/1536) - stopping firewall to prevent iptables blocking all connections, at line 1459". This is after restarting CSF, which gave me: "You have an unresolved error when starting csf. You need to restart csf successfully to remove this warning". CSF still seems to be trying to enforce rules that no longer exist (it lists entire chains when restarted, only to fail with that error). Any idea what's going on?
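
    On OpenVZ/Virtuozzo VPSes, numiptent caps the total number of iptables entries the container may hold, and CSF's deny lists can exhaust it by themselves. A hedged mitigation is to cap the deny lists in /etc/csf/csf.conf well below the container limit (the setting names are from stock csf.conf; the values are illustrative):

      # /etc/csf/csf.conf
      DENY_IP_LIMIT = "100"       # max permanent entries in csf.deny
      DENY_TEMP_IP_LIMIT = "50"   # max temporary (LFD) deny entries

      csf -df    # drop all existing permanent deny entries
      csf -r     # rebuild the ruleset from scratch

    If the count still creeps toward the limit, asking the VPS provider to raise numiptent is the other half of the fix.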

  • Getting PAM/user info into PHP - something like Net_Finger instead of a DB?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check, and then move specific files to a different area when ready. Right now I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I already have the info to send uploads to the right area - one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email, and send them an email when processing is done. Sure, I could use NoSQL or SQLite instead so as to not need a whole MySQL install. But it occurred to me that I've got all these blank user fields for phone numbers that I could populate with any data, and that I could read them with something like PHP's Net_Finger. Which failed for me with: sudo pear install Net_Finger Starting to download Net_Finger-1.0.1.tgz (1,618 bytes) ....done: 1,618 bytes could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz" Download of "pear/Net_Finger" succeeded, but it is not a valid package archive Error: cannot download "pear/Net_Finger" At which point I thought I'd stop and take a ServerFault reality check: is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
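
    If the goal is only to read the GECOS fields (full name, office, phone numbers) for users who have already authenticated via PAM, PHP's POSIX extension can pull them straight out of the passwd database - no finger daemon, PEAR package, or separate DB required. A quick sketch from the shell (posix_getpwnam() is a standard PHP function; the username is a placeholder):

      php -r 'print_r(posix_getpwnam("someuser"));'
      # the "gecos" element holds the comma-separated fields
      # (full name, room, phones) that chfn(1) maintains

    Email addresses have no dedicated passwd field, but stashing them in an unused GECOS slot - the idea in the question - keeps everything in one place at the cost of slightly abusing the field.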

  • How do I configure a swap partition using swapspace?

    - by jcalfee314
    I finally have the swapspace project installed and running (via init.d). The purpose is to have a dynamically resizing swap partition, but I'm clueless about how to use it. It has good documentation but just doesn't cover that last step: how do I configure a swap partition using swapspace? The process is probably the same for any third-party program that provides a swap-space implementation to the kernel. I know this was intended to run as a daemon because the project provides an init.d script.
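
    For what it's worth, swapspace doesn't manage a swap partition at all: it creates, resizes, and removes swap files (by default under /var/lib/swapspace) and enables them with swapon as memory pressure changes, so once the daemon runs there may be nothing left to configure. A hedged way to confirm it is working (paths per the project's defaults; verify against your install):

      swapon -s                     # active swap devices/files, including
                                    # anything swapspace has created
      ls -lh /var/lib/swapspace/    # its swap files live here by default
      # tunables (swap path, min/max sizes) sit in /etc/swapspace.conf

    An existing swap partition can coexist with it; swapspace just adds file-backed swap on top when the partition runs short.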

  • DHCP server can't see other machines

    - by William
    I set up a private network of virtual machines, and one of the machines is the DHCP server for the group. I want to specify a next-server for the DHCP server, but I'm having trouble connecting to any of the machines I lease IPs to. I'm just trying a simple ping/ssh to 10.0.0.252 (a machine with a lease), but it doesn't respond. Any advice? I'm assuming I need to be able to connect to my next-server, but maybe I'm wrong. Thanks.
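
    One point worth clearing up: next-server only tells PXE/network-boot clients which TFTP host to fetch boot files from; it has no effect on ordinary reachability. The ping failure is more likely a guest firewall or virtual-switch issue. A hedged triage from the DHCP server (the interface name is a guess):

      arp -n | grep 10.0.0.252          # no ARP entry => a layer-2 problem
      tcpdump -ni eth0 host 10.0.0.252  # do echo requests leave, and does
                                        # anything ever come back?
      # on the leased VM itself, check for a firewall dropping ICMP/SSH:
      #   iptables -L -n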

  • Backing up an 80G hard drive 1G per day

    - by barrycarter
    I want to securely back up my 80G HD, but doing a complete backup takes forever and slows down my machine, so I want to back up just 1G per day. Details:
      - First hurdle: on the first day, I want to back up the "first" 1G of the hard drive. Of course, there really is no "first" 1G on a hard drive.
      - After 80 days, I'll have my whole HD backed up... assuming none of my files ever change, which of course they do. So the backup plan/program must also catch file creations/changes as they come along.
      - The backups must be consistent, in that I can restore my system by restoring the backups sequentially. In other words, "dd if=/harddrive" probably won't work.
      - The backups should encrypt file contents AND names, but I don't see this as a major hurdle.
      - Once the backup has backed up everything (even changed files), it can re-back up the first 1G of the drive. Even though this backup is redundant, that's OK, because I always want to be backing up something (e.g., if I'm backing up to optical media, the older media might start going corrupt).
    Is there a magic backup plan/program that does this? In reality, I want to do this for multiple machines with multiple drives each, but I think solving the above solves the general case.
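
    duplicity covers most of this wishlist: GPG-encrypted archives (contents and file names alike), incremental chains that restore in sequence, and output split into fixed-size volumes that suit optical media. It has no built-in "1G per day" throttle, but one full backup followed by daily incrementals approximates the plan. A hedged sketch (paths and passphrase handling are illustrative):

      # day 0: encrypted full backup, split into 256 MB volumes
      PASSPHRASE=... duplicity full --volsize 256 /home file:///mnt/backup

      # every following day: an encrypted incremental of whatever changed
      PASSPHRASE=... duplicity incr /home file:///mnt/backup

      # restore replays the full backup plus incrementals in order
      PASSPHRASE=... duplicity restore file:///mnt/backup /tmp/restored

    The "re-back up the first 1G" wish maps naturally to scheduling a fresh full backup every N weeks, so old chains can be verified and retired.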

  • USB webcam only works once, and I have to reboot before it works again

    - by user30262
    I'm using Ubuntu 9.10 and a USB webcam that lsusb shows as "Bus 001 Device 005: ID 0ac8:3450 Z-Star Microelectronics Corp.". The problem is that after connecting the cam, it only works with the first program I start (Skype, Tokbox, Messenger); if I disconnect it or switch to another program, it stops working and I have to restart the computer to make it work again. Has anyone else noticed this behaviour? Is there a good way to reset the camera without rebooting?
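
    Rebooting mostly serves to reload the camera driver, so reloading the kernel module by hand may be enough. For a ZC3xx-based Z-Star device the driver is probably gspca_zc3xx on a 9.10-era kernel - treat the module name as an assumption and confirm it with lsmod first:

      lsmod | grep -i -E 'gspca|zc3'   # which driver does the cam use?
      sudo modprobe -r gspca_zc3xx     # unload (close all camera apps first)
      sudo modprobe gspca_zc3xx        # reload; the device should reset

    If the trigger is a second program grabbing /dev/video0, the deeper issue is that only one process can hold the device at a time.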

  • Opening an application in the background without losing current window focus (Fedora 17, Gnome 3)

    - by Ishan
    I'm running a script in the background which loads an image with feh depending on which application is currently in focus. However, whenever the script opens the image, window focus is lost to feh. I was able to circumvent this by using xdotool to switch back to the application that was originally in focus, but this introduces a short annoying period of time where the focus is switched from feh to the application. My question is this: is there any way to launch feh in the background such that window focus is NOT lost? System: Fedora 17, Gnome 3, Bash Thanks a ton!
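
    feh has no "don't take focus" switch as far as I know, but the xdotool gap can be shrunk by capturing the active window id before feh launches and handing focus straight back, rather than searching for the old window afterwards - a hedged refinement of the workaround already in use:

      #!/bin/bash
      # remember the window that currently has focus
      active=$(xdotool getactivewindow)
      feh --borderless "$1" &
      # return focus immediately; --sync waits for the switch to land
      xdotool windowactivate --sync "$active"

    The flicker shrinks to however long the window manager takes to process the two activations, which is usually much shorter than search-then-activate.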

  • Autosaving files in Emacs or XEmacs (preferably on loss of focus)

    - by Spencer
    Ideally I want to replicate in Emacs some functionality from TextMate, whereby on loss of focus (i.e., when I click away from the buffer) my file saves. If this isn't possible, I want to customize Emacs so that it autosaves the file after every character I write. By this I don't mean autosaving to the ~ backup files; I want to save the file I am currently working on. I am working on a Fedora VM. Note that I am not looking for a backup or a conventional autosave: I want the file I am actually in to save, so that if I loaded the HTML file I am editing in a web browser, it would reflect my new changes without me having to save explicitly.
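
    Two built-in routes worth trying, sketched here for ~/.emacs: auto-save-visited-file-name redirects Emacs's auto-save to the visited file itself rather than the #file# copy, and focus-out-hook (present only in newer Emacsen - an assumption for the versions in play here) fires exactly on loss of focus:

      ;; make auto-save write the real file, not the #auto-save# copy
      (setq auto-save-visited-file-name t)

      ;; save all modified file buffers whenever the frame loses focus
      (add-hook 'focus-out-hook
                (lambda () (save-some-buffers t)))

    Between the two, the focus hook matches the TextMate behaviour most closely.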

  • SSH can't connect after the server IP changed

    - by Kery
    I have a server with Ubuntu installed. After I changed the network configuration and restarted the server, SSH clients can't connect to it any more. From the server itself, though, I can use an SSH client to connect to it, and netstat shows that sshd is listening on port 22. From my computer (Windows 7), pinging the server's new IP works fine. The configuration in /etc/network/interfaces is:

      auto eth0
      iface eth0 inet static
          address 10.80.x.x
          netmask 255.255.255.0
          gateway 10.80.x.1

    I'm very confused by this. I hope somebody can give me some ideas. Thank you in advance!
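
    Since sshd is listening and ping succeeds, the block is probably at the TCP layer - a firewall rule or TCP wrapper still tied to the old subnet, or something stale on the client side. A hedged triage:

      # on the server: any rules or wrappers pinned to the old address?
      iptables -L -n -v
      cat /etc/hosts.allow /etc/hosts.deny   # sshd honours TCP wrappers

      # from another Linux box, the verbose client names the failing step
      ssh -v user@10.80.x.x

    From the Windows side, PuTTY's event log (or a raw "telnet 10.80.x.x 22") shows whether the TCP connection opens at all before authentication starts.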

  • Removing junk characters from a server console log

    - by Jayakrishnan T
    As the attached screenshot showed, when I open my server console log file (around 100 MB) with vi, it takes more than two minutes to load and is full of special characters. After deleting the first line (typing "dd"), I can view the file easily and its size shrinks dramatically. The server runs RHEL 5.4 with JBoss. Please help me stop these junk characters from reaching my server console log; it would also save valuable space on the server.
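
    Those "junk" bytes are almost certainly ANSI colour/cursor escape sequences written by JBoss (or something wrapping it) to the console appender, and one enormous first line full of them is exactly what makes vi choke. Two hedged moves - clean what exists, then stop it at the source (the appender config location is JBoss-5-era; check your install):

      # strip ANSI escape sequences from an existing log (GNU sed)
      sed -r 's/\x1B\[[0-9;]*[A-Za-z]//g' console.log > console.clean.log

      # at the source: in conf/jboss-log4j.xml, make sure the CONSOLE
      # appender uses a plain org.apache.log4j.PatternLayout and that
      # nothing injects colour codes; alternatively, log to the FILE
      # appender only and redirect the console output to /dev/null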

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disc crash of one of two hard discs in a software raid with LVM on top. The server is running Citrix XenServer. On the hard disk which is still intact, the volume group gets detected well, but only one LV is left. (Some hashes replaced by "x".)

      # lvdisplay
        --- Logical volume ---
        LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
        VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
        LV UUID                x-x-x-x-x-x-vQmZ6C
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                4.00 MiB
        Current LE             1
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:0

      root@rescue ~ # vgdisplay
        --- Volume group ---
        VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                1
        Open LV               0
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               698.62 GiB
        PE Size               4.00 MiB
        Total PE              178848
        Alloc PE / Size       1 / 4.00 MiB
        Free PE / Size        178847 / 698.62 GiB
        VG UUID               x-x-x-x-x-x-53w0kL

    I could understand if a full physical volume were lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

    EDIT: We are in a rescue system here. The problem is that the whole server does not boot (GRUB error 22). What we are trying to reach is the root filesystem, but everything was in the LVM. We have only this:

      (parted) print
      Model: ATA SAMSUNG HD753LJ (scsi)
      Disk /dev/sdb: 750GB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos

      Number  Start   End    Size   Type     File system  Flags
       1      32.3kB  750GB  750GB  primary               boot, lvm

    And this 750GB LVM volume is exactly what we see on top.
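
    LVM keeps automatic metadata snapshots in /etc/lvm/archive (and the current state in /etc/lvm/backup) on the system's root filesystem. If the rescue environment can reach those files - which may itself be the catch here, since the root FS lived inside the LVM - the vanished LVs can often be restored from metadata alone. The standard, hedged vgcfgrestore procedure, to be run only after imaging the surviving disk:

      # list the metadata snapshots LVM archived for this VG
      vgcfgrestore --list VG_XenStorage-x-x-x-x-408b91acdcae

      # restore the newest snapshot that still describes all the LVs
      # (<archive-file> is a placeholder for the file chosen above)
      vgcfgrestore -f /etc/lvm/archive/<archive-file>.vg \
          VG_XenStorage-x-x-x-x-408b91acdcae
      vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae

    Only the metadata is rewritten; the data extents are untouched, which is why this can work when the surviving disk itself is healthy.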

  • How do I install Apache Portable Runtime?

    - by apache
    My Apache was installed with "yum install apache", and now I'm trying to install a Subversion server from source, following the instructions here. But when I try to configure, I get an error:

      [root@vps303 subversion-1.6.9]# ./configure
      configure: Configuring Subversion 1.6.9
      configure: creating config.nice
      checking for gcc... gcc
      checking for C compiler default output file name... a.out
      checking whether the C compiler works... yes
      ...
      checking for APR... no
      configure: WARNING: APR not found
      The Apache Portable Runtime (APR) library cannot be found.
      Please install APR on this system and supply the appropriate
      --with-apr option to 'configure' or get it with SVN and put it in a
      subdirectory of this source:

         svn co \
            http://svn.apache.org/repos/asf/apr/apr/branches/1.2.x \
            apr

      Run that right here in the top level of the Subversion tree.
      Afterwards, run apr/buildconf in that subdirectory and then run
      configure again here.

      Whichever of the above you do, you probably need to do something
      similar for apr-util, either providing both --with-apr and
      --with-apr-util to 'configure', or getting both from SVN with:

         svn co \
            http://svn.apache.org/repos/asf/apr/apr-util/branches/1.2.x \
            apr-util

      configure: error: no suitable apr found

    How do I get around this problem? By the way, will both the client and the server software be installed when compiling from source?
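
    Since Apache itself came from yum, the matching APR development packages can come from yum too - no need to check APR out of SVN. A hedged sketch (package names follow RHEL/CentOS conventions; the config-script paths may vary):

      yum install apr-devel apr-util-devel

      # then point configure at the installed APR:
      ./configure --with-apr=/usr/bin/apr-1-config \
                  --with-apr-util=/usr/bin/apu-1-config

    As for the second question: building Subversion from source installs both the client binaries (svn, svnadmin, ...) and the server pieces - svnserve always, plus mod_dav_svn for Apache if configure is given --with-apxs.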

  • I can connect to the Samba server but cannot access shares

    - by jlego
    I'm having trouble getting Samba shares to be accessible. I set up a stand-alone box running Fedora 16 to use as a file-sharing and web-development server. It needs to share files with a Windows 7 PC and a Mac running OS X Snow Leopard. I set Samba up using the Samba configuration GUI tool on Fedora, added users to Fedora, and enabled them as Samba users (with the same usernames and passwords as on the Windows and Mac machines). The workgroup name matches the Windows workgroup, and authentication is set to User. I allowed Samba and the Samba client through the firewall and set the ethernet interface as a trusted port in the firewall. Both the Windows and Mac machines can connect to the server and list the shares; however, accessing the shares fails. Windows throws error 0x80070035 "Windows cannot access \\SERVERNAME\ShareName". The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"), and the same happens when connecting by IP rather than by server name. The Mac can also connect to the server and see the shares, but choosing a share gives the error "The original item for ShareName cannot be found". When connecting via IP, the Mac user is prompted for a username and password, and once authenticated gets a list of shares, but choosing one displays the same error. Since both machines act similarly, I assume the issue is in how Samba is configured. smb.conf:

      [global]
        workgroup = workgroup
        server string = Server
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        load printers = yes
        cups options = raw
        printcap name = lpstat
        printing = cups

      [homes]
        comment = Home Directories
        browseable = no
        writable = yes

      [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = yes
        printable = yes

      [FileServ]
        comment = FileShare
        path = /media/FileServ
        read only = no
        browseable = yes
        valid users = user1, user2

      [webdev]
        comment = Web development
        path = /var/www/html/webdev
        read only = no
        browseable = yes
        valid users = user1

    How do I get Samba sharing working?

    UPDATE: I figured it out - it happened because I was sharing a second hard drive. See the checked answer below.

    Speculation 1: Before this box I had another with the same version of Fedora (16) and Samba working for these same computers. I started up the old machine and copied its smb.conf to the new one (editing the share definitions for the new shares, of course), and I still get the same errors on both clients. The only environmental differences are the hardware and the router: the old machine's router had a dynamic public IP, while the new machine's router has a static public IP (still dynamic internal IPs). Could either of these affect Samba?

    Speculation 2: As the directory I am sharing is actually an entire internal disk, I tried (1) changing the owner of the mounted disk from root to my user (the same username as on the Windows machine), and (2) sharing only one folder on the disk instead of the whole disk, again with my user as the owner. Both tests failed with the same network-address errors.

    Speculation 3: Whenever I try to connect to the share from the Windows 7 client, I am prompted for my username and password, and entering the correct credentials gives an access-denied message. I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed; I believe this could well be the problem.

    Speculation 4: I've now completely reinstalled Fedora and Samba. A share created on the first hard drive (the one Fedora is installed on) works fine from Windows, but sharing anything on the second disk produces the same error. This, I believe, is the problem; I think I need to change something in fstab or fdisk.

    Speculation 5: In fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mount-point directory, which now lets me access the shares from the Windows machine - but I cannot see any of the files in the directory from Windows (they are there; I can see them locally in the Fedora file browser).
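
    For anyone hitting the same "second disk won't share" wall: on Fedora, SELinux must label the exported tree, and a label applied only to the mount-point directory (or applied with chcon, which does not survive a relabel) leaves the files underneath invisible - which matches Speculation 5 exactly. The persistent, recursive fix is a hedged two-liner (path from the question):

      # label the whole tree persistently for Samba
      semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
      restorecon -Rv /media/FileServ

      # blunter alternative: let Samba export anything read/write
      # setsebool -P samba_export_all_rw on

    semanage lives in the policycoreutils-python package if it isn't already installed.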

  • Querying keepalived

    - by tdimmig
    (Note: I have trouble deciding what should go on Server Fault and what on Super User; if some kindly admin decides this is in the wrong place, please move it - many thanks.) I am implementing a basic HA system with keepalived. I only want to be notified of a failover in the case of hardware failure; I do, however, have the servers switch roles periodically. A track_script running on the backup varies its return value between 0 and 1 on an interval (once a week, once a month, whatever). On returning 0, the priority is raised above that of the master; on returning 1, the priority is lowered again. This way they trade places on the configured interval. The question: how can I tell the difference between a switch caused by my script and a switch caused by one of the servers dying? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support and may expose information that could tell me one way or the other, but to be honest I've never used SNMP before and I don't know how to get the information I want from it (my Google foo failed me).
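
    SNMP may be overkill here: the switch itself can carry the "planned or not" bit. The same script that flips the priority can drop a marker file just before doing so, and a keepalived notify script (notify_master is stock vrrp_instance syntax) alerts only when no marker is present - a hedged sketch with invented file paths:

      # in the scheduled role-swap script, just before changing priority:
      touch /var/run/keepalived.planned

      # /usr/local/bin/notify-master.sh, wired into the vrrp_instance with:
      #   notify_master "/usr/local/bin/notify-master.sh"
      #!/bin/bash
      if [ -f /var/run/keepalived.planned ]; then
          rm -f /var/run/keepalived.planned    # expected swap: stay quiet
      else
          echo "unplanned VRRP failover on $(hostname)" \
            | mail -s "keepalived failover" admin@example.com
      fi

    The marker needs a timeout (or the swap script should remove it on failure) so a stale file can't mask a real outage later.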

  • How to fix Ubuntu 10.10 black screen from terminal?

    - by none
    I'm trying to install Ubuntu Desktop 10.10 on an Intel Atom board (Intel D945GCLF2) with a CRT that previously ran Ubuntu 9.x. Both the desktop live/install CD and the alternate install CD cause the screen to go black (with the status LED blinking). I was able to get a bit further into the boot process with nomodeset as a parameter on the live CD; unfortunately, now that I've installed from the alternate CD, I can't pass GRUB any parameters by pressing 'e' - it just boots. So Ubuntu is installed and I get a terminal with Ctrl-Alt-F1, but I don't know what to do next or how to adjust the resolution or video settings from the command line.
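
    From that Ctrl-Alt-F1 terminal, nomodeset can be made permanent through GRUB 2's defaults file - standard Ubuntu mechanics; the only hedge is that on 10.10 the GRUB menu may be hidden, in which case holding Shift during boot reveals it for one-off edits:

      sudo nano /etc/default/grub
      #   change: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
      #   to:     GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
      sudo update-grub
      sudo reboot

    If the desktop still fails to start, "sudo service gdm start" from the same terminal at least shows whether X itself is the piece that dies.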

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    For easy allowing or disallowing of dynamic IP addresses, you can list them as hostnames in a .htaccess file. As I read in ".htaccess allow from hostname?", Apache then does a reverse lookup on the connecting IP address to see whether the response matches the allowed name (actually a double lookup: first a reverse lookup, then a forward lookup on the result of the reverse). This is the reason we currently don't use dynamic-IP hostnames in .htaccess: two extra lookups for every request "sounds" quite heavy. Is it indeed heavy, and would a reasonably busy server - one that wants less load rather than more - get away with it? (E.g., how does this load compare to the rest of a request? If a request is 1000 times more expensive than the lookups, it might be negligible; on the other hand, it could be the final straw.) Are there other solutions? I could of course write a script that looks up the hostname and writes it into the .htaccess files, but this feels a bit like a hack.
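
    A hedged version of the cron-script idea, for comparison (the hostname and paths are placeholders; Apache 2.2 Allow syntax to match the question). Because .htaccess is re-read on every request, the change takes effect without any reload:

      #!/bin/bash
      # regenerate the allow rule whenever the dynamic IP changes
      ip=$(dig +short home.example.dyndns.org | tail -n1)
      htfile=/var/www/site/.htaccess
      if [ -n "$ip" ] && ! grep -q "Allow from $ip" "$htfile"; then
          cat > "$htfile" <<EOF
      Order deny,allow
      Deny from all
      Allow from $ip
      EOF
      fi

    Run from cron every few minutes, this trades the per-request double lookup for one DNS query per interval, at the cost of a small window after each IP change.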

  • Confused about setting up subversion

    - by apache
    I've already compiled and installed Subversion and am now trying to add users to it. I found two articles on this, but they seem to go in entirely different directions. The first is here, which looks very simple and seems to say it isn't necessary to create a system account (useradd ...); the second is here, which is a lot more complicated and seems to require a system account for each svn user. Which one should I follow?
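
    The two articles likely describe two different access methods, which is why they disagree: svnserve (and Apache's mod_dav_svn) keeps its own user list, so no system accounts are needed, whereas svn+ssh:// authenticates against real Unix accounts - hence the useradd. If svnserve is the plan, the per-repository files look roughly like this (standard svnserve layout; the names are placeholders):

      # conf/svnserve.conf, inside the repository
      [general]
      anon-access = none
      auth-access = write
      password-db = passwd

      # conf/passwd
      [users]
      alice = alicespassword
      bob = bobspassword

    Pick the guide that matches the URL scheme the clients will use (svn://, http://, or svn+ssh://).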
