Search Results

Search found 26263 results on 1051 pages for 'linux guest'.


  • Lighttpd based server issues crop up when port forwarding

    - by michael
    I have four host computers running lighttpd webservers. They sit behind an HSPA modem, each occupying an HTTP port between 81 and 84; port 80 is taken by the modem itself. The port forwarding is set up correctly, however only a portion of any webpage I request from any of the hosts comes through (they all fail after about 20% of the page). If I put the host on port 81 into the DMZ, it serves pages fine. The others do not respond to the DMZ treatment. Is it possible that the web content on the hosts somehow requires ports other than their respective HTTP port? Or is it possible that, even though server.port is set in the lighttpd_ssl.conf file, the individual hosts are still expecting to serve on port 80? I am not familiar with lighttpd, nor did I set these up; they are running on video encoders I purchased. I can grab any files from them required for further information on the problem.
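
    A quick way to confirm which port each encoder's lighttpd is actually bound to, as a minimal sketch (the config file path is an assumption; the file name comes from the question):
        # On each host, check which port lighttpd is actually listening on
        ss -tlnp | grep lighttpd
        # Compare with the configured value
        grep "server.port" /etc/lighttpd/lighttpd_ssl.conf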

    Read the article

  • ruby on rails server is intermittently slow

    - by Richard
    My Rails installation was chugging along nicely. Last night we had to perform a hot-patch, which was really a standard deploy of some exception-handling code. Once Capistrano finished the operation, one of our admins discovered two long-running Passenger processes. While we have deployed releases over the past two weeks, it would appear that these processes had been there and alive the whole time. Granted, they could have been zombies or some other artifact, and at this point we do not know what state they were in. Which leads me to the question: there are so many moving parts between the Rails application and the OS/hardware that being an SME is probably no longer possible. So how does a sysadmin perform root-cause analysis with any certainty? And when do I just start rebooting servers?
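
    As a starting point for spotting stale application processes, a rough sketch like the following shows how long each Passenger worker has been alive (tool availability depends on the Passenger install):
        # List Passenger-related processes with their elapsed running time
        ps -eo pid,etime,args | grep -i [p]assenger
        # Passenger's own status tool, if installed, shows per-process request counts
        passenger-status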

    Read the article

  • Keeping folder of files in sync over 3 machines

    - by Wizzard
    Morning. I've got three machines that have user content on them, which I need to keep in sync; this is a three-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like GlusterFS, but that seems a little over the top. Is there any other software out there to do a three-way sync, or a good network file system? This is for web servers, so we don't want a slow or I/O-hungry process. Three servers; user content could be added to one and needs to be moved to the other two.
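
    For reference, rsync can propagate deletes with --delete; a sketch with placeholder hostnames and paths, pushing from one designated source, though this is still one-way replication rather than a true three-way merge:
        # Push the content directory from the current machine to the other two,
        # removing files on the destinations that no longer exist on the source
        rsync -az --delete /var/www/content/ server2:/var/www/content/
        rsync -az --delete /var/www/content/ server3:/var/www/content/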

    Read the article

  • Fedora 16 Running Hot

    - by sdasdadas
    Since switching from Windows 7 to Fedora 16, my laptop has been running incredibly hot (by the air exhaust). The laptop is an Asus K73S. Running 'sensors', I receive:
        acpitz-virtual-0: 75.0 celsius
        nouveau-pci-0100: 66.0 celsius
        asus-isa-0000: 75.0 celsius
    The only CPU hog is Firefox at 30-40% on average. My GPU information (from lspci) is: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family Integrated Graphics Controller (rev 09). Running lspci | grep -i VGA returns:
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: nVidia Corporation GF106 [GeForce GT 555M SDDR3] (rev a1)
    I don't notice a huge difference running without the battery, but it does seem a little cooler. Thanks!
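
    Since both the Intel IGP and the NVIDIA GPU show up, one thing worth checking is whether the discrete GPU is powered on even while unused. A sketch, assuming a kernel with vga_switcheroo support and debugfs mounted:
        # Show which GPU is active and whether the discrete one is powered
        sudo cat /sys/kernel/debug/vgaswitcheroo/switch
        # Power off the unused discrete GPU (reverts on reboot)
        echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch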

    Read the article

  • Stopping immediate right button up in Mint 13 (Cinnamon) actioning first menu choice

    - by jontyc
    In Windows, a single right click (i.e., with release) displays a context menu on the screen, allowing you to select the appropriate choice with a further click from either button. In Mint 13 Cinnamon, you hold down the right button, drag, then release on the appropriate menu choice. Both methods are fine, but since I use both OSes constantly, I'm doing the Windows procedure in Mint by mistake all the time. This makes the single right click and release bring up the context menu and immediately action the first menu choice. Is there any mechanism to ignore right-button-up if a substantial dragging action or time period hasn't occurred, and have Mint wait for a further click to select?

    Read the article

  • Backing up to smaller drive

    - by Dave
    In a few hours I'll have a new 500GB Sony laptop, filled with the usual Sony rubbish, which I'll promptly be replacing with Ubuntu or Crunchbang or something. However, first I want to make a full clone of the drive (including recovery partitions), should I wish to return it to Sony or sell it on in its factory state. The problem is that the only backup drives I have are less than 500GB - the biggest I have is 250GB or so! So I need to back up and compress on the fly. What's the best way to do this? Presumably dd piped into gzip would do the trick, or does anyone have any other suggestions to accomplish this?
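
    A minimal sketch of the dd-plus-gzip approach; the device name and output path are assumptions, so double-check the source disk first:
        # Identify the source disk
        lsblk
        # Clone the whole disk, compressing on the fly onto the smaller backup drive
        sudo dd if=/dev/sda bs=4M | gzip -c > /media/backup/sony-factory.img.gz
        # Restore later with:
        #   gunzip -c /media/backup/sony-factory.img.gz | sudo dd of=/dev/sda bs=4M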

    Read the article

  • where is my disk space?

    - by user166241
    I recently had a problem with the .xsession-errors file - it became very big (90GB) and took all the disk space (see my earlier question, "How I can check what takes disk space in /tmp?"). I cleaned it with the command > .xsession-errors, but after an hour it became large again. So I deleted it (rm .xsession-errors) - that helped in that it wasn't recreated, but again, after an hour, the disk space disappeared. Now there is no .xsession-errors anymore, but I don't know where the space is going:
        df
        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sda1      106640456 101223392         4 100% /
        udev             8166744         8   8166736   1% /dev
        tmpfs            3270224       972   3269252   1% /run
        none                5120         0      5120   0% /run/lock
        none             8175552       152   8175400   1% /run/shm

        du -sc * .[^.]* | sort -n
        0 initrd.img
        0 initrd.img.old
        0 proc
        0 sys
        0 vmlinuz
        0 vmlinuz.old
        4 cdrom
        4 lib64
        4 media
        4 mnt
        4 selinux
        8 dev
        12 srv
        16 lost+found
        68 tmp
        1124 run
        3396 lib32
        5164 .rpmdb
        5540 root
        8888 sbin
        9120 bin
        17132 etc
        106080 opt
        116956 boot
        861908 lib
        3530584 usr
        3821836 var
        13371260 home
        21859112 total
    So around 100GB is used, but running du -sc * .[^.]* | sort -n in the root directory finds only ~21GB - so what is taking the other 80GB? How can I check? I suspect that when I deleted the .xsession-errors file the errors were redirected somewhere else - but where?
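
    The usual suspect in a case like this is a deleted file that some process still holds open; the space is only released when that process exits. A quick check, as a sketch (needs root):
        # List open files that have been deleted but are still held open by a process
        sudo lsof +L1
        # or, equivalently
        sudo lsof -nP | grep '(deleted)'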

    Read the article

  • sudoers entries

    - by Pochi
    Is there a way to have a sudoers entry that allows executing only a particular command, without any extra arguments? I can't seem to find a resource that describes how command matching works with sudoers. Say I want to grant sudo for /path/to/executable arg. Does an entry like the following:
        user ALL=(ALL) /path/to/executable arg
    strictly allow sudo access to a command exactly matching that? That is, does it avoid granting the user sudo privileges for /path/to/executable arg arg2?
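
    For reference, sudoers(5) treats listed arguments as an exact command line to match; a sketch of the relevant patterns, using the placeholder paths from the question:
        # Allows exactly "/path/to/executable arg" and nothing else
        user ALL=(ALL) /path/to/executable arg
        # Allows the executable only with no arguments at all
        user ALL=(ALL) /path/to/executable ""
        # Allows the executable with any arguments (the behaviour when none are listed)
        user ALL=(ALL) /path/to/executable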

    Read the article

  • curl makes a site work externally once run locally (apache)

    - by Kyle_at_NU
    Currently, when I visit mysite.mydomain.com from outside the local network, the browser shows: "This is the default web page for this server. Nothing to see here." This is not even the "It works!" Apache page. But if, locally (Apache2 on Ubuntu Server 12.04 with curl installed), I type curl mysite.mydomain.com, I get the site I expect. Then the next time I visit the page externally, I get the correct site. Has anyone seen this before? Tips/suggestions?
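
    Symptoms like this usually point at name-based virtual host resolution rather than curl itself; a quick sketch for seeing which vhost Apache treats as the default:
        # Show the parsed virtual host configuration, including the default vhost per port
        sudo apache2ctl -S
        # Check which site configurations are actually enabled
        ls /etc/apache2/sites-enabled/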

    Read the article

  • Connection closed by remote host followed by Connection refused

    - by Khosrow
    All of a sudden my ssh connection to the server has broken. Here is what happens:
        $ ssh -vvv -p <PORT> -l <USER> <HOST>
        OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /home/khosrow/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to <HOST> [<IP>] port <PORT>.
        debug1: Connection established.
        debug1: identity file /home/khosrow/.ssh/identity type -1
        debug1: identity file /home/khosrow/.ssh/id_rsa type -1
        debug1: identity file /home/khosrow/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host
    I recently updated the box with yum update, and sshd got updated as well. I honestly don't know whether this caused any damage, but I was prompted that /etc/ssh/sshd_config had been stored as /etc/ssh/sshd_config.rpmnew, which is quite normal. I've seen similar posts while googling, but almost all of them suggest that I should check /etc/hosts.allow and /etc/hosts.deny, which in my case I can't: I cannot connect to the box to see what's going on there. I rebooted the box through the web interface of the server provider, and it got even worse. I'm now getting this:
        $ ssh -vvv -p <PORT> -l <USER> <HOST>
        OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /home/khosrow/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to <HOST> [<IP>] <PORT>.
        debug1: connect to address <IP> port <PORT>: Connection refused
        ssh: connect to host <HOST> port <PORT>: Connection refused
    This happens with both <CUSTOM_PORT> and the default port 22. I would really appreciate it if anyone could help me with this.

    Read the article

  • Remove folder structure from archive, ignore folder while archiving and fix error

    - by Michael
    I am trying to make a script to back up each of my Plesk vhosts to individual files, and I am having a few problems: I would like to remove the folder structure from the archive (the paths in the tar are three folders deep); I am getting the error "tar: Removing leading `/' from member names"; and I need the archive to ignore folders named "catch" because I don't need them. The code:
        FILES=/var/www/vhosts/*
        FNAME=""
        for f in $FILES
        do
            FNAME=`basename $f`
            tar cfv "/root/backup/ftp/$FNAME.tar" $f
        done
    Sample output:
        tar: Removing leading `/' from member names
        /var/www/vhosts/mydomain.com/
        /var/www/vhosts/mydomain.com/conf
        /var/www/vhosts/mydomain.com/etc/
        /var/www/vhosts/mydomain.com/etc/group
        /var/www/vhosts/mydomain.com/etc/termcap
        /var/www/vhosts/mydomain.com/etc/passwd
        /var/www/vhosts/mydomain.com/usr/
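
    A sketch of how the loop could address all three points, using tar's -C to change into each vhost directory (so archive paths are relative, which also avoids the leading-slash message) and --exclude to skip the unwanted folders:
        for f in /var/www/vhosts/*; do
            FNAME=$(basename "$f")
            # -C makes member paths relative to the vhost directory;
            # --exclude skips any directory named "catch" at any depth
            tar --exclude='catch' -C "$f" -cvf "/root/backup/ftp/$FNAME.tar" .
        done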

    Read the article

  • Why is this happening? - "dhcpcd will not work correctly unless run as root"

    - by user330317
    I have installed Arch Linux and GNOME in VirtualBox. I had no problem connecting to the internet, but now, after installing GNOME and rebooting, there is no internet connection. I have tried following the instructions from the Arch wiki, but I can't figure out the problem. Please help.
        host-63drhd% sudo netctl status enp0s3
        netctl@enp0s3.service - Networking for netctl profile enp0s3
           Loaded: loaded (/usr/lib/systemd/system/netctl@.service; static)
           Active: inactive (dead)
             Docs: man:netctl.profile(5)
        host-63drhd% sudo netctl enable enp0s3
        Profile 'enp0s3' does not exist or is not readable
        host-63drhd% sudo dhcpcd
        dhcpcd[1486]: sending commands to master dhcpcd process
        host-63drhd% dhcpcd
        dhcpcd[1543]: control_open: Permission denied
        dhcpcd[1543]: dhcpcd will not work correctly unless run as root
        dhcpcd[1543]: open `/run/dhcpcd.pid': Permission denied
        dhcpcd[1543]: control_start: Permission denied
        dhcpcd[1543]: version 6.3.2 starting
        dhcpcd[1543]: enp0s3: if_init: Permission denied
        dhcpcd[1543]: enp0s8: if_init: Permission denied
        dhcpcd[1543]: no valid interfaces found
        dhcpcd[1543]: no interfaces have a carrier
        dhcpcd[1543]: forked to background, child pid 1544
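
    Note that the permission-denied messages are expected when dhcpcd is run as an unprivileged user. As a sketch of one way to get a lease on boot without a netctl profile (interface name taken from the question; assumes the dhcpcd package's systemd units are installed):
        # Start DHCP on the VirtualBox NAT interface and enable it at boot
        sudo systemctl enable dhcpcd@enp0s3.service
        sudo systemctl start dhcpcd@enp0s3.service
        # Check that an address was obtained
        ip addr show enp0s3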

    Read the article

  • I tried installing Ubuntu 10.04 and I got this message - any ideas on what to do?

    - by user41926
    The message "No root file system defined. Please correct this from the partition menu." shows up when I first boot into Ubuntu after the installation. I installed it by mounting the ISO with Daemon Tools and just did the default Wubi installation. I keep reading everywhere that I need to choose my installation directory, but I don't get any option to do that; these are all the options I get for the installation directory. I have a C and a D partition on my drive, and I tried installing on both with no luck either way. Any ideas?

    Read the article

  • wget recursive limited within subdomain

    - by Paul Seangwongree
    I want to download the following URL path recursively using wget: www.example.com/A/B. So if that URL has links to www.example.com/A/B/C and www.example.com/A/B/D, these two should also be downloaded, but I don't want anything outside of www.example.com/A/B to be downloaded. For example, if www.example.com/A/B/C has a link back to www.example.com, the page at www.example.com should not be downloaded. What wget command should I use?
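
    For reference, wget's --no-parent option is built for exactly this; a sketch using the placeholder URL from the question:
        # Recurse downward from /A/B/ but never ascend above it
        wget --recursive --level=inf --no-parent http://www.example.com/A/B/
        # Short form: wget -r -l inf -np http://www.example.com/A/B/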

    Read the article

  • Cross-platform centralized desktop password manager

    - by Dave
    I have been using KeePass as a desktop password manager on Windows for many years. Love it! However, I now need to work on several different platforms during my day (Windows 7, Windows XP, Mac OS X, Ubuntu, and openSUSE), and I'm looking for a password manager I can share across all of them. My ideal solution would:
        - Run natively (not in a virtual machine) on all platforms.
        - Store the "official" copy of the password data on the local network so I can get to it from any and all machines. It is OK if it locks (or becomes read-only) while one client is accessing it.
        - Keep a local cached copy (read-only is fine) so I can still get to my passwords when disconnected from the network.
    Does any such beast exist?

    Read the article

  • Performance Drop Lingers after Load [closed]

    - by Charles
    Possible duplicate: How do you do Load Testing and Capacity Planning for Databases
    I'm noticing a drop in performance after successive load tests. Although our CPU and RAM numbers look fine, performance seems to degrade over time as sustained load is applied to the system. If we allow more time between the load tests, response time gets back to about 1,000 ms, but if we apply load every 3 minutes or so, it degrades to the point where a request takes 12,000 ms. None of the application servers are showing lingering Apache processes, and the number of database connections cools down to about 3 (from a sustained 20). Is there anything else I should be looking out for here?

    Read the article

  • Allow PHP to write file without 777

    - by camerongray
    I am setting up a simple website on webspace provided by my university. I do not have database access, so I am storing all the data in a flat file. The issue I am experiencing is related to file permissions. I need PHP to be able to read and write the data file, but I don't really want to set the file to 777, as anybody else on the system could then modify it; they already have read access to everyone's web directories. Does anyone have any ideas on how to accomplish this? Thanks in advance.
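
    One common approach, sketched below with assumed names (the web server account www-data and the paths are placeholders; on shared hosts using suexec/FastCGI, PHP may already run as your own user, in which case chmod 600 is enough):
        # Keep the data file outside the public web directory if possible
        mkdir -p ~/private_data
        touch ~/private_data/data.txt
        # Give the web server's group read/write access and nobody else
        # (changing the group may need admin help, since you can normally
        #  only chgrp to groups you belong to)
        chgrp www-data ~/private_data/data.txt
        chmod 660 ~/private_data/data.txt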

    Read the article

  • Edit write-protected files by breaking hard links

    - by Taymon
    A directory which I own and can write to contains hard links to files that I don't own and don't have write permission for. I want to open and edit these files in Emacs. When I save my changes, Emacs should rename the existing hard link by appending ~, then write my new version of the file as a new file owned by me. I was under the impression that Emacs could just do this (because of the way it does backups), but it's not working; when I save, it attempts to change the file's permissions in order to write to it (and fails because I don't own the file). How do I make this happen?

    Read the article

  • Transparent, unicode X terminal not tied to a Desktop Environment?

    - by jamuraa
    I've been looking for this for a while now and I just haven't been able to find one. The last few that I used were:
        - aterm: fast and with good transparency support, but it doesn't support Unicode at all as far as I can tell. The dependency graph is also reasonable.
        - gnome-terminal: good, with transparency support plus Unicode, but it pulls in about everything in GNOME, and I don't use anything else in GNOME. It was also somewhat slow (noticeable lag in updating at times) and wouldn't use the fonts I wanted.
        - Eterm: same story as aterm, good dependencies and transparency but no Unicode.
    Does anyone have suggestions, or will I be stuck with gnome-terminal's dependencies and slowness?

    Read the article

  • How can I use apt-get to resolve package dependencies when there are multiple versions in the repository?

    - by user1165144
    I have a package a-package.deb which depends on b-package.deb in version 1.0. Everything works fine. But now b-package in version 1.1 gets added to the repository. I would expect apt-get to install a-package together with version 1.0 of b-package. What really happens is that a-package won't get installed:
        # apt-get install a-package
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help to
        resolve the situation:

        The following packages have unmet dependencies:
         a-package : Depends: b-package (= 1.0) but 1.1 is to be installed
        E: Unable to correct problems, you have held broken packages.
    Is there a workaround to fix this behavior? Is there other software I could use that can handle the dependencies as defined?
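
    One workaround worth trying, as a sketch with the placeholder package names from the question, is to name the required version explicitly; apt accepts this as long as version 1.0 is still present in the repository:
        # Request the dependency at the exact version a-package was built against
        apt-get install a-package b-package=1.0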

    Read the article

  • Spambot Infection Detection

    - by crankshaft
    My server has been blocked by the CBL for participating in the Cutwail spambot. Initially we suspected that it was coming from a PC and not from the server, but the router blocks all packets on port 25 except those coming from the server. I have just run tcpdump, and every 5 minutes I see a flurry of very suspicious activity on port 25, so I am sure there is some process running on the server:
        13:02:30.027436 IP exprod5og110.obsmtp.com.53803 > ubuntu.local.smtp: Flags [S], seq 171708781, win 5744, options [mss 1436,sackOK,TS val 3046699707 ecr 0,nop,wscale 2], length 0
    I have stopped Postfix, and yet there is still traffic on port 25 as above. But how can I find out what process is actually communicating on port 25? It only runs for a few seconds, so, for example, lsof -i :25 will never catch it. I have been working on this for two days now; it is a live server and I cannot simply shut it down. Any suggestions on how I can detect the source of this email bot process?
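
    Two low-impact ways to catch a short-lived sender, as a sketch: an iptables LOG rule with --log-uid records which UID opened each outbound SMTP connection, and a ss loop simply polls for matching sockets until one shows up:
        # Log every new outbound SMTP connection together with the owning UID
        iptables -I OUTPUT -p tcp --dport 25 -m state --state NEW -j LOG --log-prefix "SMTP-OUT: " --log-uid
        # Then watch the kernel log, e.g. tail -f /var/log/kern.log

        # Alternatively, poll for processes holding port-25 sockets
        while true; do ss -tnp '( dport = :25 or sport = :25 )' | grep -v State; sleep 1; done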

    Read the article

  • a safer no password sudo?

    - by Stacia
    OK, here's my problem - please don't yell at me for being insecure! :) This is on my host machine. I'm the only one using it, so it's fairly safe, but I have a very complex password that is hard to type over and over. I use the console for moving files around and executing arbitrary commands a LOT, and I switch terminals, so sudo remembering my password per console isn't enough (AND I still have to type in my terrible password at least once!). In the past I have used the NOPASSWD trick in sudoers, but I've decided to be more secure. Is there any sort of compromise besides allowing passwordless access to certain apps (which can still be insecure)? Something that will stop malware and remote logins from sudo rm -rf /-ing me, but lets me type happily away in my terminals? Can I have this per terminal, perhaps, so random commands won't make it through? I've tried running the terminal emulators as sudo, but that puts me at a root shell.
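
    One middle ground is to keep the password requirement but stretch how long, and how widely, a successful authentication is remembered. A sketch of sudoers settings (edit with visudo; the exact defaults vary by distribution):
        # /etc/sudoers fragment (edit with visudo)
        # Remember a successful sudo authentication for 60 minutes
        Defaults timestamp_timeout=60
        # Share that timestamp across all terminals instead of per-tty
        Defaults !tty_tickets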

    Read the article

  • How iptables behaves on timezone change?

    - by pradipta
    I am unsure how iptables changes the rule information when the timezone is changed. I am using iptables v1.4.8. I blocked one IP with the following details:
        # date
        Thu Jun 6 12:46:42 IST 2013
        # iptables -A INPUT -s 10.0.3.128 -m time --datestart 2013-6-6T12:0:00 --datestop 2013-6-6T13:0:00 -j DROP
        # iptables -L
        Chain INPUT (policy ACCEPT)
        target  prot opt source      destination
        DROP    all  --  10.0.3.128  anywhere     TIME starting from 2013-06-06 12:00:00 until date 2013-06-06 13:00:00

        Chain FORWARD (policy ACCEPT)
        target  prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source      destination
    But after I changed the timezone, the following happened automatically:
        # date
        Thu Jun 6 15:17:48 HKT 2013
        # iptables -L
        Chain INPUT (policy ACCEPT)
        target  prot opt source      destination
        DROP    all  --  10.0.3.128  anywhere     TIME starting from 2013-06-06 14:30:00 until date 2013-06-06 15:30:00

        Chain FORWARD (policy ACCEPT)
        target  prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source      destination
    The time value in the rule has changed. How is it changing with the timezone, and where does iptables keep track of the timezone? Kindly explain.
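
    A quick comparison that may help separate what is stored from how iptables -L formats it for display, as a sketch: iptables-save dumps the rule as it was entered, which can be checked against the listing before and after the timezone change:
        # Dump the time-match rule as stored
        iptables-save | grep -- '-m time'
        # Compare with the formatted listing
        iptables -L INPUT -n --line-numbers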

    Read the article

  • What alternative is there to Nginx that supports http keep-alive between backends ?

    - by felace
    Hi. I recently asked a question about how to keep a backend connection persistent using Nginx, but found out it isn't possible: its proxy speaks HTTP/1.0 to backends and cannot issue keep-alive requests yet. (As a result, backend connections are created and destroyed on every request.) Everything works fine right now (since the connection between the client and Nginx is kept alive, the end result is the same), but I don't want to establish a new backend connection every single time a request is received, even if it's over a Unix domain socket. So, what software (preferably open source and not too tedious to configure) would you recommend to accomplish such persistent backend connections?

    Read the article

  • USB webcam just works once and next time I've to reboot

    - by user30262
    I'm using Ubuntu 9.10 and a USB webcam that is shown as 'Bus 001 Device 005: ID 0ac8:3450 Z-Star Microelectronics Corp.' by lsusb. The problem is that, on connecting the cam, it only works with the first program I start (Skype, TokBox, a messenger), and if I disconnect it or switch to another program, it stops working and I have to restart my computer to make it work again. Has anyone else noticed this behaviour? Is there some good way to reset the camera without rebooting so it works again?
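
    A couple of things that may reset the camera without a reboot, as a sketch; the gspca module name for this 0ac8:3450 device and the 1-1 bus path are assumptions, so check lsmod and lsusb -t first:
        # See which driver module the camera loaded
        lsmod | grep gspca
        # Reload that module (name assumed; substitute whatever lsmod shows)
        sudo rmmod gspca_zc3xx gspca_main && sudo modprobe gspca_zc3xx
        # Or re-plug the device in software via sysfs, using its bus path
        echo '1-1' | sudo tee /sys/bus/usb/drivers/usb/unbind
        echo '1-1' | sudo tee /sys/bus/usb/drivers/usb/bind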

    Read the article
