Search Results

Search found 27819 results on 1113 pages for 'linux intel'.

Page 482/1113

  • iptables: forwarding traffic for an IP not active on the host itself

    - by gucki
    I have a kvm guest whose network card is connected to the host via a tap device. The tap device is part of a bridge on the host together with eth0, so the guest can access the public network and can be accessed from it. So far everything works. Now the kvm process on the host provides a VNC server for the guest, which listens on 127.0.0.1:5901 on the host. Is there any way to make this VNC server reachable at the IP address the guest is using (e.g. 192.168.0.249), without interrupting the guest's use of the same IP (port 5901 is not used by the guest)? It should also work when the guest is not using any IP address at all. So basically I just want to pretend IP xx is on the host and answer/forward only traffic to port 5901 to the host itself. I tried this NAT rule on the host, but it doesn't work (IP forwarding is enabled on the host): iptables -t nat -A PREROUTING -p tcp --dst 192.168.0.249 --dport 5901 -j DNAT --to-destination 127.0.0.1:5901 I assume this is because 192.168.0.249 is not bound to any interface, so ARP requests for it go unanswered and no packets for this IP ever arrive at the host. How can I make it work? :)
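
    A sketch of one possible approach, untested: since the guest answers ARP for 192.168.0.249, packets for that IP already cross the host's bridge, so ebtables can divert just port 5901 up to the host's IP stack, where iptables can redirect it to a locally listening VNC server. This assumes the bridge is named br0 and that kvm is started with "-vnc 0.0.0.0:1" so the VNC server is not bound to 127.0.0.1 only:

        # divert bridged packets for 192.168.0.249:5901 to the host's own IP stack
        ebtables -t broute -A BROUTING -p IPv4 --ip-dst 192.168.0.249 \
            --ip-proto tcp --ip-dport 5901 -j redirect --redirect-target DROP
        # rewrite the destination so the host delivers it to the local VNC listener
        iptables -t nat -A PREROUTING -i br0 -p tcp -d 192.168.0.249 \
            --dport 5901 -j REDIRECT --to-ports 5901

    For the case where the guest holds no IP at all, the host would additionally have to answer the ARP requests itself, e.g. with a published ARP entry (arp -Ds 192.168.0.249 eth0 pub); that part is equally a guess.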

    Read the article

  • Netbeans automatically changes the file owner when updating files

    - by Alon_A
    We use NetBeans IDE 7.2 to edit our PHP files. In the Run Configuration it is set up as a Remote Web Site, so changes are saved automatically to our web server (CentOS 6.3). The problem is that every time it updates a file, the owner changes from apache:apache to userThatUploadedTheFile:users. This causes us problems with SOAP cache files that expect apache:apache ownership, and we have to manually chown them back to apache:apache. We've checked the "Preserve Remote File Permissions" checkbox, so the permissions are not changed, only the owner. Is there any solution to preserve the ownership?
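
    If the upload side can't be changed, one workaround (a sketch, with a made-up cache path) is to stop relying on the file owner and grant apache access through the group plus a setgid directory instead, so ownership flips no longer lock apache out:

        chgrp -R apache /var/www/site/soap-cache   # hypothetical path
        chmod -R g+rw   /var/www/site/soap-cache   # group may read and write
        chmod g+s       /var/www/site/soap-cache   # new files inherit the apache group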

    Read the article

  • How to auto detect text file encoding?

    - by ???
    There are many plain text files that were encoded in various charsets. I want to convert them all to UTF-8, but before running iconv I need to know the original encoding. Most browsers have an Auto Detect option for encodings, but I can't check these text files one by one because there are too many. Only once the original encoding is known can I convert a file with iconv -f DETECTED_CHARSET -t utf-8. Is there a utility to detect the encoding of plain text files? It doesn't have to be 100% correct, but it should recognize most of them.
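
    One possible batch approach, assuming the uchardet utility is available (enca and python-chardet are similar alternatives):

        for f in *.txt; do
            enc=$(uchardet "$f")                        # print the detected charset
            iconv -f "$enc" -t UTF-8 "$f" > "$f.utf8"   # convert using the guess
        done

    Detection is heuristic, so spot-checking a few conversions before deleting the originals is advisable.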

    Read the article

  • Two distinct mount points with one device

    - by user1761555
    After being disappointed with Ubuntu's release-upgrade feature, I finally decided to have separate mount points for / and /home. To that end, I reformatted my HDD, giving most of the drive to sda1 (meant to be /home) and about 40 GB to the root filesystem (/). Unfortunately, I would also like to have a /projects, which is to be located on sda1. Currently, sda1 is mounted as /dev/sda1 on /home type ext4 (rw) I've tried looking online for a solution to this problem... however, I'm not sure what to look for! Is it possible to mount the 'home' directory of sda1 as /home and the 'projects' directory of sda1 as /projects?
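
    It is; a bind mount exposes a subdirectory of an already-mounted filesystem at a second mount point. A minimal sketch, assuming the projects tree lives at /home/projects on sda1:

        mount --bind /home/projects /projects
        # or permanently, via /etc/fstab:
        # /home/projects  /projects  none  bind  0  0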

    Read the article

  • Website and file/directory permissions

    - by mathiass
    I've been given the task of fixing a website. One of its issues is that on one page the images have broken links: the images are not showing, and clicking an image (i.e. the direct link to the image file) results in a 403 (Forbidden) error. I am looking for feedback on the possible cause. The directory where the images are stored has the following permissions: drwxrws--- www "group" 10240 Aug 2008 "image directory name" (I had to hide the names.) I checked the page source code and everything seems to be in place. The rest of the site, including images outside that directory, shows fine. I was told there have recently been some changes to the server. I'm assuming there is no fault in the source code and that the permissions are, or used to be, correct (the site worked before, and no recent changes were made to the site itself). My only thoughts at the moment are that either: a) the directory permissions should be drwxrws--x (execute bit for other users), or b) there has been a change in the server settings that I don't know of. Is there anything else I should check?
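
    One quick way to test hypothesis (a) is to probe the path as the web server user. A sketch, assuming Apache runs as user www (the paths are placeholders):

        sudo -u www ls /path/to/images                       # can the server list the dir?
        sudo -u www cat /path/to/images/pic.jpg >/dev/null   # can it read a file?
        namei -l /path/to/images/pic.jpg                     # perms on every path component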

    Read the article

  • How can I make zsh completion behave like Bash completion?

    - by Nate
    I switched to zsh, but I dislike its completion. If I have 20 files sharing a prefix, pressing tab makes zsh fully complete the first file, then cycle through the list with each further press; if I want one near the end, I have to press tab many times. In bash this was simple: press tab and I would get the common prefix, and if I continued typing (and pressing tab), bash would complete as far as it could be certain. I find that behavior much more intuitive, but I prefer zsh's other features. Is there a way to get this style of completion? Google suggested setopt bash_autolist, but this had no effect for me (and no error message was printed when starting my shell). Thanks.
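
    The cycling usually comes from zsh's menu completion, so a likely fix for ~/.zshrc is (a sketch; which option is responsible depends on the local setup):

        unsetopt menu_complete   # don't insert the first match outright
        unsetopt auto_menu       # don't start cycling on repeated presses of tab
        setopt bash_auto_list    # first tab completes the common prefix, second tab lists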

    Read the article

  • How do I minimize Evolution to the system tray in Ubuntu?

    - by Jephir
    In Ubuntu some applications can be set to minimize instead of exit on close. For example, Empathy minimizes to the system tray (mail icon) when the close button is pressed in the application window. How do I make Evolution do this as well? Essentially I would like to have Evolution hidden in the system tray instead of having to re-launch it every ten minutes to check for new messages (or leave it open and clutter the taskbar).
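
    If Evolution itself offers no such setting, one generic workaround is the alltray wrapper, which docks arbitrary applications into the notification area (assuming the package is available in the repositories):

        sudo apt-get install alltray
        alltray evolution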

    Read the article

  • Stopping an immediate right-button-up in Mint 13 (Cinnamon) from actioning the first menu choice

    - by jontyc
    In Windows, a single right click (i.e. press and release) displays a context menu, and you select the appropriate entry with a further click of either button. In Mint 13 Cinnamon, you hold down the right button, drag, and release on the appropriate menu entry. Both methods are fine, but since I use both OSs regularly, I keep doing the Windows procedure in Mint by mistake: the single right click-and-release brings up the context menu and immediately actions the first menu entry. Is there any mechanism to ignore right-button-up unless a substantial drag or time period has occurred, and have Mint wait for a further click to select?

    Read the article

  • Solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor (almost negligible) internet connection at work, and the usual apt-get/yum/ports processes for setting up servers all fetch their packages from online repositories. I know I could download the sources and compile them myself, but that's too much of a hassle. I'm considering two solutions:

    Plan A: Run a server VM on my Mac and use it as the package source for the offline server. I've read about Ubuntu's apt-proxy and it seems good enough, though I haven't tried it yet. I'm not sure this is possible, but can I simply do apt-get install nginx --download-only so that the package and its dependencies are downloaded into my VM, and my server can then use the VM as its apt repository?

    Plan B: Run a server VM on my Mac (which I can set up and update easily at home) and clone it to the offline development server. Maybe I should make the server a VM host so I can simply copy the VM over. I think this is fine for the first-time setup, but subsequent updates would take too long (cloning the VM image).

    If I were working on Windows, I imagine it would be easier, because most services have an installer file that I can download and run on the server. If you can suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer I found a possible solution, apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
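
    For the record, the apt flag is --download-only (or -d). A sketch of turning Plan A's downloaded packages into a repository the offline server can use (paths are illustrative):

        # on the online VM: fetch the .debs plus dependencies without installing
        apt-get install --download-only nginx   # archives land in /var/cache/apt/archives
        # copy the .debs to /srv/local-repo on the offline box, then index them:
        cd /srv/local-repo && dpkg-scanpackages . /dev/null | gzip > Packages.gz
        # finally, point the offline box's /etc/apt/sources.list at the directory:
        # deb file:/srv/local-repo ./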

    Read the article

  • Access denied for user 'root@localhost' (using password:NO)

    - by murgatroid99
    I am attempting to install a network-management package called cacti on Ubuntu running under Windows Virtual PC. I tried to install MySQL, as it is one of cacti's dependencies. I can install and start the MySQL server, but whenever I try to access it in any other way, such as to change the password, I get the error message Access denied for user 'root@localhost' (using password: NO). I would like to know what is causing this and how to fix it. Edit (in case my comments are not visible): the answers from HD and Devin Ceartas did not work for me.
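
    "using password: NO" means the client sent no password at all, so the first thing to try is forcing a prompt; if the root password is genuinely unknown, the usual recovery is to restart mysqld with grant checking disabled. A sketch (statement syntax varies across MySQL versions):

        mysql -u root -p                  # prompt for the password instead of sending none
        # password lost? reset it with the grant tables disabled:
        sudo service mysql stop
        sudo mysqld_safe --skip-grant-tables &
        mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpass')
            WHERE User='root'; FLUSH PRIVILEGES;"
        # then stop mysqld_safe and start MySQL normally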

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server with Pingdom and found that we have a few minutes of downtime every Sunday at 0:00 UTC. The test runs every minute and checks whether a successful HTTP response (code 200) is returned on port 80; it fails with a timeout (no response after 30 seconds). Here's what we've already checked, without success:

    - Since we run our web server behind a load balancer, we pointed Pingdom tests at both the load balancer's public DNS name and the web server's public DNS name, to rule out the AWS load balancer: both tests return the same result.
    - We set up Munin on the web server. Everything looked fine even after a failure. Since the last failure lasted only 2 minutes, Munin presumably couldn't capture the problem (it only samples every 5 minutes).
    - We checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
    - We checked /etc/cron.weekly and /etc/crontab for suspicious entries.
    - We searched for files created or last modified between 0:00 and 0:15 using: touch -t 201209020000 start; touch -t 201209020015 end; find / -newer start -and ! -newer end (nothing found).

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
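
    One way to catch the culprit in the act is to log system state at high frequency across the failure window. A sketch (the script name and log path are made up), driven by a root cron entry firing Saturday 23:55, assuming the server clock is on UTC:

        # /etc/cron.d/midnight-probe:
        #   55 23 * * 6  root  /usr/local/bin/midnight-probe.sh
        #!/bin/sh
        # sample every 10 seconds for 20 minutes around midnight
        for i in $(seq 1 120); do
            { date; uptime; free -m; ps aux --sort=-%cpu | head -5; } \
                >> /var/log/midnight-probe.log
            sleep 10
        done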

    Read the article

  • Linux kernel 3.3 released: Android code merged, networking improvements, Btrfs work, and support for a new architecture

    Linus Torvalds has announced the availability of version 3.3 of the Linux kernel. Chief among the new features is the re-integration of portions of the Android kernel code. As a reminder, the Android drivers were dropped from the kernel in 2009 because they were not sufficiently maintained. Merging Android back in will let developers use the Linux kernel to run an Android system, write a single driver for both, and reduce the maintenance cost of patches carried out of tree...

    Read the article

  • udev rule not being executed

    - by jyavenard
    I have the following device that udevadm lists as:

        looking at device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0/ttyUSB0/tty/ttyUSB0':
            KERNEL=="ttyUSB0"
            SUBSYSTEM=="tty"
            DRIVER==""
        looking at parent device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0/ttyUSB0':
            KERNELS=="ttyUSB0"
            SUBSYSTEMS=="usb-serial"
            DRIVERS=="pl2303"
            ATTRS{port_number}=="0"
        looking at parent device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0':
            KERNELS=="6-2:1.0"
            SUBSYSTEMS=="usb"
            DRIVERS=="pl2303"
            ATTRS{bInterfaceNumber}=="00"
            ATTRS{bAlternateSetting}==" 0"
            ATTRS{bNumEndpoints}=="03"
            ATTRS{bInterfaceClass}=="ff"
            ATTRS{bInterfaceSubClass}=="00"
            ATTRS{bInterfaceProtocol}=="00"
            ATTRS{supports_autosuspend}=="1"

    So I created this rule, which does not work:

        KERNEL=="ttyUSB0", SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", DRIVERS=="pl2303", KERNELS=="6-2:1.0", SYMLINK+="cc128serial"

    However, if I drop the KERNELS match:

        KERNEL=="ttyUSB0", SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", DRIVERS=="pl2303", SYMLINK+="cc128serial"

    then it works. I also tried KERNELS=="6*" etc., to no avail. Any ideas? Thanks.
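
    One detail that would explain both results: udev requires all parent keys in a rule (SUBSYSTEMS, KERNELS, DRIVERS, ATTRS) to match one and the same parent device, and no single parent above has both SUBSYSTEMS=="usb-serial" and KERNELS=="6-2:1.0". A sketch that pins everything to the usb interface parent instead (the rules-file name is arbitrary):

        cat > /etc/udev/rules.d/99-cc128.rules <<'EOF'
        KERNEL=="ttyUSB0", SUBSYSTEM=="tty", SUBSYSTEMS=="usb", KERNELS=="6-2:1.0", DRIVERS=="pl2303", SYMLINK+="cc128serial"
        EOF
        udevadm control --reload-rules
        udevadm test $(udevadm info -q path -n /dev/ttyUSB0)   # dry-run the rule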

    Read the article

  • Debian - Problems Unmounting External Hard Drives

    - by user331981
    I recently installed Debian Testing on a new laptop and noticed that I am having issues unmounting external hard drives. I am using the MATE desktop 1.8.1. Behavior when I right-click a drive and select "safely remove":

    - 1st drive: it unmounts, spins down, then immediately spins back up and remounts. Unable to unmount.
    - 2nd drive: it unmounts but does not spin down.
    - 3rd drive: it unmounts but does not spin down, immediately spins back up without remounting, and after 20 seconds spins down and stays that way.

    Behavior is the same on both USB 2.0 and USB 3.0 ports. On my last laptop, which also ran Debian Testing with the MATE desktop, safe removal worked out of the box and I never had an issue: drives would unmount, spin down, and stay down, and remounting required unplugging the device and plugging it back in. I am unsure how to troubleshoot this and don't know whether it is merely a matter of installing a "missing" package or editing a config file. Thank you in advance.
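
    To separate the desktop's behavior from the underlying plumbing, it may help to drive the removal from the command line (a sketch assuming udisks2 is installed and the drive shows up as /dev/sdb):

        udisksctl unmount -b /dev/sdb1    # unmount the filesystem
        udisksctl power-off -b /dev/sdb   # spin the drive down and power it off
        # if the drive stays down here, the problem is in the desktop's
        # "safely remove" path rather than in the kernel or udisks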

    Read the article

  • How to restrict all services to single domain in Ubuntu?

    - by harold
    Someone has pointed an unknown domain to my server's IP address likely via A records. I would like to reject access to ALL services (httpd, ssh, mail, etc.) from this domain and only allow requests from my domain. I want to make it so when I connect to that domain it's completely rejected from my server. I can disallow access from HTTP by changing my web server settings, but I want to do this for every single type of connection. How can I do this?
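
    A caveat first: only protocols that carry the requested hostname (HTTP's Host header, TLS SNI) can be filtered this way; an ssh or mail client connecting via the foreign domain presents nothing but the shared IP address, so those services cannot tell the difference. For HTTP, a catch-all default vhost works; an Apache 2.2 sketch (file layout is an assumption; on 2.4 the equivalent is "Require all denied"):

        cat > /etc/apache2/sites-available/000-catchall <<'EOF'
        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>
        EOF
        # enable it and make sure it loads before the real vhost,
        # so it becomes the default for unrecognized Host headers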

    Read the article

  • What is excessive swapping?

    - by amateur barista
    This post led me to ask the question. Cache contention: on a large site using MyISAM, contention occurs in the database tables when the cache is forced to clear after a node or a comment is added. With tens of thousands of filter text snippets needing to be deleted, the table is locked for a long period, and any access to it is queued pending the purge of its data. The same is true of the page cache. This often causes a "site hang" for a minute or two. During that time new requests keep piling up, and if you do not have Apache's MaxClients parameter set up correctly, the system can go into thrashing because of excessive swapping.
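
    A rough, commonly used way to size MaxClients so Apache alone cannot push the box into swap (a sketch; the process may be named httpd or apache2 depending on the distribution):

        # average resident size of an Apache worker, in MB:
        ps -ylC apache2 --sort=rss \
            | awk '!/RSS/ {sum+=$8; n++} END {printf "%.1f MB\n", sum/n/1024}'
        # MaxClients ~= (RAM left over after MySQL and friends) / average worker size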

    Read the article

  • Started an application through SSH, command line now gone, what happens next?

    - by Chris Dutrow
    Context: this is a very basic question. I am using PuTTY and SSH for the first time to do some serious server setup, and I've run into a situation where I have started a process that I do not want to stop. The process is the gunicorn WSGI HTTP server (running on CentOS 6.3). The command I used to start it (as per the Quick Start) is: gunicorn -w 4 myapp:app At this point in the session, I have lost the command prompt. This must be such a non-issue that it doesn't even enter an experienced user's consciousness, but at my level of experience I am left with several fundamental questions: Does the fact that I have lost the command prompt mean that the process is still running? How do I get back to the command prompt without killing the process? How do I come back and monitor the process later? How do I eventually kill the process? Any help is appreciated, thanks so much!
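
    The prompt is gone simply because gunicorn is running in the foreground of the SSH session, which answers all four questions at once. A sketch of the usual moves:

        # it is still running; Ctrl+C would stop it. To reclaim the prompt instead,
        # press Ctrl+Z (suspend), then:
        bg           # resume gunicorn in the background
        disown -h    # detach it so closing the session won't kill it
        # monitor it later:
        ps aux | grep gunicorn
        # stop it by signalling the master process:
        kill <master-pid>
        # next time, let gunicorn background itself from the start:
        gunicorn -D -w 4 myapp:app   # -D / --daemon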

    Read the article

  • Free web gallery installation that can use existing directory hierarchy in filesystem?

    - by user1338062
    There are several different free-software gallery projects (Gallery, Coppermine, etc.), but as far as I know each of them makes a copy of imported images in its internal storage (be it a directory structure or a database). Is there any gallery software that keeps the existing directory hierarchy of media files (images, videos) as-is and just stores their metadata in a database? I guess at least various NAS solutions ship with software like this.

    Read the article

  • nfs mount fails in Ubuntu 10, but not with -v

    - by stuartreynolds
    (1) mount -t nfs remotehost:/remotedir localmountpoint -o owner,rw
    (2) mount -v -t nfs remotehost:/remotedir localmountpoint -o owner,rw

    Command (1) used to work on Ubuntu 9 and now fails on Ubuntu 10 (2.6.32-21-generic kernel) with the error: mount.nfs: an incorrect mount option was specified Strangely, adding -v (verbose), as in (2), makes the problem go away. This is currently a blocker for me because the fstab line remotehost:/remotedir localmountpoint nfs owner,rw 0 0 causes the same error (I don't believe I can specify verbose in fstab). Is this a bug in mount, or are my options really incorrect?
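
    For what it's worth, owner is a VFS convenience option meant for user-mountable local devices in fstab, not an NFS option, and newer mount.nfs versions reject options they don't recognize (that -v slips through looks like an option-parsing quirk). A sketch of an fstab line without it, keeping the nosuid/nodev safety that owner implied:

        remotehost:/remotedir  localmountpoint  nfs  rw,nosuid,nodev  0  0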

    Read the article
