Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.

Page 418/1048 | < Previous Page | 414 415 416 417 418 419 420 421 422 423 424 425  | Next Page >

  • lucid lynx runaway process

    - by Jerome
    I am running Lucid Lynx with a GNOME desktop. After uninstalling the klipper program there is a new and unwelcome process that begins at startup and takes around 66% of my CPU time. According to "top", the process number is 10 and the name of the process is "events/1". I am unable to kill the process using "top" or "kill". I have no extra startup applications except for gnome-do and tomboy. Eliminating these has no effect. Can anyone help?
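
    A hedged diagnostic sketch (the PID and name are taken from the question above): "events/1" looks like a kernel workqueue thread, and kernel threads cannot be killed from userspace, which would explain why "kill" has no effect.

        # Kernel threads show up in [brackets] in ps aux output
        ps aux | grep '\[events'
        # Confirm PID 10 is a child of kthreadd (PID 2), i.e. a kernel thread
        ps -o pid,ppid,comm -p 10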

    Read the article

  • Use preforker(ruby gem) with supervisor

    - by user1548832
    I also asked the same question on stackoverflow.com: http://stackoverflow.com/questions/13871169/use-preforkerruby-gem-with-supervisor But superuser.com might be of more help to me. Can anyone answer this? I want to run a server program using the preforker ruby gem under supervisor, but an error occurs. I wrote the following test program using preforker:

        #!/usr/bin/env ruby
        require 'rubygems'
        require 'preforker'

        Preforker.new(:app_name => 'test-preforker', :timeout => 60, :workers => 1) do |master|
          while master.wants_me_alive? do
            puts "hello"
            sleep 10
          end
        end.run

    And the following supervisor config:

        [program:test-preforker]
        command=/home/tkono/tmp/test-preforker.rb
        stdout_logfile_maxbytes=1MB
        stderr_logfile_maxbytes=1MB
        stdout_logfile=/var/log/%(program_name)s.log
        stderr_logfile=/var/log/%(program_name)s.log
        autorestart=true

    Then I reload supervisor:

        # supervisorctl reload
        Restarted supervisord

    Here is the supervisor log file:

        2012-12-13 17:50:47,161 CRIT Supervisor running as root (no user in config file)
        2012-12-13 17:50:47,163 WARN Included extra file "/etc/supervisor.d/test-preforker.ini" during parsing
        2012-12-13 17:50:47,209 INFO RPC interface 'supervisor' initialized
        2012-12-13 17:50:47,213 CRIT Server 'unix_http_server' running without any HTTP authentication checking
        2012-12-13 17:50:47,215 INFO supervisord started with pid 12437
        2012-12-13 17:50:48,231 INFO spawned: 'test-preforker' with pid 12440
        2012-12-13 17:50:48,233 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:49,248 INFO spawned: 'test-preforker' with pid 12441
        2012-12-13 17:50:49,261 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:51,267 INFO spawned: 'test-preforker' with pid 12442
        2012-12-13 17:50:51,284 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:54,305 INFO spawned: 'test-preforker' with pid 12443
        2012-12-13 17:50:54,308 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:55,311 INFO gave up: test-preforker entered FATAL state, too many start retries too quickly

    Please tell me what is wrong. Can a program using preforker not run under supervisor?
    preforker: https://github.com/dcadenas/preforker
    supervisor: http://supervisord.org/index.html
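
    A hedged first step (not a confirmed diagnosis): supervisor execs the command directly, so the supervisor log alone doesn't show why the process exits with status 1. Running exactly what supervisor runs, and checking the execute bit and shebang, usually surfaces the real error.

        # Run exactly what supervisor runs, as the same user, to see the real error
        /home/tkono/tmp/test-preforker.rb
        # The script must be executable and its shebang interpreter must resolve
        chmod +x /home/tkono/tmp/test-preforker.rb
        head -1 /home/tkono/tmp/test-preforker.rb
        which ruby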

    Read the article

  • After installing Ubuntu how do I get rid of Unity and go back to GNOME?

    - by aseq
    After I installed the newest Ubuntu LTS release (12.04, still in beta though) I am greeted with an unfamiliar and difficult to use desktop environment. I believe it is called Unity. However, I have used GNOME for a decade and a half and I would not like to move to this new and (for me) unusable desktop environment. What is a quick and easy way to remove most of Unity and bring back GNOME, as well as configure my display manager to load GNOME by default, with the environment as close as possible to the way it was before?
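
    A hedged sketch of the usual approach on 12.04 (package names assumed from the 12.04 archives; the session is then chosen from the gear icon on the LightDM login screen):

        # GNOME Shell
        sudo apt-get install gnome-shell
        # Or the GNOME 2-style classic session
        sudo apt-get install gnome-session-fallback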

    Read the article

  • How can one keep secure, regular backups of one's desktop on a remote server through ADSL? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers, duplicity for some others. Both work fine, each one with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server; but the problem is that once in a while I must do a full backup. My emails and important files are 9G, and I expect this to increase. Uploading through ADSL at 1 Mbit would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must be running on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up these files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient, but that's not a big deal. Maybe also, when I need to restore a file, finding the correct file to restore could be a pain, because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
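
    On the specific encfs question, a hedged sketch (paths hypothetical): the volume key and settings live in the .encfs6.xml file inside the ciphertext directory, so any machine that has that file plus the password can mount the same volume.

        # Machine A: create the encrypted volume (writes ~/cipher/.encfs6.xml)
        encfs ~/cipher ~/plain
        # Machine B: sync ~/cipher over (including the hidden .encfs6.xml), then
        encfs ~/cipher ~/plain    # the same password unlocks the same volume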

    Read the article

  • Is there a DBus command to set the position of the KDE panel?

    - by Liss
    I have a single vertical KDE panel. When I switch from a single-monitor X screen back to a triple-monitor X screen (with xrandr), my KDE panel ends up on the right edge of the middle monitor instead of the right edge of the right monitor. Also, many windows are in the wrong place after the switch, but I have a script which restores the geometry of all windows (as it used to be in the triple-monitor state before I switched to single-monitor mode), so this is not a problem. Unfortunately, it does not work for the KDE panel - it stays in the wrong place. I guess I need to use DBus to restore the KDE panel position (programmatically move it to the right edge of the right screen). I tried googling, but it seems it is not very well documented. Is there a DBus command to set the position of the KDE panel?
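
    A hedged sketch of what such a command might look like. This assumes Plasma 5's scripting interface (the service and method names differ on older KDE versions), and whether a panel can be reassigned to another screen this way is an assumption:

        qdbus org.kde.plasmashell /PlasmaShell evaluateScript '
          var p = panels()[0];   // hypothetical: assumes the first panel is the vertical one
          p.location = "right";  // dock the panel to the right edge
        '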

    Read the article

  • SSH RSA key works with external IP but not internal IP

    - by Ian
    I am using Rackspace cloud hosting. I have 2 servers behind a load balancer. Each server has an external IP and an internal IP. I want to set up a sync job that uses SSH to transfer files. I made an RSA key, and I can successfully SSH from server A into server B, using the external IP of server B, without being prompted for a password. If I try to do the same but use the internal IP, it prompts me for a password. I want to be able to use the key instead of the password. Why is this? Is there something special I have to do during key generation so it works for both IPs? Any help is appreciated.
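
    A hedged diagnostic sketch (addresses are placeholders): running the client in verbose mode against both IPs shows whether the key is offered and accepted, and comparing the host key fingerprints reveals whether the two addresses actually reach the same sshd.

        # Look for "Offering public key" and the server host key fingerprint in each run
        ssh -v user@EXTERNAL_IP true
        ssh -v user@INTERNAL_IP true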

    Read the article

  • Partition is missing in /dev

    - by haimg
    I'm having a strange problem since I moved from CentOS 5 to CentOS 6. I have three disks; the first two are used as a RAID1, and the third is a stand-alone backup disk that is not listed in /etc/fstab (it is mounted when needed and then unmounted). My problem: after a boot, /dev/sdc exists but /dev/sdc1 does not. The links in /dev/disk are also absent for the first partition of sdc. The disk itself is fine, and if I hot-remove it and plug it back in, /dev/sdc1 appears and everything works. My question: what subsystem manages auto-discovery of disks, partitions, etc. during the boot process (e.g. what creates /dev/disk/by-label)? How do I configure it to scan /dev/sdc too and create all relevant files and links in /dev? Edit: Here's the relevant part of the dmesg output (the only place sdc appears). It does list sdc1, but it's not in /dev!

        sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 3:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB)
        sd 1:0:0:0: [sdb] Write Protect is off
        sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
        sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sd 3:0:0:0: [sdc] Write Protect is off
        sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
        sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sdb:
        sdc:
        sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 0:0:0:0: [sda] Write Protect is off
        sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sda:
        DMAR:[DMA Read] Request device [00:1e.0] fault addr 361bc000
        DMAR:[fault reason 06] PTE Read access is not set
        sdb1 sdb2 sdb3
        sdc1
        sda1
        sd 1:0:0:0: [sdb] Attached SCSI disk
        sd 3:0:0:0: [sdc] Attached SCSI disk
        sda2 sda3
        sd 0:0:0:0: [sda] Attached SCSI disk
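
    A hedged sketch of ways to make the kernel and udev re-read the partition table without hot-plugging (both tools ship with CentOS 6; whether they work around the boot-time problem is an assumption):

        # Ask the kernel to re-read sdc's partition table
        partprobe /dev/sdc
        # Replay udev events for block devices so /dev/disk/* links get recreated
        udevadm trigger --subsystem-match=block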

    Read the article

  • How to generate new CSRs for TLS use in sendmail?

    - by Mikey B
    Sendmail 8.13.8 | CentOS 5.x
    Hi Guys, I'm using CA-signed TLS certificates on my sendmail server and they are up for renewal soon. Our new CA doesn't like our old CSR, so I need to generate a new one. Can someone point me to the procedure for doing this (without affecting the production certs that are already in use)? I'm paranoid about overwriting the old TLS certs in the process of generating a CSR. Most of the instructions I've found are for implementing self-signed TLS certs, which isn't an option for me at this time. I'm thinking it would be something like:

        openssl req -new -nodes -out new-tls.csr -keyout new-tls-private.key

    But I wasn't sure if I was missing some options there, such as the -x509 option... -M
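
    For reference, a hedged sketch (filenames hypothetical): writing the new key and CSR to fresh filenames never touches the certs sendmail is currently serving, and -x509 is specifically what you leave out, since it emits a self-signed certificate instead of a CSR.

        # New key + CSR in new files; production certs are untouched
        openssl req -new -nodes -newkey rsa:2048 -keyout new-tls-private.key -out new-tls.csr
        # Inspect the CSR before sending it to the CA
        openssl req -in new-tls.csr -noout -text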

    Read the article

  • CoovaAP router does not get DHCP settings from modem

    - by Dan Sosedoff
    I'm having some serious trouble making a CoovaAP-powered router get network settings from the modem. The wireless router works in captive portal mode (so it handles user login on the same device). But the DNS settings and the connection IP (for the router) are not set as they are supposed to be via DHCP on the modem. This makes installation in another location really complicated, because you have to go and set everything manually. For now I use Google's DNS servers. What's the way to make the router self-configurable via the modem's DHCP?
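
    A hedged sketch, assuming CoovaAP exposes its OpenWrt-style UCI configuration (the interface name is an assumption): setting the WAN protocol to DHCP should let the modem hand out the address and DNS automatically.

        # Hypothetical: on an OpenWrt-based build, switch the WAN side to DHCP
        uci set network.wan.proto=dhcp
        uci commit network
        /etc/init.d/network restart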

    Read the article

  • Video Player/Library for Ubuntu with ratings and thumbs

    - by greggannicott
    I've just made the switch to Ubuntu on my main PC and I've been looking for a media player that can:
    - Play all the usual video formats
    - Rate (and ideally, tag) each file
    - Display thumbnails for each file
    Other than that there isn't much I'm after. Banshee comes close, but doesn't display thumbnails. I've Googled lots but I'm running out of search terms to try. Does anyone have any suggestions? Cheers!

    Read the article

  • How to limit reverse SSH tunneling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel using the ssh -R command from their web servers at port 80 to our public server. The destination port (at the client side) of the reverse SSH tunnel is 80 and the source port (at the public server side) depends on the user. We are planning on maintaining a map of port addresses for each user. For example, client A would tunnel their web server at port 80 to our port 8000; client B from 80 to 8001; client C from 80 to 8002.

        Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver
        Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver
        Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver

    Basically, what we are trying to do is bind each user to a port and not allow them to tunnel to any other ports. If we were using the forward tunneling feature of SSH with ssh -L, we could control which ports may be tunneled using the permitopen=host:port configuration. However, there is no equivalent for reverse SSH tunnels. Is there a way of restricting reverse tunneling ports per user?
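
    A hedged sketch for newer OpenSSH (the PermitListen option appeared in OpenSSH 7.8, so this assumes the server runs at least that version; usernames are taken from the question):

        # sshd_config: pin each user's remote-forward listener to one port
        Match User clienta
            PermitListen 8000
        Match User clientb
            PermitListen 8001
        Match User clientc
            PermitListen 8002

    The same restriction can also be attached per key with a permitlisten="8000" option in authorized_keys.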

    Read the article

  • P2V Wouldn't Boot, Rebuilt initrd, Need to Clean Up

    - by Mike Soule
    We have a CentOS 5.4 server (build 2.6.18-164.el5xen). We went to P2V this server so we can have redundancy; the physical one only has one PSU. The P2V only completed 99% of the way. We have a VMware ticket open, but they marked the ticket as low priority. I was able to boot into a rescue disc of Red Hat 5.4 and rebuild the initrd with the help of this blog post. Now the only issue is that the original server had a modified initrd, which was also from a different OS build and made by an outside provider. We do not have a document outlining the modifications. My question is: is it at all possible to copy the initrd off of the physical server, replace it on the virtual one, and somehow have the virtual machine boot? Thanks for any input. Edit: I copied the initrd img from the physical machine and it recreated the original issue. Here is a screen capture of the error: http://i.imgur.com/MqC73.jpg Edit 2:

        echo Scanning logical volumes
        lvm vgscan --ignorelockingfailure
        echo Activating logical volumes
        lvm vgchange -ay --ignorelockingfailure VolGroup00
        resume /dev/VolGroup00/LogVol01
        echo Creating root device.
        mkrootdev -t ext3 -o defaults,ro /dev/VolGroup00/LogVol00
        echo Mounting root filesystem.
        mount /sysroot
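
    A hedged sketch for comparing the two initrds rather than swapping them blindly (paths hypothetical; on CentOS 5 an initrd is a gzipped cpio archive):

        mkdir /tmp/initrd-phys && cd /tmp/initrd-phys
        zcat /boot/initrd-2.6.18-164.el5xen.img | cpio -idmv
        # Unpack the rebuilt (virtual) initrd the same way into another directory,
        # then diff the init scripts to see what the outside provider changed:
        diff -r /tmp/initrd-phys /tmp/initrd-virt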

    Read the article

  • Co-worker told me Ubuntu Server LTS is not built for production env

    - by Sandro Dzneladze
    I was confronted with a very vague statement today, but unfortunately I had no clue how to respond to or analyse it. A co-worker told me Ubuntu Server LTS is not built for production environments. What is it that makes it not built for production? (Obviously no answer followed, so this is my question to you...) Is it:
    - packages?
    - ease of use?
    - supported systems?
    - stability?
    - support?
    - popularity?

    Read the article

  • BIND having trouble resolving service.graphicly.com

    - by Keith Burgoyne
    Since about two weeks ago, we haven't been able to resolve service.graphicly.com:

        dig @192.168.0.12 service.graphicly.com

        ; <<>> DiG 9.3.4-P1 <<>> @192.168.0.12 service.graphicly.com
        ; (1 server found)
        ;; global options:  printcmd
        ;; connection timed out; no servers could be reached

    Digging on the name servers listed for graphicly.com shows that service.graphicly.com is a CNAME to takecomicsadmin.cloudapp.net. Digging on cloudapp.net's name servers seems to fail:

        dig @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net

        ; <<>> DiG 9.3.4-P1 <<>> @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net
        ; (1 server found)
        ;; global options:  printcmd
        ;; connection timed out; no servers could be reached

    Somehow, my home ISP's name servers can resolve service.graphicly.com without issue. Has anyone else noticed this problem? Does anyone know what the cause of this problem could be? Thanks!
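
    A hedged diagnostic sketch: walking the delegation from the roots and forcing TCP can separate "their name servers are down" from "our resolver or firewall can't reach them", a common culprit when one network resolves a name and another doesn't.

        # Follow the delegation chain ourselves, bypassing the local BIND cache
        dig +trace service.graphicly.com
        # Try the authoritative server over TCP in case UDP responses are being dropped
        dig +tcp @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net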

    Read the article

  • Entire filesystem restore from rdiff-backup snapshot

    - by atmosx
    I'm trying to do a complete system restore from an rdiff-backup. The command used for backing up was:

        rdiff-backup --exclude-special-files --exclude /tmp --exclude /mnt --exclude /proc --exclude /sys / /mnt/backup/ebox/

    I created a new partition, mounted it at /mnt/gentoo, and did:

        rdiff-backup -r /mnt/vol2 /mnt/gentoo

    However when I try to chroot into this system (following Gentoo's manual, which means mounting /dev and /proc) I get the following error:

        chroot: failed to run command '/bin/bash': No such file or directory

    All this takes place on a Parallels (virtual machine) Debian installation. Any ideas on how to proceed in order to fully restore the system? Best Regards
    PS: /mnt/gentoo/bin/bash works fine if I execute it. All files and permissions are in place; rdiff-backup seems to work just fine. However the system can neither boot (it exits with a kernel panic - cannot find init) nor be chrooted.
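
    A hedged diagnostic sketch: "No such file or directory" from chroot, when the shell binary itself exists, often means the dynamic linker inside the chroot is missing (for example a dangling /lib symlink that --exclude-special-files may have skipped; an assumption, not a confirmed cause).

        # From the host: which interpreter does the restored bash expect?
        file /mnt/gentoo/bin/bash
        # Check that the loader printed above (e.g. /lib/ld-linux.so.2) exists under the chroot
        ls -l /mnt/gentoo/lib/ld-* 2>/dev/null || echo "dynamic linker missing in chroot"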

    Read the article

  • df says disk is full, but it is not

    - by Chris
    On a virtualized server running Ubuntu 10.04, df reports the following:

        # df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             7.4G  7.0G     0 100% /
        none                  498M  160K  498M   1% /dev
        none                  500M     0  500M   0% /dev/shm
        none                  500M   92K  500M   1% /var/run
        none                  500M     0  500M   0% /var/lock
        none                  500M     0  500M   0% /lib/init/rw
        /dev/sda3             917G  305G  566G  36% /home

    This is puzzling me for two reasons:
    1. df says that /dev/sda1, mounted at /, has a 7.4 gigabyte capacity, of which only 7.0 gigabytes are in use, yet it reports / being 100 percent full; and
    2. I can create files on /, so it clearly does have space left.
    Possibly relevant is that the directory /www is a symbolic link to /home/www, which is on a different partition (/dev/sda3, mounted at /home). Can anyone offer suggestions on what might be going on here? The server appears to be working without issue, but I want to make sure there's not a problem with the partition table, file systems or something else which might result in implosion (or explosion) later.
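
    One hedged explanation worth checking: ext filesystems reserve a percentage of blocks (5% by default) for root, and df counts them as neither used nor available, which can produce exactly this 7.0G-used, 0-available, 100% picture. Note that 5% of 7.4G is roughly the 0.4G gap between Size and Used seen above.

        # Show the reserved block count on the root filesystem
        tune2fs -l /dev/sda1 | grep -i 'reserved block count'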

    Read the article

  • Untar with date filter

    - by Don
    Is there any way to untar and only extract those files that are above a certain date, including the directory structure? I restored a backup on a play server, but it was a few days old. However, I have a tar archive of the entire structure that is more up to date and healthy, so now I want to extract all files (including directory structure) based on a date filter on the files, if possible.
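
    A hedged sketch assuming GNU tar (the date-field positions in the tvf listing and the archive name are assumptions, and the awk split is fragile for filenames containing spaces): list the archive, filter on the modification date, then extract just those members; tar recreates the leading directories automatically.

        # Collect member names modified on or after 2012-06-01
        tar -tvf backup.tar | awk '$4 >= "2012-06-01" {print $6}' > newer.list
        # Extract only those members (paths include their directory structure)
        tar -xvf backup.tar -T newer.list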

    Read the article

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share out my OS X home directory to the VM - the goal here is to run the Rails server on the VM, but host the files on OS X (so they are automatically backed up, and so I can use tools like Textmate to develop with). Everything seems to work with the shared directory - my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc...) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work! This will persist until I make a change to the CSS files and it has to recompile.

    Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works... It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it seems to work fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice... any ideas?

    Update: I've found that the number of times I need to refresh before it works is not random - it has to do with how many assets are being compiled. For example, if I edit one of my CSS files, and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the "operation not permitted" error...
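
    Since the errors all come from tmp/cache on the shared filesystem, one hedged workaround sketch (directory names hypothetical) is to keep the cache on the VM's local disk while the source stays on the OS X share:

        # Move the Rails tmp dir onto the VM's native filesystem and symlink it back
        mv tmp /var/tmp/myapp-tmp        # hypothetical local path
        ln -s /var/tmp/myapp-tmp tmp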

    Read the article

  • How to secure postfix to find out whether emails really come from the sender?

    - by codeworxx
    Is it possible to secure postfix in such a way that incoming emails are checked on whether they really come from the stated sender? It is possible to write a PHP script and choose an arbitrary sender, so that the mail looks as if it really comes from that sender - what possibilities does postfix have to detect that such a mail is not actually coming from the real sender? What I have found and activated so far are these options:

        smtpd_sender_restrictions = reject_unknown_sender_domain
        unknown_address_reject_code = 554
        smtpd_client_restrictions = reject_unknown_client
        unknown_client_reject_code = 554

    Please mention whether I have missed any points!
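
    Beyond domain checks, sender authenticity is usually verified with SPF and DKIM. A hedged sketch of wiring an SPF policy daemon into postfix (this assumes the pypolicyd-spf package and its conventional service name; adjust to whatever your distribution ships):

        # main.cf: consult the SPF policy service for incoming mail
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination,
            check_policy_service unix:private/policyd-spf

        # master.cf: run the policy daemon
        policyd-spf unix - n n - 0 spawn user=nobody argv=/usr/bin/policyd-spf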

    Read the article

  • Are PHP session files ever deleted?

    - by GetFree
    I see there are thousands of files in my /tmp directory (a CentOS machine), and almost all of them are PHP session files. I'm worried about the possible impact this might have on my system. Are those files ever deleted, either by the OS, Apache or PHP? Or do I have to take care of it myself?
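
    For context, a hedged sketch of the php.ini settings that control this (the values shown are typical defaults, an assumption for this particular machine): PHP's own garbage collector removes stale session files probabilistically when sessions start.

        ; Chance of running GC on any session start: gc_probability/gc_divisor (here 1%)
        session.gc_probability = 1
        session.gc_divisor = 100
        ; A session file is considered garbage after this many seconds
        session.gc_maxlifetime = 1440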

    Read the article

  • Sniff packets using tcpdump

    - by denisk
    I have a completely noob question. I want to see all packets that come to my computer from a particular site (google.com). So I start tcpdump:

        sudo tcpdump -i eth0 host google.com

    Then I enter google.com in a browser and hit enter - nothing gets captured. I can't figure out why this happens. What am I doing wrong?
    Edit: It turned out I was listening on the wrong interface. I changed eth0 to any and it worked. It was ppp1 that needed listening on. Thanks for your answers!
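
    A hedged sketch of the two commands that resolve this kind of problem (both flags are standard tcpdump options):

        # List the interfaces tcpdump can see, to pick the right one
        tcpdump -D
        # Or capture on all interfaces at once
        sudo tcpdump -i any host google.com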

    Read the article

  • How to turn off ATI adapter on Acer Timeline 4810G with ubuntu 9.10

    - by netimen
    I can't turn off my ATI adapter. I have applied the fix, but lspci | grep VGA still gives:

        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        01:00.0 VGA compatible controller: ATI Technologies Inc M92 LP [Mobility Radeon HD 4300 Series] (rev ff)

    and my power consumption is about 15 W (Wi-Fi on). I run Ubuntu 9.10, kernel 2.6.31-14-generic, BIOS version 2.30.
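
    For what it's worth, a hedged sketch of the vga_switcheroo approach. Note this interface only appeared in kernel 2.6.34, so it would require a newer kernel than the stock 2.6.31:

        # Requires CONFIG_VGA_SWITCHEROO and debugfs mounted
        cat /sys/kernel/debug/vgaswitcheroo/switch    # show both GPUs and their state
        echo OFF > /sys/kernel/debug/vgaswitcheroo/switch    # power down the inactive GPU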

    Read the article

  • Wrap app with dynamic libraries into one large static app

    - by progo
    I have an old program that kind of depends on older dynamic libraries. They tend to get upgraded with the distro's updates. I figured there might be a script using ldd that would gather the libraries needed and create one bigger, statically linked application that wouldn't break so easily. If I could do this, a lot of older KDE libraries could be removed from my system, which would make my life easier. Thanks!
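
    True static relinking of an already-built dynamic binary isn't generally possible, but a hedged bundling sketch (paths hypothetical) achieves the same "won't break on library upgrades" goal: copy the libraries ldd reports next to the binary and launch it with LD_LIBRARY_PATH pointing at them.

        mkdir -p bundle/libs && cp /usr/bin/oldapp bundle/
        # Copy every resolved shared library the binary links against
        ldd /usr/bin/oldapp | awk '/=> \//{print $3}' | xargs -I{} cp {} bundle/libs/
        # Wrapper script that prefers the bundled copies
        printf '#!/bin/sh\nHERE=$(dirname "$(readlink -f "$0")")\nLD_LIBRARY_PATH="$HERE/libs" exec "$HERE/oldapp" "$@"\n' > bundle/run.sh
        chmod +x bundle/run.sh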

    Read the article

  • javascript doesn't seem to be able to post form data (nginx server w/ php-fpm)

    - by Jones
    So the situation is like so: I have an nginx server with php-fpm installed. All is well; the site scripts all work perfectly, and I am able to use HTML to POST form data and it works just fine. However, there seems to be some correlation between javascript, the POST method, and nothing happening, and I can't seem to determine the issue. Example: I have a user login widget that uses javascript to submit the fields and POST the data to a backend auth script, which returns a server message that then populates the login box saying something like "Login Successful", followed by reloading the page to properly enable content. The problem is, nothing happens when you hit submit. I do know the setup works, because I had it working on Apache before migrating. Also, if it makes any difference, the server is an Amazon EC2 instance using the Amazon AMI. I really don't know where to start looking on this one, but below is my default.conf for the server:

        upstream backend_get {
            server 127.0.0.1:80 weight=1;
        }
        upstream backend_post {
            server 127.0.0.1:80 weight=1;
        }
        #Main website url
        server {
            listen 80;
            server_name server.com;

            #charset koi8-r;
            access_log logs/host.access.log main;
            error_log logs/host.error.log;

            location / {
                root /usr/share/nginx/html;
                index index.php index.html index.htm;
                if ($request_method = POST) {
                    proxy_pass http://backend_post;
                    break;
                }
            }

            location ~ \.php$ {
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
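
    A hedged way to take the browser out of the equation (the URL and field names are hypothetical): POST directly with curl and watch the nginx logs. With the config above, a POST to / is proxied back to port 80 and re-enters the same server block, which is worth ruling out as a loop.

        # Exercise the POST path the javascript would use
        curl -i -X POST -d 'user=test&pass=test' http://server.com/auth.php
        # Watch for errors while the request runs
        tail -f logs/host.error.log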

    Read the article
