Search Results

Search found 12569 results on 503 pages for 'root plist'.


  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have a /dev/md127 RAID5 array that consisted of four drives. I managed to hot-remove them from the array, and currently /dev/md127 does not have any drives:

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid1 sdd1[0] sda1[1]
            304052032 blocks super 1.2 [2/2] [UU]
      md1 : active raid0 sda5[1] sdd5[0]
            16770048 blocks super 1.2 512k chunks
      md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____]
      unused devices: <none>

    and mdadm --detail /dev/md127:

      /dev/md127:
              Version : 1.2
        Creation Time : Thu Sep 6 10:39:57 2012
           Raid Level : raid5
           Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
        Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
         Raid Devices : 4
        Total Devices : 0
          Persistence : Superblock is persistent
          Update Time : Fri Sep 7 17:19:47 2012
                State : clean, FAILED
       Active Devices : 0
      Working Devices : 0
       Failed Devices : 0
        Spare Devices : 0
               Layout : left-symmetric
           Chunk Size : 512K
          Number   Major   Minor   RaidDevice   State
             0       0       0        0         removed
             1       0       0        1         removed
             2       0       0        2         removed
             3       0       0        3         removed

    I've tried to stop it, but:

      mdadm --stop /dev/md127
      mdadm: Cannot get exclusive access to /dev/md127: Perhaps a running process, mounted filesystem or active volume group?

    I made sure that it's unmounted with umount -l /dev/md127 and confirmed that it indeed is unmounted:

      umount /dev/md127
      umount: /dev/md127: not mounted

    I've tried to zero the superblock of each drive and I get (for each drive):

      mdadm --zero-superblock /dev/sde1
      mdadm: Unrecognised md component device - /dev/sde1

    Here's the output of lsof | grep md127:

      md127_rai 276 root cwd DIR 9,0 4096 2 /
      md127_rai 276 root rtd DIR 9,0 4096 2 /
      md127_rai 276 root txt unknown /proc/276/exe

    What else can I do? LVM is not even installed, so it can't be a factor.
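
    A few things are sometimes worth checking in this situation, sketched below with the device names from the question (none of this is a guaranteed fix): the md127_rai entries in lsof are the array's own kernel thread, so the blocker is usually something else still holding the block device, and if the array was originally built from whole disks the superblock will be on /dev/sde rather than /dev/sde1.

      # diagnostic/cleanup sketch -- adjust device names before running anything
      fuser -vm /dev/md127            # anything (export, loop device, etc.) still using it?
      swapoff -a                      # a swap signature on the array would also block --stop
      dmsetup table | grep -i md127   # device-mapper targets layered on top?
      mdadm --stop /dev/md127         # retry the stop after the above
      # if the array was built from whole disks rather than partitions,
      # the superblock lives on /dev/sde, not /dev/sde1:
      mdadm --examine /dev/sde
      mdadm --zero-superblock /dev/sde
      # last resort if --stop keeps failing (assumes nothing is really using the array):
      echo inactive > /sys/block/md127/md/array_state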


  • Lighttpd + Django on Gentoo takes 10 seconds to answer

    - by plaetzchen
    I want to run a Django site on lighttpd with FastCGI on a Gentoo machine. Every time I try to access the site I get a response after more or less exactly 10 seconds. I'm using a socket to let lighttpd communicate with my Django site, but a TCP port doesn't help either. Could this be a lighttpd problem? I tried it both from a server on the internet and from localhost; this is what lighttpd gives me in the error.log:

      2012-07-10 14:36:36: (response.c.300) -- splitting Request-URI
      2012-07-10 14:36:36: (response.c.301) Request-URI : /
      2012-07-10 14:36:36: (response.c.302) URI-scheme : http
      2012-07-10 14:36:36: (response.c.303) URI-authority: owntube
      2012-07-10 14:36:36: (response.c.304) URI-path : /
      2012-07-10 14:36:36: (response.c.305) URI-query :
      2012-07-10 14:36:36: (response.c.300) -- splitting Request-URI
      2012-07-10 14:36:36: (response.c.301) Request-URI : /owntube.fcgi/
      2012-07-10 14:36:36: (response.c.302) URI-scheme : http
      2012-07-10 14:36:36: (response.c.303) URI-authority: owntube
      2012-07-10 14:36:36: (response.c.304) URI-path : /owntube.fcgi/
      2012-07-10 14:36:36: (response.c.305) URI-query :
      2012-07-10 14:36:36: (response.c.349) -- sanatising URI
      2012-07-10 14:36:36: (response.c.350) URI-path : /owntube.fcgi/
      2012-07-10 14:36:36: (mod_access.c.135) -- mod_access_uri_handler called
      2012-07-10 14:36:36: (mod_fastcgi.c.3632) handling it in mod_fastcgi
      2012-07-10 14:36:36: (response.c.470) -- before doc_root
      2012-07-10 14:36:36: (response.c.471) Doc-Root : /var/www/owntube
      2012-07-10 14:36:36: (response.c.472) Rel-Path : /owntube.fcgi
      2012-07-10 14:36:36: (response.c.473) Path :
      2012-07-10 14:36:36: (response.c.521) -- after doc_root
      2012-07-10 14:36:36: (response.c.522) Doc-Root : /var/www/owntube
      2012-07-10 14:36:36: (response.c.523) Rel-Path : /owntube.fcgi
      2012-07-10 14:36:36: (response.c.524) Path : /var/www/owntube/owntube.fcgi
      2012-07-10 14:36:36: (response.c.541) -- logical -> physical
      2012-07-10 14:36:36: (response.c.542) Doc-Root : /var/www/owntube
      2012-07-10 14:36:36: (response.c.543) Rel-Path : /owntube.fcgi
      2012-07-10 14:36:36: (response.c.544) Path : /var/www/owntube/owntube.fcgi
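
    An almost exactly 10-second delay on every request quite often points at a name-lookup timeout rather than at lighttpd or Django themselves; that is only a guess here, but it is cheap to check. The sketch below assumes the host name the site uses is "owntube", as in the log, and the /etc/hosts line is a hypothetical fix:

      # quick checks for a fixed ~10 s delay (a guess at a resolver timeout, not a confirmed cause)
      time curl -s -o /dev/null http://localhost/      # is the delay there even for localhost?
      getent hosts owntube                             # does the name the vhost/site uses resolve at all?
      grep -q owntube /etc/hosts || echo "127.0.0.1 owntube" >> /etc/hosts   # map it locally and re-test
      tail -f /var/log/lighttpd/error.log              # watch the FastCGI hand-off timing while testing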


  • How To Investigate/Restore MySQL Permissions? MySQL ERROR 1045 (28000): Access denied for user

    - by Recc
    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    This is on Debian. mysqld is supposedly listening on 3306, and telnet to 3306 works. I also tried binding it specifically to localhost and then to 127.0.0.1, which made no difference. However:

      # netstat -ln | grep mysql
      unix 2 [ ACC ] STREAM LISTENING 78993 /var/run/mysqld/mysqld.sock
      # mysql -P3306 -ptest
      ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    Things I've tried:

    dpkg-reconfigure mysql-server-5.1 -- doesn't help.

    http://www.debian-administration.org/articles/442 -- doesn't help.

    This command (source): UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root'; FLUSH PRIVILEGES; -- doesn't help, in fact:

      Query OK, 0 rows affected (0.00 sec)
      Rows matched: 0 Changed: 0 Warnings: 0

    So might the user be deleted? Extremely unlikely, as all this started after a package update a colleague did and some separate services started screwing around, but my colleague said he removed the offenders. There's more: while # mysqld_safe --skip-grant-tables is running, one can access the data tables, but only with the valid passwords! So there are users and some authentication takes place, hence the 0 rows affected above. Can the privilege tables be damaged somehow, and how can I recreate/restore them when my only way of getting a MySQL console is to skip them? Can I spare myself a reinstall of MySQL? Either way, I did get a dump of the DBs now that I could get in with the above mode.
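
    Since the UPDATE matched 0 rows, the 'root'@'localhost' row may simply be gone. A hedged recovery sketch along the lines of the documented skip-grant-tables procedure is below; 'MyNewPass' is a placeholder, and the grant tables are dumped first so nothing is lost:

      # hedged recovery sketch for MySQL 5.1 -- not a guaranteed fix
      mysqld_safe --skip-grant-tables --skip-networking &          # keep the network closed while grants are off
      mysqldump mysql > /root/mysql-grant-tables-backup.sql        # snapshot the grant tables first
      mysql -e "SELECT User, Host FROM mysql.user;"                # is there any root row at all?
      # FLUSH PRIVILEGES reloads the grant tables so account statements work again;
      # GRANT then recreates the root account if its row is missing
      mysql -e "FLUSH PRIVILEGES; GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'MyNewPass' WITH GRANT OPTION; FLUSH PRIVILEGES;"
      /etc/init.d/mysql restart                                    # then go back to a normal startup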


  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano. I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.) set :user, 'root' set :domain, 'domainname.com' set :application, 'appname' # adjust if you are using RVM, remove if you are not $:.unshift(File.expand_path('./lib', ENV['rvm_path'])) require "rvm/capistrano" set :rvm_ruby_string, '1.9.2' # file paths set :repository, "ssh://[email protected]/~/git/appname.git" set :deploy_to, "/var/rails/appname" # distribute your applications across servers (the instructions below put them # all on the same server, defined above as 'domain', adjust as necessary) role :app, domain role :web, domain role :db, domain, :primary => true set :deploy_via, :remote_cache set :scm, 'git' set :branch, 'master' set :scm_verbose, true set :use_sudo, false set :rails_env, :production namespace :deploy do desc "cause Passenger to initiate a restart" task :restart do run "touch #{current_path}/tmp/restart.txt" end desc "reload the database with seed data" task :seed do run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}" end end after "deploy:update_code", :bundle_install desc "install the necessary prerequisites" task :bundle_install, :roles => :app do run "cd #{release_path} && bundle install" end Here's my result: ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'... ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password). ** [domainname.com :: err] fatal: The remote end hung up unexpectedly I'm able to ssh without a password, so not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result: ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'... ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password). ** [domainname.com :: err] fatal: The remote end hung up unexpectedly command finished Thanks a lot for your help with this.
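
    The "Permission denied (publickey...)" happens on the deploy host when it tries to clone from the git server, so passwordless SSH from the workstation alone is not enough. A common remedy is SSH agent forwarding (in Capistrano 2 that would be ssh_options[:forward_agent] = true in deploy.rb), or giving the deploy host its own key on the repository server. A sketch for verifying the chain, with <repo-user> and <repo-host> as placeholders for the redacted values above:

      # verify the ssh chain the deploy actually uses
      ssh-add -l                                         # is a key loaded in the local agent?
      ssh -A root@domainname.com 'ssh -T <repo-user>@<repo-host> exit && echo repo reachable'
      # if that fails, either forward the agent in Capistrano (ssh_options[:forward_agent] = true)
      # or put the deploy host's own public key on the repository server:
      ssh root@domainname.com 'cat ~/.ssh/id_rsa.pub'    # add this key to the repo account's authorized_keys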


  • Shared hosting with malware: .htaccess files get modified every 2 hours or so

    - by apache
    I spent all day today chasing malware on the shared hosting for one of my clients. The issue is as follows: every 2 hours or so the main .htaccess file and all other .htaccess files get modified. At the top of each file these lines are added:

      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteCond %{HTTP_REFERER} ^.*(google|ask|yahoo|youtube|wikipedia|excite|altavista|msn|aol|goto|infoseek|lycos|search|bing|dogpile|facebook|twitter|live|myspace|linkedin|flickr)\.(.*)
      RewriteRule ^(.*)$ http://pasla-ghwoo.ru/rqpgfap?8 [R=301,L]
      </IfModule>

    and at the bottom:

      ErrorDocument 400 http://pasla-ghwoo.ru/rqpgfap?8
      ErrorDocument 401 http://pasla-ghwoo.ru/rqpgfap?8
      ErrorDocument 403 http://pasla-ghwoo.ru/rqpgfap?8
      ErrorDocument 404 http://pasla-ghwoo.ru/rqpgfap?8
      ErrorDocument 500 http://pasla-ghwoo.ru/rqpgfap?8

    The main problem is that I'm not root on the server and cannot sudo, as this is shared hosting with hundreds of websites. Typical useful commands like dmesg, lsof, dtrace and chattr, among many others, are not available to me because I'm not root. I can't find out who is modifying the .htaccess files -- how do I get that information? My guess is some PHP script is changing them, called from outside via command and control. This seems to relate to this: http://blog.unmaskparasites.com/2009/09/11/dynamic-dns-and-botnet-of-zombie-web-servers/ How do I find out who is modifying the .htaccess files without being root?
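
    Without root, about all that can be done from inside the account is to hunt for the injected PHP and keep a timeline of the changes. A sketch, run from the account's document root (the grep patterns are common ones, not an exhaustive list):

      # look for likely droppers in this account's files
      find . -type f -name '*.php' -newer .htaccess -mmin -180 -ls        # PHP changed around the same window
      grep -rlE 'base64_decode|eval\(|gzinflate|str_rot13' --include='*.php' . | head
      # keep an audit trail that pins the 2-hour cycle down to the minute
      touch "$HOME/.htaccess.stamp"
      ( crontab -l 2>/dev/null; echo '*/10 * * * * find "$HOME" -name .htaccess -newer "$HOME/.htaccess.stamp" -ls >> "$HOME/htaccess-audit.log"; touch "$HOME/.htaccess.stamp"' ) | crontab -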


  • IIS 7.0 URL rewrite problem

    - by Jouni Pekkola
    Hello, how can I set a redirect URL for a virtual directory in IIS 7.0? I have installed the latest URL Rewrite Module 2. Let me explain my problem with an example. I have a website on my IIS 7.0 server: www.mysite.com. I decided to create a virtual directory "sales" under my site, which points to the website root directory. Now I need to create a redirect URL for the vdir; the vdir points to the same root directory as my site root does. The big idea is that I can type www.mysite.com/sales in the browser and be automatically redirected to the URL www.mysite.com?productid=200. I tried to make the redirect with a rewrite rule on the vdir (not the website), but I always get this error message: cannot add duplicate collection entry of type 'rule' with unique key attribute 'name' set to "test". This happens when I point at the vdir and try to add a rule. I can add rules at the website level, but the rules don't work -- the URL www.mysite.com/sales gives me the following error. I know that the key is unique; I checked it in web.config. This kind of feature was really easy to use in IIS 6.0: just point at the vdir with your mouse and set Properties -> A redirection to a URL. Please, can someone explain the right way to do this in IIS 7.0?


  • NginX & Munin - Location and error 404

    - by user1684189
    I have a server running nginx + php-fpm with this simple configuration:

      server {
          listen 80;
          server_name ipoftheserver;
          access_log /var/www/default/logs/access.log;
          error_log /var/www/default/logs/error.log;
          location / {
              root /var/www/default/public_html;
              index index.html index.htm index.php;
          }
          location ^~ /munin/ {
              root /var/cache/munin/www/;
              index index.html index.htm index.php;
          }
          location ~ \.php$ {
              include /etc/nginx/fastcgi_params;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /var/www/default/public_html$fastcgi_script_name;
          }
      }

    but when I open ipoftheserver/munin/ I receive a 404 error (when I request ipoftheserver/, the files in /var/www/default/public_html are listed correctly). Munin is installed and works perfectly. If I remove this configuration and use this other one instead, everything works (but not in the /munin/ directory):

      server {
          server_name ipoftheserver;
          root /var/cache/munin/www/;
          location / {
              index index.html;
              access_log off;
          }
      }

    How can I fix this? Many thanks for your help.
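
    A likely explanation, though only a guess from the config above: with root, nginx appends the full URI to the path, so /munin/index.html is looked up at /var/cache/munin/www/munin/index.html. alias strips the matched prefix instead. A sketch of the location rewritten that way, assuming the Munin HTML really lives directly in /var/cache/munin/www/:

      # hedged sketch: alias instead of root for the /munin/ prefix
      location ^~ /munin/ {
          alias /var/cache/munin/www/;     # /munin/foo is served from /var/cache/munin/www/foo
          index index.html;
      }

    After editing, nginx -t followed by a reload would pick it up and should stop the path from being doubled.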


  • Almost All Xenserver Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disc crash of one of two hard discs in a software raid with a LVM on top. The server is running Citrix xenserver. On the hard disk which is still intact, the volume group gets detected well, but only one LV is left. (some hashes replaced by "x") # lvdisplay --- Logical volume --- LV Name /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT VG Name VG_XenStorage-x-x-x-x-408b91acdcae LV UUID x-x-x-x-x-x-vQmZ6C LV Write Access read/write LV Status available # open 0 LV Size 4.00 MiB Current LE 1 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 root@rescue ~ # vgdisplay --- Volume group --- VG Name VG_XenStorage-x-x-x-x-408b91acdcae System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 4 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size 698.62 GiB PE Size 4.00 MiB Total PE 178848 Alloc PE / Size 1 / 4.00 MiB Free PE / Size 178847 / 698.62 GiB VG UUID x-x-x-x-x-x-53w0kL I could understand if a full physical volume is lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes? EDIT We are here in a rescue system. The problem is that the whole server does not boot (GRUB error 22) What we are trying to do is to access the root filesystem. But everything was in the LVM. We have only this: (parted) print Model: ATA SAMSUNG HD753LJ (scsi) Disk /dev/sdb: 750GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 750GB 750GB primary boot, lvm And this 750GB LVM volume is exactly what we see on top. edit2 Output of vgcfgrestore, but from the rescue system, as there is no root to chroot to. # vgcfgrestore --list VG_XenStorage-x-b4b0-x-x-408b91acdcae File: /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00000.vg VG name: VG_XenStorage-x-x-x-x-408b91acdcae Description: Created *before* executing '/sbin/vgscan --ignorelockingfailure --mknodes' Backup Time: Fri Jun 28 23:53:20 2013 File: /etc/lvm/backup/VG_XenStorage-x-x-x-x-408b91acdcae VG name: VG_XenStorage-x-x-x-x-408b91acdcae Description: Created *after* executing '/sbin/vgscan --ignorelockingfailure --mknodes' Backup Time: Fri Jun 28 23:53:20 2013
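
    If the LVM metadata archives on the surviving disk still describe the old logical volumes, vgcfgrestore may be able to bring the definitions back. This is only a sketch, run from the rescue system, and imaging the disk first (e.g. with ddrescue) would be prudent; the VG name is copied from the question with its redacted hashes:

      # hedged recovery sketch -- metadata operations only, but back up the disk first
      vgcfgrestore --list VG_XenStorage-x-x-x-x-408b91acdcae       # pick the newest archive that still lists the LVs
      vgcfgrestore -f /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00000.vg VG_XenStorage-x-x-x-x-408b91acdcae
      vgscan
      vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae              # activate whatever came back
      # if the VG complains about a missing PV, '--partial' may at least expose what is recoverable:
      # vgchange -ay --partial VG_XenStorage-x-x-x-x-408b91acdcae
      lvs -a                                                       # did the missing LVs reappear?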


  • Setting umask for all users

    - by Yarin
    I'm trying to set the default umask to 002 for all users including root on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However the comments at the top of that file say: It's NOT a good idea to change this file unless you know what you are doing. It's much better to create a custom.sh shell script in /etc/profile.d/ to make custom changes to your environment, as this will prevent the need for merging in future updates. So I went ahead and created the following file: /etc/profile.d/myapp.sh with the single line: umask 002 Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache wsgi application, or files created with sudo, still default to 644 permissions... $ touch newfile (as root): Result = 664 (Works) $ sudo touch newfile: Result = 644 (Doesn't work) Files created by Apache wsgi app: Result = 644 (Doesn't work) Files created by Python's RotatingFileHandler: Result = 644 (Doesn't work) Why is this happening, and how can I ensure 664 file permissions system wide, no matter what creates the file? UPDATE: I ended up finding a cleaner solution to this on a per-directory basis using ACLs, which I describe here.
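
    /etc/profile and /etc/profile.d are only read by login shells, which matches what is observed: an interactive root shell gets the new umask, while sudo, Apache and the Python daemon never source those files. Each non-login path needs its own setting; two hedged examples for CentOS 6 are below (the sysconfig trick assumes the stock httpd init script sources that file, and the default ACL is the same per-directory approach mentioned in the update):

      # Apache on CentOS 6: the init script sources /etc/sysconfig/httpd, so a umask set there is inherited
      echo 'umask 002' >> /etc/sysconfig/httpd && service httpd restart
      # per-directory alternative: a default ACL makes group write stick regardless of the creator's umask
      setfacl -R -d -m u::rwx,g::rwx,o::rx /path/to/shared/dir     # /path/to/shared/dir is a placeholder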


  • How to redirect a name-based VirtualHost to a different port?

    - by Andra
    I have a Virtuoso SPARQL endpoint installed, which I want to make available through a hostname (e.g. www.virtuosoexample.com). The thing with Virtuoso is that there is no document root: the endpoint is started by the daemon and made available on a port (e.g. localhost:1234/). I know how to set up a virtual host pointing to a document root, but I don't know how to do this for a server behind a port number. Any advice would be appreciated. Below is how I would do it with a document root; I tried to change that (naively) into localhost:1234/sparql, but that didn't work.

      <VirtualHost *>
          ServerName www.virtuosoexample.com
          ServerAlias www.virtuosoexample.com
          ErrorLog /var/log/apache2/error.wp-sparql.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.wp-sparql.log combined
          DocumentRoot /var/www/endpoint/sparql/
          <Directory /var/www/endpoint/sparql>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
      </VirtualHost>
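
    The usual approach here is to make the name-based vhost a reverse proxy to the local port instead of giving it a DocumentRoot; that needs mod_proxy and mod_proxy_http enabled (a2enmod proxy proxy_http on a Debian-style layout). A hedged sketch of such a vhost, reusing the names and log paths from above:

      <VirtualHost *:80>
          ServerName www.virtuosoexample.com
          # everything requested on this name is forwarded to the local Virtuoso listener
          ProxyPreserveHost On
          ProxyPass        / http://localhost:1234/
          ProxyPassReverse / http://localhost:1234/
          ErrorLog /var/log/apache2/error.wp-sparql.log
          CustomLog /var/log/apache2/access.wp-sparql.log combined
      </VirtualHost>

    If only the /sparql path should be exposed, the ProxyPass lines can be narrowed to ProxyPass /sparql http://localhost:1234/sparql accordingly.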


  • Postfix: received and delivered have the same values (?)

    - by thinkingbig
    I have configured my first server (Debian with ISPConfig). Generally I want to send bulk e-mails to our users, so I configured Postfix and turned it on... but... after 1 hour of sending emails I have logs like this:

      Grand Totals
      messages
        21886   received
        21883   delivered
            0   forwarded
            0   deferred
          234   bounced
            0   rejected (0%)
            0   reject warnings
            0   held
            0   discarded (0%)
       30805k   bytes received
       31280k   bytes delivered
            3   senders
            3   sending hosts/domains
        12588   recipients
            3   recipient hosts/domains

      Per-Hour Traffic Summary
      time        received  delivered  deferred  bounced  rejected
      --------------------------------------------------------------------
      0000-0100          0          0         0        0         0
      0100-0200          0          0         0        0         0
      0200-0300          0          0         0        0         0
      0300-0400          0          0         0        0         0
      0400-0500          0          0         0        0         0
      0500-0600          0          0         0        0         0
      0600-0700          0          0         0        0         0
      0700-0800          0          0         0        0         0
      0800-0900          0          0         0        0         0
      0900-1000          0          0         0        0         0
      1000-1100          0          0         0        0         0
      1100-1200          0          0         0        0         0
      1200-1300          0          0         0        0         0
      1300-1400          0          0         0        0         0
      1400-1500          0          0         0        0         0
      1500-1600      15311      15306         0      168         0
      1600-1700       6575       6577         0       66         0
      1700-1800          0          0         0        0         0
      1800-1900          0          0         0        0         0
      1900-2000          0          0         0        0         0
      2000-2100          0          0         0        0         0
      2100-2200          0          0         0        0         0
      2200-2300          0          0         0        0         0
      2300-2400          0          0         0        0         0

      Host/Domain Summary: Message Delivery
      sent cnt   bytes    defers   avg dly   max dly   host/domain
      21521      30353k   0        3.4 m     15.5 m    wp.pl
      355        919k     0        54.9 s    13.0 m    mysenderdomainexample.pl
      7          8477     0        1.7 s     1.9 s     prokonto.pl

      Host/Domain Summary: Messages Received
      msg cnt   bytes    host/domain
      21879     30786k   mysenderdomainexample.pl
      5         16196    mx4.wp.pl
      1         3200     mx3.wp.pl

      Senders by message count
      21783   [email protected]
      96      [email protected]
      6       from=<

    So, my questions are:

    1) Why do received and delivered have (approximately) the same values?
    2) How can I check if an email has been delivered?
    3) How can I change the default "root" and "www-data" user (From / Return-Path) to something else? I have changed this in the script, but Postfix ignores the script's values and sends every mail from root (we have PHP send crons in /etc/crontab).
    4) Why have approximately 100% of the received mails been addressed to my sender host (see "Host/Domain Summary: Messages Received")?

    Waiting for a response. Regards, TB


  • Having trouble with a workaround for booting from a USB stick, using GRUB and a minimal Linux kernel to load USB drivers

    - by s hanley
    I'm trying to boot from a usb stick. I formatted it to fat32, and later to ext2, and installed dsl on it using unetbootin, and later the usb install guide on dsl wiki (http://www.damnsmalllinux.org/wiki/index.php/Install_to_USB_From_within_Linux). The bios doesn't have a setting for booting from usb. Grub doesn't "see" the usb drive when I use the root and find commands, explained in (http://www.damnsmalllinux.org/wiki/index.php/USB_Booting). This happens even when I set boot from floppy at the top of the boot order. However, my usb keyboard is recognised by the bios and by grub. How can it recognise the keyboard but not the usb drive? Also, the usb led does flash even before grub starts up, so surely something must be happening usb-wise? I am now following an ubuntu guide to booting from a USB stick, using a hdd-based, minimal linux kernel to supply the usb drivers. But I'm having difficulty adapting it to other OSes (slax/dsl/aptosid). I believe I have to alter the initrd.gz file to include usb drivers and then copy that file along with vmlinuz to a partition on my hdd. But, what's the grub command for the kernel line supposed to look like? From the ubuntu example it's: title USB FLASH DRIVE root (hd0,6) kernel /boot/usb-boot/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper noprompt cdrom-detect/try-usb=true persistent initrd /boot/usb-boot/initrd.lz boot Should mine just be: title USB FLASH DRIVE root (hd0,6) kernel /boot/usb-boot/vmlinuz cdrom-detect/try-usb=true initrd /boot/usb-boot/initrd.lz boot


  • Creating a partitioned RAID1 array for booting a Debian Squeeze system

    - by gucki
    I'd like to have the following RAID1 (mirror) setup: /dev/md0 consists of /dev/sda and /dev/sdb. I created this RAID1 device using

      mdadm --create --verbose /dev/md0 --auto=yes --level=1 --raid-devices=2 /dev/sda /dev/sdb

    This gave a warning about the metadata being 1.2 and my system possibly not booting. I cannot use 0.90 because it restricts the size of the RAID to 2 TB, and I assume the GRUB shipped with the latest Debian (Squeeze) should be able to handle metadata 1.2. I then created the needed partitions like this:

      # creating new label (partition table)
      parted -s /dev/md0 mklabel 'msdos'
      # creating partitions
      sfdisk -uM /dev/md0 << EOF
      0,4096
      ,1024,S
      ;
      EOF
      # making root filesystem
      mkfs -t ext4 -L boot -m 0 /dev/md0p1
      # making swap filesystem
      mkswap /dev/md0p2
      # making data filesystem
      mkfs -t ext4 -L data /dev/md0p3

    Then I mounted the root partition, copied a minimal Debian install into it, and temporarily mounted /dev, /proc and /sys. After this I chrooted to the new root folder and executed:

      grub-install --no-floppy --recheck /dev/md0

    However this fails badly with:

      /usr/sbin/grub-probe: error: unknown filesystem.
      Auto-detection of a filesystem of /dev/md0p1 failed.
      Please report this together with the output of "/usr/sbin/grub-probe --device-map=/boot/grub/device.map --target=fs -v /boot/grub" to

    I don't think it's a bug in GRUB (so I didn't report it yet) but a fault of mine. So I really wonder how to properly set up my RAID1; everything I tried so far has failed.
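
    One hedged observation: the Squeeze-era GRUB copes poorly with partitions nested inside an md device, and metadata 1.2 places the superblock away from the very start of the device, so a common alternative is to partition the raw disks first and mirror the partitions, keeping the array that holds /boot on metadata 1.0 (superblock at the end). A destructive, illustrative-only sketch of that layout (sizes are placeholders):

      # alternative layout sketch -- wipes sda/sdb, adapt sizes before use
      printf '%s\n' "0,4096,fd" ",1024,fd" ",,fd" | sfdisk -uM /dev/sda
      sfdisk -d /dev/sda | sfdisk /dev/sdb                 # copy the partition table to the second disk
      mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1   # / or /boot
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2                  # swap
      mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3                  # data
      mkfs -t ext4 /dev/md0 && mkswap /dev/md1 && mkfs -t ext4 /dev/md2
      # grub-install then goes onto the disks themselves:
      # grub-install /dev/sda && grub-install /dev/sdb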


  • Nginx + PHPBB3 reverse proxy images problem

    - by siberiano
    Hello all, I have a problem with my nginx frontend + Apache2 backend + phpBB3 setup. It loads neither the CSS nor the images. I get constant errors like these:

      2010/04/14 16:57:25 [error] 13365#0: *69 open() "/var/www/foo/styles/styles/coffee_time/theme/large.css" failed (2: No such file or directory), client: 83.44.175.237, server: www.foo.com, request: "GET /styles/coffee_time/theme/large.css HTTP/1.1", host: "www.foo.com", referrer: "http://www.foo.com/viewforum.php?f=43"

    This is my config for the site:

      server {
          listen 80;
          server_name www.foo.com;
          access_log /var/log/nginx/foo.access.log;
          # serve static files directly
          location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
              access_log off;
              expires 30d;
              root /var/www/trasteando/;
          }
          location / {
              root /var/www/foo/;
              index /var/www/foo/index.php;
          }
          # proxy the PHP scripts to predefined upstream .apache. #
          location ~ .php$ {
              proxy_pass http://apache;
          }
          location /styles/ {
              root /var/www/foo/styles/;
          }
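
    A hedged reading of the error log: the request is /styles/coffee_time/..., but the file is looked up at /var/www/foo/styles/styles/... because root inside location /styles/ appends the whole URI again; the static-file regex location (which also matches .css) additionally points at /var/www/trasteando/ rather than the phpBB tree. A sketch of those two locations with root set to the parent directory (paths assumed from the error log):

      # sketch -- nginx appends the full URI to 'root', so both locations want the parent directory
      location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico)$ {
          access_log off;
          expires 30d;
          root /var/www/foo;      # /styles/... is appended to this, giving /var/www/foo/styles/...
      }
      location /styles/ {
          root /var/www/foo;      # not /var/www/foo/styles/ -- that doubles the path, as in the error
      }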


  • How to verify TRIM/discard on encrypted swap?

    - by svarni
    I am using an encrypted swap partition via ecryptfs-setup-swap on my Ubuntu 13.04 computer with an SSD. I have manually set up TRIM for my ext4 root partition (simply by adding the "discard" option in /etc/fstab). I also manually ran fstrim on the root partition prior to booting, and using dstat I saw that for a few seconds several GB/s of data were written to the disk. That was presumably the effect of the trim command. These high write rates are reproducible by deleting huge files and did not occur before setting up TRIM, so I take them as evidence of working trim/discard. Manually enabling TRIM on my root partition has stopped the wear-out of my precious new disk, from 365 used reserved blocks (out of 6176 total) within three months down to 0 additional used reserved blocks within three additional months (data from SMART attributes). Because I want to minimize the wear on my SSD, I would now like to know whether my swap partition (which is encrypted using ecryptfs-setup-swap) also makes use of the trim/discard option. I tried

      sudo swapon -d -v /dev/mapper/cryptswap1

    but did not receive any particular information ("-v") about whether trim/discard ("-d") was applied; if unsupported, I would expect a message. Then I tried

      sudo dd if=/dev/sda6 count=1 bs=1M | xxd | less

    directly after booting, when no swap space was in use, but I saw not only zeroes. I assume that, when looking at freshly trimmed regions, the disk would return zeroes instead of reading random sectors (and according to some forums, (unencrypted) swap space is trimmed once upon boot). Long story short: are there any ideas on how to test whether TRIM is effectively used for my encrypted swap? And if not, any ideas on how to trim the whole swap space, at least manually, for once? I wouldn't want to tinker with the partition itself, because I don't know if it needs to be reinitialized as (encrypted) swap -- I don't want to be left with an unbootable system :)
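
    Two hedged ways to see whether discards actually survive the dm-crypt layer: the mapping has to advertise them (cryptsetup only passes discards through when asked), which is visible both in the device-mapper table and in the queue limits under sysfs:

      # does the encrypted mapping advertise discard support?
      sudo dmsetup table cryptswap1      # an 'allow_discards' flag at the end means discards pass through
      # 0 below means discards are dropped at this layer and swapon -d has nothing to work with
      cat /sys/block/$(basename $(readlink -f /dev/mapper/cryptswap1))/queue/discard_max_bytes

    For a one-off manual trim, the sequence would roughly be swapping off, closing the mapping and discarding the underlying partition (e.g. with blkdiscard, if the installed util-linux ships it) before letting the boot scripts recreate the encrypted swap; that is more invasive, so treat it as a last resort.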


  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: Virtual Private Server I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop. The problem: retain control Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example. I need him not to be able to: take away other admin permissions change the root password have control over other security/administrative functions I would like him to still be able to: install software (through apt-get) restart apache access mysql configure mysql/apache reboot edit web development configuration type files in /etc/ Other Standard Setups would be happily considered I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above. Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option, the lists above are more what I'm hoping I can do, I don't know that that setup can be done. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
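
    One hedged way to express roughly that split is a sudoers fragment built from command aliases, kept in /etc/sudoers.d and validated with visudo; the developer gets the web-stack commands but no general root shell. The caveat is that anything able to install packages or edit config as root can usually be escalated, so this is damage limitation rather than a hard boundary ('otherdev' is a placeholder username):

      # /etc/sudoers.d/webdev -- a sketch, not a hardened policy; edit with: visudo -f /etc/sudoers.d/webdev
      Cmnd_Alias WEB_PKGS = /usr/bin/apt-get update, /usr/bin/apt-get install *, /usr/bin/apt-get upgrade
      Cmnd_Alias WEB_SRV  = /usr/sbin/service apache2 *, /usr/sbin/service mysql *, /sbin/reboot
      Cmnd_Alias WEB_EDIT = sudoedit /etc/apache2/*, sudoedit /etc/mysql/*
      otherdev ALL = (root) WEB_PKGS, WEB_SRV, WEB_EDIT

    Accessing and configuring MySQL itself normally needs no sudo at all; a MySQL account with the right grants covers that part.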


  • Moving files and directories between two machines, via a third, preserving permissions and usernames

    - by Jarmund
    The situation is as follows:

    Machine A has a file repository accessible via rsync.
    Machine B needs the above-mentioned files with all permissions and ownerships intact (including groups etc.).
    Machine C has access to both A and B, but has a completely different set of users.

    Normally I would just rsync everything over directly between A and B, but due to severely limited bandwidth at the moment I need something different, as rsync times out after building the list of the 430 files (49 MB uncompressed... can be compressed down to ~7 MB). What I've tried so far: rsync everything over from A to C, tar it, copy the tarball over, and then untar it; however, this messes up the ownership and/or the permissions. To rsync it from A to C, I run this command:

      rsync --numeric-ids --password-file=/root/rsync_pwd_file -oaPvu rsync://[email protected]/portal_2/ ./portal_2/

    ...and from the looks of things, the files do end up on C with the correct ownerships/permissions/flags/everything (not 100% sure, though... are there any more switches I can throw in there? Did I miss something?). Copying the tarball over is simple enough (slow as a one-legged turtle due to the bandwidth, but it checksums out all right). What I'm unsure of is the flags and switches for creating and extracting the tarball, so could someone please provide the full commands for creating a tarball from /root/portal_2 on machine C (with everything intact) and extracting the tarball into /var/ex/portal_2 on machine B? Also, are there any other approaches worth mentioning that could allow me to do this? I have root access to A and C, whereas I only have rsync access to B. PS: I'm running rsync v2.6.9 on machine B, and unfortunately I do not have the opportunity to upgrade to v3.
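
    A hedged pair of commands for the tar legs: the useful switches are -p (preserve permissions) and --numeric-owner, so UIDs/GIDs travel as numbers and never get remapped through C's unrelated user list, plus extracting as root so the chown is allowed. Paths are the ones from the question; the final rsync line is an alternative last hop whose host and module name are placeholders:

      # on machine C: pack with numeric IDs and permissions preserved
      tar --numeric-owner -cpzf /root/portal_2.tgz -C /root/portal_2 .
      # on machine B (as root, so ownership can be restored):
      mkdir -p /var/ex/portal_2
      tar --numeric-owner -xpzf /root/portal_2.tgz -C /var/ex/portal_2
      # alternative last hop if only rsync access to B exists and its daemon module runs as root:
      rsync -aH --numeric-ids /root/portal_2/ rsync://B-host/module/portal_2/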


  • nginx codeigniter rewrite: Controller name conflicts with directory

    - by palerdot
    I'm trying out nginx and porting my existing apache configuration to nginx. I have managed to reroute the codeigniter url's successfully, but I'm having a problem with one particular controller whose name coincides with a directory in site root. I managed to make my codeigniter url's work as it did in Apache except that, I have a particular url say http://localhost/hello which coincides with a hello directory in site root. Apache had no problem with this. But nginx routes to this directory instead of the controller. My reroute structure is as follows http://host_name/incoming_url => http://host_name/index.php/incoming_url All the codeigniter files are in site root. My nginx configuration (relevant parts) location / { # First attempt to serve request as file, then # as directory, then fall back to index.html index index.php index.html index.htm; try_files $uri $uri/ /index.php/$request_uri; #apache rewrite rule conversion if (!-e $request_filename){ rewrite ^(.*)/?$ /index.php?/$1 last; } # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location ~ \.php.*$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-cgi alone: fastcgi_pass 127.0.0.1:9000; # With php5-fpm: #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } I'm new to nginx and I need help in figuring out this directory conflict with the Controller name. I figured this configuration from various sources in the web, and any better way of writing my configuration is greatly appreciated.


  • How would I change the DocumentRoot on the version of Apache that came pre-installed on my Mac OS X system?

    - by racl101
    OK, so I want to take advantage of the Apache server that comes installed on my Mac OS X system (meaning I would rather not install my own version of Apache, since I might as well try to use what comes bundled), and as such I went to change some settings in the configuration file /etc/apache2/httpd.conf. Namely, I changed these two lines:

      DocumentRoot "/Users/myusername/Sites"
      <Directory "/Users/myusername/Sites">

    so that they pointed to a folder in my Dropbox folder (so I could have my docs sync to my Dropbox):

      DocumentRoot "/Users/myusername/Dropbox/public_html"
      <Directory "/Users/myusername/Dropbox/public_html">

    That didn't work. So then I figured, OK, maybe it was too much to ask to make a folder in my Dropbox my document root. So then I thought, what if I make the document root another folder of my choosing, like so:

      DocumentRoot "/Users/myusername/dev-sites/public_html"
      <Directory "/Users/myusername/dev-sites/public_html">

    and that didn't work either. After looking within the httpd.conf file for clues, it seems that only two directories appear to work as document root paths for the Apache that comes bundled with Mac OS X: /Users/myusername/Sites (or ~/Sites) and /Library/WebServer/Documents/. Trying to use any other directories didn't seem to work; I would get 403 errors in my browser. I was wondering if there were some other settings to change in the httpd.conf file, or any permissions to set, to make this work. Any help would be appreciated, and many thanks in advance.
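
    403s after moving DocumentRoot on the bundled OS X Apache are usually a permissions/Directory-policy issue rather than a hard-coded list of allowed paths (hedged): the _www user needs execute permission on every parent directory, and the new directory needs its own <Directory> block granting access. A sketch, reusing the dev-sites path from above and Apache 2.2 syntax:

      # let the _www user traverse every parent of the new document root
      chmod +x /Users/myusername /Users/myusername/dev-sites /Users/myusername/dev-sites/public_html
      # and in /etc/apache2/httpd.conf, next to the DocumentRoot change:
      #   <Directory "/Users/myusername/dev-sites/public_html">
      #       Options Indexes FollowSymLinks
      #       AllowOverride All
      #       Order allow,deny
      #       Allow from all
      #   </Directory>
      sudo apachectl configtest && sudo apachectl restart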


  • Linux 2.6.24-gentoo-r3-comtrance on x86_64: high usage for unknown reasons

    - by Dorjan
    Hello everyone, I'm a complete rookie when it comes to all things Linux related, so please treat me as such and assume I know nothing. That being said, my top says this:

      top - 12:08:03 up 11 days, 15:36, 0 users, load average: 5.47, 5.53, 5.46
      Tasks: 296 total, 2 running, 294 sleeping, 0 stopped, 0 zombie
      Cpu(s): 6.3%us, 1.4%sy, 0.0%ni, 71.3%id, 20.6%wa, 0.0%hi, 0.3%si, 0.0%st
      Mem: 8176880k total, 8118236k used, 58644k free, 89312k buffers
      Swap: 1004052k total, 0k used, 1004052k free, 7235652k cached

        PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
       1229 root     15  -5     0    0    0 D    1  0.0 199:28.63 kjournald
       2946 root     20   0  1716  676  552 D    1  0.0 145:02.94 syslogd
      14553 root     20   0  2644 1268  876 R    1  0.0   0:00.34 top
      14609 postfix  20   0  7896 1884 1460 D    1  0.0   0:00.02 bounce
      14630 postfix  20   0  7896 1876 1452 R    0  0.0   0:00.00 bounce

    And my disk usage says:

      > df -k
      Filesystem           1K-blocks      Used Available Use% Mounted on
      /dev/sda3              4925556   4474836    200508  96% /
      /dev/sda5               489992     36090    428602   8% /tmp
      /dev/sda6            377951852 236171160 122581816  66% /var
      none                   4088440         0   4088440   0% /dev/shm

    It has been like this for a few days now... I don't know what is causing the high server load (normally around 1.3). Can anyone give any tips on how to track down the culprit? Many thanks.
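
    A hedged reading of the numbers above: the box looks I/O-bound rather than CPU-bound (20.6% wa, kjournald and the postfix bounce processes in D state, / at 96%), so the usual next step is to find who is generating the disk traffic and whether the nearly full root filesystem or a mail backlog is involved. iostat and pidstat come from the sysstat package and may need installing first:

      iostat -x 5 3                             # which disk is saturated and how long are its queues?
      pidstat -d 5 3                            # per-process read/write rates
      ps axo stat,pid,comm | awk '$1 ~ /D/'     # processes stuck in uninterruptible I/O wait
      postqueue -p | tail -n1                   # a large backlog of bounces would explain constant writes
      du -xm / --max-depth=1 | sort -n | tail   # what is filling the 96%-full root filesystem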


  • Server not resolving after restart

    - by DomainSoil
    I restarted our server today, and now cannot for the life of me get anything to resolve... I suspect it has something to do with our routes. I've tried numerous Google results to no avail. Here is as far as I've gotten: [root@www ~]# route -n Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.1.101 0.0.0.0 255.255.255.255 UH 0 0 0 eth1 0.0.0.0 192.168.1.101 0.0.0.0 UG 0 0 0 eth1 Things you need to know: Our server (CentOS 6.3) runs two virtual machines, one live, and one development. They mirror each other as much as possible, but I can't find where I've went wrong with the live server. The dev server works fine. [root@www ~]# ifconfig eth1 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx inet6 addr: xxxx:xxx:xxxx:xxxx:xxxx/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:118206 errors:0 dropped:0 overruns:0 frame:0 TX packets:165 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:7825749 (7.4 Mib) TX bytes:7146 (69.2 KiB) Interrupt:28 [root@www ~]# /etc/init.d/network status Configured devices: lo Auto eth0 Currently active devices: lo eth1 If there is any other information you need, please don't hesitate to ask!


  • Questions about NGINX limit_req_zone

    - by Meteor
    I have a problem with the nginx limit_req_zone. Can anyone help? I want to limit user access to some specific URLs, for example:

      /forum.php?mod=forumdisplay?
      /forum.php?mod=viewthread&***

    but I want to add an exception for this URL:

      /forum.php?mod=image&*

    Below is the location section of my configuration. The problem is that for URLs starting with /forum.php?mod=image&*, the limit is still applied. Can anybody help?

      location ~*^/forum.php?mod=image$ {
          root /web/www;
          fastcgi_pass unix:/tmp/nginx.socket;
          fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
          include fastcgi_params;
      }
      location ~*^/(home|forum|portal).php$ {
          root /web/www;
          limit_conn addr 5;
          limit_req zone=refresh burst=5 nodelay;
          fastcgi_pass unix:/tmp/nginx.socket;
          fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
          include fastcgi_params;
      }
      location ~ \.php$ {
          root /web/www;
          fastcgi_pass unix:/tmp/nginx.socket;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
          include fastcgi_params;
      }
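
    A hedged explanation: nginx location blocks match only the URI path, never the query string, so location ~* ^/forum.php?mod=image$ can never match (the ? there is just a regex quantifier) and every /forum.php request falls into the limited block. A common workaround is to key the zone on a variable that a map empties for the excepted requests, because requests with an empty key are not rate-limited. Sketch (the zone size and rate are placeholders for whatever the existing refresh zone uses):

      # in the http { } block -- an empty key means the request is not limited
      map $arg_mod $limit_key {
          image    "";                     # mod=image: no rate limit
          default  $binary_remote_addr;    # everyone else is limited per client IP
      }
      limit_req_zone $limit_key zone=refresh:10m rate=1r/s;

      # the /(home|forum|portal).php location keeps its existing line unchanged:
      #   limit_req zone=refresh burst=5 nodelay;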


  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.06 to monitor the servers. When a service is critical it sends a mail, but when a server is down no mail is sent. Even if all the services go to a critical state, no mail is sent. I have the following configuration:

      define command {
          command_name notify-host-by-email
          command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down"
      }
      define command {
          command_name notify-service-by-email
          command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$"
      }

    The Python script just sends a mail. It works if I execute it from the command line, but no mail goes out when Nagios runs it. What am I doing wrong?

    UPDATE: The contact data is:

      define contact {
          contact_name root
          alias Root
          service_notification_period 24x7
          host_notification_period 24x7
          service_notification_options w,u,c,r
          host_notification_options d,r
          service_notification_commands notify-service-by-email
          host_notification_commands notify-host-by-email
          email [email protected]
      }
      define contactgroup {
          contactgroup_name admins
          alias Nagios Administrators
          members root
      }


  • Apache Virtual Host Issue

    - by Nik
    I think I hate Apache now, but on with the issue. It might be a configuration error on my end or just my inability to see what's right in front of me, but I'm trying to configure a sub-domain in Apache and no matter what, it always redirects the sub-domain to the web root of the main domain. My configuration is posted below (and yes, the domain name information was purposefully modified):

      <VirtualHost *>
          DocumentRoot /var/www/root/
          ServerName example.com
          <Directory /var/www/root/>
              allow from all
              Options +Indexes
          </Directory>
      </VirtualHost>

      <Directory /usr/share/squirrelmail>
          Options Indexes FollowSymLinks
          <IfModule mod_php5.c>
              php_flag register_globals off
          </IfModule>
          <IfModule mod_dir.c>
              DirectoryIndex index.php
          </IfModule>
          # access to configtest is limited by default to prevent information leak
          <Files configtest.php>
              order deny,allow
              deny from all
              allow from 127.0.0.1
          </Files>
      </Directory>

      # users will prefer a simple URL like http://webmail.example.com
      <VirtualHost *>
          DocumentRoot /usr/share/squirrelmail/
          ServerName squirrelmail.example.com
      </VirtualHost>
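
    Two hedged things to check: on Apache 2.2, name-based virtual hosting needs a NameVirtualHost * line, otherwise the first <VirtualHost *> answers every request, which matches the "always lands on the main root" symptom; and the second vhost's ServerName is squirrelmail.example.com while the comment says users will request webmail.example.com, so that Host header would never match it anyway. A quick check plus a sketch of the corrected vhost:

      # show how Apache actually resolved the vhosts
      apachectl -S            # or: apache2ctl -S on Debian/Ubuntu
      # hedged fix in the config (Apache 2.2-era syntax):
      #   NameVirtualHost *
      #   <VirtualHost *>
      #       ServerName webmail.example.com        # the name users will actually request
      #       DocumentRoot /usr/share/squirrelmail/
      #   </VirtualHost>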


  • How to add a user to Wheel group?

    - by Natasha Thapa
    I am trying to add a user to the wheel group on an Ubuntu server:

      sudo usermod -aG wheel john

    I get:

      usermod: group 'wheel' does not exist

    In my /etc/sudoers I have this:

      cat /etc/sudoers
      # sudoers file.
      #
      # This file MUST be edited with the 'visudo' command as root.
      #
      # See the sudoers man page for the details on how to write a sudoers file.
      #
      # Host alias specification
      # User alias specification
      # Cmnd alias specification
      # Defaults specification
      # User privilege specification
      root ALL=(ALL) ALL
      %root ALL=(ALL) NOPASSWD: ALL
      %wheel ALL=(ALL) NOPASSWD: ALL

    Do I have to do a groupadd of wheel?
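
    wheel is a BSD/Red Hat convention and does not exist on stock Ubuntu, so either the group has to be created to match the %wheel line already present in sudoers, or the user goes into Ubuntu's own admin group (sudo) instead. A short sketch, with 'john' as in the question:

      # option 1: create the group the sudoers file already references
      sudo groupadd wheel
      sudo usermod -aG wheel john
      # option 2: use Ubuntu's built-in sudo group and leave sudoers alone
      sudo usermod -aG sudo john
      # either way, the user must log out and back in for the new group to apply; verify with:
      id john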

