Search Results

Search found 12569 results on 503 pages for 'root plist'.


  • mongodb: can't create new thread on FreeBSD?

    - by user197739
    We are seeing some strange behaviour on our MongoDB GridFS platform. The machine is a dual Xeon E5 (two quad-core CPUs) with 128GB of memory, running FreeBSD 9 with a ZFS pool dedicated to MongoDB.

      [root@mongofile1 ~]# uname -sr
      FreeBSD 9.1-RELEASE

    Our /boot/loader.conf:

      vfs.zfs.arc_min="2048M"
      vfs.zfs.arc_max="7680M"
      vm.kmem_size_max="16G"
      vm.kmem_size="12G"
      vfs.zfs.prefetch_disable="1"
      kern.ipc.nmbclusters="32768"

    /etc/sysctl.conf:

      net.inet.tcp.msl=15000
      net.inet.tcp.keepidle=300000
      kern.ipc.nmbclusters=32768
      kern.ipc.maxsockbuf=2097152
      kern.ipc.somaxconn=8192
      kern.maxfiles=65536
      kern.maxfilesperproc=32768
      net.inet.tcp.delayed_ack=0
      net.inet.tcp.sendspace=65535
      net.inet.udp.recvspace=65535
      net.inet.udp.maxdgram=57344
      net.local.stream.recvspace=65535
      net.local.stream.sendspace=65535

    We follow the recommendations for the ulimits:

      [root@mongofile1 ~]# su - mongodb
      $ ulimit -a
      cpu time (seconds, -t)  unlimited
      file size (512-blocks, -f)  unlimited
      data seg size (kbytes, -d)  33554432
      stack size (kbytes, -s)  524288
      core file size (512-blocks, -c)  unlimited
      max memory size (kbytes, -m)  unlimited
      locked memory (kbytes, -l)  unlimited
      max user processes (-u)  5547
      open files (-n)  32768
      virtual mem size (kbytes, -v)  unlimited
      swap limit (kbytes, -w)  unlimited
      sbsize (bytes, -b)  unlimited
      pseudo-terminals (-p)  unlimited

    This server has a twin (exactly the same configuration) used for a replica set in another data center, and we have a virtualized arbiter. Roughly every three days, the mongodb process exits. The problem begins with:

      Fri Nov 8 11:27:31.741 [conn774697] end connection 192.168.10.162:47963 (23 connections now open)
      Fri Nov 8 11:27:31.770 [initandlisten] can't create new thread, closing connection
      Fri Nov 8 11:27:31.771 [rsHealthPoll] replSet member mongofile2:27017 is now in state DOWN
      Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.162:47968 #774702 (20 connections now open)
      Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.161:28522 #774703 (21 connections now open)
      Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.164:15406 #774704 (22 connections now open)
      Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.163:25750 #774705 (23 connections now open)
      Fri Nov 8 11:27:31.810 [initandlisten] connection accepted from 192.168.10.182:20779 #774706 (24 connections now open)
      Fri Nov 8 11:27:31.855 [initandlisten] connection accepted from 192.168.10.161:28524 #774707 (25 connections now open)
      Fri Nov 8 11:27:31.869 [initandlisten] connection accepted from 192.168.10.182:20786 #774708 (26 connections now open)

    After many "can't create new thread" messages:

      [root@mongofile1 /usr/mongodb]# tail -n 15000 mongod.log.old | grep "create new thread" | wc
      5020 55220 421680

    it finishes with:

      Fri Nov 8 11:30:22.333 [rsMgr] replSet warning caught unexpected exception in electSelf() pure virtual method called
      Fri Nov 8 11:30:22.333 Got signal: 6 (Abort trap: 6).
      Fri Nov 8 11:30:22.337 Backtrace: 0x599efc 0x8035cb516
        0x599efc <_ZN5mongo10abruptQuitEi+988> at /usr/local/bin/mongod
        0x8035cb516 <_pthread_sigmask+918> at /lib/libthr.so.3

    Extract for mongod from top:

      78126 mongodb  77  20  0  1253G  1449M  sbwait  0  0:20  0.00% mongod

    If I restart the process when it crashes, the problem goes away for roughly three days. Has anyone seen this before, or know of a fix?
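
    One place to start looking, sketched below: mongod creates a thread per connection, so when pthread_create starts failing it is worth comparing the kernel's process and per-process thread caps against what the mongodb user is actually allowed. The sysctl names are standard FreeBSD ones; the value in the last line is only an illustrative assumption, not a recommendation.

      # diagnostic sketch - limits a new mongod thread has to fit under
      sysctl kern.maxproc kern.maxprocperuid kern.threads.max_threads_per_proc
      su - mongodb -c 'ulimit -u'      # per-user process limit seen by mongod
      # if the per-process thread cap looks low, it can be raised at runtime:
      sysctl kern.threads.max_threads_per_proc=4096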

    Read the article

  • Including a PHP file that can be used with multiple sites

    - by Roland
    I have a web server running Apache and PHP on CentOS 5. I have a file called 'include.php' that I need to include in multiple sites. For example, I have a site called testsite.co.za, and in its index.php I want to include the include.php file; include.php is not in the root of testsite.co.za. I created another folder, includes, in the web root directory, which contains include.php. My code in testsite.co.za/index.php looks as follows:

      require_once '../includes/include.php';

    If I browse to testsite.co.za it cannot find include.php. Is there a certain server setting I need to change in order to include this file? My directory structure is:

      /var/www/html
        testsite.co.za
          index.php
        includes
          include.php

    Hope this makes sense.
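
    One likely explanation, with a sketch below: a relative require_once is resolved against PHP's include_path and the current working directory, not against the location of index.php, so '../includes/include.php' breaks as soon as the working directory differs. Adding the shared directory to include_path is one way around it; the path is taken from the layout above, and the php.ini location is an assumption for a stock CentOS 5 install.

      # sketch - point include_path at the shared folder, then reload Apache
      grep -n '^include_path' /etc/php.ini
      #   include_path = ".:/var/www/html/includes"
      service httpd reload
      # after that, index.php can simply use:  require_once 'include.php';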

    Read the article

  • Booting Windows7 kernel from an initrd/wim image file

    - by Ivo
    I'm wondering if it's possible to have the Win7 kernel and the relevant drivers (especially storage drivers) boot from an initrd-like image file (maybe .wim?) and only then mount the Windows root partition and complete the load of the full OS. I'll try to explain why: I'm running an emulated environment with no real BIOS, and I'm passing through a RAID storage controller. I want Windows to boot from this controller's array, but of course the BCD manager cannot access disks in the array until the kernel and the relevant storage controller drivers are loaded. To be clear, I get the classic "winload.exe is missing" error. I need a solution similar to what Linux does: load the kernel and its drivers, and only then mount the root partition and complete the boot. Any ideas or advice?

    Read the article

  • How to configure g-wan to use virtual hosts?

    - by Jan
    Say I have a domain foo.com and a server accessible at 50.60.70.80. I have configured the DNS entries so that foo.com and www.foo.com point to 50.60.70.80. I have G-WAN running on the web server. Now I want to host different web sites on foo.com and on www.foo.com. According to the documentation I have to configure a root host and optionally some virtual hosts, so I chose foo.com to be the root host and www.foo.com to be a virtual host. My problem is that G-WAN seems to ignore my virtual host: no matter whether I use foo.com or www.foo.com, G-WAN always serves the foo.com content. This is my G-WAN "config":

      /gwan/0.0.0.0_80/#movq.org
      /gwan/0.0.0.0_80/$www.movq.org

    What am I doing wrong here?

    Read the article

  • puppet restart service failure

    - by Roman
    Help me please with a service restart. The manifest is:

      # changing iptables file
      file { "/etc/sysconfig/iptables":
        ensure  => "present",
        content => template("all_in_one/iptables.erb"),
        owner   => root,
        group   => root,
        mode    => 600,
      }

      service { "iptables":
        ensure     => running,
        enable     => true,
        hasstatus  => true,
        hasrestart => true,
        subscribe  => File["/etc/sysconfig/iptables"],
      }

    The output is:

      err Failed to call refresh: Could not restart Service[iptables]: Execution of '/sbin/service iptables restart' returned 1
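
    A sketch of how one might narrow this down: the refresh only reports that the init script exited non-zero, so reproducing the restart by hand and syntax-checking the file the template rendered usually surfaces the real error. Paths are the ones from the manifest; the agent invocation is a common default and may differ on this setup.

      /sbin/service iptables restart                      # reproduce the failure outside Puppet
      iptables-restore --test < /etc/sysconfig/iptables   # syntax-check the rendered rules
      puppet agent --test --debug                         # re-run the agent with full output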

    Read the article

  • How to install php-soap from RPM packages on CentOS 6.4 x86_64

    - by HPM
    I want to install php-soap, and CentOS says:

      [root@LMS-Cent64 soap]# rpm -ivh php-soap-5.3.3-22.el6.x86_64.rpm
      error: Failed dependencies:
          php-common(x86-64) = 5.3.3-22.el6 is needed by php-soap-5.3.3-22.el6.x86_64

    After installing php-common(x86-64):

      [root@LMS-Cent64 soap]# rpm -ivh php-common-5.3.3-22.el6.x86_64.rpm
      Preparing...                ########################################### [100%]
          package php-common-5.3.3-23.el6_4.x86_64 (which is newer than php-common-5.3.3-22.el6.x86_64) is already installed
          file /usr/lib64/php/modules/phar.so from install of php-common-5.3.3-22.el6.x86_64 conflicts with file from package php-common-5.3.3-23.el6_4.x86_64

    What to do now?
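
    A sketch of one way out, assuming the box can reach the standard CentOS repositories: the installed php-common is the newer 5.3.3-23.el6_4 build, so the cleanest route is usually to let the dependency resolver pull a php-soap from that same build rather than hand-installing the older -22.el6 RPM.

      yum install php-soap
      # or, when installing strictly from local files, use the RPM that matches
      # the installed build (this filename is an assumption):
      # rpm -ivh php-soap-5.3.3-23.el6_4.x86_64.rpm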

    Read the article

  • Reshape a Linux md RAID5 array that is already being reshaped?

    - by smammy
    I just converted my RAID1 array to a RAID5 array and added a third disk to it. I'd like to add a fourth disk without waiting fourteen hours for the first reshape to complete. I just did this:

      mdadm /dev/md0 --add /dev/sdf1
      mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0_n3.bak

    The entry in /proc/mdstat looks like this:

      md0 : active raid5 sdf1[2] sda1[0] sdb1[1]
            976759936 blocks super 0.91 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
            [>....................]  reshape =  1.8% (18162944/976759936) finish=834.3min speed=19132K/sec

    Now I'd like to do this:

      mdadm /dev/md0 --add /dev/sdd1
      mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md8_n4.bak

    Is this safe, or do I have to wait for the first reshape operation to complete? P.S.: I know I should have added both disks first, and then reshaped from 2 to 4 devices, but it's a little late for that.
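
    A cautious sketch rather than a definitive answer: mdadm will generally refuse to start a second grow while a reshape is in flight, so the safe pattern is to add the spare now (which is harmless) and issue the second --grow only once /proc/mdstat no longer shows the reshape. The backup-file name below is just an illustrative choice.

      mdadm /dev/md0 --add /dev/sdd1     # adding the disk as a spare is safe now
      watch -n 60 cat /proc/mdstat       # wait until the reshape line disappears
      mdadm --detail /dev/md0            # confirm the array is clean before growing again
      mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0_n4.bak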

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with Nginx and put my public_html in:

      /home/user/public_html/website.com/public

    but requests are always served from:

      /usr/local/nginx/html/

    How can I change this? nginx.conf:

      user www-data www-data;
      worker_processes 4;

      events {
          worker_connections 1024;
      }

      http {
          include mime.types;
          default_type application/octet-stream;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay off;
          keepalive_timeout 5;
          gzip on;
          gzip_comp_level 2;
          gzip_proxied any;
          gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
          include /usr/local/nginx/sites-enabled/*;
      }

    /usr/local/nginx/sites-enabled/default:

      server {
          listen 80;
          server_name localhost;

          location / {
              root html;
              index index.php index.html index.htm;
          }

          # redirect server error pages to the static page /50x.html
          error_page 500 502 503 504 /50x.html;
          location = /50x.html {
              root html;
          }
      }

    /usr/local/nginx/sites-available/website.com:

      server {
          listen 80;
          server_name website.com;
          rewrite ^/(.*) http://www.website.com/$1 permanent;
      }

      server {
          listen 80;
          server_name www.website.com;
          access_log /home/user/public_html/website.com/log/access.log;
          error_log /home/user/public_html/website.com/log/error.log;

          location / {
              root /home/user/public_html/website.com/public/;
              index index.php index.html;
          }

          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
          location ~ \.php$ {
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              include /usr/local/nginx/conf/fastcgi_params;
              fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
          }
      }

    The error message I get is:

      Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'

    so the server tries to find the file in the Nginx folder and not in my public_html.
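
    One thing that stands out, sketched below: nginx.conf only includes /usr/local/nginx/sites-enabled/*, while the website.com server block lives in sites-available, so if it was never linked into sites-enabled every request falls through to the default "localhost" block whose root is html. This is a hedged guess; the nginx binary path below assumes a standard source install under /usr/local/nginx.

      ls -l /usr/local/nginx/sites-enabled/      # is website.com actually enabled?
      ln -s /usr/local/nginx/sites-available/website.com /usr/local/nginx/sites-enabled/website.com
      /usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx -s reload
      curl -H "Host: www.website.com" -I http://127.0.0.1/   # should now hit the intended root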

    Read the article

  • Sendmail.mc: alias all incoming e-mails to one account

    - by Angus
    I need to alias all mail coming from another SMTP server to one account, "myinbox". The system in question is to receive all e-mail on the domain, if that's any help. http://william.shallum.net/random-notes/sendmailredirectallmailfordevelopment is a template for the beginning of a solution, but that routes everything (including outgoing and internal mail) to that one account, and trying to understand how these R rules work is making my head spin. I think the answer is in sendmail.mc rather than in any Procmail configuration. What I generally don't want the filter to do is:

      - Interfere with any outgoing e-mail.
      - Interfere with any internal e-mail. Sometimes some cron job causes "root" to mail to "root"; I don't want these to go to myinbox.
      - Cause infinite loops. Who does? Bounce messages and any DSNs come to mind.

    I'm running Sendmail 8.13.1 and Procmail 3.22.
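
    A sketch of one approach that stays inside sendmail.mc and virtusertable rather than the rewriting rules: a catch-all virtusertable entry only rewrites recipients on mail the server accepts for the listed domain, so locally generated root-to-root mail and outgoing mail are left alone. The domain name below is a placeholder, and the FEATURE line is the standard one from the sendmail cf docs.

      # /etc/mail/virtusertable - catch-all for the domain (domain is a placeholder)
      #   @example.com    myinbox
      makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
      # sendmail.mc needs the feature enabled and the domain in local-host-names:
      #   FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
      m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart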

    Read the article

  • Set nginx.conf to deny all connections except to certain files or directories

    - by Ben
    I am trying to set up Nginx so that all connections to my numeric IP are denied, with the exception of a few arbitrary directories and files. So if someone goes to my IP, they are allowed to access the index.php file and the phpmyadmin directory, for example, but should they try to access any other directories, they will be denied. This is my server block from nginx.conf:

      server {
          listen 80;
          server_name localhost;

          location / {
              root html;
              index index.html index.htm index.php;
          }

          location ~ \.php$ {
              root html;
              fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /srv/http/nginx/$fastcgi_script_name;
              include fastcgi_params;
          }
      }

    How would I proceed? Thanks very much!
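
    A rough sketch of the usual pattern, not a drop-in config: give each whitelisted path its own location block and let a catch-all prefix location deny everything else, since nginx picks the most specific matching location. The paths are just the examples from the question, and the fragments below would have to be merged into the existing server block by hand.

      # fragments to place inside the existing server { } block (sketch only):
      #   location = /index.php   { ...same fastcgi handling as above... }
      #   location ^~ /phpmyadmin { root html; index index.php; }
      #   location /              { deny all; }
      nginx -t && nginx -s reload    # validate and apply after editing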

    Read the article

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is:

      - create an EC2 machine image that has my application and associated data files installed
      - boot up a large number (e.g. 100) of instances of this image
      - split my input files up into 100 batches and send one batch to be processed on each instance

    I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use Block Storage, but a given Block Storage volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem so that I can include my large data file? Or is there a better way to tackle this problem?
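
    One commonly used pattern, sketched with the AWS CLI (all IDs below are placeholders): snapshot the 20GB data volume once, then create and attach a fresh volume from that snapshot to each instance at launch time. That avoids both the single-attachment limit and bloating the AMI's root filesystem, at the cost of 100 cheap snapshot-backed volumes.

      aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "shared 20GB data"
      # then, per instance (typically scripted inside the launch loop):
      aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
      aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf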

    Read the article

  • XAMPP MySQL stops running after ~1.5 seconds

    - by Nona Urbiz
    I have tried installing it as a service. Nothing seems to work! I have checked the status page and MySQL is listed as "Deactivated". When trying to open phpMyAdmin I get:

      Error
      MySQL said: Documentation
      #1045 - Access denied for user 'root'@'localhost' (using password: NO)
      Connection for controluser as defined in your configuration failed.

      phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server.

    and from the CD demo:

      Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'root'@'localhost' (using password: NO) in C:\xampp\htdocs\xampp\cds.php on line 77
      Could not connect to database! Is MySQL running or did you change the password?

    Thanks for any suggestions or help you can give!

    Read the article

  • Is there an equivalent of su for Windows?

    - by CodeSlave
    Is there a way (when logged in as an administrator, or as a member of the administrators group) to masquerade as a non-privileged user, especially in an AD environment? E.g., in the Unix world I could do the following (as root):

      # whoami
      root
      # su johnsmith
      johnsmith> whoami
      johnsmith
      johnsmith> exit
      # exit

    I need to test/configure something on a user's account, and I don't want to have to know their password or have to reset it. Edit: runas won't cut it. Ideally, my whole desktop would become the user's, etc., and not just in a cmd window.

    Read the article

  • Attempting to cause packet loss with netem doesn't work - possibly because of NAT (but delay does work)

    - by tomdee
    I have traffic from a WiFi access point routed via an Ubuntu box. I have two network interfaces which are NATed:

      *filter
      :INPUT ACCEPT [11:690]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [37:6224]
      -A FORWARD -s 192.168.2.0/24 -i eth1 -o eth0 -m conntrack --ctstate NEW -j ACCEPT
      -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
      COMMIT
      # Completed on Thu Mar 15 13:37:21 2012
      # Generated by iptables-save v1.4.10 on Thu Mar 15 13:37:21 2012
      *nat
      :PREROUTING ACCEPT [0:0]
      :INPUT ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      :POSTROUTING ACCEPT [0:0]
      -A POSTROUTING -j MASQUERADE
      COMMIT

    If I run a ping app on an Android device connected to the WiFi network I can happily ping Google. If I use netem to introduce some delay:

      tc qdisc change dev eth0 root netem delay 100ms

    I can clearly see pings taking longer. If I use netem to introduce some packet loss:

      tc qdisc change dev ifb0 root netem loss 50%

    then I see no change. Packet loss does work fine for locally generated traffic, just not for traffic coming in over the network that's being NATed. Any ideas how to sort this out?
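
    One detail worth double-checking, sketched below: the delay that works is installed on eth0, while the loss rule targets ifb0, so forwarded traffic leaving via eth0 never passes through the loss qdisc unless it is being explicitly redirected to ifb0. A hedged experiment is to put the loss on the same egress device that carried the delayed traffic.

      tc qdisc replace dev eth0 root netem delay 100ms loss 50%   # both rules on the real egress interface
      tc qdisc show dev eth0                                      # confirm what is installed
      tc -s qdisc show dev eth0                                   # stats: the drop counter should climb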

    Read the article

  • ssl_error_log apache issue

    - by lakshmipathi
    https://localhost works but https://ipaddress doesn't.

      cat logs/ssl_error_log
      [Mon Aug 02 19:04:11 2010] [error] [client 192.168.1.158] (13)Permission denied: access to /ajaxterm denied
      [root@space httpd]# cat logs/ssl_access_log
      192.168.1.158 - - [02/Aug/2010:19:04:11 +0530] "GET /ajaxterm HTTP/1.1" 403 290
      [root@space httpd]# cat logs/ssl_request_log
      [02/Aug/2010:19:04:11 +0530] 192.168.1.158 SSLv3 DHE-RSA-CAMELLIA256-SHA "GET /ajaxterm HTTP/1.1" 290

    httpd.conf file:

      NameVirtualHost *:443
      <VirtualHost *:443>
          ServerName localhost
          SSLEngine on
          SSLCertificateFile /etc/pki/tls/certs/ca.crt
          SSLCertificateKeyFile /etc/pki/tls/private/ca.key

          <Directory /usr/share/ajaxterm>
              Options FollowSymLinks
              AllowOverride None
              Order deny,allow
              Allow from All
          </Directory>

          DocumentRoot /usr/share/ajaxterm
          DirectoryIndex ajaxterm.html
          ProxyRequests Off

          <Proxy *>
              # Order deny,allow
              Allow from all
          </Proxy>

          ProxyPass /ajaxterm/ http://localhost:8022/
          ProxyPassReverse /ajaxterm/ http://localhost:8022/

          ErrorLog error_log.log
          TransferLog access_log.log
      </VirtualHost>

    How can I fix this?
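
    A diagnostic sketch: "(13)Permission denied" is the operating system refusing access rather than Apache's own allow/deny logic, so on a Red Hat style box the usual suspects are the DocumentRoot's filesystem permissions and SELinux, including the boolean that lets httpd make the proxied connection to port 8022. The commands assume a standard RHEL/CentOS layout and the DirectoryIndex from the config above.

      ls -ld /usr/share/ajaxterm && ls -l /usr/share/ajaxterm/ajaxterm.html
      getenforce                                   # Enforcing means SELinux may be in play
      ausearch -m avc -ts recent | grep httpd      # look for denials against httpd
      setsebool -P httpd_can_network_connect 1     # needed for ProxyPass to localhost:8022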

    Read the article

  • User reduced LVM logical volume without resizing filesystem

    - by Matthew
    I received an email yesterday saying that one of our users was trying to make room for a heartbeat/clustering package which requires its own partition to act as a voting disk. To do this, he attempted to reduce the size of the root partition's logical volume and then create a new logical volume for that purpose. However, he forgot to resize the filesystem first (or to include the -r switch in the command). He also forgot to unmount the root partition by running this process from a rescue CD. The system is now refusing to boot into the OS with the following error:

      Either the superblock or the partition table is likely to be corrupt!
      Unexpected Inconsistency; run fsck manually.

    The system then drops the user into single user mode. Is it possible to rescue the filesystem, or is it hosed? It's running ext3.
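
    A recovery sketch, heavily hedged because the volume group and LV names and the original size are assumptions: the reduce only moved the LV boundary, so as long as nothing has since overwritten the blocks beyond it, growing the LV back to at least its original size from a rescue CD and then letting e2fsck and resize2fs reconcile the ext3 filesystem is the usual first attempt.

      # from a rescue CD, with the volume group activated (names are placeholders)
      vgchange -ay
      lvextend -L <original size> /dev/VolGroup00/LogVol00   # undo the reduction
      e2fsck -f /dev/VolGroup00/LogVol00
      resize2fs /dev/VolGroup00/LogVol00                     # grow the fs back to fill the LV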

    Read the article

  • kloxo setup error

    - by ron
    Hi, I've just purchased a VPS from 2host; it's unmanaged. Support told me to install Kloxo. However I got the following errors in step 1:

      --begin--
      root@vpshostingtips:~# wget http:// download.lxlabs . com/download/kloxo/production/kloxo-install-master.sh
      --2010-05-06 04:17:04--  http:// download.lxlabs . com/download/kloxo/production/kloxo-install-master.sh
      Resolving download.lxlabs.com... failed: Temporary failure in name resolution.
      wget: unable to resolve host address `download.lxlabs.com'
      root@vpshostingtips:~#
      --end--

    Note: I split the hyperlinks intentionally to post here. Can somebody tell me the reason for the error? Sorry, I'm so new to VPS.
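
    A sketch of the likely first fix: "Temporary failure in name resolution" means the VPS has no working DNS resolver at all, which has nothing to do with Kloxo itself. Pointing /etc/resolv.conf at a reachable resolver (the address below is just a public example) should let the download proceed.

      cat /etc/resolv.conf                          # is any nameserver configured?
      echo "nameserver 8.8.8.8" >> /etc/resolv.conf
      host download.lxlabs.com                      # should now resolve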

    Read the article

  • OpenVPN Permission Denied Error

    - by LordCover
    I am setting up OpenVPN and am at the stage of adding users. Details:

      Host system: Windows Server 2003 32-bit
      Guest system: Ubuntu Linux (with OpenVPN already installed); I downloaded it from OpenVPN.net
      Virtualization: VMware v7.0

    Problem: I can access the Access Server web portal (on port 5480), but when I log in to http://host_ip:943/admin and enter my (correct) login info, it shows me a page saying "You don't have enough permissions". I am the (root) user, which is really weird! Note: if I enter a wrong login it reports an incorrect login, so I am authenticating successfully and the problem comes after the login step. What I tried: after logging in to the Linux shell as root, I created another user with the useradd command, but the result was the same.
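
    A hedged guess at a next step: on the OpenVPN Access Server appliance, the /admin UI only accepts accounts that are flagged as admin/superuser in Access Server's own user properties, which is separate from being root on the underlying Linux. If the install matches the stock appliance layout, something along these lines grants that flag (the script path and invocation are assumptions based on the standard Access Server tooling):

      cd /usr/local/openvpn_as/scripts/
      ./sacli --user root --key prop_superuser --value true UserPropPut
      /etc/init.d/openvpnas restart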

    Read the article

  • mount fstab partition with public access

    - by Mikhail
    How do I specify that an fstab mount point should be public? I want /mnt/windows to be accessible to normal users. I believe I am using ntfs-3g. If I set /mnt/windows to 777, will it be publicly accessible without changing the permissions on the NTFS disk?

      /dev/sdb4       /mnt/windows  ntfs  noatime   0 1
      /dev/sdb5       /             ext4  noatime   0 1
      UUID=5AA4-168D  /boot/efi     vfat  defaults  0 1

    and:

      localhost my_computer # stat /mnt/windows/
        File: '/mnt/windows/'
        Size: 12288       Blocks: 24         IO Block: 512    directory
      Device: 814h/2068d  Inode: 5           Links: 1
      Access: (0700/drwx------)  Uid: (    0/    root)   Gid: (    0/    root)
      Access: 2014-08-21 18:29:13.597722200 -0500
      Modify: 2014-08-21 18:29:13.597722200 -0500
      Change: 2014-08-21 18:29:13.597722200 -0500
       Birth: -
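
    A sketch of the usual answer: with ntfs-3g the effective permissions come from mount options rather than from a chmod after the fact, so a world-accessible mount can be expressed directly in the fstab entry. The umask=000 below is the most permissive choice, and uid/gid options are alternatives if only one user should own the files.

      # hypothetical fstab line for the same device:
      #   /dev/sdb4  /mnt/windows  ntfs-3g  noatime,umask=000  0  0
      umount /mnt/windows && mount /mnt/windows    # remount so the new options apply
      stat /mnt/windows/                           # the mode should now show drwxrwxrwx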

    Read the article

  • Execute local script requiring arguments on Linux via plink

    - by c_maker
    Is it possible to execute (from Windows) a local script with arguments on a remote Linux system? Here's what I've got:

      plink 1.2.3.4 -l root -pw mypassword -m hello.sh

    Is there a way to do this same thing, but be able to give input parameters to hello.sh? I've tried many things, including:

      plink 1.2.3.4 -l root -pw mypassword -m hello.sh input1 input2

    In this case it seems that plink thinks that input1 and input2 are its own arguments, which makes sense. What are my options?
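
    A sketch of one workaround: -m only supplies the command text and has no notion of script arguments, but the local script can be fed over plink's stdin and the arguments handed to the remote shell instead (this assumes bash is available on the remote side). Inside hello.sh the values then arrive as $1 and $2.

      plink 1.2.3.4 -l root -pw mypassword "bash -s input1 input2" < hello.sh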

    Read the article

  • TFTP PUT Failing Across Hosts

    - by Jason
    I have a TFTP server installed on a CentOS host. /etc/xinetd.d/tftp:

      service tftp
      {
          disable       = no
          socket_type   = dgram
          protocol      = udp
          wait          = yes
          user          = root
          server        = /usr/sbin/in.tftpd
          server_args   = -c -s /var/lib/tftpboot
          per_source    = 11
          cps           = 100 2
          flags         = IPv4
      }

    If I try to PUT a file from a remote host to the host running the TFTP server, I get "Transfer timed out" - however, it does create the file in /var/lib/tftpboot, but the file is empty. If I tftp from the TFTP server to itself (localhost) and PUT a file, it works fine. I have verified that SELinux is disabled and iptables is turned off. I can connect from the remote hosts with no issue - it just seems to be the PUT I have an issue with:

      [root@SVR01 TEST]# tftp 10.100.2.15
      tftp> status
      Connected to 10.100.2.15.
      Mode: netascii Verbose: off Tracing: off Literal: off
      Rexmt-interval: 5 seconds, Max-timeout: 25 seconds
      tftp>
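
    A diagnostic sketch: the write request clearly reaches in.tftpd (the empty file gets created), so the interesting question is whether the server's reply, which comes back from a new ephemeral UDP port rather than port 69, ever reaches the client. Watching the exchange on both ends usually points at a stateless firewall or NAT somewhere in the path; the interface name below is an assumption.

      tcpdump -ni eth0 'udp and host 10.100.2.15'   # run on the client, then repeat on the server
      # a healthy PUT shows the WRQ to port 69 followed by DATA/ACK packets on a
      # high port in both directions; if the high-port replies vanish one way,
      # whatever sits between the hosts is dropping them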

    Read the article

  • NFS share access - Permission denied

    - by rgngl
    I'm trying to share a directory on my NAS device (WD MyBook WE) over NFS with another machine on my local network. The directory on the NAS device looks like this:

      drwxr-x--- 15 git git 4096 Nov 17 01:05 git/

    and the IDs of the user git on the NAS device are:

      [root@myhost DataVolume]# id git
      uid=505(git) gid=505(git)

    I played with many different parameters in the /etc/exports file and this is what I have there currently:

      /DataVolume/git 192.168.0.20(async,rw,no_root_squash,no_subtree_check)

    On the client side I have the user git and group git with the same IDs, to match the ones on the server:

      user@myclient:~$ id git
      uid=505(git) gid=505(git) groups=505(git)

    I mount the directory with:

      sudo mount myhost:/DataVolume/git -t nfs git/

    and the mounted directory looks like:

      drwxr-x--- 15 git git 4096 Nov 17 01:05 git

    After these steps I can't cd into that directory as any user, including git and root; I get a "Permission denied" error. Thanks in advance for any help.
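
    A diagnostic sketch rather than an answer: it helps to confirm what the server actually exports and applies, and which uid/gid the client really presents at access time, since a mismatch (or an all_squash-style default baked into the NAS firmware) would explain a denial even though the numbers look identical on paper. The mount-point path in the last line is an assumption.

      # on the NAS (if its NFS tooling provides it)
      exportfs -v                            # effective export options actually applied
      # on the client
      showmount -e myhost                    # what the server says it exports
      su git -c 'id; cd /full/path/to/git'   # identity and access check at access time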

    Read the article

  • VMware: guests on two vSwitches can't communicate with each other

    - by Aaron R.
    I have some servers in this configuration (diagram not reproduced here), and I am not able, from VMGuest1, to ping either VMGuest3 or VMGuest4. I can, however, ping Host1 and Host2, which are attached to pSwitch1. The behavior is the same when VMGuest3 or 4 tries to ping VMGuest1 or 2. I don't have promiscuous mode enabled for any of these switches, nor do I have a bridge set up inside ESXi between the virtual switches. I know that one of these options is usually necessary when trying to get connectivity between two virtual switches; these switches are connected, however, through their respective physical switches, which are bridged together. Ping just times out, and an ARP request looks like this:

      [root@vmguest1:~]# arp -a vmguest3
      vmguest3.example.com (1.2.3.4) at <incomplete> on eth0
      [root@vmguest1:~]# arp -a host1
      host1.example.com (1.2.3.5) at 00:0C:64:97:1C:FF [ether] on eth0

    VMGuest1 can reach hosts on pSwitch1, so why can't it get to hosts on vSwitch1 through pSwitch1 the same way?

    Read the article

  • Linux file permissions seem right but I can't write to a directory

    - by CaseyB
    I believe that I have the permissions set correctly, but I can't write to a directory. Here's my problem:

      cborders@Kraken:/var/www$ ls -la
      total 12
      drwxrwxr-x  2 webz webz 4096 2011-12-30 14:58 ./
      drwxr-xr-x 13 root root 4096 2011-12-30 14:58 ../
      -rw-rw-r--  1 webz webz  177 2011-12-30 14:58 index.html
      cborders@Kraken:/var/www$ id cborders
      uid=1000(cborders) gid=1000(cborders) groups=1000(cborders),4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),113(lpadmin),114(admin),1002(webz)
      cborders@Kraken:/var/www$ mkdir test
      mkdir: cannot create directory `test': Permission denied

    The owner of the directory is a user called webz and the permissions allow the user and group rwx access to it. I am in the webz group, but I still can't make any changes. What am I doing wrong here?
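
    One classic explanation, sketched below: `id cborders` (with a username) reads group membership from the user database, but a shell started before the user was added to webz still runs with its old supplementary groups, so the kernel never sees the webz membership for this session. Comparing the session's groups with the database, and picking up the group without a full re-login, looks like this:

      id                   # groups of the *current* session (webz may be missing here)
      id cborders          # groups as recorded in the user database
      newgrp webz          # start a shell with webz active, or simply log out and back in
      mkdir /var/www/test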

    Read the article
