Search Results

Search found 12077 results on 484 pages for 'node js'.

Page 349/484 | < Previous Page | 345 346 347 348 349 350 351 352 353 354 355 356  | Next Page >

  • Trouble cloning a Macbook Pro hard drive

    - by Mirko Froehlich
    I am trying to upgrade the 250GB hard drive in my MacBook Pro (early 2008 model) to a 750GB drive. I have connected the new drive via an external USB enclosure. The drive is recognized fine, I can format it, etc. However, every time I try to clone the drive, I am getting Input/Output errors. Before the clone operation, I have verified both the internal and the external drive using Disk Utility, and they both check out fine. After the clone operation, the external drive shows multiple "Invalid node structure" errors.

    I have tried two approaches for cloning the drive:
      - Using Disk Utility, by starting from the OSX install DVD
      - Using Carbon Copy Cloner
    The outcome is the same in both cases. The Carbon Copy Cloner logs show a handful of the following types of errors:

        rsync: mkstemp "<... an external filename ...>" failed: Input/output error (5)
        rsync: stat "<... an external filename ...>" failed: Input/output error (5)

    The actual files affected seem to be different across different runs of the application. Before the last run, I used Disk Utility to (once more) reformat the external drive and explicitly overwrite it with zeros, but this made no difference. I also tried running a surface scan in Tech Tool Pro overnight. It got about 2/3 of the way through before I had to disconnect the drive (had to take my MacBook Pro to work), but so far it didn't report any bad blocks. Assuming it scans the drive in the same order in which blocks would be allocated during actual use, it seems like if bad blocks were to blame for the clone failures, they should have been found already (given that the source drive is only 250GB).

    As a last attempt, I may try SuperDuper as well, although my understanding is that it uses the same underlying rsync approach as Carbon Copy Cloner, so it's unlikely to perform any better. Are there any other things I should try before I send the drive in for a replacement? Could these problems be caused by my internal drive, even though it works fine and checks out fine in Disk Utility?
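
    A minimal sketch of two lower-level checks that are sometimes worth running before sending the drive back, assuming smartmontools and GNU ddrescue are installed (e.g. via MacPorts) and that the internal disk is /dev/disk0 and the external one /dev/disk1 - both device names are hypothetical, so confirm them with diskutil list first:

        # Confirm which device is which
        diskutil list

        # SMART health of both drives (many USB enclosures do not pass SMART through;
        # smartctl may need -d sat, or may simply not work over the enclosure)
        smartctl -a /dev/disk0
        smartctl -a /dev/disk1

        # Block-level copy that logs unreadable sectors instead of aborting, so read
        # errors on either side show up explicitly in clone.log; ideally run this
        # booted from the install DVD so the source volume is not mounted
        sudo diskutil unmountDisk /dev/disk1
        sudo ddrescue -f -n /dev/rdisk0 /dev/rdisk1 clone.log

    If ddrescue reports errors while reading /dev/rdisk0, the internal drive is implicated despite the clean Disk Utility verify; errors only while writing point back at the external drive or the enclosure.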

  • Using virtualization infrastructure for J2EE application distribution - a viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example:
      - latest app. server patch not applied
      - conflicting third party libraries inside the app. server root
      - server runtime and tuning parameters not configured (for example, number of connections in the database pool)

    We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as with using virtualization infrastructure in general, namely better use of hardware resources. For us, it reduces the entropy of the hosting infrastructure - a VM we distribute should be less affected by the hosting environment.

    So far, the downside I see can be in application server licenses: clients will have to use dedicated servers for our solution, but this is generally already done that way. Also, there is a complexity with maintaining virtualization infrastructure, but this is often something IT departments have more experience with than with administering and fine-tuning J2EE solutions.

    Does anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

  • Disabling Keyboard Wakeup for Ubuntu 10.04 on Acer 1810TZ

    - by sybreon
    My Acer Aspire 1810TZ laptop suspends fine but wakes up on any slight key-press. I would like to disable this behaviour. I read that it involves disabling something in /proc/acpi/wakeup, but SLPB does not seem to be listed at all.

        root@1810TZ:/etc# cat /proc/acpi/wakeup
        Device  S-state   Status    Sysfs node
        UHC0      S3     disabled   pci:0000:00:1d.0
        UHC1      S3     disabled   pci:0000:00:1d.1
        UHC2      S3     disabled   pci:0000:00:1d.2
        UHCR      S3     disabled
        EHC1      S3     disabled   pci:0000:00:1d.7
        UHC3      S3     disabled   pci:0000:00:1a.0
        UHC4      S3     disabled
        UHC5      S3     disabled
        EHC2      S3     disabled   pci:0000:00:1a.7
        EXP1      S4     disabled   pci:0000:00:1c.0
        PXSX      S4     disabled   pci:0000:01:00.0
        EXP2      S4     disabled
        PXSX      S4     disabled
        EXP3      S4     disabled
        PXSX      S4     disabled
        EXP4      S4     disabled   pci:0000:00:1c.3
        PXSX      S4     disabled   pci:0000:02:00.0
        EXP5      S4     disabled
        PXSX      S4     disabled
        EXP6      S4     disabled
        PXSX      S4     disabled

    However, the relevant bits seem to be detected in dmesg:

        [    0.357628] ACPI: AC Adapter [ACAD] (on-line)
        [    0.357749] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
        [    0.357754] ACPI: Power Button [PWRB]
        [    0.357817] input: Lid Switch as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00/input/input1
        [    0.359319] ACPI: Lid Switch [LID0]
        [    0.359390] input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
        [    0.359394] ACPI: Sleep Button [SLPB]
        [    0.359475] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
        [    0.359479] ACPI: Power Button [PWRF]

    Not quite sure what to do next.
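
    A minimal sketch of the usual approach on this kernel generation, assuming the keyboard hangs off one of the USB controllers listed above (which controller, if any, is an assumption - an internal PS/2 keyboard is controlled elsewhere). Writing a device name back to /proc/acpi/wakeup toggles its state, and the setting does not survive a reboot, so it is commonly added to /etc/rc.local:

        # Show current wakeup-capable devices and their state
        cat /proc/acpi/wakeup

        # Toggle one device (e.g. the first USB host controller); writing the name
        # flips it between enabled and disabled
        echo UHC0 > /proc/acpi/wakeup

        # Alternatively, disable wakeup per USB device via sysfs
        for dev in /sys/bus/usb/devices/*/power/wakeup; do
            echo disabled > "$dev"
        done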

  • Puppet&Hiera: $variable is not an hash or array when accessing it

    - by txworking
    I wrote a puppet module and the content of init.pp was:

        class install(
          $common_instanceconfig = hiera_hash('common_instanceconfig'),
          $common_instances      = hiera('common_instances')
        ) {
          define instances {
            common { $title:
              name       => $title,
              path       => $common_instanceconfig[$title]['path'],
              version    => $common_instanceconfig[$title]['version'],
              files      => $common_instanceconfig[$title]['files'],
              pre        => $common_instanceconfig[$title]['pre'],
              after      => $common_instanceconfig[$title]['after'],
              properties => $common_instanceconfig[$title]['properties'],
              require    => $common_instanceconfig[$title]['require'],
            }
          }
          instances { $common_instances: }
        }

    And the hieradata file was:

        classes:
          - install

        common_instances:
          - common_instance_1
          - common_instance_2

        common_instanceconfig:
          common_instance_1
            path      : '/opt/common_instance_1'
            version   : 1.0
            files     : software-1.bin
            pre       : pre_install.sh
            after     : after_install.sh
            properties: "properties"
          common_instance_2:
            path      : '/opt/common_instance_2'
            version   : 2.0
            files     : software-2.bin
            pre       : pre_install.sh
            after     : after_install.sh
            properties: "properties"

    I always get an error message when the puppet agent runs:

        Error: common_instanceconfig String is not an hash or array when accessing it with common_instance_1 at /etc/puppet/modules/install/manifests/init.pp:16 on node puppet.agent1.tmp

    It seems $common_instances is retrieved correctly, but $common_instanceconfig is always treated as a string. I used YAML.load_file to load the hieradata file, and got a correct hash object. Can anybody help?
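
    Two quick checks that usually narrow this kind of thing down, run on the puppet master; the hiera.yaml and data file paths below are assumptions, so adjust them to the actual setup:

        # Ask Hiera directly what it resolves for the key, forcing a hash lookup
        hiera -c /etc/puppet/hiera.yaml -h common_instanceconfig

        # Confirm the data file parses into the structure you expect
        ruby -ryaml -rpp -e 'pp YAML.load_file("/etc/puppet/hieradata/common.yaml")["common_instanceconfig"]'

    If the ruby one-liner shows a string rather than a hash for the first instance, the problem is in the YAML itself (indentation or a missing colon on the instance key) rather than in the manifest.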

  • Is a timeout in tracert output an indication of an error?

    - by nitramk
    TCP/IP packets sent from my computer to a remote server do not always reach their destination and end up being retransmitted, sometimes several times, before they succeed. To troubleshoot this, I'm running a tracert to the server:

        Tracing route to <site> [<address>]
        Over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  mymachine
          2    <1 ms    <1 ms    <1 ms  gw.levonline.com [217.70.32.30]
          3    <1 ms    <1 ms    <1 ms  81.201.213.218
          4    <1 ms    <1 ms    <1 ms  bmf1-hmf1.driften.net [81.201.213.12]
          5    <1 ms    <1 ms    <1 ms  10ge-2-4-cr2.a1.sth.ownit.se [84.246.88.157]
          6    <1 ms     *       <1 ms  netnod-ix-ge-b-sth-4470.microsoft.com [195.69.11.181]
          7    26 ms     *        *     ge-3-0-0-0.ams-64cb-1a.ntwk.msn.net [207.46.42.1]
          8    48 ms    57 ms    56 ms  ten9-1.lts-76e-1.ntwk.msn.net [207.46.42.133]
          9     *        *        *     Request timed out.

    In hops 6 and 7, I'm seeing timeouts while waiting for the reply from the server (as seen above). Running the same tracert many times gives varying output; sometimes the response is fine, but sometimes I get this timeout for 1, 2 and sometimes for all 3 packets. The timeout always starts at the same server, netnod-ix-ge-b-sth-4470.microsoft.com. I've tried setting the tracert timeout to 10 seconds, but am still getting the timeout. Running tracert towards other servers does not give me the same timeout. Microsoft network technicians tell me that the problem is not on "their" side.

    Are these timeouts an indicator of a lost packet on the specific node which did not respond? Are the timeouts an indication of there being a problem, or is it normal?
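
    One way to quantify whether hops 6 and 7 are actually dropping traffic or just deprioritizing their own ICMP replies is pathping, which is built into Windows, probes each hop over a longer period and reports per-hop loss (the host name is a placeholder):

        pathping -n <site>

    Loss that appears at an intermediate hop but not at later hops or at the destination generally means that router is rate-limiting its replies rather than dropping transit traffic, which is normal; loss that persists from one hop onwards to the destination is the real signal.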

  • Building a Web proxy to get around same-origin restrictions for collaborative Webapp based on a MEAN stack

    - by Lew Cohen
    Can anyone point to books, articles, blogs, or even applications - open-source or proprietary - that detail building a Web proxy? This specific proxy will exist to get around the same-origin restrictions that prevent, for instance, loading a given Website into an <iframe> in a Webapp. This Webapp is a collaborative application in which a group of users log in to the app's Website and can then load different Websites into this app's <iframe> and do various collaborative things (e.g., several users simultaneously browsing a Website, in synch). The Webapp itself is built on a MEAN stack (MongoDB, Express, AngularJS, and Node.js). The purpose of this proxy is not to do anonymous browsing or to bypass censorship.

    Information on how to build such a vehicle seems not to be readily available from my research. I've come across Glype but am not sure whether this is a feasible solution. I don't want to reinvent the wheel, so if a product is available for purchase, great. Else, we'd need to build one. The one that seems to be close is http://www.corsproxy.com. In effect, we'd like to re-create this since it evidently does what's needed. I don't care what server-side technology is used. Our app is MEAN-based, if that has any bearing.

    Also, the proxy has to obviously honor basic security considerations (user cookies, etc.) and eventually be scalable. So, anyone know of any sources that would detail how to build one of these? Is it even worth building if something already exists? If so, what would be a good candidate? Any other issues that should be considered with this proxy/application? Thanks a lot!

  • Windows Domain Chaos - Any Approach to Solving It?

    - by Chake
    We are running an old Windows 2003 Server as domain controller (DC2003). To safely migrate to Windows 2008 R2, we added a 2008 R2 machine (DC2008R2) to the domain as a domain controller (adprep etc.). After dcpromo on DC2008R2 everything seemed to be OK. The new DC appeared under the "Domain Controllers" node. It wasn't checked at this time whether DC2008R2 could REALLY act as a domain controller. Later we tried to shut down DC2003 and ran into a total mess with non-functional Exchange and Team Foundation Services. After that I got the job to fix it...

    First I thought it could be a problem with DC2008R2. So I removed it as a domain controller and installed a new Windows 2008 R2 server, DC2008R2-2. I ran into similar problems. I tried a bunch of stuff, but nothing helped. I won't list it; maybe I made a mistake, so I'm willing to redo it with your suggestions.

    To have a starting point I tried the Best Practices Analyzer, which ended up with 24 "Compatible" and 26 "Not Compatible" tests. Of these 26 tests, 19 read the same (I'm translating from German, so this may not be the exact wording):

        Problem: Using the Best Practices Analyzer for Active Directory Domain Services (Active Directory Domain Services Best Practices Analyzer, AD DS BPA), no data can be gathered using the name of the forest and the domain controller DC2008R2-2.

    I appreciate any suggestions; this really bothers me.
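
    A hedged starting point for this kind of triage, run from an elevated prompt on each DC (these are the standard AD health tools, nothing specific to this environment; netdom and dcdiag may need the support tools installed on the 2003 box):

        REM Full health check of every DC in the forest
        dcdiag /v /c /e > C:\dcdiag.txt

        REM Replication summary - failing partners and last-success times
        repadmin /replsummary
        repadmin /showrepl * /csv > C:\repl.csv

        REM Confirm which DC currently holds the FSMO roles
        netdom query fsmo

        REM DNS is the usual culprit when the BPA cannot collect data from a DC
        dcdiag /test:dns /dnsall /v

    Replication or DNS failures reported here would explain both the Exchange/TFS breakage when DC2003 goes down and the BPA's inability to gather data from DC2008R2-2.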

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take Snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance.

    Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtained a basic Windows Server 2003 VM with SnapDrive installed, which is SAN attached and zoned to the NetApp, and I have written a batch file to do the following:
      1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
      2. Perform a TSM based incremental backup
      3. Dismount the LUN

    This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
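
    A rough sketch of how the batch file could at least propagate failures back to the TSM scheduler through its exit code, by checking ERRORLEVEL after each step. The sdcli mount/unmount arguments are placeholders - the exact SnapDrive CLI syntax should come from the SnapDrive documentation - while dsmc incremental is the standard TSM backup-archive client call, and S: is an assumed drive letter:

        @echo off
        REM 1. Mount the latest __RECENT snapshot LUN (placeholder sdcli arguments)
        sdcli snap mount <placeholder arguments>
        if errorlevel 1 exit /b 1

        REM 2. TSM incremental backup of the mounted drive letter
        dsmc incremental S:\ -subdir=yes
        if errorlevel 1 (
            sdcli snap unmount <placeholder arguments>
            exit /b 2
        )

        REM 3. Dismount and report success to the TSM scheduler
        sdcli snap unmount <placeholder arguments>
        exit /b 0

    For a command schedule, TSM records the command's exit code, so a non-zero exit /b should be enough for the run to show up as failed in the schedule log.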

  • Clustering filesystem for small files

    - by viraptor
    Hi, I'm looking for a distributed filesystem which I could use for storing lots of small files (<1MB usually). What I want to get is:
      - 2 servers which have the fs mounted themselves and mirror the data
      - locking support (among reachable nodes)
      - some kind of best-effort automatic resynchronisation after one node goes down and comes back again

    What I mean by the resync is that I'm ok with both servers doing read/write operations even if they split-brain. I'm also ok if a local process obtains a lock if the other host is not reachable. From the resync I expect only a file-level consistent view after a while - that is, if file x is modified on both nodes during a split-brain, I don't really care which one is available after they join again, as long as it's a full file, not one block coming from node1 and another block from node2.

    Is there a solution like that out there? I see that gluster has some problems with file locks (even in 3.1). I also noticed that OCFS2 will panic if both nodes split-brain. What other filesystem would allow me to do what I want?

  • CentOS iscsi initiator has session but there is no block device

    - by jcalfee314
    I have installed the scsi-target-utils package on CentOS and I used it to perform a discovery. The discovery did give me an active session. I restarted the iscsi service but I do not see any new devices (fdisk -l). I see in /var/log/messages that my connection is operational now. I'm not sure how to debug this further. Can someone direct me into fixing this?

    Discovery:

        iscsiadm -m discovery -t sendtargets -p 192.168.0.155

    returns:

        192.168.0.155:3260,-1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3

    Just to verify it actually worked:

        iscsiadm -m session

    returns:

        tcp: [1] 192.168.0.155:3260,1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3

    Restarting as the directions say to do:

        service iscsi restart

    Output written to /var/log/messages:

        Stopping iscsi: Sep 20 12:14:22 localhost kernel: connection1:0: detected conn error (1020)
        [  OK  ]
        Starting iscsi: Sep 20 12:14:22 localhost kernel: scsi1 : iSCSI Initiator over TCP/IP
        Sep 20 12:14:22 localhost iscsid: Connection1:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is shutdown.
        Sep 20 12:14:22 localhost iscsid: Could not set session2 priority. READ/WRITE throughout and latency could be affected.
        [  OK  ]
        [root@db iscsi]# Sep 20 12:14:23 localhost iscsid: Connection2:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is operational now

    Ran a login command:

        iscsiadm -m node -T iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3 -p 192.168.0.155 -l

    No errors, no logging occurred. Next I compared the output from "fdisk -l | egrep dev" both with the iscsi session and without. There is no difference. I suppose I could just look in /etc/mtab. Any ideas on how I can get an iscsi device?
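
    A few checks that usually reveal whether the kernel actually got a SCSI disk out of the session; nothing here is specific to this target:

        # Force a rescan of the logged-in session(s)
        iscsiadm -m session -R

        # Print the session with attached SCSI devices; look for "Attached scsi disk sdX"
        iscsiadm -m session -P 3

        # The kernel logs any new disk as it appears
        dmesg | tail -n 30
        ls -l /dev/disk/by-path/ | grep -i iscsi

    If -P 3 shows the session but no attached disk, the target may not be exporting a LUN to this initiator's IQN, which would explain why fdisk -l never changes.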

  • Proxychains, Tortunnel, Privoxy: cannot connect() to port

    - by Benjamin
    Hi all, I'm trying to do an nmap scan through tor using tortunnel, privoxy and proxychains like explained in the following video: http://vimeo.com/6238958

    I'm getting rather weird results. I can successfully perform any SYN scan on any port. However, as soon as I try to do connect() scans, proxychains cannot connect itself to all ports. In other words, I can perform connect() scans to port 80:

        proxychains nmap -P0 -A -sV www.zzz.com -p80

    but not port 21:

        proxychains nmap -P0 -A -sV www.zzz.net -p21

    I get the following error:

        Starting Nmap 4.62 ( http://nmap.org ) at 2010-06-02 08:34 UTC
        ProxyChains-2.1 (http://proxychains.sf.net)
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21

    My only guess would be that the exit node I'm using does not allow connections to port 21. Would that be correct? How could I fix it? Thanks for your time.
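
    Worth noting when reading these results: proxychains can only intercept connect()-based TCP sockets, so -sS SYN scans never actually go through the proxy at all, which is why they appear to succeed on every port. For the connect() scans, the exit policy of the exit node tortunnel is pinned to is the usual suspect. A sketch of the relevant pieces, assuming tortunnel's torproxy is listening on 127.0.0.1:5060 as in the output above and takes the exit node IP as its argument (as in the referenced video); the exit node IP is a placeholder:

        # proxychains.conf - point the chain at the local tortunnel listener
        [ProxyList]
        socks5 127.0.0.1 5060

        # Restart torproxy against an exit node whose exit policy allows port 21
        ./torproxy <exit-node-ip-allowing-port-21>

    Many Tor exit policies block low-numbered ports such as 21 while allowing 80/443, which would produce exactly this per-port behaviour.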

  • lsof not showing what port a proc is listening on

    - by ericslaw
    I have many processes on a box listening on several ports. I am trying to map ports to pids. The problem is that lsof is not telling me what ports belong to which process. Given an apache listening on port 80, I can see it listening via netstat:

        user@host% netstat -an | grep LISTEN | grep 80
        *.80                 *.*                0      0 49152      0 LISTEN

    But when I try to map port 80 to a pid I get nothing:

        user@host% lsof -iTCP:80 -t

    When I try seeing what sockets that specific pid is using I get:

        user@host% lsof -lnP -p31 -a -i
        COMMAND   PID USER   FD   TYPE        DEVICE SIZE/OFF NODE NAME
        libhttpd.  31    0  15u  IPv4 0x6002d970b80      0t0  TCP *:65535 (LISTEN)

    Notice the *:65535 in the NAME column. Does anyone know why lsof is not reporting the port in use? I am running as root. I am using a mix of lsof and OS versions:
      - lsof v4.77 on Solaris 10 sparc
      - lsof v4.72 on Redhat 4.2
      - etc.

    I know that linux solutions can use "netstat -p", so I guess I'm only looking for why solaris isn't working, but I find lsof is frequently silent and not showing me expected data.
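
    On Solaris, a common fallback when lsof can't resolve the bound port is the native pfiles tool, which reports the sockname (including the port) per file descriptor. A sketch that walks every process looking for a listener on port 80, run as root; port 80 is just the example from above:

        #!/bin/sh
        # Find the pid(s) with a socket bound to TCP port 80, using pfiles (Solaris)
        for piddir in /proc/[0-9]*; do
            pid=${piddir#/proc/}
            if pfiles "$pid" 2>/dev/null | grep 'sockname: .*port: 80$' >/dev/null; then
                echo "port 80 -> pid $pid"
                ps -o pid,comm -p "$pid"
            fi
        done

    This is slower than lsof but relies only on the /proc tools shipped with the OS, so it works even where lsof's kernel structures are out of sync with the running release.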

  • WWNs, WWPNs and Fibre Channel addresses

    - by user238230
    Lots of contradictory information on these subjects, and I don't know why. My first question is about the 64-bit WWN. One reference claims the terms WWN and WWPN are synonymous. An online source seems to refute this. They say: "A WWPN (world wide port name) is the unique identifier for a fibre channel port, where a WWN (world wide name) is the unique identifier for the node itself. A good example is a dual port HBA. There will be two WWPNs (one for each port) and only a single WWN for the card itself."

    Question #1: Which is correct? I'm almost positive I read that every "port" has a WWN.

    My next question is about the 24-bit FC address that is dynamically allocated to a port when it is introduced to the switch. The Domain ID field is defined as "a unique number provided to each switch in the fabric."

    Question #2: Do Domain IDs only apply to switch ports? For example, what would the Domain ID be for an HBA? None? The same as the switch port it is connected to?

    Question #3: My last question is about the Name Server of a switch. A book example shows the routing of a message through the switch. It uses the WWNs of the source and destination ports to route the message. I am assuming that the Name Server must associate the WWN and the FC address in some way in order to route the message, correct?

  • UPS power requirements for server

    - by captainentropy
    Greetings! So, I just placed an order for a new server. The company recommended that I get a 3000W UPS. (!) As best as I could, I calculated the following wattage consumption based on benchmarked data or datasheets provided by the manufacturers of each component:

                              number   watts   total watts
        MoBo                     1      240        240
        CPUs (E5540)             2       80        160
        RAID cards (3ware)       2       18         36
        RAM (6x4GB)              6        3         18
        DVD drive                1        7          7
        floppy                   1        2          2
        RE4 drives               8        7         56
        WD20 drives              8        6         48
        Intel X25 SSD            2        0.15       0.3
        total                                      567

    So that is for the PSU requirements only. The PSUs in the machine are a 720W for the master node and 800W each for two subsystems. That's a total of 2320W that can be delivered by these PSUs. But that is 4X the amount being consumed, at most, by the components. I didn't count case fans or the eSATA card (3W maybe?) or what the PSUs themselves require, but assuming I double or triple my calculations I'm not even remotely close to the 3000W UPS I was suggested to get. They run at least $1100. I could get a 2000W for about $750 or a 1500W for $450 and still be well over my estimated power need.

    I don't think I need a whole lot of run time in the case of a power outage, maybe 20 minutes max, enough time to shut down if the power doesn't come on within 5-10 minutes. Any thoughts? Am I off on my calculations? Did I overlook something major? If so, what are your suggestions for a UPS? Thanks!
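
    A quick sanity check of the arithmetic, with an assumed PSU efficiency, headroom factor and UPS power factor folded in; the 0.80, 1.3 and 0.9 figures are illustrative assumptions, not measurements:

        awk 'BEGIN {
            load  = 240+160+36+18+7+2+56+48+0.3;   # component draw from the table above (~567 W)
            wall  = load / 0.80;                   # assumed 80% PSU efficiency -> draw at the wall
            sized = wall * 1.3;                    # assumed 30% headroom for fans, spikes, growth
            va    = sized / 0.9;                   # assumed UPS power factor of 0.9 -> VA rating
            printf "load %.0f W, at wall %.0f W, sized %.0f W (~%.0f VA)\n", load, wall, sized, va
        }'

    Even with those padding factors the result lands in roughly the 1000-1500 VA class rather than 3000 W; actual run time at that load still has to be read off the vendor's runtime chart for the specific model.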

  • Free, simple, configurable SOCKS5 server

    - by Pooria Azimi
    I've been looking (for the past 6-7 hours) for a fast, free and configurable SOCKS5 server. I haven't found anything that matches my needs. They are either too complicated, too bare-bones or simply buggy as hell. This is (all) I need:
      - I want it to run on Linux (and also OS X, preferably)
      - I want it to listen on localhost:8888
      - When my app (say wget.. or curl --socks5=localhost:8888) requests http://www.google.com/search?q=asd (or any other url - both http and https), I want it to fetch the page not from google's servers, but from http://localhost:4444/cached?uri=http://www.google.com/search%3Fq%3Dasd

    Nothing more! I don't need caching, or anything else. I just want a SOCKS5 server, running locally, which redirects all queries to my own (local) server. It could be written in C, C++, Python, PHP, Perl, Node.js or any other language. I don't care, as long as it supports my (very limited) needs, or I can easily change the source to make it so. Thanks a lot

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a VPS (512MB) from Linode and I was running nginx + php5-fpm (which comes with php5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company) and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf:

        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 5
        pm.max_requests = 300

    That's what the top command tells me:

        top - 15:36:58 up 3 days, 16:05,  1 user,  load average: 0.00, 0.00, 0.00
        Tasks: 209 total,   1 running, 208 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:    532288k total,   469628k used,    62660k free,    28760k buffers
        Swap:  1048568k total,      408k used,  1048160k free,   210060k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        22806 www-data  20   0  178m  67m  31m S    1 13.1   0:05.02 php5-fpm
         8980 mysql     20   0  241m  55m 7384 S    0 10.6   2:42.42 mysqld
        22807 www-data  20   0  162m  43m  22m S    0  8.3   0:04.84 php5-fpm
        22808 www-data  20   0  160m  41m  23m S    0  8.0   0:04.68 php5-fpm
        25102 www-data  20   0  151m  30m  21m S    0  5.9   0:00.80 php5-fpm
        10849 root      20   0 44100 8352 1808 S    0  1.6   0:03.16 munin-node
        22805 root      20   0  145m 4712 1472 S    0  0.9   0:00.16 php5-fpm
        21859 root      20   0 66168 3248 2540 S    1  0.6   0:00.02 sshd
        21863 root      20   0 66028 3188 2548 S    0  0.6   0:00.06 sshd
         3956 www-data  20   0 31756 3052  928 S    0  0.6   0:06.42 nginx
         3954 www-data  20   0 31712 3036  928 S    0  0.6   0:06.74 nginx
         3951 www-data  20   0 31712 3008  928 S    0  0.6   0:06.42 nginx
         3957 www-data  20   0 31688 2992  928 S    0  0.6   0:06.56 nginx
         3950 www-data  20   0 31676 2980  928 S    0  0.6   0:06.72 nginx
         3955 www-data  20   0 31552 2896  928 S    0  0.5   0:06.56 nginx
         3953 www-data  20   0 31552 2888  928 S    0  0.5   0:06.42 nginx
         3952 www-data  20   0 31544 2880  928 S    0  0.5   0:06.60 nginx

    So, the question is: is there any way to use less memory? Btw, I have 16 cores and it would be nice to make use of them...
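
    A conservative starting point that is often suggested for a 512MB VPS, assuming roughly 30-70MB resident per PHP worker as in the top output above; the exact numbers are assumptions to be tuned against real traffic:

        ; php5-fpm.conf - fewer, shorter-lived workers to cap PHP's footprint
        pm = dynamic
        pm.max_children = 3
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2
        pm.max_requests = 200

        # nginx.conf - one worker per core to actually use the 16 cores
        worker_processes 16;
        events { worker_connections 1024; }

    Each nginx worker in the listing above costs only about 3MB resident, so the PHP pool (and mysqld) are the main memory levers; extra nginx workers mostly buy concurrency, not memory savings.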

  • XenServer VMs can't reach network

    - by toto
    I'm currently trying to set up a small cloud architecture. I'm using CloudStack 2.2.14, which needs two nodes: a management server (node1) to provision the cloud, and a hypervisor, XenServer 5.6 SP2, to host the VMs (node2). I succeeded in creating both node1 and node2 as VMs inside an ESXi 5 (VMware) host.

    So ESXi 5 is hosting two VMs, node1 + node2, and node2 (the XenServer) will itself host VMs (such as Ubuntu or CentOS). Both node1 and node2 can ping each other and get an internet connection through ESXi 5. My problem is that VMs inside node2 (XenServer) can't reach the network: they can't ping node1 or ESXi or get an internet connection, but they can ping VMs in node2 (XenServer).

    So I tried to:
      1. Set up a DHCP server as node3 in ESXi 5 and connect node2 (XenServer) to it, but the VMs in node2 still can't reach the outer network.
      2. Set up a DHCP server inside node2, but the same problem remains.

    So:
      1. Is there any other configuration I'm missing in node2 (considering that I'm sure about the DNS, GW and NETMASK configuration)?
      2. Is the problem caused by creating VMs inside node2 (XenServer), which is itself a VM inside ESXi 5?
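
    One frequently cited culprit in this nested setup (a hypervisor running as a VM) is the outer ESXi vSwitch dropping frames from MAC addresses it did not assign, i.e. the MACs of the VMs inside XenServer. A sketch of the usual check, assuming the standard vSwitch is vSwitch0 and the ESXi 5 esxcli security-policy namespace; verify the exact option names with --help before relying on them:

        # On the ESXi 5 host: allow promiscuous mode, MAC changes and forged transmits
        # on the vSwitch carrying the XenServer VM's traffic (vSwitch0 is an assumption)
        esxcli network vswitch standard policy security set \
            --vswitch-name=vSwitch0 \
            --allow-promiscuous=true \
            --allow-mac-change=true \
            --allow-forged-transmits=true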

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem with syncing a big amount of data between two data centers. Both machines have a gigabit connection and are not fully occupied, but the fastest I am able to get is something between 6 and 10 Mbit - not acceptable! Yesterday I ran some traceroutes which indicate huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20ms instead of 300ms). How can I trace this to find the actual slow node? Thought about a traceroute with bigger packets, but will this work?

    In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers or clients. Actually office => server is faster than server <=> server! Any idea is appreciated ;)

    Update: We actually use rsync over ssh to copy the files. As encryption tends to have more bottlenecks, I tried an HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say it is related to a cheap network that the traffic gets routed through. It is true that it will route through a "cheapnet", but only the other way around. Our direction goes through LEVEL3 and the other way goes through lambdanet (which they said is not a good network). If I got it right (I'm a network intermediate), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path.

    I basically want to know if they're right or if they're just trying to abdicate their responsibility. The thing is that the problem exists in both directions (while on different routes), so I think it is in the responsibility of our hoster. And honestly, I don't believe that there is a DC2DC connection which can only handle 600 kB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
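
    Two measurements that usually separate "slow path" from "slow endpoint" in this situation; both tools are standard, and the host name, duration and stream count below are placeholders:

        # Per-hop loss and latency over many probes, run from each side towards the other
        mtr --report --report-cycles 200 <remote-dc-host>

        # Raw TCP throughput with rsync/ssh taken out of the picture:
        # on one server
        iperf -s
        # on the other
        iperf -c <remote-dc-host> -t 60 -P 4

    If iperf reaches something close to gigabit while rsync does not, the bottleneck is in the application/ssh layer; if mtr shows loss or a latency jump starting at one hop in only one direction, that points at the carrier on that path.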

  • Upgrade SQL Server 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here. It is not intentional; I'm a senior level developer in a very small company having to act like a manager at the moment.

    Anyway, the story is that we have 2 older Dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear what that means. We have 2 brand new blade servers and want to move the existing databases to the new hardware. OK, so here is the gotcha: we need to do this with little or no downtime.

    I'm being told that we can evict the passive node, then pull in one of the new servers. But I'm also being told that this is a dangerous step, because something could go wrong that would cause the cluster to fail, and then we would be left with nothing because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of downtime where we bring up a new cluster on the new hardware and then migrate the databases 1 by 1.

  • Device CAL, User CAL or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on the servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard edition, which is fine. We also have 4 servers running Standard that process the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs that access these DW servers for maintenance (using the same Windows login via remote desktop).

    So visually:

        40 nodes -- 3 aggregators -- [2] raw (which we want to run EE) -- 2 calculators -- 1 datawarehouse

    Nodes PUSH to aggregators, Raws PULL from aggregators, Calculators PULL from Raw, Calculators PUSH to the datawarehouse. I am specifying push vs. pull in case that changes how the number of licenses is calculated.

    Q1) How many device (or user) CALs do we need?
    Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK in our license terms)?
    Q3) What happens if another device tries to access a server with the limited number of device CALs?
    Q4) Are the device CALs a simultaneous number of devices or a total?
    Q5) Do Device and User CALs cost the same, or is there a difference?
    Q6) Would we need to buy a processor license (we are hoping not to)?

  • Install Exchange 2013 with DSC

    - by Alain Laventure
    I tried to install Exchange 2013 with the WindowsProcess resource in an existing Exchange configuration. All prerequisites are installed (the Exchange organization still exists). This is my resource section:

        WindowsProcess Exchange2013
        {
            Credential = $credential
            Path       = "C:\Sources\Cumulative Update 5 for Exchange Server 2013 (KB2936880)\Setup.exe"
            Arguments  = "/mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /TargetDir:C:\EX2013"
            Ensure     = "Present"
        }
        #End Filter
        } #End Node
        } # End configuration

        /*
        @TargetNode='TargetDSC02'
        @GeneratedBy=exadmin
        @GenerationDate=08/02/2014 08:16:03
        @GenerationHost=SOURCEDSC02
        */
        instance of MSFT_Credential as $MSFT_Credential1ref
        {
            Password = "Password1";
            UserName = "S05\\Exadmin";
        };

    Exadmin is a member of the Organization Management group and is also a member of the Domain Admins group, so it should be able to install Exchange. When I execute this resource, the Exchange installation starts, but after 1 minute the installation stops with this error:

        Failed [Rule:GlobalServerInstall] [Message:You must be a member of the 'Organization Management' role group or a member of the 'Enterprise Admins' group to continue.]

    To be sure that the rights really are the problem, I created a special user with only administrator rights on the Exchange server and with no Exchange permissions, and ran setup manually on the new Exchange server:

        .\Setup.exe /mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /Targetdir:C:\EX2013

    And I got the same error as with DSC. After I added my test user to the Organization Management group and ran the same command again, the Exchange 2013 installation finished without any error. That proves that the problem with DSC is a permissions issue.
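
    A small sketch that can confirm which identity and group memberships the process launched by the WindowsProcess resource actually receives; the resource name and output path are arbitrary placeholders. If 'Organization Management' is missing from the resulting file, the problem is the token DSC builds for the supplied credential rather than the account's memberships themselves:

        WindowsProcess WhoAmICheck
        {
            Credential = $credential
            Path       = "C:\Windows\System32\cmd.exe"
            Arguments  = "/c whoami /groups > C:\Temp\dsc_whoami.txt 2>&1"
            Ensure     = "Present"
        }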

  • Hadoop init script asks for a password

    - by Ramesh
    I have installed Hadoop on my Ubuntu 12.04 single node. I am trying to execute an init script to make Hadoop run on startup, but it asks for a password every time I execute it.

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          hadoop services
        # Required-Start:    $network
        # Required-Stop:     $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Description:       Hadoop services
        # Short-Description: Enable Hadoop services including hdfs
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HADOOP_BIN=/home/naveen/softwares/hadoop-1.0.3/bin
        NAME=hadoop
        DESC=hadoop
        USER=naveen
        ROTATE_SUFFIX=
        test -x $HADOOP_BIN || exit 0
        RETVAL=0
        set -e
        cd /

        start_hadoop () {
            set +e
            su $USER -s /bin/sh -c $HADOOP_BIN/start-all.sh > /var/log/hadoop/startup_log
            case "$?" in
                0) echo SUCCESS
                   RETVAL=0 ;;
                1) echo TIMEOUT - check /var/log/hadoop/startup_log
                   RETVAL=1 ;;
                *) echo FAILED - check /var/log/hadoop/startup_log
                   RETVAL=1 ;;
            esac
            set -e
        }

        stop_hadoop () {
            set +e
            if [ $RETVAL = 0 ] ; then
                su $USER -s /bin/sh -c $HADOOP_BIN/stop-all.sh > /var/log/hadoop/shutdown_log
                RETVAL=$?
                if [ $RETVAL != 0 ] ; then
                    echo FAILED - check /var/log/hadoop/shutdown_log
                fi
            else
                echo No nodes running
                RETVAL=0
            fi
            set -e
        }

        restart_hadoop() {
            stop_hadoop
            start_hadoop
        }

        case "$1" in
            start)
                echo -n "Starting $DESC: "
                start_hadoop
                echo "$NAME." ;;
            stop)
                echo -n "Stopping $DESC: "
                stop_hadoop
                echo "$NAME." ;;
            force-reload|restart)
                echo -n "Restarting $DESC: "
                restart_hadoop
                echo "$NAME." ;;
            *)
                echo "Usage: $0 {start|stop|restart|force-reload}" >&2
                RETVAL=1 ;;
        esac

        exit $RETVAL

    Please tell me how to run Hadoop without entering a password.
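
    Two things commonly prompt for a password in this setup: su $USER asks for naveen's password whenever the script is run by a non-root user (at boot it runs as root, so that part is silent), and Hadoop's start-all.sh itself uses ssh to reach localhost, which prompts unless key-based login is configured for that user. A sketch of the usual key setup, run as the naveen user; the paths are the OpenSSH defaults:

        # Generate a passwordless key pair and authorize it for ssh to localhost
        ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

        # Should now log in without asking for a password
        ssh localhost true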

  • Server 2008 R2 domain windows update strategy

    - by Joost Verdaasdonk
    Let me explain my question a bit. We are a small company that has now made the first move to a bigger network. For now the network consists of 5 Server 2008 R2 machines (DC, SQL, web, etc.). Everything we need is now in place, but for now we cannot afford to finish the network by implementing redundant systems (secondary DC, DNS, SQL cluster, etc.). For some people this is hard to understand, but this is the current situation (and we are aware of it and will fix it when we can).

    Because we want to keep our systems secure and up to date, I've made sure that all systems are updated regularly. The problem is of course that the number of updates Microsoft rolls out that need a system reboot seems to be growing (maybe I'm wrong and it just feels like this). ;-)

    In our domain, servers depend on each other for services (like SQL, web, or whatever), so just rebooting a server at will is NOT a good idea! For now I update all of them without rebooting at once. After all are up to date, I bring them down in the order they depend on each other. After this I reboot all of them in the inverse order.

    I understand of course that if I DID have redundancy in my system, updating and rebooting would not be such a problem, because the server's tasks could be taken over by another node, but this is something we will need to add when we can. So my question is: given the situation above, can you suggest more update strategies or general ideas that could help me do this process in a better / faster way? Thanks for your thoughts!
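
    A sketch of how the manual "bring down in dependency order, boot in reverse" routine could be approximated from a management workstation once the updates are installed, rebooting one server at a time with the built-in shutdown tool; the server names, the order and the 300-second wait are placeholders for this environment:

        @echo off
        REM Reboot one server at a time; adjust the list to the real dependency order
        for %%S in (WEB01 SQL01 DC01) do (
            echo Rebooting %%S ...
            shutdown /r /m \\%%S /t 0 /c "Monthly patch reboot"
            REM Crude fixed wait so the next server only goes down after this one is back;
            REM replace with a check that the relevant service responds if needed
            timeout /t 300 /nobreak
        )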
