Search Results

Search found 5157 results on 207 pages for 'node'.

  • Disabling Keyboard Wakeup for Ubuntu 10.04 on Acer 1810TZ

    - by sybreon
    My Acer Aspire 1810TZ laptop suspends fine but wakes up on any slight key-press. I would like to disable this behaviour. I read that it involves disabling something in /proc/acpi/wakeup, but SLPB does not seem to be listed at all.

        root@1810TZ:/etc# cat /proc/acpi/wakeup
        Device  S-state   Status     Sysfs node
        UHC0      S3     disabled    pci:0000:00:1d.0
        UHC1      S3     disabled    pci:0000:00:1d.1
        UHC2      S3     disabled    pci:0000:00:1d.2
        UHCR      S3     disabled
        EHC1      S3     disabled    pci:0000:00:1d.7
        UHC3      S3     disabled    pci:0000:00:1a.0
        UHC4      S3     disabled
        UHC5      S3     disabled
        EHC2      S3     disabled    pci:0000:00:1a.7
        EXP1      S4     disabled    pci:0000:00:1c.0
        PXSX      S4     disabled    pci:0000:01:00.0
        EXP2      S4     disabled
        PXSX      S4     disabled
        EXP3      S4     disabled
        PXSX      S4     disabled
        EXP4      S4     disabled    pci:0000:00:1c.3
        PXSX      S4     disabled    pci:0000:02:00.0
        EXP5      S4     disabled
        PXSX      S4     disabled
        EXP6      S4     disabled
        PXSX      S4     disabled

    However, the relevant bits do show up in dmesg:

        [    0.357628] ACPI: AC Adapter [ACAD] (on-line)
        [    0.357749] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
        [    0.357754] ACPI: Power Button [PWRB]
        [    0.357817] input: Lid Switch as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00/input/input1
        [    0.359319] ACPI: Lid Switch [LID0]
        [    0.359390] input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
        [    0.359394] ACPI: Sleep Button [SLPB]
        [    0.359475] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
        [    0.359479] ACPI: Power Button [PWRF]

    Not quite sure what to do next.
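
    For illustration only (this does not address SLPB, which is exactly what is missing above, and the internal keyboard may sit on the i8042 controller rather than USB): the entries that are listed can be toggled by writing their name back to the file, and USB devices additionally expose a per-device wakeup flag in sysfs. The device path 1-3 below is hypothetical.

        # each write toggles the entry between enabled and disabled
        echo EHC1 | sudo tee /proc/acpi/wakeup

        # per-device USB wakeup flags
        grep . /sys/bus/usb/devices/*/power/wakeup
        echo disabled | sudo tee /sys/bus/usb/devices/1-3/power/wakeup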

  • Windows Domain Chaos - Any Solving Approach

    - by Chake
    We are running an old Windows 2003 Server as domain controller (DC2003). To migrate safely to Windows 2008 R2 we added a 2008 R2 machine (DC2008R2) to the domain as a domain controller (adprep etc.). After dcpromo on DC2008R2 everything seemed to be OK: the new DC appeared under the "Domain Controllers" node. It wasn't checked at that time whether DC2008R2 could REALLY act as a domain controller. Later we tried to shut down DC2003 and ran into a total mess with non-functional Exchange and Team Foundation Services. After that I got the job to fix it... First I thought it could be a problem with DC2008R2, so I removed it as a domain controller and installed a new Windows 2008 R2 server, DC2008R2-2. I ran into similar problems. I tried a bunch of stuff, but nothing helped. I won't list it all; maybe I made a mistake, so I'm willing to redo it with your suggestions. To have a starting point I ran the Best Practices Analyzer, which ended up with 24 "Compatible" and 26 "Not Compatible" tests. Of these 26 tests, 19 read the same (I'm translating from German, so this may not be the exact wording):

    Problem: Using the Best Practices Analyzer for Active Directory Domain Services (Active Directory Domain Services Best Practices Analyzer, AD DS BPA), no data can be gathered using the name of the forest and the domain controller DC2008R2-2.

    I appreciate any suggestions, this really bothers me.

  • Clustering filesystem for small files

    - by viraptor
    Hi, I'm looking for a distributed filesystem which I could use for storing lots of small files (<1MB usually). What I want to get is:

      - 2 servers which have the fs mounted themselves and mirror the data
      - locking support (among reachable nodes)
      - some kind of best-effort automatic resynchronisation after one node goes down and comes back again

    What I mean by the resync is that I'm OK with both servers doing read/write operations even if they split-brain. I'm also OK if a local process obtains a lock when the other host is not reachable. From the resync I expect only a file-level consistent view after a while - that is, if file x is modified on both nodes during a split-brain, I don't really care which one is available after they join again, as long as it's a full file, not one block coming from node1 and another block from node2. Is there a solution like that out there? I see that gluster has some problems with file locks (even in 3.1). I also noticed that OCFS2 will panic if both nodes split-brain. What other filesystem would allow me to do what I want?
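
    For scale, a two-node replicated GlusterFS volume (the volume name and brick paths below are made up) is only a few commands to stand up, which makes it easy to test whether its locking and resync behaviour are good enough for this workload:

        # on server1, after installing glusterfs-server on both nodes
        gluster peer probe server2
        gluster volume create smallfiles replica 2 transport tcp \
            server1:/data/brick1 server2:/data/brick1
        gluster volume start smallfiles

        # each node can then mount the volume locally
        mount -t glusterfs localhost:/smallfiles /mnt/smallfiles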

  • Proxychains, Tortunnel, Privoxy: cannot connect() to port

    - by Benjamin
    Hi all, I'm trying to do an nmap scan through Tor using tortunnel, privoxy and proxychains as explained in the following video: http://vimeo.com/6238958

    I'm getting rather weird results. I can successfully perform any SYN scan on any port. However, as soon as I try to do connect() scans, proxychains cannot connect to all ports. In other words, I can perform connect() scans to port 80:

        proxychains nmap -P0 -A -sV www.zzz.com -p80

    but not port 21:

        proxychains nmap -P0 -A -sV www.zzz.net -p21

    I get the following error:

        Starting Nmap 4.62 ( http://nmap.org ) at 2010-06-02 08:34 UTC
        ProxyChains-2.1 (http://proxychains.sf.net)
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21

    My only guess would be that the exit node I'm using does not allow connections to port 21. Would that be correct? How could I fix it? Thanks for your time.
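
    One way to narrow it down without nmap in the picture (the hostname is the placeholder from the question) is to try a plain TCP connect through the same chain; many Tor exit policies do block port 21, so comparing against a port the exit almost certainly allows is informative:

        proxychains nc -vz www.zzz.net 21
        proxychains nc -vz www.zzz.net 80

    If 80 connects and 21 does not, the exit node's policy is the likely culprit, and switching to a different exit (or scanning such ports without Tor) would be the workaround.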

  • lsof not showing what port a proc is listening on

    - by ericslaw
    I have many processes on a box listening on several ports, and I am trying to map ports to pids. The problem is that lsof is not telling me what ports belong to which process. Given an apache listening on port 80, I can see it listening via netstat:

        user@host% netstat -an | grep LISTEN | grep 80
        *.80          *.*       0      0  49152      0  LISTEN

    But when I try to map port 80 to a pid I get nothing:

        user@host% lsof -iTCP:80 -t

    When I try seeing what sockets that specific pid is using I get:

        user@host% lsof -lnP -p31 -a -i
        COMMAND   PID USER  FD  TYPE DEVICE        SIZE/OFF NODE NAME
        libhttpd.  31    0  15u IPv4 0x6002d970b80      0t0  TCP *:65535 (LISTEN)

    Notice the *:65535 in the NAME column. Does anyone know why lsof is not reporting the port in use? I am running as root. I am using a mix of lsof and OS versions:

        lsof v4.77 on Solaris 10 sparc
        lsof v4.72 on Redhat 4.2
        etc.

    I know that Linux solutions can use "netstat -p", so I guess I'm only looking for why Solaris isn't working, but I find lsof is frequently silent and not showing me expected data.
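
    On Solaris, where lsof and netstat fall short, a common workaround is to walk /proc with pfiles and look for the port in each process's socket list; a sketch (run as root, and note that pfiles briefly stops each process it inspects):

        #!/bin/sh
        # print PIDs that have a socket bound to port 80
        for pid in /proc/[0-9]*; do
            pid=${pid#/proc/}
            pfiles "$pid" 2>/dev/null | grep -q " port: 80$" && echo "$pid"
        done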

  • CentOS iscsi initiator has session but there is no block device

    - by jcalfee314
    I have installed the scsi-target-utils package on CentOS and used it to perform a discovery. The discovery did give me an active session. I restarted the iscsi service but I do not see any new devices (fdisk -l). I see in /var/log/messages that my connection is operational now. I'm not sure how to debug this further. Can someone direct me into fixing this?

    The discovery:

        iscsiadm -m discovery -t sendtargets -p 192.168.0.155

    returns:

        192.168.0.155:3260,-1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3

    Just to verify it actually worked:

        iscsiadm -m session

    returns:

        tcp: [1] 192.168.0.155:3260,1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3

    Restarting as the directions say to do:

        service iscsi restart

    Output written to /var/log/messages:

        Stopping iscsi:
        Sep 20 12:14:22 localhost kernel: connection1:0: detected conn error (1020)   [ OK ]
        Starting iscsi:
        Sep 20 12:14:22 localhost kernel: scsi1 : iSCSI Initiator over TCP/IP
        Sep 20 12:14:22 localhost iscsid: Connection1:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is shutdown.
        Sep 20 12:14:22 localhost iscsid: Could not set session2 priority. READ/WRITE throughout and latency could be affected.   [ OK ]
        [root@db iscsi]# Sep 20 12:14:23 localhost iscsid: Connection2:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is operational now

    Ran a login command:

        iscsiadm -m node -T iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3 -p 192.168.0.155 -l

    No errors, no logging occurred. Next I compared the output from "fdisk -l | egrep dev" both with the iscsi session and without. There is no difference. I suppose I could just look in /etc/mtab. Any ideas on how I can get an iscsi device?
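
    Two things that are often informative here (standard open-iscsi commands, nothing specific to this target) are asking iscsiadm to print the attached SCSI devices for the session and forcing a rescan without bouncing the whole service:

        # print session details incl. "Attached scsi disk sdX" if a LUN was mapped
        iscsiadm -m session -P 3

        # rescan the logged-in session for new LUNs
        iscsiadm -m session --rescan

        # watch the kernel log while rescanning
        tail -f /var/log/messages

    If -P 3 shows the session but no attached disk, that points at the target not exporting a LUN to this initiator rather than at the initiator side.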

  • UPS power requirements for server

    - by captainentropy
    Greetings! So, I just placed an order for a new server. The company recommended that I get a 3000W UPS. (!) As best as I could, I calculated the following wattage consumption based on benchmarked data or datasheets provided by the manufacturers of each component:

                             number   watts   total watts
        MoBo                      1     240           240
        CPUs (E5540)              2      80           160
        RAID cards (3ware)        2      18            36
        RAM (6x4GB)               6       3            18
        DVD drive                 1       7             7
        floppy                    1       2             2
        RE4 drives                8       7            56
        WD20 drives               8       6            48
        Intel X25 SSD             2    0.15           0.3
        total                                         567

    So that is for the PSU requirements only. The PSUs in the machine are a 720W for the master node and 800W each for two subsystems. That's a total of 2320W that can be delivered by these PSUs. But that is 4X the amount being consumed, at most, by the components. I didn't count case fans or the eSATA card (3W maybe?) or what the PSUs themselves require, but assuming I double or triple my calculations I'm not even remotely close to the 3000W UPS I was suggested to get. They run at least $1100. I could get a 2000W for about $750 or a 1500W for $450 and still be well over my estimated power need. I don't think I need a whole lot of run time in the case of a power outage, maybe 20 minutes max, enough time to shut down if the power doesn't come on within 5-10 minutes. Any thoughts? Am I off on my calculations? Did I overlook something major? If so, what are your suggestions for a UPS? Thanks!
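
    For a rough cross-check (illustrative only; the ~600 W figure is taken from the estimate above, and the 0.9 power factor is an assumption, not a measured value), converting wattage to the VA rating that UPS sizes are usually quoted in looks like:

        watts=600
        pf=90                              # assumed power factor x 100
        va=$(( watts * 100 / pf ))         # ~666 VA
        echo "estimated load: ${va} VA"

    Whether a given 1500- or 2000-class unit then delivers 20 minutes at that load is something the manufacturer's runtime chart has to answer; it varies a lot between models.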

  • Free, simple, configurable SOCKS5 server

    - by Pooria Azimi
    I've been looking (for the past 6-7 hours) for a fast, free and configurable SOCKS5 server. I haven't found anything that matches my needs. They are either too complicated, too bare-bones or simply buggy as hell. This is (all) I need:

      - I want it to run on Linux (and also OS X, preferably)
      - I want it to listen on localhost:8888
      - When my app (say wget.. or curl --socks5=localhost:8888) requests http://www.google.com/search?q=asd (or any other url - both http and https), I want it to fetch the page not from google's servers, but from http://localhost:4444/cached?uri=http://www.google.com/search%3Fq%3Dasd

    Nothing more! I don't need caching, or anything else. I just want a SOCKS5 server, running locally, which redirects all queries to my own (local) server. It could be written in C, C++, Python, PHP, Perl, Node.js or any other language. I don't care, as long as it supports my (very limited) needs, or I can easily change the source to make it so. Thanks a lot

  • WWNs, WWPNs and Fibre Channel addresses

    - by user238230
    There is a lot of contradictory information on these subjects and I don't know why. My first question is about the 64-bit WWN. One reference claims the terms WWN and WWPN are synonymous. An online source seems to refute this. They say: "A WWPN (world wide port name) is the unique identifier for a fibre channel port, where a WWN (world wide name) is the unique identifier for the node itself. A good example is a dual port HBA. There will be two WWPNs (one for each port) and only a single WWN for the card itself."

    Question #1: Which is correct? I'm almost positive I read that every "Port" has a WWN.

    My next question is about the 24-bit FC address that is dynamically allocated to a port when it is introduced to the switch. The Domain ID field is defined as "a unique number provided to each switch in the fabric."

    Question #2: Do Domain IDs only apply to switch ports? For example, what would the Domain ID be for an HBA? None? The same as the switch port it is connected to?

    Question #3: My last question is about the Name Server of a switch. A book example shows the routing of a message through the switch. It uses the WWNs of the source and destination ports to route the message. I am assuming that the Name Server must associate the WWN and the FC address in some way in order to route the message, correct?
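
    On a Linux host with a Fibre Channel HBA, the port/node distinction is visible directly in sysfs: each port gets its own port_name (WWPN), while node_name (WWNN) identifies the node, so a dual-port card shows two hostN entries with different WWPNs. A small sketch, assuming the fc_host class is populated:

        for h in /sys/class/fc_host/host*; do
            echo "$h"
            echo "  WWNN: $(cat "$h/node_name")"
            echo "  WWPN: $(cat "$h/port_name")"
        done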

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a VPS (512MB) from Linode and I was running nginx + php5-fpm (which comes with php5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company) and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf:

        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 5
        pm.max_requests = 300

    That's what the top command tells me:

        top - 15:36:58 up 3 days, 16:05,  1 user,  load average: 0.00, 0.00, 0.00
        Tasks: 209 total,   1 running, 208 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:    532288k total,   469628k used,    62660k free,    28760k buffers
        Swap:  1048568k total,      408k used,  1048160k free,   210060k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        22806 www-data  20   0  178m  67m  31m S    1 13.1   0:05.02 php5-fpm
         8980 mysql     20   0  241m  55m 7384 S    0 10.6   2:42.42 mysqld
        22807 www-data  20   0  162m  43m  22m S    0  8.3   0:04.84 php5-fpm
        22808 www-data  20   0  160m  41m  23m S    0  8.0   0:04.68 php5-fpm
        25102 www-data  20   0  151m  30m  21m S    0  5.9   0:00.80 php5-fpm
        10849 root      20   0 44100 8352 1808 S    0  1.6   0:03.16 munin-node
        22805 root      20   0  145m 4712 1472 S    0  0.9   0:00.16 php5-fpm
        21859 root      20   0 66168 3248 2540 S    1  0.6   0:00.02 sshd
        21863 root      20   0 66028 3188 2548 S    0  0.6   0:00.06 sshd
         3956 www-data  20   0 31756 3052  928 S    0  0.6   0:06.42 nginx
         3954 www-data  20   0 31712 3036  928 S    0  0.6   0:06.74 nginx
         3951 www-data  20   0 31712 3008  928 S    0  0.6   0:06.42 nginx
         3957 www-data  20   0 31688 2992  928 S    0  0.6   0:06.56 nginx
         3950 www-data  20   0 31676 2980  928 S    0  0.6   0:06.72 nginx
         3955 www-data  20   0 31552 2896  928 S    0  0.5   0:06.56 nginx
         3953 www-data  20   0 31552 2888  928 S    0  0.5   0:06.42 nginx
         3952 www-data  20   0 31544 2880  928 S    0  0.5   0:06.60 nginx

    So, the question is: is there any way to use less memory? Btw, I have 16 cores and it would be nice to make use of them...
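
    To see where the memory actually goes before tuning anything, the per-worker resident size can be summarized like this (note that RSS counts memory shared between workers, so the total overstates real usage):

        ps -C php5-fpm -o rss= | awk '{n++; s+=$1} END {if (n) printf "%d workers, avg %.1f MB, total %.1f MB\n", n, s/n/1024, s/1024}'

    Lowering pm.max_spare_servers and pm.max_requests (so idle workers go away and long-lived ones get recycled before they bloat) are the usual first knobs; the right values are workload-dependent rather than something that can be prescribed here.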

  • XenServer VMs can't reach network

    - by toto
    I'm currently trying to set up a small cloud architecture. I'm using CloudStack 2.2.14, which needs two nodes: a management server (node1) to provision the cloud and a hypervisor, XenServer 5.6 SP2, to host the VMs (node2). I succeeded in creating both node1 and node2 as VMs on an ESXi 5 (VMware) host. So ESXi 5 is hosting two VMs, node1 + node2, and node2 (the XenServer) will in turn host VMs of its own (such as Ubuntu or CentOS). Both node1 and node2 can ping each other and can get an internet connection through ESXi 5. My problem is that VMs inside node2 (XenServer) can't reach the network: they can't ping node1 or ESXi or get an internet connection, but they can ping other VMs IN node2 (XenServer). So I tried to:

    1. Set up a DHCP server as node3 in ESXi 5 and connect node2 (XenServer) to it, but the VMs inside node2 still can't reach the outer network.
    2. Set up a DHCP server inside node2, but the same problem remains.

    So:

    1. Is there any other configuration I'm missing in node2 (considering that I'm sure about the DNS, GW and NETMASK configuration)?
    2. Is the problem that I'm creating VMs inside node2 (XenServer), which is itself a VM inside ESXi 5?
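
    Purely as a place to start looking (not a confirmed fix for this setup): nested configurations like this often fail because the outer vSwitch drops frames from MAC addresses it did not assign, so it is worth checking whether promiscuous mode and forged transmits are allowed on the ESXi port group, and verifying on the XenServer side that the guest VIFs sit on the network that owns the physical interface. The VM name below is hypothetical:

        # on node2 (XenServer): which networks exist and which bridge do they use?
        xe network-list params=uuid,name-label,bridge
        xe pif-list params=device,network-uuid

        # are the guest VIFs attached to that same network?
        xe vif-list vm-name-label=ubuntu-guest params=device,network-uuid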

  • Upgrade SQLServer 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here. It is not intentional; I'm a senior-level developer in a very small company having to act like a manager at the moment. Anyway, the story is that we have 2 older Dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear what that means. We have 2 brand new blade servers and want to move the existing databases to the new hardware. OK, so here is the gotcha. We need to do this with little or no downtime. I'm being told that we can evict the passive node, then pull in one of the new servers. But I'm also being told that this is a dangerous step, because something could go wrong that would cause the cluster to fail, and then we would be left with nothing because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of downtime where we bring up a new cluster on the new hardware and then migrate the databases one by one.

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem with syncing a large amount of data between two data centers. Both machines have a gigabit connection and are not fully occupied, but the fastest I am able to get is something between 6 and 10 Mbit - not acceptable! Yesterday I ran some traceroutes which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20ms instead of 300ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers or clients. Actually office = server is faster than server <= server! Any idea is appreciated ;)

    Update: We actually use rsync over ssh to copy the files. As encryption tends to have more bottlenecks I tried an HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say the problem is related to a cheap network the traffic gets routed through. It is true that it will route through a "cheapnet", but only the other way around. Our direction goes through LEVEL3 and the other way goes through lambdanet (which they said is not a good network). If I got it right (I'm a network intermediate), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know if they're right or if they're just trying to abdicate their responsibility. The thing is that the problem exists in both directions (while taking different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe that there is a DC2DC connection which can only handle 600 KB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
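
    To localize the slow hop independently of rsync/ssh, it can help to measure per-hop loss and latency over many cycles and to test raw TCP throughput with a tool that takes encryption out of the path (the hostname below is a placeholder):

        # per-hop statistics with larger probe packets, ideally run from both ends
        mtr --report --report-cycles 200 -s 1400 remote-dc.example.com

        # raw TCP throughput: on the receiving side
        iperf -s
        # on the sending side, a few parallel streams
        iperf -c remote-dc.example.com -P 4 -t 30

    If iperf saturates the link while rsync does not, the bottleneck is in the transfer stack rather than the network; if iperf is equally slow, the mtr per-hop numbers taken at the same time are the evidence to bring to the hoster.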

  • Device CAL, User Cal or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on the servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard edition, which is fine. We also have 4 servers running Standard processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs that access these DW servers for maintenance (using the same Windows login via remote desktop). So visually:

        40     --  3            --  [2]                           --  2            --  1
        nodes  --  aggregators  --  raw (which we want to run EE) --  calculators  --  datawarehouse

    Nodes PUSH to aggregators, Raws PULL from aggregators, Calculators PULL from Raw, Calculators PUSH to datawarehouse. I am specifying the push vs. pull in case that changes how the number of licenses is calculated.

    Q1) How many device (or user) CALs do we need?
    Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK in our license terms)?
    Q3) What happens if another device tries to access a server with the limited number of device CALs?
    Q4) Are the device CALs a simultaneous number of devices or a total?
    Q5) Do device and user CALs cost the same or is there a difference?
    Q6) Would we need to buy a processor license (we are hoping not to)?

  • Hadoop init script asks for password

    - by Ramesh
    I have installed hadoop on my ubuntu 12.04 single node. I am trying to execute an init script to make hadoop run on startup, but it asks for a password every time I execute it.

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          hadoop services
        # Required-Start:    $network
        # Required-Stop:     $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Description:       Hadoop services
        # Short-Description: Enable Hadoop services including hdfs
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HADOOP_BIN=/home/naveen/softwares/hadoop-1.0.3/bin
        NAME=hadoop
        DESC=hadoop
        USER=naveen
        ROTATE_SUFFIX=

        test -x $HADOOP_BIN || exit 0

        RETVAL=0
        set -e
        cd /

        start_hadoop () {
            set +e
            su $USER -s /bin/sh -c $HADOOP_BIN/start-all.sh > /var/log/hadoop/startup_log
            case "$?" in
                0)
                    echo SUCCESS
                    RETVAL=0
                    ;;
                1)
                    echo TIMEOUT - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
                *)
                    echo FAILED - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
            esac
            set -e
        }

        stop_hadoop () {
            set +e
            if [ $RETVAL = 0 ] ; then
                su $USER -s /bin/sh -c $HADOOP_BIN/stop-all.sh > /var/log/hadoop/shutdown_log
                RETVAL=$?
                if [ $RETVAL != 0 ] ; then
                    echo FAILED - check /var/log/hadoop/shutdown_log
                fi
            else
                echo No nodes running
                RETVAL=0
            fi
            set -e
        }

        restart_hadoop() {
            stop_hadoop
            start_hadoop
        }

        case "$1" in
            start)
                echo -n "Starting $DESC: "
                start_hadoop
                echo "$NAME."
                ;;
            stop)
                echo -n "Stopping $DESC: "
                stop_hadoop
                echo "$NAME."
                ;;
            force-reload|restart)
                echo -n "Restarting $DESC: "
                restart_hadoop
                echo "$NAME."
                ;;
            *)
                echo "Usage: $0 {start|stop|restart|force-reload}" >&2
                RETVAL=1
                ;;
        esac

        exit $RETVAL

    Please tell me how to run hadoop without entering a password.
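
    If the prompts come from start-all.sh ssh-ing into localhost (the usual cause on a single-node setup), giving the hadoop user a passwordless key normally lets the script run unattended; a sketch, assuming the user from the script above:

        su - naveen
        ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys

        # should now log in without prompting
        ssh localhost exit

    If the prompt is for the su password instead, the script has to be started as root (as init does at boot), since su from root to another user does not ask for a password.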

  • Server 2008 R2 domain windows update strategy

    - by Joost Verdaasdonk
    Let me explain my question a bit. We are a small company that has now made the first move to a bigger network. For now the network consists of 5 Windows Server 2008 R2 servers (DC, SQL, web, etc.). Everything we need is now in place, but for now we cannot afford to finish the network by implementing redundant systems (secondary DC, DNS, SQL cluster, etc.). For some people this is hard to understand, but this is the current situation (and we are aware of it and will fix it when we can). Because we want to keep our systems secure and up to date, I've made sure that all systems are updated regularly. The problem is of course that the number of updates Microsoft rolls out that need a system reboot seems to be growing (maybe I'm wrong and it just feels like this). ;-) In our domain, servers depend on each other for services (like SQL, web, or whatever), so just rebooting a server at will is NOT a good idea! For now I update all of them without rebooting at once. After all are up to date I bring them down in the order they depend on each other. After this I reboot all of them in the inverse order. I understand of course that if I DID have redundancy in my system, updating and rebooting would not be such a problem, because the server's tasks could be taken over by another node, but this is something we will need to add when we can. So my question is: given the situation above, can you suggest more update strategies or general ideas that could help me do this process in a better / faster way? Thanks for your thoughts!

  • Failover Cluster Quorum Failing

    - by oruchreis
    Hi, I have two nodes which boot from iSCSI to implement a Windows 2008 cluster, and I'm using the disk majority option as quorum over iSCSI. But when the quorum's iSCSI connection fails (maybe a SAN server reset), the failover cluster fails too. If I reset one of the nodes it can boot, but its system disk goes offline. I can't change its status to online, because it says that it's reserved by the failover cluster (the disk is on iSCSI, because of iSCSI boot). And this disk works as read-only: nothing on it can be deleted or written. So I can't rejoin the node to the cluster again; I have to reinstall Windows. So, what I'm asking is: how can I implement more quorum backup? I mean, can I use both disk majority and file share majority at the same time? AFAIK, every node also keeps a copy of the quorum. But sometimes the SAN servers go offline, and both the quorum's iSCSI connection and the nodes' iSCSI connections get lost. So neither the quorum copy kept on the nodes nor the quorum iSCSI disk is enough to start the cluster again. I want to use both disk majority and file share majority at the same time. Can I do this? Have you any other suggestions? Regards.

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine.

    However, as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances?

    I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling.

    Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn - and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:

      - uses a database (MySQL, SQLite, etc.) for configuration and data storage
      - exposes an API for adding/removing hosts and services

    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
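
    If the sar route is taken, a minimal sketch (the bucket name, paths and the use of the AWS CLI are assumptions, not part of the existing setup) is to sample every minute via cron and ship the binary files keyed by instance ID just before termination:

        # cron entry on the compute instance (the sa1 path varies by distro):
        # * * * * * root /usr/lib64/sa/sa1 1 1

        # in the shutdown/termination hook:
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        aws s3 cp /var/log/sa/ "s3://my-perf-bucket/sa/${INSTANCE_ID}/" --recursive

    The data can then be pulled back and replayed with sar -f for any instance ID recorded in the job database.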

  • DRBD stacked resources: recovering from failure

    - by Marcus Downing
    We're running a stacked four-node DRBD setup like this:

        A --> B
        |     |
        v     v
        C     D

    This means three DRBD resources running across these four servers. Servers A and B are Xen hosts running VMs, while servers C and D are for backups. A is in the same datacentre as C.

      - From server A to server C, in the first datacentre, using protocol B
      - From server B to server D, in the second datacentre, using protocol B
      - From server A to server B, different datacentres, stacked resource using protocol A

    First question: booting a stacked resource

    We haven't got any vital data running on this setup yet - we're still making sure it works first. This means simulating power cuts, network outages etc. and seeing what steps we need to recover. When we pull the power out of server A, both resources go down; it attempts to bring them back up at next boot. However, it only succeeds at bringing up the lower-level resource, A-C. The stacked resource A-B doesn't even try to connect, presumably because it can't find the device until it's a connected primary on the lower level. So if anything goes wrong we need to manually log in and bring that resource up, then start the virtual machine on top of it.

    Second question: setting the primary of a stacked resource

    Our lower-level resources are configured so that the right one is considered primary:

        resource test-AC {
            on A { ... }
            on C { ... }
            startup {
                become-primary-on A;
            }
        }

    But I don't see any way to do the same with a stacked resource, as the following isn't a valid config:

        resource test-AB {
            stacked-on-top-of test-AC { ... }
            stacked-on-top-of test-BD { ... }
            startup {
                become-primary-on test-AC;
            }
        }

    This too means that recovering from a failure requires manual intervention. Is there no way to set the automatic primary for a stacked resource?
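
    For reference, the manual recovery after A comes back is roughly the following sequence (a sketch only, using the resource names from the config above; whether it can be safely automated at boot is exactly what is being asked):

        # lower-level resource first
        drbdadm up test-AC              # if it is not already up
        drbdadm primary test-AC

        # only then does the backing device exist for the stacked resource
        drbdadm --stacked up test-AB
        drbdadm --stacked primary test-AB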

  • iptables & IP-routed network solution for HOST net and VM's subnet

    - by Daniel
    I've got a ProxmoxVE 2.1-managed KVM node on Debian and a bunch of guest VMs. This is what my networking looks like:

        # network interface settings
        auto lo
        iface lo inet loopback

        # device: eth0
        auto eth0
        iface eth0 inet static
            address 175.219.59.209
            gateway 175.219.59.193
            netmask 255.255.255.224
            post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

    And I've got two working subnet solutions. The first:

        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.0.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up ip route add 10.10.0.1/24 dev vmbr0

    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution allows me to communicate between VMs:

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE

    I can even NAT internal addresses:

        -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345

    My inexperienced mind is ready to give the VMs two network adapters each: one for the first solution and another for the second (with slightly different addresses), but I'm pretty sure that's a dumb way to resolve the problem and that everything can be resolved via iptables/ip route rules that I haven't been able to come up with. I've tried a dozen "wizard manuals" and "howtos" to mix both solutions, but without success. Looking for advice (and good reading links for networking beginners).
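
    For what it's worth, a single-bridge sketch that usually covers both needs at once (guests reach the internet via NAT on the host and can also see each other on the bridge) masquerades traffic on its way out of eth0 rather than out of the bridge; this is an illustration of the standard pattern, not a tested Proxmox config:

        # guests all attach to vmbr0, host side is 10.10.0.1/24
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE

        # optional port forward from the public address to a guest
        iptables -t nat -A PREROUTING -d 175.219.59.209 -p tcp --dport 789 \
            -j DNAT --to-destination 10.10.0.220:345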

  • Syntax error at '{'; expected '}' when using nagios in puppet

    - by jiangchengwu
    It's a big problem for me, because I'm not familiar with puppet.

    ERROR on the puppetmaster:

        debug: importing '/etc/puppet/manifests/nodes/group-1.pp'
        err: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    ERROR on the puppet client:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    In group-1.pp:

        node 'group1' {
            include ntp
            class { 'nagios::host':    # this is line 6
                nodename => $clientcert,
                appname  => 'test',
            }
        }

    nagios::host in module module/nagios/host.pp is here:

        class nagios::host($nodename, $hostgroup) {
            file { '/usr/lib/nagios/plugins':
                mode    = "755",
                require = Package["nagios-plugins"],
            }
            ...
            @@nagios_service { "${nodename}_check_ssh":
                ensure                 => present,
                use                    => 'generic-service',
                host_name              => "${nodename}",
                notification_interval  => 60,
                flap_detection_enabled => 0,
                service_description    => "SSH",
                check_command          => "check_ssh",
                target                 => "/etc/nagios3/services.d/${nodename}.cfg",
            }
        }

    And the file module/nagios/init.pp is blank. How could I fix it?

  • Install Exchange 2013 with DSC

    - by Alain Laventure
    I tried to install Exchange 2013 with the WindowsProcess resource into an existing Exchange configuration. All prerequisites are installed (the Exchange organization already exists). This is my resource section:

        WindowsProcess Exchange2013
        {
            Credential = $credential
            Path       = "C:\Sources\Cumulative Update 5 for Exchange Server 2013 (KB2936880)\Setup.exe"
            Arguments  = "/mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /TargetDir:C:\EX2013"
            Ensure     = "Present"
        } # End Filter
        } # End Node
        } # End configuration

        /*
        @TargetNode='TargetDSC02'
        @GeneratedBy=exadmin
        @GenerationDate=08/02/2014 08:16:03
        @GenerationHost=SOURCEDSC02
        */
        instance of MSFT_Credential as $MSFT_Credential1ref
        {
            Password = "Password1";
            UserName = "S05\\Exadmin";
        };

    Exadmin is a member of the Organization Management group and also a member of the Domain Admins group, in order to be able to install Exchange. When I execute this resource, the Exchange installation starts, but after 1 minute the installation stops with this error:

        Failed [Rule:GlobalServerInstall] [Message:You must be a member of the 'Organization Management' role group or a member of the 'Enterprise Admins' group to continue.]

    To be sure that the rights really are the problem, I created a special user with only administrator rights on the Exchange server and no Exchange permissions, and ran the following manually on the new Exchange server:

        .\Setup.exe /mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /Targetdir:C:\EX2013

    I got the same error as with DSC. After I added my test user to the Organization Management group and ran the same command again manually, the Exchange 2013 installation finished without any error. That proves that the problem with DSC is the permissions.

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ apache httpd servers, and are looking to perform analysis on the logs both for historical trending and near "real time" monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, overall request rate, etc., but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time.

    I'm leaning towards building this as a centralized collector/server/storage, and am also considering the possibility of storing non-apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connection between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc. who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it. The main things I'm looking for are:

      - All open source
      - Some way of collecting logs from apache machines that isn't too resource-intensive, and transports them relatively quickly over the network
      - Some way of storing them (NoSQL? key-value store?) on the backend, for a given amount of time (and then rolling them up into historical averages)
      - In the middle of this, a way of graphing in near-real-time (probably also with some statistical analysis on it) and hopefully alerting off of those graphs

    Any suggestions/pointers/ideas, either for "products"/projects or descriptions of how other people do this, would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop: lots of old stuff, homogeneous infrastructure, and strained boxes.

  • Configure Nginx to render static files and rewrite file extension or proxy_pass

    - by Pardoner
    I've set up Nginx to handle all my static files and proxy_pass everything else to a Node.js server. It's working fine, but I'm having difficulty rewriting the URL so that it removes the .html file extension.

        upstream my_upstream {
            server 127.0.0.1:8000;
            keepalive 64;
        }

        server {
            listen 80;
            server_name staging.mysite.com;

            root /var/www/staging.mysite.org/public;

            access_log /var/logs/staging.mysite.org.access.log;
            error_log  /var/logs/staging.mysite.org.error.log;

            location ~ ^/(images/|javascript/|css/|robots.txt|humans.txt|favicon.ico) {
                rewrite (.*)\.html $1 permanent;
                try_files $uri.html $uri/ /index.html;
                access_log off;
                expires max;
            }

            location / {
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header Connection "";
                proxy_http_version 1.1;
                proxy_cache one;
                proxy_cache_key sfs$request_uri$scheme;
                proxy_pass http://my_upstream;
            }
        }
