Search Results



  • Why might one host be unable to access the Internet, when it can ping the router and when all other hosts can?

    - by user1444233
    I have a Draytek Vigor 2830n serving a 192.168.3.0 LAN. It performs load-balancing across dual WAN ports, although I've turned off the second WAN to simplify testing. There are many hosts on the LAN. All IPs are allocated through DHCP; most are freely allocated from the pool, but one or two are bound to NIC MAC addresses. All hosts can access the Internet, save one.

    That host (192.168.3.100, or 'dot100' for short) gets allocated an IP address (and the right gateway address, DNS server addresses, subnet, etc.). dot100 can ping itself. It can ping the gateway, and access the latter's web interface via port 80. It's responsive and loss-free (a sustained ping over a couple of minutes reports no data loss). Yet, for some reason that evades me, dot100 can't ping an external IP address or domain name. I suspect it never could, because it was getting some Internet access from a second adaptor (on a different subnet), but that has now been turned off, which exposed the problem.

    In dot100, I've tried:

    - two operating systems (Windows 8 and Knoppix), to rule out anti-virus programs etc.
    - two physical adaptors
    - two cables, on each adaptor
    - two IPs (e.g. .100 and .103 assigned by MAC, and .26 from the pool)
    - both dynamic and assigned (MAC-bound) DHCP-allocated IPs

    but none of these experiments yielded any variation in the result. dot100 is a crucial host. It's a file server for the network, so I need it to be reliably allocated a consistent IP. Can anyone offer a potential solution or a way forward with the analysis, please?

    My guess: my analysis so far leads me to believe it's a router issue. I've checked the web interface very carefully. There are no filters set up in Firewall - General Setup or Filter Setup. I suspect a corrupted internal routing table, but the web UI shows this as the routing table:

        Key: C - connected, S - static, R - RIP, * - default, ~ - private
        *   0.0.0.0/       0.0.0.0          via 62.XX.XX.X     WAN1
        *   62.XX.XX.X/    255.255.255.255  via 62.XX.XX.X     WAN1
        S   82.YY.YYY.YYY/ 255.255.255.255  via 82.YY.YYY.YYY  WAN1
        C   192.168.1.0/   255.255.255.0    directly connected WAN2
        C~  192.168.3.0/   255.255.255.0    directly connected LAN2
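    For anyone hitting the same "can ping the gateway but not the Internet" symptom, a minimal diagnostic sketch, run from the affected host's Knoppix side (assuming the usual Linux tools are installed; 8.8.8.8 is just an arbitrary external IP used for illustration, and eth0 is a placeholder interface name):

        # Confirm the default route actually points at the gateway
        ip route show            # expect a "default via <gateway>" line

        # Separate routing/NAT failures from DNS failures
        ping -c 3 8.8.8.8        # external IP: tests routing + NAT only
        ping -c 3 example.com    # hostname: additionally tests DNS

        # Watch whether any replies ever reach the NIC
        tcpdump -n -i eth0 icmp

    If the external-IP ping fails while the gateway answers, the usual suspects are the router's NAT/session handling or a per-host binding rule on the router, rather than anything on the host itself.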


  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc. -- so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets would change maybe 50 times a month (across all the servers, so maybe one change per five servers per month).

    We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that).

    The "musts" we've come up with for any replacement system are:

    - Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) -- this is related to the next point:
    - Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert (see the sketch below)
    - Must not take down the firewall for an extended period while the ruleset reloads (again, this is a consequence of the above point)
    - Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible)
    - Must be DFSG-free
    - Must use plain-text configuration files (we run everything through revision control, and standard Unix text-manipulation tools are our SOP)
    - Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards)
    - Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language"

    Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves":

    - Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically)
    - Should support raw tables
    - Should allow you to specify particular ICMP types in both incoming packets and REJECT rules
    - Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt)

    The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins) the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
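    To illustrate the second and third "musts": iptables-restore replaces the whole ruleset in a single atomic kernel operation, so the firewall is never half-loaded during a reload. A minimal sketch of the load step any candidate tool would need to perform (file paths and the generator command are illustrative, not from any particular tool):

        #!/bin/sh
        # Generate the candidate ruleset to a file first...
        generate-ruleset > /etc/firewall/ruleset.new    # hypothetical generator

        # ...then validate and swap it in atomically.
        iptables-restore --test < /etc/firewall/ruleset.new \
            && iptables-restore < /etc/firewall/ruleset.new \
            && mv /etc/firewall/ruleset.new /etc/firewall/ruleset.current

    (The --test flag exists in newer iptables releases; on older ones the fallback is relying on iptables-restore's parse-before-commit behaviour.)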


  • Netcat file transfer problem

    - by thepurplepixel
    I have two custom scripts I just wrote to facilitate transferring files between my VPS and my home server. They are both written in bash (short & sweet).

    To send:

        #!/bin/bash
        SENDFILE=$1
        PORT=$2
        HOST='<my house>'
        HOSTIP=`host $HOST | grep "has address" | cut --delimiter=" " -f 4`
        echo Transferring file \"$SENDFILE\" to $HOST \($HOSTIP\).
        tar -c "$SENDFILE" | pv -c -N tar -i 0.5 | lzma -z -c -6 | pv -c -N lzma -i 0.5 | nc -q 1 $HOSTIP $PORT
        echo Done.

    To receive:

        #!/bin/bash
        SERVER='<myserver>'
        SERVERIP=`host $SERVER | grep "has address" | cut --delimiter=" " -f 4`
        PORT=$1
        echo Receiving file from $SERVER \($SERVERIP\) on port $PORT.
        nc -l $PORT | pv -c -N netcat -i 0.5 | lzma -d -c | pv -c -N lzma -i 0.5 | tar -xf -
        echo Done.

    The problem is that, for a very quick second, I see something flash along the lines of "Connection refused" (before pv overwrites it), and no file is ever transferred. The port is forwarded through my router, and nmap confirms it:

        ~$ sudo nmap -sU -PN -p55515 -v <my house>

        Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-21 18:10 EDT
        NSE: Loaded 0 scripts for scanning.
        Initiating Parallel DNS resolution of 1 host. at 18:10
        Completed Parallel DNS resolution of 1 host. at 18:10, 0.00s elapsed
        Initiating UDP Scan at 18:10
        Scanning 74.13.25.94 [1 port]
        Completed UDP Scan at 18:10, 2.02s elapsed (1 total ports)
        Host 74.13.25.94 is up.
        Interesting ports on 74.13.25.94:
        PORT      STATE         SERVICE
        55515/udp open|filtered unknown

        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds
        Raw packets sent: 2 (56B) | Rcvd: 5 (260B)

    Also, running netcat normally doesn't work either:

        squircle@summit:~$ netcat <my house> 55515
        <my house> [<my IP>] 55515 (?) : Connection refused

    Both boxes are Ubuntu Karmic (9.10). The receiver has no firewall, and outbound traffic on that port is allowed on the sender. I have no idea what to troubleshoot next. Any ideas?

    P.S.: Feel free to move this to SO/SF if you feel it would fit better there.
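    Two details worth isolating here, sketched under the assumption of a stock Karmic netcat: the nmap run above scanned UDP (-sU) while nc makes a TCP connection, and different netcat variants disagree about how the listen port is given (netcat-traditional wants -p, netcat-openbsd takes the bare port). A minimal end-to-end test without the tar/lzma pipeline:

        # On the receiver -- try both listen forms:
        nc -l -p 55515      # traditional syntax
        nc -l 55515         # OpenBSD syntax

        # Verify something is actually listening on TCP 55515:
        sudo netstat -tlnp | grep 55515

        # On the sender:
        echo hello | nc <my house> 55515

    If the plain-text test fails the same way, the scripts are innocent and it's a listener or port-forwarding issue.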


  • Looking for a NTP Server Software for Windows

    - by Simon
    I'm looking for a, preferably free, NTP server for Windows Server 2003/2008. We have already tried the built-in Windows Time Server, but our tests showed that it is not very accurate; we see time differences of up to 500ms. The maximum time difference we can allow for our application is ~100ms.

    We have already used the Meinberg NTPd for Windows. It works great, except we have one big issue with it: if there is a network connection problem between the client and server, the NTP server goes into a panic state and won't give the client a new time until we restart the NTP service. This is a big issue which has caused us some trouble. It was working fine for months until there was a network problem we didn't notice; we only noticed it after a week, when the time difference was already 30 sec. on the clients. So please suggest some alternative NTP servers for Windows. I did Google, but I get a lot of unrelated search results.

    Edit: So far the ntpd Windows version was very accurate and I'd like to stick with it. The only problem is the "panic state" after a network disconnect. Maybe someone here knows what the cause of this is and how to fix it. Also, I forgot to mention that we have a server/client setup like this:

        Server1 -- Server2 -- Server3 -- Client1 -- Client2 -- Client3

    So Server2 gets its time from Server1, Server3 gets its time from Server2, and the clients get their time from Server3. Also, there are clients connected directly to Server2. It is important that all servers and clients have the exact same time (within ~100ms).

    Now there was a network problem with Server3 and its clients. The servers run the ntpd port for Windows, which acts as NTP server and client. The clients have Dimension4 as NTP client. After the network problem, the error message in D4 was something like this (off the top of my head, I don't have the exact error message):

        Server response: The server is in a panic state (could not sync clock)

    I read through the ntpd docs, and the only mention of "panic" is when the time difference is 10000 seconds, which will cause the ntpd server to exit, but this was not the case. Also there is a "-g" command line switch to disable the panic exit, but it is already set by default. Any ideas what could cause the panic state and how to get rid of it next time?
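    One point of reference on the "-g is already set" observation: -g only waives the panic threshold for the *first* clock adjustment after startup; a large offset later at runtime still trips it. The threshold itself is set with the "tinker panic" directive. A hedged sketch of the relevant ntp.conf lines on a Meinberg/Windows install (server name is a placeholder):

        # Never refuse to sync on a large offset, even after startup
        tinker panic 0

        # Upstream source, plus orphan mode (ntpd 4.2.2+) so this server
        # keeps answering clients during an upstream outage
        server server2.example.local iburst
        tos orphan 10

    Whether "tinker panic 0" is safe depends on the site: it lets ntpd follow arbitrarily large steps, which is usually what you want on an isolated segment and exactly what you don't want on machines with flaky clocks.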


  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple Subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (I dropped the use of the database authentication for a simple htpasswd file, though). Since then, I have also started using WebSVN.

    My issue is that not all users on the system should be able to access the different repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use Subversion's path access system, so I looked to create this, using the AuthzSVNAccessFile directive. When I do this though, I keep getting "403 Forbidden" messages.

    My files look like the following. I have default policy settings in a file:

        <Location /svn/>
          DAV svn
          SVNParentPath /var/lib/svn/repository
          Order deny,allow
          Deny from all
        </Location>

    Each repository gets a policy file like below:

        <Location /svn/sysadmin/>
          Include /var/lib/svn/conf/default_auth.conf
          AuthName "Repository for sysadmin"
          require user joebloggs jimsmith mickmurphy
        </Location>

    The default_auth.conf file contains this:

        SVNParentPath /var/lib/svn/repository
        AuthType basic
        AuthUserFile /var/lib/svn/conf/.dav_svn.passwd
        AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf

    I am not fully sure why I need the second SVNParentPath in default_auth.conf, but I just added that today as I was getting error messages as a result of adding the AuthzSVNAccessFile directive.

    With a totally permissive access file

        [/]
        joebloggs = rw

    the system worked fine (and was essentially unchanged), but as soon as I start trying to add any kind of restrictions such as

        [sysadmin:/]
        joebloggs = rw

    instead, I get the 'Permission denied' errors again. The log file entries are:

        [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/
        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin

    What do I need to do to get this to work? Have I configured Apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well.

    UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added

        [/]
        joebloggs = rw

    at the start and 'joebloggs' then has all the correct access. When I try to go repository-specific though, doing something like

        [/]
        joebloggs = rw

        [sysadmin:/]
        mickmurphy = rw

    then I get a permission denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously:

        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin

    Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository.

    UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to be pointing to the issue being with path-access control in subversion not working properly. My assumption is that I have not conf
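    For comparison, here is a complete access file of the shape the AuthzSVNAccessFile parser expects. Note that every repository name appearing in requests needs a matching section or a global [/] fallback -- the first log line above is a denial against a repository called 'websvn', not 'sysadmin'. Usernames are the question's own; this is a syntax sketch, not a known fix for the 403:

        [groups]
        sysadmins = joebloggs, jimsmith, mickmurphy

        # Deny everything by default...
        [/]
        * =

        # ...then open up per repository.
        [sysadmin:/]
        @sysadmins = rw

        [websvn:/]
        joebloggs = r

    Path-based authz matches rules against <repository-name>:<path>, so a bare [sysadmin:/] section silently grants nothing for any other repository.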


  • Too many sleeping processes?

    - by user55859
    I'm running Debian Lenny (x86_64) on a cloud VPS (Xen), and the top command tells me there are 210 processes, of which 209 are sleeping:

        top - 14:49:29 up 15:18, 1 user, load average: 0.09, 0.11, 0.05
        Tasks: 210 total, 1 running, 209 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 532288k total, 437316k used, 94972k free, 30584k buffers
        Swap: 1048568k total, 408k used, 1048160k free, 219772k cached

    And here is what the ps aux command gives me:

        USER     PID  %CPU %MEM VSZ    RSS   TTY   STAT START TIME COMMAND
        root     1    0.0  0.1  10380  812   ?     Ss   Sep30 0:00 init [2]
        root     2    0.0  0.0  0      0     ?     S<   Sep30 0:00 [kthreadd]
        root     3    0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/0]
        root     4    0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/0]
        root     5    0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/0]
        root     6    0.0  0.0  0      0     ?     S<   Sep30 0:00 [khelper]
        root     7    0.0  0.0  0      0     ?     S<   Sep30 0:05 [xenwatch]
        root     8    0.0  0.0  0      0     ?     S<   Sep30 0:13 [xenbus]
        root     10   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/1]
        root     11   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/1]
        root     12   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/1]
        root     13   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/2]
        root     14   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/2]
        root     15   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/2]
        root     16   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/3]
        root     17   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/3]
        root     18   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/3]
        root     19   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/4]
        root     20   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/4]
        root     21   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/4]
        root     22   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/5]
        root     23   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/5]
        root     24   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/5]
        root     25   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/6]
        root     26   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/6]
        root     27   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/6]
        root     28   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/7]
        root     29   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/7]
        root     30   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/7]
        root     31   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/8]
        root     32   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/8]
        root     33   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/8]
        root     34   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/9]
        root     35   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/9]
        root     36   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/9]
        root     37   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/10]
        root     38   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/10]
        root     39   0.0  0.0  0      0     ?     S<   Sep30 0:04 [events/10]
        root     40   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/11]
        root     41   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/11]
        root     42   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/11]
        root     43   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/12]
        root     44   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/12]
        root     45   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/12]
        root     46   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/13]
        root     47   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/13]
        root     48   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/13]
        root     49   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/14]
        root     50   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/14]
        root     51   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/14]
        root     52   0.0  0.0  0      0     ?     S<   Sep30 0:00 [migration/15]
        root     53   0.0  0.0  0      0     ?     S<   Sep30 0:00 [ksoftirqd/15]
        root     54   0.0  0.0  0      0     ?     S<   Sep30 0:00 [events/15]
        root     55   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/0]
        root     56   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/1]
        root     57   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/2]
        root     58   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/3]
        root     59   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/4]
        root     60   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/5]
        root     61   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/6]
        root     62   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/7]
        root     63   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/8]
        root     64   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/9]
        root     65   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/10]
        root     66   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/11]
        root     67   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/12]
        root     68   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/13]
        root     69   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/14]
        root     70   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kintegrityd/15]
        root     71   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/0]
        root     72   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/1]
        root     73   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/2]
        root     74   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/3]
        root     75   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/4]
        root     76   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/5]
        root     77   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/6]
        root     78   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/7]
        root     79   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/8]
        root     80   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/9]
        root     81   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/10]
        root     82   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/11]
        root     83   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/12]
        root     84   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/13]
        root     85   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/14]
        root     86   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kblockd/15]
        root     87   0.0  0.0  0      0     ?     S<   Sep30 0:00 [cqueue]
        root     88   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kseriod]
        root     89   0.0  0.0  0      0     ?     S    Sep30 0:00 [pdflush]
        root     90   0.0  0.0  0      0     ?     S    Sep30 0:00 [pdflush]
        root     91   0.0  0.0  0      0     ?     S<   Sep30 0:00 [kswapd0]
        root     92   0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/0]
        root     93   0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/1]
        root     94   0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/2]
        root     95   0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/3]
        root     96   0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/4]
        root     97   0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/5]
        root     98   0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/6]
        root     99   0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/7]
        root     100  0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/8]
        root     101  0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/9]
        root     102  0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/10]
        root     103  0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/11]
        root     104  0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/12]
        root     105  0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/13]
        root     106  0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/14]
        root     107  0.0  0.0  0      0     ?     S<   Sep30 0:00 [aio/15]
        root     108  0.0  0.0  0      0     ?     S<   Sep30 0:00 [kpsmoused]
        root     167  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/0]
        root     168  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/1]
        root     169  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/2]
        root     170  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/3]
        root     171  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/4]
        root     172  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/5]
        root     173  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/6]
        root     174  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/7]
        root     175  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/8]
        root     176  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/9]
        root     177  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/10]
        root     178  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/11]
        root     179  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/12]
        root     180  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/13]
        root     181  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/14]
        root     182  0.0  0.0  0      0     ?     S<   Sep30 0:00 [net_accel/15]
        root     315  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfs_mru_cache]
        root     316  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/0]
        root     317  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/1]
        root     318  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/2]
        root     319  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/3]
        root     320  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/4]
        root     321  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/5]
        root     322  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/6]
        root     323  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/7]
        root     324  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/8]
        root     325  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/9]
        root     326  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/10]
        root     327  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/11]
        root     328  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/12]
        root     329  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/13]
        root     330  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/14]
        root     331  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfslogd/15]
        root     332  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/0]
        root     333  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/1]
        root     334  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/2]
        root     335  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/3]
        root     336  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/4]
        root     337  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/5]
        root     338  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/6]
        root     339  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/7]
        root     340  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/8]
        root     341  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/9]
        root     342  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/10]
        root     343  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/11]
        root     344  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/12]
        root     345  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/13]
        root     346  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/14]
        root     347  0.0  0.0  0      0     ?     S<   Sep30 0:00 [xfsdatad/15]
        root     399  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsIO]
        root     400  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     401  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     402  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     403  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     404  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     405  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     406  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     407  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     408  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     409  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     410  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     411  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     412  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     413  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     414  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     415  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsCommit]
        root     416  0.0  0.0  0      0     ?     S<   Sep30 0:00 [jfsSync]
        root     673  0.0  0.0  0      0     ?     S<   Sep30 0:00 [kjournald]
        root     727  0.0  0.1  16840  960   ?     S<s  Sep30 0:00 udevd --daemon
        root     1273 0.0  0.3  122036 2016  ?     Sl   Sep30 0:00 /usr/sbin/rsyslogd -c3
        root     1306 0.0  0.2  48960  1224  ?     Ss   Sep30 0:00 /usr/sbin/sshd
        root     1809 0.0  0.2  21276  1076  ?     Ss   Sep30 0:00 /usr/sbin/cron
        root     1873 0.0  1.5  41460  8360  ?     Ss   Sep30 0:02 /usr/sbin/munin-node
        root     1896 0.0  0.1  3864   608   tty1  Ss+  Sep30 0:00 /sbin/getty 38400 tty1
        root     1897 0.0  0.1  3864   604   tty2  Ss+  Sep30 0:00 /sbin/getty 38400 tty2
        root     1898 0.0  0.1  3864   604   tty3  Ss+  Sep30 0:00 /sbin/getty 38400 tty3
        root     1899 0.0  0.1  3864   608   tty4  Ss+  Sep30 0:00 /sbin/getty 38400 tty4
        root     1900 0.0  0.1  3864   608   tty5  Ss+  Sep30 0:00 /sbin/getty 38400 tty5
        root     1901 0.0  0.1  3864   604   tty6  Ss+  Sep30 0:00 /sbin/getty 38400 tty6
        101      4526 0.0  0.1  42820  1052  ?     Ss   12:27 0:00 /usr/sbin/exim4 -bd -q30m
        root     8865 0.0  0.2  11668  1432  pts/0 S    13:18 0:00 /bin/sh /usr/bin/mysqld_safe
        mysql    8980 0.0  9.0  175284 48368 pts/0 Sl   13:18 0:05 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/my
        root     8981 0.0  0.1  6480   684   pts/0 S    13:18 0:00 logger -t mysqld -p daemon.error
        root     13730 0.0 0.8  149144 4712  ?     Ss   14:05 0:00 /usr/bin/php5-fpm --fpm-config /etc/php5/fpm/php5-fpm.conf
        www-data 13731 0.2 11.4 172756 61136 ?     S    14:05 0:05 /usr/bin/php5-fpm --fpm-config /etc/php5/fpm/php5-fpm.conf
        www-data 13732 0.2 8.9  158516 47712 ?     S    14:05 0:05 /usr/bin/php5-fpm --fpm-config /etc/php5/fpm/php5-fpm.conf
        www-data 13733 0.1 8.1  156576 43468 ?     S    14:05 0:04 /usr/bin/php5-fpm --fpm-config /etc/php5/fpm/php5-fpm.conf
        root     14601 0.0 0.2  30600  1240  ?     Ss   14:15 0:00 nginx: master process /usr/sbin/nginx
        www-data 14602 0.0 0.3  30976  1836  ?     S    14:15 0:00 nginx: worker process
        www-data 14603 0.0 0.3  30976  1836  ?     S    14:15 0:00 nginx: worker process
        www-data 14604 0.0 0.5  31552  2852  ?     S    14:15 0:00 nginx: worker process
        www-data 14605 0.0 0.4  31240  2580  ?     S    14:15 0:00 nginx: worker process
        www-data 14606 0.0 0.3  30976  1836  ?     S    14:15 0:00 nginx: worker process
        www-data 14607 0.0 0.3  30976  1836  ?     S    14:15 0:00 nginx: worker process
        www-data 14608 0.0 0.4  31244  2536  ?     S    14:15 0:00 nginx: worker process
        www-data 14609 0.0 0.5  31544  2788  ?     S    14:15 0:00 nginx: worker process
        root     17169 0.0 0.2  17456  1160  pts/0 R+   14:45 0:00 ps aux
        root     26391 0.0 0.6  66168  3284  ?     Ss   10:32 0:00 sshd: root@notty
        root     26394 0.0 0.3  42376  2120  ?     Ss   10:32 0:00 /usr/lib/openssh/sftp-server
        root     31500 0.0 0.6  66140  3248  ?     Ss   11:33 0:00 sshd: root@pts/0
        root     31503 0.0 0.3  20248  1924  pts/0 Ss   11:33 0:00 -bash
        root     31509 0.0 0.6  66168  3264  ?     Ss   11:34 0:00 sshd: root@notty
        root     31512 0.0 0.3  42180  1984  ?     Ss   11:34 0:00 /usr/lib/openssh/sftp-server

    Is this a normal situation? Do I need all of those processes? Thanks for any suggestions!
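    A note for reading the listing above: almost everything in it is kernel worker threads (the bracketed names), which 2.6 kernels spawn once per visible CPU -- with 16 CPUs you get 16 ksoftirqd, 16 events, 16 kblockd, and so on, inflating the count without using any memory. A quick sketch to separate them from real userland processes (kernel threads report a VSZ of 0):

        # Count kernel threads vs. userland processes
        ps aux | awk 'NR > 1 {if ($5 == 0) k++; else u++}
                      END {print k " kernel threads, " u " userland processes"}'

        # Group the kernel threads by family, collapsing the per-CPU suffix
        ps aux | awk 'NR > 1 && $5 == 0 {print $11}' | sed 's#/[0-9]*\]#]#' | sort | uniq -c | sort -rn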


  • Linux per-process resource limits - a deep Red Hat Mystery

    - by BobBanana
    I have my own multithreaded C program which scales in speed smoothly with the number of CPU cores. I can run it with 1, 2, 3, etc. threads and get linear speedup, up to about 5.5x speed on a 6-core CPU on an Ubuntu Linux box.

    I had an opportunity to run the program on a very high end Sunfire x4450 with 4 quad-core Xeon processors, running Red Hat Enterprise Linux. I was eagerly anticipating seeing how fast the 16 cores could run my program with 16 threads. But it runs at the same speed as just TWO threads!

    Much hair-pulling and debugging later, I see that my program really is creating all the threads, they really are running simultaneously, but the threads themselves are slower than they should be. 2 threads runs about 1.7x faster than 1, but 3, 4, 8, 10, 16 threads all run at just net 1.9x! I can see all the threads are running (not stalled or sleeping), they're just slow.

    To check that the HARDWARE wasn't at fault, I ran SIXTEEN copies of my program independently, simultaneously. They all ran at full speed. There really are 16 cores and they really do run at full speed and there really is enough RAM (in fact this machine has 64GB, and I only use 1GB per process).

    So, my question is whether there's some OPERATING SYSTEM explanation, perhaps some per-process resource limit which automatically scales back thread scheduling to keep one process from hogging the machine.

    Clues are:

    - My program does not access the disk or network. It's CPU limited.
    - Its speed scales linearly on a single CPU box in Ubuntu Linux with a hexacore i7 for 1-6 threads. 6 threads is effectively 6x speedup.
    - My program never runs faster than 2x speedup on this 16 core Sunfire Xeon box, for any number of threads from 2-16.
    - Running 16 copies of my program single threaded runs perfectly, all 16 running at once at full speed.
    - top shows 1600% of CPUs allocated. /proc/cpuinfo shows all 16 cores running at full 2.9GHz speed (not the low frequency idle speed of 1.6GHz).
    - There's 48GB of RAM free; it is not swapping.

    What's happening? Is there some process CPU limit policy? How could I measure it if so? What else could explain this behavior?

    Thanks for your ideas to solve this, the Great Xeon Slowdown Mystery of 2010!
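    For reference, a sketch of the usual checks for "one process capped at ~2 cores" symptoms -- none of these is asserted to be the answer here, they just enumerate the OS-side suspects (CPU affinity, resource limits, NUMA placement). 'myprogram' is a placeholder name:

        PID=$(pgrep -o myprogram)

        # Is the process pinned to a subset of CPUs?
        taskset -p $PID                   # a mask covering all 16 CPUs is expected

        # Any per-process limits in force? (/proc/PID/limits needs 2.6.24+;
        # fall back to the shell's view otherwise)
        cat /proc/$PID/limits 2>/dev/null || ulimit -a

        # Are the threads actually spread across cores?
        ps -Lo pid,tid,psr,pcpu -p $PID

        # NUMA layout -- cross-node memory traffic can flatten scaling
        numactl --hardware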


  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which allowed me to use my 5 1TB drives for two RAID volumes: first, a 1TB RAID 0 striped across all 5 drives, and second, a RAID 5 across the rest of the free space on all drives (around 2.85TB usable space). The RAID 0 I use for OS, applications and games, while the RAID 5 I use as more-permanent storage (photos, etc).

    Now, I do realize that running the OS and applications on RAID 0 across 5 drives is very dangerous, which is what brings up the following question: is there a reliable freeware realtime backup application that can back up a set of folders from one drive to another drive (no online backups needed)?

    I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements:

    - Low CPU and RAM usage.
    - Realtime backups: as soon as a file is modified in the source folder, it is added to the backup queue, which will be processed with the lowest priority when the CPU is idle. This backup queue should persist across computer restarts (i.e. the source and destination folders should always have the same set of files, except for the ones waiting in the backup queue).
    - Incremental backups: if only 10 bytes changed in a 1GB file, the app should only copy those 10 new bytes.
    - Ability to back up locked and open files (some apps, like Yadis, can't back up critical files like browser favorites).
    - Ability to run as a service (no need for any user to log in to have the app started).

    Optional requirements:

    - Compression of the destination into a well-known format (RAR, Zip) that can be directly read without the use of the application.
    - Preset source folders (such as Browser Favorites, Game Saves, Application Settings, etc).

    The idea is to use the RAID 0 array as "semi-persistent RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps and games and copying over the settings, saves and favorites from the RAID 5. I'm also thinking of taking this RAID-0-as-RAM idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 will work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.
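    As a baseline while evaluating candidates: Windows' own robocopy can approximate the "watch a folder and mirror it" part, though not the byte-level incremental or locked-file parts. A hedged sketch (paths are illustrative):

        :: Mirror D:\Data to E:\Backup, re-running after every detected change
        :: (/MON:1 = rerun when 1+ changes are seen, /MOT:5 = at most every 5 min)
        robocopy D:\Data E:\Backup /MIR /MON:1 /MOT:5 /R:2 /W:5 /NP /LOG+:E:\backup.log

    Run under a service wrapper it ticks the "runs as a service" box, but it copies whole changed files and skips locked ones, so it falls short of the full requirement list above.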


  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. It's running 32-bit Windows 2003 Storage Server SP2 with 3GB RAM. The server holds around 60 users' HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7.

    Occasionally the server will completely hang when transferring large files. When the hang happens, the console becomes unresponsive, with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hang can last around 20 minutes. During this time other servers also become unresponsive, with blank wallpaper at the console. If you do manage to get onto another server, the taskbar and run commands are unresponsive. This also extends to the client computers, sometimes with explorer crashing. I'm guessing this is due to the HOME folder mapping. Eventually the NAS server will free up and everything will be back to normal.

    The server is configured as follows:

        PERC 4/DC
        DATA 2 - 12 SCSI HDD - RAID5
        SHADOWCOPY 2 SCSI HDD - RAID1
        CERC SATA DATA 11
        4 SATA HDD - RAID5
        OS 4 SATA HDD - RAID5

    All the drivers and firmware are up to date. I've been through all the diagnostics with Dell and the hardware has come up clean, including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging happens even when it is uninstalled. There are no errors in the event log when this happens, and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain, and AD from what I can tell is running happily. There are no DFS or NFS shares set up either; all the shares are standard Windows shares.

    So far I have:

    - Unchecked the "allow the computer to turn off this device to save power" box under Power Management on the NIC.
    - Set Link Speed and Duplex to Auto-negotiate 1000.
    - Increased the Receive Descriptors buffer from 256 to 352 (reserves more CPU resource for handling data).

    I've run network traces using Network Monitor and have found the following:

        417 8.078125 {SMB:192, NbtSS:25, TCP:24, IPv4:23} 192.168.2.244 192.168.5.35 SMB SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND

    I've tried different cabling, NICs and switch ports, all with the same result. Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. On the Vista clients I have also run

        netsh interface tcp set global autotuning=disabled

    with no result.

    Could it be that the server has a faulty drive, or that the I/O is too much for it to handle? Any ideas why the hang would cause issues with the other servers on the LAN? Many thanks.
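    Since CHKDSK is the one check left on the list, a minimal sketch of it (read-only first, so it can run without scheduling downtime; the drive letter is a placeholder):

        rem Read-only scan -- reports problems without fixing them
        chkdsk D:

        rem Full repair pass; on a data volume this dismounts it,
        rem so schedule it out of hours
        chkdsk D: /F /R

    /R implies /F and additionally scans for bad sectors, which on a suspect RAID5 member is the interesting part.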


  • LACP : Cisco ASA 5515 & Switch ProCurve 2920

    - by user979276
    I have two ASA 5515s connected in Active/Standby failover (on Gi0/5). My two ASAs are connected to two ProCurve 2920 switches, to have HA if something happens. My cabling looks like this (the diagram originally posted here is not included).

    On the ASA, I created a port-channel like this:

        interface GigabitEthernet0/0
         nameif outside
         security-level 0
         ip address 192.168.1.3 255.255.255.0 standby 192.168.1.4
        !
        interface GigabitEthernet0/1
         speed 1000
         duplex full
         channel-group 1 mode passive
         no nameif
         no security-level
         no ip address
        !
        interface GigabitEthernet0/2
         speed 1000
         duplex full
         channel-group 1 mode passive
         no nameif
         no security-level
         no ip address
        !
        interface Port-channel1.1
         vlan 1
         nameif inside
         security-level 100
         ip address 192.168.8.1 255.255.255.0 standby 192.168.8.2
        !
        interface Port-channel1.10
         vlan 10
         nameif guest
         security-level 50
         ip address 172.16.100.2 255.255.255.224 standby 172.16.100.3
        !
        interface Port-channel1.16
         vlan 16
         nameif dmz
         security-level 50
         ip address 192.168.16.1 255.255.255.0 standby 192.168.16.2

    On the switch, I created an LACP-capable trunk with ports 1 and 2 on each switch, forced the speed to 1000, and put the ports in full duplex mode. BUT this is not working. I've tried many things and I can't make it work. In this configuration, I can't ping anything between my ASA and my switch (or any object connected to it).

    Here is what I get on my ASA:

        Channel group 1
                                      LACP port  Admin  Oper  Port    Port
        Port     Flags  State         Priority   Key    Key   Number  State
        -----------------------------------------------------------------------------
        Gi0/2    SP     not-bndl      32768      0x1    0x1   0x3     0xc
        Gi0/1    FP     not-bndl      32768      0x1    0x1   0x2     0x6

    And on the switches:

        PORT   LACP      TRUNK   PORT     LACP      LACP
        NUMB   ENABLED   GROUP   STATUS   PARTNER   STATUS
        -----  --------  ------  -------  --------  -------
        1      Active    trk1    Broken   Yes       Failure
        2      Active    trk1    Broken   Yes       Failure

    If I change the Cisco interfaces to LACP mode "on", I can ping the switch from the ASA, but none of the other objects connected to the switch. If I look at the status of LACP on the switch, I see this:

        PORT   LACP      TRUNK   PORT     LACP      LACP
        NUMB   ENABLED   GROUP   STATUS   PARTNER   STATUS
        -----  --------  ------  -------  --------  -------
        1      Active    trk1    Up       No        Success
        2      Active    trk1    Up       No        Success

    I don't have any clue what's going on, so if someone has any idea and can help me with this, it would be great! Feel free to ask me anything if you need any more information. Thanks a lot!
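    For reference, a hedged sketch of the ProCurve side of a dynamic LACP trunk (command names from the 2920 CLI; VLAN numbers taken from the ASA config above). One caveat worth stating: a single channel-group can only bundle to one switch, so ports 1-2 going to two separate non-stacked 2920s would never form one LACP bundle:

        configure
        trunk 1-2 trk1 lacp
        vlan 1 tagged trk1
        vlan 10 tagged trk1
        vlan 16 tagged trk1

    Since the ASA subinterfaces are 802.1Q tagged (vlan 1/10/16), trk1 needs those VLANs tagged as well; an untagged/tagged mismatch produces exactly the "link is up but nothing pings" behaviour.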


  • UCARP: prevent the original master from taking over the VIP when it comes back after failure?

    - by quanta
    Keepalived can do this by combining the nopreempt option and the BACKUP state on both nodes:

    - Prevent VRRP Master from becoming Master once it has failed
    - Prevent master to fall back to master after failure

    How about UCARP?

        Name       : ucarp
        Arch       : x86_64
        Version    : 1.5.2
        Release    : 1.el5.rf
        Size       : 81 k
        Repo       : installed
        Summary    : Common Address Redundancy Protocol (CARP) for Unix
        URL        : http://www.ucarp.org/
        License    : BSD
        Description: UCARP allows a couple of hosts to share common virtual IP addresses in order
                   : to provide automatic failover. It is a portable userland implementation of the
                   : secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's
                   : alternative to the patents-bloated VRRP).
                   : Strong points of the CARP protocol are: very low overhead, cryptographically
                   : signed messages, interoperability between different operating systems and no
                   : need for any dedicated extra network link between redundant hosts.

    If I don't use the --preempt option and set the --advskew to the same value, both nodes become master.

    /etc/sysconfig/carp/vip-010.conf:

        # Virtual IP configuration file for UCARP
        # The number (from 001 to 255) in the name of the file is the identifier
        # $Id: vip-001.conf.example 1527 2004-07-09 15:23:54Z dude $

        # Set the same password on all machines sharing the same virtual IP
        PASSWORD="pa$$w0rd"

        # You are required to have an IPADDR= line in the configuration file for
        # this interface (so no DHCP allowed)
        BIND_INTERFACE="eth0"

        # Do *NOT* use a main interface for the virtual IP, use an ethX:Y alias
        # with the corresponding /etc/sysconfig/network-scripts/ifcfg-ethX:Y file
        # already configured and with ONBOOT=no
        VIP_INTERFACE="eth0:0"

        # If you have extra options to add, see "ucarp --help" output
        # (the lower the "-k <val>" the higher priority and "-P" to become master ASAP)
        OPTIONS="-z -k 255"

    /etc/sysconfig/network-scripts/ifcfg-eth0:0:

        DEVICE=eth0:0
        ONBOOT=no
        BOOTPROTO=
        IPADDR=192.168.6.8
        NETMASK=255.255.255.0
        USERCTL=yes
        IPV6INIT=no

    node 1:

        eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether c6:9b:8e:af:a7:69 brd ff:ff:ff:ff:ff:ff
            inet 192.168.6.192/24 brd 192.168.6.255 scope global eth0
            inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth0:0
            inet6 fe80::c49b:8eff:feaf:a769/64 scope link
               valid_lft forever preferred_lft forever

    node 2:

        eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 00:30:48:f7:0f:81 brd ff:ff:ff:ff:ff:ff
            inet 192.168.6.38/24 brd 192.168.6.255 scope global eth1
            inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth1:0
            inet6 fe80::230:48ff:fef7:f81/64 scope link
               valid_lft forever preferred_lft forever
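    A hedged note on the "both nodes become master" symptom: with no --preempt and *identical* --advskew values, neither node ever sees a better advertisement than its own, so each keeps the VIP. The usual non-preempting setup still gives the two nodes different skews; equal skews are the degenerate case. A sketch of the OPTIONS lines (skew values illustrative):

        # node 1 -- lower advskew = preferred master, but no -P/--preempt
        OPTIONS="-z -k 10"

        # node 2 -- higher advskew = backup; it takes over only when node 1's
        # advertisements stop, and keeps the VIP afterwards because neither
        # node preempts
        OPTIONS="-z -k 50"

    This mirrors keepalived's nopreempt+BACKUP trick: a recovered master does not take the VIP back, because taking it back would require preemption.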


  • How much did it cost our competitor to DDoS us at 50 Gbps for two weeks?

    - by MiniQuark
    I know that this question may sound like an invalid serverfault question, but I believe that it's quite valid: the amount of time and effort that a sysadmin should spend on DDoS protection is a direct function of typical DDoS prices. Let me rephrase this: protecting a web site against small attacks is one thing, but resisting 50 Gbps of UDP flood is another and requires time & money. Deciding whether or not to spend that time & money depends on whether such an attack is likely or not, and this in turn depends on how cheap and simple such an attack is for the attacker.

    So here's the full story: our company has been the victim of a massive DDoS attack (over 50 Gbps of UDP traffic, full-time for 2 weeks). We are pretty sure that it's one of our competitors, and we actually know which one, because we were the only two remaining competitors on a very big request for proposal, and the DDoS attack magically stopped the day we won (double hurray, by the way)! These people have proved in the past that they are very dishonest, but we know that they are not technical at all, so we believe that they simply paid for some botnet DDoS service.

    I would like to know how much these services typically cost, for such a large scale attack. Please do not give any link to such services; I would really hate to give these people any publicity. I understand that a hacker could very well do this for free, but what's a typical price for such an attack if our competitors paid for it through some kind of botnet service? It is really starting to scare me (if we're talking thousands of dollars here, then I am really going to freak out: who knows, they might just hire a hit-man one day?). Of course we filed a complaint, but the police say that they cannot do much about it (DDoS attacks are virtually untraceable, so they say), and our suspicions are not enough to justify them raiding our competitor's offices to search for proof.

    For your information, we have now changed our infrastructure to be able to sustain such attacks: we now use a major CDN service so that our servers are not directly affected by DDoS attacks. Requests for dynamic pages do get proxied to our servers, but for low-level attacks (UDP floods or SYN floods, for example) we only receive legitimate traffic, so we're fine. If they decide to launch higher-level attacks (HTTP floods or slowloris attacks, for example), most of the load should be handled by the CDN... at least I hope so!

    Thank you very much for your help.


  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they are causing permission problems for all other clients.

    Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors.

    The details of this behavior seem to also be dependent on the version of OSX the client is running. For this question, let's assume a client running 10.8.2.

    When I mount the CIFS share on an OSX client and create a new directory on it, the directory will be created with drwxr-x-rx permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group who should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server:

        [global]
        create mask = 0666
        directory mask = 0777

        [share]
        force directory mode = 0775
        force create mode = 0660

    I was under the impression that these settings should make sure that directories are at least created with rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory.

    When I create a folder on the same share from a Windows client, the new folder will have the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client.

    I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming that this is caused because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect from anyone.

    This is the complete smb.conf:

        [global]
        encrypt passwords = yes
        dns proxy = no
        strict locking = no
        read raw = yes
        write raw = yes
        oplocks = yes
        max xmit = 65535
        deadtime = 15
        display charset = LOCALE
        max log size = 10
        syslog only = yes
        syslog = 1
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        smb passwd file = /var/etc/private/smbpasswd
        private dir = /var/etc/private
        getwd cache = yes
        guest account = nobody
        map to guest = Bad Password
        obey pam restrictions = Yes
        # NOTE: read smb.conf.
        directory name cache size = 0
        max protocol = SMB2
        netbios name = freenas
        workgroup = COMPANY
        server string = FreeNAS Server
        store dos attributes = yes
        hostname lookups = yes
        security = user
        passdb backend = ldapsam:ldap://ldap.company.local
        ldap admin dn = cn=admin,dc=company,dc=local
        ldap suffix = dc=company,dc=local
        ldap user suffix = ou=Users
        ldap group suffix = ou=Groups
        ldap machine suffix = ou=Computers
        ldap ssl = off
        ldap replication sleep = 1000
        ldap passwd sync = yes
        #ldap debug level = 1
        #ldap debug threshold = 1
        ldapsam:trusted = yes
        idmap uid = 10000-39999
        idmap gid = 10000-39999
        create mask = 0666
        directory mask = 0777
        client ntlmv2 auth = yes
        dos charset = CP437
        unix charset = UTF-8
        log level = 1

        [share]
        path = /mnt/zfs0
        printable = no
        veto files = /.snap/.windows/.zfs/
        writeable = yes
        browseable = yes
        inherit owner = no
        inherit permissions = no
        vfs objects = zfsacl
        guest ok = no
        inherit acls = Yes
        map archive = No
        map readonly = no
        nfs4:mode = special
        nfs4:acedup = merge
        nfs4:chown = yes
        hide dot files
        force directory mode = 0775
        force create mode = 0660


  • Searching For a Desktop Security Software to harden Windows machines, anybody?

    - by MosheH
    I'm the network administrator of a small/medium network. I'm looking for software (free or not) which can harden Windows computers (XP and Win7), for the purpose of hardening standalone desktop computers (not in a domain network).

    Note: the computers are completely isolated (standalone), so I can't use Active Directory group policy. Moreover, there are too many restrictions that I need to apply, so it is not practical to set them up manually (one by one).

    Basically what I'm looking for is software that can restrict and disable access for specific user accounts on the system. For example:

    - User John can only open one application and nothing else -- he sees no icons on the desktop or start menu, except for the one or two applications which I want to allow. He can't right-click on the desktop, the taskbar icons are not shown, there are no folder options, etc.
    - User Marry can open a specific application and copy data to one folder on the D drive.
    - User Dan has access to all drives but cannot install software, and so on...

    So far, I've found only the following solutions, but they all seem to miss one or more features:

    - Faronics WINSelect: the application seems to answer most of our needs, except one feature which is very important to us but seems to be missing from WINSelect, which is restriction per profile. WINSelect only allows setting up restrictions which are applied system-wide. If I have multiple user accounts on the system and want to apply different restrictions for each user, I can't.
    - Deskman: same thing, no restriction per profile.
    - Desktop Security Rx: not relevant, no Win7 support.

    The only software that I've found which offers restriction per profile is "1st Security Agent", but its GUI is very complicated and not very intuitive.

    It's worth mentioning that I'm not looking for "Internet kiosk software", although they share some features with the one I need. All I need is software (like http://www.faronics.com/standard/winselect/) that offers a way to restrict the Windows user interface. So if anybody knows a hardening software which allows setting up user restrictions on Windows systems, it will be a big, big, big help for me! Thanks to you all.


  • Deployment/provisioning tool for commercial applications (not developed in-house)

    - by mfinni
    I help manage a few hosted commercial applications, and we have a lot of manual processes involved when doing new customer-instance deployments into the shared (multitenant) environment. Allow me to describe the most relevant features, and then we can talk about the tools.

    We have an application on AIX that requires dozens of changes to config files (some plain text, some XML) as well as a good number of commands to be run on multiple servers -- some to start the new instance, some to restart our shared authentication and reporting engines, etc. The config changes follow templates, of course. The servers in question will also depend on the initial conditions specified by the implementer/deployer -- we may choose to deploy a given customer to our servers in Europe, or one set of servers may be active-active whereas a different set of servers is active-passive -- in short, there's a lot of complications.

    We have another application that runs on IIS 6 and SQL. The DBAs don't want any automation of the SQL components and that's fine with me, but automating the IIS bit would be great. For a new customer instance, we make a filesystem copy of a template Virtual Directory target named after the new customer, make a new AppPool to match, edit a VirDir template .xml file to replace the filepaths and AppPool names with the new ones, and then make a new VirDir from the modified template XML to point to the new filesystem folder and app pool.

    For the first case, something like ControlTier or Chef might be good. For the second, the new(ish) Web Deploy from MS would probably do a good job. Has anyone used these tools or others to do something similar for applications? More of a nice-to-have, not a fixed requirement: has anyone used anything that works on both platforms? I'm looking for something free, because the official word is that within a year we will have whatever HP has renamed the OpsWare suite, which should be able to do stuff like this.

    Edit: based on someone's suggestion I looked at CFengine for the AIX application, but it doesn't seem to address my pain. The problem isn't keeping a given config synced across dozens of servers -- we have rsync for that. The problem is that onboarding a new customer instance touches dozens of files, putting pieces of the same or similar information into them: some are new stanzas in existing files, some are new files, and some are new directories. This is a several-hours-long process that is also error-prone, because it's mostly done by hand. I guess I'm looking for config-file generation and management.

    I have built a small Perl script to do something similar for a much smaller case: it binds a CSV file into variables, and then does a copy-and-search-and-replace from a set of template config files. A sketch of that approach is below.
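    A minimal shell sketch of the CSV-driven template expansion described above (the file names and the NAME/REGION/APPPOOL fields are illustrative; the real script is in Perl, but the mechanics are the same):

        #!/bin/sh
        # customers.csv: name,region,apppool -- one line per new instance
        while IFS=, read NAME REGION APPPOOL; do
            mkdir -p "instances/$NAME"
            for TMPL in templates/*.tmpl; do
                OUT="instances/$NAME/$(basename "$TMPL" .tmpl)"
                # Substitute placeholders like @NAME@ in each template
                sed -e "s/@NAME@/$NAME/g" \
                    -e "s/@REGION@/$REGION/g" \
                    -e "s/@APPPOOL@/$APPPOOL/g" "$TMPL" > "$OUT"
            done
        done < customers.csv

    Tools like ControlTier or Chef add the orchestration (run these commands on those servers) on top of exactly this kind of generation step.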


  • How can I change how OS X's 'say' command pronounces a word?

    - by jwhitlock
    OS X's say command is useful for some tasks (such as Skype's "notify me when a contact comes online"), but it is pronouncing some names incorrectly. Is there a way to teach say to pronounce a word differently? For example, try:

        say "Hi, Joel Spolsky"

    The 'ol' sounds like 'ball' rather than 'old'. I'd like to add an exception that says "pronounce Spolsky like this", rather than try to teach new linguistic rules. I bet there is a way, since it can pronounce "iphone" as Apple wants.

    Update - After some research, here's what I've learned:

    - Text-to-speech is split between turning the text into phonemes, and then the phonemes are turned into audio using a voice. Changing the voice doesn't affect the phonemes.
    - The Speech Synthesis Manager has some functions for turning text to phonemes, and a method for registering a speech dictionary that will add new text-phoneme maps. However, Apple's speech dictionary must be in a binary form - I didn't find any plist XML.
    - Using dtrace while running say, I found some interesting files opened in /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources. This is probably the speech dictionary, but they are all binary, except for Homophones, which is XML. Adding entries to Homophones does nothing - it is probably used in speech-to-text. They are also code-signed by Apple - changing them may prevent some programs from working. The files are:

          PrefixDictionary
          CartNames
          CartLite
          SymbolDictionary
          Homophones

    - There are ways to add text versions of application interface elements so VoiceOver works, a lot of which a developer gets for free, but there are tricky bits. The standard here appears to be to use a phonetic spelling as needed.

    My guesses are:

    - say is a light layer of code on top of the Speech Synthesis Manager. It would be easy for the Apple devs to add a command line option to take the path to a speech dictionary plist for alternate phoneme mapping, but they didn't. It may be a useful open-source project to write a better say.
    - Skype probably uses the Speech Synthesis Manager directly, leaving no hooks to change the way my friends' names are pronounced, other than spelling them phonetically, which is silly.

    The easiest way to make a command line version of say is how JRobert suggested. Here's my quick implementation, using Doug Harris's spelling suggestion:

        #!/bin/sh
        echo "$@" | tr '[A-Z]' '[a-z]' | sed "s/spolsky/spowlsky/g" | /usr/bin/say

    Finally, some fun command line stuff:

        # Apple is weird
        sqlite3 /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources/Tuples .dump

        # Get too much information about what files are being opened
        sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'

        # Just fun
        say -v bad "Joel Spolsky Spolsky Spolsky Spolsky Spolsky, Joel Spolsky Spolsky Spolsky Spolsky Spolsky"
        echo "scale=1000; 4*a(1)" | bc -l | say


  • Managing multiple independant domains with Google Apps

    - by Saif Bechan
    I am currently running a server where I have multiple domains, with all of them running their own mail server. My plan is to outsource this whole email service and have Google, or a competitor, do this for me. Let me start by describing the setup I have now and want to migrate to Google.

    Initial setup: I have a main domain where I run my server and my nameserver. This is an important domain because it holds the connection with all my internal applications. For example, log messages, cronjob messages, and virus-scan messages are sent to this domain. This email address is also registered at my registrar, and I use it to communicate with my ISP.

    Next, I run a few independent websites that all need their own independent email addresses. This can be on shared space, I don't mind; 1 GB will be enough for everything I am going to do.

    Summary:

    - superdomain.com (which only has a catchall for internal use and communication with my ISP)
    - cars.com (independent)
    - flowers.com (independent)
    - foods.com (independent)

    I am going to be the admin for all of this. The independent domains don't need their own admin panel; they just need email addresses like info@, support@, etc. I do all the managing, and they just send and receive emails using the accounts I give them. Each of the websites has its own staff that use the accounts.

    Tried so far: I have registered my superdomain, but I can only add aliases to the main domain. If I make all the other domains aliases, the emails from [email protected] and [email protected] will share the same inbox. I want them to be separate. Is the only way to achieve this by creating an account for each domain? And if so, is there no way of creating a superdomain account where I can edit all these accounts easily, without having to log in to 4 different places to get my work done? I have searched the Google help forums and posted questions, but without any results so far.

    Questions: can anyone please give me some advice on what to do? I currently use the free program Google has.


  • Start a software company offshore

    - by Mascarpone
    Hello everybody,

    I own a small, very young, EU-based (Italy) company, and among other things, we sell IT solutions. I have a degree in applied mathematics, and I mainly deal with user interfaces, embedded systems, automation and web applications.

    You could say that I'm an enlightened entrepreneur because I work only with open source software (OS, IDE, I release under BSD, ... everything is free as in freedom), I give high importance to post-sales services and customer satisfaction, plus I think I'm the best boss someone could desire (LOL), as I have Google in mind when I think about IT workers' rights. But the most beautiful thing is that, although everybody advised us not to use open source, we are quite profitable!!! (for the sixth trimester in a row).

    Now I offshore most of the work to an Indian company. I divide the work into modules and I outsource the longer or more trivial ones. I spend a lot of time defining the specifications and I leave the hard work to them. Using productivity bonuses, a lot of prototypes and third-party audits, I think that my software has reached a very good quality level.

    I would like to start my own software development company, in order to improve control over the process and cut costs. Obviously I can't afford the cost of labor in the EU, so I thought about opening a company in Asia. What I need is:

    1) Cheap labor - I can afford to give productivity bonuses and higher-than-average wages and stay profitable just because labor is cheap.
    2) Many talents - I need a good level of tertiary education, and a good number of graduates, so I can hire junior developers and train and teach them according to my needs and philosophies (e.g.: open source mind).
    3) Good infrastructure - buildings, transport, internet, ... everything that a company might need.

    I thought about 3 possible candidates:

    1) India - I already work with Indian people; I know that they are reliable and speak good English. Big cities are too expensive, but maybe a small city like Lucknow (http://en.wikipedia.org/wiki/Lucknow) could suit my needs.
    2) China - They say it's cheaper than India, but every time I worked with a Chinese company the language was a big barrier. They work hard and are somewhat skilled and cheap, but maybe it's a risky path. Plus, I feel a little uncomfortable with their lack of human rights.
    3) Philippines - Same as China: cheaper than India, but maybe less educated.

    Where do you think is the best place to start a software company? Any reading or book to advise?

    Thank you very much.

    Read the article

  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I'm asking this community for links to tutorials, software suggestions, and general advice about how to set it up.

    My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting, or installing another OS is out of the question, but I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is, and must remain, installed, but I only need communication within the corporate network, which is not restricted. People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should access the files.

    I have about 150 GB of free disk space and am thinking of allocating 100 GB to the team's shared files. I plan monthly backups onto co-workers' machines with the same configuration, but automated backup is a nice yet unnecessary feature: it's entirely acceptable for me to copy the contents to a different machine manually once a month. Uptime is important, as everyone would use these files in their daily work. I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost none of my programming experience is network programming. I'm a complete beginner at this. Thanks in advance for any help.

    EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly material that currently lives on DVDs purely for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I overstated it: a few outages are OK; it's already an improvement over fetching the DVDs. As for policy, my manager is somewhat on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.
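    Since the poster already knows Python, the quickest zero-install option may be the standard library's HTTP server with a thin Basic Auth layer in front. This is only a sketch, assuming Python 3.7+ (a locked-down XP box may be limited to an older interpreter with different module names); the usernames, passwords, and share path are made up, and in a real deployment credentials should be checked against the domain rather than a hard-coded dict:

        # Minimal read-only file server with a username allowlist (sketch).
        import base64
        from functools import partial
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        ALLOWED = {"alice": "s3cret", "bob": "hunter2"}  # hypothetical team logins
        SHARED_DIR = r"C:\team_share"                    # hypothetical share path

        class AuthHandler(SimpleHTTPRequestHandler):
            def do_GET(self):
                # Require valid Basic Auth credentials before serving anything.
                if not self._authorized(self.headers.get("Authorization", "")):
                    self.send_response(401)
                    self.send_header("WWW-Authenticate", 'Basic realm="team"')
                    self.end_headers()
                    return
                super().do_GET()

            def _authorized(self, header):
                if not header.startswith("Basic "):
                    return False
                try:
                    user, _, pwd = base64.b64decode(header[6:]).decode().partition(":")
                except Exception:
                    return False
                return ALLOWED.get(user) == pwd

        handler = partial(AuthHandler, directory=SHARED_DIR)
        HTTPServer(("0.0.0.0", 8000), handler).serve_forever()

    Team members would then browse to http://<machine>:8000/ and log in with the listed credentials; access stays read-only because only GET requests are served.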

    Read the article

  • Open source knowledge base CMS system

    - by Thomi
    I'm looking for an open source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like Server Fault does). I've looked at TWiki, which many people suggested, but haven't found what I'm looking for. Basically, I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. All the knowledge base systems I have seen so far are a collection of articles, each with a title, and most allow you to categorise articles into groups and sub-groups. Users search for information by title, for example "How do I print from AwesomeProduct?", which then shows a list of articles matching that search text. This is fine and dandy when your KB covers one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then goes ahead and creates a new version (ver 2.0) that adds many new features and changes some existing ones. Now, how do we support both products in the same KB? The naive method is to copy all articles from 1.0, update them for 2.0, adding and removing articles as required, and then add text at the top of every 1.0 article saying: "this article applies to 1.0 only; to see the 2.0 version, click here" (or something similar).

    The problem with indexing articles by title is that it's very hard to filter on metadata like version. What happens when we create version 3.0 or 4.0? The end state is a mess of articles: hard to search, hard to filter, and even harder to manage. The solution, it seems to me, is to use tags rather than text as the article index mechanism, so articles can be tagged with the software version, topic area, etc. Users can then filter by tag; an example search might be "version_1 printing", which straight away gives a list of articles carrying all those tags. So that's what I'm looking for: a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with Drupal, but I was hoping for something that works out of the box.
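    The data model being asked for is small enough to sketch in a few lines of Python (the article titles and tags below are invented for illustration); the point is that a query is just a subset test against each article's tag set:

        # Tag-based article lookup (sketch): an article matches a query
        # when it carries every requested tag.
        articles = {
            "How do I print from AwesomeProduct?": {"version_1", "printing"},
            "Printing in AwesomeProduct 2.0":      {"version_2", "printing"},
            "Installing AwesomeProduct 2.0":       {"version_2", "install"},
        }

        def search(*tags):
            """Return titles of articles carrying every requested tag."""
            wanted = set(tags)
            return [title for title, t in articles.items() if wanted <= t]

        print(search("version_1", "printing"))
        # -> ['How do I print from AwesomeProduct?']

    A "version_2 printing" query would return the 2.0 article instead, which is exactly the per-version filtering that title search makes awkward.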

    Read the article

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds or thousands of nodes, what are your best practices for getting the best possible LINPACK benchmark (HPL) result to submit to the Top500 supercomputer list? To give you an idea of the kind of answers I would appreciate, here are some sub-questions (with links):

    - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) in the HPL.dat file, without spending too much time trying every possible permutation, especially with large problem sizes N?
    - Are there any Top500 submission rules to be aware of? What is allowed, and what isn't?
    - Which MPI product, and which version? Does it make a difference?
    - Any special host order in your MPI machine file? Do you use CPU pinning?
    - How do you configure your interconnect? Which interconnect?
    - Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
    - How do you prepare for the big run (on all nodes)? Do you start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK in one big run on all of the nodes, or is extrapolation allowed?
    - How do you optimize for the latest Intel/AMD CPUs? Hyper-threading? NUMA?
    - Is it worth recompiling the software stack, or do you use precompiled binaries? Which settings, which compiler, which compiler optimizations? (What about profile-based compilation?)
    - How do you get the best result given only a limited amount of time for the benchmark run? (You can block a huge cluster forever.)
    - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
    - How do you deal with hardware faults ruining a huge run?
    - Are there any must-read documents or websites about this topic?

    For example, I would love to hear background stories about some of the current Top500 systems and how they did their LINPACK benchmarks. I deliberately don't want to mention concrete hardware details or discuss hardware recommendations because I don't want to limit the answers. However, feel free to mention hints for specific CPU models.
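    On the first sub-question, a common rule of thumb is to size N so the matrix occupies roughly 80% of total RAM and to pick a near-square P x Q process grid. A minimal sketch, with purely hypothetical cluster numbers:

        # Rule-of-thumb starting point for HPL.dat: the matrix is N x N
        # doubles (8 bytes each), so aim for ~80% of total RAM and round
        # N down to a multiple of the block size NB.
        import math

        nodes, mem_per_node_gib, nb = 512, 64, 192  # hypothetical cluster

        total_bytes = nodes * mem_per_node_gib * 1024**3
        n = int(math.sqrt(0.80 * total_bytes / 8))
        n -= n % nb  # N should be a multiple of NB

        # Near-square process grid with P <= Q tends to perform well.
        ranks = nodes * 2  # e.g. two MPI ranks per node, purely illustrative
        p = int(math.sqrt(ranks))
        while ranks % p:
            p -= 1
        q = ranks // p

        print(f"N={n}, NB={nb}, P={p}, Q={q}")

    The remaining ~20% headroom keeps the OS and MPI buffers out of swap; from this starting point, NB and the grid shape are refined empirically with short runs on a subset of nodes.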

    Read the article

  • What issues ensue from having multiple versions of Office installed?

    - by Michael Sorens
    My ultimate question is embodied in the title, but I thought it might be helpful to others if I detail what instigated my inquiry and my examination of the problem. To me, the first rule of software updates is primum non nocere: first, do no harm. So, with my Windows 7 system containing both Office 2003 and Office 2010, I blithely proceeded to install this month's updates from Microsoft, containing updates for both versions of Office. While Microsoft officially does not recommend running multiple versions (see, for example, Running Multiple Versions of Microsoft Excel), it is possible; I have had two versions installed for a year or more and had never run into an issue before. One thing that is always mentioned is installation order, i.e., the one you want to open files by default should be installed last. I wanted 2010 as my default, so I had indeed installed 2003 first and then, years later, 2010. With this round of Windows updates, either the patches were applied to 2010 before 2003, knocking out the file association, or the 2003 patch was more comprehensive, in the sense of touching the file association where the 2010 patch did not. In any case, after the updates, double-clicking a .xls file opened 2003 rather than 2010. A web search suggested several fixes:

    - Use the file-associations control panel to re-associate .xls files with the correct version of Excel. I looked at this first, but it showed what seemed to be an unversioned "Excel" associated with .xls files, so I did not check further. (This turned out to be an error on my part; more later.)
    - Re-install the versions in the desired order; I find this unreasonable.
    - Run the repair option of the Office installer on the desired version; still more work than one should need.
    - Run Excel from the command line with "/regserver" on the one to be the default and "/unregserver" on the other. A good idea, but further searching indicated that neither 2007 nor 2010 supports "/regserver", contrary to some posts (e.g. Default Program With Multiple Versions Installed).

    Since this was a Windows Update issue, and Microsoft provides free support for such issues, I inquired there as well, but succeeded only in getting the suggestion to uninstall all other versions, period; not acceptable to me. What worked for me was going back to the file-associations control panel and manually selecting the Office 2010 version of Excel. While it appeared no different in the control panel, it did fix the double-click issue. If all it takes is this simple fix after an update, I can live with that. What I am wondering is: has anyone seen any other problems related to having multiple versions of Office installed?
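    For diagnosing which version currently owns the association, a small registry peek can save guesswork. A sketch assuming Python's standard winreg module on Windows; the exact key layout can vary between setups, so treat the subkey path as illustrative:

        # Print which ProgID and executable .xls currently resolves to,
        # so you can tell which Excel version owns the double-click.
        import winreg

        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, ".xls") as key:
            progid, _ = winreg.QueryValueEx(key, "")  # e.g. "Excel.Sheet.8"
        print("ProgID for .xls:", progid)

        # Typical layout; some installs structure this differently.
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT,
                            progid + r"\shell\Open\command") as key:
            command, _ = winreg.QueryValueEx(key, "")
        print("Open command:", command)  # Office11 = 2003, Office14 = 2010

    If the printed path points at the wrong Office directory, the file-associations control panel fix described above is the place to correct it.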

    Read the article

  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately, my school likes Microsoft enough that I can get pretty much anything Microsoft sells, and I can get IBM WebSphere and the like for free as well. Earlier this year I set up an oldish computer (2.6 GHz Pentium D, x64) to run Ubuntu Server headless. I'm predominantly a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc. made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp-up). Anyway, I started to pick up C# to complement my Java knowledge (don't judge me :P), and I am interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install as well as Windows 7 Ultimate. I do all of my actual development work on my laptop, also running Windows 7 Ultimate.

    I was wondering what software you would recommend putting on the machine. I'm not actually serving anything off the machine itself, but under Ubuntu I had it running integration tests with Hudson on every commit, profiling my applications, and so on. The machine would run headless, and I would remote into it. Here is what I am currently leaning towards or wondering about:

    - Windows 7 Ultimate vs. Windows Server 2008 (R2) (no one is really clear on why I should go with one over the other)
    - Team Foundation Server
    - SharePoint (never used it before; kind of meh about it)
    - IBM WebSphere or GlassFish (some Java EE application server)
    - SQL Server 2008
    - A DVCS

    To better control product conflicts and limit resource use, I'm wondering if I should install things into virtual machines (I can get VMware or Microsoft's virtualization products). I also plan on installing everything I had running under Linux; it's almost entirely Java-based development software, so it'll run on both (the only reason I went with Ubuntu during the year was that the Apache build seemed better). I'm primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process (i.e., I'll still use Project and assign tasks, even if I might be the only one to assign tasks to, just for practice). Is there any other software or configuration detail I should explore? Opinions on my current list? I primarily use C#, Java, and PHP, and I'm familiar with Ruby and Python as well. Thanks!

    Read the article

  • Moving from VPS to Cloud

    - by GRIGORE-TURBODISEL
    ...and I have a few questions. I'm working on a MySQL+PHP based webapp. Since I don't have on-demand scaling with a VPS, I'm planning to move from VPS to cloud when I hit the 1,000-subscriber barrier. I'm looking at Windows Azure, but I'm open to other suggestions. So here are my questions:

    - Will it really cost me a kidney? Every subscriber needs to download around 4-5 MB of static resources each day. Bandwidth is free on the VPS, but here I see costs could easily reach $800.00/mo; this makes me very insecure about the whole thing, given that the VPS is just $2,000/yr.
    - Do I need another VM, or is PHP included in Web Sites? I have basic sysadmin skills and think I can handle setting up a PHP install, but will I have to? If yes, what other services do I need to set up manually? What about Memcached, MySQL, etc.?
    - What security protections are included? For example, I currently have some basic protection, such as against directory traversal and executable file uploads; I also use CloudFlare on my other websites for DDoS protection. Will I need to do the same thing here? Can it even be installed? Can I edit my DNS records, etc.?
    - How are e-mail, subdomains, add-on domains, parked domains, etc. handled? I haven't seen any references to e-mail boxes. On the VPS I simply add them from cPanel ([email protected] / whatever.mysite.com / ...); is there a similar management interface here?
    - Do I get SSH access? Or at least FTP, remote MySQL access, and maybe incremental backups or something? Can I see my quotas and detailed traffic information?

    I must mention that I really like the idea of the whole "cloud" concept, the added reliability and everything, but I could really use a comparison to regular hosting so I know what to expect.
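    On the bandwidth question, the arithmetic is worth writing down before panicking. A quick sketch with the numbers from the question and an assumed, purely hypothetical per-GB egress price (check the provider's current rates):

        # Back-of-the-envelope egress cost from the figures above.
        subscribers = 1000
        mb_per_sub_per_day = 4.5
        price_per_gb = 0.12  # hypothetical egress price in USD

        gb_per_month = subscribers * mb_per_sub_per_day * 30 / 1024
        cost = gb_per_month * price_per_gb
        print(f"~{gb_per_month:,.0f} GB/month -> ~${cost:,.2f}/month")
        # ~132 GB/month -> ~$15.82/month at the assumed rate

    Under these assumptions, the $800/mo figure looks pessimistic, and serving the static resources through a CDN or with aggressive client-side caching would shrink the egress further.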

    Read the article

  • ISPconfig3 + CentOS 6.2 , confused on how to move forward after initial install?

    - by Damainman
    I installed ISPConfig 3 on CentOS 6.2 using the great guide on howtoforge.com. Everything is up and running, and I can access ISPConfig via a browser. However, I am not sure how to move forward with the initial setup so I can create the very first account and get my website live.

    Details: I have only one server; the CentOS + ISPConfig installation runs in a virtual machine on Xen XCP. I set the server name to server1.mydomain.com. I have only two usable IPs, which I plan to use as follows: xx.xx.xx.01 for my website and the websites of all accounts I add; xx.xx.xx.02 for ns1.mydomain.com and ns2.mydomain.com (yes, I know they should be different IPs at different locations, but this is what I have to work with at the moment). I registered the nameservers at my registrar with the .02 IP. I want to run the DNS on the server itself using BIND and ISPConfig, not via my registrar. Right now, if I go to the .01 IP, it shows the CentOS+Apache successful-install page.

    To break it down, I am not sure where to start when it comes to setting up the first domain on the server (what to consider and what to do):

    - Telling BIND to use the nameserver domains with the .02 IP.
    - Setting up my first website (which will be my main website) in ISPConfig so mydomain.com resolves properly to my server.
    - Making it so that when you go to the .01 IP, it either redirects to or shows the contents of my main website (if this can't be done, any advice is appreciated).
    - Making sure that when I add a new domain, the proper information is entered automatically so the domain points to the right mail, database, and DNS entries.

    If I overlooked a tutorial, please feel free to let me know; any advice would be greatly appreciated. Some of the tutorials I found were not specific to doing everything on a single server with CentOS+Apache+BIND. So far, all I have done is install CentOS and ISPConfig 3. I am trying to move forward correctly so I don't break what I have by not knowing what to do. Thank you in advance!
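    For the BIND piece, ISPConfig's DNS wizard can create the zone for you, but it helps to know what the finished records should look like. A sketch that renders a minimal zone for mydomain.com from the two placeholder IPs in the question (substitute the real addresses; the serial and timers are illustrative):

        # Render a minimal BIND zone matching the setup described above.
        SITE_IP, NS_IP = "xx.xx.xx.1", "xx.xx.xx.2"  # placeholders

        zone = f"""$TTL 3600
        @    IN SOA ns1.mydomain.com. hostmaster.mydomain.com. (
                 2012060101 ; serial
                 7200       ; refresh
                 900        ; retry
                 1209600    ; expire
                 3600 )     ; negative TTL
             IN NS ns1.mydomain.com.
             IN NS ns2.mydomain.com.
             IN A  {SITE_IP}
        ns1  IN A  {NS_IP}
        ns2  IN A  {NS_IP}
        www  IN CNAME mydomain.com.
        """
        print(zone)  # compare against what ISPConfig's DNS wizard generates

    The key points: ns1 and ns2 carry A records on the .02 address (matching the glue records registered at the registrar), while the site itself, and each customer site added later, resolves to .01.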

    Read the article
