Search Results

Search found 8897 results on 356 pages for 'hit detect'.

  • Why won't vyatta allow SMTP through my firewall?

    - by Solignis
    I am setting up a Vyatta router on VMware ESXi, but I seem to have hit a major snag: I could not get my firewall and NAT to work correctly. I am not sure what was wrong with NAT, but it "seems" to be working now. The firewall, however, is not allowing traffic from my WAN interface (eth0) to my LAN (eth1). I can confirm it's the firewall because I disabled all firewall rules and everything worked with just NAT. If I put the firewalls (WAN and LAN) back in place, nothing can get through to port 25. I am not really sure what the issue could be; I am using pretty basic firewall rules, and I wrote the rules while looking at the Vyatta docs, so unless there is something odd with the documentation they "should" be working.

    Here are my NAT rules so far:

        vyatta@gateway# show service nat
        rule 20 {
            description "Zimbra SNAT #1"
            outbound-interface eth0
            outside-address {
                address 74.XXX.XXX.XXX
            }
            source {
                address 10.0.0.17
            }
            type source
        }
        rule 21 {
            description "Zimbra SMTP #1"
            destination {
                address 74.XXX.XXX.XXX
                port 25
            }
            inbound-interface eth0
            inside-address {
                address 10.0.0.17
            }
            protocol tcp
            type destination
        }
        rule 100 {
            description "Default LAN -> WAN"
            outbound-interface eth0
            outside-address {
                address 74.XXX.XXX.XXX
            }
            source {
                address 10.0.0.0/24
            }
            type source
        }

    Here are my firewall rules; this is where I believe the problem is:

        vyatta@gateway# show firewall
        all-ping enable
        broadcast-ping disable
        conntrack-expect-table-size 4096
        conntrack-hash-size 4096
        conntrack-table-size 32768
        conntrack-tcp-loose enable
        ipv6-receive-redirects disable
        ipv6-src-route disable
        ip-src-route disable
        log-martians enable
        name LAN_in {
            rule 100 {
                action accept
                description "Default LAN -> any"
                protocol all
                source {
                    address 10.0.0.0/24
                }
            }
        }
        name LAN_out {
        }
        name LOCAL {
            rule 100 {
                action accept
                state {
                    established enable
                }
            }
        }
        name WAN_in {
            rule 20 {
                action accept
                description "Allow SMTP connections to MX01"
                destination {
                    address 74.XXX.XXX.XXX
                    port 25
                }
                protocol tcp
            }
            rule 100 {
                action accept
                description "Allow established connections back through"
                state {
                    established enable
                }
            }
        }
        name WAN_out {
        }
        receive-redirects disable
        send-redirects enable
        source-validation disable
        syn-cookies enable

    SIDENOTE: To test for open ports I have been using this website, http://www.yougetsignal.com/tools/open-ports/. It showed port 25 as open without the firewall rules and closed with the firewall rules.

    UPDATE: Just to see if the firewall was working properly, I made a rule to block SSH from the WAN interface. When I checked for port 22 on my primary WAN address, it said the port was still open even though I had outright blocked it. Here is the rule I used:

        rule 21 {
            action reject
            destination {
                address 74.219.80.163
                port 22
            }
            protocol tcp
        }

    So now I am convinced either I am doing something wrong or the firewall is not working like it should.
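    A hedged observation rather than a confirmed fix: Vyatta's firewall is iptables underneath, and destination NAT is applied before the forwarding rulesets are evaluated, so by the time WAN_in sees this traffic the destination has already been rewritten to the inside host. If that holds here, rule 20 would need to match the translated address, along these lines:

        name WAN_in {
            rule 20 {
                action accept
                description "Allow SMTP to MX01 (post-DNAT address)"
                destination {
                    address 10.0.0.17
                    port 25
                }
                protocol tcp
            }
        }

    The same ordering may explain the SSH test: if SSH terminates on the router itself, that traffic is filtered by the ruleset attached as "local" on the interface, not the "in" ruleset, so a reject rule placed in WAN_in would never see it and the port would still show open.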

  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a little specific problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle. Typical smaller-shop setup. There is now one additional network that has an IP range OUTSIDE of our control, connected to the internet with another router OUTSIDE of our control. Call it a project network that is part of another company's network, combined via a VPN they set up. This means that they control the router that is used for this network, and they can reconfigure things so that they can access the machines in this network. The network is physically split on our end through some VLAN-capable switches, as it covers three locations. At one end there is the router the other company controls.

    I need / want to give the machines used in this network access to my company network. In fact, it may be good to make them part of my Active Directory domain. The people working on those machines are part of my company. BUT I need to do so without compromising the security of my company network from outside influence. Any sort of router integration using the externally controlled router is ruled out by this requirement.

    So, my idea is this: we accept that the IPv4 address space and network topology in this network are not under our control, and we seek alternatives to integrate those machines into our company network. The two concepts I came up with are:

    1. Use some sort of VPN and have the machines log into it. Thanks to them using modern Windows, this could be transparent DirectAccess. This essentially treats the other IP space no differently than any restaurant network a company laptop goes into.
    2. Alternatively, establish IPv6 routing to this ethernet segment. But - and this is the trick - block all IPv6 packets in the switch before they hit the third-party-controlled router, so that even IF they turn on IPv6 on that thing (not used now, but they could do it) they would not get a single packet. The switch can nicely do that by pulling all IPv6 traffic coming to that port into a separate VLAN (based on ethernet protocol type).

    Does anyone see a problem with using the switch to isolate the outside router from IPv6? Any security hole? It is sad that we have to treat this network as hostile - it would be a lot easier otherwise - but the support personnel there is of "known dubious quality" and the legal side is clear: we cannot fulfill our obligations when we integrate them into our company while they are under a jurisdiction we don't have a say in.
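    For illustration only - switch vendors expose ethertype filtering differently, so here is the concept expressed with Linux bridge tooling, with the uplink port name made up:

        # ebtables matches on the ethernet protocol field (0x86DD for IPv6),
        # so this drops all IPv6 regardless of how the third-party router is configured
        ebtables -A FORWARD -p IPv6 -o eth-uplink -j DROP
        ebtables -A FORWARD -p IPv6 -i eth-uplink -j DROP

    It is the same "match on ethertype" idea as the VLAN trick, just enforced by a filter instead of VLAN membership.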

  • "Error loading operating system": Win7/Vista

    - by LookitsPuck
    Hey fellas,

    I've had this computer for about 2 years now. It originally had Vista installed; now it has Windows 7 installed. Both are on separate hard drives. I also have another drive used strictly for media. About a week ago, the Vista hard drive started going on its way out, and I was getting problems on startup. After a few BIOS settings, I was able to get into Windows 7 and everything was fine. However, I started remembering the startup issues, so I deleted the boot entry for Vista under msconfig. I didn't restart the computer at that time, though.

    For a few days, everything was OK. Last night I play a little poker, then hit the hay. I wake up to a good ole "Error loading operating system" on the screen. Just wonderful. Looks like the computer restarted overnight (auto updates, anyone?). So, after a bit of finagling and half-hearted tries, I can't get past the "Error loading operating system" screen. FWIW, the BIOS can see my hard drives fine.

    So I move on. I get my Windows 7 installation disk to try and do a repair. I go into the BIOS, change the boot priority to the DVD drive, and we're on our merry way. After loading from the disc, I first try jumping into the "Repair your computer" section. That opens up the System Recovery Options. However, this is where the problem comes into play: I don't see any operating systems here. Nada. What's odd, though, is that if I click on the Load Drivers button, I can see my Windows 7 partition (C:), and can go through the files and folders without issue.

    What do I do at this point? I can't repair it. It seems like I can traverse the hard drive without issue when in an open dialog in the System Recovery Options, but I'm getting the good ole "Error loading operating system" on bootup. Suggestions? Thanks all!!
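    Since the files are readable, one common next step (hedged: it assumes the MBR or boot sector is the damaged part, which this symptom usually indicates) is to rebuild the boot records by hand from the recovery disc's command prompt instead of the automatic repair:

        bootrec /FixMbr
        bootrec /FixBoot
        bootrec /RebuildBcd

    The last command scans the disks for Windows installations and offers to add them back to the boot store, which also covers anything lost when the Vista entry was deleted via msconfig.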

  • Hibernating and booting into another OS: will my filesystems be corrupted?

    - by Ryan Thompson
    Suppose I have Windows and Linux installed on the same computer. If I hibernate Windows, can I boot into Linux without corrupting the Windows filesystem when I resume Windows? What about the other way around? What if I hibernate one, boot into the other, and mount the hibernated filesystem read/write? Read-only? If this is unsafe, is there any way to detect the hibernated state of the other OS and prevent mounting its filesystem?

    Basically, how far can I push this before it breaks, and how dangerous is it near the edge? I think I know the answers to some of the above questions, but for other ones, I have no idea, and for obvious reasons I have not tested this on my own computer. If someone has tested these, please enlighten the rest of us. I'm not necessarily looking for a specific answer to every question; I'll accept any response that answers a reasonable portion.

    EDIT: Let me clarify that when I say "hibernate," I mean the process of writing the contents of RAM to the hard disk and completely powering down the computer. In this state, powering the computer back on brings you through the BIOS and bootloader again, and you could theoretically select another operating system on a multi-boot system. Anyway, on with the original question:

    RESULTS: OK, after everyone's assurances that this would work, I tested it for myself. I set up Ubuntu to remount all NTFS filesystems and external drives read-only before hibernating. There was no need for a similar Windows setup because Windows does not read Linux filesystems. Then, I tried alternately hibernating one operating system and resuming the other, back and forth a few times. I even tried mounting the Windows filesystem from Ubuntu read-write, and creating a few files. Windows didn't complain when I resumed.

    So, in conclusion, you can more or less freely hibernate in a dual-boot Windows/Linux scenario. Note that I did not test a dual Linux/Linux co-hibernation situation. If you have two or more Linux installs and you hibernate one of them, you might be able to corrupt the filesystem by mounting it from another.
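    The "remount read-only before hibernating" step can be automated rather than done by hand; a minimal sketch as a pm-utils hook (the file name and mount point are examples, not from the original post):

        #!/bin/sh
        # /etc/pm/sleep.d/20_remount_ntfs -- remount the Windows volume
        # read-only before hibernating, read-write again on thaw
        case "$1" in
            hibernate) mount -o remount,ro /mnt/windows ;;
            thaw)      mount -o remount,rw /mnt/windows ;;
        esac

    pm-utils runs everything in /etc/pm/sleep.d with the argument "hibernate" before suspending to disk and "thaw" after resume, so the window where both operating systems could write to the same NTFS volume is closed automatically.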

  • Apache ProxyPass Missing Images

    - by EpicOfChaos
    I have an Apache server that sits in front of my Glassfish server. mydomain.com goes directly to my static files on Apache; then if you hit the subdomain forum.mydomain.com, it goes to the Glassfish webapp forum/ at 127.0.0.1:8080/forum/. The proxy seems to work - it takes me to the web app - but all of the images are missing! Here is my virtual host setup:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.mydomain.com
            ServerAlias subdomain.mydomain.com mydomain.com
            DocumentRoot "/usr/local/apache/htdocs"
        </VirtualHost>
        <VirtualHost *:80>
            ServerName forum.mydomain.com
            # any logging config, etc, that you need
            ProxyPass / http://127.0.0.1:8080/forum/
            ProxyPassReverse / http://127.0.0.1:8080/forum/
        </VirtualHost>

    And this is what I am seeing in the access log:

        [15/Jan/2012:03:28:02 +0000] "GET /forums/list.page HTTP/1.1" 200 12861
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/logo.jpg HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/styles/style.css?1326582403934 HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_recentTopics.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_search.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_members.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/styles/en_US.css?1326582403934 HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_groups.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder_big.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_login.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/whosonline.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_register.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/ping_session.jsp HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder_lock.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder_new.gif HTTP/1.1" 404 1075

    Any ideas why the images are not working?
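    A hedged reading of those logs: the page generates asset URLs under /forum/..., but the vhost maps / onto the backend's /forum/, so a request for /forum/templates/... gets proxied to /forum/forum/templates/... and 404s. If that is what's happening, keeping the prefix identical on both sides should line the paths up (a sketch, untested against this setup):

        <VirtualHost *:80>
            ServerName forum.mydomain.com
            # prefix matches what the webapp puts in its HTML
            ProxyPass        /forum/ http://127.0.0.1:8080/forum/
            ProxyPassReverse /forum/ http://127.0.0.1:8080/forum/
            # send the bare hostname to the app root
            RedirectMatch ^/$ /forum/
        </VirtualHost>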

  • Varnish 503 Guru Meditation errors with pfsense and healthy apache

    - by Fammy
    We are running a pfSense firewall / load balancer with Varnish as a service, in front of Fedora Linux web servers running Apache. We are getting intermittent 503 Guru Meditation errors, and we are a bit stuck scratching our heads because it is not easily repeatable. The timeouts are set to 30s (connect and first byte), yet the 503 page shows instantly, not after 30s. Then if you refresh immediately it may very well work instantly, and sometimes for 100 refreshes. The load average on the web servers is < 1 and on the DB server is < 3 (all servers - web, DB, pfSense/Varnish - are physical rather than VMs).

    I would have thought that if the timeouts were being hit, then the 503 page would only appear after 30s - am I mistaken? Also, when an error happens there does not appear to be any corresponding error in Apache's log files. This seems to affect pages as well as images, so it is possible for the page to load fine, and for 9/10 images on the page to be fine but 1 not to work.

    An example of the Varnish debug is below. It says "no backend connection" but I can't figure out why; if the load were high on Apache I could understand it being flaky. The machines are on the same gigabit ethernet LAN.

        21 ReqStart     c *IP-REMOVED* 33418 1274368062
        21 RxRequest    c GET
        21 RxURL        c /fashion/
        21 RxProtocol   c HTTP/1.1
        21 RxHeader     c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.5) Gecko/2008121622 Fedora/3.0.5-1.fc10 Firefox/3.0.5
        21 RxHeader     c Host: *ourdomain.com*
        21 RxHeader     c Accept: */*
        21 RxHeader     c Accept-Encoding: deflate, gzip
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error restart
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error restart
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error deliver
        21 VCL_call     c deliver deliver
        21 TxProtocol   c HTTP/1.1
        21 TxStatus     c 503
        21 TxResponse   c Service Unavailable
        21 TxHeader     c Server: Varnish
        21 TxHeader     c Content-Type: text/html; charset=utf-8
        21 TxHeader     c Content-Length: 384
        21 TxHeader     c Accept-Ranges: bytes
        21 TxHeader     c Date: Wed, 11 Apr 2012 10:36:17 GMT
        21 TxHeader     c X-Varnish: 1274368062
        21 TxHeader     c Age: 0
        21 TxHeader     c Via: 1.1 varnish
        21 TxHeader     c Connection: close
        21 TxHeader     c X-Cache: MISS
        21 Length       c 384
        21 ReqEnd       c 1274368062 1334140577.449995041 1334140577.450334787 1.794108152 0.000282764 0.000056982
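    An instant "no backend connection" (rather than a 30s wait) usually means Varnish never attempted the fetch at all - typically because the backend is currently marked sick by a health probe, or a max_connections cap has been reached. As a hedged starting point, making those knobs explicit in the VCL shows which one is biting (host/port and numbers are placeholders):

        backend default {
            .host = "10.0.0.10";          # placeholder backend address
            .port = "80";
            .connect_timeout = 30s;
            .first_byte_timeout = 30s;
            .max_connections = 250;       # 503s appear instantly when this is hit
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 2s;
                .window = 5;
                .threshold = 3;           # backend marked sick -> instant 503s
            }
        }

    Watching varnishstat's backend_unhealthy and backend_busy counters during an incident would then distinguish the two cases.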

  • Proper Imaging Procedures to Restore and Deploy Image with Separate System Reserved Partition

    - by alharaka
    UPDATE: As per my experience here, no one responded. If I do not hear back from TechNet forum members about it, I will post a bounty here, if it makes a difference.

    I have banged my head against a wall for what seems like all week. I am going to explain my simple procedure, and how none of it, absolutely none, seems to work afterward, despite few alternatives and everyone on the internet assuming this is how to do it.

    Diskpart commands to create the filesystem structure:

        REM Select the disk targeted for deployment.
        REM
        REM NOTE: Usually disk 0, but drive failure can make it external USB
        REM media. This will erase the drive regardless!
        select disk 0
        REM Remove previous formatting.
        clean
        REM Create System Reserved partition bootloader and files.
        create partition primary size=100
        REM Format the volume
        format fs=ntfs label="System Reserved" quick override noerr
        REM Assign the System Reserved partition the C: mount for now
        assign letter=C
        REM The main system partition, size not specified to occupy whole drive.
        create partition primary
        REM Format the volume
        format fs=ntfs quick override noerr
        REM Assign the OS partition the D: mount for now
        assign letter=D
        REM Make this the active/bootable partition.
        sel disk 0
        sel partition 1
        active
        REM Close out the diskpart session.
        exit

    Now, I thought this was madness, but it turns out the System Reserved partition and the standard "System Partition" (C:, commonly both the boot and system volumes where you find the Windows directory AND the bootmgr/ntldr hardware files - this is where Windows 7 diverges), as mounted in the Windows PE session where I run these commands, do not matter. See reference here. Since this needs to be BitLocker-ready, enter this crappy System Reserved partition: a separate 100MB of awesome that goes before the regular boot volume. I do this, then I proceed to the next step.

    Deploy the System Reserved and normal system images:

        REM C is still the "System Reserved Partition", and the image is just like it sounds.
        imagex /apply G:\images\systemreserved.wim 1 C:
        REM D is now what will be the C: system partition on reboot, supposedly.
        imagex /apply G:\images\testimage.wim 1 D:

    Reboot the system. Now, the images I just captured should look good. This is not even sysprepped, but reapplying the same fscking image I prepared on the same reference workstation hours before. The problem is I get "0xc000000e: could not detect the accessible boot device \Windows\system32\winload.exe", or different kinds of nonsense revolving around being able to find the boot volume with all the right files.

    I tried different variations of things, and now none of them work. I tried repairs with bcdboot, with a fresh System Reserved partition or not, bootrec, and manually editing the damn BCD store with bcdedit. I tried finalizing the above process with and without "bootsect /nt60 C: /force". I need to wrap up and automate this procedure. What am I doing wrong that does not make the image happy, but really just miserable?
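    One hedged suggestion, since the symptom fits: diskpart's clean gives the disk a new signature, and the BCD captured inside systemreserved.wim still identifies the boot device by the old signature, so winload.exe cannot be located even though every file is in place. The store has to be rebuilt against the freshly applied volume in the same WinPE session, something like:

        REM Sketch: D: is the just-applied OS volume, C: the System Reserved partition
        bcdboot D:\Windows /s C:
        bootsect /nt60 C: /force /mbr

    Since bcdedit repairs were already attempted, the specific thing to check is whether the "device" and "osdevice" elements got re-pointed, as those are the elements a disk signature change invalidates.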

  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive, which I've used extensively with lots of data. Today it is detected, but all data is gone. (EDIT: at least some data is still there, but the partition table is gone.)

    EDIT: @Sathya (thanks) here's the relevant output from sudo fdisk -l:

        Disk /dev/sdc: 8019 MB, 8019509248 bytes
        247 heads, 62 sectors/track, 1022 cylinders
        Units = cylinders of 15314 * 512 = 7840768 bytes
        Disk /dev/sdc doesn't contain a valid partition table

    It looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then dmesg | tail):

        /media> sudo mount /dev/sdc mytmp
        mount: wrong fs type, bad option, bad superblock on /dev/sdc,
               missing codepage or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
        /media> dmesg | tail
        [   24.300000] sdc: unknown partition table
        [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
        [   24.370000] usb-storage: device scan complete
        [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [   26.870000] EXT2-fs: group descriptors corrupted!
        [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT: @Col: results from testdisk:

        Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
        Current partition structure:
             Partition                  Start        End    Size in sectors
        Partition sector doesn't have the endmark 0xAA55

    After I hit [proceed], it says:

        Structure: Ok.  Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" - would it be better to do this?

    EDIT: I found that at least some of my data is still on the flash drive, by using the command below and searching for English text in less (like " the "):

        cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from "/dev/sdb" to "/dev/sde" because I connected some extra drives today.) I've learnt that "/dev/sde1" would be the first partition and "/dev/sde" is the whole drive. Because unix treats these devices just like files, you can use all the ordinary unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim.

    How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?
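    Before letting any tool write to the stick, the usual precaution (a sketch with example paths, not a prescription) is to take a full image and do the recovery against the copy, so a wrong guess in testdisk can't make things worse:

        # image the whole device first; the map file lets ddrescue resume
        ddrescue -n /dev/sde /srv/recovery/usb.img /srv/recovery/usb.map
        # then run the recovery tools against the image instead of the device
        testdisk /srv/recovery/usb.img

    testdisk's "add partition" only rewrites the partition table in sector 0, not the data area, so if its search finds the old filesystem boundaries, recreating the entry should make the old data mountable again.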

  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas of how to confirm any diagnosis.

    Some background: the servers are DL160-class machines with hardware RAID between two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 "main" most CPU-hungry processes are bound to one core each via CPU affinity. Other random background scripts are not forced anywhere. The filesystem is writing ~1.5k blocks/s the whole time (it goes up above 2k/s in peak times). Normal CPU usage for those servers is ~60% on 7 cores and some minimal usage on the last (whatever's running on shells, usually).

    What actually happens is that the "main" services start using 100% CPU at some point, mainly stuck in kernel time. After a couple of seconds, the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel reporting a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh           D 0000000000000000     0 15911      1
        [118951.273093]  ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183]  ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274]  0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424]  [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510]  [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563]  [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613]  [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692]  [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID / disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some generally weird hardware behaviour. It's also running MySQL (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200kq/s (selects) and ~10q/s (writes).

    The host is never running out of memory or swapping, and no OOM reports are spotted. We've got many boxes with similar/same hardware and they all seem to behave that way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. The applications themselves don't really report anything wrong when they're running. I can run anything safely on the same hardware in an isolated environment.

    What can I do to narrow down the problem? Where else should I look for an explanation?
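    Until the KVM arrives, one way to get kernel state out of a box that no longer accepts logins is to ship its console messages to another machine and trigger SysRq dumps; a sketch (IPs, interface and MAC are placeholders):

        # allow magic SysRq and raise console verbosity
        sysctl -w kernel.sysrq=1
        dmesg -n 8
        # stream kernel messages to a logging host over UDP
        modprobe netconsole \
            netconsole=6666@10.0.0.2/eth0,514@10.0.0.1/00:11:22:33:44:55
        # when the hang starts, from a still-working shell (or serial console):
        echo t > /proc/sysrq-trigger   # dump all task states
        echo w > /proc/sysrq-trigger   # dump blocked (uninterruptible) tasks

    A stack dump of the D-state tasks taken during the event usually separates "stuck in the RAID driver" from "stuck on a timer/TSC problem" fairly conclusively.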

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    OK, here's the scenario. Up until a few weeks ago, none of us noticed anything wrong with the corporate website; people were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too - but it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser.

    To clarify: https://www.mysite.com/changes.php times out on POST, but the same with http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.

    My setup is:

        Server:  Win2k3, IIS6, PHP 5.2.9-1.
        Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages.
                 Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy
                 periods serving the pages.

    I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
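    One diagnostic worth adding (a sketch, not a diagnosis): take the browsers out of the picture and watch the TLS handshake directly, since IE and Chrome negotiate and reuse SSL sessions differently from Firefox:

        openssl s_client -connect www.mysite.com:443 -state

    If the handshake itself stalls intermittently here too, the problem is server-side (IIS6 SSL handling) rather than browser-specific; if it never stalls, the difference in how the two browser families reuse SSL sessions for POSTs becomes the lead to chase.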

  • Best RAID setup for multimedia fileserver?

    - by Mr. Schwabe
    I'm building a fileserver for my small office. We do film and multimedia design, with only 3 clients connected. The server is primarily for local access to graphic assets and video files. I'm looking for advice on the hardware and software required, particularly for the RAID. I have the following objectives:

    A) Merged capacity. I'd like all other systems to access the data as a single mapped network drive that has an initial capacity of 10 TB. So perhaps 5x 2TB drives (plus mirror drives for redundancy).

    B) An easy way to increase capacity. Thinking long term, I'd like to 'easily' add more drives to the array for a potential two- or three-fold increase in capacity. So theoretically it could get up to a 30 TB array consisting of maybe 15x 2TB drives of capacity (plus mirror drives for redundancy).

    C) Maximum fault tolerance. I want at least 1 mirror drive per capacity drive (in layman's terms). So if I start with 10 TB / 5x 2TB of capacity, I suppose I would need another 5x 2TB drives to be mirrors - 10 drives total. But I'd also like the potential for even more redundancy, with up to 2 additional mirrors per 'capacity drive' (and to be able to add them to the array at any time with ease).

    D) An easy way to monitor drive health. I'd like an intuitive interface for managing the RAID and monitoring drive health.

    The other systems accessing this network drive will be running Windows, but also the odd Ubuntu and MacOS system as well. Are these objectives attainable? What type of RAID setup do you recommend? What hardware will be required? Also, what OS do you think this system should be running - does it really matter? I'm no network admin, just a long-time Windoze user without much Linux experience. That said, I'm not opposed to a Linux solution if it's easy enough and more practical than a Windows OS for this server. Or maybe something such as Openfiler.

    The budget should hit the sweet spot for value and performance (hence my preference for 2TB drives). The biggest focus is storage; aside from that, the system just needs to keep the drives running optimally with perhaps 2 or 3 clients accessing / writing files at any given time. The hardware quote would start with something like 10x 2TB WD Caviar Blacks; about $1900 for the storage + $x for the remaining parts. http://ncix.com/products/index.php?sku=42775&vpn=WD2001FASS&manufacture=Western%20Digital%20WD

    Your advice is appreciated, thanks!
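    For what it's worth, objectives A-C map naturally onto Linux software RAID; a hedged sketch (device names are examples, and parity RAID rather than strict mirroring is one interpretation of the fault-tolerance goal):

        # RAID6 survives any two drive failures and can be grown in place
        mdadm --create /dev/md0 --level=6 --raid-devices=7 /dev/sd[b-h]
        mkfs.ext4 /dev/md0
        # later, to add capacity (objective B):
        mdadm --add /dev/md0 /dev/sdi
        mdadm --grow /dev/md0 --raid-devices=8
        resize2fs /dev/md0

    The array would then be exported to the Windows/Mac clients as one share via Samba, which gives the "single mapped network drive" in objective A.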

  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc -- so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets would change maybe 50 times a month (across all the servers, so maybe one change per five servers per month).

    We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that).

    The "musts" we've come up with for any replacement system are:

    - Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) -- this is related to the next point:
    - Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert
    - Must not take down the firewall for an extended period while the ruleset reloads (again, this is a consequence of the above point)
    - Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible)
    - Must be DFSG-free
    - Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools is our SOP)
    - Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards)
    - Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language"

    Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves":

    - Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically)
    - Should support raw tables
    - Should allow you to specify particular ICMP in both incoming packets and REJECT rules
    - Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt)

    The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins) the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
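    For reference, the atomic-load behaviour the second and third "musts" describe is exactly what the stock tools provide; a candidate generator only needs to emit a file for (paths are illustrative):

        # one atomic commit per table; no per-rule iptables execs,
        # and the old ruleset stays active until the new one is swapped in
        iptables-restore  < /etc/firewall/ruleset.v4
        ip6tables-restore < /etc/firewall/ruleset.v6

    So the evaluation question for each tool is simply whether its output is an iptables-restore format file or a shell script of individual iptables calls.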

  • Server 2008, 2 NICs, 2 fixed IPs - big delays using internet

    - by user46055
    Hi geniuses,

    I have an all-in-one Windows 2008 server, configured with AD/DHCP/DNS/RRAS - all set up with wizards and no specific tweaking. The server has 2 network adapters: one of which ("MyWAN") is plugged into our office's internet connection; the other ("MyLAN") is plugged into a local switch, which is also where all our desktops are connected. So this one server is doing everything.

    When first set up, MyLAN had a fixed IP of 192.168.2.1 and served the desktops with the DHCP scope 192.168.2.50-99. It also told them to use 192.168.2.1 as DNS and gateway. MyWAN was set up to take its IP etc. from DHCP, handled by the building's router and ADSL modem etc. All desktops were set up to use DHCP.

    This all worked perfectly fine, until I recently changed MyWAN to have a static IP (I wanted to access it from home, and needed to give it a static IP to port-map in the building's router). Things still work, but there is now a long delay when accessing the internet. The actual speed is as before when downloading, but there is a pause of 3-6 secs when connecting to new hosts (for example, if I browse to slashdot from either a desktop or the server itself, it'll hang on connecting to slashdot.org, hang again on connecting to *.fsdn, *.google-analytics.com and all the other hosts referenced from the main page).

    If I ping slashdot.org from the server, I get the following:

        Pinging slashdot.org [216.34.181.45] with 32 bytes of data:
        Reply from 192.168.2.1: Destination host unreachable.
        Reply from 216.34.181.45: bytes=32 time=99ms TTL=239
        Reply from 216.34.181.45: bytes=32 time=100ms TTL=239
        Reply from 216.34.181.45: bytes=32 time=101ms TTL=239

    Pinging anywhere external always seems to hit 192.168.2.1 first, which doesn't seem right. Trying tracert from the server gives the following:

        Tracing route to slashdot.org [216.34.181.45] over a maximum of 30 hops:
          1  MYSERVER01.intranet [192.168.2.1]  reports: Destination host unreachable

    Trying tracert from a desktop gives the following:

        Tracing route to slashdot.org [216.34.181.45] over a maximum of 30 hops:
          1   <1 ms     *     <1 ms  MYSERVER [192.168.2.1]
          2     *       *       *    Request timed out.
          3    6 ms    6 ms    6 ms  dsl-gw1.ge.mer.uk.webtapestry.net [217.151.111.17]
          4   38 ms  239 ms  251 ms  gw-router.ge.mer.uk.webtapestry.net [217.151.111.13]

    ...and then all is fine after that. I think that DNS is working fine, because the domain names are getting translated to the correct IPs immediately. DHCP seems to be okay? So perhaps it's something up with my RRAS setup, although I can't see any option during the setup wizard which I would have filled in differently. I've also tried changing the binding order of the two network connections to prioritise MyWAN, but that doesn't seem to have done anything. Any idea what's up? Many thanks - Rob
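    A hedged place to look first: that first-hop "Destination host unreachable" from 192.168.2.1 is the classic signature of a default gateway (or a stale route) configured on the LAN-facing NIC, so the first connection attempt goes out the wrong interface and has to fail before the WAN route is tried. A multi-homed box should normally carry a default gateway on the WAN adapter only; to check, and if needed re-apply the LAN address with no gateway (interface names are the ones from the post):

        route print
        rem re-apply the LAN address without a gateway parameter
        netsh interface ipv4 set address name="MyLAN" static 192.168.2.1 255.255.255.0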

  • Why do some games randomly turn my screen a random solid color?

    - by Emlena.PhD
    When playing some games, my computer will randomly have an error that I cannot fix without turning it off and back on again. The screen changes to one solid color, which varies (off the top of my head I can remember seeing solid green, magenta, etc.), and the sound blares a single tone. The sound sometimes briefly restores, and I can still hear the game sounds and even hear and still be heard by people in my Mumble channel, but the screen doesn't right itself, so I'm still blind.

    What's more, this happens in some games but not in others - while the game is actually running, not while I'm still in the menu. However, it does happen if I'm AFK or idle but the game world is still rendering.

    Games where the error occurs:

    - League of Legends
    - World of Warcraft
    - Trine
    - The Sims 2
    - Dungeon Defenders

    Safe games - games where it has never occurred:

    - Tribes: Ascend
    - Star Wars: The Old Republic
    - Battlefield 3

    So relatively older games cause the problem while newer games do not? I cannot predict when it will happen; it just seems random. However, if it happens and I try playing the same game further after restart, it does appear to occur more frequently after the first time. But if I switch to a safe game, it doesn't continue happening. Both of my RAM sticks appear fine - in flipped positions or either one on its own, games still run and the computer still boots. I would think over-heating, but then why not all games? Also, sometimes it happens immediately after I start playing, within seconds of the 3D world booting up.

    I'm looking to upgrade very soon, so I want to figure out what component or software is fubar and replace/repair it. Any suggestions or recommendations of tools would be helpful. Below is some system information. Dxdiag does not detect any problems.

        Operating System: Windows 7 Home Premium 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.120305-1505)
        System Manufacturer: Gigabyte Technology Co., Ltd.
        System Model: EP45-UD3R
        BIOS: Award Modular BIOS v6.00PG
        Processor: Intel(R) Core(TM)2 Duo CPU E8500 @ 3.16GHz (2 CPUs), ~3.2GHz
        Memory: 4096MB RAM
        DirectX Version: DirectX 11
        DxDiag Version: 6.01.7601.17514 64bit Unicode
        Graphics card name: NVIDIA GeForce GTX 285
        Driver Version: 8.17.12.9610 (the error has occurred with several driver versions)
        Sound: I do not have a sound card; I've been using the motherboard's built-in sound.

  • In search of a good audio player for Ubuntu 9.10

    - by Joe Casadonte
    If this should be marked Community Wiki, please let me know.

    I'm switching from XP to Ubuntu, and I have been very disappointed with the selection of media players available. I'm primarily interested in an audio player, but integrated video and library management is OK, too. My criteria:

    - Must be able to play audio CDs (I'm shocked how many apps this does away with, right away)
    - Must be able to play MP3 & WAV; OGG, SHN, FLAC are all bonuses
    - Repeat and Shuffle modes are a must
    - FreeDB / GraceNote through a proxy is a must (if it can read a PAC file, that would be awesome)
    - It needs to be really small, e.g. skinnable or an applet
    - Ability to execute a playlist is a plus
    - Gapless MP3 playback is a plus

    I'm running Gnome, but I'm not totally averse to a KDE app. Command-line only is also a viable option. Some that I've tried:

    - RhythmBox - probably the best of the lot that I've tried; I don't like its mini mode (doesn't show the song being played) and I can't figure out how to get it to hit FreeDB/GraceNote through a proxy
    - Songbird - can't play CDs, playlist management is atrocious
    - Banshee
    - Jajuk

    Maybe a couple more. Thanks!

    UPDATE: I tried out VLC, Amarok and Songbird (again). VLC I eventually got to work (I had some kind of bad configuration). It seemed way more involved than I was looking for out of a music player, and in general more geared to video than audio. I couldn't fathom its library management, which I think it has; maybe it doesn't, and that's why I couldn't figure it out. Amarok looked very promising, but the library management was not to my liking, and the way it handled a playlist with both MP3 and WAV is inexplicable at best. I did like some aspects of the UI, but not enough to keep it. Songbird is very finicky, but I like the library management - sort of. It kept telling me my Watch folder was invalid, even though it was clearly accessible. Playlist management is bizarre, and the message that it was deleting source files whenever I deleted a playlist had me too worried to keep using it. Had it been able to play CDs, maybe I would have persevered. Audacious, while a bit odd at times, does seem to do what I want. If it had a library manager, I wouldn't have bothered trying any of the others.

    Thanks for the help, everyone!

  • How much did it cost our competitor to DDoS us at 50 Gbps for two weeks?

    - by MiniQuark
    I know that this question may sound like an invalid serverfault question, but I believe that it's quite valid: the amount of time and effort that a sysadmin should spend on DDoS protection is a direct function of typical DDoS prices. Let me rephrase this: protecting a web site against small attacks is one thing, but resisting 50 Gbps of UDP flood is another and requires time & money. Deciding whether or not to spend that time & money depends on whether such an attack is likely or not, and this in turn depends on how cheap and simple such an attack is for the attacker.

    So here's the full story: our company has been the victim of a massive DDoS attack (over 50 Gbps of UDP traffic, full-time for 2 weeks). We are pretty sure that it's one of our competitors, and we actually know which one, because we were the only two remaining competitors on a very big request for proposal, and the DDoS attack magically stopped the day we won (double hurray, by the way)! These people have proved in the past that they are very dishonest, but we know that they are not technical at all, so we believe that they simply paid for some botnet DDoS service.

    I would like to know how much these services typically cost, for such a large-scale attack. Please do not give any link to such services; I would really hate to give these people any publicity. I understand that a hacker could very well do this for free, but what's a typical price for such an attack if our competitors paid for it through some kind of botnet service? It is really starting to scare me (if we're talking thousands of dollars here, then I am really going to freak out: who knows, they might just hire a hit-man one day?). Of course we filed a complaint, but the police say that they cannot do much about it (DDoS attacks are virtually untraceable, so they say), and our suspicions are not enough to justify them raiding our competitor's offices to search for proof.

    For your information, we have now changed our infrastructure to be able to sustain such attacks: we now use a major CDN service so that our servers are not directly affected by DDoS attacks. Requests for dynamic pages do get proxied to our servers, but for low-level attacks (UDP floods or SYN floods, for example) we only receive legitimate traffic, so we're fine. If they decide to launch higher-level attacks (HTTP floods or slowloris attacks, for example), most of the load should be handled by the CDN... at least I hope so!

    Thank you very much for your help.

  • debian packages version convention

    - by JackWu
    I'm using Debian/Ubuntu, and I'm confused about the versions of packages. When using the dpkg -l command, I get:

        ii  vim                2:7.3.429-2ubuntu2.1       Vi IMproved - enhanced vi editor
        ii  vim-common         2:7.3.429-2ubuntu2.1       Vi IMproved - Common files
        ii  vim-runtime        2:7.3.429-2ubuntu2.1       Vi IMproved - Runtime files
        ii  vim-tiny           2:7.3.429-2ubuntu2.1       Vi IMproved - enhanced vi editor - compact version
        ii  virt-what          1.11-1                     detect if we are running in a virtual machine
        ii  w3m                0.5.3-5ubuntu1             WWW browsable pager with excellent tables/frames support
        ii  watershed          6                          reduce superfluous executions of idempotent command
        ii  wget               1.13.4-2ubuntu1            retrieves files from the web
        ii  whiptail           0.52.11-2ubuntu10          Displays user-friendly dialog boxes from shell scripts
        ii  whoopsie           0.1.33                     Ubuntu crash database submission daemon
        ii  wimlib9            1.5.0-1~webupd8~precise    Library to extract, create, modify, and mount WIM files
        ii  wimtools           1.5.0-1~webupd8~precise    Tools to extract, create, modify, and mount WIM files
        ii  wireless-tools     30~pre9-5ubuntu2           Tools for manipulating Linux Wireless Extensions
        ii  wpasupplicant      0.7.3-6ubuntu2.1           client support for WPA and WPA2 (IEEE 802.11i)
        ii  x11-common         1:7.6+12ubuntu2            X Window System (X.Org) infrastructure
        ii  x11-utils          7.6+4ubuntu0.1             X11 utilities
        ii  xauth              1:1.0.6-1                  X authentication utility
        ii  xbitmaps           1.1.1-1                    Base X bitmaps
        ii  xclip              0.12-1                     command line interface to X selections
        ii  xfonts-encodings   1:1.0.4-1ubuntu1           Encodings for X.Org fonts
        ii  xfonts-utils       1:7.6+1                    X Window System font utility programs
        ii  xkb-data           2.5-1ubuntu1.3             X Keyboard Extension (XKB) configuration data
        ii  xml-core           0.13                       XML infrastructure and XML catalog file support
        rc  xpdf               3.02-21build1              Portable Document Format (PDF) reader
        ii  xterm              271-1ubuntu2.1             X terminal emulator
        ii  xz-lzma            5.1.1alpha+20110809-3      XZ-format compression utilities - compatibility commands
        ii  xz-utils           5.1.1alpha+20110809-3      XZ-format compression utilities
        ii  zabbix-agent       1:1.8.11-1                 network monitoring solution - agent
        ii  zlib1g             1:1.2.3.4.dfsg-3ubuntu4    compression library - runtime
        ii  zlib1g-dev         1:1.2.3.4.dfsg-3ubuntu4    compression library - development
        ii  zsh                4.3.17-1ubuntu1            shell with lots of features

    The third column is the version, but it's all mixed up in a way I can't understand; different packages seem to use totally different naming conventions. Here are the major questions:

    - Why do some versions have "ubuntu" in them while others don't?
    - What do all the special characters (-, ~, +) mean?
    - What are "alpha", "build", and "dfsg"? Can I just use them casually?
    - vim and other packages have "2:" - what does that mean?
    - How does version comparison work, given that the formats can be so different?

    Can anyone please explain this to me? Or where can I find an official document? Thanks in advance.
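    A worked example may help frame the questions: the general shape is epoch:upstream_version-debian_revision, and dpkg itself can be asked how any two versions compare:

        # the epoch ("2:") dominates everything after it
        dpkg --compare-versions 2:7.3.429-2ubuntu2.1 gt 1:999.0 && echo "epoch wins"
        # "~" sorts BEFORE the empty string, so pre-releases order correctly
        dpkg --compare-versions 30~pre9-5ubuntu2 lt 30-1 && echo "~pre9 is earlier"

    (The version strings are taken from the listing above.) Both commands print their message, which illustrates the two rules that make the third column look inconsistent even though the comparison algorithm is uniform.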

  • What is the current state of Ubuntu's transition from init scripts to Upstart? [migrated]

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server.

        # /etc/init.d/                # /etc/init/
        acpid                         acpid.conf
        apache2                       ---------------------------
        apparmor                      ---------------------------
        apport                        apport.conf
        atd                           atd.conf
        bind9                         ---------------------------
        bootlogd                      ---------------------------
        cgroup-lite                   cgroup-lite.conf
        ---------------------------   console.conf
        console-setup                 console-setup.conf
        ---------------------------   container-detect.conf
        ---------------------------   control-alt-delete.conf
        cron                          cron.conf
        dbus                          dbus.conf
        dmesg                         dmesg.conf
        dns-clean                     ---------------------------
        friendly-recovery             ---------------------------
        ---------------------------   failsafe.conf
        ---------------------------   flush-early-job-log.conf
        ---------------------------   friendly-recovery.conf
        grub-common                   ---------------------------
        halt                          ---------------------------
        hostname                      hostname.conf
        hwclock                       hwclock.conf
        hwclock-save                  hwclock-save.conf
        irqbalance                    irqbalance.conf
        killprocs                     ---------------------------
        lxc                           lxc.conf
        lxc-net                       lxc-net.conf
        module-init-tools             module-init-tools.conf
        ---------------------------   mountall.conf
        ---------------------------   mountall-net.conf
        ---------------------------   mountall-reboot.conf
        ---------------------------   mountall-shell.conf
        ---------------------------   mounted-debugfs.conf
        ---------------------------   mounted-dev.conf
        ---------------------------   mounted-proc.conf
        ---------------------------   mounted-run.conf
        ---------------------------   mounted-tmp.conf
        ---------------------------   mounted-var.conf
        networking                    networking.conf
        network-interface             network-interface.conf
        network-interface-container   network-interface-container.conf
        network-interface-security    network-interface-security.conf
        newrelic-sysmond              ---------------------------
        ondemand                      ---------------------------
        plymouth                      plymouth.conf
        plymouth-log                  plymouth-log.conf
        plymouth-splash               plymouth-splash.conf
        plymouth-stop                 plymouth-stop.conf
        plymouth-upstart-bridge       plymouth-upstart-bridge.conf
        postgresql                    ---------------------------
        pppd-dns                      ---------------------------
        procps                        procps.conf
        rc                            rc.conf
        rc.local                      ---------------------------
        rcS                           rcS.conf
        ---------------------------   rc-sysinit.conf
        reboot                        ---------------------------
        resolvconf                    resolvconf.conf
        rsync                         ---------------------------
        rsyslog                       rsyslog.conf
        screen-cleanup                screen-cleanup.conf
        sendsigs                      ---------------------------
        setvtrgb                      setvtrgb.conf
        ---------------------------   shutdown.conf
        single                        ---------------------------
        skeleton                      ---------------------------
        ssh                           ssh.conf
        stop-bootlogd                 ---------------------------
        stop-bootlogd-single          ---------------------------
        sudo                          ---------------------------
        ---------------------------   tty1.conf
        ---------------------------   tty2.conf
        ---------------------------   tty3.conf
        ---------------------------   tty4.conf
        ---------------------------   tty5.conf
        ---------------------------   tty6.conf
        udev                          udev.conf
        udev-fallback-graphics        udev-fallback-graphics.conf
        udev-finish                   udev-finish.conf
        udevmonitor                   udevmonitor.conf
        udevtrigger                   udevtrigger.conf
        ufw                           ufw.conf
        umountfs                      ---------------------------
        umountnfs.sh                  ---------------------------
        umountroot                    ---------------------------
        ---------------------------   upstart-socket-bridge.conf
        ---------------------------   upstart-udev-bridge.conf
        urandom                       ---------------------------
        ---------------------------   ureadahead.conf
        ---------------------------   ureadahead-other.conf
        ---------------------------   wait-for-state.conf
        whoopsie                      whoopsie.conf

    To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of which framework handles which services). So I was quite surprised to learn that there was a significant amount of overlap in service references, in addition to being unable to discern which of the two was intended to be the primary service framework.

    Why does there seem to be a fair amount of redundancy in individual service handling between init.d and upstart? Is something else at play here that I'm missing? What is preventing upstart from completely taking over for init.d? Is there some functionality that certain daemons require which upstart does not yet have, which is preventing some services from converting? Or is it something else entirely?
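    As an aside, the system itself can be asked which framework owns a given service; a quick sketch for cross-checking any row of the table above:

        # upstart jobs and their current state
        initctl list | grep -w ssh
        # SysV scripts still wired into the rc*.d runlevels
        ls /etc/rc2.d/ | grep apache2
        # many /etc/init.d entries for converted services are just pointers:
        file /etc/init.d/ssh    # reports a symlink for upstart-managed jobs

    (On 12.04, /etc/init.d/ssh is a symlink to /lib/init/upstart-job, which is how a service can legitimately appear in both directories at once.)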

  • SSL support with Apache and Proxytunnel

    - by whuppy
    I'm inside a strict corporate environment. https traffic goes out via an internal proxy (for this example it's 10.10.04.33:8443) that's smart enough to block ssh'ing directly to ssh.glakspod.org:443. I can get out via proxytunnel. I set up an apache2 VirtualHost at ssh.glakspod.org:443 thus:

        ServerAdmin [email protected]
        ServerName ssh.glakspod.org

        <!-- Proxy Section -->
        <!-- Used in conjunction with ProxyTunnel -->
        <!-- proxytunnel -q -p 10.10.04.33:8443 -r ssh.glakspod.org:443 -d %host:%port -->
        ProxyRequests on
        ProxyVia on
        AllowCONNECT 22
        <Proxy *>
            Order deny,allow
            Deny from all
            Allow from 74.101
        </Proxy>

    So far so good: I hit the Apache proxy with a CONNECT, then PuTTY and my ssh server shake hands and I'm off to the races. There are, however, two problems with this setup:

    1. The internal proxy server can sniff my CONNECT request and also see that an SSH handshake is taking place. I want the entire connection between my desktop and ssh.glakspod.org:443 to look like HTTPS traffic no matter how closely the internal proxy inspects it.
    2. I can't get the VirtualHost to be a regular https site while proxying. I'd like the proxy to coexist with something like this:

        SSLEngine on
        SSLProxyEngine on
        SSLCertificateFile /path/to/ca/samapache.crt
        SSLCertificateKeyFile /path/to/ca/samapache.key
        SSLCACertificateFile /path/to/ca/ca.crt

        DocumentRoot /mnt/wallabee/www/html
        <Directory /mnt/wallabee/www/html/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

        <!-- Need a valid client cert to get into the sanctum -->
        <Directory /mnt/wallabee/www/html/sanctum>
            SSLVerifyClient require
            SSLOptions +FakeBasicAuth +ExportCertData
            SSLVerifyDepth 1
        </Directory>

    So my question is: how do I enable SSL support on the ssh.glakspod.org:443 VirtualHost so that it works with ProxyTunnel? I've tried various combinations of proxytunnel's -e, -E, and -X flags without any luck. The only lead I've found is Apache Bug No. 29744, but I haven't been able to find a patch that will install cleanly on Ubuntu Jaunty's Apache version 2.2.11-2ubuntu2.6.

    Thanks in advance.
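    For what it's worth - hedged, since this is close to the combination already reported not to work: the documented proxytunnel approach for goal #1 is to let Apache terminate TLS itself (SSLEngine on in the same vhost as the proxy directives) and have the client wrap its CONNECT in TLS with -e, so the corporate proxy only ever sees an ordinary https session:

        # client side; the remote "proxy" is the Apache vhost above
        proxytunnel -q -p 10.10.04.33:8443 -r ssh.glakspod.org:443 -e -d %host:%port

    If Apache 2.2.11 refuses to mix SSLEngine with CONNECT (the symptom in Bug 29744), one workaround people use is splitting the roles: a plain-TLS vhost on :443 that reverse-proxies to a CONNECT-only Apache instance bound to localhost. That keeps goal #2's DocumentRoot and client-cert sections in the TLS vhost unchanged.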

  • Radeon HD4850 serious issues when using DirectX 10

    - by ricsmania
    Hello, I have a problem with my video card. Whenever I run a DirectX 10 game, it works for a few seconds (10 or so) and then starts displaying nothing but big polygons. I have tested this with Crysis and Resident Evil 5; both have the same problems. The same games running under DirectX 9 work fine, except for some small black squares once in a while.

    I have the following specs:

    - Asus P7P55D LE
    - Intel Core i5 750
    - Sapphire Radeon HD4850 1GB
    - 2x2GB Patriot Viper II Sector 5, DDR3 1600 MHz
    - OCZ Stealth X Stream 500SXS 500W

    At first I thought it could be the video card overheating (it has stock cooling), but the game crashes even when it's running at 50 degrees C, and it's never been higher than 70. I also thought it could be the PSU, but as far as I know 500W is enough for this computer, especially because I haven't overclocked anything. My OS is Windows 7 x64 and I am using Catalyst 10.10, but I have also tried many older versions with no success. I don't think there is a problem with the card itself, or else it wouldn't run DirectX 9 games, I believe. I have spent many hours searching for a solution but couldn't find one, so any help is appreciated. Thank you.

    EDIT: I did some further investigation of the problem, and it seems taspeotis was right: it might be related to memory. I slightly underclocked the memory from 993 to 965 MHz and the problem went away completely - both the black squares using DirectX 9 and the weird polygons using DirectX 10. I was using the RE DirectX 10 benchmark, as it consistently crashed around the same point, and now I can play the full benchmark with no artifacts at all.

    Unfortunately, the underclock has an obvious hit in performance. Although it's not critical, it's definitely noticeable. So, if the video memory test software showed no errors, but the card needs an underclock to work, what might be the problem? Temperature? Voltage? By the way, I couldn't find what the default voltage for this card is. And what is a good piece of software to try and increase it? I tried ATI Tray Tools, but it has a bug that increases the clock speed dramatically whenever I change something in the Overclock tab, so I'm afraid it might fry my card. Worst-case scenario, if I don't find a solution I will try to slightly increase the GPU clock to compensate for the memory clock. Thank you again.

  • libreadline history lines combine

    - by jettero
    This has been driving me crazy for about three years. I don't know how to fully describe the problem, but I think I can finally describe a way to recreate it. Your mileage may vary. I have a mixture of Ubuntu server and desktop machines of various versions, and a few Gentoo machines in various states of disrepair. They all seem to kind of do their own thing, although with similarities. Try this and let me know if you see the same thing:

    1. Pop open two xterms (TERM=xterm).
    2. Resize one so they're not the same size.
    3. Run "screen -R test1" in one (TERM=screen) and "screen -x test1" in the other.
    4. Hooray, typing in one shows up in the other; although notice that their different sizes produce artifacts and things.
    5. Issue a couple of commands in your shell.
    6. Hit ^AF in the one that doesn't fit quite right; now it fits!!
    7. Scroll back over the history a little.
    8. Goto 6.

    Eventually you'll notice a couple of history lines combine. If you don't, then it's something unique to my setup, which spans various distributions and computers, so that's a confusing concept to me. If you see the thing I'm seeing, then this:

        bash$ ls -al
        bash$ ps auxfw

    becomes this:

        bash$ ls -al; ps auxfw

    It doesn't happen every time. I have to really play with it -- unless I don't want it to happen, then it always does. On some systems (or combinations), I get a line separator like the example above. On some systems, I do not. That I get the line separator on some systems seems to indicate to me that bash supports this behavior. Its history is entirely handled by libreadline, and after perusing (i.e., carefully reading) the man pages, I couldn't find a single readline setting for combining two history lines. Nor can I find anything in the bash manpage.

    So, how can I invoke this on purpose? Or, if I can't do that, how can I disable it completely? I would take either answer as a solution. Currently, I only see it when I don't want it.

  • Windows 2008 R2 IPsec encryption in tunnel mode, hosts in same subnet

    - by fission
    In Windows there appear to be two ways to set up IPsec:

    1. The IP Security Policy Management MMC snap-in (part of secpol.msc, introduced in Windows 2000).
    2. The Windows Firewall with Advanced Security MMC snap-in (wf.msc, introduced in Windows 2008/Vista).

    My question concerns #2 - I already figured out what I need to know for #1. (But I want to use the 'new' snap-in for its improved encryption capabilities.)

    I have two Windows Server 2008 R2 computers in the same domain (domain members), on the same subnet:

        server2  172.16.11.20
        server3  172.16.11.30

    My goal is to encrypt all communication between these two machines using IPsec in tunnel mode, so that the protocol stack is:

        IP
        ESP
        IP
        ...etc.

    First, on each computer, I created a Connection Security Rule:

    - Endpoint 1: (local IP address), e.g. 172.16.11.20 for server2
    - Endpoint 2: (remote IP address), e.g. 172.16.11.30
    - Protocol: Any
    - Authentication: Require inbound and outbound, Computer (Kerberos V5)
    - IPsec tunnel: Exempt IPsec protected connections
    - Local tunnel endpoint: Any
    - Remote tunnel endpoint: (remote IP address), e.g. 172.16.11.30

    At this point, I can ping each machine, and Wireshark shows me the protocol stack; however, nothing is encrypted (which is expected at this point). I know that it's unencrypted because Wireshark can decode it (using the setting "Attempt to detect/decode NULL encrypted ESP payloads") and the Monitor > Security Associations > Quick Mode display shows ESP Encryption: None.

    Then on each server, I created Inbound and Outbound Rules:

    - Protocol: Any
    - Local IP addresses: (local IP address), e.g. 172.16.11.20
    - Remote IP addresses: (remote IP address), e.g. 172.16.11.30
    - Action: Allow the connection if it is secure / Require the connections to be encrypted

    The problem: though I create the Inbound and Outbound Rules on each server to enable encryption, the data is still going over the wire (wrapped in ESP) with NULL encryption. (You can see this in Wireshark.) When the traffic arrives at the receiving end, it's rejected (presumably because it's unencrypted). [And disabling the Inbound rule on the receiving end causes it to lock up and/or bluescreen - fun!] The Windows Firewall log says, e.g.:

        2014-05-30 22:26:28 DROP ICMP 172.16.11.20 172.16.11.30 - - 60 - - - - 8 0 - RECEIVE

    I've tried varying a few things:

    - In the Rules, setting the local IP address to Any
    - Toggling the "Exempt IPsec protected connections" setting
    - Disabling rules (e.g. disabling one or both sets of Inbound or Outbound rules)
    - Changing the protocol (e.g. to just TCP)

    But realistically there aren't that many knobs to turn. Does anyone have any ideas? Has anyone tried to set up tunnel mode between two hosts using Windows Firewall? I've successfully got it set up in transport mode (i.e. no tunnel) using exactly the same set of rules, so I'm a bit surprised that it didn't Just Work™ with the tunnel added.
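    One avenue that sometimes shakes out snap-in quirks (hedged: option names should be verified against "netsh advfirewall consec add rule /?" on the target build) is creating the equivalent rule from netsh, where the quick-mode crypto is stated explicitly instead of being inherited from the "allow if secure" firewall rules:

        netsh advfirewall consec add rule name="srv2-srv3" ^
            endpoint1=172.16.11.20 endpoint2=172.16.11.30 ^
            action=requireinrequireout ^
            qmsecmethods=esp:sha1-aes256+60min+100000kb

    If the SA then shows ESP Encryption: AES-256 in the Quick Mode monitor, the original problem narrows down to how the GUI rules negotiate (or fail to negotiate) an encryption method rather than the tunnel configuration itself.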

    Read the article

  • Seeking past end of file causes Apache hang, and it never restarts.

    - by talkingnews
    I've actually solved my problem with a better script, but I'm still left wondering why Apache2 hung completely. This is an out-of-the-box ISPConfig 3.03 install, everything bang up to date, running perfectly. Until... The troublesome but innocent-looking script:

        $fp = fopen("/var/log/ispconfig/cron.log", "r");
        // If the log is smaller than 5000 bytes, this seek fails (fseek returns -1).
        fseek($fp, -5000, SEEK_END);
        $line_buffer = array();
        while (!feof($fp)) {
            // fgets() can return false here without feof() ever becoming true,
            // which leaves this loop spinning at 100% CPU.
            $line = fgets($fp, 1024);
            $line_buffer[] = $line;
            $line_buffer = array_slice($line_buffer, -10, 10);
        }
        foreach ($line_buffer as $line) {
            echo $line;
        }

    You get the idea: just a script I found on a forum somewhere. I did this for various logs, since it's a nice easy window on what's occurring (in a protected dir, of course!). One day, the logs having grown large and me having sorted all my cron, scripting and mail-queue errors, I thought it was time to start afresh. I updated, rebooted, archived and deleted the logs. When I ran my script a couple of hours later, it hung. And hung. I waited 8 minutes. Chrome timed the page out, of course, but the server never came back to life; htop showed /usr/sbin/apache2 -k restart using 100% CPU. It never came back until I did a service apache2 restart. Then it ran fine, until the moment I hit that logfile page again... dead. So I worked out it was the logfile script, and I worked out that seeking outside the bounds of the file wasn't good, and I found a better script: http://www.php.net/manual/en/function.fseek.php#90450 But what I'm left wondering is: why didn't something restart or kill the process? How was one hanging page able to bring down the whole server? It's running suPHP. I say "out of the box", but I've tweaked MySQL and Apache to fork and reserve sensible numbers of processes for the 512MB of RAM the VPS has, and it'll handle multiple refreshes of large pages; it hadn't hung before. Any ideas how I'd avoid this? Google isn't my friend in this instance, beyond the recommendations above about number of processes vs available RAM.
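    One avenue I'm considering for the "how do I avoid this?" part (untested, and the config location below is just illustrative): have Apache cap the CPU time of the processes it forks. Under suPHP the interpreter runs as a forked CGI process, so RLimitCPU should apply to it, and the kernel would then kill a runaway loop instead of letting it spin forever:

        # /etc/apache2/conf.d/rlimits.conf (hypothetical path)
        RLimitCPU 30 60          # soft/hard CPU-seconds per forked process
        RLimitMEM 268435456      # also cap per-process memory, in bytes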

    Read the article

  • Moving from VPS to Cloud

    - by GRIGORE-TURBODISEL
    ...and I have a few questions. I'm basically working on a MySQL+PHP based webapp. Since I don't have on-demand scaling with a VPS, I'm planning to move from VPS to cloud when I hit the 1,000-subscriber barrier. I'm looking at Windows Azure, but I'm OK with other suggestions. So here are my questions:

    1. Will it really cost me a kidney? Every subscriber needs to download around 4-5MB of static resources each day. Bandwidth is free on the VPS, but here I see costs can easily get to $800.00/mo; this makes me very insecure about the whole thing. I mean, the VPS is just $2,000/yr.
    2. Do I need another VM, or is PHP included in the Web Sites? I have basic sysadmin skills and I think I can handle setting up a PHP install, but will I have to? If so, what other services do I need to set up manually? What about Memcached, MySQL, etc.?
    3. What security protections does it include? For example, I currently have some basic protection included, like filtering of directory traversals and executable-file uploads; I also have CloudFlare on my other websites for DDoS protection. Will I need to do the same thing here too? Can it even be installed? Can I edit my DNS records, etc.?
    4. How are e-mails, subdomains, add-on domains, parked domains, etc. handled? I haven't seen any references to e-mail boxes. On the VPS I simply add them from cPanel ([email protected] / whatever.mysite.com / ...); do I have a similar management interface here?
    5. Do I get SSH access? Or at least FTP, remote MySQL access, and maybe some incremental back-ups or something? Can I see my quotas and advanced traffic info?

    I must mention that I really like the whole "cloud" concept, the added reliability and everything, but I need a parallel to regular hosting so I know what to expect.

    Read the article

  • Any way to recover ext4 filesystems from a deleted LVM logical volume?

    - by Vegar Nilsen
    The other day I had a proper brain-fart moment while expanding a disk on a Linux guest under VMware. I stretched the VMware disk file to the desired size and then did what I usually do on Linux guests without LVM: I deleted the LVM partition and recreated it, starting at the same spot as the old one but extended to the new size of the disk (which would normally be followed by fsck and resize2fs). And then I realized that LVM doesn't behave the same way as ext2/3/4 on raw partitions...

    After restoring the Linux guest from the most recent backup (taken only five hours earlier, luckily), I'm now curious how I could have recovered from the following scenario. It's after all virtually guaranteed that I'll be a dumb ass in the future as well.

    The setup: a virtual Linux guest with one disk, partitioned into one /boot (primary) partition (/dev/sda1) of 256MB, with the rest in a logical, extended partition (/dev/sda5). /dev/sda5 is then set up as a physical volume with pvcreate, and one volume group (vgroup00) is created on top of it with the usual vgcreate command. vgroup00 is then split into two logical volumes, root and swap, which are used for / and swap, logically. / is an ext4 file system.

    Since I had backups of the broken guest, I was able to recreate the volume group with vgcfgrestore from the backup LVM setup found under /etc/lvm/backup, with the same UUID for the physical volume and all that. After running this I had two logical volumes of the same size as earlier, with 4GB of free space where I had stretched the disk. However, when I tried to run "fsck /dev/mapper/vgroup00-root" it complained about a broken superblock. I tried to locate backup superblocks by running "mke2fs -n /dev/mapper/vgroup00-root", but none of those worked either. Then I tried to run TestDisk, but when I asked it to find superblocks it only gave an error about not being able to open the broken file system.

    So, with the default allocation policy for LVM2 in Ubuntu Server 10.04 64-bit, is it possible that the logical volumes are allocated from the end of the volume group? That would definitely explain why the restored logical volumes didn't contain the expected data. Could I have recovered by recreating /dev/sda5 with exactly the same size and disk position as earlier? Are there any other tools I could have used to find and recover the file system? (And clearly, the question is not whether or not I should have done this in a different way from the start; I know that. This is a question about what to do when shit has already hit the fan.)
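    For what it's worth, the allocation question can be checked empirically on a healthy machine by dumping the extent map; a quick sketch using the device and volume names from the scenario above:

        # Show which physical extents on the PV back which logical volume; if
        # root's extents don't begin at extent 0, recreating the partition at
        # the "same start" was never going to line the file system up again.
        sudo pvdisplay --maps /dev/sda5

        # Per-LV view of the same placement information:
        sudo lvs -o +devices vgroup00

        # List the metadata archives that vgcfgrestore can restore from:
        sudo vgcfgrestore --list vgroup00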

    Read the article

< Previous Page | 314 315 316 317 318 319 320 321 322 323 324 325  | Next Page >