Search Results

Search found 12376 results on 496 pages for 'active pattern'.


  • Windows Not Honoring DHCP Scope

    - by jerhinesmith
    Please bear with me, as I'm not a networking person by trade. Our current configuration at work includes two Windows servers acting as DHCP/Active Directory servers, one replicating from the other. On both machines, DNS resolution is set up as:

      - Main Windows box (10.*.*.* address)
      - Public IP address (for Verizon)
      - Public IP address (secondary Verizon)
      - Secondary Windows box (10.*.*.* address)

    Assuming our domain is foo.com, we maintain the foo.com website on a hosted VPS with its own IP address. The problem is that even though bar.foo.com is an internal server and is defined in DNS on the primary Windows machine, when I ping bar or even bar.foo.com it resolves to the hosted IP address instead of the 10.* address. I tried taking both of the public IP addresses out of the DHCP scope, and that seemed to work, but it completely slowed down access to any external sites, so that wasn't acceptable. I also tried adding the two Windows machines as the DNS servers on my desktop. That too worked, but I'd rather not have everyone enter those DNS servers manually, as the above setup should theoretically be working. Is there anything I could check to see why pinging bar.foo.com isn't resolving to the DNS entry on the Windows machines? Here's a summary of the ping results, if they help:

      - Pinging from servers with a static IP: bar.foo.com resolves with the correct IP address
      - Pinging from Linux machines not joined to the domain: bar.foo.com resolves with the correct IP address
      - Pinging from users' desktop machines, joined to the domain but with a dynamic IP: bar.foo.com resolves with the incorrect IP address

    This is driving me crazy!
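
    As a starting point for narrowing down which resolver the affected desktops are actually using, a read-only check along these lines can help; it is only a diagnostic sketch, and 10.0.0.10 is a placeholder for one of the internal DNS servers:

      rem which DNS servers did DHCP hand out to this client?
      ipconfig /all | findstr /i "DNS"
      rem which address does the default resolver return?
      nslookup bar.foo.com
      rem ask an internal DC directly (10.0.0.10 is a placeholder for its 10.* address)
      nslookup bar.foo.com 10.0.0.10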

    Read the article

  • Recommended drive encryption solution

    - by Chris Driver
    Hello, I will soon be purchasing a number of laptops running Windows 7 for our mobile staff. Due to the nature of our business I will need drive encryption. Windows BitLocker seems the obvious choice, but it looks like I need to purchase either the Windows 7 Enterprise or Ultimate edition to get it. Can anyone offer suggestions on the best course of action:

      a) Use BitLocker, bite the bullet and pay to upgrade to Enterprise/Ultimate
      b) Pay for a cheaper third-party drive encryption product (suggestions appreciated)
      c) Use a free drive encryption product such as TrueCrypt

    Ideally I am also interested in real-world experience from people who are using drive encryption software, and any pitfalls to look out for. Many thanks in advance...

    UPDATE: I decided to go with TrueCrypt for the following reasons:

      a) The product has a good track record
      b) I am not managing a large number of laptops, so integration with Active Directory, management consoles, etc. is not a huge benefit
      c) Although eks did make a good point about Evil Maid (EM) attacks, our data is not desirable enough to consider that a major factor
      d) The cost (free) is a big plus, but not the primary motivator

    The next problem I face is imaging: Acronis/Ghost/etc. will not image encrypted drives unless I perform sector-by-sector imaging. That means an 80 GB encrypted partition creates an 80 GB image file :(

    Read the article

  • Finding the default gateway in an OpenVPN environment on Windows

    - by Alexander Trümper
    I need to find the default gateway in an OpenVPN scenario where the route output looks like this:

      IPv4 Route Table
      ===========================================================================
      Active Routes:
      Network Destination        Netmask          Gateway       Interface  Metric
                0.0.0.0          0.0.0.0       10.49.73.1      10.49.73.24      10
                0.0.0.0        128.0.0.0         10.8.0.1         10.8.0.2      30

    So I googled around a bit and found this script:

      @For /f "tokens=3" %%* in (
          'route.exe print ^|findstr "\<0.0.0.0\>"'
      ) Do @Set "DefaultGateway=%%*"
      echo %DefaultGateway%

    This works, but it matches both lines in the route output, and I only need this one:

      0.0.0.0          0.0.0.0       10.49.73.1      10.49.73.24      10

    So I tried to modify the findstr parameter like this:

      findstr "\<0.0.0.0\>.\<0.0.0.0\>"

    in the expectation that '.' would match the whitespace between the columns. But it doesn't: DefaultGateway is still set to 10.8.0.1. I couldn't find a clue in the Microsoft documentation either. Maybe someone knows the right expression? Thanks a lot.
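
    One way to make the match unambiguous is to require two adjacent 0.0.0.0 columns, using findstr's regex mode with an explicit /c: search string. A sketch, assuming the English column order of route print (Destination, Netmask, Gateway, ...):

      @echo off
      rem match only the line whose Destination AND Netmask are both 0.0.0.0,
      rem then take the third column (the gateway)
      for /f "tokens=3" %%G in (
          'route.exe print ^| findstr /r /c:"0\.0\.0\.0  *0\.0\.0\.0"'
      ) do set "DefaultGateway=%%G"
      echo %DefaultGateway%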

    Read the article

  • Nginx flv audio pseudo stream works but video is not loading

    - by sarah
    I am working on a development server for a company, and they want the nginx web server to work with it. The requirements for their setup are that it should be capable of the following: hotlink protection, MP4 and FLV pseudo-streaming, and secure streaming. nginx covers these requirements, and I have been configuring their server for the past 2 days. As I am new to this field, I've only achieved hotlinking prevention so far. The problem I am stuck on is FLV pseudo-streaming: getting MP4 pseudo-streaming to work was a piece of cake, but I am really stuck on FLV. I have converted my FLV videos with the FLVMDI tool to insert many keyframes, but when I try to seek the video using one of the keyframes generated by FLVMDI, e.g. test.flv?start=2681223, the video does not load, while audio pseudo-streaming works fine. So it seems there is no problem with my FLV configuration in nginx.conf. The guide I used to compile nginx-1.2.1 is http://h264.code-shop.com/trac/wiki/Mod-H264-Streaming-Nginx-Version2 and I added the additional module --with-http_flv_module. This forum is really active, so I hope I can resolve my problem as soon as you guys can provide some guidance.
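
    For reference, a minimal location block for the FLV module looks like the sketch below; the root path is a placeholder and it assumes the --with-http_flv_module build mentioned above. Note that the module's start= argument is a byte offset, so it has to match one of the keyframe positions FLVMDI wrote into the metadata:

      location ~ \.flv$ {
          flv;                      # enable byte-offset seeking via ?start=
          root /var/www/videos;     # placeholder document root
      }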

    Read the article

  • Can a USB/IDE/SATA adapter be flaky?

    - by Ward
    I use USB/IDE/SATA converters a lot, and on the two that I have now I sometimes get errors copying files to drives. It only happens when I'm copying big files to the drive (big can mean as little as 100 MB; I think it happens more often with bigger files, 300 MB or more). Basically the copy will fail and I'll get one or more error messages about "Delayed write failed." But if I disconnect the drive and re-connect it, I'll usually be able to continue. (The file that was being copied will be corrupt, but otherwise the drive is fine.) I just noticed a new type of flakiness: the data transfer rate can vary widely. I copied one set of files (5 x 300 MB) and it took 10+ minutes, then I copied another set (approximately the same sizes) and it took less than a minute. I haven't done systematic testing, the other things I'm doing on my laptop at the same time might have some impact, and I haven't cross-checked the two adapters and the 3 hard drives I'm working with to see if there's a pattern. I'm mostly wondering if anyone else has seen anything like this.

    Read the article

  • Openfire: Granular alerts

    - by R.S.
    Our organization has had an Openfire server up and running for about a year now. So far we have used it for messaging in the IT department and for alerts to all users. We hit a snag this week when one system went down and several notifications were sent out to inform users of progress. Some of the users were radiologists who do not use the particular system in question, and they found the messages more of an annoyance than informative. Since then I have been tasked with finding a more granular system for alerts. I am confident that Openfire can handle this, and I have just about settled on a way of getting it to work. My idea is to create a half dozen or so users, for example: Staff, Doctor, Assistant and Supervisor. Using Spark as our messenger has worked great so far, so I would like to stick with that if possible. With that in mind: under the advanced login features the resource name can be changed to something unique, so several people can log in under the same account with different resources. However, when a message is sent to one of these users, delivery is inconsistent. Currently I have 4 people logged in under the Assistant user, and it seems only one of them receives the messages. Is this scenario even possible? I am avoiding working with groups in Openfire because that function is atrocious. I could possibly integrate the system with our Active Directory, but I don't think that will get us to a workable solution any quicker or more efficiently.

    Read the article

  • EC2 AMI won't boot after edit

    - by Eric Lars0n
    I did something stupid: I got a new laptop, copied everything over to it, and then wiped the old one clean. Then I realized that I had forgotten to copy the private key out of .ssh that I use to connect to my AWS EBS-backed instance, so I can't log in to my custom AMI. So I created a new volume from the snapshot of the AMI, started up a public instance, attached the volume to it, and edited sshd_config to allow password log-ins. I unmounted the volume, detached it, made a snapshot of it, and then made a new AMI from the snapshot. The new AMI launches, but it never passes the status checks and is not reachable. What am I doing wrong? Or, alternatively, how can I fix my problem? Edit: adding some of the console output:

      Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007
      BIOS-provided physical RAM map:
      Xen: 0000000000000000 - 000000006a400000 (usable)
      980MB HIGHMEM available.
      727MB LOWMEM available.
      NX (Execute Disable) protection: active
      IRQ lockup detection disabled
      RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
      NET: Registered protocol family 2
      Registering block device major 8
      XENBUS: Timeout connecting to devices!
      Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,0)
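
    For what it's worth, the sshd_config edit (or simply installing a fresh public key) can be done directly on the attached volume, without building a new AMI at all; a rough sketch, where the device name, mount point, and home directory are assumptions that depend on how the volume was attached and which user the AMI uses:

      # attach the old root volume to the helper instance, then:
      sudo mount /dev/xvdf1 /mnt/old-root
      # either allow password logins on the old system (adjust to the actual sshd_config contents)...
      sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /mnt/old-root/etc/ssh/sshd_config
      # ...or just drop in a fresh public key instead
      cat new-key.pub | sudo tee -a /mnt/old-root/root/.ssh/authorized_keys
      sudo umount /mnt/old-root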

    Read the article

  • VMWare use of Gratuitous ARP REPLY

    - by trs80
    I have an ESXi cluster that hosts several Windows Server VMs and around 30 Windows workstation VMs. Packet captures show a high number of ARP replies of the form:

      sender_ip:  VM IP
      sender_mac: VM virtual MAC
      target_ip:  0.0.0.0
      target_mac: switch interface MAC

    The specific addresses aren't really a concern -- they're all legitimate, and we're not having any problems with communications (most of the questions about GARP and VMware have to do with ping issues, a problem we don't have). I'm looking for an explanation of the traffic pattern in an environment that functions as expected. So the question is: why would I see a high number of unsolicited ARP replies? Is this a mechanism VMware uses for some purpose? What is it? Is there an alternative?

    EDIT: Quick diagram:

      [esxi]--[switch vlan]--[inline IDS]--[fw]--(rest of network)

    The IDS is complaining about these unsolicited ARPs. Several IDS vendors trigger on ARP replies without a prior request, or on ARP replies that have a target IP of 0.0.0.0. The target MAC in these replies is the VLAN interface on the switch. Capture points:

      - The IDS grabs the offending packets
      - The FW can see the same ones
      - A VM on the ESXi host does not see these, although there is an ARP request for a specific IP on the ESXi host that has source_ip=0.0.0.0 and source_mac=[switch vlan interface]

    I can't share the captures, unfortunately. Really I'm interested in finding out if this is normal for an ESXi deployment.
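
    For anyone trying to reproduce the capture, unsolicited replies can be isolated at any of the capture points with a plain tcpdump filter on the ARP opcode; a small sketch (the interface name is a placeholder):

      # opcode 2 = ARP reply; -e prints link-level headers so sender/target MACs are visible
      tcpdump -n -e -i eth0 'arp and arp[6:2] == 2'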

    Read the article

  • Can Varnish cache files without a specific extension or residing in a specific directory?

    - by pataroulis
    I have a Varnish installation to cache (MANY) images that my service serves. It is about 200 images of around 4 KB per second, and Varnish happily serves them according to the following rule:

      if (req.request == "GET" && req.url ~ "\.(css|gif|jpg|jpeg|bmp|png|ico|img|tga|wmf)$") {
          remove req.http.cookie;
          return(lookup);
      }

    Now, the thing is that I recently added another service on the same server that creates thumbnails to serve, but it does not add a specific extension. The files follow this filename pattern:

      http://www.example.com/thumbnails/date-of-thumbnail/xxxxxxxxx.xx

    where the x's are numbers, so xxxxxxxxx.xx could be 6482364283.73 (two digits at the end -- actually this is a timestamp, so I can keep extra info in the filename). That has the side effect that Varnish does not cache them, and I see them constantly being served by Apache itself. Even though I can change the format from now on to create thumbs ending in .jpg, is there a way to change the VCL file of my Varnish daemon to cache either everything under a directory (the thumbnails directory) or everything with two digits as its extension? Let me know if I can provide any additional info! Thanks!
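
    As a sketch of the kind of rule being asked for, written in the same VCL dialect as the snippet above and untested against this particular setup:

      # cache everything under /thumbnails/, or any URL ending in a dot plus two digits
      if (req.request == "GET" && (req.url ~ "^/thumbnails/" || req.url ~ "\.[0-9][0-9]$")) {
          remove req.http.cookie;
          return(lookup);
      }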

    Read the article

  • Reusing slot numbers in Linux software RAID arrays

    - by thkala
    When a hard disk drive in one of my Linux machines failed, I took the opportunity to migrate from RAID5 to a 6-disk software RAID6 array. At the time of the migration I did not have all 6 drives -- more specifically, the fourth and fifth (slots 3 and 4) drives were still in use in the originating array, so I created the RAID6 array with a couple of missing devices. I now need to add those drives in those empty slots. Using mdadm --add does result in a proper RAID6 configuration, with one glitch: the new drives are placed in new slots, which results in this /proc/mdstat snippet:

      ...
      md0 : active raid6 sde1[7] sdd1[6] sda1[0] sdf1[5] sdc1[2] sdb1[1]
            25185536 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      ...

    mdadm -E verifies that the actual slot numbers in the device superblocks are correct, yet the numbers shown in /proc/mdstat are still weird. I would like to fix this glitch, both to satisfy my inner perfectionist and to avoid any potential source of future confusion in a crisis. Is there a way to specify which slot a new device should occupy in a RAID array?

    UPDATE: I have verified that the slot number persists in the component device superblock. For the version 1.0 superblocks that I am using, that would be the dev_number field as defined in include/linux/raid/md_p.h in the Linux kernel source. I am now considering direct modification of that field to change the slot number -- I don't suppose there is some standard way to manipulate the RAID superblock?
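
    For comparison on other arrays, the role recorded in a 1.x superblock can be read without modifying anything; a read-only sketch using the same tools mentioned above (device names are examples):

      # the kernel's view; the bracketed numbers here are descriptor indexes, not slot roles
      cat /proc/mdstat
      # the superblock's view: "Device Role : Active device N" is the slot that actually matters
      mdadm --examine /dev/sdd1 | grep -i 'device role'
      # --detail shows the RaidDevice (role) the kernel is using for each member
      mdadm --detail /dev/md0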

    Read the article

  • Clone Microsoft VM > SharePoint issues

    - by Rob
    Hi, we have an existing (production) Hyper-V VM that I want to clone to create a staging server. This server is NOT on a domain / in Active Directory -- it uses local computer accounts only. It runs Windows Server 2008, SharePoint 2007, SQL Server 2008, Reporting Services, and some custom line-of-business web applications. We renamed the server etc. as per the documentation we have, but are having trouble specifically with SharePoint: some of the sites do not come up, and there are issues adding/updating site settings such as site owners. Does anyone know of a definitive resource for this process? Thanks.

    Update: We seem to have most of it working, but the Shared Services Provider and the SSP's admin site are not working. In addition, the custom web parts that reference IIS session objects are failing (which seems to be related to the Shared Services Provider). All the Windows user accounts for the various services are renamed during the server cloning, but the Windows account names in SQL Server are not automatically changed. We've tried to change them in SQL Server, but it still does not seem to work. This seems like a lot of work -- I am thinking there must be some guidance out there. Also getting this:

      NullReferenceException: Object reference not set to an instance of an object.
         Microsoft.Office.Server.Administration.SqlSessionStateResolver.System.Web.IPartitionResolver.ResolvePartition(Object key) +135
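
    One step that is easy to miss when renaming a stand-alone MOSS 2007 server is telling SharePoint itself about the new name; a hedged sketch of that step (server names are placeholders, and the SQL logins for the service accounts still have to be corrected separately, as described above):

      rem update the server name recorded in the SharePoint configuration database
      stsadm -o renameserver -oldservername OLDSERVER -newservername NEWSERVER
      rem then re-run the configuration step so the farm picks up the change
      psconfig -cmd upgrade -inplace b2b -wait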

    Read the article

  • Server 2008 R2 & Domain Trusts - Attempt to Compromise Security

    - by SnAzBaZ
    We have two separate Active Directory domains, EUROPE and US, with a two-way trust between the domains/forests. I have a group of users called "USA Staff" that has access to certain shares on servers in the EUROPE domain, and a group called "EUROPE Staff" that has access to shares in the USA domain. Recently the USA PDC was upgraded to Windows Server 2008 R2. Now when I try to access a share on a USA server from a Windows 7 workstation in the EUROPE domain, I get the "Please enter your username / password" dialog box with a message at the bottom: "The system has detected a possible attempt to compromise security." When I enter a username/password for a user in the USA domain, I can then access the network resource. Entering credentials for a EUROPE user, however, does not give me access, even though my NTFS and share permissions are set to allow it. Windows Server 2003 and Windows Server 2008 did not have this problem; it seems to be unique to R2. I found KB938457 and opened up port 88 on the Server 2008 R2 firewall, but it did not make any difference. Any other suggestions as to what to turn off in R2 to get this working again? Thanks
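
    Before changing anything else, the trust and the machine's secure channel can be checked from the command line; a diagnostic sketch using the domain names above (run the netdom command on a domain controller):

      rem verify this machine's secure channel to its own domain
      nltest /sc_verify:EUROPE
      rem verify the trust from the EUROPE side towards USA
      netdom trust EUROPE /d:USA /verify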

    Read the article

  • Xen virtual host can reach some sites but not others

    - by Tun H S Lee
    Okay, this is killing me. Debian Squeeze, Xen 4.0, brand-new install. No iptables rules whatsoever except for the ones added by the default Xen bridge script. Dom0 can reach the entire world, no problems. DomU can receive packets from some hosts, but not from others. For instance, if I ping host A, it works fine. If I ping host B, the DomU reports 100% packet loss. The hosts are random, but consistent (even after reboots). I can see no pattern to why some work and others don't. In fact, in some cases different virtual hosts on the same server (another server at a different data center) are split: some work and others do not. I can reboot (DomU or Dom0 too) and the same hosts will work or fail as before. If I tcpdump on host B while pinging from the DomU, everything looks fine: it sees the echo request coming in and says it's sending one back. However, if I tcpdump peth0 on the Dom0, it never sees the echo reply. Any ideas what could be happening? I'm tearing my hair out here.
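
    For comparing notes, the bridge and filtering state on the Dom0 can be collected with a few read-only commands; a sketch (interface names follow the default Xen bridge naming and may differ on this install):

      # which interfaces are enslaved to the Xen bridge?
      brctl show
      # any FORWARD rules or counters that could be dropping the replies?
      iptables -L FORWARD -v -n
      # reverse-path filtering can silently discard asymmetric traffic
      sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.peth0.rp_filter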

    Read the article

  • krenew command not working: Permission denied

    - by prathmesh.kallurkar
    I am using a Linux server to perform my simulations. The login and the file system of the server are protected using Kerberos, and the file system is served over NFS. Since my simulations take a lot of time to run, my SSH sessions used to hang regularly, so I have started running my simulations in byobu (similar to screen). To make sure my Kerberos session remains active, I am using the krenew command. I have entered the following commands in my .bash_profile file (I am sure it is called for every login):

      killall -9 krenew 2> /dev/null
      krenew -b -t -K 10

    So every time I SSH to the server, I kill the existing krenew command. Then I spawn a new krenew with -b (runs in the background), -t (I forgot why I was using this option!), and -K 10 (it should run every 10 minutes and refresh the Kerberos cache). When I run the simulations, they run for about 14 hours and then suddenly I get a "Permission denied" error when reading files. Is the command that I am running incorrect?
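
    One detail worth checking is the ticket's renewable lifetime: krenew can only renew a ticket up to that limit, and a backgrounded krenew tied to a login's credential cache dies with that cache. A hedged alternative sketch, assuming the KDC permits a long renewable lifetime (the script name is a placeholder):

      # request a ticket that stays renewable for up to 7 days
      kinit -r 7d
      # run the long job under krenew itself, inside byobu, so the renewer
      # lives exactly as long as the simulation and uses its credential cache
      krenew -t -K 10 ./run_simulation.sh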

    Read the article

  • Computer freezes but music continues

    - by Danny
    Recently I have had a problem where my computer will freeze completely, but if I happen to be streaming Pandora in a tab, that will continue playing. If I wait about 2-5 minutes it will eventually come back and start working normally. I also noticed that during the period it is unresponsive, the HDD activity light stays lit the whole time, not flashing. I've run memtest86+ and a diagnostic from Western Digital for my HDD model, and neither reported any errors. The specs for my computer are:

      1 x ASRock H55M/USB3 R2.0 LGA 1156 Intel H55 HDMI USB 3.0 Micro ATX Intel Motherboard
      1 x CORSAIR Enthusiast Series CMPSU-550VX 550W ATX12V V2.2 SLI Ready CrossFire Ready 80 PLUS Certified Active PFC Compatible with Core i7 Power Supply
      1 x Intel Core i3-540 Clarkdale 3.06GHz LGA 1156 73W Dual-Core Desktop Processor Intel HD Graphics BX80616I3540
      1 x G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Dual Channel Kit Desktop Memory Model F3-12800CL9D-4GBNQ
      1 x ASUS PCE-N13 PCI Express 150/300Mbps Transfer/Receive Rate Wireless Adapter
      1 x EVGA 01G-P3-1556-KR GeForce GTX 550 Ti (Fermi) FPB 1GB 192-bit GDDR5 PCI Express 2.0 x16 HDCP Ready SLI Support Video Card

    I can't imagine what would be causing these problems.

    Read the article

  • Can't start Windows 7 after cloning HDD

    - by Paul
    Brief description: I cloned HDD1 to HDD2.

      HDD1, partition 1: boots
      HDD1, partition 2: boots
      HDD2, partition 1: boots
      HDD2, partition 2: doesn't boot Windows, but is bootable in general

    Now verbosely: in all cases the computer is the same. I have two Windows 7 installations on HDD1 -- both boot fine, and I choose between them using the standard Windows 7 boot loader menu. Technically there are 4 partitions: a 100 MB boot loader partition (active), Windows 7 copy 1 (25 GB), Windows 7 copy 2 (150 GB) and a working partition. All are primary. In the past few days I tried to clone the whole of HDD1 to HDD2, which is the same size (but 2.5-inch form factor), as-is using MiniTool Partition Wizard. Everything was copied, all files are accessible, there are no faults in the file system structure, and even the boot loader wasn't damaged, so I didn't have to repair it. But I can only boot the first installation of Windows 7 (it boots without issues). When I choose the second installation, I immediately get a completely black screen without any text, cursor or other data. The HDD isn't accessed after that. This black screen is sensitive to Ctrl-Alt-Delete, which reboots the computer. I did some experimenting: I installed Windows 7 to that partition -- it booted fine. Then I renamed "Windows" to "Windows.old" and copied the Windows directory from HDD1 as it was, using Far Manager, and got the same trouble -- a black screen. (Of course I performed the renaming and copying from the other copy of Windows.) So it seems the problem is inside this installation of Windows, somewhere in its files.
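
    Since the boot entries were cloned verbatim, it may also be worth confirming that they do not still reference partitions by their position on the first disk; the device fields can be inspected and repointed from the installation that does boot. A hedged sketch, where {GUID} and the drive letter are placeholders for the entry of the non-booting copy:

      rem list all boot loader entries with their device fields
      bcdedit /enum osloader
      rem repoint the broken entry at the partition it actually lives on now
      bcdedit /set {GUID} device partition=E:
      bcdedit /set {GUID} osdevice partition=E: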

    Read the article

  • Hosting company blocking Google bots and crawlers [closed]

    - by Jayapal Chandran
    Hi, I have had a site for the past three years, and it has been very active for the past two. The site was doing well until the hosting company blocked Google's bots. Many of its pages used to appear on the first page of Google search results; after they started blocking, my links no longer appear on the first page -- they show up five pages in, or not at all. Can hosting companies really be so careless that they block crawlers and don't even mention it to their users? They want to protect themselves, but they put their customers' websites at stake. I display Google ads, and this month I earned only half as much in the first 10 days. I have made requests to other hosting companies, like Bluehost and Monster Host, to transfer my domain on the condition that they will not block Google's bots, which indirectly stops the business. So any kind of help would be appreciated: how can I claim what I lost from the hosting company, and which hosting companies actually consider their users (by informing them of events like changing the IP or blocking Google's bot)? I worked really hard to bring up my site, and these people crashed it in a few days. :-(
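
    A quick way to check from the outside whether crawler requests are being refused is to fetch a page with and without a Googlebot user-agent string; a diagnostic sketch (example.com stands in for the affected site):

      # normal request
      curl -I http://example.com/
      # same request pretending to be Googlebot; a 403 here, when the normal request
      # succeeds, suggests user-agent filtering (IP-based blocking won't show up this way)
      curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://example.com/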

    Read the article

  • Two internet connections at once in Windows 7

    - by webmasters
    I have a 3G wireless modem and a LAN connection, and right now both are connected. I need a way to choose which applications will use the 3G connection and which will use the LAN. My operating system is Windows 7. How can I do this? Any ideas? Here is a route print; the 3G modem's IP is 10.81.132.96. Let's say, for example, I want to map google.com to the 3G internet connection.

      IPv4 Route Table
      ===========================================================================
      Active Routes:
      Network Destination        Netmask          Gateway       Interface  Metric
                0.0.0.0          0.0.0.0      192.168.2.1    192.168.2.102      20
                0.0.0.0          0.0.0.0     10.81.132.97    10.81.132.111     286
           10.81.132.96  255.255.255.224          On-link    10.81.132.111     286
          10.81.132.111  255.255.255.255          On-link    10.81.132.111     286
          10.81.132.127  255.255.255.255          On-link    10.81.132.111     286
              127.0.0.0        255.0.0.0          On-link        127.0.0.1     306
              127.0.0.1  255.255.255.255          On-link        127.0.0.1     306
        127.255.255.255  255.255.255.255          On-link        127.0.0.1     306
            192.168.2.0    255.255.255.0          On-link    192.168.2.102     276
          192.168.2.102  255.255.255.255          On-link    192.168.2.102     276
          192.168.2.255  255.255.255.255          On-link    192.168.2.102     276
              224.0.0.0        240.0.0.0          On-link        127.0.0.1     306
              224.0.0.0        240.0.0.0          On-link    192.168.2.102     276
              224.0.0.0        240.0.0.0          On-link    10.81.132.111     286
        255.255.255.255  255.255.255.255          On-link        127.0.0.1     306
        255.255.255.255  255.255.255.255          On-link    192.168.2.102     276
        255.255.255.255  255.255.255.255          On-link    10.81.132.111     286
      ===========================================================================
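
    Windows routes by destination address rather than by application, so the closest built-in tool is a static route that sends chosen destination networks through the 3G gateway from the table above. A sketch (the 172.217.0.0/16 block is only an example destination; per-application selection would need each application's own proxy or binding settings):

      rem note the interface index of the 3G adapter in the Interface List section
      route print
      rem send one destination network via the 3G gateway, and keep it across reboots
      route -p add 172.217.0.0 mask 255.255.0.0 10.81.132.97 metric 1
      rem remove it again later if needed
      route delete 172.217.0.0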

    Read the article

  • Uninstallation of WSUS from SBS 2008

    - by Logik
    I am not a very experienced system admin, but I came across a client who was running SBS 2008. The server was running out of HDD space, so to recover some I removed its WSUS role (it wasn't needed). This removed WSUS 3.0 SP1 and freed a lot of space. This SBS is the domain controller, DNS, DHCP, and file server. After I removed WSUS I disabled the Windows Update service, rebooted the server, and checked from one of the clients whether the shared folders were accessible. They were. The next day, all of a sudden, I got a call saying they can't log in to their domain. I looked at the server: the Active Directory service was stopped. I don't remember touching any service other than Windows Update. How could the AD service stop running all of a sudden? Does removing WSUS have such an impact? I am not aware of any such thing.

    Read the article

  • Cannot boot Windows XP from a cloned hard disk - what can I do?

    - by Martin
    My configuration: a PC (some years old) with an MSI K8N-Neo-4F motherboard and 1 GB RAM. Disk 1 (Maxtor, SATA II, 250 GB) has two partitions: partition 1 (48 GB) holds Windows XP Professional (NTFS), partition 2 (190 GB) holds data (NTFS). I wanted a larger and faster disk (the PC is incredibly slow, and the disk rattles constantly when I open an application or during Windows startup), so I took disk 2 (Seagate, SATA II, 500 GB), installed it in the PC, created a 400 GB partition at the end of the disk and cloned the data to it, which worked well. I then installed a swap partition and a partition for Ubuntu Linux 12.10 on the first part of the disk, so I was able to boot Linux and the old Windows XP with the Linux "system selection" at startup. Now I wanted to move Windows XP to the new disk, so I deleted the Linux partitions, cloned Windows XP to the new disk (with a free tool, EaseUS), left both disks in the PC and tried to select the new hard drive as the boot device during startup. This did not work; the PC refused to boot from the second disk. I tried many things with no success:

      - making the boot partition on the 2nd drive "active" in the Windows system preferences
      - modifying the boot.ini file to boot from the second disk -- this ended with an error message stating that it was not possible to boot from this disk because of a hardware failure or something similar
      - removing the original disk and plugging the new one into the same SATA port as the original one -- booting also failed with an error message
      - repairing the MBR by booting into recovery mode from the Windows XP installation CD-ROM, selecting the second disk and running FIXMBR, which said that everything was fine with the MBR; after that the PC at least tried to boot from the newer disk, but startup then hung at the blue screen with the Windows logo
      - deleting the cloned partition and cloning again, this time with Macrium Reflect Free -- still no success during booting

    So I wonder what I am doing wrong. How can I successfully clone my Windows XP partition so that the larger disk can replace the original one and still boot?
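
    For reference, boot.ini on the clone's active partition has to describe the clone's own position; a minimal sketch, assuming the cloned Windows partition ends up as the first partition on the first disk (adjust the rdisk()/partition() values if it is not):

      [boot loader]
      timeout=5
      default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
      [operating systems]
      multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect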

    Read the article

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue; it was designed to scale out across multiple database servers, etc. However, from a networking standpoint I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that, in the event of a serious failure at their premises, would immediately pick up and take over. Now, we know there is absolutely no way to solve this for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea of how this might be possible. It seems that if they lose Internet connectivity, then we would have to do a DNS change to forward traffic to the external machines... which, of course, takes time. Ideas?

    UPDATE: I had a discussion with the client today and they clarified the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.
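
    One concrete piece of the "DNS change takes time" problem is the record TTL: the delay external users see is roughly bounded by how long resolvers may cache the old answer. A hedged zone-file sketch (names and addresses are placeholders from the documentation ranges):

      ; short TTL so outside resolvers notice a failover change quickly
      app.example.com.    60    IN    A    203.0.113.10    ; primary site
      ; on failover this record is repointed at the off-site copy, e.g. 198.51.100.20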

    Read the article

  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    In a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken. In order to get around this, we first configured the server (a Linux machine) with TCP keepalives turned on with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more. However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300,intvl=180,probes=10, thinking that if the client was indeed alive, the server would probe every 300s (5 minutes) and the client would respond with an ACK and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes, the server would abort the connection. To our surprise, the idle but alive connections get killed after about 40 minutes as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server. What could be happening here? If the keepalive settings on the server are time=300,intvl=180,probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds before killing the connection. Am I right? One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think that the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved. The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection so we think it affects all TCP connections.
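
    For anyone reproducing this, the three settings described above map to these Linux sysctls; note also that the kernel only sends keepalives on sockets that have SO_KEEPALIVE enabled, so an application that never sets it will show no probes in a capture regardless of the sysctl values. A sketch:

      # set the values described above (persist them in /etc/sysctl.conf to survive reboots)
      sysctl -w net.ipv4.tcp_keepalive_time=300
      sysctl -w net.ipv4.tcp_keepalive_intvl=180
      sysctl -w net.ipv4.tcp_keepalive_probes=10
      # verify
      sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes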

    Read the article

  • Computer randomly shuts itself off

    - by Decency
    I have not been able to determine a pattern for why this happens, despite my best efforts. I've run it at full load with Prime95, and this doesn't trigger a restart. Generally the restarts occur while I'm playing games, watching videos, or even just having multiple tabs open in a browser. However, I often play processor-intensive games for hours without any restarts, and sometimes they happen 3-4 times in an hour during less intense activity, so I don't think load alone is the problem. I imagine it has something to do with overheating or power consumption, so I've been monitoring CPU temperature and cleaning with compressed air, but the problem keeps happening. I don't know how to track power consumption, and assume that this is the problem. Whenever this occurs, the sound gets stuck in a short loop of whatever was playing at the time, though restarts also occur when nothing is playing. I have screenshots of the temperatures both at idle and under load. Here's the parts list: http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=10546754 As shown in the list, the case includes a 585 W power supply, which I've been told should be plenty. I built the computer myself with a friend's guidance, but it's very possible I did something wrong. Right now I'm looking into ensuring that I have the latest drivers for all components. Any help would be appreciated -- thanks.

    Read the article

  • How to disable Utility Manager (Windows Key + U)

    - by Skizz
    How do I disable the Windows + U hotkey in Windows XP? Alternatively, how do I stop the Utility Manager from being active? The two are related. The Utility Manager is currently providing a potential security hole and I need to remove it[1]. The system I'm developing uses a custom GINA to log in and start a custom shell. This removes most Windows key hotkeys, but Win + U still pops up the manager app.

    Update: things I've tried that don't work:

      - NoWinKeys registry setting -- this only affects Explorer hotkeys
      - Renaming utilman.exe -- the program reappears at the next login
      - Third-party software -- not really an option; these machines are audited by the clients, and additional third-party software would be unlikely to be accepted

    Also, the procedure needs to be reasonably straightforward -- this has to be done by field service engineers on existing machines (machines currently in Russia, Holland, France, Spain, Ireland and the USA).

    [1] The hole is via the Internet options in the help viewer that the utility app links to.
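
    One approach sometimes used for this -- offered only as a sketch, not a verified fix for an XP + custom GINA setup -- is an Image File Execution Options entry, so that any launch of utilman.exe starts a harmless program instead (C:\Tools\noop.exe is a placeholder for any executable that simply exits):

      rem whenever utilman.exe is launched (e.g. via Win+U), start the named program instead;
      rem C:\Tools\noop.exe is a placeholder for a do-nothing executable of your choice
      reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\utilman.exe" /v Debugger /t REG_SZ /d "C:\Tools\noop.exe" /f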

    Read the article

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to the HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:

      Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
      Status: 0xc000000e
      Info: The boot selection failed because a required device is inaccessible.

    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my recovery environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted, but got the same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there -- only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like; however, it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)
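
    Windows 7 ships a command-line tool for exactly this kind of check; a diagnostic sketch (run from an elevated prompt; the image path shown is only an example of where winre.wim is sometimes kept):

      rem is a Windows RE image registered and enabled for this installation?
      reagentc /info
      rem if an image exists but is not registered, point WinRE at it and enable it
      reagentc /setreimage /path C:\Recovery\WindowsRE /target C:\Windows
      reagentc /enable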

    Read the article
