Search Results

Search found 32520 results on 1301 pages for 'local machine'.

Page 473/1301 | < Previous Page | 469 470 471 472 473 474 475 476 477 478 479 480  | Next Page >

  • Why is my connection slow?

    - by Jay R.
    I have a Dell Precision T5400 with a Broadcom 1Gb onboard NIC. For some strange reason, when I access machines on our local network, the best I can get is around 125KB/s download speed. My laptop, which has a 10/100Mb onboard NIC, usually gets around 300KB/s or better from the same network resource. Both machines are plugged into the same 1Gb switch, which connects to our local network wall jack at 100Mb half duplex. There is also a printer plugged into the same switch at 100Mb full duplex.

    The resource I'm using for the test is a 30MB zip file copied from a Jetty webserver that runs as part of a CruiseControl installation. The CruiseControl machine is running Windows XP with full real-time antivirus plus Altiris patch management and inventory; that stuff on its own eats some of the download speed. I've seen the laptop reach multiple MB/s download speeds before, but the desktop never seems to get past 125KB/s to 130KB/s.

    In Windows XP, before I upgraded the driver on the desktop, it was that slow. In Fedora it is still slow, even though it appears to be using the same driver version as the upgraded Windows driver. The upgraded Windows driver is faster, but still not nearly as fast as the laptop. What gives? Any insight to improve the situation would be appreciated. Could it be that the Broadcom board just isn't that good, or that the Linux driver is just not as good as the Windows one?
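
    A quick thing to check on the Fedora side (a hedged suggestion; the interface name eth0 is an assumption, substitute your own) is the negotiated speed and duplex, since a gigabit NIC behind a 100Mb half-duplex uplink is a classic mismatch scenario:

        # show negotiated link speed and duplex for the onboard NIC
        ethtool eth0

        # to experiment, force a specific mode instead of autonegotiation
        sudo ethtool -s eth0 speed 100 duplex full autoneg off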

    Read the article

  • One Active Directory, Multiple Remote Desktop Services (Server 2012 solution)

    - by Trinitrotoluene
    What I am trying to do is quite complex, so I figured I'd throw it out to a wider audience to see if anyone can find a flaw. What I am trying to do (as an MSP/VAR) is design a solution that will give multiple companies a session-based remote desktop (companies that need to be kept completely separate), using only a handful of servers. This is how I imagine it at the moment:

        CORE SERVER - Server 2012 Datacenter (all below are Hyper-V servers)
            Server 1: Cloud-DC01 (Active Directory Domain Services for mycloud.local)
            Server 2: Cloud-EX01 (Exchange Server 2010 running multi-tenant mode)
            Server 3: Cloud-SG01 (Remote Desktop Gateway)

        CORE SERVER 2 - Server 2012 Datacenter (all below are Hyper-V servers)
            Server 1: Cloud-DC02 (Active Directory Domain Services for mycloud.local)
            Server 2: Cloud-TS01 (Remote Desktop Session Host for Company A)
            Server 3: Cloud-TS02 (Remote Desktop Session Host for Company B)
            Server 4: Cloud-TS03 (Remote Desktop Session Host for Company C)

    What I thought about doing was setting up each organisation in its own OU (perhaps creating the OU structure based on the Exchange 2010 tenant OU structure, so the accounts are linked); see the sketch below. Each company would get a Remote Desktop Session Host server that would also serve as a file server. This server would be separated from the rest on its own IP range. The server Cloud-SG01 would have access to all these networks and route the traffic to the appropriate network when a client connects and is authenticated, so clients are pushed onto the correct server (based on session collections in 2012).

    I won't lie, this is something I have come up with quite quickly, so there may well be something gapingly obvious that I am missing. Any feedback would be appreciated.
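
    A minimal sketch of the per-tenant OU idea (hedged: the OU and group names are illustrative, and mycloud.local is taken from the description above), using the Active Directory PowerShell module:

        # one OU per tenant under the domain root (names are illustrative)
        Import-Module ActiveDirectory
        New-ADOrganizationalUnit -Name "Company A" -Path "DC=mycloud,DC=local"
        New-ADOrganizationalUnit -Name "Company B" -Path "DC=mycloud,DC=local"

        # a per-tenant group used to restrict logon to that tenant's session host
        New-ADGroup -Name "CompanyA-RDS-Users" -GroupScope Global -Path "OU=Company A,DC=mycloud,DC=local"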

    Read the article

  • Exchange 2013 attachments too big?

    - by KPS
    I am having the toughest time sending large attachments. Everywhere I have checked, my file size limit for send/receive is 100MB, yet users are unable to receive files even at 14MB. I'm using a spam filter (AppRiver) and have worked with their support for a very long time. We see the following errors in the logs:

        13:32:40.260 4 SMTP-000036([myserverIP]) rsp: 354 Start mail input; end with <CRLF>.<CRLF>
        13:33:41.038 3 SMTP-000033([myserverIP]) write failed. Error Code=connection reset by peer
        13:33:41.038 3 SMTP-000033([myserverIP]) [659500] failed to send. Error Code=connection reset by peer
        13:33:41.038 4 SMTP([myserverIP]) [659500] batch reenqueued into tail

    Windows Firewall is disabled on the Exchange server, and all smaller emails come through just fine. Here is a printout of the size limits:

        ConnectorType   ConnectorName                     MaxReceiveMessageSize        MaxSendMessageSize
        -------------   -------------                     ---------------------        ------------------
        Send            InternetSendConnector             -                            35 MB (36,700,160 bytes)
        Send            Appriver-Smarthost                -                            35 MB (36,700,160 bytes)
        Receive         Default EXCHSRVR                  100 MB (104,857,600 bytes)   -
        Receive         Client Proxy EXCHSRVR             100 MB (104,857,600 bytes)   -
        Receive         Default Frontend EXCHSRVR         100 MB (104,857,600 bytes)   -
        Receive         Outbound Proxy Frontend EXCHSRVR  100 MB (104,857,600 bytes)   -
        Receive         Client Frontend EXCHSRVR          100 MB (104,857,600 bytes)   -
        Receive         ExchangeRelay                     100 MB (104,857,600 bytes)   -
        TransportConfig -                                 100 MB (104,857,600 bytes)   10 MB (10,485,760 bytes)
        ADSiteLink      DEFAULTIPSITELINK                 Unlimited                    Unlimited

    There is also no anti-virus on the server that could be interfering. I am out of ideas at this point. :(

    EDIT 1: After running the BPA, it gives an error:

        Exchange Organization: Check whether the incoming message (CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local) size isn't set
        The maximum incoming message size isn't set in organization 'CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local'. This can cause reliability problems.

    Here are the sizes as of now:

        [PS] C:\Temp>Get-TransportConfig | ft MaxSendSize, MaxReceiveSize

        MaxSendSize                       MaxReceiveSize
        -----------                       --------------
        Unlimited                         Unlimited

        [PS] C:\Temp>Get-ReceiveConnector | ft name, MaxMessageSize

        Name                              MaxMessageSize
        ----                              --------------
        Default EXCHSRVR                  100 MB (104,857,600 bytes)
        Client Proxy EXCHSRVR             100 MB (104,857,600 bytes)
        Default Frontend EXCHSRVR         100 MB (104,857,600 bytes)
        Outbound Proxy Frontend EXCHSRVR  100 MB (104,857,600 bytes)
        Client Frontend EXCHSRVR          100 MB (104,857,600 bytes)
        ExchangeRelay                     100 MB (104,857,600 bytes)

    Again, smaller emails come through just fine. It seems like there is a 10MB receive limit somewhere that I cannot find.
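
    For reference, a hedged sketch of lining up the various limits (connector names are taken from the output above; the values are illustrative, not a recommendation):

        # organization-wide transport limits
        Set-TransportConfig -MaxSendSize 100MB -MaxReceiveSize 100MB

        # the send connectors above are capped at 35 MB, which would affect outbound mail
        Set-SendConnector "Appriver-Smarthost" -MaxMessageSize 100MB

        # per-mailbox limits can also override the organization settings
        Get-Mailbox | ft Name, MaxSendSize, MaxReceiveSize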

    Read the article

  • OpenVPN Server - CPU is pegged out

    - by ericl42
    Hello, I am configuring OpenVPN to act as an SSL tunnel for a remote location. I have OpenVPN1 at our current location acting as the server, and OpenVPN2 at the other location acting as the client; OpenVPN2 also acts as a DHCP server for the machines behind it, so they are effectively connected to the local LAN. Everything is set up fine and I can talk from location A to location B with no problems, as if everyone were local.

    I am, however, having some performance issues. OpenVPN1's CPU is pegged at 100% the entire time I am copying files or doing any other activity through the tunnel. I expect some CPU usage, but nothing like this; it's really killing my performance. OpenVPN1 is running in ESX right now with 2GB of RAM and 4 processors with unlimited bursting capacity. I am using AES-192 encryption with a 1024-bit key.

    Any idea how I can get the CPU usage down on OpenVPN1 and my download/upload speeds through the tunnel higher? Thanks.

    Edit: Turning down the logging helped boost the throughput a little bit, but I am still fairly shy of where I believe I should be, and I am still maxed out on the CPU. Does anyone have any ideas? I am really stuck on this. Thanks.
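
    A hedged sketch of server-side directives that are commonly tried for OpenVPN throughput (assumes OpenVPN 2.x over UDP; these are illustrative fragments, not the poster's actual config):

        # server.conf fragments (illustrative)
        cipher AES-128-CBC   # a lighter cipher than AES-192, if policy allows
        fast-io              # reduce syscall overhead (UDP mode only)
        sndbuf 0             # let the OS choose socket buffer sizes
        rcvbuf 0
        verb 1               # keep logging low; high verbosity costs CPU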

    Read the article

  • Help setting up a secondary authoritative DNS server

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.

        Authoritative servers:
            DNS1 - Windows 2003
            DNS2 - Old Red Hat (replacing with a newer version)
            DNS3 - Windows 2008 (I installed)

        Caching/recursive resolvers:
            Server1 - Windows 2003
            Server2 - CentOS 5.2 (I installed)
            Server3 - CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching servers and Windows authoritative servers, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use djbdns/tinydns on our Linux servers.

    Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to do caching, but the only instructions I can find are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track with the instructions I was following, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it as an issue.

        ps -aux | grep dns
        avahi     3493  0.0  0.2  2600  1272 ?      Ss   Apr24   0:05 avahi-daemon: running [newdns2.local]
        root      5254  0.0  0.1  3920   680 pts/0  R+   09:56   0:00 grep dns
        root      6451  0.0  0.0  1528   308 ?      S    Apr29   0:00 supervise tinydns
        dnslog    6454  0.0  0.0  1540   308 ?      S    Apr29   0:00 multilog t ./main
        tinydns   9269  0.0  0.0  1652   308 ?      S    Apr29   0:00 /usr/local/bin/tinydns
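
    A hedged sketch of how a tinydns secondary is often fed (assumes DNS1 permits zone transfers to this host; the zone name, master IP, and service directory are placeholders):

        # pull the zone from the master via AXFR, then rebuild tinydns's data.cdb
        cd /service/tinydns/root
        tcpclient -v 10.0.0.1 53 axfr-get campus.edu data data.tmp
        make    # runs tinydns-data to compile data into data.cdb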

    Read the article

  • Exchange 2010 setup cannot verify Host (A) record warning

    - by Joost Verdaasdonk
    When I try to install Exchange 2010 on my Server 2008 R2 machine, I get a warning during the prerequisites check:

        Warning: setup cannot verify that the 'Host' (A) record for this computer exists within the DNS database on server: 90.195.200.12.

    The goal of this Exchange setup is to be able to send email on my local domain as well as receive/send email through the public domain name. Some information about my setup: this server is going to be a dedicated Exchange host and has the following IP setup (the IPs are examples and not the real IPs, of course):

        Local VLAN NIC:
            IP: 10.10.50.22
            Subnet: 255.255.255.0
            No gateway
            DNS: 10.10.50.1 (the domain controller, with authoritative DNS)

        Public WAN NIC:
            IP: 90.195.200.148
            Subnet: 255.255.255.235
            Gateway: 90.195.200.145
            DNS: 90.195.200.12 | 190.160.230.14

        My public domain - exampledomain.com:
            A record: mail - IP: 90.195.200.148
            MX record - IP: 90.195.200.148

    As I see it now, Exchange setup is looking for the A record on one of the DNS servers of my public WAN NIC, and of course that is not where my A records are defined. I have those A records in two places: in the domain controller's DNS (on the private NIC), and in the online DNS registration of my public domain (exampledomain.com).

    My question is: is this warning going to be a problem? Can I do something better in my setup so that this warning goes away? Please advise.
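
    A quick hedged check that the A record resolves from the DNS servers each NIC points at (the internal hostname is a placeholder):

        rem query the internal DNS on the domain controller
        nslookup exchange01.mydomain.local 10.10.50.1

        rem query the public resolver that setup complained about
        nslookup mail.exampledomain.com 90.195.200.12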

    Read the article

  • IIS8 ASP.NET State Service remote connection failure

    - by maxisam
    We recently upgraded our web server to Windows Server 2012 with IIS8. We now have an issue when users try to connect to the ASP.NET State Service on this web server remotely. It always pops up:

        Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.

    In IIS 7/7.5 we use the same approach and it works fine: as long as the state service is running and the firewall is set properly, we don't have any problems. In IIS8, however, it doesn't work (we even turned off the firewall to test it). Thanks for helping.
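
    For reference, a hedged checklist of the usual state-service settings (the registry value and the default port 42424 are documented; the firewall rule name is illustrative):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters /v AllowRemoteConnection /t REG_DWORD /d 1 /f
        sc config aspnet_state start= auto
        net start aspnet_state

        rem the state service listens on TCP 42424 by default
        netsh advfirewall firewall add rule name="ASP.NET State Service" dir=in action=allow protocol=TCP localport=42424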

    Read the article

  • Computer hangs at boot screen with new RAID card

    - by shanethehat
    I am trying to build a new server around a Biostar TH61 motherboard and an Adaptec 6405E RAID controller card. The machine booted fine from USB before the RAID card and drives were installed. After installing, on the first boot the card was detected, but then it started to spit out the following message every 10 seconds:

        Error: Controller Kernel Stopped Running << Press any key to continue ...

    Following the troubleshooting guide, I unplugged everything, reseated the card, and reattached all the drives. This time the machine is sitting on the boot screen without any error messages and with a flashing cursor, but after 15 minutes of this, nothing seems to be happening. Given that there are no error messages, I'm hesitant to reboot again. Is it normal for a RAID card to sit without a status message when it first boots, perhaps to initialise the drives or something? The current screen output looks a bit like this:

        Controller #00 found at PCI Slot:01, Bus:01, Dev:00, Func:00
        Controller Model: Adaptec 6405E
        Firmware Version: 5.2-0[18512]
        Memory Size: 128MB
        Serial number: 111111111111111
        SAS WWN: 50000D1104AE9180
        _

    Update: After waiting 30 minutes I rebooted, and I'm back to the Controller Kernel Stopped Running error. Maybe it's time to update the RAID BIOS.

    Read the article

  • chkconfig creating service symlinks with the wrong order

    - by Robert
    On RHEL 6.3, I have a system service that should start after postgresql and httpd (order 64 and 85, respectively), but chkconfig always places it at order 50. I tried an experiment on a CentOS 6.0 virtual machine to make sure I understood the LSB stanza syntax. I created /etc/init.d/foo, owner root, permissions 755, with this text:

        ### BEGIN INIT INFO
        # Provides: foo
        # Required-Start: postgresql httpd
        # Default-Start: 2 3 4 5
        # Default-Stop: 0 1 6
        # Description: Foo init script
        ### END INIT INFO

    I then ran chkconfig --add foo. Result: /etc/rc5.d/S86foo is created, as expected (the other runlevels are also as expected). I repeated the exact same experiment on the RHEL machine, and it created /etc/rc5.d/S50foo instead. I can't see anything different between the two that would lead to different results; both machines have postgresql and httpd starting at the same orders and runlevels.

    Any thoughts? I could just use "# chkconfig: 2345 86 50", or manually rename the service symlinks to the correct order, but I'm trying to document an install process for later users, and I want to know how to do it right and understand why it's not working as expected.

    Read the article

  • Can you transfer a Windows 7 XP-mode VHD to a new PC without the need for reactivation?

    - by Henk
    Hi. At the moment I am running Vista 32-bit and use VPC 2007 a lot. I have several Windows 2000 VMs set up with complex software configurations. For me, one of the advantages of working with VPCs is that a VM can easily be copied to another PC (like a notebook). Also, when I move to a new PC, I can immediately work with my existing VMs without the need to reinstall all the applications.

    I am thinking about buying a new PC with Windows 7 64-bit Professional and using the XP-mode VM that comes with Windows 7. The advantages would be support for more memory (so I could run more VMs simultaneously) and having a licensed XP VM. I would like to know whether you can transfer a VHD that is based on the XP-mode VHD to another machine without activation issues. The other machine will, of course, also be running Windows 7. I would guess this would not be a problem; however, I did read about a guy who had to reinstall his Windows 7 because of a hard disk failure, and who could not access his XP-mode VHD anymore because it demanded reactivation. Thanks. Henk.

    Read the article

  • Keyboard issue when using kitty+puttycyg but not when using putty or cygwin alone

    - by kamaradclimber
    I would like a single way to use a console on my Windows setup. Previously I used PuTTY for remote access to Linux servers and Cygwin to have Unix-like tools on Windows. Then I discovered KiTTY, which is a patched PuTTY, and added the puttycyg patch; it provides the same interface for connecting to remote and local consoles.

    However, there is strange behaviour in vim when connected to the local console (using the puttycyg patch): the arrow keys print A/B/C/D and replace the current character with that letter. In insert mode the character is replaced; in normal mode no modification is made to the document, even though the character is displayed as replaced. For instance, when I type:

        fixed bug with product deleted

    I get:

        fixed bbug wiwith prprodudueleteted

    I have read a lot of questions about this type of issue and googled it, but no answer works for me. The issue is present only for the kitty+puttycyg setup: Cygwin alone works perfectly (and PuTTY alone works for access to Linux servers). Any help would be appreciated!
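
    One hedged way to see what the console actually sends for the arrow keys (cat is standard in Cygwin):

        # press the arrow keys, then Ctrl-C; a healthy terminal shows escape
        # sequences such as ^[[A rather than bare A/B/C/D letters
        cat -v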

    Read the article

  • Deploying Windows Service through group policy fails with Event ID 102

    - by Sören Kuklau
    I'm trying to deploy a custom Windows service (written in C#; installed through a VS setup project) using Group Policy. To help debug this, I also have two additional MSIs in the same policy. All three packages are deployed as a machine policy, not a user one.

    On one machine (running Windows Server 2008; no UAC), all three deploy fine and the service is set to Automatic, as expected. On two machines (running Windows 7; UAC), the two other MSIs deploy fine, but my service fails to install. The event log gives Event ID 102, which appears to be a permissions problem:

        The install of application "Package Name" from policy "Policy Name" failed. The error was The installation source for this product is not available. Verify that the source exists and that you can access it.

    However, all three packages come from the same share linked through UNC, so this seems unlikely. My guess is that UAC is the problem and that the service requires additional permissions. Do I need to alter the MSI somehow?
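
    Machine-assigned packages are fetched by the computer account rather than any user, so a hedged first check is that computer accounts can read the share and the MSI (paths and names are placeholders):

        rem grant read/execute on the package to domain computer accounts
        icacls "\\fileserver\deploy\MyService.msi" /grant "Domain Computers":RX

        rem confirm the share-level permissions allow the same accounts
        net share deploy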

    Read the article

  • How can I allow the Xbox 360 to browse movies on a network drive?

    - by Roger
    I've always been able to stream movies stored on my Windows 7 computer's local hard drive to the Xbox 360 with no problems. A few weeks ago, though, I got a Seagate Dockstar NAS device and moved my movies to it. Ever since then, I can't see them on the Xbox.

    I've tried several different things. When I try to add the mapped drive to the Media Player library, it says that it can't share the files (I don't have any security on the NAS drive). I've tried creating a symlink to a local drive, but that doesn't help; neither does adding the UNC path directly. The Dockstar seems to have its own Xbox share capability, but it doesn't respect my folder structure: it presents every video file on the drive in a single list, including several hundred home videos that aren't playable on the Xbox.

    Is there any way to use Media Player (or the Zune software) to share files stored on a NAS drive? Barring this, is there a lightweight, free server that will allow me to share these files while maintaining my folder structure?
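
    For reference, a hedged example of the symlink approach mentioned above (run from an elevated prompt; the share path is a placeholder):

        rem create a directory symlink pointing at the NAS share
        mklink /D C:\Movies \\Dockstar\Movies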

    Read the article

  • The Network folder specified is currently mapped using a different user name and password

    - by Frank Thornton
    I have a NAS device with 3 shares. On one computer I have access to all 3 of the shares. On another computer I keep getting this error when I try to add a second one:

        The network folder specified is currently mapped using a different user name and password [...]

    What causes that? EDIT: Every share has its own username and password. EDIT: NET USE on the computer that maps 3 shares from the same NAS device:

        New connections will be remembered.

        Status        Local   Remote                      Network
        -------------------------------------------------------------------------------
        OK            T:      \\192.168.2.5\SHARE1        Microsoft Windows Network
        OK            X:      \\Nas-1dsho-abc\SHARE2      Microsoft Windows Network
        Disconnected  Y:      \\192.168.2.9\backups       Microsoft Windows Network
        OK            Z:      \\Nas-1dsho-abc\cbackups    Microsoft Windows Network
        The command completed successfully.

    NET USE on the other:

        New connections will be remembered.

        Status        Local   Remote                      Network
        -------------------------------------------------------------------------------
        OK            Y:      \\192.168.2.5\SHARE1        Microsoft Windows Network
        Unavailable   Z:      \\192.168.2.5\SHARE2        Microsoft Windows Network
        The command completed successfully.
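
    Worth noting (a known SMB limitation): Windows allows only one set of credentials per remote server name at a time, and the working machine reaches the NAS under two different names (its IP and its NetBIOS name), which sidesteps the limit. A hedged sketch of the same workaround (credentials are placeholders):

        rem drop the failing mapping, then map each share under a different server name
        net use Z: /delete
        net use Y: \\192.168.2.5\SHARE1 pass1 /user:user1 /persistent:yes
        net use Z: \\Nas-1dsho-abc\SHARE2 pass2 /user:user2 /persistent:yes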

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes.

    Since we're an all-Dell shop, I've done some research and found that the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single storage unit, and I'm wondering if there is any clear advantage of one over the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option).

    Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, along with a shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage). One difference I can see is that the DAS solution is actually limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (by adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (getting rid of any local storage).

    On the other hand, I would have thought the SAS option would be cheaper, but it seems like an MD3200 actually costs a little less than an MD3200i (?). (Please note: I've used Dell gear in my examples since that's what we're looking at, but I assume the same goes for other vendors.) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • RRAS Problem routing to central site from RRAS server only?

    - by TomTom
    Given: an office connected to headquarters using an RRAS bridge (two virtual machines using RRAS to route between the two networks). Naming: the office is A, and a-lnk is the RRAS machine on A; headquarters is B, and b-lnk is the RRAS machine there. The VPN works perfectly: machines can ping and work across the sites, domain controllers on both ends are replicating, DFS is working, remote desktop is working. All in all, everything is fine.

    EXCEPT: a-lnk itself cannot reach any machine in B. This would normally not be troublesome (no one ever does anything on a-lnk), but there are two exceptions: a-lnk is supposed to get its license from a KMS in B, so not being able to reach B means it is not renewing its activation; and a-lnk is supposed to pull updates from a WSUS in B, so not being able to reach B means no updates.

    Given that things work (and security is a minor issue; a-lnk is not reachable from the internet as it is behind NAT hardware anyway), this went unhandled for months. I just want to get this item ticked off now. Anyone have an idea what this is? It is definitely not a "DNS does not work" or "routing in general is bad" item, as any computer in A can connect to any computer in B and the other way around; only the RRAS computer itself seems to do something really awkward. Platform for both: 2008 R2 Standard.
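
    A hedged first diagnostic from a-lnk itself (the target address is a placeholder): compare its own routing table with what the routed clients use, since the RRAS box's own traffic may be leaving by the wrong interface or gateway:

        rem on a-lnk: is there a route for B's subnet, and which interface does it use?
        route print
        tracert -d 10.2.0.10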

    Read the article

  • How to configure networking on an appliance such that it can plug and play on any corporate network?

    - by Joshua Lim
    I recently had the chance to configure a Moxa NPort device server appliance on my client's network. It was very easy, done in just 2 minutes. Here's what I did:

        1. The Moxa device server had a preset IP address of 192.168.127.254 and subnet mask 255.255.255.0 (see http://www.moxa.com/doc/manual/nport/5400/NPort_5400_Series_Users_Manual_v4.pdf).
        2. Moxa provides Windows software which I used to "scan" for the device server. It worked like magic! The software returns a list of device servers found, each identified by MAC address, and by selecting a device server in the software I can reset its default IP address and subnet mask!

    In comparison, during an earlier project I spent 2 hours trying to get KVM to work for a Windows 7 Embedded appliance I was installing on a client's network (see http://superuser.com/questions/380305/how-to-configure-windows-7-professional-appliance-pc-on-my-clients-network-usin). Prior to that, I had already tried pre-configuring the IP address and subnet mask to the ones my client provided, yet the appliance still couldn't connect to the client's network! I also tried a cross cable, which didn't work either. After KVM worked, I discovered that the network settings were "lost" after I plugged the machine into the client's network.

    Now my question is: what can I do to set up my Windows 7 Embedded appliance so that it can connect to any network, like the Moxa device server can? I tried experimenting on my own network using a Windows machine configured with an IP address of 192.168.127.254 and subnet mask 255.255.255.0, but it doesn't connect to my network, which uses 192.168.0.*. :(

    EDIT: I would like to point out that the Moxa Windows configuration software seems able to connect to any Moxa device on the network, even one on a different subnet, as long as the network adapter shows "connected". This is important because the Moxa device has no VGSM port or other interface through which to configure the IP address.

    Read the article

  • Windows 2008 R2 servers sending ARP requests for IPs outside their subnet

    - by Kyle Brandt
    By running a packet capture on my routers, I see some of my servers sending ARP requests for IPs that exist outside of their network. For example, say my network is:

        Network: 8.8.8.0/24
        Gateway: 8.8.8.1 (MAC: 00:21:9b:aa:aa:aa)
        Example server: 8.8.8.20 (MAC: 00:21:9b:bb:bb:bb)

    By running a capture on the interface that has 8.8.8.1, I see requests like:

        Sender MAC: 00:21:9b:bb:bb:bb
        Sender IP: 8.8.8.20
        Target MAC: 00:21:9b:aa:aa:aa
        Target IP: 69.63.181.58

    Has anyone seen this behavior before? My understanding of ARP is that requests should only go out for IPs within the subnet. Am I confused in my understanding of ARP? If not, has anyone seen this behavior? Also, these seem to happen in bursts, and it doesn't happen when I do something like ping an IP outside of the network.

    Update, in response to Ian's questions: I am not running anything like Hyper-V. I have multiple interfaces, but only one is active (using BACS failover teaming). The subnet mask is 255.255.255.0 (and even if it were something different, it wouldn't explain an IP like 69.63.181.58). When I run MS Network Monitor or Wireshark, I do not see these ARP requests. What happens is that in the capture on the router I see a burst of about 10 requests for IPs outside the network from the host machine; on the machine itself, using Wireshark or NetMon, I see a flood of ARP responses for all the machines on the network, but I don't see any requests in the capture asking for those responses. So it seems like maybe it is refreshing the ARP cache but including IPs outside the network. Why doesn't NetMon show the ARP requests, though?
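
    For reference, a hedged capture recipe for watching this from the router side (the interface name is a placeholder):

        # show ARP traffic with link-layer headers; -n suppresses name lookups
        tcpdump -n -e -i eth0 arp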

    Read the article

  • Restoring Windows 2008 Server X86 and X64

    - by rihatum
    We are using Backup Exec System Recovery 2010 to image our DC (a Windows 2008 Server domain controller). This software has a feature to convert the backup into a VMware or Hyper-V VM. I have also used disk2vhd to convert one of our DCs to a VHD, and when I connected it to Hyper-V it booted fine and I can log in, BUT :-) as soon as I log in, I get the activation error: change product key, this product key isn't good for this machine, etc.

    The question is: in a real recovery situation, what would be the procedure to restore it, either virtual or onto a physical box, and still be able to log in and change the product key? In this scenario it's just locked down and I can't do anything. If this is the case, how would I replicate my production environment via these tools? Any ideas? I would be grateful for some real-world examples here.

    The same thing happens with our Exchange backup / test restore, whether physical or virtual: I can log in but do nothing else. We don't have the keys, as they are OEM keys, and I'm wondering what would happen in a real scenario: would we be purchasing another key, or using the OEM key on our new server? This is a test environment I am trying to create by restoring our backups into Hyper-V or onto physical test machines.

    Also, if I build up a machine (Server 2008) in a VM (Hyper-V), how can I restore just the System State backup of my DC into it? Will that give me the activation error too, even though I would use the trial ISOs provided by Microsoft? Kind regards.

    Read the article

  • Why won't IIS serve my website? - 404 Page Not Found

    - by Giffyguy
    Built a brand new server with a fresh copy of Windows Server 2003 Enterprise x86 Edition, then:

        1. Installed the .NET Framework 1.1, 2.0, 3.5, and 4.0.
        2. Added the "Domain Controller" and "Application Server" roles.
        3. Created a new website and pointed it to a local directory: C:\Inetpub\angryoctopus.net\
        4. Added the appropriate host headers: angryoctopus.net, www.angryoctopus.net, TCP port 80, all IPs.
        5. Moved the website content into the local directory.
        6. Configured the default document in IIS: Default.aspx.
        7. Enabled ASP.NET for this website and set it to the correct version: 2.0.50727.
        8. Configured the zone angryoctopus.net in DNS, and tested DNS lookup to ensure DNS was functional.
        9. Opened the website in VS 2008 and re-built (and debugged) it to ensure the content was functional.

    I can clearly see that IIS is responding normally by browsing directly to my server's IP address. Since this does not use the angryoctopus host header, the default website is displayed instead: the "Under Construction" page. And yet, after all of this, angryoctopus.net still returns 404.

    Does anybody know what could be wrong? What troubleshooting steps have I forgotten? Is there a command-line diagnostic that might provide more information? (One way to test the host header directly is sketched below.)
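
    A hedged way to test the host-header binding with DNS taken out of the loop (uses the plain Windows telnet client; type the request by hand, then a blank line):

        telnet 127.0.0.1 80

        GET / HTTP/1.1
        Host: angryoctopus.net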

    Read the article

  • ps aux as non-root doesn't show all processes

    - by JMW
    Hi, I'm using an Ubuntu 10.04 server. When I run ps aux as root, I see all processes; when I run ps aux as non-root, I see JUST the processes of the current user. After a bit of research I found the following:

        root@m85:~# ls -al /proc/
        total 4
        dr-xr-xr-x 122 root   root      0 2010-12-23 14:08 .
        drwxr-xr-x  22 root   root   4096 2010-12-23 13:30 ..
        dr-x------   6 root   root      0 2010-12-23 14:08 1
        dr-x------   6 root   root      0 2010-12-23 14:08 10
        dr-x------   6 root   root      0 2010-12-23 14:08 1212
        dr-x------   6 root   root      0 2010-12-23 14:08 1227
        dr-x------   6 root   root      0 2010-12-23 14:08 1242
        dr-x------   6 zabbix zabbix    0 2010-12-24 23:52 12747
        [...]

    My first idea was that /proc got mounted in a weird way, but /etc/fstab is OK and it doesn't seem to be mounted strangely. My second idea was a rootkit, but it isn't one: rkhunter tells me that no rootkit is installed. I don't know whether it has been like this since the machine was installed or whether it came with an update. I've just installed zabbix-agent on the machine and realized that it doesn't work properly.

    What could have caused such strange permissions (500) on the per-process directories, and how can I set them back to the normal level (555)? Crazy, I've never seen something like this. Thanks in advance for any help, and merry Christmas! :) See you.

    Read the article

  • Is USB supported in safe mode on XP?

    - by Hugh Allen
    According to Microsoft, "Universal Serial Bus Devices Do Not Work in Safe Mode" under XP. However, in my testing this is incorrect: USB keyboards, mice and flash drives seem to work fine in safe mode (I made sure the BIOS was not providing support). This makes sense, because a failure of a standard input device would be, in Microsoft parlance, a "bad user experience".

    So: is USB supported in safe mode on XP?

    If your answer is no (agreeing with Microsoft), please provide a test case, preferably in a virtual machine, where a standard HID keyboard or mouse fails, and state the hardware / BIOS / OS configuration. Note that you will need a PS/2 keyboard attached in addition to your USB device(s) in order to use the boot menu; virtual machine software usually emulates a PS/2 keyboard. Alternatively, you could add the /safeboot switch to boot.ini, as sketched below.

    If your answer is yes, please provide a link to some supporting documentation (either from Microsoft or someone authoritative). Your answer might be "devices X, Y and Z are supported but nothing else", in which case also give a link.
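
    For reference, a hedged example of the /safeboot approach (the ARC path is illustrative; it must match the existing entry in your boot.ini):

        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Safe Mode" /fastdetect /safeboot:minimal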

    Read the article

  • Configs for several sites in Apache with SSL

    - by elCapitano
    I need to secure two different sites in Apache. One of them should only be a proxy for a different server, which is running on port 8069. The first one (which is served natively by Apache) runs with SSL:

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            DocumentRoot /var/www/cloud
            ServerPath /cloud/
            #CustomLog /var/www/logs/ssl-access_log combined
            #ErrorLog /var/www/logs/ssl-error_log
        </VirtualHost>

    The other one is not running and not even registered; when I try to access it, I get an exception (ssl_error_rx_record_too_long):

        <VirtualHost *:443>
            ServerName 192.168.1.20
            ServerPath /erp/
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyVia On
            ProxyPass / http://127.0.0.1:8069/
            ProxyPassReverse / http://127.0.0.1:8069
            RewriteEngine on
            RewriteRule ^/(.*) http://127.0.0.1:8069/$1 [P]
            RequestHeader set "X-Forwarded-Proto" "https"
            SetEnv proxy-nokeepalive 1
        </VirtualHost>

    My wish is the following configuration:

        192.168.1.20        ->> unsecured local path to website
        192.168.1.20/cloud/ ->> secured local document path for cloud
        192.168.1.20/erp/   ->> secured proxy on port 80 for http://192.168.1.20:8069

    How is this possible? Is this even possible? Perhaps cloud.192.168.1.20 and erp.192.168.1.20 would be better?! Thank you.
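
    A hedged note on the ssl_error_rx_record_too_long part: with two SSL vhosts sharing one IP and port, Apache 2.2 needs name-based vhosts on 443 with distinct ServerNames (the names below are assumptions, and clients must support SNI):

        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName cloud.example.local
            # ... cloud site directives ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName erp.example.local
            # ... proxy directives ...
        </VirtualHost>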

    Read the article

  • Exchange Connector Won't Send to External Domains

    - by sisdog
    I'm a developer trying to get my .NET application to send emails out through our Exchange server. I'm not an Exchange expert, so I'll qualify that up front!

    We've set up a receive connector in Exchange with the following properties:

        Network: allows all IP addresses via port 25.
        Authentication: the Transport Layer Security and Externally Secured checkboxes are checked.
        Permission Groups: the Anonymous Users and Exchange Servers checkboxes are checked.

    But when I run this PowerShell statement right on our Exchange server, it works when I send to a local domain address, yet fails when I try to send to a remote domain.

    WORKS:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER

    (BTW: my value for OURSERVER=boxname.domainname.local. This is the same fully-qualified name that shows up in our Exchange Management Shell when I launch it.)

    FAILS:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER

        Send-MailMessage : Mailbox unavailable. The server response was: 5.7.1 Unable to relay
        At line:1 char:17
        + Send-Mailmessage <<<< -To [email protected] -From [email protected] -Subject testing -Body himom -SmtpServer FTI-EX
            + CategoryInfo          : InvalidOperation: (System.Net.Mail.SmtpClient:SmtpClient) [Send-MailMessage], SmtpFailedRecipientException
            + FullyQualifiedErrorId : SmtpException,Microsoft.PowerShell.Commands.SendMailMessage

    EDIT: Following @TheCleaner's advice, I ran Add-ADPermission against the relay connector and it didn't help:

        [PS] C:\Windows\system32> Get-ReceiveConnector "Allowed Relay" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient"

        Identity                  User                     Deny    Inherited
        --------                  ----                     ----    ---------
        FTI-EX\Allowed Relay      NT AUTHORITY\ANON...     False   False

    Thanks for the help. Mark
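
    A hedged way to verify the relay permission actually landed on the connector, and that the sending host's IP is covered (standard Exchange Management Shell cmdlets):

        Get-ReceiveConnector "Allowed Relay" | Get-ADPermission |
            Where-Object { $_.ExtendedRights -like "*Accept-Any-Recipient*" } |
            Format-Table User, ExtendedRights, Deny

        # confirm the client's IP falls inside the connector's remote ranges
        Get-ReceiveConnector "Allowed Relay" | Format-List RemoteIPRanges, Bindings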

    Read the article

  • Windows memory logged on vs logged off

    - by Adi
    Let's say I power on my freshly installed Windows 7 x64 machine. After Windows boots up, a bunch of services start in the background and begin allocating memory. Then I enter my user/pass and Windows logs me in. Suppose I don't do anything else (I don't explicitly start any application) and I don't have any other apps installed; so it's a fresh install of my machine.

    My question is: how much memory is needed for all the UI and other per-session stuff? Is it a good indicator to look at Task Manager, check all the processes started under my user name, and sum up the memory consumed by those processes to get the total amount of memory I am consuming just to stay logged on? Basically, that is my question: how much memory is needed just to stay logged on?

    Now, if I log off, would all that memory be released back to the system so that the background services can benefit from it? Also, I assume that there might be a different answer for each Windows flavor (?).
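
    A hedged version of the measurement described above (the machine and user names are placeholders; run from cmd):

        rem list one user's processes with memory usage, in CSV for easy summing
        tasklist /V /FO CSV /FI "USERNAME eq MYPC\Adi"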

    Read the article
