Search Results

Search found 19824 results on 793 pages for 'word 2008'.


  • pfsense, active directory, local domain

    - by Dalton Conley
    First things first, I have no idea what I'm doing. Certainly not afraid to admit that... but here is my network setup. I have two servers, one of which is a domain controller. Both are running Windows Server 2008 and they have replicated directories. Each server is at a different location, and each location has its own pfSense firewall. Recently one firewall went down and my coworker reinstalled pfSense, and everything seems set up correctly. Again, I have no idea what I'm doing, so I'm not sure. I have records from when the previous IT person set up this network, and the firewall settings match those records, but the records could be extremely old.

    Now, I have a domain name for my network.. we'll call it "mydomain.net". I used to be able to access this domain name and it would bring up the servers' replicated drives (i.e. \\mydomain.net). Now I cannot. I can, however, access the servers' individual host names on my network (i.e. \\server1, \\server2). We didn't change anything on the servers, which is what makes me think it's something to do with the firewall.

    I know this is probably a very general question and I don't have a lot of detail to add, but could anyone give me some insight into what could be causing this, or some debugging techniques I can apply? I'm a programmer, not a network administrator.
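
    One quick way to narrow this down (a hedged suggestion, not from the original post): \\mydomain.net depends on DNS resolving the bare domain name to the domain controllers, while \\server1 can fall back to NetBIOS. From an affected client, compare:

        nslookup mydomain.net
        nslookup server1.mydomain.net
        ping mydomain.net

    If the first lookup fails or returns a stale address while the host lookups work, the problem is likely DNS configuration on the rebuilt pfSense box (for example, its DHCP handing out the wrong DNS servers or its DNS forwarder not pointing internal queries at the domain controllers) rather than the servers themselves.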

    Read the article

  • HyperV - low CPU usage

    - by Klark
    I am very new to Hyper-V and virtual machine philosophy in general, so please expect more or less noob questions :) I have a server that is only used as a host for virtual machines. The OS is Windows Server 2008 R2 and it is running on 16 CPUs and 48 GB of RAM. On the aforementioned server there are 8 VMs, each having 4 CPUs and 4 GB of RAM, and on those VMs we are running some CPU-intensive tasks. Each machine shows nearly 100% CPU usage.

    After I noticed slow performance, I went to the host machine and started playing with Process Explorer. It turned out that CPU usage on the host is very low. I/O is also very low, and of course memory consumption is high, which is expected. I don't expect those 4 virtual cores dedicated to a VM to work as fast as 4 real hardware cores, but I still expected higher consumption of the real hardware.

    Is this sort of behaviour normal? I see that most of the CPU usage on the host machine is marked as interrupts (which I guess is normal), and all those interrupts are handled by only one core (which is strange). Are there any out-of-the-box optimizations I could perform to finally use all that processing power that is under the hood? My knowledge of virtualization technology is close to embarrassing, so I would be grateful for any links that could enlighten me :) Thanks.
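
    One caveat worth knowing: tools running in the host's parent partition (Task Manager, Process Explorer) only see the parent partition's own share of the CPUs, not what the guests consume. A hedged sketch of how to see the hypervisor's view instead, using the Hyper-V performance counters on the host:

        # Hypervisor-wide CPU usage, including all guest VMs
        Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'

        # Per-virtual-processor view
        Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'

    If "% Total Run Time" is high while Process Explorer shows the host idle, the capacity really is being used and the bottleneck is elsewhere - note the setup is oversubscribed 2:1 as described (8 VMs x 4 vCPUs = 32 vCPUs on 16 cores).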

    Read the article

  • How to make Virtualbox, OpenVPN, and Win2008 Web R2 like one another?

    - by Aquitaine
    Back with web developer guy wearing a net admin hat. Hopefully this is an easy one. We have two servers on a public network at a hosted facility. Server A is our public-facing web server and Server B is our database server; both are running Windows Server 2008 R2 Web Edition. We want Server B isolated from everything except Server A, such that anyone who has to connect to Server B goes through the VPN on Server A. It's not perfect, since we have no access to do this on the router side, but it's what we've got.

    We've set up VirtualBox and OpenVPN Access Server on Server A. It has one network interface set to 'NAT' mode, such that OpenVPN gets its IP at 10.0.2.x, and to connect to the OpenVPN interface I go to the local IP for the VirtualBox network adapter, 192.168.56.x, which works as I configured the appropriate ports using VBoxManage.

    My question is: do I need to be using bridged networking and give the VPN server its own IP, or is there some way to tell the server (either Windows or the VirtualBox OpenVPN) that 'any public connection on the real external IP on port X should be directed to this internal LAN address of 192.168.1.x on port Y'? OpenVPN itself doesn't seem to be aware of the server's real external IP unless we put it in bridged networking mode; is that necessary or advisable? We're without RRAS since this is Web Edition, but I feel like what we're going for is pretty simple. Thanks! Aq
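
    For the NAT-mode approach, VirtualBox can forward a port on any host interface (including the public one) into the guest, so bridged mode isn't strictly required. A hedged sketch, assuming the VM is named "openvpn-as" and the Access Server listens on 9443 inside the guest (both the name and the ports are placeholders, not from the original post):

        VBoxManage modifyvm "openvpn-as" --natpf1 "vpn-https,tcp,,443,,9443"
        VBoxManage modifyvm "openvpn-as" --natpf1 "vpn-udp,udp,,1194,,1194"

    Leaving the host-IP field empty binds the forward to all host addresses, so connections hitting the real external IP on port 443 land in the guest. The same rule syntax works with VBoxManage controlvm while the VM is running; modifyvm requires it to be powered off.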

    Read the article

  • Hiding subfolders from users with Windows Server security

    - by Frans
    Using Windows Server 2008, I would like to allow all users to map to a common network drive and be able to browse it. But I only want them to be able to see the subfolders they actually have access rights to. Is this doable?

    Example: I have a share with two folders on it:

        \\domain\share\FolderA
        \\domain\share\FolderB

    With three different security groups, I would like to map a network drive for all three to \\domain\share. However, group1 should only be able to see FolderA, group2 should only see FolderB, and group3 should see both. I am not just talking about denying access to the actual folder, which is easy enough; I don't want the user to even be able to see that the folder exists. In other words, when group1 logs in and does "dir n:\" they should see N:\FolderA; when group2 logs in, they should see N:\FolderB; and when group3 logs in they should see both N:\FolderA and N:\FolderB.

    My half-baked solution: if I completely block access to the root, then I can't map a drive to it. I can give everyone the traverse right, which then allows the user to map a drive. However, if a member of group1 or group2 tries to go to "N:\" they get an access denied error; if they go to N:\FolderA (for group1) then it works. So that sort of works, but it would be nicer if the user could actually browse to N:\ and see only the subfolders they have access to. I am pretty sure I have seen this done, but I am not sure how to do it myself. Any advice would be greatly appreciated.
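
    This is exactly what Access-Based Enumeration (ABE) does on Windows Server 2008: enabled per share (Share and Storage Management > share properties > Advanced), it hides any file or folder the connecting user lacks read access to. For the drive-mapping part, a hedged icacls sketch - the D:\share path and group names are placeholders based on the example above:

        rem Read/traverse on the share root only (no inheritance flags = this folder only)
        icacls D:\share /grant "Authenticated Users":(RX)

        rem Each group gets rights on its own folder, inherited downwards
        icacls D:\share\FolderA /grant group1:(OI)(CI)M
        icacls D:\share\FolderB /grant group2:(OI)(CI)M
        icacls D:\share\FolderA /grant group3:(OI)(CI)M
        icacls D:\share\FolderB /grant group3:(OI)(CI)M

    With ABE turned on, browsing N:\ then shows each group only its own folders. If \\domain\share is a domain-based DFS namespace rather than a plain share, ABE additionally requires the namespace to run in Windows Server 2008 mode.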

    Read the article

  • Network access lags for Win7 when server network utilization is high

    - by Jeff Miles
    We have a Dell PE2950 file server running Windows 2008, hosting a DFS namespace of ~1.2 TB. This server has two Broadcom 1 Gbps NICs teamed together. When there is high traffic going to the server across the network (greater than 200 Mbps), any Windows 7 client accessing a DFS share at the time experiences severe performance problems. For example:

    - Computer A has an AutoCAD drawing opened directly from the DFS share. Performance is normal, not causing any issues.
    - Computer B begins a file transfer, putting an 11 GB file onto a different DFS namespace on the same server.
    - Computer A immediately notices lag while using AutoCAD. The cursor momentarily freezes within AutoCAD every 10 seconds or so, and any browsing of the DFS share is extremely slow.
    - Computer B completes the file transfer, and performance returns to normal for Computer A.

    This only affects Windows 7 clients, on a variety of hardware (desktop and laptop). All of our Windows XP clients see no performance impact during the file transfer. Things I have tried with no change:

    - Had Computer A work from an entirely different RAID array from the file transfer destination
    - Updated NIC drivers on clients and server
    - Enabled TCP offload and receive side scaling on the server NIC (previously disabled when the issue began)
    - Disabled antivirus during the file transfer

    I am currently having a user test applications other than AutoCAD when the file transfer occurs, and will update the question with that result. Does anyone have any recommendations for resolution or additional troubleshooting steps?
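
    Since only Windows 7 clients are affected, one difference worth ruling out (a hedged suggestion, not from the original post) is that Vista/7 use SMB2 and TCP receive-window auto-tuning, neither of which XP has. Auto-tuning can be toggled for testing from an elevated prompt on a Windows 7 client:

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled

        rem Re-enable after testing
        netsh interface tcp set global autotuninglevel=normal

    If disabling auto-tuning changes the behaviour, the problem is more likely in the network path (for example, the NIC team's load-balancing mode) than in DFS itself.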

    Read the article

  • Site hanging in iis7 - how do I troubleshoot?

    - by Chris Foot
    I am currently having a problem with a Windows 2008 server running IIS 7. The server runs several sites but only seems to have the issue with one particular site. Every so often, the whole server slows to a crawl with nearly all requests timing out. Invariably, when we log in to take a look, there is always an IIS process using around 90% CPU.

    Looking into the worker processes in IIS, there are usually one or two requests that have been running for a long time. They are always in the ExecuteRequestHandler state with ManagedPipeline as the module name, and the current ones I'm looking at have been running for 7686248 (what units is this in? It doesn't say). It is also not always the same page; in fact, we have seen at least 3 different pages listed under URL when this has happened. It seems that the only way to bring the server back to life is to kill the 90% process.

    The site is running under .NET 4.0, and the code on it is very similar to other sites on the server which do not have the problem. How do I start troubleshooting this?
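
    For what it's worth, the elapsed time IIS 7 shows for running requests is almost certainly milliseconds, so 7686248 is roughly 2.1 hours. The same data can be pulled from the command line, which makes it easy to capture during a hang; a minimal sketch using appcmd (the 60-second threshold is an arbitrary choice):

        %windir%\system32\inetsrv\appcmd list wps
        %windir%\system32\inetsrv\appcmd list requests /elapsed:60000

    The second command lists only requests running longer than 60 seconds. Once the stuck worker process PID is known, a memory dump of it (for example with the free Debug Diagnostics tool) will show which managed code the long-running requests are actually executing.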

    Read the article

  • Windows Server - share files without access for administrator

    - by Pawel
    We have a Microsoft Windows Server 2008 R2 based server that is administrated by our IT department. We would like to achieve two things simultaneously:

    - A folder on the server, containing several thousand files (new files added frequently), that is accessible to some Active Directory users (e.g. the board of directors) but is not accessible to IT department employees
    - IT department employees still maintain the rights to administrate the server, including installing new software and services

    We have already checked some solutions:

    - Using NTFS access rights. Unfortunately IT (members of the "Administrators" group) can set themselves as new owners of the files and change the permissions so that they gain access to the files.
    - Enabling EFS. Unfortunately, even if you do not allow IT to access the files, they can still disable EFS completely because they have administrative rights. Moreover, as far as I know, you have to manually add permissions for all users but the owner for each new file - very inconvenient.
    - Creating a new role for the IT department that has all the privileges apart from taking ownership of files. Unfortunately, if you're not a member of the Administrators group, you cannot install new software, no matter what privileges you add to the role.
    - TrueCrypt - nice free encryption software, but with poor sharing capabilities. You can either mount an encryption container on the server (and then IT has access to its contents), or mount it locally, but then only one user can mount it for writing.
    - AxCrypt - free encryption software that enables file-by-file encryption on the server. There are some disadvantages, though: you have to manually encrypt each new file added, the files have their extensions changed, and you can only set one password for all files (so all users have to know this one password).

    Any other ideas? Our budget is limited, so enterprise-class software from Symantec or PGP would probably not be an option.

    Read the article

  • Winamp playing sound but no video

    - by Greg Sansom
    I am having problems playing video in Winamp (the movie I am trying to play is an AVI - not sure if other formats work). I have installed the K-Lite Codec Pack, and the video does work in Winamp Classic. I can also play the video in Winamp on another machine (although I can't remember the exact configuration details of that machine - and I don't think they're relevant). There are a few symptoms:

    - The content of the Video view is either empty, transparent, or displays rendering from other programs.
    - Opening the Visualization view shows the following error:

        MILKDROP ERROR
        DirectX initialization failed (GetDeviceCaps). This means that no valid 3D-accelerated display adapter could be found on your computer. If you know this is not the case, it is possible that your graphics subsystem is unstable; please try rebooting your computer and then try to run the plugin again. Otherwise, please install a 3D-accelerated display adapter.

    - Trying to open streams via the SHOUTcast TV plugin shows "Error opening video output", and the video does not open.
    - Opening the file with WMC causes the following error (although the movie still plays):

        Error creating DX9 allocation presenter
        CreateDevice failed D3DERR_NOTAVAILABLE

    There are no warnings displayed in Device Manager, although the display adapter is the standard Windows one. Running DxDiag shows no problems (codec for the video listed as XviD 1.1.2 Final). GSpot reports that the codecs are installed.

    System specs:

    - Windows Server 2008 R2 Standard 64-bit, with latest updates
    - .NET 3.5.1 installed
    - Winamp v5.6.01 (latest version)
    - DirectX 11 (latest version)
    - K-Lite Codec Pack 7.0.0 (Full)
    - Machine is HP DC7600 - full specs here

    Please comment if there is any more information that will help to diagnose the problem.

    Read the article

  • Installing Windows Management Framework 3.0 basically destroyed WMI, how can I fix it without reinstalling the O.S.?

    - by Massimo
    Related, of course, to this question. Before discovering it was somewhat... dangerous, I installed Windows Management Framework 3.0 on a number of Windows Server 2008 R2 SP1 servers, and WMI got completely trashed on all of them. On a normal server, the WMI namespace tree (as seen in Server Manager - Configuration - WMI Control) is fully populated; after installing WMF 3.0, everything except WMF 3.0's new features is gone. (The original post illustrated the before/after with two screenshots of the WMI Control namespace browser.)

    Needless to say, nothing seems to work anymore on those servers. And no, this is not due to some strange installation error: this happened on three servers which were working perfectly before installing WMF 3.0, and on all of them the installation completed successfully. Admittedly, one of them had a somewhat complex setup (various System Center products and SQL Server instances)... but two of them are just plain standard domain controllers which do nothing else at all.

    How can I fix this mess without having to reinstall the O.S. on these servers? And why did it happen in the first place?
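
    A hedged recovery sketch - these are the standard WMI repository repair steps, not confirmed for this specific WMF 3.0 breakage: verify the repository, let WMI attempt a salvage, and recompile the MOF definitions that installed features registered for auto-recovery:

        winmgmt /verifyrepository
        winmgmt /salvagerepository

        # Recompile the auto-recovery MOFs if namespaces are still missing
        Get-ChildItem "$env:windir\System32\wbem\AutoRecover\*.mof" |
            ForEach-Object { mofcomp $_.FullName }

    If the salvage fails, winmgmt /resetrepository rebuilds the repository from scratch, at the cost of losing anything that products registered outside the auto-recovery list.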

    Read the article

  • Win2008: Boot from mirrored dynamic disk fails!

    - by Daniel Marschall
    Hello. I am using Windows Server 2008 R2 Datacenter, I have two 1.5 TB SATA2 hard disks installed, and I want to make a software RAID. (I do know the disadvantages of soft RAID vs. hard RAID.) I have the following partitions on Disk 0:

    (1) Microsoft Reserved, 100 MB (dynamic), created during setup
    (2) System partition, 100 GB (dynamic)
    (3) Data partition, 1.2 TB (dynamic)

    I have already mirrored these contents to Disk 1. Its contents are:

    (1) System partition mirror, 100 GB (dynamic)
    (2) Data partition mirror, 1.2 TB (dynamic)
    (3) Unused, 100 MB (dynamic) - corresponds to the "MSR" of Disk 0, created during setup

    Since the data and system partitions are mirrored, I expect my system to work if Disk 0 fails. But it doesn't. If I force booting from Disk 0, it works (I get the boot menu with the two entries). If I force booting from Disk 1 (F8 for BBS), nothing happens: I get a blank black screen with a blinking caret. I have already made Disk1/Partition1 active with diskpart, but it still does not boot from that drive.

    Please help. Both disks are in the "MBR" partition style. They look equal, except for the missing "MSR" partition at the beginning of Disk 1 (which seems not to be relevant to booting). Regards, Daniel Marschall
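
    Two things worth checking (hedged suggestions, not from the original post): whether mirroring the boot volume actually created the "secondary plex" boot entry, and whether Disk 1's MBR contains boot code at all, since Disk Management mirrors volume contents but does not always write master boot code to the second disk:

        rem A healthy mirror normally adds a "... - secondary plex" entry to the BCD
        bcdedit /enum

        rem If Disk 1's boot code is blank, it can be written manually; assign a
        rem drive letter (e.g. X:) to Disk 1's system partition mirror first.
        rem Shown for illustration - double-check the target before running.
        bootsect /nt60 X: /mbr

    If the secondary-plex entry is missing, booting from the second disk has nothing to load even when the mirror itself is healthy.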

    Read the article

  • Simplest DNS solution for remote offices

    - by dunxd
    I look after a bunch of remote offices that connect via VPN - a Cisco ASA 5505 in each office acts as firewall and VPN endpoint. Beyond that, we keep things as simple as possible in the offices to minimise the support burden: we don't have any kind of server except in offices large enough to justify having someone dedicated to IT. Basically there is the ASA, some computers, a network printer and a switch.

    One of the problems I am seeing in a lot of offices is that DNS requests looking up hosts inside our network often fail - I'm assuming timeouts due to the offices' internet connections (they are all in developing-world countries) having some sub-optimal qualities (e.g. high latency caused by VSAT segments, or packet loss). The obvious solution is some sort of local DNS service that can serve local requests - so I think it would need to do zone transfers from our Microsoft Windows 2008 R2 DNS servers at HQ. However, simply installing Windows servers in each office is both expensive and creates a support burden.

    This got me thinking about pfSense/m0n0wall on embedded devices - those can act as a DNS server, and could be configured at HQ and sent out as just something that needs to be plugged into the network and can then be forgotten about by the staff locally. Maybe there are some alternatives to the ASA 5505 that include some DNS functionality. Has anyone here dealt with this problem, either using some kind of embedded device, or with some other solution? Any gotchas or reasons to avoid what I have suggested?
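
    For the pfSense/m0n0wall idea: their DNS forwarder is dnsmasq, which cannot do zone transfers, but it can forward just the internal zones across the VPN while caching everything - often enough to hide the latency. A hedged sketch of the relevant dnsmasq directives (the zone name and HQ server addresses are placeholders):

        # Send queries for the internal AD zone to the HQ DNS servers over the VPN
        server=/mydomain.net/10.0.0.10
        server=/mydomain.net/10.0.0.11

        # Everything else can go to a local or public resolver
        server=8.8.8.8

    Caching only helps repeat lookups, though; if the links drop packets badly, a proper secondary zone (e.g. BIND on the same embedded box doing real zone transfers) is the more robust option.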

    Read the article

  • Intermittent extremely long response times when downloading documents

    - by pap
    I have a Java web application running on Tomcat 7 with an Apache httpd 2.2 front end connected via mod_jk/AJP. One part of the application serves files (up to 4 MB in size). Normally this all runs very smoothly with stable, low response times. However, in rare instances (<0.1% of downloads), the download time will go beyond 1 minute. After activating the StuckThreadDetectionValve in Tomcat, I can see that the long responses seem to be stuck at

        org.apache.tomcat.jni.Socket.sendbb (native method)

    i.e. network I/O. At most, these long-running downloads take 5 minutes, which I strongly suspect is because of the default 300-second timeout in Apache 2.2 (http://httpd.apache.org/docs/2.2/mod/core.html, "TimeOut directive").

    To me, this looks like network problems. The Apache timeout (if that is what is kicking in at the 5-minute mark) suggests that ACK packets are not being transmitted correctly. My questions are: what could be causing this? A browser closed at the receiving end without the socket being signalled as closed properly? Packet loss or some other network failure in transit? Where would I start troubleshooting this? We're running Tomcat and Apache on Windows Server 2008 R2 in a VMware virtualized server.
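
    A starting point for the in-transit question (a hedged suggestion): Windows Server 2008 R2 can capture packets natively, so the next stalled download can be recorded without installing anything and inspected for retransmissions or zero-window stalls from the client:

        netsh trace start capture=yes tracefile=C:\temp\slow-download.etl maxsize=512
        rem ... wait for or reproduce a slow download ...
        netsh trace stop

    The resulting .etl file opens in Microsoft Network Monitor 3.4. If the trace shows the client advertising a zero receive window, or simply vanishing mid-transfer, the server side (Tomcat/Apache) is off the hook and the 5-minute cap is just the Apache TimeOut cleaning up the dead connection.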

    Read the article

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment exclusively with Windows Server 2008. My problem is that I need to "map" a drive letter to each user's Temp folder. This is due to a legacy app that requires a separate Temp folder for each user but which does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried. The simplest version:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "subst T: %temp%", 2, True

    That didn't work, so I tried this for more debug information:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        Set procEnv = WinShell.Environment("Process")
        wscript.echo(procEnv("TEMP"))
        tempDir = procEnv("TEMP")
        WinShell.Run "subst T: " & tempDir, 3, True

    This shows me the correct temp path when the user logs in - but still no T: drive. I decided to resort to brute force and put this in my login script:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True

    where \\domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content:

        echo on
        subst t: %temp%
        pause

    When I log in with the above, the command window opens up and I can see the subst command being executed correctly, with the correct path. But still no T: drive.

    I have tried running all of the above scripts outside of a login script and they always work perfectly - this problem only occurs when they run from inside a login script. I found a passing reference on an MSFN forum about a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.
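
    One hedged explanation worth testing: with UAC enabled, Windows Server 2008 gives a user two tokens, and drive mappings (including subst) created under one token are not visible to sessions running under the other. If the logon script runs under one token while the desktop runs under the filtered one, the T: drive exists - just in the "wrong" context. The commonly cited workaround is the EnableLinkedConnections registry value:

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v EnableLinkedConnections /t REG_DWORD /d 1 /f

    This makes mappings created under either token visible to both. It is a per-machine change with some security implications, so it is worth testing on one terminal server first.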

    Read the article

  • WMI Sensors monitoring

    - by DmitrySemenov
    Our monitoring tool, Paessler, has stopped monitoring WMI Windows sensors. Paessler was updated to version 12.4.5.3165 (10/30/2012 1:44:11 PM), and the Windows sensors (against a Windows Server 2008 R2 Web Edition machine) stopped working - no changes have been made on the server we monitor - with the message:

        Connection could not be established (80070005: Access is denied - Host: 192.168.2.10, User: Administrator, Password: **, Domain: ntlmdomain:) (code: PE015)

    However, if I go to the virtual machine used to run Paessler, the following cscript runs successfully:

        strComputer = "192.168.2.10"
        Set objSWbemLocator = CreateObject("WbemScripting.SWbemLocator")
        Set objSWbemServices = objSWbemLocator.ConnectServer _
            (strComputer, "root\cimv2", _
            "Administrator", "pass")
        Set colProcessList = objSWbemServices.ExecQuery( _
            "Select * From Win32_Processor")
        For Each objProcess in colProcessList
            Wscript.Echo "Process Name: " & objProcess.Name
        Next

    I'm getting the output:

        C:\>cscript test.vbs
        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        Process Name: Intel(R) Xeon(R) CPU X5680 @ 3.33GHz
        Process Name: Intel(R) Xeon(R) CPU X5680 @ 3.33GHz

    So WMI works. Also:

    a. I gave the same Administrator credentials in the Paessler device settings that I used in the script above
    b. I restarted the Windows server (the one with the broken sensors) - but this didn't help
    c. I restarted the Paessler probe service - no effect

    Any ideas?
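
    Since the raw VBScript connects fine, it may help to reproduce the failure with explicit credentials the way the probe passes them. A hedged PowerShell equivalent, run from the probe VM (host and user taken from the error message above):

        $cred = Get-Credential '192.168.2.10\Administrator'
        Get-WmiObject -Class Win32_Processor -ComputerName 192.168.2.10 -Credential $cred

    If this also returns 0x80070005, the problem is DCOM/WMI permissions for that logon type (Component Services > COM Security, and the namespace security on root\cimv2 are the places to look). If it succeeds, suspicion moves back to how the updated probe builds the login - note the error shows an empty "Domain: ntlmdomain:", which PRTG fills from the device's credential settings.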

    Read the article

  • WinHttpCertCfg not importing certificate

    - by Ramon Zarazua
    I need to set up a deployment script that imports an SSL certificate that my service uses. I have tried importing with WinHttpCertCfg and with CertMgr, to no avail. Here are the command lines I have tried with both:

        winhttpcertcfg.exe -i <certname>.pfx -c LOCAL_MACHINE\My -p <password> -a <user service runs as>

    and

        CertMgr.exe -add -all -s -r localMachine -c <cert name> My

    It seems from what I have investigated that CertMgr does not allow you to import certificates with a password, so I'd rather get winhttpcertcfg working. When I run them, I get the following output.

    WinHttpCertCfg:

        Microsoft (R) WinHTTP Certificate Configuration Tool
        Copyright (C) Microsoft Corporation 2001.

    CertMgr:

        CertMgr Succeeded

    However, when I look at the local machine certificates in MMC, try to load the certificate from my service, list it with winhttpcertcfg, or even look at the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\MY\Certificates, it is not found. I have tried all of the following:

    - Installing the cert manually (through the CertMgr.msc dialogs) - this works
    - The user installing is running as administrator
    - The user installing has full access on the certificate
    - The tools do print an error when something is wrong (e.g. a wrong password)
    - Tried it on multiple machines (all of them Server 2008 R2)

    At this point I am officially out of ideas. Thank you.
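
    A hedged alternative that is built into Server 2008 R2, so the deployment script would not need the resource-kit tools at all: certutil can import a password-protected PFX straight into the machine's My store when run elevated (file name and password are placeholders):

        certutil -f -p <password> -importpfx <certname>.pfx

    One possibility worth ruling out for the silent "success" of the original commands: if the prompt is not elevated, UAC file/registry virtualization can redirect a legacy 32-bit tool's HKLM writes into the user's VirtualStore, which would match the symptom of the tools reporting success while nothing appears under SystemCertificates. Running the same script from an elevated prompt is a cheap test.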

    Read the article

  • Authority Information Access local path being ignored

    - by Kevin
    I have a CA set up on Server 2008 R2, and generally it is working, but I can't control the local path/filename it writes its own certificate to for Authority Information Access publishing. (The original post included a screenshot of the CA's Extensions tab here, showing the AIA location being edited.) From my settings I would expect to get the file:

        C:\Windows\system32\CertSrv\CertEnroll\DAMNIT.crt

    But instead I get:

        C:\Windows\system32\CertSrv\CertEnroll\SERVER.domain.com_My Issuing Authority(1).crt

    Of course, the actual change shown wouldn't be very useful, but it's illustrative: no matter what path/filename I use, the file always ends up in the same place and with the same name. I actually wanted to change the name from <ServerDNSName>_<CaName><CertificateName>.crt to <CaName><CertificateName>.crt, since the latter corresponds to the HTTP URL whereas the former does not. Admittedly, I haven't set up many CAs, so perhaps I'm just deluded as to what this dialog is supposed to be setting - but if so, this is notoriously bad UI design. (Incidentally, I have a couple of other complaints about the same dialog.) What's going on here, and is there some way to get the filename pattern I want?
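
    One way to see what the dialog actually wrote (a hedged diagnostic, not a fix): the AIA entries live in the CA's registry configuration and can be dumped with certutil; the publication URLs use numbered substitution tokens (%1 = server DNS name, %3 = CA name, %4 = certificate name):

        certutil -getreg CA\CACertPublicationURLs

        rem Changes to publication URLs only apply after restarting the CA service
        net stop certsvc && net start certsvc

    If the registry shows the edited local path but the file still lands as <ServerDNSName>_<CaName>(<index>).crt, that would support the suspicion that the local CertEnroll copy is written with a fixed naming scheme, independent of the configured AIA list.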

    Read the article

  • Enabled Network Discovery on Server, and now VNC and Squeezebox clients don't work

    - by Mike Hanson
    I've recently set up a Windows Server 2008 machine. It's running an email server, Squeezebox server, MS SQL Server, etc., and I'm doing remote maintenance with UltraVNC. I had everything working fine. Then the server needed to access a network share on another machine, and I was prompted to turn on network discovery, which I did, choosing the Home rather than the Public option. Since doing that, some things have stopped working while others are still fine: shared folders and the email services (ports 25 and 110) are still accessible, but VNC (port 5900) and the Squeezeboxes (port 9000) no longer work. Here's what I've tried so far:

    - Checked the network discovery settings, to see if anything looked strange.
    - Checked the firewall settings; those ports appear to be open.
    - Also in the firewall settings, the entries for private-profile Network Discovery were all on, but the domain/public ones were off. I tried turning those on.
    - In the services, turned on Function Discovery Resource Publication and SSDP Discovery.

    Any other suggestions?
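
    Switching the network location changes which firewall profile is active, and rules in Windows Firewall with Advanced Security are scoped per profile - so a likely explanation is that the VNC and Squeezebox rules are attached only to the old profile. A hedged way to confirm and fix from an elevated prompt (rule names are placeholders):

        netsh advfirewall show currentprofile

        rem Inspect which profiles the existing inbound rules apply to
        netsh advfirewall firewall show rule name=all dir=in | findstr /i "Name: Profiles: LocalPort:"

        rem Re-create the rules for all profiles if they were scoped too narrowly
        netsh advfirewall firewall add rule name="UltraVNC" dir=in action=allow protocol=TCP localport=5900 profile=any
        netsh advfirewall firewall add rule name="Squeezebox" dir=in action=allow protocol=TCP localport=9000 profile=any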

    Read the article

  • files have no ownership permissions and can't assign ownership

    - by Force Flow
    I'm having problems with file permissions on a Windows Server 2008 (R1) server. Office 2010 tmp files are being created without any security permissions assigned: they aren't being deleted, I can't assign ownership, and I can't delete them. I downloaded and ran the Sysinternals tool handle.exe; when I ran it for the first time, handle64.exe was created, also without any permissions assigned, and I cannot assign ownership or delete it either.

    Seemingly random files in random places don't seem to have any permissions assigned. Access is denied when attempting to change ownership to Administrator or the Administrators group. If I try to replace the inheritable permissions on the folder these files are in, access is denied for the files with no permissions. I attempted to use subinacl to view the ownership information on the files that had no permissions, but access was denied there as well. I also tried setting the owner with setacl in an elevated cmd window, but again access was denied.

    This problem only surfaced in the last few days, and I'm unsure what the cause is or how to correct it.
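
    For completeness, the built-in pair usually used for orphaned files (hedged, since subinacl and setacl already failed, which hints at something deeper): takeown with /A claims ownership for the Administrators group via the SeTakeOwnershipPrivilege, which does not require any existing ACL access, and icacls then rebuilds the ACL from the parent:

        takeown /F handle64.exe /A
        icacls handle64.exe /reset

    If even takeown returns access denied from an elevated prompt, the next steps would be a chkdsk of the volume and checking for antivirus or other filter drivers holding the files open, since ownership assignment should never fail for an elevated administrator on healthy NTFS.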

    Read the article

  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only thing that happens is that each connection consumes memory on the server. Unfortunately, memory is not unlimited (and I want to use only physical memory). Say we have 2 GB of server memory. There are two extreme cases:

    Case 1: If we allocate a 64 KB buffer for each connection (only to receive the incoming request), then 32768 connections can consume all 2 GB of memory. This leaves no memory to queue or process incoming requests from those connections.

    Case 2: On the other hand, say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time. Those buffers pile up in the server and eventually occupy most of its memory, leaving no memory for new connections thereafter.

    This is the real dilemma in server design, and it has been bugging me badly for many days. If I can decide on a maximum request-buffer size per connection and a maximum number of queued requests per connection, then, based on available server memory, that automatically sets a limit on the maximum number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines, or empirical data someone can share with me, please?
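
    The relationship in the last paragraph can be made concrete. A hedged worked example - every number (buffer size, queue depth, memory budget) is an assumption, not a recommendation:

        $memoryBudget  = 2GB     # physical memory reserved for connection state
        $recvBuffer    = 16KB    # per-connection receive buffer (assumption)
        $queueDepth    = 4       # max queued requests per connection (assumption)
        $requestSize   = 16KB    # max size of one queued request (assumption)

        # 16KB + 4 * 16KB = 80KB per connection
        $perConnection  = $recvBuffer + ($queueDepth * $requestSize)
        $maxConnections = [math]::Floor($memoryBudget / $perConnection)
        $maxConnections   # ~26214 connections

    Capping both numbers per connection also turns case 2 into back-pressure: when a connection's queue is full, stop posting receives on it, and TCP flow control throttles the sender instead of the server buffering without bound.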

    Read the article

  • IIS crashes with unhandled exception in ASP.NET

    - by SnowCrash
    We had an issue recently with an unhandled exception in an ASP.NET C# application bringing down IIS and all the application pools it was hosting. IIS Manager was unable to restart or stop/start the service, and I was unable to start IIS again after killing w3wp.exe in Task Manager. A system reboot restored IIS to a running state; as a primarily Linux admin, I generally consider an unplanned system reboot to resolve a software error to be an act of high heresy.

    Is there a way to "harden" IIS so that a faulting application does not affect anything but the request that exposes the fault? Some details on the server and the application fault:

    - IIS: 7.5
    - .NET: 4.0
    - Windows Server 2008 R2
    - Faulted on a call to System.Net.Dns.Resolve() with a URL pointing to a non-existent domain as the argument. (I'm aware that this method is deprecated, but the point that a page code issue shouldn't bring down the server still stands.)
    - The exception generated was SocketException.
    - The faulting module according to Event Viewer was KERNELBASE.dll.

    The issue was resolved by wrapping the call in a try-catch, logging the exception and displaying some generic content on the page. I'm hoping that I missed something in the IIS config that would switch it to "production" mode or something.
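
    One relevant knob (hedged: it explains the process death, not the whole-IIS lockup): since .NET 2.0, an unhandled exception on any thread - including background and timer threads, which no page-level try-catch can reach - terminates w3wp.exe by design. The pre-2.0 behaviour of swallowing such exceptions can be restored framework-wide in Aspnet.config (next to the runtime, e.g. C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Aspnet.config):

        <configuration>
          <runtime>
            <legacyUnhandledExceptionPolicy enabled="true" />
          </runtime>
        </configuration>

    Microsoft's guidance is to fix the code instead (as was done here with the try-catch), since the setting masks failures for every pool on the box. Why IIS itself could not be restarted without a reboot is a separate question; a dump of w3wp at crash time (e.g. with DebugDiag) would be the way to pin that down.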

    Read the article

  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines in unmanned, remote locations. These machines will log in automatically and perform tasks controlled from a central server. In order to manage patching, AV, updates, etc., I want these machines joined to a dedicated domain for this estate. Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises, in internal networks. The central server (ours) and the domain controller will be on a public WAN. I see two ways of facilitating this:

    1. Install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located.
    2. Have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS.

    Option one is more costly to set up and has a higher operational cost, but it offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication can reach the remote machine.

    In a simple test, I got a Windows 7 machine to dial a VPN prior to authentication to the domain, then automatically log in to the machine using domain credentials. If the VPN connection drops, it redials. I can also create a timed task to reconnect automatically every hour in case of other issues.

    I'd like to know: why (if at all) is operating a remote network of devices located in various out-of-band locations in this way a bad idea? Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like to hear other opinions on this.
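
    For reference, the dial-and-retry part of option two can be scripted with nothing but built-ins. A minimal sketch, assuming a phonebook entry named "EstateVPN" created for all users with saved credentials (the name and schedule are placeholders):

        rem Dial the VPN using the entry's saved credentials
        rasdial EstateVPN

        rem Belt-and-braces hourly redial in case the automatic redial misses a drop
        schtasks /create /tn "Redial EstateVPN" /sc hourly /ru SYSTEM /tr "rasdial EstateVPN"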

    Read the article

  • Diagnosing "The specified module could not be found" on IIS7 with ASP.Net

    - by Baldy
    I am migrating some web apps from a Windows 2003 server with IIS6 to a Windows Server 2008 R2 server with IIS7. One of the apps, which runs on ASP.NET v2.0 using forms authentication, will not load. It gives me the following error:

        The specified module could not be found. (Exception from HRESULT: 0x8007007E)
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.IO.FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)

    Now, I fully understand that the app cannot load some kind of resource due to a FileNotFoundException, but I am struggling to diagnose exactly where in the application this is happening, as the error tells me neither what the module is nor what file it is looking for. I have enabled failed request tracing, and I get back a complete request trace, yet I cannot find anything that gives me detail on this specific error or the module involved. Any advice on diagnosing the root cause of the issue would be greatly appreciated.
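
    A hedged diagnostic route: HRESULT 0x8007007E with "module" wording usually means a managed assembly loaded but a native DLL it depends on could not be found, which is why the .NET exception never names the file. Two standard ways to catch the culprit are assembly-binding (Fusion) logging for the managed side and Process Monitor for the native side. Enabling Fusion logging looks like this (create C:\FusionLogs first and recycle the app pool afterwards):

        reg add "HKLM\SOFTWARE\Microsoft\Fusion" /v EnableLog    /t REG_DWORD /d 1 /f
        reg add "HKLM\SOFTWARE\Microsoft\Fusion" /v ForceLog     /t REG_DWORD /d 1 /f
        reg add "HKLM\SOFTWARE\Microsoft\Fusion" /v LogFailures  /t REG_DWORD /d 1 /f
        reg add "HKLM\SOFTWARE\Microsoft\Fusion" /v LogPath      /t REG_SZ    /d C:\FusionLogs\ /f

    If Fusion shows all assemblies binding cleanly, filter Process Monitor on w3wp.exe for NAME NOT FOUND results on .dll paths during a request. On a 2003-to-2008 R2 move, the usual suspects are 32-bit native DLLs in a 64-bit application pool (try "Enable 32-Bit Applications" on the pool) or a missing VC++ runtime.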

    Read the article

  • legitimacy of the tasks in the task scheduler

    - by Eyad
    Is there a way to know the source and legitimacy of the tasks in the Task Scheduler in Windows Server 2008 and 2003? Can I check whether a task was added by Microsoft (e.g. by SCCM) or by a 3rd-party application? For each task in the Task Scheduler, I want to verify that the task has not been created by a third-party application: I only want to allow standard Microsoft tasks and disable all other, non-standard tasks.

    I have created a PowerShell script that goes through all the XML files in the C:\Windows\System32\Tasks directory, and I was able to read all the XML task files successfully, but I am stuck on how to validate the tasks. Here is the script for your reference:

        Function TaskSniper() {
            # Getting all the files in the Tasks folder
            $files = Get-ChildItem "C:\Windows\System32\Tasks" -Recurse | Where-Object { !$_.PSIsContainer };
            [Xml] $StandardXmlFile = Get-Content "Edit Me";

            foreach ($file in $files) {
                # Constructing the file path
                $path = $file.DirectoryName + "\" + $file.Name

                # Reading the file as an XML doc
                [Xml] $xmlFile = Get-Content $path

                #DS SEE: http://social.technet.microsoft.com/Forums/en-US/w7itprogeneral/thread/caa8422f-6397-4510-ba6e-e28f2d2ee0d2/
                #(get-authenticodesignature C:\Windows\System32\appidpolicyconverter.exe).status -eq "valid"

                # Display something
                $xmlFile.Task.Settings.Hidden
            }
        }

    Thank you
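
    Building on the Authenticode idea already hinted at in the script's comments, one hedged heuristic is to pull each task's <Exec><Command> element and check the signature of the executable it runs. This hypothetical extension is meant to drop inside the foreach loop above (it reuses $xmlFile and $path), and it proves only who signed the binary, not who registered the task:

        # Heuristic only - a task can run a Microsoft-signed binary
        # (e.g. powershell.exe) with malicious arguments.
        $cmd = $xmlFile.Task.Actions.Exec.Command
        if ($cmd) {
            $exe = [Environment]::ExpandEnvironmentVariables($cmd).Trim('"')
            $sig = Get-AuthenticodeSignature -FilePath $exe
            if ($sig.Status -ne 'Valid' -or
                $sig.SignerCertificate.Subject -notmatch 'O=Microsoft Corporation') {
                Write-Output "Suspect task: $path -> $exe"
            }
        }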

    Read the article

  • Disabling LDAP Signing on Windows PDC in Local Policy

    - by Golmaal
    I just tripped over my own feet, it seems. Playing around on a Windows 2008 R2 server (set up as a domain controller), I was intrigued by a certain warning event (event ID 2886) which says: "To enhance the security of directory servers, you can configure both Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS) to require signed Lightweight Directory Access Protocol (LDAP) binds." So I thoughtlessly did some Googling and set the relevant policies that enforce LDAP signing. I don't remember exactly, but I may have done that in Local Policy.

    Now I have set up a pfSense box which must authenticate AD users via LDAP. While the firewall itself can communicate over a secure channel, it is difficult to manage the same for other packages such as Squid and SquidGuard. So now I have to disable, i.e. undo, those policy changes. The problem is that they are greyed out! The policies in question are the LDAP server signing and LDAP client signing requirements. I don't remember what I did, but when I access these policies in the Local Policy editor on the server, they are set to "Require signing" and are greyed out. The same policies can still be set via the Default Domain Controllers option in the Group Policy editor. So how can I reset these greyed-out policies? Thanks
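
    Greyed-out local security options normally mean a domain GPO is enforcing the value, which fits this description: the fix would be editing the Default Domain Controllers Policy (where the setting most likely landed), setting both policies back to not defined/None, and refreshing. A hedged sketch for verifying the effective values afterwards:

        gpupdate /force

        rem Server: LDAPServerIntegrity 1 = None, 2 = Require signing
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v LDAPServerIntegrity

        rem Client: LDAPClientIntegrity 0/1/2 = None/Negotiate/Require
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\ldap" /v LDAPClientIntegrity

    gpresult /h report.html will also show which GPO is winning for these settings.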

    Read the article

  • Connect by Wifi to Sql Server from another computer

    - by Bronzato
    I am trying to connect over Wi-Fi to SQL Server with SQL Server Management Studio from another computer, but it fails. I have a computer with Windows 7 and SQL Server 2008 (let's say the server computer). Next to it, I have a freshly installed computer with Windows 7 and SQL Server Management Studio (let's say the client computer). What I did on the server computer:

    - Configured the firewall by enabling port 1433
    - Enabled the network protocols (TCP/IP) in SQL Server Configuration Manager
    - Checked "Allow remote connections to this server" in the server properties in Management Studio
    - Started SQL Server Browser
    - Restarted the services (SQL Server Browser is stopped at the moment, but I think it is not necessary, is it?)

    Next, I successfully tested a ping on port 1433 from my client computer with a tool named tcping (e.g. tcping 192.168.1.4 1433). But I still could not connect from my client computer to SQL Server on the other computer.

    OK, something new on this problem: I have now successfully connected to my "server computer" with Management Studio, by typing just the computer name in the server name field of the connection window. My previous (failed) attempt was to type the computer name followed by the SQL Server instance (e.g. COMPUTER_NAME\SQL2008). I don't know why I only have to type the computer name... Never mind.

    Now my new challenge is to get my VB6 application to connect to this remote database located on my "server computer". I have a connection string for this, but it fails to connect. Here it is:

        "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP\SQL2008"

    Any idea what's wrong?
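
    The two observations fit together: connecting by computer name alone works when SQL Server is listening as the default instance, and "COMPUTER_NAME\SQL2008" fails when either the instance is not actually named SQL2008 or SQL Server Browser (which translates instance names to ports) is stopped. A hedged fix for the VB6 string is therefore to address the server the same way Management Studio succeeded:

        "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP"

        ' or, bypassing both Browser and name resolution (the port is an assumption):
        "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=192.168.1.4,1433"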

    Read the article
