
  • Why does clicking on Windows 7 Printer Properties Result In Driver Not Installed?

    - by octopusgrabbus
    The question I need to ask is: has anyone heard of getting a "driver not installed" error when clicking on a printer's properties in Windows 7, and is there a workaround? Here are the details of the problem.

    One of our users has a Windows 7 desktop and an HP LaserJet 4050 T connected to it via a parallel-to-USB converter. The PCL5 universal driver was installed for series 4050 printers. I needed to install the PCL6 driver, which completed successfully. The user is an administrator of the system, and I was prompted to and accepted running as Administrator to install the driver.

    After the install, I went to see the 4050's properties and was prompted that the PCL6 driver was not installed. I believe the PCL6 driver was installed, because the PCL5 driver resulted in receiving an official HP error page indicating the printer was "not set up for collating" as the second page when printing two copies of a one-page email. This problem did not occur with the PCL6 driver. Oddly enough, setting back to PCL5 produced the same error about the PCL5 driver not being installed. I ignored/dismissed the error box (did not re-install the driver) and reproduced the error, with the second page being the HP "not set up for collating" error page.

    Any thoughts on what is causing this and how to clear it would be appreciated. The closest fix I could find was on a Microsoft tech page, which had me reset winsock from an Administrator command prompt, followed by a reboot. That did not fix the problem. I have also found this http://social.technet.microsoft.com/Forums/windowsserver/en-US/5101195b-3aca-4699-9a06-db4578614e2d/changing-driver-results-in-printer-driver-is-not-installed-error-on-server-2008?forum=winserverprint and will look into trying some of these suggestions, which appear to me to be a "shotgun" approach to fixing the problem.

  • Road Warrior VPN Setup

    - by wobblycogs
    I apologise up front for the rather open-ended nature of this question, but I've got well out of my depth and could really do with some pointers.

    I need to set up a road-warrior VPN solution which will allow our customers to securely access a number of services we provide for them. Customer machines will be running a variety of Windows versions from XP onwards, with a variety of patch levels. Typically they will connect from the client's main offices, but not always. It is safe to assume that all clients will be behind NATs, but we may occasionally see a connection that isn't NATed. The typical connection situation is therefore:

        Customer Laptop -- Router (NAT) -- Internet -- VPN Server + Firewall -- Server (Win 2008 R2, non-routable IP)

    There will initially be a dozen or so people that could connect, but that will grow quickly to around 100. It's unlikely that we'll see that many concurrent connections, though; I imagine our total VPN throughput would be <50 Mbps peak.

    What are my options for setting this up? I've been trying to set up a system like this using a MikroTik router for a few days but have struggled to get it working correctly, particularly with NATed clients. I've had a quick look at OpenVPN and liked what I saw, but I think it's unlikely our customers' IT departments would allow the client to be installed. Finally, I've looked at the Cisco ASA range, but I'm on a fairly tight budget, so this is less preferable, though it looks like it would work pretty much out of the box. My fallback position is to connect the server directly and use the provided VPN + firewall facilities, but that is far from ideal, as the number of servers is likely to grow over time.

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast.

    The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone maps to is running Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch, and we have also replaced an old dying SonicWall with a shiny new one. Everything runs on an ESX host except for the DC, where everyone is getting data from.

    In my client's office, we have done the following:

      - Swapped out his computer (a Win7 and an XP box)
      - Swapped out the desktop switch in his office
      - Removed the desktop switch in his office
      - Changed out the network cable going to the wall
      - Ran 'net config server /autodisconnect:-1' on the server
      - Disabled remote differential compression on his current Win7 box

    When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9 GB PST living on the server he is always connected to), or that the software he runs from his L: drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night, he suddenly couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'.

    What else should I try? Maybe there is bad wiring in the wall, the patch panel, or a bad port in the new switch I have in the server room?
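
    One way to narrow this down is to log, from the affected client, whether the drops happen at the IP layer or only at the SMB layer. Below is a minimal Python sketch of such a probe (not from the original post; the server name and test file path are placeholders):

        # probe.py -- crude connectivity probe; run it on the affected client.
        # SERVER and TEST_FILE are hypothetical placeholders for this sketch.
        import os
        import subprocess
        import time
        from datetime import datetime

        SERVER = "dc01"              # the 2008 R2 box holding the shares
        TEST_FILE = r"L:\probe.txt"  # any small file on one of the mapped drives

        devnull = open(os.devnull, "w")
        while True:
            stamp = datetime.now().strftime("%H:%M:%S")
            # ICMP reachability (network layer)
            ping_ok = subprocess.call(["ping", "-n", "1", SERVER], stdout=devnull) == 0
            # SMB reachability (session layer): os.stat fails once the mapped
            # drive has dropped, even while ping may still succeed
            try:
                os.stat(TEST_FILE)
                smb_ok = True
            except OSError:
                smb_ok = False
            print("%s ping=%s smb=%s" % (stamp, ping_ok, smb_ok))
            time.sleep(10)

    If ping stays up while the stat calls fail, the cabling and switch ports are likely fine and the problem sits in the SMB session layer instead.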

  • Defragging Host OS of VMWare

    - by JackLocke
    Hi All, I want to ask something that has been puzzling me for the last few days. I will try to explain my problem as clearly as I can...

    I have VMware Workstation installed on my machine, and I use a separate 100 GB drive which stores all of my virtual machines, nothing else. Last week I was playing with a defragmentation tool called "Smart Defrag", which showed me in its analysis report that the drive where I am currently storing all of my virtual machines has more than 80% fragmentation!

    Now my question is: what will be the effect on my guest/VM performance if I defrag my host machine? I mean, this host machine is essentially storing those virtual machines, but still doesn't have any direct access to whatever is stored inside them... so defragging the host should not cause any problem. But before proceeding, I want to hear from other people who may have run into the same question. I will really appreciate any help...

    BTW, I am using Windows 7 as the host, and the guest machines I am using are Windows 2008 & 2003 & Ubuntu 10.04.

    Thanks,
    Jack

  • MS Excel 2010 - Using DSN + 32 bit drivers

    - by Kristiaan
    I need some advice, as I'm running into a problem and so far I have been unable to find a solution. We have a set of reports developed in MS Excel that use DSN files to connect to data sources to retrieve data. These work fine on 32/64-bit systems; however, we are moving to a terminal server environment using Windows 2008 R2 64-bit. The reports fail to run using the DSNs within this environment if we only have the 32-bit drivers installed and configured in the ODBC settings; the minute we install the 64-bit drivers, the software works. Is there a way/method of getting Excel or the DSN file to NOT use the 64-bit driver, but force it to use the 32-bit driver?

    ANSWERED - but due to low user score I cannot "answer" my own question... Sadly, there is no way to do what I want to do without a lot of very nasty and not-100%-reliable registry hacks. If you need to access 32-bit ODBC data sources, the application in question has to be 32-bit. Here is a link to just one forum post I found relating to this type of problem; it appears the only way I would be able to accomplish this is to remove the 64-bit version of Office and install the 32-bit version instead. http://social.msdn.microsoft.com/Forums/en-US/accessdev/thread/5108f337-f06a-4518-afe3-d3c1abd040ef/
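
    A quick way to see the bitness split concretely is to enumerate the 32-bit and 64-bit ODBC driver lists side by side; they live under separate registry views of ODBCINST.INI. A hedged Python sketch (run on the terminal server; the WOW64 flags select the registry view):

        # list_odbc_drivers.py -- compare the 32-bit and 64-bit ODBC driver lists.
        import winreg  # 'import _winreg as winreg' on Python 2

        def drivers(wow_flag):
            key = winreg.OpenKey(
                winreg.HKEY_LOCAL_MACHINE,
                r"SOFTWARE\ODBC\ODBCINST.INI\ODBC Drivers",
                0, winreg.KEY_READ | wow_flag)
            found, i = [], 0
            while True:
                try:
                    name, _, _ = winreg.EnumValue(key, i)
                    found.append(name)
                    i += 1
                except OSError:
                    break
            return found

        print("32-bit:", drivers(winreg.KEY_WOW64_32KEY))
        print("64-bit:", drivers(winreg.KEY_WOW64_64KEY))

    A driver that appears only in the 32-bit list can never be loaded by 64-bit Excel, which is the root of the behaviour described above.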

  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames:

        Old: Domain\11111
        New: Domain\22222

    When this user now logs in using their new username and attempts to browse to any one of a number of ASP.NET applications using only Windows Authentication (no Anonymous enabled), the system authenticates, but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch of usernames between their logon account and who IIS thinks they are. Below are the outputs of several ASP.NET variables from apps running in a Windows 2008 / IIS 7.5 environment:

        Request.ServerVariables["AUTH_TYPE"]:                    Negotiate
        Request.ServerVariables["AUTH_USER"]:                    Domain\11111
        Request.ServerVariables["LOGON_USER"]:                   Domain\22222
        Request.ServerVariables["REMOTE_USER"]:                  Domain\11111
        HttpContext.Current.User.Identity.Name:                  Domain\11111
        System.Threading.Thread.CurrentPrincipal.Identity.Name:  Domain\11111

    From the above, I can see that only the LOGON_USER server variable has the correct value, which is the account the user used to log on to their machine. However, we use the AUTH_USER variable for looking up the database permissions. In a separate testing environment (a completely different server: Windows 2003, IIS 6), all of the above variables show "Domain\22222". So this seems to be a server-specific issue, like the credentials are somehow getting cached either on their machine or on the server (the former seems more plausible).

    So the question is: how do I confirm whether it's the user's machine or the server that is botching the request? How should I go about fixing this? I looked at the following two resources and will be giving the first one a try shortly:

        http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7
        http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080

    Thanks.
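
    One quick check worth adding (a suggestion, not from the original post): stale Kerberos tickets on the client can keep the old identity alive after a rename, so purging the ticket cache on the user's machine (klist purge on Windows 7, or simply logging on at a different PC) before retrying should show whether the mismatch travels with the machine or stays with the server.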

  • Group Policy for DNS Server Addresses

    - by John
    This question deals strictly with the methodology of using Active Directory to publish DNS settings and controlling the DNS server address order for domain-attached clients. This is not asking whether this is the right function, role, or best use of Group Policy, but rather how to make it work.

    I would like to be able to publish DNS addresses in Group Policy and possibly control the order in which clients consume the addresses. I have tried following the information from this site without success. I set the following setting within Group Policy, but the client never shows the settings within the TCP/IP properties:

        Computer Configuration/Administrative Templates/Network/DNS Client/DNS Servers

    I did list them as a single space-separated list. For example:

        192.168.0.1 192.168.10.3 192.168.34.2 192.168.2.67 192.168.56.99 192.168.99.23

    This would be for Windows 7, Windows Server 2003, and Windows Server 2008 clients. I am not sure what I am doing wrong or how to get this to work. Am I missing a setting? Do I need to set something differently?
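
    One thing worth knowing here: as I understand it, this policy writes a NameServer value under HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient, and policy-assigned DNS servers do not appear in the TCP/IP properties dialog at all - check ipconfig /all or the registry instead. A minimal sketch to verify on a client whether the GPO actually landed (assuming that standard policy location):

        # check_dns_policy.py -- did the DNS Servers GPO apply to this client?
        import winreg  # 'import _winreg as winreg' on Python 2

        PATH = r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient"
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH)
            value, _ = winreg.QueryValueEx(key, "NameServer")
            print("Policy DNS servers:", value)
        except OSError:
            print("Policy key/value not present - the GPO did not apply.")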

  • SQL Server becomes slow after restart

    - by Tobi DM
    I already posted this one on Stack Overflow, but someone gave me the hint that I might have more luck on Server Fault.

    We use SQL Server 2005 on Windows Server 2008. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app besides SQL Server is our app server, which connects the clients to the database.

    Now we encounter the following problem: after a restart of the server, performance is great. The server grabs the 40 GB RAM it is allowed and then runs fast as hell. But after about 4 weeks the system becomes slower and slower, and the execution time of statements (seen in the Profiler) rises slowly. Yet I cannot see anything going wrong on the server:

      - CPU usage is at about 20%
      - I/O also seems to be no problem
      - Process Monitor does not show any strange apps or anything like that
      - The event log has no interesting messages
      - No open transactions or blocking to see
      - We do not use cursors in our app

    We have already tried the following things without effect:

      - Dropped the caches using DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS
      - Restarted the app server we are using
      - Restarted the SQL Server service

    But nothing helped except restarting the whole server. Any ideas?
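
    One diagnostic step worth adding (my note, not the poster's): snapshot sys.dm_os_wait_stats and sys.dm_exec_query_stats while the server is still fast and again once it has slowed down; a shift in the dominant wait type over those 4 weeks usually points at the culprit far more precisely than the CPU or I/O counters listed above.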

  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
    I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. Right now I have the handler mappings and everything else set up just fine, except that IIS is modifying what it gets back from the CGI script before sending it along to the client. Here's an example: I wrote the following CGI script:

        # hello.py
        print "Status: 400 Bad Request"
        print "Content-Type: text/html"
        print
        print "Error Message"

    According to the HTTP spec this should be fine, and a status of 400 should allow for a description of the error in the body of the response. When the server response actually comes back to me, I get the following:

        Status: 400 Bad Request
        Date: Fri, 11 Feb 2011 17:58:30 GMT
        X-Powered-By: ASP.NET
        Connection: close
        Content-Length: 11
        Server: Microsoft-IIS/7.5
        Content-Type: text/html

        Bad Request

    I've seen on this forum and others how I can change or eliminate the X-Powered-By header element, but I would like IIS to leave the response alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body, replaces it with "Bad Request", and then adds all that other junk in. Is there some way to tell IIS to just send the response along without making any changes at all?
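
    For what it's worth (a suggestion, not part of the original question): this is the behaviour of IIS 7's custom-error handling, which replaces the body of any response carrying an error status code. In my experience, setting existingResponse="PassThrough" on the <httpErrors> section of the site's web.config (or at server level in applicationHost.config) tells IIS to forward the CGI script's body untouched.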

  • Windows Server - share files without access for administrator

    - by Pawel
    We have an MS Windows Server 2008 R2 based server that is administered by our IT department. We would like to achieve two things simultaneously:

      1. A folder on the server, containing several thousand files (new files added frequently), that is accessible to some Active Directory users (e.g. the board of directors) but is not accessible to IT department employees
      2. IT department employees still maintain rights to administer the server, including installing new software and services

    We have already checked some solutions:

      - Using NTFS access rights. Unfortunately, IT (members of the "Administrators" group) can set themselves as new owners of the files and change the permissions so that they gain access to the files.
      - Enabling EFS. Unfortunately, even if you do not allow IT to access the files, they can still disable EFS completely because they have administrative rights. Moreover, as far as I know, you have to manually add permissions for every user but the owner for each new file - very inconvenient.
      - Creating a new role for the IT department that has all the privileges apart from taking ownership of files. Unfortunately, if you're not a member of the Administrators group, you cannot install new software, no matter what privileges you add to the role.
      - TrueCrypt - nice free encryption software, but with poor sharing capabilities. You can either mount an encrypted container on the server (and then IT has access to its contents) or mount it locally, but then only one user can mount it for writing.
      - AxCrypt - free encryption software that enables file-by-file encryption on the server. There are some disadvantages though: you have to manually encrypt each new file added, the files have their extensions changed, and you can only set one password for all files (so all users have to know this one password).

    Any other ideas? Our budget is limited, so enterprise-class software from Symantec or PGP would probably not be an option.

  • Best NIC config when virtual servers need iSCSI storage?

    - by icky2000
    I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server, configured like this:

      - NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc.)
      - NIC03: connected to iSCSI VLAN #1
      - NIC04: connected to iSCSI VLAN #2
      - NIC05: dedicated to one virtual switch for VMs
      - NIC06: dedicated to another virtual switch for VMs

    The iSCSI NICs are used, obviously, for the storage hosting the VMs. I put half the VMs on the virtual switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports that NIC05 & NIC06 connect to are trunked, and we then tag each VM's NIC for the appropriate VLAN. No clustering on this host.

    Now I wish to assign some iSCSI storage directly to a VM. As I see it, I have two options:

      1. Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs.
      2. Create two additional virtual switches on the host, assign one to NIC03 and one to NIC04, add two NICs to the VM that needs iSCSI storage, and let them share that path to the SAN with the host.

    I'm wondering how much overhead the VLAN tagging in Hyper-V has, and I haven't seen any discussion about that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs or cause some other problem that could threaten storage access for the entire host, which would be bad. Any thoughts or suggestions? How do you configure your hosts when VMs connect directly to iSCSI?

  • Problems installing Windows service via Group Policy in a domain

    - by CraneStyle
    I'm reasonably new to Group Policy administration, and I'm trying to deploy an MSI installer via Active Directory to install a service. In reality, I'm a software developer trying to test how my service will be installed in a domain environment.

    My test environment:

      - Server 2003 domain controller
      - About 10 machines (between XP SP3 and Server 2008), all joined to my domain
      - No other real setup or Active Directory configuration, apart from things like getting DNS right

    I suspect that I may be missing a step in Group Policy that says I need to grant an explicit permission somewhere, but I have no idea where that might be or what it will say.

    What I've done: I followed the documentation from Microsoft in "How to Deploy Software via Group Policy", so I believe all those steps are correct (I used the UNC path, verified NTFS permissions, verified the computers and users are members of groups that are assigned to receive the policy, etc.).

    If I deploy the software via the Computer Configuration, when I reboot the target machine it logs Event ID 108: "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was: An operations error occurred." There are no previous log entries to check, which is weird, because if it ever actually tried to invoke the Windows Installer it should log any failure of my application's installer. If I open a command prompt and manually run:

        msiexec /qb /i \\[host]\[share]\installer.msi

    it installs the service just fine.

    If I deploy the software via the User Configuration, when I log that user in, the event log says that software changes were applied successfully, but my service isn't installed. However, when deployed via the User Configuration, even though it's not installed, if I go to Control Panel - Add/Remove Programs and click on Add New Programs, my service installer is advertised there and I can install/remove it from that list. (This does not happen when it's assigned to computers.)

    Hopefully that wall of text was enough information to get me going. Thanks all for the help.
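
    One way to get past the unhelpful "An operations error occurred" (a suggestion of mine, not from the original post): turn on verbose Windows Installer logging via the standard Logging policy value, then reboot and read the resulting MSI*.log. A sketch, assuming that policy location:

        # enable_msi_logging.py -- turn on verbose Windows Installer logging.
        # Run elevated on the target machine; logs land in the installing
        # account's %TEMP% (C:\Windows\Temp for machine-assigned packages).
        import winreg  # 'import _winreg as winreg' on Python 2

        key = winreg.CreateKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SOFTWARE\Policies\Microsoft\Windows\Installer")
        # 'voicewarmup' enables every logging flag (verbose, warnings, etc.)
        winreg.SetValueEx(key, "Logging", 0, winreg.REG_SZ, "voicewarmup")
        winreg.CloseKey(key)
        print("Verbose MSI logging enabled.")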

  • RDP Connection to Windows 7 stays really slow

    - by Pavlo
    I have an issue with connecting to Windows 7 via RDP. I can open an RDP session, but regardless of any settings, the response times are really long. This is particularly the case when opening a web page in a browser; I've tried IE, Firefox and Google Chrome. I also use an RDP connection to a Windows 2008 server from the same client machine, and the speed is perfectly normal with all features turned on. We have Gigabit Ethernet here, so I don't think it can be the client's fault.

    As for the Windows 7 machine, I've tried shutting all the graphics features off and turning the colour depth down to 256 colours. Result: the same. If I work locally on the machine, I cannot see any lag.

    What else have I tried:

      - Using the old RDP 5 client from Microsoft
      - Setting the network autotuninglevel as seen here

    Do you have any ideas? Thanks in advance!

    Update: the problem seems to be with rendering window contents. All the window borders and panes are rendered pretty quickly, but the content shows up very slowly. Also, mouse movements are recognised by the Win 7 box only after some delay. Are there some hidden settings in RDP where one could turn some advanced features off or turn some caching on? I use bitmap caching, but this apparently doesn't help.

  • Shortcut to "printer and faxes" on another computer

    - by Doltknuckle
    I have a print server running Windows Server 2008 that has about 50 printers on it. In Windows XP, I was able to connect to the server using the UNC name and make a shortcut to the "Printers and Faxes" folder. (For the record, I know that it really isn't a folder, but that's outside the scope of this question.)

    I have recently switched to Windows 7, and I find the jump lists really useful. One of the things I want to do is make it easy to connect to that server's "Printers and Faxes" folder. I would like something like a shortcut that I can open to go immediately to that location. The problem is that Windows 7 doesn't have a way to create such a shortcut like you could in Windows XP. It has a button on the toolbar that says "view remote printers" which sends you to the correct folder, but I'd like to avoid having to type out the server name. I also can't use the "view network" link in Windows Explorer: our organization has over 6,000 machines, and viewing the network lists all of them.

    This is all about saving time by using the minimum number of mouse clicks and key presses in normal operation. Does anyone have any suggestions?

  • Using udev to create a character device based on a driver being loaded

    - by SteveCB
    I'm in the process of setting up RAID monitoring for a number of Dell servers that use the PERC 6/i integrated card. We're using Nagios at present, and the check_megasasctl plugin seems to fit the bill. However, the plugin relies upon the existence of:

        /dev/megaraid_sas_ioctl_node

    This device node doesn't exist by default; you have to create it by hand using something like:

        mknod /dev/megaraid_sas_ioctl_node c 253 0

    Now, to make the existence of this device node persistent across reboots, I thought I could write a udev rule, but as usual I'm missing something. I thought I could create a file such as /etc/udev/rules.d/10-local.rules that contained:

        DRIVER=="megasas", NAME="megaraid_sas_ioctl_node", MODE="0600"

    But this doesn't work - no device node after a reboot. Dmesg output indicates the megasas driver is loaded and functional:

        megasas: 00.00.04.01-RH1 Thu July 10 09:41:51 PST 2008
        megasas: 0x1000:0x0060:0x1028:0x1f0c: bus 1:slot 0:func 0
        megasas: FW now in Ready state

    Further, I don't see any means to instruct udev on which type of device node to create: character or block. I suspect I'm failing to understand exactly how udev is meant to work.

    I realise I could just cheat and run MegaCLI in /etc/rc.local, redirecting its output to /dev/null; it creates the megaraid_sas_ioctl_node device node as part of its execution. I just thought using udev rules would be a) cleaner and b) a useful learning exercise. Perhaps I should just dump the above mknod command in /etc/rc.local...

    So how do I get udev to create the /dev/megaraid_sas_ioctl_node device node based on the presence of the megasas driver?

    Cheers
    Steve

  • I deployed Flash Player via a Software Installation policy. How to upgrade?

    - by eleven81
    I have a Windows Server 2008 machine as my DC. Earlier this year I created a Software Installation GPO to deploy the Adobe Flash Player plugin MSI. I assigned the policy to the computers; about half run Windows XP x86 and the other half Windows 7 x64. That all works like clockwork.

    When I created the Software Installation policy, I disabled the Flash Player plugin's automatic update feature by editing the MSI in Orca. I did this because I wanted all of my machines to run the exact same version of the plugin.

    Now some time has passed and a newer version of the Flash Player plugin has been released, so it is time for me to push out the update. I already have the new MSI, but I am lost on what to do next. I see the Upgrades tab in the Software Installation GPO, but everything there reads as if it were meant for add-ons to a larger master program, not for updates released over time.

    I have read that it is best to create a new Software Installation policy with the new MSI, revoke the old GPO, and assign the new GPO. I feel as though, over time, I will wind up with more revoked policies than active ones. I have also read that some people have had success replacing the old MSI with the new MSI and simply telling the GPO to redeploy, but this seems like a backdoor method that will only get me into trouble.

    In short, what is the correct, best-practice, or preferred way to roll out the new version via Group Policy?

  • Method to integrate Powershell scripts with non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts into the rest of the process.

    I would prefer the process to begin on a Linux host, since I imagine it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OS X servers too, and they actually are Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option.

    So I guess what I'm asking is whether there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!
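
    One avenue worth naming explicitly (my suggestion, not from the original post): Windows Server 2008 ships with WinRM, which can be driven from Linux without Cygwin. For example, with the third-party pywinrm package (an assumption - any WS-Management client would do), and assuming WinRM has been enabled on the Windows host with `winrm quickconfig`, a sketch might look like this; host name, credentials, and script path are placeholders:

        # provision_step.py -- run a PowerShell step on a Windows host from Linux.
        # Requires 'pip install pywinrm' and WinRM enabled on the target.
        import winrm

        session = winrm.Session("winhost.example.com",
                                auth=("DOMAIN\\provisioner", "secret"))

        # Invoke the machine-creation PowerShell script remotely.
        result = session.run_ps("C:\\scripts\\new-machine.ps1 -Name web042")
        print("exit code:", result.status_code)
        print(result.std_out.decode())
        if result.status_code != 0:
            print(result.std_err.decode())

    This keeps the orchestration on the Apache/Linux side while the Windows box stays a plain Windows server with no extra agent beyond WinRM itself.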

  • csvde doesn't import users

    - by The Eighth Ero
    I have a small problem, as I'm a beginner at server management. I installed a domain controller on my Windows Server 2008 and created three OUs. Now I'm trying to add users to each OU via the csvde command, but I get the following as the result of the operation, without any errors mentioned:

        C:\csvde>csvde -i -f List.csv
        Connecting to "(null)"
        Logging in as current user using SSPI
        Importing directory from file "List.csv"
        Loading entries.
        0 entries modified successfully.

    Below is the csv file I'm using to add 2 users to the "Offshoring1" OU; the domain name is "iado.lan":

        DN                                      objectClass  sAMAccountName  sn  givenName  userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan  user         BB              NN  BB         [email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan  user         II              YY  II         [email protected]

    And this is the csv data as generated by Word 2011 on my Mac:

        DN;objectClass;sAMAccountName;sn;givenName;userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan;user;BB;NN;BB;[email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan;user;II;YY;II;[email protected]

    I do use the -k option to force the import, but still no success.
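
    An observation of my own (hedged - I can't see the real file): csvde expects comma-separated values, and the generated file above is semicolon-delimited, which would explain "0 entries modified" with no error. Python's csv module can re-delimit it, and it quotes the DNs (which themselves contain commas) automatically:

        # fix_delimiters.py -- rewrite a semicolon-delimited file as real CSV.
        import csv

        with open("List.csv") as src, open("List-fixed.csv", "w", newline="") as dst:
            reader = csv.reader(src, delimiter=";")
            writer = csv.writer(dst)  # comma-delimited; DNs get quoted as needed
            for row in reader:
                writer.writerow(row)

    Importing List-fixed.csv with the same csvde command should then actually load the two entries.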

  • Best Asp.net Hosting

    - by dotnetguts
    There are many ASP.NET web hosting companies which spend a lot on advertising and offer very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Can everyone please share their experience with past hosting companies and suggest a good ASP.NET hosting company? Please consider the following requirements:

      1) ASP.NET 3.5 or 4.0 supported
      2) URL rewriter support
      3) GZip support (dynamic, through code)
      4) Initial setup support (if required)
      5) SQL Server 2005 or 2008
      6) Ability to access the SQL Server DB using SQL Management Studio
      7) Environment supporting backup and restore of the DB on my own, without involving the tech support team
      8) Full-text search support
      9) FTP support
      10) Ability to send at least 500 emails daily
      11) 99.9% uptime (no matter that all web hosts claim 99.9% uptime, it's not true)
      12) An alert email sent when they do any maintenance or during downtime
      13) A reasonable hosting price

    In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?

  • C# sends SQL data 4 times less from one box than from another

    - by Bobb
    W2003, .NET 3.5, SQL 2008.

    I have prod and UAT app servers deployed in 2 different data centres. I have a C# app which reads a text file, parses the text, and sends the data to SQL in bulk. The SQL server is in the US and the app servers are in London (but in different places). All POPs have dedicated network connections; there is no public internet involved.

    When the app runs on the UAT server, I can see in Perfmon that Send bytes/sec is 4x higher than from the production server. My estimate is that one server outputs at 1 MB/s and the other at 250 KB/s.

    My immediate suspicion is that there is a router in one of the DCs which shapes traffic or applies QoS limits to traffic from London to the US. However, support, the Windows team, and the networking team all say that there are no differences in either the networking config of the 2 DCs or the NIC config of the 2 app servers...

    How do I find out why the networking bottleneck is 4 times tighter in one place than in the other? What can I do about it?
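
    A way to split the problem (my suggestion, not part of the original post): measure raw TCP throughput between each app server and the SQL box, independent of the app, with a tool like iperf - or, if installing tools is not allowed, with a minimal socket pair such as the sketch below. If the raw numbers show the same 4x gap, it's the network path; if they don't, look at the app or SQL configuration instead.

        # probe.py -- crude TCP throughput test; port and sizes are arbitrary.
        # Usage: python probe.py recv        (on the SQL-side box)
        #        python probe.py send <host> (on each app server)
        import socket
        import sys
        import time

        PORT, CHUNK, SECONDS = 5001, 64 * 1024, 10

        if sys.argv[1] == "recv":
            srv = socket.socket()
            srv.bind(("", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            total, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            print("%.0f KB/s" % (total / 1024.0 / (time.time() - start)))
        else:
            sock = socket.create_connection((sys.argv[2], PORT))
            payload = b"x" * CHUNK
            end = time.time() + SECONDS
            while time.time() < end:
                sock.sendall(payload)
            sock.close()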

  • Network Security Device/Software

    - by Campo
    We currently run Symantec Antivirus Corporate 10.2. The software is really easy to manage on a network, and the actual virus detection isn't bad, but the malware detection is crap. We were recently infected with an email bot that got us put on some block lists. This has been resolved, but I cannot let it happen again.

    I would like to find a program as easy to manage as Symantec that I can install on all the users' workstations as well as the servers. We run a Windows 2003 domain, with a couple of 2008 test servers in the environment. Most of the workstations are XP, though I am using Windows 7, and Symantec is not compatible with this OS... So we need a solution that covers all those operating systems. If it could be installed on Macs too, that would be a bonus, though not at all necessary.

    This software must detect viruses AND malware. I am looking for something that combines the features of anti-malware programs like Malwarebytes or Spybot with an antivirus program like Symantec or AVG.

    Alternatively, if there is a piece of hardware that combines firewall, router, and packet inspection for viruses/spam, that would be the most ideal solution. I could then supplement it with a piece of software to pick up what the hardware misses.

    Thank you for your suggestions.

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi,

    Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I suspect that it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details:

      - The OS is definitely 64-bit.
      - xp_msver shows Platform as NT INTEL X86
      - SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)...
      - However, sqlservr.exe is not shown with '*32' in Task Manager. Does anyone know why this is the case, if it is in fact 32-bit as claimed?
      - Despite this, it does seem to be running out of the x86 Program Files folder.

    If I do the same checks on a confirmed 64-bit installation, they give back the expected 64-bit readings, which can only suggest that this server in question is running only 32-bit SQL.

    That being the case, the question arises of how much memory this 32-bit install can use. Task Manager reports about 3.5 GB memory usage for sqlservr.exe (the server has 16 GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct?

    I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform; however, it is currently heavily in production, so this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?).

    I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
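
    For what it's worth (my note, not the asker's): the assumption is sound. A 32-bit SQL Server instance, even on a 64-bit OS, is capped at roughly 4 GB of user address space unless AWE is enabled, which takes both sp_configure 'awe enabled', 1 and granting the service account the Lock Pages in Memory privilege; with those in place (and max server memory set sensibly), the 3.5 GB ceiling should lift without a reinstall.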

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize an SBS 2003 server in my work environment, and I need some tips on the best way to proceed.

    Background: The SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role; AD/DNS is replicating correctly. This was mainly done to provide fault tolerance to the domain - I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost.

    Options: I see (at least) two routes I could take to complete this. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.

    Option 1) P2V the primary DC:

      - Power off the primary DC
      - Power off the secondary DC (to prevent USN rollback in case the P2V has issues)
      - P2V (cold clone) the primary DC
      - Boot the new PDC VM
      - Allow the new hardware to be detected
      - Remove the old NIC hardware from Device Manager
      - Assign the old IPs to the new virtual NICs
      - Reboot the PDC VM, confirm connectivity and no major issues
      - Power on the secondary DC, confirm replication

    Option 2) Create a new VM, transfer roles, remove the original DC from the domain:

      - Create a new VM, install SBS 2003 (do I need the original SBS install discs for this? The MS migration doc mentions them)
      - Add the VM to the domain, promote it to the DC role (does this start the 7-day timer during which two SBS servers can be in the same domain?)
      - Set up RRAS on the new VM
      - Set up IIS/FTP on the new VM
      - Move the file shares to the new VM
      - Transfer the FSMO roles to the new VM
      - dcpromo the original primary DC out of the domain

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site and are looking to migrate to SEO-friendly URLs. The problem we're having right now is what to do to redirect the old pages to the new friendly URLs. I know how to do redirects; that is not the issue here. The questions I am coming up with right now are listed below.

      1 - Is there a limit to the number of redirects in IIS?
      2 - Would having even a few thousand redirects affect IIS performance?
      3 - My understanding is that we would not be passing along page rank to the new URLs; is that true? (Not a major question - I can ask on SEO forums if nobody here is sure.)
      4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it?

    Our server right now is running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 individual redirects).
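
    On question 4 (a sketch of mine, not from the original post): URL Rewrite supports rewrite maps, so several thousand old-to-new pairs become one rule plus a static lookup table, which scales far better than thousands of individual rules. Assuming the old-to-new mapping lives in a hypothetical two-column CSV, a script like this could generate the map fragment for web.config:

        # make_rewrite_map.py -- emit an IIS URL Rewrite map from old,new pairs.
        # redirects.csv is a hypothetical two-column file: /123.asp,/new-friendly-url
        import csv
        from xml.sax.saxutils import quoteattr

        print('<rewriteMaps><rewriteMap name="Redirects">')
        with open("redirects.csv") as f:
            for old, new in csv.reader(f):
                print("  <add key=%s value=%s />" % (quoteattr(old), quoteattr(new)))
        print("</rewriteMaps>")

        # A single rule then consults the map, along these lines:
        #   <rule name="RedirectMap" stopProcessing="true">
        #     <match url=".*" />
        #     <conditions>
        #       <add input="{Redirects:{REQUEST_URI}}" pattern="(.+)" />
        #     </conditions>
        #     <action type="Redirect" url="{C:1}" redirectType="Permanent" />
        #   </rule>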

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself, with any user besides 'sa'. I am running in mixed mode.

    I have a login/user with a known username. Using that user/login, I can connect properly when directly on the server. When I attempt to log in from anywhere else, I receive "Login failed for user ''", with error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that user/login DOES exist, as I can use it locally. There are no further details about the error.

    Where can I start looking for something to help me with this? I've tried deleting and recreating the user, as well as creating a new one from scratch - same result: locally fine, remotely an error.

    EDIT: Partially resolved. I'm now past the base issue - the clients were trying to connect via the default instance, and I don't know why. Once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect - BUT ONLY if I specify the connection as Server,Port. SQL Browser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.
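
    A closing note of my own (not from the original post): the SQL Browser service answers instance-lookup requests on UDP port 1434, which firewalls commonly block even when the TCP data ports are open. Per my reading of the SQL Server Resolution Protocol, a single CLNT_UCAST_EX byte sent to that port should elicit a reply listing the instances, which makes a quick reachability test possible; the host name below is a placeholder:

        # browser_probe.py -- is SQL Browser reachable from this client?
        import socket

        HOST = "sqlserver01"  # placeholder name
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3)
        sock.sendto(b"\x03", (HOST, 1434))  # SSRP CLNT_UCAST_EX: list instances
        try:
            data, _ = sock.recvfrom(65535)
            print(data[3:].decode("ascii", "replace"))  # skip 3-byte header
        except socket.timeout:
            print("No answer - UDP 1434 blocked or SQL Browser not listening.")

    If the probe times out from the clients but works on the server itself, opening UDP 1434 should let Server\Instance names resolve again without specifying the port.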
