Search Results

Search found 5135 results on 206 pages for 'sbs 2003'.

  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they posited that "cloud computing is the way of the future" and encouraged us to switch from running our own email on Exchange to using GMail and Google Apps for everything. Additionally, one of our departments has been pushing from inside to make this transition within their own department, if not throughout the entire organization. I can definitely see some benefits, such as:
    - Archive space: we never seem to have the space our users want, and of course, the more we get, the more we have to back up.
    - OS agnostic: Exchange is definitely built for Windows, and with Mac and Linux users on the rise, these users increasingly demand better tools and support. Google offers this.
    - Better archiving: the potential for e-discovery, which doesn't exist in a practical way with our current setup.
    Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange. Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely project such an image. Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts. However, there are also some serious drawbacks:
    - We would essentially be outsourcing one of our mission-critical systems to a third party.
    - The switch would inevitably involve Google Apps, and perhaps more as well. That means we would have a lot more at the mercy of a single (potentially weak) password. (Is there a way to make this more secure using a password plus a physical key of some sort?)
    - Our data would not remain under our roof, or even in our country (Canada). This obviously has pluses on the disaster recovery side, but I think there are potential negatives on the legal side.
    - I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails (I'm not sure how much access we would have to basic mail logs, for instance).
    Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite - GMail to Exchange)? Can you add to the points I have already outlined?

  • ISAPI filter with LDAP over SSL only works as administrator

    - by Zac
    I have created an ISAPI filter for IIS 6.0 that tries to authenticate against Active Directory using LDAP. The filter works fine when authenticating regularly over port 389, but when I try to use SSL, I always get the 0x51 Server Down error at the ldap_connect() call. Even skipping the connect call and using ldap_simple_bind_s() results in the same error. The weird thing is that if I change the app pool identity to the local admin account, the filter works fine and LDAP over SSL is successful. I created an exe with the same code below and ran it on the server as admin, and it works. Using the default NETWORK SERVICE identity for the site's app pool is what seems to be the problem. Any thoughts as to what is happening? I want to use the default identity, since I don't want the website to have elevated admin privileges. The server is in a DMZ, outside the network and domain where the DCs that run AD are. We have a valid certificate for AD on our DCs as well. Code:

        // Initialize LDAP connection
        LDAP * ldap = ldap_sslinit(servers, LDAP_SSL_PORT, 1);
        ULONG version = LDAP_VERSION3;
        if (ldap == NULL) {
            strcpy(error_msg, ldap_err2string(LdapGetLastError()));
            valid_user = false;
        } else {
            // Set LDAP options
            ldap_set_option(ldap, LDAP_OPT_PROTOCOL_VERSION, (void *) &version);
            ldap_set_option(ldap, LDAP_OPT_SSL, LDAP_OPT_ON);
            // Make the connection
            ldap_response = ldap_connect(ldap, NULL); // <-- Error occurs here!
            // Bind and continue...
        }

    UPDATE: I created a new user without admin privileges, ran the test exe as that user, and got the same Server Down error. I added the user to the Administrators group and got the same error as well. The only user that seems to work with LDAP over SSL authentication on this particular server is administrator. The web server with the ISAPI filter (and where I've been running the test exe) is running Windows Server 2003; the DCs with AD on them are running 2008 R2. Also worth mentioning: we have a WordPress site on the same server that authenticates against LDAP over SSL using PHP (OpenLDAP), and there's no problem there. It has an ldap.conf file that specifies TLS_REQCERT never, and the user running the PHP code is IUSR.
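
    One way to separate certificate-chain trouble from account trouble is to probe the LDAPS handshake outside of IIS and the Win32 LDAP API entirely - the fact that the PHP client above only works with TLS_REQCERT never hints that certificate validation is worth checking first. A minimal sketch, assuming Python is available on a machine that can reach the DC; the DC name below is a placeholder:

        import socket
        import ssl

        DC_HOST = "dc1.example.com"  # hypothetical DC name - substitute your own

        # Attempt a raw TLS handshake against the LDAPS port. A certificate
        # verification failure here points at the trust chain visible to the
        # calling account rather than at LDAP itself.
        ctx = ssl.create_default_context()
        with socket.create_connection((DC_HOST, 636), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=DC_HOST) as tls:
                print("LDAPS handshake OK; server certificate subject:")
                print(tls.getpeercert()["subject"])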

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error; it just fails to acknowledge the shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console. These two servers don't serve a large number of users (one serves no more than 6 users, the other around 20), they are in the same domain, but they are on different physical hardware (at different sites) and they don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections. This is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003, both of which were physical installs, however). Today, when one of the servers crashed, I noticed an entry in the event log that looked similar to the following:

        Log Name:      Application
        Source:        ESENT
        Date:          27/11/2012 10:25:55
        Event ID:      533
        Task Category: General
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      HAL-FS-01.example.com
        Description:
        DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db:
        A request to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log"
        at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for
        36 second(s). This problem is likely due to faulty hardware. Please contact your hardware
        vendor for further assistance diagnosing the problem.

    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message); I found the message above by searching further back in the event log. Both of these virtual servers have their E: volumes fully allocated, as opposed to dynamically expanding, and there are no issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this and managed to resolve it?
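
    Since the ESENT warning is literally about a 4 KB write not completing for 36 seconds, one simple check is to measure write-and-flush latency on that volume from inside the guest while the problem is building. A crude sketch, assuming Python is available in the guest, with a hypothetical probe path on the E: volume:

        import os
        import time

        PATH = r"E:\latency_probe.tmp"  # hypothetical path on the affected volume
        block = os.urandom(4096)        # same size as the write in the ESENT event

        worst = 0.0
        for _ in range(1000):
            start = time.monotonic()
            with open(PATH, "wb") as f:
                f.write(block)
                f.flush()
                os.fsync(f.fileno())    # force the write through the cache
            worst = max(worst, time.monotonic() - start)
        os.remove(PATH)
        print("worst 4 KB write+flush:", round(worst * 1000, 1), "ms")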

  • Setting Timeouts: SQL Server 2008/IIS 7.5

    - by Julie
    We have recently migrated from a Win 2003/SQL Server 2000 system to Win 2008 R2 64-bit, SQL Server 2008 R2. Our websites are in classic ASP, and this can't be changed to another scripting language at this time. On the old server, if I got stuck in some kind of endless loop, the page would throw an error. On the new server, I have a page with some sort of looping problem that pegs SQL Server, and therefore locks all of our websites, even though the SQL stored procedure is called only once (and runs fine when run as a query on the server). I'll get my code figured out, no biggie, but I need to make sure the server times out when this happens. (The page I'm working on runs fine with certain instances of the query and locks with others using a different query variable. I can't have something like that sneak up on me on a page I haven't touched for three years.) I can't figure out how an SP that runs once on the server, called from an ASP page, is tying up SQL Server this way. It's obviously some sort of timeout issue, but I can't figure out where, or which timeout values to change. I actually have to remote desktop to the server and kill the process in SQL Server. I'm afraid I'm a generalist, and server management is not my thing even though it's my responsibility, so I am almost certain to have questions about any answer I receive. How can I track this down? What settings do I need to change?

    More info - it's not SQL Server: on our test site, I created an ASP file that just ran an endless loop (do while 1=1) and had the same problem - the other websites wouldn't load - without SQL Server being involved. So I think the reason the process was hanging is that the page wasn't timing out as it should, and so the connection to SQL was never closed; killing the process in SQL Server would somehow reset the page. For my intentional endless loop, I had to recycle the app pool to get rid of it. This points more to either IIS or the ASP settings. The ASP timeouts are set to whatever the defaults were when the server was first loaded. I still can't figure out why one file is locking up all the websites, though. Again, that didn't happen on the old server.

  • Total newb having SSH tunnel and remote MySQL access problems

    - by kscott
    I don't often work with Linux or need to SSH tunnel into remote MySQL databases, so pardon my ignorance. I'm using Windows 7 and need to connect to a remote MySQL instance on a Linux server. For months I had been using the HeidiSQL client application successfully. Today two things happened: the DB moved to a new server, and I updated HeidiSQL. Now I cannot log in to the MySQL server; when attempting to, I get this message from Heidi:

        SQL Error (2003) in statement #0: Can't connect to MySQL server on 'localhost' (10061)

    If I use PuTTY, I can connect to the server and get MySQL access through the command line, including fetching data from the DB. I assume this means my credentials and address are correct, but I do not understand why putting those same details into HeidiSQL's SSH tunnel settings won't work. I also downloaded MySQL Workbench and attempted to set up a connection through that client, and got this message:

        Cannot Connect to Database Server
        Your connection attempt failed for user 'myusername' from your host to server at localhost:3306:
        Lost connection to MySQL server at 'reading initial communication packet', system error: 0
        Please:
        1. Check that mysql is running on server localhost
        2. Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed)
        3. Check the myusername has rights to connect to localhost from your address (mysql rights define what clients can connect to the server and from which machines)
        4. Make sure you are both providing a password if needed and using the correct password for localhost connecting from the host address you're connecting from

    From Googling around, I see that it could be related to the MySQL bind-address, but I am a third-party subcontractor with no access to the MySQL settings of this box, and the system admin is assuring me that I'm an idiot and need to figure it out on my end. This is completely possible, but I don't know what else to try.

    Edit 1 - the client settings I am using. In Heidi and MySQL Workbench I am using the following:
    - SSH host + port: theHostnameOfTheRemoteServer.com:22 {this is the same host I can PuTTY to}
    - SSH Username: mySSHusername {the same user name I use for my PuTTY connection}
    - SSH Password: mySSHpassword {the same password as for the PuTTY connection}
    - Local port: 3307 {this is on the SSH settings tab and was defaulted to 3307 by Heidi; changing it to 3306 gives me a different error: SQL Error (1045) in statement #0: Access denied for user 'mySQLusername'@'localhost' (using password: YES)}
    - MySQL host: theHostnameOfTheRemoteServer.com {consensus seems to be that I should use 'localhost' here}
    - MySQL User: mySQLusername {which I can connect with once in with PuTTY}
    - MySQL Password: mySQLpassword {which works once in with PuTTY}
    - Port: 3306
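
    For what it's worth, the tunnel can be rebuilt outside both GUI clients to see which leg fails - the SSH hop or the MySQL login. A minimal sketch, assuming the third-party sshtunnel and PyMySQL packages and reusing the placeholder names from above:

        from sshtunnel import SSHTunnelForwarder
        import pymysql

        # Placeholder values mirror the question; substitute real ones.
        with SSHTunnelForwarder(
            ("theHostnameOfTheRemoteServer.com", 22),
            ssh_username="mySSHusername",
            ssh_password="mySSHpassword",
            remote_bind_address=("127.0.0.1", 3306),  # MySQL as seen from the server
            local_bind_address=("127.0.0.1", 3307),
        ) as tunnel:
            conn = pymysql.connect(host="127.0.0.1", port=3307,
                                   user="mySQLusername", password="mySQLpassword")
            print(conn.get_server_info())  # reaching this line means both legs work
            conn.close()

    If this connects, the GUI settings are the likely problem - in particular, with a tunnel in play the "MySQL host" field needs to be the address as seen from the SSH server (usually localhost/127.0.0.1), not the public host name.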

  • Subversion: Can't move... Permission Denied

    - by yalestar
    Whilst trying to commit some files to SVN, we're suddenly all getting this error:

        Can't move '/usr/local/svn/articles/db/txn-protorevs/2002-8.rev' to
        '/usr/local/svn/articles/db/revs/2/2003': Permission denied

    I checked the permissions in the repository, and they look the same as in all our other repositories, yet this is the only repo that causes the error. Any ideas how I can fix this? SVN is running as root on Linux via svnserve, FWIW.

  • How to have Windows Server DNS use hosts file to resolve specific host names

    - by user41079
    Hello, everyone. I'm facing a small problem with the Windows Server 2003 DNS service. In my corporation, I'm running a Microsoft DNS server (172.16.0.12) to do name resolution for my company intranet (domain names end in dev.nls., resolving to IPs 172.16..), and it is also configured as a DNS forwarder to forward other domain names (e.g. *.google.com, *.sf.net) to real Internet DNS servers. This internal DNS server never serves users from the outside world. We are also running a mail server (serving incoming mail for a real Internet domain, @nlscan.com) inside the company firewall, which can be accessed in either of two ways:
    - by connecting to 172.16.0.10 from within the intranet, or
    - by connecting to mail.nlscan.com (resolved to 202.101.116.9) from the Internet.
    Note that 172.16.0.10 and 202.101.116.9 are not the same physical machine. The 202 one is a firewall machine that port-forwards ports 25 and 110 to the intranet address 172.16.0.10. Now my question: if users inside the corporate LAN resolve mail.nlscan.com, it resolves to 202.101.116.9. That's correct and workable, BUT NOT GOOD, because the mail traffic goes to the firewall machine and then bounces to 172.16.0.10. I would like our internal DNS server to intercept the name mail.nlscan.com and resolve it to 172.16.0.10, so I hoped I could write an entry in the "hosts" file on 172.16.0.12 to do this. But how can the Microsoft DNS server be made to recognize that "hosts" file? Maybe you suggest: why not have intranet users use 172.16.0.10 to access my mail server? I have to say it is inconvenient. Suppose an employee works on his laptop, daytime in the office and nighttime at home; when he is at home, he cannot use 172.16.0.10. Creating a zone for nlscan.com on our internal DNS server is not feasible, because the name server for the nlscan.com domain is at our ISP, and it is responsible for resolving other host names and subdomains under nlscan.com. Thank you in advance.
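
    A note that may save others some digging: the DNS Server service answers only from its zones, forwarders and cache - the hosts file on 172.16.0.12 affects that machine's own resolver, not the answers the service hands out. The commonly suggested workaround is a single-host ("pin-point") zone named mail.nlscan.com containing one A record for 172.16.0.10, which leaves the rest of nlscan.com delegated to the ISP. To confirm what each resolver actually returns from a client, a small sketch assuming the third-party dnspython package:

        import dns.resolver  # third-party package: dnspython

        for server in ("172.16.0.12", "8.8.8.8"):  # internal DNS vs. a public resolver
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [server]
            answer = resolver.resolve("mail.nlscan.com", "A")
            print(server, "->", [rr.address for rr in answer])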

  • WSS and CAG: _layouts pages break

    - by Mike
    Alright, I've searched everywhere and I cannot find the answer, due to the rarity of our setup: WSS 3.0 / IIS 6.0 / Windows Server 2003. We have a SharePoint site that is in good shape - almost. Its TCP and SSL ports are uncommon and need to be rerouted to work properly. This is where the Citrix Access Gateway (CAG) comes into play: it redirects any request for a URL (something.something.com) to the correct SSL port on the correct server. My AAM is configured with a default of something.something.com and nothing else, since the CAG provides the port. We use FBA and require SSL. This works perfectly for everything that is safe - anything an end user can see - but if I try to add a web part, it errors out, whereas if I add it internally, or bypass the CAG, the web part adds fine. The same goes for most of the _layouts pages, like _layouts/new.aspx. If I add a link list or document library on something.something.com, it errors out (Page cannot be displayed), but if I try it with an internal address it works fine. I found that when I'm doing anything administrative, the site navigates to the pages I need just fine, but when I actually ADD something, the URL changes from something.something.com to something.something.com:SSLport, and the site errors out. The URL with the SSL port shows in the Site URL when navigating to Site Settings. However, if I bypass the CAG using the internal address, the _layouts pages work like a charm and I can add anything. All the CAG does is reroute a DNS request to the provided server and port. I've tried re-extending the application - no luck, same thing. I've tried changing the AAM to hide the port, and the CAG rejects it. I've tried recreating the web app/site collection with the same rules on the CAG - the same thing occurs. Correct me if I'm wrong, and please provide me with some feedback and answers; any suggestions would be very much appreciated. Is it the CAG or the Alternate Access Mappings (AAM)?

  • WinXP - Having trouble sharing internet with 3G USB modem via ICS

    - by Carlos Nunez
    All! I've been banging my head against a wall with this issue for a few days now and am hoping someone can help out. I recently signed up for T-Mobile's webConnect 3G/4G service to replace the faltering (and slow) DSL connection in my apartment. The goal was to put the SIM in one of my old phones and use its built-in WLAN tethering feature to share Internet out to the rest of my computers. I quickly found out that webConnect-provisioned SIMs do not work with regular smartphones, so I was forced to either buy a 4G-compatible router or tether one of my old laptops to my wireless router and share out that way. I chose the latter, and it's sharpening my inner masochist by the day.

    Here's the setup: the GSM USB modem (via hub) is the ICS host's connection; a 10/100 Mbps Ethernet NIC is the ICS "guest", connected to the WAN port of my SMC WGBR14N wireless router in bridged mode (i.e. acting as a wireless access point). Ideally, this would make my laptop the DHCP server and Internet gateway, with the WAP giving everyone wireless coverage. I can browse the Internet on the host laptop fine. However, when clients connect, they get a DHCP-assigned IP from the laptop and are able to use the Internet for a few minutes before connectivity completely dies. After that happens, they can re-associate with the WAP and get IP addresses, but are unable to use the Internet or resolve IP addresses until the laptop and router are restarted. If they do get access, it's very, very slow.

    After running Wireshark on the host machine, it turns out that this is because every TCP connection keeps getting RST. DNS seems to work. I would normally suspect the firewall here, but when the firewall drops packets, it drops them completely; the fact that TCP connections are being ACKed by the destination rules that out. Of course, the event log says nothing about what's going on. I also tried disabling power management on the NIC, since that has caused problems in the past; that didn't help either. I finally disabled receive-side scaling as per a Microsoft KB article (one that applied to Windows Server 2003 SP2) to no avail. I'm thinking of trying a different NIC (which will be tough; I don't have a spare Ethernet NIC around for the laptop), but I'm getting the impression that this simply doesn't work. Can anyone please advise? I apologise for the length of this post; all contributions are much appreciated! -Carlos

    Read the article

  • Remotely installing an MSI on Citrix servers using WMI

    - by capn
    OK, I'm a C# programmer trying to streamline the deployment of a custom Windows Forms app I inherited and built an installer for with WiX (this app will need to be reinstalled regularly as I'm making changes to it). I'm not really used to admin-type things (or VBS, or WMI, or terminal servers, or Citrix; even WiX and MSI are not things I usually deal with), but so far I've put together some VBS and have an end goal in mind. The MSI does work, and I've installed it from the mapped O: drive on my dev machine and while RDP'd to a Citrix machine.

    End goal: deploy code written on my dev machine and compiled into an MSI (that I can improve upon within the confines of WiX and whatever the Windows Installer engine allows) to the cluster of Citrix machines my users have access to. What am I missing in my script to get the MSI to execute on the remote machines?

    Layout: Machine A is my dev machine, and has the VBS script and the MSI file (XP SP3). Machines C1 - C6 are the Citrix servers that need the application installed on them via the MSI (Server 2003 R2 SP2). There is also, optionally, a shared network resource that all the machines can access.

    Script:

        'Set WMI Constants
        Const wbemImpersonationLevelImpersonate = 3
        Const wbemAuthenticationLevelPktPrivacy = 6

        'Set whether this is installing to the debug Citrix Servers
        Const isDebug = true

        'Set MSI location
        'Network location yields error 1619 (This installation package could not be opened.)
        msiLocation = "\\255.255.255.255\odrive\Citrix Deployment\Setup.msi"
        'Directory on machine A yields error 3 (file not found)
        'msiLocation = "C:\Temp\Deploy\Setup.msi"
        'Mapped network drive (on both machines) yields error 3 (file not found)
        'msiLocation = "O:\Citrix Deployment\Setup.msi"

        'Set login information
        strDomain = "MyDomain"
        Wscript.StdOut.Write "user name:"
        strUser = Wscript.StdIn.ReadLine
        Set objPassword = CreateObject("ScriptPW.Password")
        Wscript.StdOut.Write "password:"
        strPassword = objPassword.GetPassword()

        'Names of Citrix Servers
        Dim citrixServerArray
        If isDebug Then
            citrixServerArray = array("C4")
        Else
            'citrixServerArray = array("C1","C2","C3","C5","C6")
        End If

        'Loop through each Citrix Server
        For Each citrixServer in citrixServerArray
            'Login to remote computer
            Set objLocator = CreateObject("WbemScripting.SWbemLocator")
            Set objWMIService = objLocator.ConnectServer(citrixServer, _
                "root\cimv2", _
                strUser, _
                strPassword, _
                "MS_409", _
                "ntlmdomain:" + strDomain)

            'Set Remote Impersonation level
            objWMIService.Security_.ImpersonationLevel = wbemImpersonationLevelImpersonate
            objWMIService.Security_.AuthenticationLevel = wbemAuthenticationLevelPktPrivacy

            'Reference to a process on the machine
            Dim objProcess : Set objProcess = objWMIService.Get("Win32_Process")

            'Change user to install for terminal services
            errReturn = objProcess.Create _
                ("cmd.exe /c change user /install", Null, Null, intProcessID)
            WScript.Echo errReturn

            'Install MSI here
            'Reference to a product on the machine
            Set objSoftware = objWMIService.Get("Win32_Product")

            'All users set in option parameter; I'm led to believe that the third parameter is actually ignored
            'http://www.webmasterkb.com/Uwe/Forum.aspx/vbscript/2433/Installing-programs-with-VbScript
            errReturn = objSoftware.Install(msiLocation, "ALLUSERS=2 REBOOT=ReallySuppress", True)
            Wscript.Echo errReturn

            'Change user back to execute
            errReturn = objProcess.Create _
                ("cmd.exe /c change user /execute", Null, Null, intProcessID)
            WScript.Echo errReturn
        Next

    I also tried using the following to install; it doesn't return an error code, but it doesn't install the MSI either, and it makes me wonder if the change user /install command is even really working.

        errReturn = objProcess.Create _
            ("cmd.exe /c msiexec /i ""O:\Citrix Deployment\Setup.msi"" /quiet")
        Wscript.Echo errReturn

  • X:\ is not accessible. Insufficient system resources exist to complete the requested service.

    - by Katherine
    I keep getting the error message above on multiple computers that I administer. I wasn't sure if I should be posting this on SuperUser or ServerFault, so my apologies if it should go there... Basically, I have at least 5 computers of varying ages (some fresh out of the box!) throwing the above error. X:\ is one of our network drives that is mapped for users. Most of the time, if you shut down the biggest application it will fix the problem, but it's becoming an increasing issue, and I can't keep running around fixing it manually. I have tried to do some research, but most of it just states the obvious without supplying a permanent fix. The machines are all running Win XP SP3, with at least 2GB of RAM.

    Sorry for the delay in getting back to people... a lot of good questions. To respond: it is a Windows 2003 server that houses the file share. We have about 175 users; however, I cannot state how many are actually accessing the information at a single moment. Considering that this is our largest file share, I would say probably at least 100+. The files we work with are large, but not that big considering that we do a lot of graphical and video work: ~50MB. That being said, this error occurs simply when trying to gain access to the server itself, not actual files. When I say close a program, I mean that it can be any program. It varies from machine to machine and from day to day; some days it is Firefox, some days Outlook, some days Excel. There doesn't seem to be a common bond behind which application could be causing the problem. Thank you for the articles and the recommendation on paging files; I will have to look into that. None of our computers are set to hibernate, so I am going to rule that out.

  • Losing internet connectivity on server after installing LogMeIn Hamachi (with server set as gateway node)

    - by Kim Jong-Un
    Our domain controller (SBS 2003) completely lost internet and network connectivity yesterday after I remotely installed LogMeIn Hamachi on it and set it to be a gateway node, in an attempt to create a VPN link between the server and a remote site. I had to go in to the office to resolve the problem as, unsurprisingly, my own remote access to the server was also lost. I was only able to restore network connectivity by deleting a virtual network adapter that Hamachi created when making the server the gateway node (called the "Hamachi bridge", I believe), then rebooting the server. This is a repeatable problem: every time I try to get this to work, it just takes the server offline. Why would this bridge affect regular TCP/IP connectivity on the NIC in this way?

    I have tried a "hub-and-spoke" configuration between the server and our PC at a remote site (server set as hub, remote site as spoke). This caused no such problems with general internet connectivity, and file transfer worked well between the two computers. However, there was a DNS issue with the VPN between the two sites, resulting in Active Directory not being able to communicate between them (I could not log on using domain user accounts at the remote site if they were not already cached on that machine). I only tried a "gateway" network because LogMeIn support told me:

        If you can get the Active Directory to work it would only be through a "Gateway" network type with the server acting as the Gateway Node. You would configure the gateway settings on the server in the Hamachi client on that machine to push whatever IP's/DNS settings you prefer and at that point AD would be able (all things being equal) communicate to the client node when it attaches. We do not have any ActiveDirectory configuration info as that's outside the scope of our support. I hope this helps.

    It would be fantastic if I could get Active Directory to work over a Hamachi VPN connection without worrying about the server going offline in this way. Does anyone have any ideas how I should proceed, or any theories as to what is going on when I try to use the "gateway" network type? I want to try to narrow down what is happening here.

  • Further Performance Tuning on Medium SharePoint Farm?

    - by elorg
    I figured I would post this here, since it may relate more to the server configuration than to the SharePoint configuration, or to a combination of both. I'm open to ideas to try, or even feedback on things that may have been configured incorrectly as far as performance is concerned. We have a medium MOSS 2007 install prepped and ready to receive the WSS 2003 data to upgrade. The environment was originally architected by a previous coworker, and I have since added a few configuration modifications to assist with performance before we finally performed the install. When testing the new site collections and the SharePoint install (no actual data yet), things seemed a bit slow. I had assumed that was because I was accessing it remotely; apparently the client is still experiencing this, and it is unacceptably slow. The farm:
    - 1x SQL Server running SQL Server 2008
    - 2x SharePoint WFEs - hosting queries (no index)
    - 1x SharePoint Index server - hosting the index (no queries)
    MOSS 2007 is installed and patched up through December '09 on the WFEs and the Index server. All 4 servers are VMs, should have more than sufficient disk space and RAM (I don't recall the exact figures at the moment), and are running Windows Server 2008 - everything is 64-bit. The WFEs have Windows NLB configured, with a DNS name and IP for the NLB cluster, and a single NIC on each server (virtual, since this is VMware). The Index server is configured as a WFE (outside of the NLB cluster) so that it can index itself and replicate the indexes to the WFEs that will serve the queries. Everything is configured and working properly - it just takes a minute or two to load a page on the local LAN. The client is still using their old portal (we haven't started the migration/upgrade just yet), so there's virtually no data or users. We need to either further tune the configuration or fix whatever may have been configured incorrectly and is causing this slowness. I've already reviewed and taken into account everything relevant that I could find before we even started the install. Does anyone have ideas or pointers? Perhaps there's something I've missed?

  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM), then performance can only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same; I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions - something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM", or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:
    - Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever - but it's enough for 100% of what I do, 100% of the time. Another way to think of it: if I have x MB of physical RAM and y MB of pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB of physical RAM and no pagefile?
    - Windows is "used to" having a paging file and it might not function as reliably (from Understanding the Impact of RAM on Overall System Performance). That's rather vague, and I find it hard to believe, given that MS has provided the option to disable the pagefile.
    - Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.

  • NNTP-via-SMTP software

    - by Thufir
    I see that mailman can operate as a gateway:

        Try mailman. Create a new list for each newsgroup that you read and then configure the lists as mail<-news gateways. Subscribe yourself to the lists. Viola. Instant NNTP-via-SMTP.
        http://lists.debian.org/debian-user/2003/08/msg00522.html

    What alternatives are there to mailman for this functionality? I see that others have asked this, or something similar.
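
    To make concrete what any such gateway has to do - fetch articles over NNTP and re-inject them over SMTP - here is a minimal one-shot sketch. It assumes a local SMTP relay, hypothetical host and group names, and Python's standard nntplib and smtplib modules (nntplib was removed in Python 3.12, so an older interpreter is assumed):

        import nntplib
        import smtplib

        GROUP = "comp.lang.python"      # hypothetical newsgroup
        NEWS_HOST = "news.example.org"  # hypothetical NNTP server
        SMTP_HOST = "localhost"         # assumes a local SMTP relay

        news = nntplib.NNTP(NEWS_HOST)
        _resp, _count, _first, last, _name = news.group(GROUP)

        # NNTP articles are already RFC 5322 messages, so the gateway's job
        # is mostly to fetch the raw article and hand it to an SMTP server.
        _resp, info = news.article(str(last))  # most recent article
        raw = b"\r\n".join(info.lines)

        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.sendmail("news-gateway@example.org", ["me@example.org"], raw)
        news.quit()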

  • Splunk is fantastically expensive: What are the alternatives? [closed]

    - by samsmith
    Possible Duplicate: Alternatives to Splunk? This has been discussed, but it has been several months, so it may be time to revisit it (see the earlier discussion regarding Splunk alternatives). For the record, Splunk rocks, but the pricing is simply beyond what we can consider: when I spoke with Splunk today, the cost for a system to index 5GB/day of data was over $30,000. That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter: the Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:
    - able to listen for syslog messages on multiple UDP ports
    - able to index the incoming data asynchronously
    - some kind of search engine
    - some kind of UI
    - an API to the search engine (to embed in our console)
    We currently need to index 3-5GB/day, but need to be able to scale to 10GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers.

    UPDATE: We spent two weeks researching commercial and open source options. Our conclusion: write our own (we are a software company... we know how to write things). We built a great system on MongoDB and .NET that gives us the functions we needed, in about one engineering week, and have now completed our implementation. We use two MongoDB servers (master and slave) and are able to log and index any amount of log data (5GB/day, 15GB/day, etc.), limited only by disk space.

    OBSERVATIONS: This space needs a solid solution that is $1000-3000 flat rate. The licensing models used by the commercial firms are based on a "milk the data center ops guys" model. That is their right (of course!), but it leaves a HUGE space open for someone to come in underneath them. My guess is that in another year or two there will be a good open source solution that will be really usable. Thank you all for your input (even if it was self-promotion).
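
    As a sense of scale for the ingest half of the home-grown approach: a UDP syslog listener that stores each message in a database fits on one page. A hedged sketch (in Python rather than .NET, purely for illustration), assuming the third-party pymongo package, a local mongod, and an arbitrary listening port 5140, since the standard syslog port 514 needs elevated privileges:

        import socketserver
        from datetime import datetime, timezone

        from pymongo import ASCENDING, MongoClient

        coll = MongoClient("mongodb://localhost:27017")["logs"]["syslog"]
        coll.create_index([("ts", ASCENDING)])  # cheap time-range searches

        class SyslogHandler(socketserver.BaseRequestHandler):
            def handle(self):
                data, _sock = self.request  # for UDP: (bytes, socket)
                coll.insert_one({
                    "ts": datetime.now(timezone.utc),
                    "src": self.client_address[0],
                    "raw": data.decode("utf-8", errors="replace"),
                })

        if __name__ == "__main__":
            with socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler) as srv:
                srv.serve_forever()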

  • Outlook refuses to connect to Exchange

    - by wfaulk
    Outlook 2007 under Windows XP connecting to Exchange 2003 SP2: when started, it flips back and forth between "Connecting to Exchange Server" and "Disconnected" three or four times, then gives up and stays disconnected. I tried deleting the OST file (which was nearly 2GB), turning cached mode on and off, recreating the account inside the Mail control panel, changing the account to use HTTP, and probably some other things. None of it seemed to make any difference, until, after fiddling with it for a while, I got this absurd error message dialog at startup, and it exits after I click OK:

        Cannot start Microsoft Office Outlook. Cannot open the Outlook window. The set of folders cannot be opened. Microsoft Exchange is not available. Either there are network problems or the Exchange server is down for maintenance.

    (I'm not sure I can even trust that message. It's so long, it just feels like a random offset into Outlook's stack of error messages.) Either way, the Exchange server is available to everyone else, and is available via OWA from that computer. I ran Process Explorer against Outlook and it showed 5 or so ESTABLISHED connections to our Exchange server, plus listening on two UDP ports, and two CLOSE_WAIT connections to localhost. When I managed to look at Outlook's IP connections while it was doing its Connecting/Disconnected dance, it had a huge number of connections open to the Exchange server - more than filled Process Explorer's dialog box; I'm guessing at least 20, probably more.

    The only other odd thing: our network admin at some point added a wildcard DNS record to the domain name that we use for email, and now Outlook will sometimes (always?) start by complaining about autodiscover.example.com's SSL certificate. There is a web server there, but it doesn't have any sort of email autodiscover anything on it. It doesn't make any difference whether I click "OK" or "Cancel" (or whatever the buttons are). I also added a bogus entry for the hostname to Windows' hosts file, pointing it at 127.0.0.2, and it stopped complaining about the certificate. (The CLOSE_WAIT sockets above were from before I made this change, and they went away after.) I don't think this is related, as the same problem should exist for everyone, but it might be. This is the second time this user has had this problem. The first time, I never found a solution other than reinstalling Outlook. Now that it's a pattern, I'd like to find a permanent solution rather than assume it's a random glitch.

  • SQL 2008 R2 Named Instance Client Connectivity Issues?

    - by Jerry Dodge
    We're upgrading our software from SQL 2000 to 2008 R2. Our customers will be installing an update which uninstalls 2000 and installs 2008 R2 under the same instance, so if no named instance existed, none will be set (the default instance is used). However, the problem starts with the customers who have a named SQL instance. Starting in 2008 R2 (I'm not sure about earlier versions), for some reason, a client connecting to the server by its instance name is unsuccessful. I'm testing from Management Studio - if I can't connect with it, nothing can. I browse network servers and find the specific server\instance in the list, but upon trying to connect to an instance name like MyServer\INST, I get:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

    I do in fact have the TCP/IP and Named Pipes protocols enabled; this was the first thing I did. When I connect to the server using a comma and a port number, like MyServer,49195, it works just fine. So it appears that client computers are simply unable to resolve the instance names. This has happened on all our installations of SQL 2008 R2 and from all client computers, including Win 7, XP, Vista, Server 2008, and Server 2003. We never experienced such issues on earlier versions of SQL, and the problem persists even if the firewalls and antiviruses are all disabled. Now, this is a large update which we will be distributing soon to all our customers, and we want to minimize the interaction they need with us to get it installed. We absolutely hate the idea of using a port number, because it will always be different, and we would have to modify each client to point to the specific server/port; some of our customers may have hundreds of client computers. How do I make client connections to a named SQL instance work again? After all, this is the whole purpose of named instances, and if a client can't connect to an instance by its name, then what is it even named for?

    EDIT: It was mentioned that I should make sure SQL Browser is running, so I checked, and it is running. The server is also able to connect to itself (locally); only external connections are refused.

    UPDATE: After more careful checking, I learned that the firewall wasn't completely disabled when testing; upon disabling it completely, this works. So it appears that the firewall is blocking external clients from reaching SQL Browser.
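
    Background that explains that finding: instance names are resolved by the SQL Server Browser service over UDP port 1434 (the MC-SQLR protocol), so a firewall that only allows the TCP data port produces exactly error 26. A quick client-side probe of that path, sketched with Python's standard socket module and the hypothetical server name from above (0x03 is the request byte that asks the Browser to list its instances):

        import socket

        SERVER = "MyServer"  # placeholder from the question; use a real host

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3)
        sock.sendto(b"\x03", (SERVER, 1434))  # CLNT_UCAST_EX: list instances
        try:
            data, _addr = sock.recvfrom(65535)
            print(data[3:].decode("ascii", errors="replace"))  # skip 3-byte header
        except socket.timeout:
            print("No reply: UDP 1434 blocked or SQL Browser not running")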

  • Windows 2008 running as a KVM guest: networking issue

    - by Evolver
    I have a strange networking problem with a Windows 2008 Server R2 guest running under a KVM/QEMU host. The host is CentOS 6.3 x86_64. Its network settings:

        # cat /etc/sysconfig/network-scripts/ifcfg-br0
        DEVICE=br0
        BOOTPROTO=static
        BROADCAST=xx.xx.xx.63
        IPADDR=xx.xx.xx.4
        NETMASK=255.255.255.192
        NETWORK=xx.xx.xx.0
        ONBOOT=yes
        TYPE=Bridge

        # cat /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        HWADDR=xx:xx:xx:xx:xx:xx
        ONBOOT=yes
        BRIDGE=br0
        IPV6INIT=yes
        IPV6_AUTOCONF=yes

        # cat /etc/sysconfig/network
        NETWORKING=yes
        NETWORKING_IPV6=no
        HOSTNAME=my.hostname
        GATEWAY=xx.xx.xx.1

        # cat /etc/sysctl
        net.ipv4.ip_forward = 1                         # tried to set it to 0 without any changes
        net.ipv4.conf.default.rp_filter = 1             # tried to set it to 0 without any changes
        net.ipv4.conf.default.accept_source_route = 0   # tried to set it to 1 without any changes
        kernel.sysrq = 0
        kernel.core_uses_pid = 1
        net.ipv4.tcp_syncookies = 1
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-arptables = 0
        kernel.msgmnb = 65536
        kernel.msgmax = 65536
        kernel.shmmax = 68719476736
        kernel.shmall = 4294967296

        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
        xx.xx.xx.0      0.0.0.0         255.255.255.192  U     0      0        0 br0
        169.254.0.0     0.0.0.0         255.255.0.0      U     1004   0        0 br0
        0.0.0.0         xx.xx.xx.1      0.0.0.0          UG    0      0        0 br0

    The node IP is xx.xx.xx.4 and the guest IP is xx.xx.xx.24; both host and guest are in the same /26 network. Several Linux guests (CentOS, Debian, Ubuntu, Arch) run fine on the node, and even Windows 2003 x86 also runs fine - but Windows 2008 does not, and I wonder what the difference is. From the Win2008 guest I can ping nothing: neither the gateway nor any other IP, even ones in the same subnet. From outside I also cannot ping the guest. Almost: if I ping it from another server in the same subnet, it barely responds, losing more than 90% of packets. The firewall on the guest is completely off. I have tried to set up the network manually as well as via DHCP, without success (BTW, DHCP sets the network settings correctly). I suspect some kind of routing problem, but I have spent a whole day on it and still cannot figure it out. I would appreciate any help.

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Win 2003 64-bit, 8x CPUs, 16GB memory, separate SQL 2005 DB server. We had a serious slowdown today with an otherwise fairly well performing ASP.NET site. For a period of a couple of hours, all page requests took a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the web server was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out: no blocks were occurring, and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:
    - Current Anonymous Users (for the site in question)
    - Get Requests/sec (ditto)
    - Requests/sec for the ASP.NET application running the site
    Get Requests/sec was averaging 100-150, and Requests/sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200, and then, as I was watching, it began to climb steeply, going up to about 500 within a few minutes. All this time, Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down. I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady, reading (with a Current Connections counter added):
    - Current Anonymous Users: average 30
    - Get Requests/sec: average 100
    - Requests/sec for ASP.NET: 5
    - Current Connections: average 300
    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there are short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image; then they flip back to their usual levels. So, my questions are:
    1. Thinking of the original performance issue: if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer than usual to be served?
    2. What other counters should I be looking at if this happens again?
    3. What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
    4. What could explain Current Anonymous Users going from 200 to 500 within a few minutes?
    Many thanks for any insight into this.

  • PDF Corruption When Sending with Microsoft Products

    - by Winner
    I have the same PDF corruption problem in two different offices that I provide tech support for.

    Office 1: This started in the middle of December. A PDF is received from outside the office and is viewable with no problems (I have no control over how it is created). If it is forwarded to anyone else, the PDF is corrupted. I have forwarded it to multiple people in the office and have tried viewing it with Reader 8, Reader 9, Sumatra and Foxit. I have tried forwarding it to Gmail, and its viewer says it is corrupted. If I save the PDF and create a new email, it is corrupted when sent using Outlook 2003, Outlook 2007, Microsoft Live Mail or Outlook Express; if I create the email using Thunderbird 3, Gmail or IClient (the web client for Ipswitch IMail), it is not corrupted. I have confirmed the same results when using our IMail SMTP server and also when using Gmail as the SMTP server. To be clear: if the message is created in Thunderbird, Gmail or IClient and received in any of the MS products, it is viewable. This office receives PDFs daily from multiple sources, and only a small subset has this problem. So far the problem PDFs are from two different companies they deal with, but not all of the PDFs from those companies are bad.

    Office 2: PDFs are created by a management system (I'm not sure what engine is used to create them). The exact same issues occur.

    In both offices, I noticed that the file size is wrong. For one small PDF, the proper file size is 12KB when it's viewable; when it shows up corrupted, it is only 8KB. We handle the email for both offices. Both are POP servers, not Exchange. IMail was updated after these issues started. I have tried different SMTP servers, and it still seems to happen only when using Microsoft products to send. Is anyone else having problems with PDFs getting corrupted? Any ideas on how to find a resolution?
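
    One way to narrow this down is to take the mail clients out of the loop entirely: hash the file, send it through the same SMTP server with a known-good MIME encoder, and hash what arrives. A sketch assuming Python's standard email and smtplib modules, with hypothetical addresses and relay:

        import hashlib
        import smtplib
        from email.message import EmailMessage

        with open("sample.pdf", "rb") as f:
            pdf = f.read()
        print("sha256 before send:", hashlib.sha256(pdf).hexdigest())

        msg = EmailMessage()
        msg["From"] = "sender@example.com"        # hypothetical addresses
        msg["To"] = "recipient@example.com"
        msg["Subject"] = "PDF round-trip test"
        msg.set_content("Test PDF attached.")
        msg.add_attachment(pdf, maintype="application", subtype="pdf",
                           filename="sample.pdf")  # base64-encoded by default

        with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical relay
            smtp.send_message(msg)
        # Compare the hash of the received attachment with the one printed above:
        # a mismatch implicates the sending path, a match implicates the client.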

  • Remotely from Chrome or IE a page loads in ~60 seconds; from Firefox, or IE on the local machine - instantly

    - by Janis Veinbergs
    The problem: if I access SharePoint from Windows 7 with IE8 or Chrome 5, I must wait about a minute to get a response. If I use another Windows 7 machine with IE8, it's just the same - wait a minute. If I use Firefox 3.6 on a Windows 7 machine, the page opens instantly; switch Firefox to the IE rendering engine, and you have to wait just as with IE. I tried IE8 on XP SP3 - the page opens instantly. I tried IE8 on Windows Server 2003 SP2 (the machine SharePoint is hosted on) - the page opens instantly.

    IIS 6 logs: I made a request at almost the same moment from all 3 browsers, and this is what shows up in the IIS logs (first 2 entries for each browser).

    Chrome (IIS saw the first Chrome request when I hit Enter in the browser, but I had to wait a long time for things to move on):

        2010-06-01 05:46:04 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 2 2148074254
        2010-06-01 05:47:07 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 1 0
        ... etc...

    Firefox (all instantly):

        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 2 2148074254
        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 1 0
        ... etc...

    IE (I hit Enter at 05:46:06, but these are the first entries in the IIS logs):

        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        ... etc...

    There is nothing to see in the event logs.

    The question: a similar question has been asked, but there is no response. I'm trying to access the page without SSL, and this happens even on GET requests. Where do I look? Where would the problem be? Browser? OS? I don't even know what to think.

    Just a note about Chrome's process isolation: I found it sad that while I was waiting that minute with Chrome, I could not use any other tab (I could switch, but I could not, for example, scroll or use any controls).
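
    Since every first response in those logs is a 401 challenge, it can help to time the initial exchange outside any browser to see whether the delay sits in the authentication handshake or in page rendering. A rough sketch, assuming Python on a client machine, with a placeholder host name in front of the /sapulces path from the logs:

        import time
        import urllib.error
        import urllib.request

        URL = "http://sharepoint.example.com/sapulces"  # placeholder host + real path

        start = time.monotonic()
        try:
            urllib.request.urlopen(URL, timeout=120)
        except urllib.error.HTTPError as err:
            # A 401 here is expected; what matters is how long it takes to arrive.
            print(err.code, "after", round(time.monotonic() - start, 2), "s")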

  • PowerShell: Cannot connect via SSL

    - by JSWork
    I am following "Secrets to PowerShell Remoting" to set up an SSL listener and seem to be missing a step. I ran:

        winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="redacted";CertificateThumbprint="redacted"}

    and ended up with these listeners:

        PS WSMan:\localhost> dir wsman:\localhost\listener\Listener_1184937132

        WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_1184937132

        Name                     Value       Type
        ----                     -----       ----
        Address                  *           System.String
        Transport                HTTP        System.String
        Port                     5985        System.String
        Hostname                             System.String
        Enabled                  true        System.String
        URLPrefix                wsman       System.String
        CertificateThumbprint                System.String
        ListeningOn_756355952    10.0.0.54   System.String
        ListeningOn_1201550598   127.0.0.1   System.String

        PS WSMan:\localhost> dir wsman:\localhost\listener\Listener_1187163138

        WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_1187163138

        Name                     Value       Type
        ----                     -----       ----
        Address                  *           System.String
        Transport                HTTP        System.String
        Port                     80          System.String
        Hostname                             System.String
        Enabled                  true        System.String
        URLPrefix                wsman       System.String
        CertificateThumbprint                System.String
        ListeningOn_756355952    10.0.0.54   System.String
        ListeningOn_1201550598   127.0.0.1   System.String

        PS WSMan:\localhost> dir wsman:\localhost\listener\Listener_220862350

        WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_220862350

        Name                     Value       Type
        ----                     -----       ----
        Address                  *           System.String
        Transport                HTTPS       System.String
        Port                     5986        System.String
        Hostname                 redacted    System.String
        Enabled                  true        System.String
        URLPrefix                wsman       System.String
        CertificateThumbprint    redacted    System.String
        ListeningOn_756355952    10.0.0.54   System.String
        ListeningOn_1201550598   127.0.0.1   System.String

    The trouble is, when I do this:

        PS C:\Users\redacted> Enter-PSSession -ComputerName redacted -Credential redacted\redacted -UseSSL

    I get this:

        Enter-PSSession : Connecting to remote server failed with the following error message : The client cannot
        connect to the destination specified in the request. Verify that the service on the destination is running
        and is accepting requests. Consult the logs and documentation for the WS-Management service running on the
        destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command
        on the destination to analyze and configure the WinRM service: "winrm quickconfig". For more information,
        see the about_Remote_Troubleshooting Help topic.
        At line:1 char:16
        + enter-pssession <<<< -Computername redacted -Credential redacted\redacted -UseSSL
            + CategoryInfo          : InvalidArgument: (redacted:String) [Enter-PSSession], PSRemotingTransportException
            + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

    This happens even when the firewall is off completely, and when the machine tries to connect to itself locally. On top of that, despite the listeners being listed under wsman:, when I run:

        PS WSMan:\localhost> Get-PSSessionConfiguration

    I get:

        Name                 PSVersion  StartupScript  Permission
        ----                 ---------  -------------  ----------
        Microsoft.PowerShell 2.0

    Any ideas what I'm missing or doing wrong?

    Edit: Windows 2003, PowerShell v2.0.

  • SMB returns the entire file instead of header info

    - by billdlawson
    A section of our code starts by checking for access to many data files (these are flat files, so each table is a file). When I do a packet capture, in our capture only the header info is sent by the server to the client. However, I have one customer using a SAN whose clients get the whole file instead of just the header info, and besides just being slower, this is causing file-access issues. They have already turned off oplocks at the server and at the workstations. This is not client/server: the data files and the application reside on the server, but the users run the application locally via a shortcut with a mapped drive or a UNC path. So when I simply select an option that prompts for a vehicle number - not trying to select a record, but simply verifying that the data files are accessible - that window opens in 1-2 seconds for me. When they do the same thing, it takes 6-15 seconds once several users are running the program. The maximum number of users is 15. The program has a lot of small modules - 800 .cob modules - so it is very chatty, but these are data files. We have Wireshark captures that show their client pulling the whole file while ours just gets the header (their capture vs. ours). We suspect the SAN. Has anyone ever heard of a SAN improperly interpreting runtime SMB requests? This is Acucobol-GT (now Micro Focus); the application is written in COBOL. This is not a new program, just a new problem, and this is one customer out of over a thousand who are otherwise running smoothly - we are totally stumped. All users are on XP; the server is Windows 2003 (with Virtual Server), and I don't yet know the SAN details. Also, we have many installations running virtual servers, but only a few on SANs - or we just don't know it. This is not a network throughput issue: the load is less than 5% on the server, and there are no timeouts or retransmits. PS: if it wasn't for Wireshark, I'd still be chasing my tail. An application trace file on their installation just looks like they run slower. If you want the Wireshark trace file, I can make it available. Thanks in advance, and please excuse my verbosity - I'm not sure what's relevant.

  • KVM and QEMU host: is there a limit on max CPUs? (Ubuntu 10.04)

    - by Valentin
    Today we encountered really strange behaviour on two identical KVM/QEMU hosts. The host systems each have 4 x 10 cores, which means that 40 physical cores are displayed as 80 within the operating system (Ubuntu Linux 10.04 64-bit). We started a Windows 2003 32-bit VM (1 CPU, 1GB RAM; we changed those values multiple times) on one of the nodes and noticed that it took 15 minutes until the boot process began. During those 15 minutes, a black screen is shown and nothing happens. libvirt and the host system show that the qemu-kvm process for the guest is almost idle. stracing this process only shows some FUTEX entries, but nothing special. After those 15 minutes, the Windows VM suddenly starts booting and the Windows logo appears; after a few seconds, the VM is ready to be used. The VM itself is very performant, so this is not a performance issue. We tried to pin the CPUs with the virsh and taskset tools, but this only made things worse. When we boot the Windows VM with a Linux live CD, there is also a black screen for several minutes, but not as long as 15. Booting another VM (Ubuntu 10.04) on this host also produces the black-screen problem, though there the black screen is only shown for 2-3 minutes (instead of 15).

    Summarising: each guest on each of those identical nodes idles for a few minutes after being started, and then the boot process suddenly begins. We have observed that the idling happens right after the guest's BIOS is initialized. One of our employees had the idea to limit the number of CPUs with maxcpus=40 (because 40 physical cores exist) as a kernel parameter in GRUB, and suddenly the black-screen idling disappeared. Searching the KVM and QEMU mailing lists, the internet, forums, serverfault and various other sites for known bugs etc. showed no useful results. Even asking in the dev IRC channels brought no new ideas; the people there recommended CPU pinning, but as stated before, it didn't help.

    My question is now: is there a limit on the number of CPUs for a QEMU or KVM host system? Browsing the source code of those two tools showed that KVM would send a warning if your host has more than 255 CPUs, but we are not even scratching that limit.

    Some details about the host system:
    - kernel 3.0.0-20-server
    - kvm 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4
    - kvm-pxe 5.4.4-7ubuntu2
    - qemu-kvm 0.14.0+noroms-0ubuntu4
    - qemu-common 0.14.0+noroms-0ubuntu4
    - libvirt 0.8.8-1ubuntu6
    - 4 x Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz, 10 cores each
