Search Results


  • Standard Network Tiers in a Distributed N-Tier System

    Distributed N-Tier client/server architecture allows segments of an application to be broken up and distributed across multiple locations on a network. Listed below are the standard tiers in a Distributed N-Tier System.

    End-User Client Tier
    The End-User Client is responsible for sending and receiving requests from web servers and other application servers and translating the responses so that the end user can interpret the data effectively. The primary roles for this tier are to communicate with servers and translate server responses back for the end user to interpret.
    Business-specific functions:
    - Validate Data
    - Display Data
    - Send Data to Web Server

    Web Server Tier
    The web server tier processes new requests for information coming in on the HTTP and HTTPS ports. It primarily handles the generation of user interfaces and calls the application server when it needs to access data and business logic.
    Business-specific functions:
    - Send Data to Application Server
    - Format Data for Display
    - Validate Data

    Application Server Tier
    The application server stores and executes predefined business logic that is applied to various pieces of data as the business determines. The processed data is then returned to the web server. Additionally, this server directly calls the database to obtain and store any data used by the system.
    Business-specific functions:
    - Validate Data
    - Process Data
    - Send Data to Database Server

    Database Server Tier
    The database server is responsible for storing and returning all data needed by the calling applications. The primary role for this server is storage. Data is stored as needed and can be recalled at any point later in time.
    Business-specific functions:
    - Insert Data
    - Delete Data
    - Return Data to Application Server

    Read the article

  • How to disable windows server 2008 timestamp response

    - by Cal
    Posted this question on Stack Overflow but was instructed to post it here: I was using Rapid7's Nexpose to scan one of our web servers (Windows Server 2008) and got a vulnerability for timestamp response. According to Rapid7, timestamp response should be disabled: http://www.rapid7.com/db/vulnerabilities/generic-tcp-timestamp

    So far I have tried several things:
    - Edited the registry, adding a "Tcp1323Opts" value to HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and setting it to 0 (http://technet.microsoft.com/en-us/library/cc938205.aspx).
    - Ran this command: netsh int tcp set global timestamps=disabled
    - Tried the PowerShell command Set-NetTCPSetting -SettingName InternetCustom -Timestamps Disabled (got the error: "Set-netTCPsetting : The term 'Set-netTCPsetting' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.")

    None of the above attempts was successful; after a re-scan we still got the same alert. Rapid7 suggested using a firewall that's capable of blocking it, but we want to know if there is a setting in Windows to achieve it. Is it through a specific port? If yes, what is the port number? If not, could you suggest a 3rd-party firewall that is capable of blocking it? Thank you very much.
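
    A minimal consolidation of the registry approach above, as a hedged sketch: Tcp1323Opts = 0 disables both RFC 1323 window scaling and timestamps, and it only takes effect after a reboot, which is worth ruling out before concluding the setting doesn't work (the Set-NetTCPSetting cmdlet is not available on Server 2008). Note that disabling window scaling can hurt throughput on fast links.

        # Hedged sketch for Windows Server 2008: set Tcp1323Opts = 0 and verify.
        # A reboot is required before a re-scan is meaningful.
        $key = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters'
        New-ItemProperty -Path $key -Name 'Tcp1323Opts' -PropertyType DWord -Value 0 -Force | Out-Null
        Get-ItemProperty -Path $key -Name 'Tcp1323Opts'   # confirm the value stuck

        netsh int tcp show global                          # check that the RFC 1323 timestamps line reads disabled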

    Read the article

  • Setting up Group Managed Service Account on Windows Server 2012 R2

    - by Moo MinTroll
    I have a Windows 2012 R2 domain controller called cox.win.testlab. I have set up a group of hosts where I would like to use a gMSA (Group Managed Service Account). This group is called SQLManagedHosts. I created the account by following these steps in PowerShell on the domain controller:

        PS C:\Windows\system32> Add-KdsRootKey -EffectiveTime ((get-date).addhours(-10))

        Guid
        ----
        9b68b1e7-db76-c4e4-4978-63c2965e5596

        PS C:\Windows\system32> New-ADServiceAccount mSQL -DNSHostName cox.win.testlab -PrincipalsAllowedToRetrieveManagedPassword SQLManagedHosts

        PS C:\Windows\system32> Get-ADServiceAccount msql

        DistinguishedName : CN=mSQL,CN=Managed Service Accounts,DC=win,DC=testlab
        Enabled           : True
        Name              : mSQL
        ObjectClass       : msDS-GroupManagedServiceAccount
        ObjectGUID        : cf9df74a-38e0-4d7a-856e-9af882b08800
        SamAccountName    : mSQL$
        SID               : S-1-5-21-3443997112-87545443-1733229669-1602
        UserPrincipalName :

    On one of the hosts listed in SQLManagedHosts, I ran:

        PS C:\Windows\system32> Install-ADServiceAccount msql

        Install-ADServiceAccount : Cannot install service account. Error Message: 'An unspecified error has occurred'.
        At line:1 char:1
        + Install-ADServiceAccount msql
        + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            + CategoryInfo          : WriteError: (mSQL:String) [Install-ADServiceAccount], ADException
            + FullyQualifiedErrorId : InstallADServiceAccount:PerformOperation:InstallServiceAcccountFailure,Microsoft.ActiveDirectory.Management.Commands.InstallADServiceAccount

    Any ideas why it might be failing? All servers involved are Windows Server 2012 R2.
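
    For what it's worth, the most common cause of this particular failure is that the host's computer account either isn't a member of SQLManagedHosts or was added after the host last booted, so its Kerberos ticket doesn't yet reflect the membership. A hedged checklist (group and account names taken from the question):

        # On the DC (or any machine with the AD module): confirm the host is in the group.
        Get-ADGroupMember SQLManagedHosts | Select-Object Name, objectClass

        # On the failing host: if it was added to the group recently, reboot it or purge
        # the machine account's Kerberos tickets so the new membership is picked up.
        klist -li 0x3e7 purge

        # Then retry; Test-ADServiceAccount should return True once the host can
        # retrieve the managed password.
        Install-ADServiceAccount mSQL
        Test-ADServiceAccount mSQL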

    Read the article

  • Server 2008 RAID5 resynching

    - by benpage
    I built a W2K8 R2 server last weekend and built a RAID5 array using Disk Management, 5x 1TB drives. For the next 60 hours the status said 'Resynching (X%)' as it built the array. During this time I did read and write to the drive (slowly), and once the rebuild was complete speeds were quite fast. I was copying over some data from a USB hard drive overnight, and the machine crashed (looking at the error, I believe it was the driver for the USB drive), so the RAID5 went back to resynching, since the machine was not shut down properly. The issue is, it's been about 48 hours since that started and I'm happy to say it will take roughly 60 hours again to resynch; however, all this time in Disk Management the status has only said 'Resynching', not 'Resynching (X%)'. The hard drive light is going nuts, and reads/writes are slow again, so I'm assuming it is actually resynching. The question is: is that a correct assumption? And why is Disk Management not telling me what percentage it's done? Is this normal behaviour for a not-properly-shut-down RAID5 array?

    Read the article

  • BSOD Dump - EXCEPTION_DOUBLE_FAULT - ON Windows 2008 Server 64bit

    - by Mark K
    Hello, my Windows 2008 Server (Datacenter edition, 64-bit) has recently produced a series of BSODs in different applications. The error message is in general EXCEPTION_DOUBLE_FAULT. Can anyone please help with the analysis of the dump file below? Best regards, Mark

        2: kd !analyze -v
        * Bugcheck Analysis *

        UNEXPECTED_KERNEL_MODE_TRAP (7f)
        This means a trap occurred in kernel mode, and it's a trap of a kind that the kernel isn't allowed to have/catch (bound trap) or that is always instant death (double fault). The first number in the bugcheck params is the number of the trap (8 = double fault, etc). Consult an Intel x86 family manual to learn more about what these traps are. Here is a portion of those codes:
        If kv shows a taskGate, use .tss on the part before the colon, then kv.
        Else if kv shows a trapframe, use .trap on that value.
        Else .trap on the appropriate frame will show where the trap was taken (on x86, this will be the ebp that goes with the procedure KiTrap).
        Endif
        kb will then show the corrected stack.

        Arguments:
        Arg1: 0000000000000008, EXCEPTION_DOUBLE_FAULT
        Arg2: 0000000080050033
        Arg3: 00000000000006f8
        Arg4: fffff800018b1678

        Debugging Details:
        BUGCHECK_STR: 0x7f_8
        CUSTOMER_CRASH_COUNT: 1
        DEFAULT_BUCKET_ID: DRIVER_FAULT_SERVER_MINIDUMP
        PROCESS_NAME: CustomerService.
        CURRENT_IRQL: 1

        EXCEPTION_RECORD: fffffa6004e45568 -- (.exr 0xfffffa6004e45568)
        ExceptionAddress: fffff800018a0150 (nt!RtlVirtualUnwind+0x0000000000000250)
        ExceptionCode: 10000004
        ExceptionFlags: 00000000
        NumberParameters: 2
        Parameter[0]: 0000000000000000
        Parameter[1]: 00000000000000d8

        TRAP_FRAME: fffffa6004e45610 -- (.trap 0xfffffa6004e45610)
        NOTE: The trap frame does not contain all registers. Some register values may be zeroed or incorrect.
        rax=0000000000000050 rbx=0000000000000000 rcx=0000000000000004
        rdx=00000000000000d8 rsi=0000000000000000 rdi=0000000000000000
        rip=fffff800018a0150 rsp=fffffa6004e457a0 rbp=fffffa6004e459e0
        r8=0000000000000006  r9=fffff8000181e000  r10=ffffffffffffff88
        r11=fffff80001a1c000 r12=0000000000000000 r13=0000000000000000
        r14=0000000000000000 r15=0000000000000000
        iopl=0 nv up ei pl zr na po nc
        nt!RtlVirtualUnwind+0x250:
        fffff800018a0150 488b02 mov rax,qword ptr [rdx] ds:00000000000000d8=????????????????

        Resetting default scope

        LAST_CONTROL_TRANSFER: from fffff800018781ee to fffff80001878450

        STACK_TEXT:
        fffffa6001768a68 fffff800018781ee : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt!KeBugCheckEx
        fffffa6001768a70 fffff80001876a38 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiBugCheckDispatch+0x6e
        fffffa6001768bb0 fffff800018b1678 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDoubleFaultAbort+0xb8
        fffffa6004e44e30 fffff800018782a9 : fffffa6004e45568 0000000000000001 fffffa6004e45610 000000000000023b : nt!KiDispatchException+0x34
        fffffa6004e45430 fffff800018770a5 : 0000000000000000 0000000000000000 0000000000000000 0000000000000001 : nt!KiExceptionDispatch+0xa9
        fffffa6004e45610 fffff800018a0150 : fffffa6004e46638 fffffa6004e46010 fffff80001965190 fffff8000181e000 : nt!KiPageFault+0x1e5
        fffffa6004e457a0 fffff800018a3f78 : fffffa6000000001 0000000000000000 0000000000000000 ffffffffffffff88 : nt!RtlVirtualUnwind+0x250
        fffffa6004e45810 fffff800018b1706 : fffffa6004e46638 fffffa6004e46010 fffffa6000000000 0000000000000000 : nt!RtlDispatchException+0x118
        fffffa6004e45f00 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDispatchException+0xc2

        STACK_COMMAND: kb
        FOLLOWUP_IP: nt!KiDoubleFaultAbort+b8
        fffff800`01876a38 90 nop
        SYMBOL_STACK_INDEX: 2
        SYMBOL_NAME: nt!KiDoubleFaultAbort+b8
        FOLLOWUP_NAME: MachineOwner
        MODULE_NAME: nt
        IMAGE_NAME: ntkrnlmp.exe
        DEBUG_FLR_IMAGE_TIMESTAMP: 4a7801eb
        FAILURE_BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b8
        BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b8
        Followup: MachineOwner

    Read the article

  • Error 0x80300001 Installing Windows Server 2008 R2 64bit on FastTrak TX4660 RAID volume

    - by Konstantin Boyandin
    I am trying to install Windows Server 2008 R2 Enterprise 64-bit on the following hardware:
    - motherboard: Intel DBS1200BTL
    - Promise FastTrak TX4660 RAID controller
    - 4 disks set up as two RAID1 arrays (handled by the FastTrak)

    I am trying to install Windows so it boots from a RAID1 volume created with the FastTrak controller. The installation goes as described in the manual: I insert the disk with the driver, select 'Browse' and specify the correct driver, and it finds all the RAID arrays, but it reports error 0x80300001 and says Windows can't be installed on the listed RAID volumes, since they may not be bootable (even though the target RAID volume is first in the boot options list). If I proceed with the installation, Windows copies and unpacks itself and performs the other standard actions after that. After the computer is restarted, it won't boot (Windows Boot Manager appears in the boot devices list; however, neither it nor the RAID volume itself will boot).

    Is this a known problem? I can attach the boot disks to the motherboard and use its RAID capabilities instead, but I'd prefer the FastTrak ones. The driver version is 1.3.0.4. Thanks.

    Read the article

  • .net Framework won't install on Server 2003 SP2 x64

    - by Yvan JANSSENS
    Hi,
    When I install the .net Framework 3.5 SP1 on my rental VPS, I get the message that setup has failed. It's a Server 2003 VPS w/ SP2 installed (64-bit). The .net Framework v 2.0 installed correctly. How do I fix this? This is the installation log:

        [03/10/10,07:44:46] Microsoft .NET Framework 2.0a x64: [2] Failed to fetch setup file in CBaseComponent::PreInstall()
        [03/10/10,07:44:47] setup.exe: [2] ISetupComponent::Pre/Post/Install() failed in ISetupManager::InternalInstallManager() with HRESULT -2147467260.
        [03/10/10,07:44:48] setup.exe: [2] CSetupManager::RunInstallPhase() - Call to Pre/Install/Post for InstallComponents failed
        [03/10/10,07:44:49] setup.exe: [2] CSetupManager::RunInstallPhaseAndCheckResults() - RunInstallPhase() returned a NULL piActionResults
        [03/10/10,07:44:49] setup.exe: [2] CSetupManager::RunInstallFromList() - RunInstallPhaseAndCheckResults failed [2]
        [03/10/10,07:44:51] setup.exe: [2] ISetupManager::RunInstallLists(IP_PREINSTALL failed in ISetupManager::RunInstallFromThread()
        [03/10/10,07:44:52] setup.exe: [2] ISetupManager::RunInstallFromThread() failed in ISetupManager::RunInstall()
        [03/10/10,07:44:53] setup.exe: [2] CSetupManager::Run() - Call to RunInstall() failed
        [03/10/10,07:44:59] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0a x64 is not installed.
        [03/10/10,07:45:00] WapUI: [2] DepCheck indicates XPSEPSC x64 Installer was not attempted to be installed.
        [03/10/10,07:45:02] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.0 SP2 x64 was not attempted to be installed.
        [03/10/10,07:45:02] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.5 (x64) 'package' was not attempted to be installed.
        [03/11/10,14:19:23] Microsoft .NET Framework 3.0 SP2 x64: [2] Error: Installation failed for component Microsoft .NET Framework 3.0 SP2 x64. MSI returned error code 1604
        [03/11/10,14:26:14] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.0 SP2 x64 is not installed.

    Thanks!!
    Yvan

    Read the article

  • SQL Server 2008 Logshipping not Restoring

    - by Nai
    I am getting the following errors during the restore part of the log shipping process on my secondary server:

        2010-04-01 10:00:01.85 Error: The file 'F:\UK_20100327090001.trn' is too recent to apply to the secondary database 'UK_Backup'.(Microsoft.SqlServer.Management.LogShipping)
        2010-04-01 10:00:01.85 Error: The log in this backup set begins at LSN 55408000007387500001, which is too recent to apply to the database. An earlier log backup that includes LSN 55147000001788900001 can be restored. RESTORE LOG is terminating abnormally.(.Net SqlClient Data Provider)
        2010-04-01 10:00:01.87 Searching for an older log backup file. Secondary Database: 'UK_Backup'
        2010-04-01 10:00:01.90 Skipped log backup file. Secondary DB: 'UK_Backup', File: 'F:\UK_20100324090000.trn'
        2010-04-01 10:00:01.93 Error: Could not find a log backup file that could be applied to secondary database 'UK_Backup'.(Microsoft.SqlServer.Management.LogShipping)
        2010-04-01 10:00:01.93 Deleting old log backup files. Primary Database: 'UK'
        2010-04-01 10:00:01.96 The restore operation completed with errors. Secondary ID: 'c066bb63-930c-4b73-861c-f59f0a38c12c'

    It was happily humming along until I checked it this morning. Some additional details: in the log shipping folder there is one file, UK_20100324090001.trn, dated 2009-3-24. The next most recent .trn file is UK_20100374090001.trn, which is the file that was applied during the restore. Why is there an older trn file seemingly on its own? How can I fix this problem? It'll be a real pain to restart the entire log shipping process. x_x
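
    To see where the chain broke, the backup history on the primary can be compared against the LSN the secondary is asking for (55147000001788900001). A hedged sketch, assuming the SQL Server PowerShell snap-in is available (the same query runs fine in Management Studio); the instance name is a placeholder:

        # List recent log backups of 'UK' on the primary so the file whose
        # first_lsn <= 55147000001788900001 <= last_lsn can be located and restored
        # to the secondary WITH NORECOVERY, after which log shipping can resume.
        $query = @"
        SELECT TOP (50) bs.backup_start_date, bmf.physical_device_name, bs.first_lsn, bs.last_lsn
        FROM msdb.dbo.backupset AS bs
        JOIN msdb.dbo.backupmediafamily AS bmf ON bmf.media_set_id = bs.media_set_id
        WHERE bs.database_name = 'UK' AND bs.type = 'L'
        ORDER BY bs.backup_start_date DESC;
        "@
        Invoke-Sqlcmd -ServerInstance 'PRIMARYSERVER' -Query $query | Format-Table -AutoSize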

    Read the article

  • Tomcat 6 Windows Server 64 Redirect Connector Fails

    - by Rafe
    So is there some problem with running the Tomcat connectors under a 64-bit Windows OS? Here's my configuration:
    - Windows Server 2003 64-bit, Intel Xeon
    - Tomcat 6.0.26
    - JVM 1.6.0 (64-bit)
    - ISAPI Redirect Connector 1.2.30.0 (64-bit)

    Calling the IP address of the site with :8080 brings up the Tomcat page, so I know that's running, and the examples all work, so it's obviously not having a problem with the JVM. Calling the site IP on port 80, however, gives me error 324 - looking at the application log on Windows shows "Could not load all ISAPI filters for site/service. Therefore startup aborted". The ISAPI filter page under the web site properties shows the status of this filter to be down, with a red arrow. The ISAPI filter name is jakarta and there is a corresponding virtual directory set up in the root of the site pointing to the same directory as the filter. The jakarta web service extension is also pointing to the required dll (c:\program files\apache software foundation\jakarta isapi redirector\bin\isapi_redirect.dll). Incidentally, this same problem occurs when trying to use Tomcat 5.5. I've also tried swapping out various redirect versions. It's really odd because I got it to work once with a version of the redirector that came with Plesk, but I've since uninstalled everything to do with Plesk and even trying to use the Plesk-compiled dll doesn't work now. I am pulling my hair out on this, any ideas?
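
    One frequent cause of "Could not load all ISAPI filters" is the redirector loading but failing to find its configuration. As a hedged sketch, the connector reads an isapi_redirect.properties file sitting next to the DLL (or the equivalent registry values); the directory below reuses the path from the question, while the worker and log paths are assumptions to adapt:

        # Write a minimal isapi_redirect.properties next to isapi_redirect.dll.
        # Property names are the standard ones for the Tomcat ISAPI redirector;
        # the conf/log paths are illustrative only.
        $dir = 'C:\Program Files\Apache Software Foundation\Jakarta Isapi Redirector'
        @"
        extension_uri=/jakarta/isapi_redirect.dll
        log_file=$dir\log\isapi_redirect.log
        log_level=debug
        worker_file=$dir\conf\workers.properties
        worker_mount_file=$dir\conf\uriworkermap.properties
        "@ | Set-Content "$dir\bin\isapi_redirect.properties"

        # After restarting IIS, the log file above usually states exactly why the
        # filter failed to initialize (missing workers.properties, bad AJP port, etc).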

    Read the article

  • How to enable telnet with port 3306 during Master to master replication on MySQL Server

    - by Mainio
    I am trying to do master-to-master replication in Windows Server 2008. I am able to replicate all the databases from Master 1 to Master 2 successfully, but I am unable to replicate the changes made on Master 2 back to Master 1. Later on I found that I can telnet to Master 1 from Master 2 on port 3306, but I am not able to telnet from Master 1 to Master 2. When I check netstat on both masters I get the following result (I can't publish the public IPs, so I've written Master 1 and Master 2 in place of their respective addresses).

    Master 1

        C:\Users\XXXXX>netstat
        Active Connections
        Proto  Local Address      Foreign Address     State
        TCP    Master 1:3306      Master 2:61566      ESTABLISHED
        TCP    Master 1:3389      My remote:56053     ESTABLISHED
        TCP    127.0.0.1:3306     Master 1:60675      ESTABLISHED
        TCP    127.0.0.1:3306     Master 1:60712      ESTABLISHED
        TCP    127.0.0.1:60675    Master 1:3306       ESTABLISHED
        TCP    127.0.0.1:60712    Master 1:3306       ESTABLISHED

    Master 2

        C:\Users\XXXX>netstat
        Active Connections
        Proto  Local Address      Foreign Address     State
        TCP    Master 2:3389      My remote:56124     ESTABLISHED
        TCP    Master 2:61566     Master 1:3306       ESTABLISHED
        TCP    Master 2:61574     bil-sc-cm02:http    ESTABLISHED
        TCP    127.0.0.1:3306     Master 2:61562      ESTABLISHED
        TCP    127.0.0.1:3306     Master 2:61563      ESTABLISHED
        TCP    127.0.0.1:61562    Master 2:3306       ESTABLISHED
        TCP    127.0.0.1:61563    Master 2:3306       ESTABLISHED
        TCP    127.0.0.1:61573    Master 2:3306       TIME_WAIT

    All of this suggests that on Master 2, port 3306 is not reachable from outside. Now I need a solution: how can I fix it? Any suggestion would mean a lot to me. Thank you. Regards, Udhyan
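
    Based on that netstat output, Master 2 is only ever seen talking to itself on 3306, which points at either Windows Firewall blocking inbound 3306 or mysqld being bound to loopback only. A hedged sketch to check both, run on Master 2 (the firewall rule name is arbitrary):

        # Allow inbound MySQL traffic through Windows Firewall on Server 2008.
        netsh advfirewall firewall add rule name="MySQL 3306 inbound" dir=in action=allow protocol=TCP localport=3306

        # Check what mysqld is listening on: 0.0.0.0:3306 means all interfaces,
        # 127.0.0.1:3306 means loopback only (then remove or fix bind-address /
        # skip-networking in my.ini and restart the MySQL service).
        netstat -ano | findstr ":3306"

        # Re-test from Master 1 once the rule is in place:
        #   telnet <Master 2 address> 3306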

    Read the article

  • how to split a very large database on sql server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data from 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, and others have just a few million. What I want to do is somehow split the database, so that I don't have a single database file that is 90 GB. What I want is to be able to somehow magically split all the tables so that I can back up the 2009 data once and not have to keep including it in the backup every time I back up the entire database; however, I would like the 2009 data to be included whenever I do a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure whether it would solve all my problems. I wasn't able to find anything that would tell me whether or not it would migrate pre-existing data, or whether it only works for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken
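
    Partitioning by date, with the 2009 rows on their own filegroup, does roughly what is described: rebuilding the clustered index on the partition scheme moves the pre-existing rows (it is not limited to newly inserted data), queries still see everything, and once the 2009 filegroup is marked read-only it can be backed up once and skipped thereafter. A hedged sketch, with every database, filegroup, table, and path name made up for illustration:

        $ddl = @"
        ALTER DATABASE StockData ADD FILEGROUP FG2009;
        ALTER DATABASE StockData ADD FILEGROUP FG2010;
        ALTER DATABASE StockData ADD FILE (NAME = StockData2009, FILENAME = 'E:\Data\StockData2009.ndf') TO FILEGROUP FG2009;
        ALTER DATABASE StockData ADD FILE (NAME = StockData2010, FILENAME = 'E:\Data\StockData2010.ndf') TO FILEGROUP FG2010;

        CREATE PARTITION FUNCTION pfByYear (datetime) AS RANGE RIGHT FOR VALUES ('2010-01-01');
        CREATE PARTITION SCHEME psByYear AS PARTITION pfByYear TO (FG2009, FG2010);

        -- Rebuilding the clustered index on the scheme moves the existing rows.
        CREATE CLUSTERED INDEX IX_AAPL_TradeDate ON dbo.AAPL (TradeDate) ON psByYear (TradeDate);

        -- Once 2009 is static: mark its filegroup read-only and back it up one last time.
        ALTER DATABASE StockData MODIFY FILEGROUP FG2009 READ_ONLY;
        BACKUP DATABASE StockData FILEGROUP = 'FG2009' TO DISK = 'E:\Backup\StockData_FG2009.bak';
        "@
        Invoke-Sqlcmd -ServerInstance 'SQLSERVER' -Query $ddl -QueryTimeout 0

    After that, routine backups can target only the read-write filegroups; a full restore then combines the one-off 2009 filegroup backup with the current read-write backups.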

    Read the article

  • Update BIOS on Sun Fire X4150 server

    - by Massimo
    I have some Sun Fire X4150 servers with a very old BIOS release (1ADQW015), which seems to have some compatibility problems with VMware ESX Server 3.5 and Windows 2008 R2 virtual machines, so I want to update the BIOS on them. The problem: according to this page, if your servers run ELOM (mine do), you first need to update to the latest ELOM release, then to the interim transition release, and only then can you update to the latest one. OK, I'm willing to do that... but it looks like Sun (now Oracle) will happily let you download the latest firmware DVD (3.3.0), but it will not let you download the transition release (2.0) if you don't have a support contract. Well, I actually don't care at all about the servers' management controllers (we don't even use them), so upgrading from ELOM to ILOM is totally irrelevant to me; but I need to update the servers' BIOS. So my question is: can I update the servers' BIOS to the latest version without doing the full ELOM-to-ILOM migration, or will this not work (or even make the servers unusable)? Do BIOS versions and SP ones need to be matched, or can one be updated without bothering with the other? Bonus question: if this whole ELOM-to-ILOM thing actually is needed in order to update the BIOS, can that 2.0 CD-ROM be obtained without having a support contract with Sun/Oracle (which we are definitely not going to sign, given that this is quite old hardware)?

    Update: I tried upgrading only the BIOS on one of the servers, and it didn't boot anymore. So it really looks like a full firmware upgrade is needed, and the management controller and BIOS versions should be kept in sync. So... where can I find that *&!£%$% 2.0 CD-ROM? Or at least the transition firmware that can be found on it?

    Read the article

  • Windows Server 2008 network speed slow, Xen 3.4.3 HVM ISO

    - by Elliot.Bradshaw
    I've set up a VM running Windows Server 2008 on a host node running Xen 3.4.3-5 and the following kernel: 2.6.18-308.1.1.el5xen #1 SMP Wed Mar 7 05:38:01 EST 2012 i686 i686 i386 GNU/Linux. The network speed on the VM is very slow - using the online speed tests I can only get it up to 8-9 Mbps. The line is 100 Mbps burstable and the host node has no problem achieving those speeds. If I set up a VM running CentOS, it too has no problem achieving those speeds. I've done some pretty exhaustive troubleshooting, but nothing has helped:
    - New VM installations of Win2k8 have the same network problem.
    - Upgrading to the most recent kernel-xen did not help (2.6.18-308.1.1.el5xen).
    - Upgrading from Xen 3.4.0 to Xen 3.4.3-5 did not help.
    - Disabling the Windows firewall, etc. did not help.
    - Changing the network card device config from auto negotiation to manual 100 Mbps full duplex did not help.
    - Changing the network receive buffer packet size did not help (tried all combos from 64k to 8k).

    At this point I'm pretty much out of ideas - any help would be appreciated!
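
    One family of settings not in the list above that is worth testing is the TCP offload and auto-tuning features inside the Windows guest, which are known to interact badly with some emulated and paravirtual NICs. A hedged sketch (each change is reversible by setting the value back to its default, and none of this is a guaranteed fix):

        # Run in an elevated prompt inside the Windows Server 2008 guest,
        # then reboot and re-test throughput.
        netsh int tcp set global chimney=disabled          # TCP chimney offload
        netsh int tcp set global rss=disabled              # receive-side scaling
        netsh int tcp set global autotuninglevel=disabled  # receive window auto-tuning

        # Confirm the resulting state.
        netsh int tcp show global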

    Read the article

  • Recommendations for good FTP server for Win 2008 x64

    - by sfhtimssf1970
    I spent a bunch of time learning/configuring the "all new and better" FTP feature for IIS7. In my opinion, it still fails hard:
    - In order to have multiple FTP sites on the same machine, you have to use host|user usernames (like domain.com|jason) for every account. Using IIS Manager auth doesn't seem to work at all. I'm sure I'm doing something wrong, but I can't figure out what the hell it is. I've read all the official articles on it and configured it a hundred different ways.
    - Doesn't play well with passive connection types. That has to be disabled on the client in order for it to work.
    - Doesn't have any way to allow one user to see multiple sites no matter what binding they are connected to. For instance, if "jason" connects to ftp.domain.com, he should be able to see domain2.com and domain3.com without seeing domain4.com and domain5.com. It takes an act of God to set this up with IIS7.

    So I'm wanting to install a third-party FTP server instead. I've looked at both FileZilla Server and ZFTPServer. Does anyone know of any pros/cons of these? Any other recommendations?

    Read the article

  • SQL Server Backup File Significantly Smaller After Table Recreation

    - by userx
    We run automated weekly backups of our SQL Server. The database in question is configured for Simple Recovery. We back up using Full, not differential. Recently, we had to re-create one of our tables with data in it (making 2 varchar fields a couple of characters longer). This required running a script which created a new table, copied the data over, and then dropped the old one. This worked correctly. Oddly, though, our weekly backup files have now SHRUNK by over 75%! The tables don't have large indexes. All data was copied over correctly (and verified). I've verified that we are doing full and not incremental backups. The new files restore just fine. I can't seem to figure out why the backup files would have shrunk so much. I've also noticed that they get about 10 MB larger every week, even though less than that amount of data is being added. I'm guessing that I'm simply not understanding something. Any insight would be appreciated.
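
    A full backup only copies allocated extents, so a backup can legitimately shrink after a table is rebuilt if the old table or its indexes carried a lot of reserved-but-unused or heavily fragmented space; the ~10 MB weekly growth is likewise normal, since space is allocated in extents rather than by exact row size. A hedged sketch for checking the numbers (table, database, and instance names are placeholders):

        $query = @"
        EXEC sp_spaceused;                              -- database-level reserved/unallocated totals
        EXEC sp_spaceused N'dbo.RebuiltTable';          -- per-table reserved/data/index/unused

        SELECT index_id, avg_page_space_used_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.RebuiltTable'), NULL, NULL, 'SAMPLED');
        "@
        Invoke-Sqlcmd -ServerInstance 'SQLSERVER' -Database 'YourDb' -Query $query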

    Read the article

  • Adding 2008 Server to 2008 Domain

    - by Phillip
    Hello, I'm trying to create a lab for testing before I deploy solutions. I'm not an experienced IT administrator, and therefore I come here for help. I'm running 2 virtual servers on the same machine with a local connection between the two; they're able to ping each other. Their names are TSDATA1 and TSDATA2, where TSDATA1 is the domain controller. I am able to ping between those two with both "ping TSDATA1" and "ping 10.0.0.1", which is the IP address of TSDATA1. The IP address of TSDATA2 is 10.0.0.2. I'm trying to join the domain with TSDATA2, but I'm getting this error when trying:

        Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

        The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain tsdata.local:
        The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)
        The query was for the SRV record for _ldap._tcp.dc._msdcs.tsdata.local

        Common causes of this error include the following:
        - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 10.0.0.1
        - One or more of the following zones do not include delegation to its child zone: tsdata.local, local, . (the root zone)

        For information about correcting this problem, click Help.

    I've figured out it has something to do with DNS lookup, but I have no clue what to do. Can anyone help?
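
    The dialog itself points at the usual cause: TSDATA2 must use the domain controller (10.0.0.1) as its only DNS server, and that DNS server must host the _msdcs SRV records for tsdata.local. A hedged sketch, run on TSDATA2 (the connection name "Local Area Connection" is an assumption; adjust to the adapter's actual name):

        # Point TSDATA2's DNS at the domain controller and verify the SRV record resolves.
        netsh interface ip set dns "Local Area Connection" static 10.0.0.1
        ipconfig /flushdns
        nslookup -type=SRV _ldap._tcp.dc._msdcs.tsdata.local 10.0.0.1

        # If the record is missing, re-register the DC's DNS records on TSDATA1:
        #   ipconfig /registerdns
        #   net stop netlogon  then  net start netlogon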

    Read the article

  • Resource Monitor (resmon) in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to constant values instead of varying based on the data? It seems to me that the utility of a graph is to give a quick overview glance at the values those graphs are showing. So if I look at the CPU graph and the line is up near the top, I know immediately that something is using all my CPU and can go investigate what. I don't really care if the CPU is jumping between .01% and 2%. Or if the network usage monitor is up near the top, I will know that all my bandwidth is being used up, and go figure out what. But the way things are now, the graphs are meaningless because the scales constantly shift. If you look at the network usage graph in one second it might have a scale out of 100 kbps, and the next second have a scale based on 1 Mbps! So... is there a registry key or something that will peg the scale of these graphs to logical maximums? (I mean the graphs on the right-hand side of the Resource Monitor window.)

    Read the article

  • Cooling for a small server room

    - by John Zwinck
    I have a server room about 12 feet square with an unfinished ceiling (exposed ducts and wiring). It houses a few servers (about ten, 1U and 2U) and some networking gear (four 1U switches, three routers, three modems, two cable boxes). With the door closed, it runs around 80 degrees Fahrenheit with half the servers turned on. When I turned on all the servers it reached 86 before I chickened out and propped the door open. The room is adjacent to air-conditioned office space, but does not itself have dedicated air conditioning. The ventilation for this room seems to be limited to one duct coming in at ceiling level, with a powered fan to draw air in, and one duct at ceiling level to allow air to flow out (it seems like it may just go into the drop ceiling cavity in the adjacent room). The adjacent office space stays fairly cool, but I'd prefer not to leave the door propped open all the time. There is both 110v and 208v service in the room, and plenty of power available. But there are no windows, and no floor drains (in a pinch we might be able to run a condensation hose through a small hole we'd drill in the wall to a nearby sink area, but only if absolutely necessary). I've considered portable A/C units, but I'm not sure on sizing and a lot less sure how we would run the exhaust hose(s). I suppose we could point one at the existing room exhaust duct (air return), but substantially modifying the duct is probably a no-no. I've also considered installing a fan box in the door of the room, but I'm concerned that this will only drop the temperature a little. Even right now, with all the equipment on, the room is at 83 degrees with the door open. And the main building A/C turns off daily at 6 PM to conserve energy, so the adjacent room temperature rises at night. How would you cool this room? Let's say the goal is to bring the temperature with everything running from a steady state of around 90 degrees down to 75 (equivalently, to offset the heat produced by ten 1U servers).
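
    A rough load estimate helps size whatever cooling goes in. Assuming an average draw of about 350 W per server (an assumption; actual draw varies by model and load) and ignoring the network gear, which adds comparatively little:

        10 servers x ~350 W = ~3.5 kW
        ~3.5 kW x 3412 BTU/h per kW = ~12,000 BTU/h, i.e. about 1 ton of cooling

    So a unit rated around 12,000-14,000 BTU/h would cover the stated load, with extra headroom needed if more equipment is added later.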

    Read the article

  • VMware Data Recovery error -3960 and Event ID 8193 on Windows Server 2003

    - by flooooo
    I've been trying to solve this problem for a few days now without any success. What I'm trying to do is make a backup of a virtual machine running Windows Server 2003 SP2 using VMware Data Recovery 2.0.0.1861. When starting the backup task it tries to make a snapshot of the virtual machine using VSS, which fails with this error:

        Event Type: Error
        Event Source: VSS
        Event Category: None
        Event ID: 8193
        Date: 05.06.2012
        Time: 12:12:01
        User: N/A
        Computer: LEGOLAS
        Description:
        Volume Shadow Copy Service error: Unexpected error calling routine RegSaveKeyExW. hr = 0x800703f8.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 2d 20 43 6f 64 65 3a 20   - Code:
        0008: 57 52 54 52 45 47 52 43   WRTREGRC
        0010: 30 30 30 30 30 33 39 36   00000396
        0018: 2d 20 43 61 6c 6c 3a 20   - Call:
        0020: 57 52 54 52 45 47 52 43   WRTREGRC
        0028: 30 30 30 30 30 33 31 38   00000318
        0030: 2d 20 50 49 44 3a 20 20   - PID:
        0038: 30 30 30 30 36 34 38 38   00006488
        0040: 2d 20 54 49 44 3a 20 20   - TID:
        0048: 30 30 30 30 34 33 38 34   00004384
        0050: 2d 20 43 4d 44 3a 20 20   - CMD:
        0058: 43 3a 5c 57 49 4e 44 4f   C:\WINDO
        0060: 57 53 5c 53 79 73 74 65   WS\Syste
        0068: 6d 33 32 5c 76 73 73 76   m32\vssv
        0070: 63 2e 65 78 65 20 20 20   c.exe
        0078: 2d 20 55 73 65 72 3a 20   - User:
        0080: 4e 54 20 41 55 54 48 4f   NT AUTHO
        0088: 52 49 54 59 5c 53 59 53   RITY\SYS
        0090: 54 45 4d 20 20 20 20 20   TEM
        0098: 2d 20 53 69 64 3a 20 20   - Sid:
        00a0: 53 2d 31 2d 35 2d 31 38   S-1-5-18

    This machine was converted P2V. I have no idea where to search for the problem or what to do. Google showed a few results but none of them were useful for me. Please help me. If you need further information I'll tell you - just ask!
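
    hr = 0x800703f8 is a registry I/O failure raised while a VSS writer saved its state, and on P2V-converted guests this usually points at a broken writer inside the guest rather than at VMware Data Recovery itself. A hedged first check, run inside the Windows Server 2003 VM:

        # List writer and provider state: any writer shown as failed or non-stable
        # after a backup attempt is the one raising event 8193.
        vssadmin list writers
        vssadmin list providers

        # Quick functional test of VSS itself, independent of VDR (Server 2003's
        # vssadmin supports creating a shadow copy directly). If this also logs
        # event 8193, the problem is entirely inside the guest.
        vssadmin create shadow /for=C:
        vssadmin list shadows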

    Read the article

  • Current wisdom on SQL Server and Hyperthreading?

    - by BradC
    Lots of articles out there (see Slava Oks's original SQL 2000 article and Kevin Kline's SQL 2005 update) recommend disabling hyperthreading on SQL servers, or at least testing your specific workload before enabling it on your servers. This issue is gradually becoming less relevant as true multi-core processors replace hyperthreaded ones, but what's the current wisdom on this issue? Does this advice change any with SQL 2005 64-bit, or SQL 2008, or Windows Server 2008? Ideally, this should be tested in advance in a staging environment, but what about for servers that have already made it into production with HT enabled? How can I tell if performance issues we're experiencing might be related to HT? Is there some specific combination of perfmon counters that might point me in that direction, as opposed to all the other things I normally pursue when working on improving SQL performance? Edit: This is especially attractive because of the potential for an across the board improvement for some of my high-cpu servers, but the client is going to want to see something concrete that helps me identify which servers really could benefit from disabling hyperthreading. Of course, conventional performance troubleshooting is ongoing, but sometimes any little bit helps.
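
    There is no single counter that says "hyper-threading is the problem", but scheduler pressure is the usual signal to correlate with HT: a high share of SOS_SCHEDULER_YIELD and CXPACKET waits and a high signal-wait ratio in SQL Server, alongside Processor\% Processor Time and System\Processor Queue Length in Perfmon. A hedged starting query (the instance name is a placeholder, the exclusion list is abbreviated, and the thresholds people quote, such as signal waits above roughly 20-25% of total waits, are judgment calls rather than hard rules):

        $query = @"
        SELECT TOP (10) wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN ('SLEEP_TASK','LAZYWRITER_SLEEP','SQLTRACE_BUFFER_FLUSH','WAITFOR',
                                'BROKER_TASK_STOP','REQUEST_FOR_DEADLOCK_SEARCH')
        ORDER BY wait_time_ms DESC;

        SELECT 100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS signal_wait_pct
        FROM sys.dm_os_wait_stats;
        "@
        Invoke-Sqlcmd -ServerInstance 'SQLSERVER' -Query $query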

    Read the article

  • Gerrit ssh key setup on windows server

    - by hotpotato
    I am attempting to configure Google's 'Gerrit' code review web app on a Windows Server 2008 virtual machine on our internal network. We are using Apache Tomcat (6.0.36) to host the web app and have deployed the gerrit.war to Tomcat's webapp folder and set up the context.xml, web.xml etc. for the web app correctly, I believe. However, when I start up Tomcat using $CATALINA_HOME/bin/startup.bat I get the following message in the Tomcat logs:

        Dec 07, 2012 1:03:54 PM org.apache.catalina.core.StandardContext listenerStart
        SEVERE: Exception sending context initialized event to listener instance of class com.google.gerrit.httpd.WebAppInitializer
        com.google.inject.CreationException: Guice creation errors:
        1) No SSH keys under C:\Gerrit\config\etc
           while locating com.google.gerrit.sshd.HostKeyProvider
           at com.google.gerrit.sshd.SshModule.configure(SshModule.java:90)

    I have created an is_rsa.pub SSH key and placed it in the specified directory, to no avail. I have been googling this for about a week now and can't seem to find any information about the file or format it is expecting... documentation on setting Gerrit up on Windows seems hard to come by! Can anyone provide useful information about how to correctly configure a host SSH key in this context?
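
    The missing pieces are SSH host key pairs for Gerrit's embedded SSH daemon, not a user key like id_rsa/is_rsa.pub. A hedged sketch: the filenames below are the ones Gerrit 2.x's HostKeyProvider conventionally looks for under the site's etc directory, and it assumes an ssh-keygen binary is available (for example from Git for Windows or Cygwin).

        # Generate unencrypted SSH *host* keys in the directory named in the error.
        cd C:\Gerrit\config\etc
        ssh-keygen -t rsa -P "" -f ssh_host_rsa_key    # creates ssh_host_rsa_key and ssh_host_rsa_key.pub
        ssh-keygen -t dsa -P "" -f ssh_host_dsa_key

        # Restart Tomcat afterwards so the Gerrit webapp re-reads its site path.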

    Read the article

  • SQL Server Backup problem when browsing to the directory

    - by Richard West
    I want to allow a group (e.g. 'BackupManagers') who can only perform backup and restore operations on certain databases. When creating the BackupManagers user account I checked db_backupoperator. When the user logs in to create a backup, they get an error message similar to the following when they select Tasks - Backup, click Add in the destination block, then click the "..." button to browse:

        TITLE: Locate Database Files - MYSERVER\SQL2005
        E:\MSSQL\Backup
        Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists. If you know that the service account can access a specific file, type in the full path for the file in the File Name control in the Locate dialog box.

    I have confirmed that the user has permissions to the folder. I have even created a share to this folder and had them access it through Explorer. They are able to create and delete files within the folder. I have found that if they type in the path to the file instead of using the "..." button to browse the directory tree, then they can create a backup file fine. Why is the browse button not working as expected? Thanks!
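
    The tree in that dialog is built server-side by the Database Engine (via calls like xp_fixeddrives and xp_dirtree), not by the Windows account of the person clicking Browse, which is why the error text talks about the service account and why typing the full path still works. A hedged way to narrow it down, run while connected as a BackupManagers member (instance and path taken from the question):

        # If these calls fail or return nothing for the restricted login but work
        # for a sysadmin, the problem is engine-side enumeration/permissions rather
        # than the user's own NTFS rights on the share.
        $query = @"
        EXEC xp_fixeddrives;
        EXEC xp_dirtree 'E:\MSSQL\Backup', 1, 1;
        "@
        Invoke-Sqlcmd -ServerInstance 'MYSERVER\SQL2005' -Query $query

        # Also confirm which account the engine runs as; that account needs NTFS
        # rights on E:\MSSQL\Backup for the enumeration to return anything.
        Get-WmiObject Win32_Service -Filter "Name LIKE 'MSSQL%'" | Select-Object Name, StartName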

    Read the article

  • Open Source Chef Server can't upload cookbook

    - by veilig
    I just setup the open source chef server on an Ubuntu 12.04 EC2 instance, I've setup my webui and am able to get responses from my knife commands ie: knife node list, knife client list, knife user list, etc... I'm able to update roles, databags, environments, etc... but I cannot upload any cookbooks. I'm running my workstation on Mac OSX. I keep getting this output at the end of my command knife cookbook upload -VV curl. Doesn't matter what cookbook I upload, or if I upload them all - I keep getting the same response DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::JSONToModelOutput#handle_response /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http/json_output.rb:51:in `handle_response': undefined method `chomp' for nil:NilClass (NoMethodError) from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:229:in `block in apply_response_middleware' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `each' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `inject' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `apply_response_middleware' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:144:in `request' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:118:in `put' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/cookbook_uploader.rb:123:in `block in uploader_function_for' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `call' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `block (3 levels) in process' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `loop' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `block (2 levels) in process'INFO: HTTP Request Returned 204 No Content:

    Read the article

  • Somewhat powerful server needed for computationally expensive stuff

    - by Dane Larsen
    So here's my problem. My Dad runs a company that does some rather computationally expensive stuff. This is not supercomputer level stuff, but it does take several hours to run the average job on his Core i7 desktop. He asked me to look into a way to have his customers use the code on an hourly basis, namely via a server. Ideally he'd be able to buy a box for about $1000, and hook it right up to our home connection. Unfortunately, the data that needs to be both sent and received is on the order of several hundred megs. We live in a rural area, and the fastest connection offered is 1.5Mbit/s. Download. It's like .3Mbit/s upload. Not workable. What are the options for this kind of thing? Ideally, we'd have about 2GB of ram, 300-500GB of storage, and a nice dual core, and it has to run some flavor of Linux. Any suggestions? Thanks in advance EDIT: Also, ideally the monthly price would be < $100 per month.
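
    To put numbers on the "not workable" judgment: assuming a 500 MB result set per job (the question says several hundred megabytes each way) over the 0.3 Mbit/s uplink:

        500 MB x 8 bits/byte = ~4,000 Mbit
        4,000 Mbit / 0.3 Mbit/s = ~13,300 s = ~3.7 hours per upload

    That is on the same order as the compute time itself, before the transfer in the other direction is even counted.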

    Read the article

  • WSS Search fills 10 GB limit on SBS server 2011

    - by Kactus
    I've got an SBS Server 2011 Standard SP1 that isn't very busy: 2 local users and 2 remote. We have SharePoint, which has maybe a dozen small documents at most. I've just started getting the following two errors:

        Could not allocate space for object 'dbo.MSSBatchHistory'.'IX_MSSBatchHistory' in database 'WSS_Search_SERVER' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    and

        CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 10240 MB per database.

    Digging around in SQL Server Management Studio I see that the WSS Search DB file size is 10241 MB; the log file is only 147 MB. Firstly, why is WSS Search taking up so much space? How can I stop it from doing so, and what can I do now to get things running OK? I know about log file truncating, and this isn't the case here since the log is tiny. Any help is appreciated. There is plenty of free space on the disk (791 GB free). Thanks, Kactus
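
    Before deciding on a fix, it helps to confirm what is actually eating the 10 GB inside WSS_Search; a hedged sketch (the SBS 2011 SharePoint instance name is an assumption - adjust to whatever Management Studio shows, and the same query runs fine there too):

        $query = @"
        SELECT TOP (10) OBJECT_NAME(object_id) AS table_name,
               SUM(reserved_page_count) * 8 / 1024 AS reserved_mb
        FROM sys.dm_db_partition_stats
        GROUP BY object_id
        ORDER BY reserved_mb DESC;
        "@
        Invoke-Sqlcmd -ServerInstance '.\SHAREPOINT' -Database 'WSS_Search_SERVER' -Query $query

    If the crawl/history tables dominate, the usual remedy is to reset or recreate the SharePoint search index database rather than to shrink it in place, since the 10 GB cap is a SQL Server Express licensing limit rather than a disk problem.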

    Read the article
