Search Results

Search found 89127 results on 3566 pages for 'backup server'.

Page 425 of 3566

  • Cron job to execute backup.bash

    - by leejava
    Dear all, I wish to have cron execute backup.bash, but when I create the cron entry below, it doesn't execute my script at all:

        */1 * * * * /var/www/mango_gis/delete_snapshot.bash >/dev/null

    Here is my script:

        #!/bin/bash
        get() {
          local pos=$1
          shift
          eval 'echo ${'$pos'}'
        }
        length() {
          echo $#
        }
        find_snapshots() {
          echo $(ec2-describe-snapshots | xargs -n1 basename)
        }
        snapshots=$(find_snapshots)
        len=$(length $snapshots)
        row_count=$(($len/6))
        if (($row_count > 6)); then
          delete_count=$(($row_count-6))
          for (( i=1; i<=$delete_count; i++ )); do
            ec2-delete-snapshot $(echo $(get $((2+$((6*$(($i-1)))))) $snapshots)) >/dev/null
          done
        fi

    Please advise... Leakhina
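
    A common culprit here is cron's minimal environment: the EC2 API tools need PATH, JAVA_HOME and credential variables that your interactive shell has but cron does not, so the script dies before it can do anything. A crontab sketch; every path and value below is an assumption to adjust to your setup:

        SHELL=/bin/bash
        # cron's default PATH will not include the EC2 tools
        PATH=/usr/local/bin:/usr/bin:/bin:/opt/ec2/bin
        JAVA_HOME=/usr/lib/jvm/default-java
        EC2_HOME=/opt/ec2
        EC2_PRIVATE_KEY=/root/.ec2/pk.pem
        EC2_CERT=/root/.ec2/cert.pem

        # log output instead of discarding it, so failures become visible
        */1 * * * * /var/www/mango_gis/delete_snapshot.bash >>/var/log/delete_snapshot.log 2>&1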

    Read the article

  • Overwriting TFS Web Services

    - by javarg
    In this blog I will share a technique I used to intercept TFS Web Services calls. This technique is very invasive and requires you to overwrite the default TFS Web Services behavior. I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point). For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider TFS Subscribers functionality before taking this approach; check Martin's post about subscribers). So let's get started. The technique consists of versioning the TFS Web Services .asmx service classes. If you look into TFS's ASMX services you will notice that versioning is supported by creating a class hierarchy between different product versions. For instance, let's take the Work Item management service .asmx. Check the following .asmx file located at %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx. The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3:

        <%-- Copyright (c) Microsoft Corporation. All rights reserved. --%>
        <%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %>

    The inheritance hierarchy for this service class (shown as a diagram in the original post) uses a naming convention for service versioning: ClientService3, ClientService2, ClientService. We will need to overwrite the latest service version provided by the product (in this case ClientService3 for TFS 2010). The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic than the validations/constraints the process template provides. Important: back up the original .asmx file and create one of your own.

    1. Create a Visual Studio Web App Project and include a new ASMX Web Service in the project.
    2. Add the following references to the project (check the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\):
       Microsoft.TeamFoundation.Framework.Server.dll
       Microsoft.TeamFoundation.Server.dll
       Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll
       Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll
       Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll
    3. Replace the default service implementation with something similar to the following code:

        /// <summary>
        /// Inherit from ClientService3 to overwrite the default implementation
        /// </summary>
        [WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03",
            Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")]
        public class CustomTfsClientService : ClientService3
        {
            [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)]
            public override bool BulkUpdate(
                XmlElement package,
                out XmlElement result,
                MetadataTableHaveEntry[] metadataHave,
                out string dbStamp,
                out Payload metadata)
            {
                var xe = XElement.Parse(package.OuterXml);
                // We only intercept WorkItem updates (this sample is easily extended to capture any operation).
                var wit = xe.Element("UpdateWorkItem");
                if (wit != null)
                {
                    if (wit.Attribute("WorkItemID") != null)
                    {
                        int witId = (int)wit.Attribute("WorkItemID");
                        // With this Id we can query TFS for more detailed information, using the TFS Client API (assuming the WIT already exists).
                        var stateChanged =
                            wit.Element("Columns").Elements("Column").FirstOrDefault(c => (string)c.Attribute("Column") == "System.State");
                        if (stateChanged != null)
                        {
                            var newStateName = stateChanged.Element("Value").Value;
                            if (newStateName == "Resolved")
                            {
                                throw new Exception("Cannot change state to Resolved!");
                            }
                        }
                    }
                }
                // Finally, call the base implementation
                return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata);
            }
        }

    4. Build your solution and overwrite the original .asmx with the new implementation referencing our new service version (don't forget to back it up first).
    5. Copy your project's .dll into the following path: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin
    6. Try saving a WorkItem into the Resolved state. Enjoy!

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using SSD or SAN / NAS based storage and sporadically observe SQL Server experiencing high IO wait times, or does your DAS / HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to up-vote my Connect item: https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries can take minutes or hours to complete when the CPU is busy.

    In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk (transaction log records, for example), the task will queue up the IO operation and wait for it to complete (the task is in the suspended state; this wait time is the resource wait time). When the IO operation is complete, the task is queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (in the runnable state) until other tasks in the queue either complete, get suspended due to waits, or exhaust their quantum of 4ms (this is the signal wait time, which together with the resource wait time makes up the overall wait time). When the CPU becomes free, the task finally runs on the CPU (the running state).

    The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then after the resource becomes available this query might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use their full quantum).

    When CPU usage is high, say many CPU-intensive queries are running on the instance, there is a possibility that IO operations already completed at the hardware and operating-system level are not yet processed by SQL Server, keeping the task in the resource wait state longer than necessary. On an SSD the IO operation might complete in less than a millisecond, yet it might take SQL Server hundreds of milliseconds to process the completed IO. For example, say a user inserts 500 rows in individual transactions. When the transaction log is on an SSD or a battery-backed controller with write cache enabled, all of these inserts complete in 100 to 200ms. With a CPU-intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence a very high signal_wait_time_ms leading to more query delays. However, the Performance Monitor counter PhysicalDisk: Avg. Disk sec/Write will report very low latency.

    The same delayed IO handling also affects read operations, with an artificially very high PAGEIOLATCH_SH wait time (while the number of PAGEIOLATCH_SH waits remains the same). This problem will manifest more and more as customers adopt SSD-based storage for SQL Server, since faster IOs drive CPU usage to its limits. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
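
    The symptom described above can be spotted with the standard wait-statistics DMV: high signal_wait_time_ms relative to total wait_time_ms for the IO wait types suggests CPU pressure is delaying IO completion processing rather than the disks being slow. A minimal query sketch:

        -- compare signal (CPU queue) waits against resource (IO) waits
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms,
               wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX')
        ORDER BY wait_time_ms DESC;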

    We have a Connect item open (with example scripts) to reproduce this behavior: https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Please up-vote the item so the issue will be addressed by the SQL Server product team soon.

    Thanks for your help and best regards,
    Ramesh Meyyappan
    Home: www.sqlworkshops.com
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately, none of these resources are unlimited, and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it can affect the overall performance of the entire system.

    In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new "self-tuning" system simplifies arriving at the proper number of threads and utilizing them.

    A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that lets you manage and utilize threads, and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or priority for a request with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. A default work manager is provided, and it should be sufficient for most applications. However, there are cases where you want to change this default configuration: your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. Be aware that a wrong work manager configuration can bring a performance penalty where you expected an improvement.

    We can define/configure work managers at:
    - Domain level: config.xml
    - Application level: weblogic-application.xml
    - Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application use weblogic.xml)

    We can use the following predefined rules/constraints to manage the work:
    - Fair Share Request Class: specifies the average thread-use time required to process requests. The default is 50.
    - Response Time Request Class: specifies a response-time goal in milliseconds.
    - Context Request Class: assigns request classes to requests based on context information.
    - Min Threads Constraint: guarantees the number of threads the server will allocate to requests.
    - Max Threads Constraint: limits the number of concurrent threads executing requests.
    - Capacity Constraint: causes the server to reject requests only when it has reached its capacity.

    Let's create a work manager for our application for long-running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager, and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the one where your long-running application is deployed) and finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads, and save. This prevents WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and lets the process finish.
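
    The same work manager can be defined in a descriptor instead of the console. A minimal sketch for weblogic-application.xml, assuming a release that supports the ignore-stuck-threads element (10.3+); the names and the thread count are illustrative, not from the original post:

        <weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
          <work-manager>
            <name>MyWorkManager</name>
            <!-- cap concurrency for the long-running work -->
            <max-threads-constraint>
              <name>MaxThreadsCount</name>
              <count>8</count>
            </max-threads-constraint>
            <!-- don't flag long-running requests as stuck -->
            <ignore-stuck-threads>true</ignore-stuck-threads>
          </work-manager>
        </weblogic-application>

    A web application then opts in by referencing the work manager, e.g. <wl-dispatch-policy>MyWorkManager</wl-dispatch-policy> in its weblogic.xml.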

    Read the article

  • Add Network Printer drivers in Windows 7/Server 2008 R2?

    - by Matias Nino
    I'm running a 64-bit Windows 7 / Windows 2008 R2 workstation that I just installed. I need to add a printer that is shared on the network from a 32-bit Windows 2000 print server. It is an HP LaserJet 5Si printer, the drivers for which HP tells me are built into Windows 7/R2. However, whenever I connect to the printer or try to add it, I get a screen telling me a driver is needed, and upon clicking OK, a second screen asking me to locate the driver. How can I possibly locate a driver that is SUPPOSED TO BE NATIVELY SUPPORTED on Windows 7/R2? The tough part is that this printer is one of many shared on a server and does not have a direct IP address. Even worse: I have no access to the print server, so I cannot put the 64-bit drivers on there. Any ideas?

    UPDATE: HP doesn't make a Vista driver either. HP claims the printer is natively supported by Vista and 7, which is true, because I am able to create a local printer on a fake TCP/IP port and Windows lets me pick the proper driver. However, when adding from the network, Windows does not let me select a driver and demands an INF. I tried searching the entire sub-structure of the C:\Windows directory and could not find any INF files that contain HP information. The INF might be located somewhere on the Windows installation DVD, but all the files on the DVD are compressed and unrecognizable.

    UPDATE #2: I installed the proper printer driver as a local printer (with no printer attached). However, this did not change the fact that it STILL asks me to provide drivers when connecting to the networked printer.
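
    The "local printer on a fake port" trick works because it stages the driver locally; the same staging can be scripted with an extracted INF, after which the network connection can be retried. A sketch; the INF path and model string are assumptions, taken from whatever 64-bit driver package you can extract:

        rem stage the 64-bit driver package in the local driver store
        pnputil -a C:\drivers\hp5si\prnhp001.inf
        rem install the driver by model name from that INF
        rundll32 printui.dll,PrintUIEntry /ia /m "HP LaserJet 5Si" /f C:\drivers\hp5si\prnhp001.inf
        rem then connect to the shared queue
        rundll32 printui.dll,PrintUIEntry /in /n "\\printserver\HP5Si"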

    Read the article

  • How do I Install fonts on Windows Web Server 2008 R2

    - by Eric Brearley
    I would like to install Arial on to our web servers. Just to add: this is because we generate reports server-side and make them available in a number of downloadable formats (Excel, PDF etc.), hence the need to have the fonts installed on the server. I have console access to our web farm, and on the server I've copied the .ttf files into the c:\fonts folder. Then I run the following VBScript on the server:

        ' VBScript to install the Arial font family on blade servers
        Dim objShell, objFolder, objFolderItem, fontFile
        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        For Each fontFile In Array("arial.ttf", "arialbd.ttf", "arialbi.ttf", "ariali.ttf", "ariblk.ttf")
            Set objFolderItem = objFolder.ParseName(fontFile)
            objFolderItem.InvokeVerb ("Install")
        Next
        MsgBox "Fonts installed"

    I get the message box, but no font-installation pop-ups like I do when I run this script on my desktop. The fonts do not get installed: they do not show in the font selection dialogue in Notepad (on the web server) and we get the ASP.NET exception "Font 'Arial' cannot be found.". I have also restarted the server, and I have also tried copying the .ttf files to the c:\windows\fonts folder and restarting the server. What do I need to do to install fonts on Windows Web Server 2008 R2?
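
    One plausible explanation is that the shell "Install" verb relies on an interactive shell session, which server-side code never sees. The manual equivalent is to copy the file into %windir%\Fonts and register it; a sketch (the registry value name follows the usual "display name (TrueType)" convention, and a reboot or restart of the consuming process may be needed before the font is visible):

        copy C:\fonts\arial.ttf %windir%\Fonts\
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts" /v "Arial (TrueType)" /t REG_SZ /d arial.ttf /f
        rem repeat for each face, e.g. "Arial Bold (TrueType)" -> arialbd.ttf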

    Read the article

  • 2 Server FC SAN Configuration

    - by BSte
    I have 2 identical servers:
    - 48GB RAM
    - 8x GigE NICs
    - 2x FC NICs
    - 2x 72GB RAID1 hard drives
    - Server 2008 R2 host

    I also have a Fibre Channel SAN:
    - 16x 146GB RAID10 hard drives
    - 2x dual-port FC controllers (controllers A and B both have ports 1 and 2)
    - Server 1 has fibre to ports A1 and B1
    - Server 2 has fibre to ports A2 and B2
    - I kept the default config with 1 virtual disk and 1 volume
    - The default mappings show ports A1, A2, B1, B2 on LUN 0 with read-write

    My goal is:
    - 2x VMs with IIS and guest-level failover
    - 2x VMs with SQL 2008 Enterprise using a single DB and guest-level failover
    - 1x VM that is an application server, preferably with host failover. From what I read, this will also need AD for clustering to work.
    - I need at least 1 VM always running for IIS and the SQL DB. This includes hardware failover and application failover (e.g. rebooting a VM for critical updates).

    I was told I could install the VMs and run them from the SAN, and this is what I've tried: installed MPIO and Hyper-V on Server 1 and Server 2; added the SAN as disk E: on both servers, made it GPT and formatted it NTFS; configured Hyper-V on both servers to use E:\VD and E:\VHD for storage. On Server 1, I was able to install 3 VMs on the SAN and all worked well. On Server 2, I would start installing the other 2 VMs, but at some point the VMs would always get a corrupt .VHD message (on either server). Everything I found about the message typically related to antivirus, so I removed all antivirus from both host servers (now running only 2008 R2). I reformatted drive E: (the SAN), recreated the VHD and VD directories, installed 3 VMs on Server 1, and then had the same issue when installing VMs on Server 2. Obviously something is wrong, but I'm not certain what exactly. My questions:

    1) Are my goals possible with this hardware setup? I've read that 2008 R2 supports FC SANs, but a lot of articles seem to only give examples with iSCSI setups.
    2) What would be the suggested route for setting up the SAN (disks, volumes, LUNs)? I've worked with Hyper-V on a single machine before and never had issues. Actual experience working with SANs and clustering is new to me.

    Any suggestions or recommendations to get me in the right direction would be much appreciated.

    Read the article

  • How to redirect all Internet traffic to OpenVPN Server

    - by JuliaS
    I have seen working solutions for forcing Internet traffic to go through the OpenVPN server, but they are all done in Linux; all I want to know is how to add an entry to the route table in Windows to make this happen. Connectivity between the client and server is fine: my Windows 7 client can establish a connection to the Windows 2008 server, but once it is established, Internet traffic still goes out directly from the local Windows 7 machine. Here are the details:

    Server: Windows 2008 Server with one NIC
    OpenVPN IP address: 192.168.0.1
    Local NIC IP address (connects the server to the Internet): 10.242.69.107

    Client: Windows 7 with one NIC
    OpenVPN IP address: 192.168.0.2
    ISP-allocated IP address: 10.0.8.2 (gateway 10.0.8.1)

    Server OpenVPN config:

        dev tun
        ifconfig 192.168.0.1 192.168.0.2
        secret static.key
        push "redirect-gateway def1"

    Client OpenVPN config:

        remote xxx.xxx.com
        dev tun
        ifconfig 192.168.0.2 192.168.0.1
        secret static.key

    I'm not an expert with adding routes etc. I would be grateful if someone could let me know how to add this entry to my server/client route table.

    EDIT: Output from the client's netstat -rnv:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0         10.0.8.1         10.0.8.2     20
                 10.0.8.0  255.255.255.252          On-link         10.0.8.2    276
                 10.0.8.2  255.255.255.255          On-link         10.0.8.2    276
                 10.0.8.3  255.255.255.255          On-link         10.0.8.2    276
                127.0.0.0        255.0.0.0          On-link        127.0.0.1    306
                127.0.0.1  255.255.255.255          On-link        127.0.0.1    306
          127.255.255.255  255.255.255.255          On-link        127.0.0.1    306
              192.168.0.0  255.255.255.252          On-link      192.168.0.2    286
              192.168.0.2  255.255.255.255          On-link      192.168.0.2    286
              192.168.0.3  255.255.255.255          On-link      192.168.0.2    286
                224.0.0.0        240.0.0.0          On-link        127.0.0.1    306
                224.0.0.0        240.0.0.0          On-link         10.0.8.2    276
                224.0.0.0        240.0.0.0          On-link      192.168.0.2    286
          255.255.255.255  255.255.255.255          On-link        127.0.0.1    306
          255.255.255.255  255.255.255.255          On-link         10.0.8.2    276
          255.255.255.255  255.255.255.255          On-link      192.168.0.2    286
        ===========================================================================
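
    Two things worth noting here. First, "push" options only take effect in TLS/server mode; with a static-key point-to-point setup like this one, the client never receives pushed options, so "redirect-gateway def1" belongs directly in the client config instead. Second, what that option does can be reproduced by hand. A sketch of the equivalent Windows route commands on the client; the first line keeps the tunnel itself reachable, and the placeholder must be replaced with the VPN server's public address:

        route add <server-public-ip> mask 255.255.255.255 10.0.8.1
        route add 0.0.0.0 mask 128.0.0.0 192.168.0.1
        route add 128.0.0.0 mask 128.0.0.0 192.168.0.1

    The two /1 routes override the default route without deleting it. The server must also NAT or route the tunnel traffic onward (e.g. RRAS or ICS on the 2008 box), or replies will never come back.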

    Read the article

  • How to enable the 2 concurrent (+1 console) sessions on Windows Server 2012

    - by Dai
    I have a Windows Server 2012 VM running on Windows Azure. I want to enable the ability for 2 simultaneous administrative sessions over Remote Desktop. This is permitted under the EULA for Windows Server 2012, and it is NOT the same thing as the fully-blown Terminal Services / RDS feature. In Windows Server 2000 and 2003, multiple concurrent sessions (up to a limit of 2, plus the root /console session) were enabled by default (such that logging in via RDP without logging out first would create a new session rather than reconnecting to the old session). Server 2008 and later use a single session per user by default, which simplifies administration (most people want to reconnect to their old session). In Windows Server 2008 R2, you can add the MMC snap-in for Remote Desktop Session Host Configuration, which allows you to re-enable concurrent sessions. However, in Server 2012, after adding the Remote Administration snap-ins from Server Manager, it seems the Remote Desktop Session Host Configuration snap-in has been removed. How can I re-enable multiple concurrent sessions for Remote Desktop for Administration in Windows Server 2012?
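
    The snap-in is gone, but the setting it managed survives as a registry value (equivalently, the "Restrict Remote Desktop Services users to a single Remote Desktop Services session" Group Policy under Remote Desktop Session Host > Connections). A sketch:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 0 /f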

    Read the article

  • Help setting up a secondary authoritative DNS server.

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.

    Authoritative servers:
    - DNS1: Windows 2003
    - DNS2: old Red Hat, replacing with a newer version
    - DNS3: Windows 2008 (I installed)

    Caching and recursive resolver servers:
    - Server1: Windows 2003
    - Server2: CentOS 5.2 (I installed)
    - Server3: CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching and Windows authoritative servers, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use djbdns and tinydns on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to be caching, but the only instructions I see are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track with using these instructions, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it as an issue.

        ps -aux | grep dns
        avahi     3493  0.0  0.2   2600  1272 ?      Ss   Apr24   0:05 avahi-daemon: running [newdns2.local]
        root      5254  0.0  0.1   3920   680 pts/0  R+   09:56   0:00 grep dns
        root      6451  0.0  0.0   1528   308 ?      S    Apr29   0:00 supervise tinydns
        dnslog    6454  0.0  0.0   1540   308 ?      S    Apr29   0:00 multilog t ./main
        tinydns   9269  0.0  0.0   1652   308 ?      S    Apr29   0:00 /usr/local/bin/tinydns
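
    A couple of checks that usually narrow this down: tinydns answers UDP only and binds to exactly the address in its service directory's env/IP file, so queries to any other address time out, and zone transfers (TCP/AXFR) need the separate axfrdns service. A sketch, assuming the default service-directory layout; host and zone names are placeholders:

        # confirm tinydns is bound where you think it is
        cat /etc/tinydns/env/IP
        netstat -lnu | grep :53
        # then query the daemon directly from another host
        dig @newdns2.example.edu example.edu SOA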

    Read the article

  • Move postfix maildir files from one mail server to another

    - by Tauren
    I have a new mail server configured as described in this howto: http://howtoforge.com/virtual-users-domains-postfix-courier-mysql-squirrelmail-ubuntu-9.10. I also have an ancient mail server configured very similarly (using the same HOWTO, just for Fedora Core 6, if I recall correctly). Earlier today I had to switch from the old server to the new one, and the old one is no longer online. However, after I had migrated everything and switched it all over, I discovered a bunch of undelivered mail in the queue. It got delivered to the local mailboxes on the old server, so now there are a bunch of messages on it that I'd like to move to the new server. The new server has already received new messages, so I need to merge the files together somehow. For each user with an email of username@customer.com, there are files like this on both servers:

        /home/vmail/customer.com/username/maildirsize
        /home/vmail/customer.com/username/courierpop3dsizelist
        /home/vmail/customer.com/username/new/1271481177.Vca01I6006bM580357.mailhost.mydomain.com

    Can I simply copy the hundreds of files in the various "new" directories on the old server to the corresponding "new" directories on the new server? Will the maildirsize and courierpop3dsizelist files get updated automatically, or do I need to do something to update them?
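
    For what it's worth, Maildir was designed for this: each message is a single file whose name embeds a timestamp and hostname, so merging two stores by copying is generally safe. A sketch, assuming the old store can be mounted or copied somewhere reachable (paths are illustrative):

        # merge message files from the old store into the live one;
        # --ignore-existing leaves anything already present untouched
        rsync -av --ignore-existing /mnt/oldserver/home/vmail/customer.com/ /home/vmail/customer.com/
        # drop the cached size files; Courier/Postfix regenerate them on next access
        find /home/vmail -name maildirsize -delete
        find /home/vmail -name courierpop3dsizelist -delete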

    Read the article

  • Team Foundation Server 2008 - TF220056 Error during installation

    - by David
    I'm attempting to install Team Foundation Server 2008 on a Windows Server 2003 instance that exists under Hyper-V. The SQL Server database itself is held on the root partition of the Hyper-V server and has Reporting Services installed (so I've solved the TF220059 error already). After hitting "Next" after typing the name of the SQL Server, I get this error:

        Microsoft Visual Studio 2008 Team Foundation Server Setup
        TF220056: An unrecoverable error occurred while trying to check the status
        of the Team Foundation database. Installation cannot continue. Check the
        install log for more details.

    The error log's stack trace makes it look like a bug in the TFS installer itself:

        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe: System.IO.IOException: The directory name is invalid.
        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:    at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:    at System.IO.__Error.WinIOError()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at System.IO.Path.GetTempFileName()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.get_Log()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.Run()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.CommandLine.RunCommand(String[] args)
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: The directory name is invalid.
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe check failed with error code: 100

    I'm running the installer as the domain Administrator, although the server is a Terminal Server in Application Mode. Might that be the cause of the problem?
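
    The Terminal Server hunch is plausible given the trace: Path.GetTempFileName() resolves %TEMP%, and Terminal Server sessions get per-session temp folders that may not exist or may have been cleaned up. A quick check/workaround sketch (the path is illustrative):

        echo %TEMP%
        dir "%TEMP%"
        rem if the folder is missing, point this session at a real one before running setup
        mkdir C:\Temp
        set TEMP=C:\Temp
        set TMP=C:\Temp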

    Read the article

  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy for one of my internal servers to visit https://www.googleapis.com using Squid, so that my Rails application on that server reaches googleapis.com via the proxy. I am new to this, so my approach is to set up an SSL transparent proxy with Squid. I built Squid 3.3 on Ubuntu 12.04, generated an SSL key and certificate pair, and configured Squid like this, leaving almost all other configuration at the defaults:

        http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key

    The permissions of the directory that holds the key/crt are:

        drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl

    Back on my dev laptop, I put "<proxy-server-ip> www.googleapis.com" in my /etc/hosts to make the call go to my proxy server. But when I try it in my Rails application, I get:

        SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol

    And I also tried openssl from the CLI:

        openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL"
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:error in SSLv2/v3 read server hello A
        SSL_connect:error in SSLv2/v3 read server hello A

    Where did I go wrong?

    Read the article

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly-relevant background info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is that inevitably I'll do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS, IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory.

    The problem: These last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh, since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied:

    1. Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured to domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move itself over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but he may have been wrong).

    2. I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues getting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyway; frankly, my memory of today is clouded by intense frustration). Additionally, I was confused about which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or the Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain, or 2) is that an unnecessary step?

    The SSL certificate strikes back: When I thought I had the proper SSL certificate assigned, for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account, claiming that the certificate didn't match the domain "mail.domain.com". Is this an issue that will be resolved by problem #2, or is it a separate one entirely?

    Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks! -Eric

    P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time, the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it... The only mailbox left was the Administrator account, the built-in one I couldn't delete.
    So I attempted to manually uninstall it following several guides online, only to end up unable to even launch the installer, and I had to get my system wiped AGAIN, for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" can't just mean "hey, you, delete everything and go away". There's not even a force-uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>
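
    On the certificate question: the Exchange-generated request is generally the one to use, because it lets Exchange populate every name it will answer on (which is also the usual cause of the Outlook "certificate doesn't match" error). A sketch from the Exchange Management Shell; domain names and file paths are placeholders:

        # generate the request with the names Exchange needs
        $req = New-ExchangeCertificate -GenerateRequest -SubjectName "cn=mail.domain.com" `
            -DomainName mail.domain.com, autodiscover.domain.com -PrivateKeyExportable $true
        $req | Out-File C:\certreq.txt
        # after GoDaddy returns the certificate:
        Import-ExchangeCertificate -FileData ([Byte[]]$(Get-Content C:\mail_domain_com.crt -Encoding Byte -ReadCount 0))
        Get-ExchangeCertificate -DomainName mail.domain.com | Enable-ExchangeCertificate -Services IIS,SMTP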

    Read the article

  • Create Virtual Image of Laptop before Formatting

    - by Simon Mark Smith
    I have a 3-year-old laptop running Windows XP that I used for business. Although I have not used the laptop in over a year, I now want to re-commission it with Windows 7 and a fresh install. Before I do the fresh install I want to create a virtual image of the laptop that I can keep and potentially run on my desktop machine, should I ever need to access any of the old files/projects that it currently contains. I know that most people will say just copy the files over to your desktop, but my concern is the configuration of the laptop. I used to use it for development, and it has older versions of Visual Studio, SQL Server, ActiveX controls etc. than I currently use, so I really want to preserve the environment, not just the files. So really I am asking: what is the best tool-set/method to achieve this? I understand there are free VM tools available, but I have never done this before and would appreciate any help.
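
    One commonly used tool for exactly this physical-to-virtual step is Sysinternals Disk2vhd, which images the live machine into a VHD that Virtual PC, VirtualBox or Hyper-V can boot. A sketch (the output path is illustrative; writing to an external drive avoids imaging the image):

        disk2vhd * E:\images\old-laptop.vhd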

    Read the article

  • SQL2008R2 install issues on windows 7 - unable to install setup support files?

    - by Liam
    I am trying to install the above, but am getting the following errors when it attempts to install the setup support files.

    This is the first error that occurs during installation of the setup support files:

        TITLE: Microsoft SQL Server 2008 R2 Setup
        The following error has occurred:
        The installer has encountered an unexpected error. The error code is 2337.
        Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0.
        Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.
        For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.50.1600.1&EvtType=0xDF039760%25401201%25401

    This is the second error, which occurs after clicking Continue in the installer after the first error is generated:

        TITLE: Microsoft SQL Server 2008 R2 Setup
        The following error has occurred:
        SQL Server Setup has encountered an error when running a Windows Installer file.
        Windows Installer error message: The Windows Installer Service could not be accessed.
        This can occur if the Windows Installer is not correctly installed.
        Contact your support personnel for assistance.
        Windows Installer file: C:\Users\watto_uk\Desktop\In-Digital\Software\Microsoft\SQL Server 2008 R2\1033_ENU_LP\x64\setup\sqlsupport_msi\SqlSupport.msi
        Windows Installer log file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110713_205508\SqlSupport_Cpu64_1_ComponentUpdate.log
        Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.
        For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.50.1600.1&EvtType=0xDC80C325

    These errors are generated from an ISO package downloaded from Microsoft. I have also tried using the Web Platform Installer to install the Express version instead, but the SQL Server installation fails with that too. The Management Studio installs fine, but not the server. I have checked to make sure that the Windows Installer service is started, and it is. Can't seem to find an answer for this anywhere, as all previously reported issues appear to be related to XP. I did have the Express edition installed on the machine previously, but uninstalled it to upgrade to the full version. I wish I hadn't now. Can anyone kindly offer any advice, or point me in the right direction to stop me going insane with this? Any advice will be appreciated.

    Update: After digging a bit deeper I've located details of the error from the setup log file (I can also upload the log file if required):

        MSI (s) (E8:28) [23:35:18:705]: Assembly Error:The module '%1' was expected to contain an assembly manifest.
        MSI (s) (E8:28) [23:35:18:705]: Note: 1: 1935 2: 3: 0x80131018 4: IStream 5: Commit 6:
        MSI (s) (E8:28) [23:35:18:705]: Note: 1: 2337 2: 0 3: Microsoft.SqlServer.GridControl.dll
        MSI (s) (E8:28) [23:35:22:869]: Product: Microsoft SQL Server 2008 R2 Setup (English) -- Error 2337. The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0.
        MSI (s) (E8:28) [23:35:22:916]: Internal Exception during install operation: 0xc0000005 at 0x000007FEE908A23E.
        MSI (s) (E8:28) [23:35:22:916]: WER report disabled for silent install.
        MSI (s) (E8:28) [23:35:22:932]: Internal MSI error. Installer terminated prematurely.
        Error 2337. The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0.
        MSI (s) (E8:28) [23:35:22:932]: MainEngineThread is returning 1603
        MSI (s) (E8:58) [23:35:22:932]: RESTART MANAGER: Session closed.
        Installer stopped prematurely.
        MSI (c) (0C:14) [23:35:22:947]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
        MSI (c) (0C:14) [23:35:22:947]: MainEngineThread is returning 1601
        === Verbose logging stopped: 13/07/2011 23:35:22 ===

    Read the article

  • Windows 2008 server smart card security module problem

    - by chris13work
    I've got a smart card reader and a server application that uses it as a security module. If I run the application from a DOS prompt, everything is fine: the server runs and clients can connect to it. I tried to install the server as a Windows service and start it. The server starts, but always returns an authentication error because it cannot call the smart card to do encryption. Then I tried to start it with Task Scheduler, with the trigger set to "on startup". The server starts but still cannot access the smart card reader. Then I tried remote-desktopping to the machine and running the server application from a DOS prompt; the same error is returned. In short, the smart card reader only works in an active console desktop session. The server application uses the WINSCARD API to access the reader. Any suggestions for how we can access the smart card reader from a running service? OS: Windows Server 2008. Smart card driver: Windows USB smart card reader. Smart card API: WINSCARD.

    Read the article

  • rsync to cifs mount but preserve permissions

    - by weberwithoneb
    I'm backing up a Linux server to a Windows share. I'm currently mounting the Windows share with CIFS and using rsync for incremental backups. File permissions and ownership are not being preserved, as should be expected after reading this Samba document:

        The core CIFS protocol does not provide unix ownership information or mode
        for files and directories. Because of this, files and directories will
        generally appear to be owned by whatever values the uid= or gid= options
        are set, and will have permissions set to the default file_mode and
        dir_mode for the mount.

    How can I achieve my goal of preserving unix file permissions while writing to a Windows share? Is there another network file system that would allow me to do this? Thanks.
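
    One workaround sketch: since the CIFS mount cannot represent unix ownership, keep the metadata inside the artifact you store there, e.g. a tar archive (rsync's --fake-super alternative stores ownership in xattrs, which a CIFS mount generally won't support). Paths below are illustrative:

        # -p preserves permissions and ownership inside the archive, even though
        # the share itself cannot represent them
        tar czpf /mnt/winshare/backup-$(date +%F).tar.gz /srv/data

    If incremental behavior matters, rsync to a local staging filesystem first and archive from there; alternatively, an NFS export (Windows Services for NFS) handles unix modes more naturally than CIFS.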

    Read the article

  • Windows Boot disc to save files?

    - by acidzombie24
    Somehow, after updating my legit Windows 7 OS (no pirated or modded software on my PC), I was greeted by a black screen. I popped in my Ghost disc and copied the files I need to an external HD. IIRC the Windows 7 disc can do that too. The problem with the way I did it with Ghost was that it expected me to select one file (an HD disc image), so I couldn't select multiple folders to move. Also, when I did the move, I had no idea if it had finished or how long it would take. My Linux live CD couldn't access the HD. Anyway, is there a disc I can use to easily copy files from my laptop to my external HD? I think Ghost, Windows 7 and Windows Server all allow me to, but is there one better suited to copying files?

    Read the article

  • W2003StdR2 server: DNS dysfunctional!

    - by Tor
    I hate to have to do this, but I feel up that creek with no... well, some of you might know. At the moment my one and only DNS server refuses to do forwarding. The story goes: this site had 2 servers, one W2003 SBS and a W2003 Std R2. The SBS degraded over a short period of time, and so as not to go down with it, I decided to move all data over to the other server. This was of course an AD-integrated site. The move went OK, the Std server was removed from the domain, and the SBS put to rest. For the time being we decided to run the Std box as a standalone server with no AD. We renamed the internal domain to xxx.local, and set the server up with DNS and DHCP, and installed WINS (not activated). DNS forwarding goes to our ISP through a Netgear firewall, with the same address setup used as before. So: the DNS server started and all went OK, clients were reconfigured and hooked up, and then, after a day's time, Internet name resolution stopped working on the server! Nothing had changed, been altered or modified — nothing! What I now get when doing NSLOOKUP is just a 2-second timeout response. And I have checked and looked, but to no avail. Has anybody seen this behaviour before? And yes, ALL service packs have been applied to the server. I would be much obliged if anyone in here could lend an ear... and give advice! Thanks... from Tor in Norway. Today is the 14th, and I still have no resolution to this nagging problem. Anybody else got any advice in the matter? Please?

    Read the article

  • Verifying Time Machine backups

    - by jtimberman
    I'm preparing my system for a Snow Leopard upgrade, and I prepare for the worst case scenario: full reinstall and restore. I would like to verify that my Time Machine backups are valid, and will restore correctly. My Time Machine backups go to a Linux server running Netatalk, and the backups complete successfully. How do I do a test restore to an alternative location, or otherwise verify my data without overwriting any existing files? Do I need to save anything in particular externally to make sure I can access the backups if I have to reinstall from scratch?
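
    A read-only spot check is possible without touching the backup, assuming the network backup lives in the usual sparsebundle layout on the Netatalk share (host, share and user names below are placeholders): mount the bundle and copy a sample out with rsync.

        hdiutil attach -readonly "/Volumes/backups/mymac.sparsebundle"
        rsync -av "/Volumes/Time Machine Backups/Backups.backupdb/mymac/Latest/Macintosh HD/Users/me/Documents/" /tmp/tm-restore-test/

    For the reinstall-from-scratch case, the main things to keep outside the machine are the share's address and credentials, since the installer's restore flow needs to connect to the share before it can see the backups.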

    Read the article

  • javascript doesn't seem to be able to post form data (nginx server w/ php-fpm)

    - by Jones
    So the situation is like so: I have an nginx server with php-fpm installed. All is well — the site scripts all work perfectly, and I am able to use HTML to POST form data and it works just fine. However, there seems to be some correlation between JavaScript, the POST protocol and nothing happening, and I can't seem to determine the issue. Example: I have a user login widget that uses JavaScript to submit the fields and POST the data to a backend auth script, which returns a server message that then populates the login box saying something like "Login Successful", followed by reloading the page to properly enable content. The problem is, nothing happens when you hit submit. I do know the setup works, because I had it working on Apache before migrating. Also, if it makes any difference, the server is an Amazon EC2 instance using the Amazon AMI. I really don't know where to start looking on this one, but below is my default.conf for the server:

    Read the article

  • Using rsync to synchronise folders without overwriting files of same name on Mac OS X

    - by Adam
    I would like to synchronise the contents of two directories:

    - without overwriting, but creating a copy if two files have the same name but different sizes;
    - without duplicating, if two files have the same name and size;
    - working recursively.

    So far I have found the following command, which might work:

        $ rsync -varE --progress ~/folder /volumes/server/folder

    But I'm not entirely sure what the -E flag does. It was suggested by a user on bananica.com, but I couldn't see a description for it in the manual. Would this do what I require successfully? Thanks
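
    For reference: on the rsync 2.6.9 that ships with Mac OS X, -E copies extended attributes and resource forks (newer stock rsync uses -E for --executability), so it has no bearing on the overwrite behavior asked about. A sketch closer to the stated rules, under the assumption that keeping both versions on a conflict via a renamed backup is acceptable (the suffix is arbitrary):

        # -a is recursive and skips files whose size and mtime already match;
        # --backup renames the existing destination file instead of overwriting it
        rsync -av --backup --suffix=.conflict ~/folder/ /Volumes/server/folder/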

    Read the article

  • Setting up Web server so it is easy to migrate

    - by Nyxynyx
    Hi, I am about to move my site from a VPS to another host's dedicated server. One of my concerns is about scaling the site in the future in a way that involves a change of server. Since I am starting the dedicated server from scratch with only the OS, I need to install the whole web server stack, including Apache and its mods, PHP, MySQL, PostgreSQL, Tomcat, Solr and a few other pieces of software like ImageMagick and git.

    Question: Is there a way for me to set up this new dedicated server such that I can easily migrate the entire site, both the technology stack and the code, to a newer server (an upgrade from this new dedicated server) without reinstalling and reconfiguring everything? The code for the website is handled by git and GitHub, so that's not a problem. I'm more concerned about the rest of the required software.

    Side question: The current VPS uses CentOS with cPanel, and it seems that many packages are outdated on yum and cPanel interferes with the installation of many packages. Which OS should I go with for the new server? Ubuntu?
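
    One approach worth considering is to treat the stack the same way as the code: keep a provisioning script (or a config-management tool like Puppet/Chef) in version control, so standing up the next server is one run rather than a day of manual installs. A sketch for Ubuntu; the package list and the config repo are assumptions to adapt:

        #!/bin/bash
        set -e
        # install the stack from packages
        apt-get update
        apt-get install -y apache2 php5 php5-mysql php5-pgsql mysql-server \
            postgresql tomcat6 imagemagick git
        # pull vhost and app configs from a (hypothetical) config repo
        git clone git@github.com:user/site-config.git /opt/site-config
        cp /opt/site-config/apache/*.conf /etc/apache2/sites-available/
        a2ensite mysite && service apache2 reload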

    Read the article
