Search Results

Search found 8998 results on 360 pages for 'upgrade failure'.


  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not easily go back a version.

    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ development team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS with over 80 proxy servers. It's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant they had to back out of the upgrade and wait for a couple of hotfixes:

      KB983504 – Hotfix
      KB983578 – Patch
      KB2401992 – Hotfix

    In the time since they got the hotfixes they have run six successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data it takes time to copy it around, time to do the upgrade and time to do a backup. Well, last week it was crunch time: with their developers off for Christmas they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available "just in case". They did not expect any problems, as they already had six successful trial upgrades behind them.

    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either. It would not even list the new collection as attachable!

    Figure: We know there is a database there, but it does not show up.

    This was a dire situation, as another 20+ hours to repeat the process would push the customer past their window, with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort:

      TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\InstanceName;"Collection Name"

      http://msdn.microsoft.com/en-us/library/ff407077.aspx

    WARNING: Never run this command! It does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated leaving the data behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the Configuration database.

    Figure: After attaching the TPC it enters a servicing mode

    After reattaching the team project collection we found the message "Re-Attaching". Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.

    Figure: Everything looks good, it is just offline.

    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?

    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded "tfsconfig recover", but was not fazed.

    Figure: This message looks suspiciously like the wrong servicing version

    As it turns out, the servicing version was slightly out of sync with the schema:

      KB          Schema    Successful
      ---------   ------    ----------
      KB983504    341       Yes
      KB983578    344       Sort of
      KB2401992   360       No

    Figure: KB/Schema table annotated with each fix's success

    The Schema version above represents the final end-of-run version for that hotfix or patch.

    The only way forward

    The problem was that the version was somewhere between 341 and 344. This is not a nice place to be, and the engineer gave us the only way forward: removing the servicing number from the database so that the re-attach process would apply the latest schema. If this sounds a little like the "tfsconfig recover" command, then you are exactly right.

    Figure: Sneakily changing that 3 to a 1 should do the trick
    Figure: Changing the status and dropping the version should do it

    Now that we have done that, we should be able to safely reattach and enable the Team Project Collection.

    Figure: The TPC is now all attached and running

    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice we came to the opinion that the database was, for want of a better term, "hosed". There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We contacted the customer and made them aware that in all likelihood the repaired database was more like a "cut and shut" than anything else, and at the first sign of trouble later down the line it was likely to split in two. So, with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the "cut and shut" to production and hope for the best?


  • How common are power supply failures in comparison to hard disk failures?

    - by Adrian Grigore
    Hi, My webhost offers two different types of high availability options for dedicated servers:

      1. Redundant hard disks (RAID1)
      2. Redundant hard disks (RAID1) plus redundant power supply

    How common is a power supply failure in comparison to hard disk failure? I know it's not possible to know the exact figures without knowing the exact hardware, but ballpark figures are good enough for me at the moment. Thanks, Adrian


  • Python error after installing libboost-all-dev on debian [migrated]

    - by Cameron Metzke
    A friend of mine wanted the Boost libraries installed on our shared computer, so I installed libboost-all-dev 1.49.0.1 (a Debian Wheezy machine). Now I get this error when using the "pydoc modules" command on the command line:

      root@debian:/usr/include/c++/4.7# pydoc modules
      Please wait a moment while I gather a list of all available modules...
      [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 357
      [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 230
      [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../orte/runtime/orte_init.c at line 132
      --------------------------------------------------------------------------
      It looks like orte_init failed for some reason; your parallel process is
      likely to abort. There are many reasons that a parallel process can
      fail during orte_init; some of which are due to configuration or
      environment problems. This failure appears to be an internal failure;
      here's some additional information (which may only be relevant to an
      Open MPI developer):
        orte_ess_set_name failed
        --> Returned value A system-required executable either could not be found or was not executable by this user (-127) instead of ORTE_SUCCESS
      --------------------------------------------------------------------------
      It looks like MPI_INIT failed for some reason; your parallel process is
      likely to abort. There are many reasons that a parallel process can
      fail during MPI_INIT; some of which are due to configuration or
      environment problems. This failure appears to be an internal failure;
      here's some additional information (which may only be relevant to an
      Open MPI developer):
        ompi_mpi_init: orte_init failed
        --> Returned "A system-required executable either could not be found or was not executable by this user" (-127) instead of "Success" (0)
      --------------------------------------------------------------------------
      *** The MPI_Init() function was called before MPI_INIT was invoked.
      *** This is disallowed by the MPI standard.
      *** Your MPI job will now abort.
      [debian:49065] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
      root@debian:/usr/include/c++/4.7#

    I tried looking into the problem and ended up uninstalling the following to get it to work again:

      openmpi-common all 1.4.5-1
      libibverbs-dev amd64 1.1.6-1
      libopenmpi-dev amd64 1.4.5-1
      mpi-default-dev amd64 1.0.1
      libboost-mpi-python1.49.0

    Although pydoc works again, I'm assuming the packages I removed are going to hurt something else down the track? As you guessed, I'm not a C/C++ programmer. So I guess my question is: will this hurt something later? Is there a way to install those packages without hurting Python?


  • SSH into Fedora 17 will not work with new users

    - by psion
    I just deployed a new Fedora 17 server on the Amazon EC2. I was able to log in as ec2-user with my generated keypair, but I cannot log in under normal circumstances as a user I created. This is just a normal ssh:

      ssh user@ip-address

    Any ideas on what is going on here?

    EDIT: This is a snippet from my sshd_config file:

      # To disable tunneled clear text passwords, change to no here!
      PasswordAuthentication no
      #PermitEmptyPasswords no
      PasswordAuthentication no

    EDIT AGAIN: This is the output of ssh -v:

      OpenSSH_5.8p2, OpenSSL 1.0.0i-fips 19 Apr 2012
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug1: Connecting to 107.23.2.165 [107.23.2.165] port 22.
      debug1: Connection established.
      debug1: identity file /home/psion/.ssh/id_rsa type 1
      debug1: identity file /home/psion/.ssh/id_rsa-cert type -1
      debug1: identity file /home/psion/.ssh/id_dsa type 2
      debug1: identity file /home/psion/.ssh/id_dsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9
      debug1: match: OpenSSH_5.9 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.8
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Server host key: RSA 19:cb:84:21:a9:0e:83:96:2f:6a:fa:7d:ce:39:0f:31
      debug1: Host '107.23.2.165' is known and matches the RSA host key.
      debug1: Found key in /home/psion/.ssh/known_hosts:5
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
      debug1: Next authentication method: gssapi-keyex
      debug1: No valid Key exchange context
      debug1: Next authentication method: gssapi-with-mic
      debug1: Unspecified GSS failure.  Minor code may provide more information
      Credentials cache file '/tmp/krb5cc_1000' not found
      debug1: Unspecified GSS failure.  Minor code may provide more information
      Credentials cache file '/tmp/krb5cc_1000' not found
      debug1: Unspecified GSS failure.  Minor code may provide more information
      debug1: Unspecified GSS failure.  Minor code may provide more information
      debug1: Next authentication method: publickey
      debug1: Offering DSA public key: /home/psion/.ssh/id_dsa
      debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
      debug1: Offering RSA public key: /home/psion/.ssh/id_rsa
      debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
      debug1: No more authentication methods to try.
      Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
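
    Since the ec2-user key works but the new user's does not, and the verbose log shows both keys being offered and rejected, the usual suspects are ownership and permissions on the new user's ~/.ssh (sshd silently ignores an authorized_keys file it deems unsafe) or, on Fedora, stale SELinux labels. A sketch of the standard checks, run as root on the server and assuming the new account is literally named "user":

      ls -ld /home/user/.ssh /home/user/.ssh/authorized_keys   # inspect first
      chown -R user:user /home/user/.ssh     # the user must own their own .ssh
      chmod 700 /home/user/.ssh              # sshd refuses group/world-accessible dirs
      chmod 600 /home/user/.ssh/authorized_keys
      restorecon -R -v /home/user/.ssh       # Fedora: restore SELinux contexts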


  • Sending email after backup (Windows Server 2008)

    - by woodsbw
    I have a client who is using Windows Server 2008 (Small Business Server), and using Windows Backup. What I need to do is configure the backup task so that, upon completion, it sends an email notifying the client of backup success or failure. I have been able to find that task in task scheduler, and even see where I can send an email...but I cannot find a way to make the content of the email different based on success or failure of the backup. How might I do this?
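
    One way to get different email content per outcome (a sketch, not verified against SBS 2008 specifically) is to have the scheduled task run a small PowerShell script that inspects the most recent entry in the Windows Server Backup event log and composes the message accordingly. The log name and the "4 = success" event ID below are assumptions to verify in Event Viewer on the target machine, and smtp.example.com is a placeholder relay:

      # notify-backup.ps1 -- mail a success/failure notice after the backup runs
      # Assumes PowerShell 2.0 for Get-WinEvent and Send-MailMessage.
      $last = Get-WinEvent -LogName "Microsoft-Windows-Backup" -MaxEvents 1
      if ($last.Id -eq 4) {                      # assumed: 4 = backup succeeded
          $subject = "Backup SUCCEEDED"
      } else {
          $subject = "Backup FAILED (event id $($last.Id))"
      }
      Send-MailMessage -SmtpServer "smtp.example.com" `
          -From "backup@example.com" -To "client@example.com" `
          -Subject $subject -Body $last.Message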


  • Does BitLocker reduce write reliability?

    - by Unsigned
    For the purposes of this question, BitLocker refers to the BitLocker-to-go variety on a disk with write-caching disabled. NTFS supports metadata journaling, which, although not completely failsafe, does mitigate certain types of potential filesystem errors. Assuming an NTFS volume is protected with BitLocker, does this reduce the failure tolerance? Would a power failure during a write to an NTFS volume, that's protected with BitLocker, be more prone to corruption than on an unencrypted NTFS volume?


  • isa 2004 - banned site rule cause slow internet

    - by Holian
    Hi Gods, We have Windows Server 2003 with ISA 2004. Our clients use the internet through the proxy. We have two ISA rules:

      Order  Name     Action  Protocols      From/Listener                  To             Condition
      1      trafic   ALLOW   all outbound   all networks                   all networks   all users
      2      FTP      ALLOW   FTP Server     EXTERNAL/INTERNAL/Local host   10.1.1.1

    We have to ban a few webpages (like facebook, youtube, etc.), so we made a new rule:

      0      banned   DENY    HTTP           internal                       denied pages   all users

    The denied pages set includes the *.facebook.com domain. After we enabled this rule, the entire internet slowed down. The banning rule works well and redirects to an internal site, but the other sites... If I open a page it normally takes 3-10 seconds to load, but after this rule it takes 2-4 minutes. In the monitoring/logging menu we get a few FAILED CONNECTION ATTEMPT entries like:

      Log type: Web Proxy (Forward)
      Status: 304 Not Modified
      Rule: All local traffic
      Source: Internal ( 10.1.1.1:0 )
      Destination: External ( 172.24.28.22:3128 )
      Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg
      Filter information: Req ID: 17270b72
      Protocol: http
      User: anonymous
      Additional information
      Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072...
      Object source: Verified Cache
      Processing time: 9047
      Cache info: 0x18801002
      MIME type: -

    In the event log we get a few entries like:

      The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d

    and the same message again for 127.0.0.1 port 80, with the same error code. If I type netstat -o -n -a | findstr 0.0:80 I get:

      TCP    0.0.0.0:80     0.0.0.0:0    LISTENING    4
      UDP    0.0.0.0:8031   *:*                       2780
      UDP    0.0.0.0:8082   *:*                       2780

    Some months ago we installed XAMPP, but now we only use MySQL; the Apache service is stopped. Yet in the XAMPP port check menu I see:

      Service         Port   Status
      Apache (http)   80     Process: System

    Maybe this is the problem? I don't know what I should do now... Thank you folks.


  • Postfix + SASLAUTHD + MySQL authentication problems

    - by Or W
    I've been trying to sort this out for the past 6 hours or so. This is the error message I'm facing (running CentOS x64):

    /var/log/maillog:

      Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: SASL authentication failure: Password verification failed
      Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL PLAIN authentication failed: authentication failure
      Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL LOGIN authentication failed: authentication failure

    /var/log/messages:

      Jun 22 20:15:38 ptroa saslauthd[9401]: do_auth : auth failure: [user=myuser] [service=smtp] [realm=domain.com] [mech=pam] [reason=PAM auth error]

    I have Dovecot installed as well, and I'm able to receive emails via the MySQL authentication. The problem is when I'm trying to use SMTP to send out emails. Some config files:

    /etc/postfix/main.cf:

      # See /usr/share/postfix/main.cf.dist for a commented, more complete version
      # Debian specific: Specifying a file name will cause the first
      # line of that file to be used as the name. The Debian default
      # is /etc/mailname.
      myorigin = /etc/mailname
      smtpd_banner = Server Message
      biff = no
      # appending .domain is the MUA's job.
      append_dot_mydomain = no
      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h
      readme_directory = /usr/share/doc/postfix
      # TLS parameters
      smtpd_tls_cert_file = /etc/postfix/smtpd.cert
      smtpd_tls_key_file = /etc/postfix/smtpd.key
      smtpd_use_tls = yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
      # information on enabling SSL in the smtp client.
      myhostname = domain.com
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination =
      relayhost =
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all
      html_directory = /usr/share/doc/postfix/html
      message_size_limit = 30720000
      virtual_alias_domains =
      virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf
      virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
      virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
      virtual_mailbox_base = /home/vmail
      virtual_uid_maps = static:5000
      virtual_gid_maps = static:5000
      smtpd_sasl_auth_enable = yes
      broken_sasl_auth_clients = yes
      smtpd_sasl_authenticated_header = yes
      smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
      virtual_create_maildirsize = yes
      virtual_maildir_extended = yes
      proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_cano$
      virtual_transport = dovecot
      dovecot_destination_recipient_limit = 1

    /etc/default/saslauthd:

      START=yes
      DESC="SASL Authentication Daemon"
      NAME="saslauthd"
      MECHANISMS="pam"
      MECH_OPTIONS=""
      THREADS=5
      OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    /etc/pam.d/smtp:

      #%PAM-1.0
      #auth include password-auth
      #account include password-auth
      auth required pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1
      account sufficient pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1
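
    Given that saslauthd reports "[mech=pam] ... PAM auth error", a useful first step is to exercise saslauthd directly and take Postfix out of the picture. A sketch, assuming the credentials really exist in the MySQL users table; note the socket path must match the chrooted location set by OPTIONS in /etc/default/saslauthd:

      # test the saslauthd -> PAM -> pam_mysql chain by itself
      testsaslauthd -u myuser@domain.com -p 'secret' -s smtp
      # if the test client cannot find the daemon, point it at the chrooted socket:
      testsaslauthd -u myuser@domain.com -p 'secret' -s smtp \
          -f /var/spool/postfix/var/run/saslauthd/mux
      # watch /var/log/messages (pam_mysql has verbose=1) while testing

    One detail worth checking along the way: usercolumn=email in /etc/pam.d/smtp implies the SMTP login name must be the full email address, not the bare username shown in the maillog.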


  • How to re-run one line of a PowerShell cmdlet if it fails

    - by pansal
    I wrote a PowerShell script which is supposed to send emails automatically, but sometimes the email won't go out due to a network issue. Here is how I'm sending the email:

      $smtp_notification.Send($mail_notification)

    Here are the error logs:

      Exception calling "Send" with "1" argument(s): "Failure sending mail."
      At line:1 char:24
      + $smtp_notification.Send <<<< ($mail_notification)
          + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
          + FullyQualifiedErrorId : DotNetMethodException

    Is there any way to re-run the sending line when I hit this failure? Can anyone give me some suggestions please?
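
    One straightforward pattern is a bounded retry loop around the Send call. A sketch, reusing the $smtp_notification and $mail_notification objects from the script; the attempt count and delay are arbitrary choices:

      $maxAttempts = 3
      $sent = $false
      for ($i = 1; $i -le $maxAttempts -and -not $sent; $i++) {
          try {
              $smtp_notification.Send($mail_notification)
              $sent = $true
          }
          catch {
              Write-Warning "Send attempt $i failed: $($_.Exception.Message)"
              if ($i -lt $maxAttempts) { Start-Sleep -Seconds 30 }  # wait out transient network issues
          }
      }
      if (-not $sent) { Write-Error "Mail not sent after $maxAttempts attempts." }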


  • Looking for Hard Drive Health Monitoring software

    - by RandyMorris
    I am aware that the standard approach of drive redundancy and backups would be a better solution, but I would be interested to know if there is software that can be installed on workstation computers to monitor hard drive health and give a warning if a drive failure looks imminent. I have tried HDD Health, but it does not give me very useful information. I am not interested in drive space; I just want a heads-up before a drive fails. Does anyone know of anything like this?


  • SharePoint Web Services. Using UserProfileService.GetUserProfileByName. After SP upgrade... failing.

    - by Steve
    The web services code below has worked properly for me for over a year. We have updated our SharePoint servers, and now the code throws an exception (at the bottom line): "Object reference not set to an instance of an object".

      UserProfileWS.UserProfileService userProfileService = new UserProfileWS.UserProfileService();
      userProfileService.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
      string serviceloc = "/_vti_bin/UserProfileService.asmx";
      userProfileService.Url = _webUrl + serviceloc;
      UserProfileWS.PropertyData[] info = userProfileService.GetUserProfileByName(null);

    EDIT: The service is still there. I browse http:///_vti_bin/UserProfileService.asmx, and the information for the service is still there, including the full description of the GetUserProfileByName call.

    EDIT 2: This does appear to be due to a change in SharePoint. I loaded a previous version of my software (known to be working), and it exhibits the same erroneous behavior.


  • How to sysprep SQL Server Express?

    - by Jim
    We plan to deploy a Hyper-V VHD with Windows Server 2008 R2 and SQL Server 2012 Express installed to multiple hosts. From my understanding, the correct way to do this is to install SQL Server in preparation mode, sysprep Windows, then complete the SQL Server installation when the VHD is deployed. I mostly followed the process in this blog post: http://sethusrinivasan.com/category/sysprep/ However, after the VHD is deployed, I'm unable to complete the SQL Server installation. It keeps saying "Upgrade matrix is incorrect". It seems that it's trying to upgrade itself to Enterprise edition (I was asked for a product key during install, but I skipped it). Could anyone share their experience in deploying VHDs with SQL Server (we're fine with either SQL Server 2008 R2 or 2012)? I think the source of my issue is that I can't select "Express Edition" when entering the product key at the completion stage, so the installation is trying to do an upgrade to Enterprise Edition. I have no idea why the drop-down list is empty.
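
    For reference, the prepare/complete split is driven by SQL Server setup's PrepareImage and CompleteImage actions, and the completion step can also be run unattended from the command line, which sidesteps the empty edition drop-down in the GUI. A sketch only; the instance name and ID are placeholders, and the exact parameter set should be verified against the Setup documentation for the build in question:

      :: inside the deployed VHD, from the SQL Server setup media, elevated
      setup.exe /QS /ACTION=CompleteImage ^
          /INSTANCENAME=SQLEXPRESS /INSTANCEID=SQLEXPRESS ^
          /SQLSYSADMINACCOUNTS="BUILTIN\Administrators" ^
          /IACCEPTSQLSERVERLICENSETERMS

    Leaving out /PID should, in principle, keep a prepared Express instance on the Express edition; if the completion step is still being offered an Enterprise upgrade, a stray product key left over from the prepare phase is one plausible culprit.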


  • Unable to install Visual Studio 2005 on Windows 7

    - by lopez
    After downloading Visual Studio 2005 SP1 and the Visual Studio 2005 Upgrade for Vista in order to install Visual Studio 2005 on Windows 7, I got this error message when I tried to install Visual Studio 2005 SP1:

      The upgrade patch cannot be installed by the Windows Installer service because the program to be upgraded may be missing, or the upgrade patch may update a different version of the program. Verify that the program to be upgraded exists on your computer and that you have the correct upgrade patch.

    I have saved VS80sp1-KB926601-X86-ENU (1) and VS80sp1-KB932232-X86-ENU on the D: drive. What should I do to install Visual Studio 2005 on Windows 7?


  • System backup with Norton Ghost

    - by Mehper C. Palavuzlar
    I will upgrade my system from Windows Vista Home Premium (x64) to Windows 7 (x64). Before starting the upgrade process, I want to back up my current system with Norton Ghost. I have never used it before, so I need assistance. At the moment, Vista occupies 139 GB of used space, and I have a 1 TB external HD connected via USB. If you can give me step-by-step instructions on how to back up, and how to restore if the upgrade somehow fails, I'd appreciate it. Thanks.


  • HAProxy -- pause/queue all traffic without losing requests

    - by Marc
    I basically have the same problem as mentioned in this thread -- I would like to temporarily suspend all requests to all servers of a certain backend, so that I can upgrade the backend and the database it uses. Since this is a live system, I would like to queue up requests, and send them to the backend servers once they've been upgraded. Since I'm doing a database upgrade with the code change, I have to upgrade all backend servers simultaneously, so I can't just bring one down at a time. I tried using the tcp-request options combined with removing the static healthcheck file as mentioned in that thread, but had no luck. Setting the default "maxconn" value to 0 seems to pause and queue connections as desired, but then there seems to be no way to increase the value back to a positive number without restarting HAProxy, which kills all requests that had been queued up until that point. (The "hot-reconfiguration" options using -sf and -st start a new process, which doesn't seem to do what I want). Is what I'm trying to do possible?
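
    For what it's worth, later HAProxy releases grew a runtime API on the admin stats socket that can change a frontend's maxconn without a reload, which is exactly the missing piece described above. A sketch, assuming a socket-enabled build; "myfrontend" is a placeholder, and whether a value of 0 pauses (rather than rejects) new connections on a given version is worth testing before relying on it:

      # haproxy.cfg (global section): expose an admin socket
      #   stats socket /var/run/haproxy.sock mode 600 level admin

      # pause: stop dequeueing new connections on the frontend
      echo "set maxconn frontend myfrontend 0" | socat stdio /var/run/haproxy.sock
      # ... upgrade backends and database ...
      # resume with the normal limit
      echo "set maxconn frontend myfrontend 2000" | socat stdio /var/run/haproxy.sock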


  • ASP.NET login control - can I add the FailureText as an item in a ValidationSummary?

    - by tkahn
    I'm currently working with the ASP.NET Login control. I can set a custom failure text, and I can add a literal on the page where the failure text is displayed if the login fails. I also have a ValidationSummary on the page in which I collect all errors that can occur (for the moment it just validates that the user has entered a login name and a password). It would be really nice if I could add the failure text of the Login control as an item in the ValidationSummary, but I'm not sure if this is even possible? Hoping that the massive brainpower of Stack Overflow can give me some pointers? Thanks! /Thomas Kahn PS. I'm coding in C#.
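
    One approach worth trying (a sketch with invented control IDs, so treat it as a starting point rather than the canonical answer): add a CustomValidator with Display="None" to the page, and in the Login control's LoginError event mark it invalid with the failure text as its message, so the ValidationSummary picks it up like any other validation error:

      // markup, alongside the existing validators and summary (sketch):
      //   <asp:CustomValidator ID="cvLoginFailure" runat="server" Display="None" />
      //   <asp:ValidationSummary ID="vsLogin" runat="server" />

      protected void Login1_LoginError(object sender, EventArgs e)
      {
          // Surface the Login control's failure text through the summary.
          cvLoginFailure.IsValid = false;
          cvLoginFailure.ErrorMessage = Login1.FailureText;
      }

    Depending on how the Login control groups its internal validators, the ValidationGroup properties may need aligning so everything lands in one summary list.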


  • Upgrading Ubuntu hardy to Ruby 1.8.7

    - by Simone Carletti
    My server is running Ubuntu Hardy with Ruby 1.8.6 installed using aptitude. I'd like to upgrade to Ruby 1.8.7 but, unfortunately, the Ruby package only includes 1.8.7 starting from Ubuntu Intrepid. I read a couple of tutorials about how to upgrade to Ruby 1.8.7, and I found at least three different ways to accomplish this task:

      1. backports
      2. installation from source
      3. installation from source, with multiple versions

    I'm a bit confused. How do you recommend upgrading to Ruby 1.8.7, taking into consideration that I don't need multiple Ruby versions on the same server? I'd like to cleanly replace the existing Ruby 1.8.6 with Ruby 1.8.7.
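
    Of the three, backports is the cleanest fit for "replace 1.8.6 in place", since everything stays inside the package manager. A sketch, assuming a 1.8.7 build of ruby1.8 was actually published to hardy-backports (verify with apt-cache before installing):

      # enable hardy-backports
      echo "deb http://archive.ubuntu.com/ubuntu hardy-backports main universe" \
          >> /etc/apt/sources.list
      apt-get update
      apt-cache policy ruby1.8                  # confirm a 1.8.7 candidate exists
      apt-get install -t hardy-backports ruby1.8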


  • Configuring IIS 7.5 to be FIPS 140.2 compliant

    - by tomfanning
    I need to configure IIS 7.5 (Server 2008 R2) to be FIPS 140.2 compliant. Specifically, this involves disabling all SSL protocols other than TLS 1.0. I have set the following registry keys to Enabled (DWORD) = 0, as per this KB:

      HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server
      HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server
      HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server

    ...but SSL Labs' checker says "SSL 2.0+ Upgrade Support" is enabled. (Everything other than that and TLS 1.0 is not available, so we're getting somewhere.) It also says "FIPS ready - no", presumably because SSL 2.0+ Upgrade Support is still enabled. serversniff.net says SSL 2.0 is turned off, and doesn't say anything about SSL 2.0+ Upgrade Support. Could this be an anomaly with SSL Labs' checker?
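
    For completeness, the same keys can be scripted from an elevated prompt; a sketch is below. Newer Schannel guidance also sets DisabledByDefault = 1 alongside Enabled = 0, which may be what the upgrade-support probe is picking up, but treat that pairing as an assumption to validate rather than a confirmed fix:

      set BASE=HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols
      reg add "%BASE%\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f
      reg add "%BASE%\SSL 2.0\Server" /v DisabledByDefault /t REG_DWORD /d 1 /f
      reg add "%BASE%\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 0 /f
      reg add "%BASE%\PCT 1.0\Server" /v Enabled /t REG_DWORD /d 0 /f
      :: Schannel changes generally need a reboot to take effect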


  • Apache configuration: choose a site to display according to visitor's IP address

    - by user64294
    Hi all. I would like to set up a special configuration on our Apache web server: display different sites to users according to their IP addresses. We plan to upgrade our web sites, and during the upgrade we'll put up a maintenance site, so every user who connects to our web sites will get that site. But in order to test the upgrade, I need to configure Apache to let only my IP address reach the requested site. If my IP address is a.b.c.d and I ask for test_dot_com, I want to see it; all other users, having different IP addresses, should get the maintenance site even if they ask for test_dot_com. Is there a way to do this? PS: sorry, as I'm a new user I can't use more than one link, so test_dot_com = test.com. Thank you.
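
    A common way to do this with mod_rewrite (a sketch; a.b.c.d stays a placeholder for the real address, and /maintenance.html is an assumed location for the maintenance page):

      # inside the test.com virtual host
      RewriteEngine On
      # anyone who is not a.b.c.d ...
      RewriteCond %{REMOTE_ADDR} !^a\.b\.c\.d$
      # ... and is not already requesting the maintenance page ...
      RewriteCond %{REQUEST_URI} !^/maintenance\.html$
      # ... gets sent there
      RewriteRule ^.*$ /maintenance.html [R=302,L]

    Using a 302 rather than 301 keeps browsers and search engines from caching the maintenance page as the permanent answer.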


  • Visual Studio 2005 SP1 refuses to install in Windows 7

    - by JesperE
    I'm trying to install Visual Studio 2005 in Windows 7. When attempting to start, Windows 7 complains that VS2005 is incompatible with Windows 7 and offers to search online for a solution. It comes up with a link to VS2005 SP1 for Vista and Windows 7, but when I download it (VS80sp1-KB932232-X86-ENU.exe) and try to run it, it refuses to install, saying:

      The upgrade patch cannot be installed by the Windows Installer service because the program to be upgraded may be missing, or the upgrade patch may update a different version of the program. Verify that the program to be upgraded exists on your computer and that you have the correct upgrade patch.


  • Dell Inspirion 1525 64bit driver compatibility

    - by Tony
    Hello, Can anyone tell me what the compatibility of drivers on 64-bit Vista is like on the Dell Inspiron 1525? Mine is just a year old now and I want to upgrade to 64-bit Vista. I wondered if there are known issues with drivers that I should be aware of before upgrading. Tony PS: Is the upgrade worth it? My machine has 3 GB of RAM and runs a T5800 2.0 GHz dual-core processor. I use Visual Studio often and have loads of things open at the same time a lot. That's why I'm considering an upgrade.


  • Korn Shell - Test with variable that may be not set

    - by C. Ross
    I have the following code in KornShell:

      FAILURE=1
      SUCCESS=0

      isNumeric(){
          if [ -n "$1" ]; then
              case $1 in
                  *[!0-9]* | "") return $FAILURE;;
                  * )            return $SUCCESS;;
              esac
          else
              return $FAILURE
          fi
      }

      #...
      FILE_EXT=${FILE#*.}

      if [ isNumeric ${FILE_EXT} ]; then
          echo "Numbered file."
      fi
      #...

    In some cases the file name does not have an extension, which causes the FILE_EXT variable to be empty, and that produces the following error:

      ./script[37]: test: 0403-004 Specify a parameter with this command.

    How should I be calling this function so that I do not get this error?
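
    For what it's worth, the error points at the call site rather than the function: [ ... ] is the test builtin, so "isNumeric" is handed to test as an operand instead of being executed, and test complains depending on what ${FILE_EXT} expands to. A sketch of the corrected invocation; quoting keeps an empty extension arriving as a single empty argument:

      if isNumeric "${FILE_EXT}"; then
          echo "Numbered file."
      fi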


  • Intel RAID0 on Windows 8 not Displaying Correct Media Type

    - by kobaltz
    I have my primary C drive, which consists of two Intel 120 GB SSDs in RAID0. I have a clean install of Windows 8 Pro, the latest MEI software, the latest RST software, and the latest Intel Toolbox. Prior to this I had installed Windows 8 Pro as an upgrade. When I went into Optimize Drives while on the upgrade installation, it showed the Media Type as Solid State Drive. However, now that I am on a brand-new install, it shows the Media Type as Hard Disk Drive. I am worried about this because of TRIM not working properly. Before, while on the upgrade, the media type showed as SSD and the Optimize option would perform a manual TRIM. Unfortunately, my search terms on Google are so common to many other things (i.e. RAID0, SSD, Windows 8, Media Type) that all I am finding are useless topics. Before, it showed the Media Type as Solid State Drive (as in a screenshot found on a random site, not reproduced here).


  • Install newer version of GCC in Knoppix

    - by Z boson
    I have Knoppix 7.30 installed on a USB stick with a persistent file. It comes with GCC 4.7.2. I would like to install GCC 4.9.x or 4.8.x. Knoppix being based on Debian, I would normally do something like:

      apt-get upgrade

    But as far as I understand, that is not recommended with Knoppix:

      Warning: apt-get upgrade is a BAD IDEA. It will, quite probably, render your KNOPPIX remaster unbootable, or broken in some way. A far safer method is to only upgrade packages as necessary.

    I can say from experience that that is the case. So how should I go about installing GCC 4.9? Another option would be to "remaster" Knoppix to use GCC 4.9 by default rather than installing it with a persistent file. I would be happy with either solution.
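
    In that per-package spirit, one hedged route is to add a newer Debian release to the sources and pull in just the compiler packages, leaving everything else on Wheezy. A sketch, assuming gcc-4.9 is available from jessie; check what apt proposes to upgrade before confirming, since dragging in a new libc would defeat the purpose:

      # make jessie packages available alongside wheezy
      echo "deb http://ftp.debian.org/debian jessie main" >> /etc/apt/sources.list
      apt-get update
      # install only the newer compiler, not a blanket upgrade
      apt-get install -t jessie gcc-4.9 g++-4.9
      # point the default gcc at 4.9 via the alternatives system
      update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 50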


  • Upgrading phpmyadmin (and other packages) on Debian Squeeze

    - by westexasman
    I just set up a new VM with Debian Squeeze (the latest stable release, 6.0.4). It's going to be a web server, so I installed the usual: Apache, PHP 5, MySQL, phpMyAdmin, etc. Everything went well and everything is working. My question is about upgrading packages. I noticed the phpMyAdmin version is 3.3.7, while the latest is 3.4.10.1. Doing apt-get update/upgrade does not upgrade the package. How does one go about upgrading packages on a Debian Squeeze server if apt-get update/upgrade does not work? Thanks!
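
    Squeeze proper only receives security updates, so apt-get upgrade will never move phpmyadmin past the 3.3.x it shipped with; the usual way to get a newer package on stable is squeeze-backports. A sketch, assuming a 3.4.x phpmyadmin build exists there (confirm with apt-cache policy first):

      # enable squeeze-backports
      echo "deb http://backports.debian.org/debian-backports squeeze-backports main" \
          > /etc/apt/sources.list.d/backports.list
      apt-get update
      apt-cache policy phpmyadmin           # check the available candidate versions
      apt-get -t squeeze-backports install phpmyadmin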

