Search Results

Search found 21141 results on 846 pages for 'old mac'.


  • I want to replace 120GB SSD with 240GB SSD. Will I need to reinstall Windows?

    - by Borek
    Some SSD vendors offer "upgrade kits" that are supposed to move the operating system from the old disk to the new one without the need to reinstall it. However, that never quite worked for me in the past and I always ended up installing the system from scratch. I'd really like to avoid that now, so I'd like to ask: generally speaking, when upgrading from a smaller to a larger SSD, will something like Windows Complete PC Backup and Restore work? Has anyone experienced a seamless upgrade? Is there a proven tool to do that? (One that worked for you, not one that should work theoretically.) My problem usually is that the restored system sees a different disk, thinks it is running on different hardware and refuses to restore.
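
    For what it's worth, the built-in route the question mentions can be driven from an elevated prompt (a sketch; drive letters are placeholders, and restoring onto the new SSD is done by booting the install media and picking System Image Recovery):

      rem create a system image of C: on an external drive E:
      wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet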

    Read the article

  • I think my computer is eating my USB devices

    - by user1068446
    Hi, so I think the USB ports on the front of my desktop PC are eating all the devices that are plugged in. (I've found that a couple of USB disks I put in at the start of the year were then unusable on any device afterwards.) If this is the case, what's likely to be the problem, and how do I fix it? What would be a good, simple way to diagnose the problem? I'm a bit scared to put any of my USB devices in, because that sounds expensive. (I'm thinking I might look out for some cheap old disks.)

    Read the article

  • Can't access unprotected .pst file in Outlook 2010

    - by KGraves
    My father has a Windows 7 x64 system with 32-bit Outlook 2010. He asked me to access his archived mail to get to some old messages. When I go to open the file I'm asked for a password (he never applied one) and I get a message saying "password is incorrect. Re-Type password". Since he told me that he never set a password, I tried opening the .pst with Nucleus, a tool that allows me to view .pst files. Does anyone know what the deal is? With the free version of Nucleus we can't view the attachments.

    Read the article

  • How to compare differences between directories (linux)

    - by Phil
    I have two directories, one from an earlier backup and a second from the newest backup. How do I compare what changes were made to the files in the directory from the newest backup on Linux? Also, how do I display the changes in, for example, text and PHP files? I'm thinking about something like the revision history on Wikipedia, where you see the old version on one side of the screen and the newest version on the other, with changes highlighted. How do I achieve something like that? Edit: how do I also compare a remote directory with a local one?
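
    A minimal sketch of the standard tools for this (directory and file names are placeholders):

      diff -rq backup-old/ backup-new/           # recursive: list files that differ or exist only on one side
      diff -ru backup-old/ backup-new/ | less    # unified diff of the actual file contents
      vimdiff backup-old/index.php backup-new/index.php   # side-by-side view with changes highlighted
      rsync -rvnc backup-new/ user@remote:/path/backup/   # dry-run checksum compare against a remote copy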

    Read the article

  • Why is the word PERSONAL still relevant in the term PC? [closed]

    - by Bill
    I have spent half an hour trying to change an icon on my Win-7-64 machine (Why Can't I Change the Icon). One reasonable suggestion (reasonable in terms of having a solution, not reasonable in terms of having to jump through these hoops for such a basic requirement) was to delete the old icon from the %userprofile%\Local Settings... folder, however when I click on this folder in Windows Explorer I am told the folder is not accessible - Access Denied. Well! It's my PERSONAL computer, isn't it? Isn't that what PC stands for? It's MY computer - why can't I get access to that folder? It's about time we started calling these machines MCs (Microsoft Computer), or WCs (Windows Computer) - because they sure as hell ain't PERSONAL damn computers!!!!

    Read the article

  • After software update, why is webmin showing wrong mySQL version?

    - by teleute00
    I did a full OS/package update on a server running webmin. Now when I go into webmin, it shows the correct new version of Ubuntu, Apache, etc., but it still shows the old version of MySQL. At the command line, if I enter mysql -v, it shows the correct new one; it's just not getting recognized in webmin. I found a file at /etc/webmin/mysql called "version", and it's just a text file with the version number in it. So theoretically I could just change this and it would be fine. However, this obviously doesn't seem like how this should go. How should this file normally get updated? ETA: services have all been restarted (in fact, there's been a full reboot). Sorry for not specifying this... it just seemed like the obvious first thing to do and not worth mentioning. :-)
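
    If it comes to it, a minimal sketch of fixing the cached value by hand, as the question suggests (the version string below is only an example):

      mysql --version                                # what the client actually reports
      echo "5.1.41" | sudo tee /etc/webmin/mysql/version
      sudo /etc/init.d/webmin restart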

    Read the article

  • No option to import documents into Google Docs [migrated]

    - by Code Droid
    How do I import a document into Google Docs? I don't see an import option. Do I open the drive and drop it in? I am trying to follow these ACRA (Application Crash Report for Android) steps:

    1. Log in to your Google Docs account
    2. Import the CrashReports-template.csv contained in the archive (acra-4.2.3/CrashReport/doc), with conversion enabled
    3. Open the imported document
    4. Rename it as you like
    5. In the Google Docs menu, click on Tools / Form / Create a form

    Where is the import option, and where is "conversion enabled"? I have Google Docs and I'm on a Mac.

    Read the article

  • My monitor constantly changes from DVI-D to HD15

    - by guest
    I noticed this lately, and it's becoming more frequent. Basically, when I start my PC, my monitor, which is connected with a DVI-D cable (I don't know what it's called), switches its input to HD15, where nothing is connected at all. Since the input is changed from DVI-D to HD15, it just blacks out, and then goes back... it's not constant, it happens at random intervals... I can't do anything on my computer because of this. I even tried connecting it with both the DVI-D cable and the HD15 cable to the computer, but it still sucks. Is there any way I could force my monitor to stay on only one input? I'm on XP SP3, and this monitor is about 10 years old or less, a Sony model.

    Read the article

  • 1TB HDD making strange noise (not a common one)

    - by Darkkurama
    I built a new PC some days ago, and everything seems perfect, except that the 1 TB HDD I cloned from my old 500 GB HDD is making a deep, weird sound. First of all, every time I access the disk, I hear a deep sound, and when the PC is turning on, I hear some clicking (the rapid clicking is my mouse; I'm opening and closing folders to trigger the vibrating deep weird sound I'm describing). I'm using this 1 TB disk mainly for data (I use an SSD for the OS). As background information, the disk is a Seagate Barracuda 7200 rpm which was RMA'd and replaced with a refurbished one. Maybe refurbished disks make these noises? Should I worry about my data (although the disk is working normally and passed a SeaTools short generic test)? Thanks! PS: I recorded the sounds, just click on the links.
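
    Before trusting the drive with data, a SMART health check is cheap (a sketch assuming smartmontools is installed and the drive shows up as /dev/sdb):

      sudo smartctl -H /dev/sdb                                          # overall health verdict
      sudo smartctl -A /dev/sdb | egrep 'Reallocated|Pending|Uncorrect'  # the attributes that matter for failing media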

    Read the article

  • NET USE LPT2: printer_port

    - by tpierzina
    This I understand:

      net use P: \\SOME_COMPUTER\SOME_SHARE
      net use P: \\1.2.3.4\SOME_SHARE

    (the second argument is a logical share on the given computer)

    This, I do NOT understand:

      net use LPT2: IP_1.2.3.4

    (where IP_1.2.3.4 is the name of a "port" assigned to a printer; the IP is a valid and responding device, but the full string "IP_1.2.3.4" is not) Can anyone tell me: is there ever a syntax of NET USE that could operate on a printer port like that? I can't get it to work, can't find anything via Google, and am practically in tears. [Sorry if this is cheating; this is basically a re-post of http://stackoverflow.com/questions/1884235/old-school-windows-2000-printing-or-when-is-a-port-name-a-computer but with the scope narrowed just to the main issue at hand.]
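
    For comparison, the share-based form does work against printers, as long as the target is a shared queue rather than a raw port name (the server and share below are hypothetical):

      net use LPT2: \\PRINTSERVER\LASERJET /persistent:yes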

    Read the article

  • Special filename in OS X; semantics of asterisk?

    - by kbp
    So I'm using Mac OS X 10.5, and I have a file called _Mail.grxml that is being handled funny. ls -l will show the file, but ls -l * will not. It's just this one file; note that ls -l | wc -l gives 43 (the number of files in the directory), but ls -l * | wc -l gives 42. So the question is: are there filenames that OS X just doesn't play nicely with? Or are the semantics of the * on the command line different from what I expect? (Note this is NOT the only file whose name begins with an underscore; the other files are picked up just fine by *.)
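
    A quick way to see what the glob is actually doing (BSD/OS X flags):

      printf '%s\n' * | wc -l      # how many names does the shell expand * to?
      ls -lB                       # BSD ls: print non-printable filename bytes as \xxx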

    Read the article

  • What are my options for a secure External File Share in Server 2008 R2?

    - by Nitax
    Hi, I have a Windows Server 2008 R2 machine installed on a home network, with a number of files that need to be shared in a few different scenarios. I would like all three scenarios to have a solution with some sort of encryption to protect the data during transfer. Scenario 1: I need to access files from my laptop (Mac OS X) or another computer outside of the network. This option seems like the easy one, in that I could use LogMeIn, the Windows VPN, etc. to create such a connection. Scenario 2: I need to provide access to another user with minimal installation/configuration on his or her end. This makes me think of the new FTP 7.5 provided with Server 2008 R2, but I'm not sure of the details: does it support SSH or some other form of encryption? Can an OS X user connect? etc. My question here is: what are my options? I really just don't know where to get started...

    Read the article

  • Can I wake up PC/laptop on remote printing request?

    - by dzieciou
    I have an old Epson printer that does not have a network adapter, only a USB socket. I also have a computer with a network card. I could combine them to put this printer on my local network, but then I would need to keep the computer always up and running. That's definitely not economical. Is there a way to wake up the computer when a printing request arrives? Something like Wake-on-LAN? Or am I forced to use additional hardware?
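
    For reference, sending the magic packet itself is a one-liner from any always-on box on the LAN (the MAC address is a placeholder, and the PC's BIOS/NIC must have Wake-on-LAN enabled):

      wakeonlan 00:11:22:33:44:55
      # or, with the etherwake package:
      sudo etherwake -i eth0 00:11:22:33:44:55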

    Read the article

  • Apache server, PHP

    - by user65649
    I am running a PHP site on my Apache server (Mac). I am having trouble displaying images on the site when I access it externally or from another computer on the same network. If I try to access an image directly (website.com/image.jpg) I get a broken link icon and can't display the image. Any ideas what could cause this? My images are embedded using a style.css file: background-image:url(image.jpg);

    Read the article

  • I can't get an IP address from a VirtualBox VM on OSX from some routers (ie Internet cafes)

    - by ezrock
    On OS X 10.7.3, VirtualBox, using a bridged adapter. Everything on the networking side works perfectly as expected in some networking environments, like my home router and some cafes. In others, I can't get an IP address over DHCP, and I don't know why. I suspect there is some setting on the router that is preventing me, or I have some issue with my MAC address. When it's not working, in syslog I'll see a few DHCPDISCOVER messages as my VM tries to find a DHCP server, and after a while, "No DHCPOFFERS received". And when I go to a "good" router, a simple "service network restart" is all I need to get an IP. Any ideas?
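
    A couple of checks worth running when it fails (the VM name is a placeholder):

      # inside the guest: release the lease and retry with verbose logging
      sudo dhclient -r eth0 && sudo dhclient -v eth0
      # on the OS X host: confirm which physical interface the bridged adapter is bound to
      VBoxManage showvminfo "MyVM" | grep -i nic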

    Read the article

  • How to copy a folder with many files with integrity check?

    - by RafaelM
    I just got a new hard drive and I want to move many of the folders from the old hard drive to the new one, but I want to make sure everything is copied over correctly. I tried using MD5summer to generate MD5 sums of the original files, copied the files over, and then tried to compare the MD5 sums of both sets of files. This took ages because there are many large video files. Is there any software I can use to make this process as painless as possible? I just need basic file integrity checking. Thanks in advance
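
    Where rsync is available, a sketch of copy-plus-verify (the asker's platform may differ; paths are placeholders):

      # copy and verify in one pass: --checksum makes rsync hash every file on both sides
      rsync -av --checksum /mnt/old/videos/ /mnt/new/videos/
      # or verify an existing copy against the originals
      (cd /mnt/old/videos && find . -type f -exec md5sum {} +) > /tmp/videos.md5
      (cd /mnt/new/videos && md5sum -c /tmp/videos.md5)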

    Read the article

  • My server hostname doesn't work? [on hold]

    - by xSpartanCx
    I've got a Raspberry Pi running Raspbian server edition, a modified Debian that runs well on the RPi. My problem is that the only way I can SSH into it with PuTTY is through the static IP. My router doesn't recognize the hostname; it shows the MAC address as the name. This causes the Pi not to show my website online (I think). The only way I've gotten it to work is using my other Linux server to forward using virtual hosts, and that has to use the IP address, too. However, now that I have my other server off, the website doesn't work and I can't SSH (or find it anywhere on the network) using the hostname.
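
    As a workaround while the router's DNS won't cooperate, pinning the name on each client works (the IP and hostname are placeholders for the Pi's static address):

      echo "192.168.1.50  raspberrypi" | sudo tee -a /etc/hosts
      ssh pi@raspberrypi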

    Read the article

  • Best way for an external (remote) graphics designer to style ASP.NET MVC 4 app?

    - by Tom K
    My customer has his own graphics designer he wants to use to style the web application we're building in ASP.NET MVC 4. Our solution is in Bitbucket, but if he can't run it, what choices do we have? I doubt he uses Visual Studio 2012. One idea is for us to publish our solution to the file system, send it to him, and have him create a local IIS website on his machine (assuming he isn't using a Mac). Mocking data or pointing to a test SQL in Azure isn't a problem. Then he can make changes to .css and .cshtml files. Will this even work? The point is that he needs to be able to test his changes. I know he can modify the views and just check in, but he needs to deliver a working design, so that seems inefficient. The graphics designer will have access to our test site so he can see how it works, what data we have, and the fields. Another idea is for him to build a static mock site using just HTML/CSS. Later I'd integrate his styles into the customer's solution, splitting his HTML into the partial views we use and adding Razor syntax. Again, we'd like to leverage the graphics designer for all of this. Is there a best practice documented around this subject? How do other teams deal with this situation?

    Read the article

  • [ASP.NET 4.0] Persisting Row Selection in Data Controls

    - by HosamKamel
    Data control selection in ASP.NET 2.0: the ASP.NET data controls' row selection feature was based on the row index (in the current page). This of course produces an issue: if you select an item on the first page, then navigate to the second page without selecting any record, you will find the same row (with the same index) selected on the second page! In the sample application attached:

    1. Select the second row in the books GridView.
    2. Navigate to the second page without doing any selection.
    3. You will find the second row on the second page selected.

    Persisting row selection is a new feature which replaces the old index-based selection mechanism with one based on the row's data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected.

    Data control selection in ASP.NET 3.5 SP1: persisted row selection was initially supported only in Dynamic Data projects.

    Data control selection in ASP.NET 4.0: persisted selection is now supported for the GridView and ListView controls in all projects. You can enable this feature by setting the EnablePersistedSelection property, as shown in the sketch below. An important thing to note: once you enable this feature, you have to set the DataKeyNames property too, because, as discussed, the whole approach is based on the row data key. A simple feature, but a much more natural behavior than the behavior in earlier versions of ASP.NET. Download Demo Project
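
    A minimal sketch of the property in use (control ID, data source and key field names are illustrative):

      <asp:GridView ID="BooksGrid" runat="server"
          DataSourceID="BooksDataSource"
          DataKeyNames="BookId"
          EnablePersistedSelection="true"
          AllowPaging="true" />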

    Read the article

  • Intel Corporation Ethernet Connection does not start properly

    - by Oscar Alejos
    I'm experiencing some problems when trying to connect my PC to the router through a switch. When the PC is directly connected to the router everything works fine: Ubuntu (14.04) starts normally and the Internet connection runs immediately. The Ethernet controller is an Intel Corporation Ethernet Connection, as lspci returns:

      $ lspci | grep Eth
      00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 04)

    However, when I try to connect through the switch, what I get is the following. dmesg returns:

      $ dmesg | grep eth
      [    1.035585] e1000e 0000:00:19.0 eth0: registered PHC clock
      [    1.035587] e1000e 0000:00:19.0 eth0: (PCI Express:2.5GT/s:Width x1) 00:22:4d:a7:be:5d
      [    1.035589] e1000e 0000:00:19.0 eth0: Intel(R) PRO/1000 Network Connection
      [    1.035625] e1000e 0000:00:19.0 eth0: MAC: 11, PHY: 12, PBA No: FFFFFF-0FF
      [    1.357838] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
      [    2.165413] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
      [    2.165574] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
      [    2.641287] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
      [   16.715086] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
      [   16.715090] e1000e 0000:00:19.0 eth0: 10/100 speed: disabling TSO
      [   16.715117] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

    It looks like eth0 is working properly. Actually, nm-tool returns:

      $ nm-tool
      - Device: eth0 [Conexión cableada] -------------------------------------------
        Type:             Wired
        Driver:           e1000e
        State:            connected
        Default:          yes
        HW Address:       00:22:4D:A7:BE:5D
        Capabilities:
          Carrier Detect: yes
          Speed:          100 Mb/s
        Wired Properties
          Carrier:        on
        IPv4 Settings:
          Address:        192.168.1.30
          Prefix:         24 (255.255.255.0)
          Gateway:        192.168.1.1
          DNS:            80.58.61.250
          DNS:            80.58.61.254
          DNS:            192.168.1.1

    However, ping returns:

      $ ping 192.168.1.1
      PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
      From 192.168.1.30 icmp_seq=1 Destination Host Unreachable
      From 192.168.1.30 icmp_seq=2 Destination Host Unreachable
      From 192.168.1.30 icmp_seq=3 Destination Host Unreachable

    The connection is restored by restarting it:

      # ifconfig eth0 down
      # ifconfig eth0 up

    From this point on, everything runs smoothly, as if the PC were directly connected to the router. It seems to be an issue related to the integrated LAN adaptor and the Ethernet controller, since my laptop connects without any problem. My desktop board is an Intel DB85FL. I'd be grateful if anyone could give some ideas on how to solve this issue. Thank you in advance.

    Read the article

  • Comments syntax for Idoc Script

    - by kyle.hatlestad
    Maybe this is widely known and I'm late to the party, but I just ran across the syntax for making comments in Idoc Script. It's been something I've been hoping to see for a long time, and it looks like it quietly snuck into the 10gR3 release. So for comments in Idoc Script, you simply [[% surround your comments in these symbols. %]] They can be on the same line or span multiple lines. If you look in the documentation, it still mentions the old method of making comments by assigning your comment text to a variable. Well, that's certainly not an ideal approach: you're stuffing your comment into an actual variable, it's taking up memory, and you have to watch double-quotes in your comment. A perhaps better way in the old method is to start with my comments. Still not great, but now you're not assigning something to a variable and worrying about quotes. Unfortunately, this new syntax only works in places that use the Idoc format. It can't be used in Idoc files that get indexed (.hcsp & .hcsf) and use the <!--$...--> format. For those, you'll need to continue using the older methods. While on the topic, I thought I would highlight a great plug-in to Notepad++ that Arnoud Koot here at Oracle wrote for Idoc Script. It does script highlighting as well as type-ahead/auto-completion for common variables, functions, and services. For some reason, I can never seem to remember if it's DOC_INFO_LATESTRELEASE or DOC_INFO_LATEST_RELEASE, so this certainly comes in handy. I've updated his plug-in to use this new comments syntax. You can download a copy of the plug-in here, which includes installation instructions.

    Read the article

  • Default /etc/apt/sources.list?

    - by piemesons
    I need the default sources.list for Ubuntu 10.04. Can anybody help me? Here is mine:

      # Ubuntu supported packages
      deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
      deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
      deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
      deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
      deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
      deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
      deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
      deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse

      # Canonical Commercial Repository
      deb http://archive.canonical.com/ubuntu lucid partner
      deb http://archive.canonical.com/ubuntu lucid-backports partner
      deb http://archive.canonical.com/ubuntu lucid-updates partner
      deb http://archive.canonical.com/ubuntu lucid-security partner
      deb http://archive.canonical.com/ubuntu lucid-proposed partner
      deb-src http://archive.canonical.com/ubuntu lucid partner
      deb-src http://archive.canonical.com/ubuntu lucid-backports partner
      deb-src http://archive.canonical.com/ubuntu lucid-updates partner
      deb-src http://archive.canonical.com/ubuntu lucid-security partner
      deb-src http://archive.canonical.com/ubuntu lucid-proposed partner

      # medibuntu
      deb http://packages.medibuntu.org/ lucid free non-free
      deb-src http://packages.medibuntu.org/ lucid free non-free

      # PlayOnLinux
      deb http://deb.playonlinux.com/ lucid main

      # opera
      deb http://deb.opera.com/opera/ lenny non-free

      # google
      deb http://dl.google.com/linux/deb/ stable non-free main

      # Dropbox Official Source
      deb http://linux.dropbox.com/ubuntu karmic main

      # Skype
      deb http://download.skype.com/linux/repos/debian/ stable non-free

    This is the error I am getting (sudo apt-get update):

      Get:9 http://dl.google.com stable/main Packages [1,076B]
      Err http://ppa.launchpad.net lucid/main Packages
        404 Not Found
      Get:10 http://dl.google.com stable/main Packages [735B]

    and finally:

      Fetched 9,724B in 3s (2,645B/s)
      W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
      E: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article

  • PPA causing 404 error?

    - by piemesons
    I need the default sources.list for Ubuntu 10.04. Can anybody help me? Here is mine:

      # Ubuntu supported packages
      deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
      deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
      deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
      deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
      deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
      deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
      deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
      deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse

      # Canonical Commercial Repository
      deb http://archive.canonical.com/ubuntu lucid partner
      deb http://archive.canonical.com/ubuntu lucid-backports partner
      deb http://archive.canonical.com/ubuntu lucid-updates partner
      deb http://archive.canonical.com/ubuntu lucid-security partner
      deb http://archive.canonical.com/ubuntu lucid-proposed partner
      deb-src http://archive.canonical.com/ubuntu lucid partner
      deb-src http://archive.canonical.com/ubuntu lucid-backports partner
      deb-src http://archive.canonical.com/ubuntu lucid-updates partner
      deb-src http://archive.canonical.com/ubuntu lucid-security partner
      deb-src http://archive.canonical.com/ubuntu lucid-proposed partner

      # medibuntu
      deb http://packages.medibuntu.org/ lucid free non-free
      deb-src http://packages.medibuntu.org/ lucid free non-free

      # PlayOnLinux
      deb http://deb.playonlinux.com/ lucid main

      # opera
      deb http://deb.opera.com/opera/ lenny non-free

      # google
      deb http://dl.google.com/linux/deb/ stable non-free main

      # Dropbox Official Source
      deb http://linux.dropbox.com/ubuntu karmic main

      # Skype
      deb http://download.skype.com/linux/repos/debian/ stable non-free

    This is the error I am getting (sudo apt-get update):

      Get:9 http://dl.google.com stable/main Packages [1,076B]
      Err http://ppa.launchpad.net lucid/main Packages
        404 Not Found
      Get:10 http://dl.google.com stable/main Packages [735B]

    and finally:

      Fetched 9,724B in 3s (2,645B/s)
      W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
      E: Some index files failed to download, they have been ignored, or old ones used instead.
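
    The failing index in the output above comes from the bisig PPA, which is returning 404 for lucid; a minimal sketch of locating and disabling it (adjust the path to whichever file grep turns up):

      grep -r "ppa.launchpad.net/bisig" /etc/apt/sources.list /etc/apt/sources.list.d/
      sudo sed -i 's|^deb http://ppa.launchpad.net/bisig|# &|' /etc/apt/sources.list
      sudo apt-get update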

    Read the article

  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS 2008 to TFS 2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS 2008 to TFS 2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS 2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member who starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The down side is that if you have any blockers, you can be pretty sure that everyone who can deal with your problem is asleep.

    I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one that did the install, so I thought it prudent to make sure that it was OK.

    Verifying Team Foundation Server 2010

    We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors.
    Figure: Even the SQL setup looks good. I don't know how Adam did it!

    Backing up the Team Foundation Server 2008 databases

    As we are moving from one server to another (the recommended method) we will be taking a backup of our TFS 2008 databases and restoring them to the SQL Server for the new TFS 2010 server. Do not just detach and reattach: this will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases. This will stop any inadvertent check-ins or changes to TFS 2008.
    Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database.

    It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu who needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

      John Liu [SSW] said: are you doing something to TFS :-O
      MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
      John Liu [SSW] said: haven't finish check in @_@ can we have it for 10mins? :)
      MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
      John Liu [SSW] said: I love you!
      - IM conversation at TFS Upgrade +25 minutes

    After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade can continue.
    Figure: Back up all of the databases for TFS and include the Reporting Services databases, just in case.
    Figure: Check that all the backups have been taken.

    Once you have your backups, you need to copy them to your new TFS 2010 server and restore them. This is a good way to proceed because, if we have any problems or just plain run out of time, you just turn the TFS 2008 server back on and all you have lost is one upgrade day, and not 10 developer days. As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

    TFS 2008 file count:
      Type  Count
      1     1845
      2     15770
    Areas & iterations: 139

    You can use this to verify that the upgrade was successful. It should however be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
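    The backup and restore steps use the Management Studio dialogs shown in the figures; scripted, the same round trip looks roughly like this sqlcmd sketch (server names and paths are placeholders, TfsVersionControl is only one of the databases involved, and you may need MOVE clauses if file paths differ on the new server):

      sqlcmd -S TFS2008SQL -Q "BACKUP DATABASE TfsVersionControl TO DISK='D:\Backups\TfsVersionControl.bak' WITH COPY_ONLY"
      sqlcmd -S TFS2010SQL -Q "RESTORE DATABASE TfsVersionControl FROM DISK='D:\Backups\TfsVersionControl.bak' WITH RECOVERY"
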
    Restore Team Foundation Server 2008 databases

    Restoring the databases is much more time consuming than just attaching them, as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems.
    Figure: Restore each of the databases to either the latest or a specific point in time.
    Figure: Restore all of the required databases.

    Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010.

    Upgrade Team Foundation Server 2008 databases

    This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all of the 6 main TFS 2008 databases are merged into the TfsVersionControl database, upgraded, and then the database is renamed to TFS_[CollectionName]. The rename applies only to the database and not the physical files, so it is worth going back and renaming the physical file as well. This keeps everything neat and tidy.

    If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. It is used by the clients, and they can get a little confused if there are two servers with the same one.

    To kick off the upgrade you need to open a command prompt, change the path to "C:\Program Files\Microsoft Team Foundation Server 2010\Tools" and run the "import" command in "tfsconfig":

      TfsConfig import /sqlinstance:<Previous TFS Data Tier>
                       /collectionName:<Collection Name>
                       /confirmed

      Imports a TFS 2005 or 2008 data tier as a new project collection.
      Important: This command should only be executed after adequate backups have
      been performed. After you import, you will need to configure portal and
      reporting settings via the administration console.

      EXAMPLES
      --------
      TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed
      TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed

      OPTIONS
      -------
      sqlinstance     The sql instance of the TFS 2005 or 2008 data tier. The TFS
                      databases at that location will be modified directly and will
                      no longer be usable as previous version databases. Ensure you
                      have back-ups.
      collectionName  The name of the new Team Project Collection.
      confirmed       Confirm that you have backed-up databases before importing.

    This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade, as the total database size was under 700MB. This was unlike the upgrade of SSW's production database with over 17GB of data, which took a few hours. At the end of the process you should get no errors and no warnings:

      The Upgrade operation on the ApplicationTier feature has completed.
      There were 0 errors and 0 warnings.

    As this is a new server and not a pure upgrade, there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection.
    This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files, then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this.
    Figure: Stop the collection so TFS does not take a wobbly when we detach the database.

    When you try to start the new collection again you will get a conflict with project names, and the Test Upgrade collection will need to be removed. This is fine; it just needs detaching.
    Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new collection again.

    You will now be able to start the new upgraded collection, and you are ready for testing. Do you remember the stats we took off the TFS 2008 server? Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control.

    TFS 2010 file count:
      Type  Count
      1     19288
    Areas & iterations: 139

    Lovely: the number of iterations is the same, and the number of files is bigger. Just what we were looking for.

    Testing the upgraded Team Foundation Server 2010 project collection

    Can we connect to the new collection and project?
    Figure: We can connect to the new collection and project.
    Figure: Make sure you can connect to the upgraded projects and that you can see all of the files.
    Figure: Team Web Access is there and working.

    Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case, as I am running on the local box, you need to use http://localhost:8080/tfs, which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname], and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection, and in both VS 2008 and VS 2005 you will need to install the forward compatibility updates:

      Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
      Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    To make sure that you have everything up to date, make sure that you run SSW Diagnostics and get all green ticks.

    Upgrade done!

    At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have a 2008 project upgraded to run under TFS 2010, but it is still running under the same process template that it was running on before. You can only "enable" 2010 features in a process template; you can't upgrade it. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy: you can move or branch. But work items are more difficult, as you can't move them between projects. This instance is complicated further because the old project uses the Conchango/EMC Scrum for Team System template, and I will need to write a script/application to get the work items across with their attachments intact. That is my next task!

    Technorati Tags: TFS 2010, TFS 2008, VS ALM

    Read the article

  • LinqPad with Azure Table Storage

    - by Sarang
    LinqPad, as we all know, has been a wonderful tool for running ad-hoc queries. With Windows Azure Table storage in the picture, LinqPad was no longer an option, and we shifted focus to Cloud Storage Studio, only to realize the limited and strange querying capabilities of CSS. With some tweaking to LinqPad we can get the comfortable old shoe of ad-hoc queries back against Windows Azure Table storage. Steps:

    1. Start LinqPad.
    2. Right-click in the query window and select "Query Properties".
    3. In the Additional References, add references to Microsoft.WindowsAzure.StorageClient, System.Data.Services.Client.dll and the assembly containing the implementation of the DataServiceContext class tied to the Windows Azure table storage.
    4. In the Additional Namespace Imports, import the same three namespaces mentioned above.
    5. Then we need to provide the following details: (a) the table storage account name and shared key, (b) the DataServiceContext implementation in your code, and (c) a LINQ query. For example:

      var storageAccountName = "myStorageAccount"; // enter valid storage account name
      var storageSharedKey = "mysharedKey";        // enter valid storage account shared key
      var uri = new System.Uri("http://table.core.windows.net/");
      var storageAccountInfo = new CloudStorageAccount(new StorageCredentialsAccountKey(storageAccountName, storageSharedKey), false);
      var serviceContext = new TweetPollDataServiceContext(storageAccountInfo); // specify the DataServiceContext implementation

      // The query
      var query = from row in serviceContext.Table
                  select row;
      query.Dump();

    Thanks LinqPad!

    Technorati Tags: LinqPad, Azure Table Storage, Linq

    Read the article
