Search Results

Search found 6504 results on 261 pages for 'tfs upgrade'.

  • Virtual Machines: What components should I upgrade to improve running virtual machines?

    - by joshsvoss
    At work I need to have one or sometimes two instances of a VMware virtual Windows 7 machine running on my real Windows 7 machine. The computer I'm using is a Dell Precision 490 from 2009 I believe, possibly earlier, running Windows 7 Ultimate. Problems while running VMs: the entire computer slows down when a VMware instance is running. Pages take a while to react to a scroll, applications take forever to launch, and programs hang both in the virtual machines and on the real one. So, what components should I upgrade to improve this? I guess a more pointed question would be: which components will help the most? Possible options: getting 8 GB of RAM instead of 4 GB, a new graphics card, or new processors (is that really an option?). My intuition tells me it will be a combination of the RAM and graphics card. There is also the possibility that an '09 tower just isn't cut out for VMs and our business should purchase a new tower.

    Read the article

  • How to resolve "HTTP/1.1 403 Forbidden" errors from iCal/CalDAV server after upgrade to Snow Leopard Server?

    - by morgant
    We recently upgraded our Open Directory Master & Replica to Mac OS X 10.6.4 Snow Leopard Server. We had a mismatched server FQDN & LDAP Search Base/Kerberos Realm, so we exported all users & groups, created the new Open Directory Master w/matching FQDN & Search Base/Realm, reimported users & groups, and re-bound all servers & workstations to the new OD Master. At the same time as all of this, we upgraded our iCal/CalDAV server to Mac OS X 10.6.4 Snow Leopard Server. Ever since doing so, we've seen the following issues with our iCal/CalDAV server and iCal clients on both Mac OS X 10.5 Leopard & Mac OS X 10.6: If a user attempts to move or delete an event (single or repeating) that was created prior to the upgrade to 10.6 Server, they get the following error: Access to "blah" in "blah" in account "blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation. New users added to the directory get the following error when attempting to add their account in iCal's preferences: The user "blah" has no configured principals. Confirm with your network administrator that your account has at least one CalDAV principal configured. Interestingly, we've since discovered that users seem to be able to delete individual events from an old repeating event without error, but that's a massive amount of work to get rid of a repeating event. I will note that we have not yet added an SRV record in DNS as instructed on page 19 of iCal_Server_Admin_v10.6.pdf. Further Investigation: In this particular case, a user is attempting to decline repeating events created prior to the upgrade to Snow Leopard Server. Granting the user full write access with sudo calendarserver_manage_principals --add-write-proxy users:user1 users:user2 (which did work) doesn't allow deletion of the events. Still get the usual error: Access to "blah blah" in "blah blah" in account "blah blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation. 
The error that shows up in /var/log/caldavd/error.log on the iCal Server when attempting to delete one of the events is: 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.extensions#info] PUT /calendars/__uids__/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/calendar/XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.ics HTTP/1.1 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.scheduling.implicit#error] Cannot change ORGANIZER: UID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX And the error in /var/log/system.log on the client is: Mar 17 15:14:30 192-168-21-169-dhcp iCal[33509]: CalDAV CalDAVWriteEntityQueueableOperation failed: status 'HTTP/1.1 403 Forbidden' request:\n\nBEGIN:VCALENDAR^M\nVERSION:2.0^M\nPRODID:-//Apple Inc.//iCal 3.0//EN^M\nCALSCALE:GREGORIAN^M\nBEGIN:VTIMEZONE^M\nTZID:US/Eastern^M\nBEGIN:DAYLIGHT^M\nTZOFFSETFROM:-0500^M\nTZOFFSETTO:-0400^M\nDTSTART:20070311T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU^M\nTZNAME:EDT^M\nEND:DAYLIGHT^M\nBEGIN:STANDARD^M\nTZOFFSETFROM:-0400^M\nTZOFFSETTO:-0500^M\nDTSTART:20071104T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU^M\nTZNAME:EST^M\nEND:STANDARD^M\nEND:VTIMEZONE^M\nBEGIN:VEVENT^M\nSEQUENCE:5^M\nDTSTART;TZID=US/Eastern:20090117T094500^M\nDTSTAMP:20081227T143043Z^M\nSUMMARY:blah blah^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT:urn:uuid^M\n :XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:mailto:user@d^M\n omain.tld^M\nEXDATE;TZID=US/Eastern:20110319T094500^M\nDTEND;TZID=US/Eastern:20090117T183000^M\nRRULE:FREQ=WEEKLY;INTERVAL=1^M\nTRANSP:OPAQUE^M\nUID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nORGANIZER;CN="First Last":mailto:[email protected]^M\nX-WR-ITIPSTATUSML:UNCLEAN^M\nCREATED:20110317T191348Z^M\nEND:VEVENT^M\nEND:VCALENDAR^M\n\n\n... response:\nHTTP/1.1 403 Forbidden^M\nDate: Thu, 17 Mar 2011 19:14:30 GMT^M\nDav: 1, access-control, calendar-access, calendar-schedule, calendar-auto-schedule, calendar-availability, inbox-availability, calendar-proxy, calendarserver-private-events, calendarserver-private-comments, calendarserver-principal-property-search^M\nContent-Type: text/xml^M\nContent-Length: 134^M\nServer: Twisted/8.2.0 TwistedWeb/8.2.0 TwistedCalDAV/2.5 (iCal Server v12.56.21)^M\n^M\n<?xml version='1.0' encoding='UTF-8'?><error xmlns='DAV:'>^M\n <valid-attendee-change xmlns='urn:ietf:params:xml:ns:caldav'/>^M\n</error> One thing I have noticed, and I'm not sure if this has any real effect is that in many of these pre-Snow Leopard Server migration events, the ORGANIZER is specified like the following: ORGANIZER;CN=First Last:mailto:[email protected] But newer ones are more like one of the two following: ORGANIZER;CN=First Last;[email protected];SCHEDULE-STATUS=1 ORGANIZER;CN=First Last;[email protected]:urn:uuid:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX iCal_Server_Admin_v10.6.pdf notes that the ".db.sqlite" files are completely disposable, they're merely a performance cache and are re-built on the fly, so are safe to delete. I did delete the one for the organizer's calendars and it took longer to process the attempted event delete while it rebuilt the database, but still errored out in the end. FWIW the error is thrown by this code: https://trac.calendarserver.org/browser/CalendarServer/trunk/twistedcaldav/scheduling/implicit.py Any further suggestions? I see lots of questions about this in my Google searches, but not solutions and this is a widespread problem on our iCal Server. 
Now, we mostly try to get users to ignore them (hence the amount of time this question has been open), but every now and then I dig in deeper trying to find the culprit and/or solution.
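
    Since the 403s appear tied to how the legacy events record ORGANIZER, one low-risk way to gauge how widespread the problem is would be to count the stored .ics files that still carry the old-style "ORGANIZER;CN=...:mailto:" form (the newer events in this post carry an EMAIL parameter or a urn:uuid value instead). This is only a sketch: the calendar store path below is an assumption for Snow Leopard Server, so confirm the DocumentRoot key in /etc/caldavd/caldavd.plist before running it.

        # Assumed default store location; verify DocumentRoot in /etc/caldavd/caldavd.plist first
        sudo grep -rlE 'ORGANIZER;CN=[^;]*:mailto:' /Library/CalendarServer/Documents/calendars/ | wc -l

    A large count would at least support treating this as a data-migration issue rather than a per-user permissions problem.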

    Read the article

  • Is it possible to upgrade Server 2008 to RDP version 6.1?

    - by Dmitri K.
    Problem: Server 2008 has RDP 6.0, Server 2008 R2 has RDP 6.1. My application works with RDP ActiveX control 6.1, but does not work with version 6.0 - it generates an access violation from mstscax.dll. Is there any way to upgrade Server 2008 (not 2008 R2) to RDP 6.1? More details: my application is written in Delphi, and the ActiveX control was generated from mstscax.dll version 6.1.7601.17514. The target system, where the application doesn't work, has mstscax.dll version 6.0.6002.18356. When the application tries to connect, it generates an access violation exception in mstscax.dll. The way I see it, this is a compatibility problem. I appreciate any ideas on how to resolve this.
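
    Not a fix, but whatever route you take it helps to verify exactly which mstscax.dll build a target machine has before (and after) any update, either manually or from an installer check. A quick command-prompt sketch:

        REM Report the file version of the RDP ActiveX control on this machine
        wmic datafile where name="C:\\Windows\\System32\\mstscax.dll" get Version

    Comparing that output against the 6.1.7601.x build the control was generated from tells you immediately whether a given update actually changed the client.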

    Read the article

  • Is it possible to upgrade PHP to 5.3 on a Centos Kloxo installation?

    - by Malachi
    I have a VPS running CentOS with Kloxo installed, and I was wondering how I would upgrade PHP to 5.3 - it's currently running 5.2.6. When I try to do a yum update I get the following errors: Resolving Dependencies --> Running transaction check --> Processing Dependency: libpq.so.4 for package: lxphp ---> Package postgresql-libs.i386 0:8.3.7-umask.7 set to be updated --> Finished Dependency Resolution lxphp-5.2.1-400.i386 from installed has depsolving problems --> Missing Dependency: libpq.so.4 is needed by package lxphp-5.2.1-400.i386 (installed) Error: Missing Dependency: libpq.so.4 is needed by package lxphp-5.2.1-400.i386 (installed) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. Any help would be greatly appreciated.
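
    The failure above is Kloxo's bundled lxphp package depending on libpq.so.4, which the newer postgresql-libs build no longer appears to provide, so the whole yum transaction stalls. One hedged first step - not a confirmed Kloxo procedure, just the diagnostics yum itself suggests plus an exclude so the rest of the system can still update - is:

        # package-cleanup comes from yum-utils
        yum install -y yum-utils
        package-cleanup --problems
        # update everything except Kloxo's bundled PHP packages
        yum update --exclude=lxphp\* --skip-broken

    Actually getting PHP itself to 5.3 under Kloxo likely still depends on a newer lxphp build from Kloxo rather than on stock CentOS packages.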

    Read the article

  • Can I upgrade a Windows 2000 domain to 2008 and demote the 2000 server without clients attached?

    - by techie007
    Hi all, We're planning to replace a Windows 2000 domain controller with a new 2008 DC (new hardware). We've elected to take the route of getting the 2000 domain schema up-to-snuff, join the 2008 server, upgrade it to a DC, and after replication demote the 2000 server (eventually to be taken off-line). The goal being to not have to visit all the workstations, and to limit domain downtime. :) We want to bring the old server here and do all the backups, domain prep, migration and role transfers here, and then (hopefully) just plop the new 2008 back in place after it's done, and join the 2000 server back as a member server (so we can then do folder migrations, etc.). Can this server work be done off-site, without the workstations attached? If we do this, will anything need to be done to the clients, once the new DC is physically in place, so they contact the new 2008 DC; or will they just 'know' and continue on using the existing domain settings/user profiles, etc.? Thanks in advance! :)

    Read the article

  • How to upgrade OS on Mac Mini with external USB Drive?

    - by David
    We have a G4 Mac Mini, circa 2005, running Mac OS X 10.4 Tiger, and we want to upgrade to 10.5 Leopard. We have a Leopard install disk, but the optical drive in this mini is broken. So we transferred the install disk image to a USB HDD, but now we can't figure out how to boot off it. From what I've read in Mac forums, some PPC Macs, including some G4's, have been able to boot from USB, even though it sounds like this wasn't officially supported, and it may well depend on the specific model of USB drive and Mac. My Mac says CPU is "PowerPC G4 (1.2)" and Boot ROM is "4.8.9f4". I was hoping I might just find somebody here who had that same Mac Mini and find out if they could make it work. I'd especially like to know any specifics about the USB drive they found success with. Any insights at all would be appreciated. Thanks.
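
    On PowerPC Macs the usual trick reported for this is to drop into Open Firmware (hold Cmd+Option+O+F at power-on) and point it at the USB disk by hand, since the boot picker won't offer it. The commands below are only a hypothetical sketch: the device path and partition number vary by model and by the specific USB enclosure, so you have to discover yours first.

        devalias                        \ list device aliases; look for usb0 / usb1
        dev usb0
        ls                              \ probe that USB bus for a disk node
        boot usb0/disk@1:2,\\:tbxi      \ example path only - node and partition numbers vary

    If the firmware can't see a disk node at all under either USB alias, that particular drive/Mac combination probably can't do it, which matches the "depends on the specific model" caveat above.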

    Read the article

  • How can I upgrade bash to >= 4.1 on CentOS 5.5?

    - by Agvorth
    I have a CentOS 5.5 VPS server. I want to use RVM. According to the console output when I run the RVM installer, RVM requires bash >= 4.1. I just ran yum update. My bash version is now 3.2.25. If I understand how yum works, that means that 3.2.25 is sort of the version of bash that "belongs with" my CentOS version, and it's the latest version I can get using yum. (Right? Or am I wrong about this?) How can I get that on my CentOS 5.5 system? To clarify, I understand that I can just download the source and install, but I'm hesitant to break out of yum's version management system. Is there a way to upgrade bash without disrupting yum?
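
    If staying inside yum matters, one common compromise (a sketch, not a CentOS-blessed procedure) is to leave the distribution bash at /bin/bash under yum's control and build a newer bash into /usr/local, where it never collides with packaged files; 4.2 is used below purely as an example version.

        # prerequisites: gcc and make (e.g. yum install gcc make)
        cd /usr/local/src
        wget http://ftp.gnu.org/gnu/bash/bash-4.2.tar.gz
        tar xzf bash-4.2.tar.gz && cd bash-4.2
        ./configure --prefix=/usr/local
        make && make install
        # new interpreter lives at /usr/local/bin/bash; /bin/bash stays yum-managed
        /usr/local/bin/bash --version

    With /usr/local/bin ahead of /bin in your PATH, RVM and your login shell pick up the new bash while yum keeps owning the system copy.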

    Read the article

  • Debian/Ubuntu: Enabling "dist-upgrade" behavior for unattended-upgrades?

    - by Mark Renouf
    We've got a customized distribution of Ubuntu, a repository with some custom packages, and we run unattended-upgrades on a number of systems. What we want to be able to do is supply an update of one of our packages which might have a new dependency which is not yet installed. I understand apt normally prevents that from happening automatically, and using dist-upgrade would permit it. How can I get that behavior so our unattended upgrades work the same way? Ideally we'd only want new packages installed if one of our packages causes it to be needed (either as a direct dependency or a child, etc.). Should I be aware of any potential problems or increased risk of breakage? The systems are generally not easily accessed via the console, so anything causing a problem requiring manual intervention would be very bad!
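
    One hedged workaround, if the installed unattended-upgrades turns out not to support pulling in new dependencies, is to replace it for this repository with a small cron-driven script that runs a non-interactive dist-upgrade. The path, log file and schedule below are placeholders; the dpkg options simply stop conffile prompts from hanging an unattended run.

        #!/bin/sh
        # hypothetical /etc/cron.daily/custom-dist-upgrade
        apt-get -qq update
        apt-get -qy \
            -o Dpkg::Options::="--force-confdef" \
            -o Dpkg::Options::="--force-confold" \
            dist-upgrade >> /var/log/custom-dist-upgrade.log 2>&1

    To limit the blast radius, the custom repository can be given a higher apt pin priority than the Ubuntu archives, so only your own packages (and the dependencies they drag in) move outside the normal unattended policy.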

    Read the article

  • Upgrade Subversion 1.6 to 1.7 on CentOS? (can't find yum repository)

    - by user743919
    I want to upgrade my SVN server from 1.6 to 1.7. Unfortunately I can't find anything on the internet about how to do this with yum. I have checked rpmforge-extras, but it has only svn 1.6 and not 1.7. I wanted to update with yum because this is the most secure way for me; I'm not an experienced Linux user. Is there a yum repository that contains 1.7 (subversion.x86_64 0:1.7.xxxxx.el5.rfx)? I hope somebody can help me out. If there is none, perhaps a short explanation of how to update, step by step.
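
    Whichever third-party repository you end up adding, yum can show you exactly which Subversion builds it would offer before you commit to anything. A sketch - the repo id "thirdparty-svn" is just a placeholder for whatever the repository you configure calls itself:

        # list every subversion build the enabled repositories offer
        yum --showduplicates list available subversion
        # install only from the repository that carries 1.7 (repo id is a placeholder)
        yum --disablerepo='*' --enablerepo=thirdparty-svn install subversion

    Pinning the install to a single repository this way keeps the rest of the system on the stock CentOS packages, which is the safer posture for an inexperienced admin.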

    Read the article

  • How do I justify to my management that we need a bandwidth upgrade?

    - by Sandeep
    I work in an office with an 8 Mbps line and about 100 people. Our internet has slowed to a crawl over the past few months as we added headcount. However, speedtest.net and other sites still show the bandwidth as 8 Mbps. Now, how do I justify to management that we indeed need to upgrade our bandwidth? Please note that I don't have access to our main routers or any network equipment. I can only use my system (Windows + Linux dual boot) to make a case for a reasonable justification. Help!
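
    Even without router access, a workstation can build the evidence: sample the download speed of the same fixed file at intervals through the working day and overnight, then chart per-user throughput collapsing during office hours while the line still syncs at 8 Mbps. A rough sketch from the Linux side (the URL is a placeholder for any large static file you're allowed to pull repeatedly):

        # log download speed (bytes/sec) every 10 minutes
        while true; do
          echo "$(date '+%F %T') $(curl -so /dev/null -w '%{speed_download}' http://example.com/100MB.bin)"
          sleep 600
        done >> bandwidth-samples.log

    A week of samples showing, say, under 100 KB/s per user at 11am versus full line rate at 7pm is a far stronger argument to management than any single speed test.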

    Read the article

  • Like to Upgrade My PC (7 year old) - for animation and hardcore gaming ! - help me [closed]

    - by sri
    I'd like to buy a new computer for my studies as well as gaming. My old PC has 1.5 GB RAM with a 512 MB graphics card, and it is too old to run Adobe CS5 and other high-end animation software. My budget is INR 20k-25k. I already have a new 500 GB hard disk, keyboard and mouse. So apart from this, I'd like to buy: Intel or AMD - which is good? My idea is Core i5 or Core i7 - which is best and most economical? Which motherboard? 4 GB RAM, with slots for up to 8 GB for a future upgrade. A 1 GB or 2 GB graphics card - which one? If I am wrong, please advise me.

    Read the article

  • What are the pros of switching DNS names with a database server hardware upgrade?

    - by wilbbe01
    When we upgrade to new hardware at work we usually increment a number in the DNS name. For example, we have a server called database-2 that is slated to become database-3 in the coming days. I haven't been able to find a good reason why this is good behavior. To me, the work of trying to catch all end-user machines, as well as all servers dependent on the database server, is far riskier than simply moving the database and its IP/name to the new hardware. A little over a year ago we spent several months fielding requests as infrequent users began using software that needed to be updated to point to a new DNS name. I am struggling to find answers as to why this is a good practice. So the question: why is using DNS names as a "server hardware version identifier" a good idea? What am I overlooking? Thanks much.
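
    A common middle ground is to keep the hardware-versioned host names but point every client at a stable service alias, so the next hardware swap becomes a one-record DNS change instead of a hunt across workstations. A sketch using nsupdate against a BIND-style zone - all names and the key file path are placeholders, and on Active Directory-integrated DNS the equivalent would be done with dnscmd or the DNS console:

        # create or repoint a stable "database" alias at the current hardware
        nsupdate -k /etc/named.key <<'EOF'
        server ns1.example.com
        zone example.com
        update delete database.example.com. CNAME
        update add database.example.com. 300 CNAME database-3.example.com.
        send
        EOF

    Clients and application connection strings then reference database.example.com and never need touching when database-4 eventually arrives.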

    Read the article

  • How to migrate from SourceGear Fortress 1.1.x to TFS 2010?

    - by Jaxidian
    I found this question but it was only similar and, more importantly, dated by over a year. I'm hoping there is something I can't find out there and that is better than what that answer points to. Requirements: Preserve source code history (even if only loosely via text only since all of our prior users may not be created in the TFS repository) Preserve our item tracking history (again, even if just loosely since Fortress wasn't all that great about this). Ultimately, I want some searchable history of what we have done in the past and why. I don't necessarily need all of the hooks in place tying work items to source code or anything like that, but I do need discussions and decisions associated with the work items kept around.

    Read the article

  • How to get Netty websocket client example working

    - by Paul Butcher
    I'm struggling to get the netty example websocket client working. I've compiled successfully with 4.0.0.Alpha8, but when I try to connect to a server (also a netty example) I get: Exception in thread "main" io.netty.handler.codec.http.websocketx.WebSocketHandshakeException: Invalid handshake response upgrade: websocket at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.finishHandshake(WebSocketClientHandshaker13.java:204) which makes no sense to me whatsoever, given that this piece of code reads: String upgrade = response.getHeader(Names.UPGRADE); if (!Values.WEBSOCKET.equalsIgnoreCase(upgrade)) { throw new WebSocketHandshakeException("Invalid handshake response upgrade: " + response.getHeader(Names.UPGRADE)); } And as the message in the exception demonstrates, the Upgrade header does contain websocket as expected. Any idea what's going on?

    Read the article

  • Subversion and project management web based super tool. Like Team Foundation Server but not TFS.

    - by Rob Stevenson-Leggett
    Hi, We're currently looking at an IT upgrade and I'm after recommendations for a tool which can do some or all of the following: SVN management (authz, web viewer, commit log, diff); creating template projects (one click, e.g. create a microsite with this name in SVN and give these people access); reporting on code churn and time spent on tasks on a per-project basis; user story management. Basically like Team Foundation Server, but one that integrates with SVN properly (reason for this - we have a wide range of skill sets and not everyone can use a TFS client). Is there a combination of Trac plugins plus something that can create Trac instances (a la Dreamhost's admin panel) that can achieve this? On a side note, does anyone have any experience of version controlling designer-type files - e.g. PSDs, Illustrator files? Any advice at all appreciated. Cheers, Rob

    Read the article

  • Using TFS Team Build 2010 to deploy to Dev site and create packages for Staging and Production sites

    - by Kb
    I am trying to configure a TFS Team Build 2010 build to perform automatic deployment to the development environment and create deployment packages for the staging and production environments. In the MSBuildArguments field of the build definition I have: /p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish /p:MSDeployPublishMethod=RemoteAgent /p:CreatePackageOnPublish=True /p:DeployIISAppPath=devwebsitename /p:MsDeployServiceUrl=http://deployserver/MsDeployAgentService /p:UserName=username /p:Password=password. Automatic deployment of the dev web site is OK and a package for the web site is generated. How can I get deployment packages for the other environments (Staging and Production) from the same build? Or am I missing some basic concept here?
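
    One hedged pattern is to keep the RemoteAgent publish for Dev only and have the same build also produce plain Web Deploy packages, which are then pushed to Staging and Production later with msdeploy (or the generated *.deploy.cmd) plus environment-specific SetParameters files. As a sketch of the package-only flavour of the arguments - paths and project names are placeholders, not values taken from this build definition:

        REM produce a Web Deploy package without deploying anywhere
        msbuild MyWeb\MyWeb.csproj /p:DeployOnBuild=true /p:DeployTarget=Package ^
            /p:PackageLocation=C:\Drops\Packages\MyWeb.zip

    The environment differences (connection strings, app settings) then live in web.config transforms and in the SetParameters.xml used at deploy time, rather than in per-environment builds.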

    Read the article

  • Is it relatively safe to install kernel from "Canonical Kernel Team ppa" than "Mainline"

    - by tijybba
    I have already referred to most of the questions about upgrading from mainline builds or compiling from the latest source or a PPA, and concluded that it can break the currently installed stable system. My question is regarding the kernel builds from the Canonical Kernel Team, which I have subscribed to in Ubuntu 12.04 64-bit. It states: "This is the core kernel team as hired by Canonical. Do not use this team, use ubuntu-kernel-team instead." My stable kernel is 3.2.0-27.42 from the Ubuntu repository, and I consider the Canonical Kernel Team to be official; it is currently urging me to upgrade to 3.2.0-27.43, which, being odd-numbered and going by the PPA description, is categorized as unstable. From this, it can be said the next stable release would be 3.2.0-27.44. Is upgrading to the .43 version stable enough to continue with, since .44 will be provided by Ubuntu itself based on .43, and so on? Though I can't expect a lot of changes, does it provide new improvements or just bug fixes, since it is just a preceding release? Also, apart from the Ubuntu mainline kernel, is the Canonical Kernel Team different? If so, in what development or contribution terms? Is the Ubuntu kernel developed by two different teams or the same team? P.S.: I just noticed that sudo apt-get update && sudo apt-get upgrade offers me the upgrade to the .43 kernel, whereas getting a newer kernel normally requires sudo apt-get dist-upgrade (or else it shows a message like "Following packages were not upgraded..."). Is this an error, or an exception for this Canonical Kernel PPA?
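
    Whichever team the build comes from, a simulated run shows exactly what the PPA would change before anything is installed, which is the lowest-risk way to answer the "will this break my stable system" part:

        # simulate only - nothing is downloaded or installed
        sudo apt-get update
        sudo apt-get -s dist-upgrade | grep -i linux

    If only the existing linux-image/linux-headers packages move to a new revision, a plain upgrade can offer it; if a brand-new kernel package name appears, apt holds it back until dist-upgrade, which explains the behaviour noted in the P.S.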

    Read the article

  • ALERT: Error Processing US Wage Attachment Elements In Payroll Run After RUP Patches

    - by LuciaC
    Customers who have run the Upgrade Wage Attachments process after applying the 2012 RUP are reporting errors similar to those listed below when either running a quickpay or processing a payroll for employee(s) with involuntary deductions. Error: HR_51118_HRPROC_ERR_ON_ASG ASGNO 1115 APP-PAY-51118: Error was encountered when processing assignment 1115 HR_51119_HRPROC_ERR_OCC_ON_ET ETNAME: Garnishment 3 APP-PAY-51119: Error was encountered when processing Element Type Garnishment 3 HR_6881_HRPROC_ORA_ERR SQLERRMC ORA-01403: No data found SQL_NO 520 TABLE_NAME pay_input_values_f APP-PAY-06881:Error ORA-01403: no data found has occured in table pay_input_values_f at location 520 This issue was logged in Bug 14679161 - QUICK PAY ERROR AFTER RUP (2012) AND WAGE ATTACHMENT UPGRADE APP-PAY-06881. The following one-off patches have been released to My Oracle Support to resolve this issue*: 11i - Patch 14679161 12.0 - Patch 14849394:R12.PAY.A 12.1 - Patch 14849394:R12.PAY.B * IMPORTANT: When (or whether) customers have run the Wage Attachment upgrade process determines the appropriate action to take. Any customer who is encountering the above error and/or has run the Wage Attachment upgrade process AFTER applying the 2012 RUP (applicable to their release level) should log a Service Request with Oracle Support to receive assistance on the necessary steps to take to resolve the problem BEFORE applying the above patch. Any customer who has not yet run the Wage Attachment Upgrade process (either before or after applying the 2012 RUP) should follow the action plan documented in the patch readme. Customers who have already run the Wage Attachment Upgrade process BEFORE applying the 2012 RUP should apply the patch (applicable to their release) listed above. Be sure to run any post-install processes, such as the data install utility and HR global driver. See the patch readme for full details. Please consult Note 404478.1: Americas (US, CA, MX) HCM High Priority Alert for the latest Alert status.

    Read the article

  • This error appeared when upgrading 12.04 LTS to 12.10 [closed]

    - by habcity
    Possible Duplicate: How do I fix a “Problem with MergeList” error when trying to do an update? ryder@ryder-Q1500M:~$ do-release-upgrade Checking for a new Ubuntu release Get:1 Upgrade tool signature [198 B] Get:2 Upgrade tool [1,200 kB] Fetched 1,200 kB in 6s (6,988 B/s) authenticate 'quantal.tar.gz' against 'quantal.tar.gz.gpg' extracting 'quantal.tar.gz' [sudo] password for ryder: Reading cache A fatal error occurred Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade has aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade. Traceback (most recent call last): File "/tmp/update-manager-63XThv/quantal", line 10, in sys.exit(main()) File "/tmp/update-manager-63XThv/DistUpgrade/DistUpgradeMain.py", line 237, in main save_system_state(logdir) File "/tmp/update-manager-63XThv/DistUpgrade/DistUpgradeMain.py", line 130, in save_system_state scrub_sources=True) File "/tmp/update-manager-63XThv/DistUpgrade/apt_clone.py", line 146, in save_state self._write_state_installed_pkgs(sourcedir, tar) File "/tmp/update-manager-63XThv/DistUpgrade/apt_clone.py", line 173, in _write_state_installed_pkgs cache = self._cache_cls(rootdir=sourcedir) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in init self.open(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 145, in open self._cache = apt_pkg.Cache(progress) SystemError: E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_precise-backports_multiverse_i18n_Translation-en, E:The package lists or status file could not be parsed or opened.
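
    As the linked duplicate suggests, this "Problem with MergeList" failure usually clears once the damaged package list named in the traceback is removed and the lists are refreshed, after which the release upgrade can be retried. A hedged sketch:

        # remove the damaged list file (apt rebuilds these on the next update)
        sudo rm -f /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_precise-backports_multiverse_i18n_Translation-en
        # if other lists are also corrupt, clearing the whole directory is the blunter fix:
        #   sudo rm -f /var/lib/apt/lists/*
        sudo apt-get update
        sudo do-release-upgrade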

    Read the article

  • What do you upgrade to make games load faster? [on hold]

    - by Superbest
    Let's say you have a relatively modern game like Shogun 2. The loading screens take several minutes. This bothers you and you'd like to improve it. What is actually going on when loading screens are up? I'm guessing assets are being loaded into memory from disk, and possibly being decompressed first. However, what is actually causing the slowdown? The memory? Mainboard? CPU? HDD? If you had $100 to spend on upgrades and your only goal is to speed up loading screens without reducing other performance, what component of the computer does it make sense to upgrade for maximum benefit? If your answer is "it depends on the existing setup", what sort of benchmarks would you run to determine what is causing the bottleneck? What if you had $500 instead? I give the two budgets for context. I am not asking for actual recommendations about which component to buy (nor are the numbers supposed to be rigid limits), but what features are important when shopping for components with small and large budgets (a large budget could allow buying multiple components which are not so good on their own, but work particularly well together). I mention Shogun 2 as an example, but I'm asking about reducing overall loading times, across all games, not just one game. Therefore, "put it on a solid state disk" probably won't be a good solution, because putting every game on your SSD will quickly fill it up.
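
    For the "which benchmarks" part, Windows already ships counters that show which component is saturated during a loading screen; a sketch that samples disk, CPU and memory once a second for two minutes while a load screen is up:

        REM sample key counters to a CSV while the game is loading
        typeperf "\PhysicalDisk(_Total)\% Disk Time" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 1 -sc 120 -f CSV -o loadscreen.csv

    If the disk counters are pegged while the CPU idles, faster storage is where the budget belongs; if the CPU is pinned (decompression-heavy titles), storage money is largely wasted.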

    Read the article

  • Upgrade an Ubuntu 8.04 installation with VMware Server 1.0.8 and lots of guest OSes to Something Else

    - by Glyph
    I have an Ubuntu 8.04 (Hardy Heron) host machine which is running a whole slew of virtual machines in VMWare Server 1.0.8. Among other guest OSes, there is every release version of Ubuntu since 6.06, OpenSolaris 2009.06, and Windows XP. Right now I access these VMs from a variety of client OSes as well; Linux and Windows via the VMWare server console, and MacOS via X-forwarding the host machine's server console. I'd like to upgrade the host to Ubuntu 10.04 (Lucid Lynx), but from what I can tell, getting VMWare Server 1.x to work on a more recent version of Linux is a real pain. While VMware Server 2.x is a bit easier, it's still not packaged as Debian packages, so installing security updates is a big chore. As long as I'm upgrading anyway, I'd like to move to a virtualization solution that will allow me to automate applying updates. The options that I'm aware of right now are KVM (managed via virt-manager) and VirtualBox (as managed by its own tools or via its own libvirt bindings), but I'm open to other suggestions. For each option, I'd like to know how do I convert my guest images to the new format? am I going to have to re-activate my Windows guests (alternatively, "If the virtual hardware is different by default, can I avoid re-activation by changing some virtualization configuration to provide me with more similar virtual hardware") what are the management options like for each client OS (mac, linux, windows)? Thanks.
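
    On the image-conversion question specifically: both KVM and VirtualBox can attach VMDK files directly, but if native formats are preferred, a hedged conversion sketch looks like the following (flat/monolithic VMDKs convert most cleanly - snapshot chains are best consolidated inside VMware first; file names are placeholders):

        # VMware disk -> qcow2 for KVM/libvirt
        qemu-img convert -f vmdk -O qcow2 guest.vmdk guest.qcow2
        # VMware disk -> VDI for VirtualBox
        VBoxManage clonehd guest.vmdk guest.vdi --format VDI

    Keeping the original VMDKs until each converted guest has booted cleanly is the cheap insurance here.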

    Read the article

  • initrd problem and Kernel panics after openSUSE 11.2 upgrade.

    - by unixbhaskar
    I did the upgrade from openSUSE 11.1 to openSUSE 11.2 by running: zypper dup. Now when I try to boot the system it fails to sync with VFS and kernel panics, so clearly an initrd problem, if I'm not mistaken. A bit of explanation about the problem: while upgrading, it showed me an error (or possibly a warning - I forget the exact text) about updating the initramfs, and it showed some GRUB warnings too. I had been doing all of that from a chroot environment, with all the required filesystems mounted in the proper places. After a bit of googling and painfully searching the susegeek.com and opensuse.org forums, I decided to recreate the initrd, but "mkinitrd" is real crap, as a few forum members have pointed out to me. I tried to make an initrd image myself and failed: if I boot into the SUSE live CD and mount the partition, it reports that the device is not found; when I try from the chrooted environment it says "there is no space left on the device". A bit bemused :( - yeah, it may well be my lack of knowledge. Kindly suggest and show me the steps to do it correctly and get openSUSE 11.2 up and running. TIA
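
    Both symptoms ("device not found" and "no space left on the device") are what you typically see when the chroot is missing bind mounts for /dev, /proc and /sys, so mkinitrd can neither see the real block devices nor use a sensibly sized /tmp. A hedged sketch from the 11.2 live CD, with /dev/sda2 standing in for whatever your real root partition is:

        # /dev/sda2 is a placeholder for the installed root partition
        mount /dev/sda2 /mnt
        mount --bind /dev  /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys  /mnt/sys
        chroot /mnt /bin/bash
        mkinitrd          # with no arguments, rebuilds initrds for the installed kernels
        exit

    If GRUB also complained during the upgrade, re-checking /boot/grub/menu.lst against the new kernel/initrd file names from inside the same chroot is worth doing before rebooting.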

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams and I will not be going in depth with Scrum but you can find out more about Scrum by reading the Scrum Guide and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS which have been developed during our own Scrum implementations. Acknowledgements Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010 Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance including the Branching Guidance for TFS 2010 Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs own project, backlog, budget and Team. Scenario: A product is developed RTM 1.0 is released and gets great sales.  Extra features are demanded but the new version will have double to price to pay to recover costs, work is approved by the guys with budget and a few sprints later RTM 2.0 is released.  Sales a very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches. As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms merge (forward integration) from parent to child (same direction as the branch), and merge  (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.   As I mentioned previously we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds don’t you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching  structures from he... We should somehow convince everyone that there is a happy between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc. 
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well? Many development projects consist of, a single “Main” line of source and artifacts. This is good; at least there is source control . There are however a couple of issues that need to be considered. What happens if: you and your team are working on a new set of features and the customer wants a change to his current version? you are working on two features and the customer decides to abandon one of them? you have two teams working on different feature sets and their changes start interfering with each other? I just use labels instead of branches? That's a lot of “what if’s”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch.   Its standard practice for large projects with lots of developers to use Feature branching and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has the permissions changes so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2 as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.   Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developer’s work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development: The Team completes their work according to their definition of done. Merge from “Main” into “Sprint1” (Forward Integration) Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (we will talk about release branches later) Merge from “Sprint1” into “Main” to commit your changes. 
(Reverse Integration) Check-in Delete the Sprint1 branch Note: The Sprint 1 branch is no longer required as its useful life has been concluded. Check-in Done But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet, we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result you fix it on the Release branch. If during your final testing or your “Test Please” you find there are issues or bugs then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product owner. Make sure you leave plenty of time between your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch instead that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product you can create a RTM branch. Once you have fixed all the bugs you can, and added any you can’t to the Product Backlog, and you Team is happy with the result you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an Audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not Auditable and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check. 
Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch the RTM branch should never be deleted, or only deleted according to your companies legal policy, which in the UK is usually 7 years.   Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web test and all the other good engineering practices that help produce reliable software.     Figure: After the Sprint Planning meeting the process begins again. Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2 you can create your new Branch to develop in. How do we handle a bug(s) in production that can’t wait? Although in Scrum the only work done should be on the backlog there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging print 2 back into Main.” covers, what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost): Backlog: Add the bug to the backlog and fix it in the next Sprint Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s) Note: This can be a very costly if the current sprint has already had a lot of work completed as it will be lost. The choice will depend on the complexity and severity of the bug(s) and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3 as they are uncomplicated but severe bugs. Figure: Real world issue where a bug needs fixed in the current release. 
If the bug(s) is urgent enough then then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch, are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main and Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of Sync with Main. Note: Avoid the “Big bang merge” at all costs.   Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it as I find these exploratory posts coupled with real world experience really help harden my understanding.  Branching is a tool; it is not a silver bullet. Don’t over use it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010
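
    For teams that drive this from the command line rather than Source Control Explorer, the Sprint branch lifecycle described above maps onto a handful of tf.exe operations. The sketch below uses $/Product as a placeholder team project path and assumes a mapped workspace; it is an illustration of the flow, not part of the original guidance:

        REM create the sprint branch from Main
        tf branch $/Product/Main $/Product/Sprint1 /checkin /comment:"Open Sprint 1"
        REM forward integration: Main -> Sprint1, then resolve and check in
        tf merge $/Product/Main $/Product/Sprint1 /recursive
        tf checkin /comment:"FI from Main"
        REM reverse integration: Sprint1 -> Main, then resolve and check in
        tf merge $/Product/Sprint1 $/Product/Main /recursive
        tf checkin /comment:"RI from Sprint 1"
        REM retire the sprint branch once its useful life is over
        tf delete $/Product/Sprint1
        tf checkin /comment:"Close Sprint 1"

    The same pair of merge commands, with the Release or RTM paths substituted, covers the hotfix flows shown in the later figures.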

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2, In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, Let’s start by understanding the components of Azure we’ll be making use of followed by manually putting them together to create the test rig, so… let’s get down dirty start setting up the Test Rig.  What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute and to enable communication between Test Controller and Test Agents we’ll make use of Windows Azure Connect.  Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (How will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure connect. A very useful video explaining everything you wanted to know about Windows Azure connect.  Azure Compute Windows Azure compute provides developers a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles.’ Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role, we’ll be using the Worker role to set up the Test Agents. A very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server... Virtual Machine Size CPU Cores Memory Cost Per Hour Extra Small Shared 768 MB $0.04 Small 1 1.75 GB $0.12 Medium 2 3.50 GB $0.24 Large 4 7.00 GB $0.48 Extra Large 8 14.00 GB $0.96   You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… Configuration Machine Role Comments VM – 1 Domain Controller for Playpit.com On Premise VM – 2 TFS, Test Controller On Premise VM – 3 Test Agent Cloud   In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller Installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog, Brian also has some great hands on Labs on TFS 2010 that you may want to explore. I. Lets start building VM – 3: The Test Agent Download the Windows Azure SDK and Tools Open Visual Studio and create a new Windows Azure Project using the Cloud Template                   Choose the Worker Role for reasons explained in the earlier post         The WorkerRole.cs implements the Run() and OnStart() methods, no code changes required. You should be able to compile the project and run it in the compute emulator (The compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.                   
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I'll be showing you how to install Connect on the on premise machine.

EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure Worker Role VM to the domain.

DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I extracted it from Server Manager and 'Active Directory Users and Computers'. Also, I created a new domain OU named 'CloudInstances' so that all the cloud instances joined to my domain show up there; this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I'll be covering that when I come to certificates and encryption in the coming section.

Once you have filled all this information in, the configuration file should look something like this:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
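Once the role instance is up, a quick way to sanity-check that the domain join actually happened is to query the computer's domain from inside the VM. This is a troubleshooting sketch of my own, not part of the original walkthrough; run it on the Worker Role VM (or call the same API temporarily from OnStart() and trace the result). The play.pit.com value mentioned in the comment is simply the domain used in this post.

using System;
using System.DirectoryServices.ActiveDirectory; // add a reference to System.DirectoryServices.dll

class DomainJoinCheck
{
    static void Main()
    {
        try
        {
            // Returns the domain the machine is joined to, e.g. play.pit.com if EnableDomainJoin worked.
            Domain domain = Domain.GetComputerDomain();
            Console.WriteLine("Joined to domain: " + domain.Name);
        }
        catch (ActiveDirectoryObjectNotFoundException)
        {
            // Thrown when the machine is still in a workgroup, i.e. the domain join did not happen.
            Console.WriteLine("This machine is not joined to any domain.");
        }
    }
}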
Next we will enable the Remote Desktop module in ServiceDefinition.csdef. We could make the changes manually or let a beautiful wizard help us; I prefer the second option. So right click on the Windows Azure project and choose Publish.

In the publish wizard, if you haven't already done so, you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'.

As soon as you do that you get another pop up asking for the details of the user that you will be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click it to get access to the certificate section. See the screen shot below.

From the drop down, select the option to create a new certificate. In the pop up window enter the friendly name for your certificate; in my case I entered 'WAC – Test Rig' and clicked OK. This creates a new certificate for you. Click on the View button to see the certificate details. Do you see the Thumbprint? This is the value that will go in the config file (very important). Now click on the Copy to File button to export the certificate; we will need to import it into the Windows Azure Management Portal later, so make sure you save it in a safe location.

Click Finish and enter the details of the user you would like to create with permissions for remote desktop access. Once you have entered the details on the 'Remote Desktop Configuration' screen, click OK. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don't want to publish the role just yet, and Yes because we want to save all the changes in the config file.

Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder modules have been imported for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Now go to the ServiceConfiguration.Cloud.cscfg file and you will see that a whole bunch of 'Microsoft.WindowsAzure.Plugins.RemoteAccess.' settings have been added for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword"
               value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4aeADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWSAKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccgvzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaMCp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

Okay, let's look at these one at a time.

Enabled – Yes, we would like to enable remote access.

AccountUsername – This is the user name you entered while you were on the publish Windows Azure role screen, as detailed above.

AccountEncryptedPassword – Try and decode that; the certificate is used to encrypt the password you specified for the user account (see the sketch after this list for what that encryption looks like under the covers). Remember, earlier I said either follow the instructions or wait and I'd show you the encryption here. The user account I am using for RDP has the same password as my domain account, so I can simply copy the value of AccountEncryptedPassword into DomainPassword as well.

AccountExpiration – This is the expiration date you specified in the wizard earlier; make sure your account does not expire today.

RemoteForwarder – Check out the documentation; below is how I understand it.
-- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances.
-- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.

Certificate – Remember the certificate thumbprint from the wizard: the on premise machine and the Windows Azure role machine that need to speak to each other must have the same thumbprint. More on that when we install the Windows Azure Connect endpoints on the on premise machine.
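If you are curious what the wizard does to produce the AccountEncryptedPassword value, the sketch below shows the general idea: the plain-text password is encrypted for the certificate you just created using a PKCS#7/CMS envelope, and the result is Base64 encoded. This is my own illustration rather than code from the walkthrough; the certificate path and password literals are placeholders, and the exact text encoding the tooling uses may differ from the UTF-8 assumed here.

using System;
using System.Security.Cryptography.Pkcs;              // reference System.Security.dll
using System.Security.Cryptography.X509Certificates;
using System.Text;

class PasswordEncryptionSketch
{
    static void Main()
    {
        // Placeholder values - point this at your exported 'WAC - Test Rig' certificate and your own password.
        var certificate = new X509Certificate2(@"C:\certs\WAC-TestRig.cer");
        string password = "YourRemoteDesktopPassword";

        // Wrap the password bytes in a CMS/PKCS#7 envelope encrypted for the certificate's public key.
        var content = new ContentInfo(Encoding.UTF8.GetBytes(password));
        var envelope = new EnvelopedCms(content);
        envelope.Encrypt(new CmsRecipient(certificate));

        // A Base64 string of this shape is what ends up in AccountEncryptedPassword
        // (and, if the passwords match, can be reused for Connect.DomainPassword).
        Console.WriteLine(Convert.ToBase64String(envelope.Encode()));
    }
}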
As I said earlier, in this blog post I'll be showing you the manual process, so I won't be scripting any startup tasks to install the test agent or register it with the TFS Server. I'll be showing you all that cool stuff in the next blog post; it's important to understand the manual side of it first, because that makes it easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it joined to the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server.

II. Windows Azure Connect: Setting up Connect on VM – 2, i.e. TFS & Test Controller

Glad you made it this far. We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on premise machines so that the TFS/Test Controller can talk to the Azure-ed Test Agent. Let's have a look at how to do this.

- Log on to VM – 2 running the TFS Server and Test Controller.
- Log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the screen shot below; if not, you will be asked to complete the subscription first.
- Click on Install Local Endpoints at the top left of the panel and you get a URL with a token ID appended to it. Remember the token I showed you earlier: the token you get here should match the token you added to the Test Agent config file.
- Copy the URL to the clipboard and paste it into Internet Explorer (important: at present the installation only works from IE, and you need cookies enabled to complete it). As stated in the pop up, you can NOT download the software and run it later; you need to run it as is, since it contains the token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.
- Right click the Azure Connect icon, choose Diagnostics, and refer to this link for the diagnostic terminology.
- NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray; a bit of Binging with Google revealed that the icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of its dependent services was disabled. Look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to have the Windows Azure Connect Endpoint service running before you can proceed. Please refer to MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)
- Now go back to the Windows Azure Management Portal and, from Groups and Roles, create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint).
- Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy', you will notice that the icon's status changes from disconnected to ready for connection.

III. Importing the Certificate into the Windows Azure Management Portal

Before we publish the role, you need to import the certificate you created in Step I into the Windows Azure Management Portal.

- Log on to the Windows Azure Management Portal, click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificate as shown in the screen shot below.
- Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot).
- You should now be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files.
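A thumbprint mismatch between the uploaded certificate and the value in ServiceConfiguration.Cloud.cscfg is an easy mistake to make, so it is worth double-checking before you publish. Here is a small sketch of my own (the file path is a placeholder) that prints the thumbprint of the .cer file you exported with 'Copy to File':

using System;
using System.Security.Cryptography.X509Certificates;

class ThumbprintCheck
{
    static void Main()
    {
        // Placeholder path - point this at the certificate you exported earlier.
        var certificate = new X509Certificate2(@"C:\certs\WAC-TestRig.cer");

        // Compare this value with the thumbprint attribute of the
        // Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption certificate element.
        Console.WriteLine("Thumbprint: " + certificate.Thumbprint);
    }
}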
IV. Publish the Windows Azure Worker Role aka Test Agent

Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!

From the View menu, select the Windows Azure Activity Log window. You should now be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of the new Worker Role.

Once the deployment is complete you should be able to RDP into the Test Agent Worker Role VM from the Playpit network using the domain admin account (go to the run prompt, type mstsc and enter the machine name in the pop up). If you are unable to log in to the Test Agent using the domain admin account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine through the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), fix the problem or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain and that the Connect service is running successfully. If yes, give yourself a pat on the back: you are 80% mission accomplished!

Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, select Test Rig and click Edit Group to edit the Test Rig group you created earlier. In the 'Connect to' section, click Add to select the worker role you have just deployed. Also check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and the test agents, and between test agents themselves. Click Save.

Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent to the Controller

Log in to the Worker Role Test Agent VM that you have just deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the ISO and navigate to where you extracted it. In my case I extracted the ISO to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the Test Agent on the VM, refer to the walkthrough on MSDN.

Once you have successfully installed the Test Agent software you will need to configure it. Right click the Test Agent configuration tool and run it as a different user, i.e. an administrator. This is really to run the configuration wizard with elevated privileges (UAC might block some things otherwise).

In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that no policies or restrictions block you.

Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication.

And now the moment of truth! Go to VM – 2, open Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here.

VI. Creating and Running Load Tests on Your Brand New Azure-ed Test Rig

I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

Blog Posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate

Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate

Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure Test Rig: create a new test settings file, change the test execution method to 'Remote Execution' and select the test controller you configured the Worker Role Test Agent against, in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

Review and What's Next?

A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback!

Advantages
- Utilize the power of Azure compute to run a heavy virtual user load.
- Benefit from Azure's flexibility: destroy Test Agents when not in use, and spin up a new Test Agent in under 25 minutes.
- Most importantly, test network latency (network latency and speed of connection are two different things; network latency is usually very hard to test). By placing the Test Agents in Microsoft data centres around the globe, you can actually measure the lag in transferring the bytes, not because of a slow connection, but because the page has been requested from the other side of the globe.

Next Steps
The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated.

Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

