Search Results

Search found 2075 results on 83 pages for 'migration assistant'.


  • MySQL cluster: 20Tb x 3K tables

    - by ethrbunny
    Over the next 2-3 years we will be scaling up data collection for a project. As a result the amount of data will grow 10-fold. Our current MySQL installation can keep up with the 2Tb of data, but for larger queries there is a fair amount of IOWait. I'm investigating a migration to a clustered solution to spread out the IO, but am wondering about NDB and what happens to data that doesn't get accessed very often. The impression I get from reading about MySQL Cluster is that it relies on in-memory tables for most of the data. What happens with tables that don't get accessed very often (or at all)? And how does backup work? Can I use mysqldump, or is there a better solution?
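
    For context, a hedged sketch of the two MySQL Cluster features usually raised for exactly this question: disk data tables (so rarely-touched, non-indexed columns do not have to live in memory) and the cluster's native online backup. The names and sizes below are placeholders, not a recommendation.

        -- Disk-based NDB storage; indexed columns still stay in memory
        CREATE LOGFILE GROUP lg_1
            ADD UNDOFILE 'undo_1.log'
            INITIAL_SIZE 1G
            ENGINE NDBCLUSTER;

        CREATE TABLESPACE ts_1
            ADD DATAFILE 'data_1.dat'
            USE LOGFILE GROUP lg_1
            INITIAL_SIZE 32G
            ENGINE NDBCLUSTER;

        CREATE TABLE readings (
            id BIGINT NOT NULL PRIMARY KEY,
            recorded_at DATETIME,
            payload VARBINARY(1024)
        ) TABLESPACE ts_1 STORAGE DISK ENGINE NDBCLUSTER;

        -- Native online backup is taken from the management client and restored
        -- with ndb_restore; mysqldump still works, but only as a logical dump.
        -- shell> ndb_mgm -e "START BACKUP"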

    Read the article

  • Exchange 2010 Autodiscover/OAB update issue

    - by bulldog5046
    Midway through a migration from 2003 to 2010, and with a few test users on 2010, I've noticed that the OAB is not being downloaded to Outlook clients. I've checked that the URLs are configured, added both our CAS servers to the web-based distribution list for the OAB, and assigned the OAB to the 2 mailbox databases we use, but when I use Outlook's 'Test E-Mail AutoConfiguration' test I still see that autodiscover says "OAB URL: Public Folder" even though I've now deselected that option. I've run Test-OutlookWebServices, which was giving an OAB error about no URL in autodiscover, but having just re-run it, that now appears fine, yet the autoconfiguration test still does not. Does anyone have any idea why I'm getting this discrepancy?

    Read the article

  • Copying large Windows directory structure to new server with permissions intact

    - by Chris
    I'm soon going to be doing a large migration of an old-school web server that serves mostly ASP pages (currently on a Windows 2003 server) to a newer, virtualized Windows 2008 server. This new server is going to be in a different domain, as well. So I'm copying the root web folder, and all its subfolders and files, to this new server. I'd like to keep permissions intact. It's also pretty massive - I'd like to be able to compress it before transferring to the new server with permissions intact. Any way to do that? And will the new server being in a different AD domain screw with my plans?
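
    As a sketch of the usual tooling for this (server and share names are placeholders): robocopy can walk the tree and carry the NTFS ACLs across in one pass, though ACL entries that reference accounts from the old domain will not resolve in the new domain unless a trust or a SID-translation step (ADMT, for example) is in place. Compressing into a plain zip first generally drops the ACLs, so most people copy uncompressed or use a backup-format copy instead.

        rem /E = all subfolders, /COPYALL = data, attributes, timestamps, NTFS ACLs, owner, auditing
        robocopy D:\wwwroot \\NEWSERVER\D$\wwwroot /E /COPYALL /R:2 /W:5 /LOG:C:\temp\webcopy.log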

    Read the article

  • Moving Domain Controller Guests between Hyper-V Hosts

    - by Jim
    We're moving our domain controller to a new Hyper-V host. I read on TechNet that you should not use export on a VM running as a DC (although I saw a lot of answers on TechNet suggesting doing exactly that to move a DC). What we plan to do is shut down the VM, move the VHD to the new Hyper-V host, then create a new VM using that VHD. I don't think USN rollback would occur, since it's like shutting down the VM and starting it back up. We have another Hyper-V host with a DC guest that will be running during the migration. All the hosts and VMs are running Windows Server 2008 R2. Is this a good way to move a virtualized DC between hosts? If not, how should I proceed?

    Read the article

  • Web Server Setup

    - by gustyaquino
    Hello. In my workplace, we want to run our own web server for at least 100 Apache/PHP/MySQL web pages. My boss is opposed to hiring skilled personnel; he thinks we can do it ourselves. Currently, we are working with a HostGator reseller account. I chose CentOS as the operating system, but I don't know the best hardware solution. HP, Dell? What about the setup on these platforms? Thanks. PS: sorry for my bad English. Edit: The purpose of this migration isn't related to performance issues, but independence.

    Read the article

  • Forward Request to Multiple Servers

    - by cactuarz
    We have 2 servers: one is the old server and the other is the new one. We are currently doing a migration because the old server is not capable enough to handle everyday requests. The specs are:

    Old server: Ubuntu 10.04, Nginx as reverse proxy, Apache, WSGI, Python/Django
    New server: Ubuntu 10.04, Nginx, Gunicorn, Python/Django, Celery+Redis

    Our manager asked us to research whether the old server can forward all incoming requests to multiple destinations, for example setting Nginx on the old server to forward every request to both the old and the new server. The purpose is to test the new server, using the old server as a comparison, to see if the new server is ready to take over the role. Please help, whether there is an idea, something we must install, or whether what we want is impossible. Many thanks.
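
    One often-cited way to duplicate traffic with the nginx versions of that era is the undocumented post_action directive, sketched below; the addresses are placeholders, the mirrored response is simply discarded, and there are caveats (request bodies and timing in particular), so treat it as a hack rather than a supported feature. Recent nginx releases have a proper mirror module for this.

        location / {
            proxy_pass http://127.0.0.1:8080;             # existing old stack answers the client
            post_action /mirror;                          # fire-and-forget copy of the request
        }

        location /mirror {
            internal;
            proxy_pass http://new-server.example.com$request_uri;
        }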

    Read the article

  • How can I re-create Microsoft Cluster Service resource groups on a different cluster?

    - by PersonalNexus
    I use Microsoft Cluster Service on a cluster of Windows Server 2003 machines containing several dozen resource groups. In the process of migrating to newer hardware, I would like to move resources to the new machines one resource group at a time, spread out over a few days, to ease the migration and minimize risk. I was wondering if there is a smarter way to do this than manually re-creating resources on the new cluster and then deleting them on the old one. The new cluster has already been set up properly; the only things missing are the resource groups and the resources they contain (IPs, network names, services...). I have looked through the options of the cluster admin GUI and cluster.exe's command-line options, but haven't found anything like an import/export feature to copy over the configuration of a resource or an entire resource group. Does something like this exist?
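
    One way to make the re-creation repeatable (if not automatic) is to read each resource's private properties off the old cluster and script the creation on the new one with cluster.exe; the names and addresses below are placeholders, and the exact property set depends on the resource type.

        rem Dump the existing settings to copy from
        cluster /cluster:OLDCLUSTER resource "App1 IP" /priv

        rem Re-create the group and resources on the new cluster
        cluster /cluster:NEWCLUSTER group "App1 Group" /create
        cluster /cluster:NEWCLUSTER resource "App1 IP" /create /group:"App1 Group" /type:"IP Address"
        cluster /cluster:NEWCLUSTER resource "App1 IP" /priv Address=10.0.0.50 SubnetMask=255.255.255.0 Network="Public"
        cluster /cluster:NEWCLUSTER resource "App1 Name" /create /group:"App1 Group" /type:"Network Name"
        cluster /cluster:NEWCLUSTER resource "App1 Name" /priv Name=APP1
        cluster /cluster:NEWCLUSTER resource "App1 Name" /adddep:"App1 IP"
        cluster /cluster:NEWCLUSTER group "App1 Group" /online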

    Read the article

  • SVN Checkout error on large repositories

    - by Brian Mitchell
    I wonder if anyone can help me. We have recently migrated our Subversion repository from a VisualSVN Server on Windows to a Subversion server on CentOS. The migration was successful, however we are getting the following error message: svn: REPORT of '/svn/MangoRepository/!svn/vcc/default': Could not read chunk size: connection was closed by server (http://servername). The workaround for this is simply to perform an update on the repo and it will continue where it left off. I'm just wondering if anyone has a permanent fix for this, as it can be quite frustrating to repeat myself to 60-70 developers.
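
    For what it's worth, this particular error is usually blamed on the Apache front end dropping long-running checkout connections; one commonly suggested mitigation (not a guaranteed fix) is to raise the relevant timeouts in the httpd configuration on the CentOS box and restart httpd. The values below are illustrative only.

        # /etc/httpd/conf/httpd.conf
        Timeout 900
        KeepAlive On
        KeepAliveTimeout 60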

    Read the article

  • Local Profile Map to New Active Directory Login

    - by user42937
    Preface: I am sure this has been asked somewhere on the site before, but I couldn't find any questions about it; maybe I am not using the correct verbiage... Our admins are giving us new Active Directory accounts on a different domain. As I am a programmer (a member of IT), we are the group that gets assigned new accounts first to test the migration. When I log in to my local machine using the new account I get a new local profile. Not the biggest deal, but on the new profile I am missing mappings, desktop items, wallpaper, etc. Our users are going to throw a fit if there is no way around this. Two questions: I've seen references to NTUSER.DAT and suggestions to copy all user files from "Documents and Settings", but is there a good way (is it even possible) to associate my existing local profile with the new AD account? Is there anything that our admins can do to prevent this from happening?
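
    For reference, the usual "re-point rather than copy" trick is to make the new account's SID use the old profile folder via the ProfileList registry key and then grant the new account rights on that folder; the account name, path, and SID below are placeholders, and admins typically script this (or use tools such as ADMT or the resource-kit moveuser.exe) rather than doing it by hand.

        rem Find the new account's SID and the existing profile paths
        wmic useraccount where name='jdoe' get sid
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath

        rem Point the new SID at the old profile folder, then fix the folder's NTFS permissions
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<new-account-SID>" /v ProfileImagePath /t REG_EXPAND_SZ /d "C:\Documents and Settings\jdoe" /f
        cacls "C:\Documents and Settings\jdoe" /T /E /G NEWDOMAIN\jdoe:F

    The registry hive inside NTUSER.DAT carries its own permissions for the old account, which is one reason dedicated migration tools are usually preferred over the manual approach.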

    Read the article

  • rsync invocation to replace symlinks pointing to source?

    - by bdbaddog
    Currently I'm moving a big filesystem to a new server as the original fileserver is no longer able to handle the filesystem writes. To make this quick I made symlinks at the target filesystem pointing to the original filesystem. Initially: /company/release (mountpoint of the original filesystem) After migration: /company/release.old (points to original filesystem after automount map update) /company/release (points to new fileserver/filesystem after automount map update) In /company/release there are symlinks like the following: /company/release/product-1.0.tar.gz - /company/release.old/product-1.0.tar.gz /company/release/product-1.0 - /company/release.old/product-1.0 (this is a tree of files) Using symlinks allowed me to move the writes to the new filesystem quickly. Now I'd like to slowly migrate the existing files and directories to the new filesystem. The problem I'm running into is that since the symlinks point back at the original files rsync doesn't see any difference and so it doesn't actually copy the file(s) or directory(s) and remove/overwrite the symlinks. Is there a set of rsync flags which will do what I want?

    Read the article

  • How to put text in same row but different column if a certain text is present in the same row?

    - by melai
    How can I put text in the same row but different column if a certain text is present in the same row? Issue Area Correction Done Process changed bin Process skip lap converted to global Security done global migration Process changed bin How can I code this in a macro? For example: If the correction done is in the cell, the Issue should be Process automatically. If the word global is present the Issue should be Security. I have 500 rows and I want to have the code until row 500.
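
    A macro is not strictly required for this. Assuming (and this is a guess at the layout) that the Issue sits in column A and the correction text in column B starting at row 2, a worksheet formula filled down to row 500 covers both rules; SEARCH is case-insensitive, so "global" is caught anywhere in the text.

        =IF(B2="","",IF(ISNUMBER(SEARCH("global",B2)),"Security","Process"))

    The VBA equivalent would simply loop over rows 2 to 500 and apply the same InStr test to each correction cell.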

    Read the article

  • Exchange 2010: Import a PST when Local Move Request fails

    - by gravyface
    So the trail of tears continues with my SBS 2003 to 2011 migration: all the mailboxes have been moved from OLDSERVER to NEWSERVER, with the Local Move Requests completing successfully, except for one. I've logged into that user's machine and exported their mailbox as a PST. I'm about to import it, but it seems to me that because the mailbox is still on OLDSERVER, even with a new Outlook profile pointing to NEWSERVER, it'll push the mail into the current mailbox store on the old server. Please tell me I don't have to blow away her existing mailbox, logon, etc. on the old SBS server: is there a way to change the state from "Legacy" to "User Mailbox" without actually moving the mailbox store? Or should I create a new mailbox for her user on NEWSERVER?
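
    For reference, once the mailbox actually lives on the Exchange 2010 side (SBS 2011 ships with Exchange 2010 SP1), the PST can be pulled in server-side instead of through Outlook. The mailbox name and share path below are placeholders, the PST must sit on a UNC share readable by the Exchange Trusted Subsystem, and the import role has to be granted once first.

        # One-time: grant yourself the import/export role
        New-ManagementRoleAssignment -Role "Mailbox Import Export" -User administrator

        # Import the exported PST into the mailbox and watch progress
        New-MailboxImportRequest -Mailbox jsmith -FilePath \\NEWSERVER\pst$\jsmith.pst
        Get-MailboxImportRequest | Get-MailboxImportRequestStatistics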

    Read the article

  • Can Google App users view Exchange users public calendars and contacts?

    - by CT
    My company currently uses MS Exchange 2003 for company email, contacts, and calendars. We have approximately 150 users, in the construction industry. I would like to look into migrating from Exchange to Google Apps. It will be an easier sell to the powers that be if we can successfully migrate certain smaller departments first rather than moving the entire company at once. I would like to first migrate our field superintendents, who are usually out of the office working remotely (approx. 30 users). Will Google Apps users be able to see our Exchange users' calendars and vice versa? How about public folders? Anyone's migration story is much appreciated. Thank you.

    Read the article

  • Rails on server syntax error?

    - by Danny McClelland
    Hi Everyone, I am trying to get my Rails application running on my web server, but when I run rake db:migrate I get the following error:

        root@oak [/home/macandco/rails_apps/survey_manager]# rake db:migrate
        (in /home/macandco/rails_apps/survey_manager)
        == Baseapp: migrating ========================================================
        -- create_table(:settings, {:force=>true})
           -> 0.0072s
        -- create_table(:users)
           -> 0.0072s
        -- add_index(:users, :login, {:unique=>true})
           -> 0.0097s
        -- create_table(:profiles)
           -> 0.0084s
        -- create_table(:open_id_authentication_associations, {:force=>true})
           -> 0.0067s
        -- create_table(:open_id_authentication_nonces, {:force=>true})
           -> 0.0064s
        -- create_table(:roles)
           -> 0.0052s
        -- create_table(:roles_users, {:id=>false})
           -> 0.0060s
        rake aborted!
        An error has occurred, all later migrations canceled:

        555 5.5.2 Syntax error. g9sm2526951gvc.8

    Has anyone come across this before? Thanks, Danny

    Main migration file:

        class Baseapp < ActiveRecord::Migration
          def self.up
            # Create Settings Table
            create_table :settings, :force => true do |t|
              t.string :label
              t.string :identifier
              t.text :description
              t.string :field_type, :default => 'string'
              t.text :value
              t.timestamps
            end

            # Create Users Table
            create_table :users do |t|
              t.string :login, :limit => 40
              t.string :identity_url
              t.string :name, :limit => 100, :default => '', :null => true
              t.string :email, :limit => 100
              t.string :mobile
              t.string :signaturenotes
              t.string :crypted_password, :limit => 40
              t.string :salt, :limit => 40
              t.string :remember_token, :limit => 40
              t.string :activation_code, :limit => 40
              t.string :state, :null => :false, :default => 'passive'
              t.datetime :remember_token_expires_at
              t.string :password_reset_code, :default => nil
              t.datetime :activated_at
              t.datetime :deleted_at
              t.timestamps
            end

            add_index :users, :login, :unique => true

            # Create Profile Table
            create_table :profiles do |t|
              t.references :user
              t.string :real_name
              t.string :location
              t.string :website
              t.string :mobile
              t.timestamps
            end

            # Create OpenID Tables
            create_table :open_id_authentication_associations, :force => true do |t|
              t.integer :issued, :lifetime
              t.string :handle, :assoc_type
              t.binary :server_url, :secret
            end

            create_table :open_id_authentication_nonces, :force => true do |t|
              t.integer :timestamp, :null => false
              t.string :server_url, :null => true
              t.string :salt, :null => false
            end

            create_table :roles do |t|
              t.column :name, :string
            end

            # generate the join table
            create_table :roles_users, :id => false do |t|
              t.column :role_id, :integer
              t.column :user_id, :integer
            end

            # Create admin role and user
            admin_role = Role.create(:name => 'admin')
            user = User.create do |u|
              u.login = 'admin'
              u.password = u.password_confirmation = 'advices'
              u.email = '[email protected]'
            end
            user.register!
            user.activate!
            user.roles << admin_role
          end

          def self.down
            # Drop all BaseApp
            drop_table :settings
            drop_table :users
            drop_table :profiles
            drop_table :open_id_authentication_associations
            drop_table :open_id_authentication_nonces
            drop_table :roles
            drop_table :roles_users
          end
        end

    Read the article

  • A methodology that allows for a single Java code base covering many different versions?

    - by Thorbjørn Ravn Andersen
    I work in a small shop where we have a LOT of legacy Cobol code and where a methodology has been adopted to allow us to minimize forking and branching as much as possible. For a given release we have three levels:

    CORE - bottom layer; this code is common to all releases.
    GROUP - optional code common to several customers.
    CUSTOMER - optional code specific to a single customer.

    When a program is needed, it is first searched for in CUSTOMER, then in GROUP and finally in CORE. A given application for us invokes many programs which are all looked for in this sequence (think exe files and PATH under Windows). We also have Java programs interacting with this legacy code, and as the core-group-customer lookup mechanism does not lend itself easily to Java, it has tended to grow into a CVS branch for each customer, requiring much too much maintenance. The Java part and the backend part tend to be developed in parallel. I have been assigned to figure out a way to make the two worlds meet. Essentially we want a Java environment which allows us to have a single code base with sources for each release, where we can easily select a group and a customer and work with the application as it goes for that customer, and then easily switch to another codeset and THAT customer. I was thinking of perhaps a scenario with an Eclipse project for each core, customer, and group and then using Project Sets to select those we need for a given scenario. The problem I cannot get my head around is how we would create robust code in the CORE projects which will work regardless of which group and customer is selected. A Factory class which knows which subclass of a passed Class object to invoke instead of each and every new? Others must have had similar code base management problems. Anybody with experiences to share?

    EDIT: The conclusion to the problem above has been that CVS needs to be replaced with a source code management system better suited for dealing with many branches concurrently and for migrating source from one component to the other while keeping history. Inspired by the recent migration by slf4j and logback we are currently looking at git as it handles branches very well. We've considered Subversion and Mercurial too, but git appears to be better for single-location, multibranched projects. I've asked about Perforce in another question, but my personal inclination is towards open source solutions for something as crucial as this.

    EDIT: After some more pondering, we've found that our actual pain point is that we use branches in CVS, and that branches in CVS are easiest to work with if you branch ALL files! The revised conclusion is that we can do this with CVS alone, by switching to a forest of Java projects, each corresponding to one of the levels above, and using the Eclipse build paths to tie them together so each CUSTOMER version pulls in the appropriate GROUP and CORE project. We still want to switch to a better versioning system, but this is so important a decision that we want to delay it as much as possible.

    EDIT: I now have a proof-of-concept implementation of the CORE-GROUP-CUSTOMER concept using Google Guice 2.0 - the @ImplementedBy tag is just what we need. I wonder what everybody else does? Using if's all over the place?

    EDIT: Now I also need this functionality for web applications. Guice was until the JSR-330 is in place. Anybody with versioning experience?
EDIT: JSR-330/299 is now in place with the JEE6 reference implementation Weld based on JBoss Seam and I have reimplemented the proof-of-concept with Weld and can see that if we use @Alternative along with ... in beans.xml we can get the behaviour we desire. I.e. provide a new implementation for a given functionality in CORE without changing a bit in the CORE jars. Initial reading up on the Servlet 3.0 specification indicates that it may support the same functionality for web application resources (not code). We will now do initial testing on the real application.
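
    To make the @ImplementedBy idea concrete, here is a minimal Guice sketch (class and module names are invented for illustration): CORE codes against an interface whose default binding is declared on the interface itself, and a GROUP or CUSTOMER module overrides that binding without any change to CORE.

        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.ImplementedBy;

        // CORE: default wiring lives on the interface, so CORE never mentions customer classes.
        @ImplementedBy(CoreInvoiceFormatter.class)
        interface InvoiceFormatter {
            String format(double amount);
        }

        class CoreInvoiceFormatter implements InvoiceFormatter {
            public String format(double amount) { return String.format("%.2f", amount); }
        }

        // CUSTOMER: provides its own implementation and binds it in a module.
        class CustomerXInvoiceFormatter implements InvoiceFormatter {
            public String format(double amount) { return "X-" + String.format("%.2f", amount); }
        }

        class CustomerXModule extends AbstractModule {
            @Override protected void configure() {
                bind(InvoiceFormatter.class).to(CustomerXInvoiceFormatter.class);
            }
        }

        public class Demo {
            public static void main(String[] args) {
                // CORE-only build: the @ImplementedBy default is used.
                InvoiceFormatter core = Guice.createInjector().getInstance(InvoiceFormatter.class);
                // Customer build: installing the module overrides the default binding.
                InvoiceFormatter custX = Guice.createInjector(new CustomerXModule()).getInstance(InvoiceFormatter.class);
                System.out.println(core.format(12.5) + " / " + custX.format(12.5));
            }
        }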

    Read the article

  • Authlogic OpenID integration

    - by Craig
    I'm having difficulty getting OpenId authentication working with Authlogic. It appears that the problem arose with changes to the open_id_authentication plugin. From what I've read so far, one needs to switch from using gems to using plugins. Here's what I done thus far to get Authlogic-OpenID integration working: Removed relevant gems: authlogic authlogic-oid rack-openid ruby-openid * Installed, configured, and started the authlogic sample application (http://github.com/binarylogic/authlogic_example)--works as expected. This required: installing the authlogic (2.1.3) gem ($ sudo gem install authlogic) adding a dependency (config.gem "authlogic") to the environment.rb file. added migration to add open-id support to User model; ran migration; columns added as expected made changes to the UsersController and UserSessionsController to use blocks to save each. made changes to new user-sessions view to support open id (f.text_field :openid_identifier) installed open_id_authentication plugin ($ script/plugin install git://github.com/rails/open_id_authentication.git) installed the authlogic-oid plugin ($ script/plugin install git://github.com/binarylogic/authlogic_openid.git) installed the plugin ($ script/plugin install git://github.com/glebm/ruby-openid.git) restarted mongrel (CTRL-C; $ script/server) Mogrel failed to start, returning the following error: /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- rack/openid (MissingSourceFile) from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/plugins/open_id_authentication/lib/open_id_authentication.rb:3 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/plugins/open_id_authentication/init.rb:5:in `evaluate_init_rb' from ./script/../config/../vendor/rails/railties/lib/rails/plugin.rb:146:in `evaluate_init_rb' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings' from ./script/../config/../vendor/rails/railties/lib/rails/plugin.rb:142:in `evaluate_init_rb' from ./script/../config/../vendor/rails/railties/lib/rails/plugin.rb:48:in `load' from ./script/../config/../vendor/rails/railties/lib/rails/plugin/loader.rb:38:in `load_plugins' from ./script/../config/../vendor/rails/railties/lib/rails/plugin/loader.rb:37:in `each' from ./script/../config/../vendor/rails/railties/lib/rails/plugin/loader.rb:37:in `load_plugins' from 
./script/../config/../vendor/rails/railties/lib/initializer.rb:348:in `load_plugins' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:163:in `process' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:113:in `send' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:113:in `run' from /Users/craibuc/NetBeansProjects/authlogic_example/config/environment.rb:13 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/railties/lib/commands/server.rb:84 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from script/server:3 I suspect this is related the rack-openid gem, but as it was dependent upon the ruby-openid gem, it was removed when the ruby-openid gem was removed. Perhaps this can be installed as a plugin. Any assistance with this matter is greatly appreciated--I'm just about to give up on OpenId integration. * ruby-openid (2.1.2) is installed at /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8. I'm not certain if this is affecting anything. In any case, I'm not sure how to uninstall it or if I should. ** edit ** It appears that there are a number of gems in the /Library/Ruby/Gems/1.8/gems directory that may be causing an issue: authlogic-oid (1.0.4) rack-openid (1.0.3) ruby-openid (2.1.7) Questions: - why doesn't the gem list command list these gems? - Why doesn't the gem uninstall command remove these gems?

    Read the article

  • how to use ajax with json in ruby on rails

    - by fenec
    I am implemeting a facebook application in rails using facebooker plugin, therefore it is very important to use this architecture if i want to update multiple DOM in my page. if my code works in a regular rails application it would work in my facebook application. i am trying to use ajax to let the user know that the comment was sent, and update the comments bloc. migration: class CreateComments < ActiveRecord::Migration def self.up create_table :comments do |t| t.string :body t.timestamps end end def self.down drop_table :comments end end controller: class CommentsController < ApplicationController def index @comments=Comment.all end def create @comment=Comment.create(params[:comment]) if request.xhr? @comments=Comment.all render :json=>{:ids_to_update=>[:all_comments,:form_message], :all_comments=>render_to_string(:partial=>"comments" ), :form_message=>"Your comment has been added." } else redirect_to comments_url end end end view: <script> function update_count(str,message_id) { len=str.length; if (len < 200) { $(message_id).innerHTML="<span style='color: green'>"+ (200-len)+" remaining</span>"; } else { $(message_id).innerHTML="<span style='color: red'>"+ "Comment too long. Only 200 characters allowed.</span>"; } } function update_multiple(json) { for( var i=0; i<json["ids_to_update"].length; i++ ) { id=json["ids_to_update"][i]; $(id).innerHTML=json[id]; } } </script> <div id="all_comments" > <%= render :partial=>"comments/comments" %> </div> Talk some trash: <br /> <% remote_form_for Comment.new, :url=>comments_url, :success=>"update_multiple(request)" do |f|%> <%= f.text_area :body, :onchange=>"update_count(this.getValue(),'remaining');" , :onkeyup=>"update_count(this.getValue(),'remaining');" %> <br /> <%= f.submit 'Post'%> <% end %> <p id="remaining" >&nbsp;</p> <p id="form_message" >&nbsp;</p> <br><br> <br> if i try to do alert(json) in the first line of the update_multiple function , i got an [object Object]. if i try to do alert(json["ids_to_update"][0]) in the first line of the update_multiple function , there is no dialog box displayed. however the comment got saved but nothing is updated. questions: 1.how can javascript and rails know that i am dealing with json objects? 2.how can i debug this problem? 3.how can i get it to work?

    Read the article

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day, with the Launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (Released). We have had TFS 2010 in Production for nearly 6 months and have had only minimal problems. Update 12th April 2010  – Added Scott Hanselman’s tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition. Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! @shanselman - Scott Hanselman Same as before, we need to Uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes.   We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup. Done - Snapshot the hyper-v server Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server. Done - Uninstall Visual Studio Team Explorer 2010 RC You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server. Done - Uninstall TFS 2010 RC Done - Install TFS 2010 RTM Done - Configure TFS 2010 RTM Pick the Upgrade option and point it at your existing “tfs_Configuration” database to load all of the existing settings Done - Upgrade the SharePoint Extensions Upgrade Build Servers (Pending) Test the server The back out plan, and you should always have one, is to restore the snapshot. Upgrading to Team Foundation Server 2010 – Done The first thing you need to do is off the TFS server and then log into the Hyper-v server and create a snapshot. Figure: Make sure you turn the server off and delete all old snapshots before you take a new one I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM’ness that is coming next. The upgrade procedure for TFS2010 is to uninstal the old version and install the new one. Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.   Figure: Most of the heavy lifting is done by the Uninstaller, but make sure you have removed any of the client bits first. Specifically Visual Studio 2010 or Team Explorer 2010.  Once the uninstall is complete, this took around 5 minutes for me, you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations. Figure: It is always recommended to install the 64bit version of a server application where possible. 
I do not think it is likely, with SharePoint 2010 and Exchange 2010  and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on out TFS server along with Analysis services and Reporting Services. But our SharePoint server lives elsewhere. Figure: This always confuses people, but in reality it makes sense. Don’t install what you do not need. Every extra you install has an impact of performance. If you are integrating with SharePoint you will need to run this install on every Front end server in your farm and don’t forget to upgrade your Build servers and proxy servers later. Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)   It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers. Figure: Installing Microsoft .NET Framework 4 takes the most time. Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, Add-on’s, Check in Policies and other bits as well. Configure Team Foundation Server 2010 – Done Now you can configure the server. If you have no key you will need to pick “Install a Trial Licence”, but it is only £500, or free with a MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key. Figure: You can pick trial and add your key later using the TFS Server Admin. Here is where the real choices happen. We are doing an Upgrade from a previous version, so I will pick Upgrade the same as all you folks that are using the RC or TFS 2008. Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgraded them to the release.   Once you have entered your database server name you can click “List available databases” and it will show what it can upgrade. Figure: Select your database from the list and at this point, make sure you have a valid backup. At this point you have not made ANY changes to the databases. At this point the configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008 refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want you can pick different options. Figure: Set the application tier account and Authentication method to use. We use NTLM to keep things simple as we host our TFS server externally for our remote developers.  Figure: Setting your TFS server URL’s to be the remote URL’s allows the reports to be accessed without using VPN. Very handy for those remote developers. Figure: Detected the existing Warehouse no problem. Figure: Again we love green ticks. It gives us a warm fuzzy feeling. Figure: The username for connecting to Reporting services should be a domain account (if you are on a domain that is). Figure: Setup the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.   You then need to run all of your readiness checks. These check can save your life! 
it will check all of the settings that you have entered as well as checking all the external services are configures and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous version were not. Microsoft changes the install to two steps, Install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install that it will complete. if you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.   Figure: Now we have everything setup the configuration wizard can do its work.  Figure: Took a while on the “Web site” stage for some point, but zipped though after that.  Figure: last wee bit. TFS Needs to do a little tinkering with the data to complete the upgrade. Figure: All upgraded. I am not worried about the yellow triangle as SharePoint was being a little silly Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException) Exception Stack Trace:    at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) [Info   @16:10:16.307] Benign exception caught as part of verify: Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException) Exception Stack Trace:    at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)    at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify() Inner Exception Details: Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException) Exception Data Dictionary: ResponseStatusCode = InternalServerError I’ll look at SharePoint after, probably the SharePoint box just needs a restart or a kick If there is a problem with SharePoint it will come out in testing, But I will definatly be passing this on to Microsoft.   Upgrading the SharePoint connector to TFS 2010 You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this uninstall  the TFS 2010 RC from it in the same way as the server, and then install just the RTM Extensions. 
Figure: Only install the SharePoint Extensions on your SharePoint front end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.   Figure: When you configure SharePoint it uploads all of the solutions and templates. Figure: Everything is uploaded Successfully. Figure: TFS even remembered the settings from the previous installation, fantastic.   Upgrading the Team Foundation Build Servers to TFS 2010 Just like on the SharePoint servers you will need to upgrade the Build Server to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server you will probably have some version of Visual Studio installed. You will need to remove this as well. (Coming Soon) Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS2010 If you have developers still on Visual Studio 2005 or 2008 you will need do download the respective compatibility pack: Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that you have the latest version of your applications with SSW Diagnostic which will check for Service Packs and hot fixes to Visual Studio as well.   Technorati Tags: TFS,TFS2010,TFS 2010,Upgrade

    Read the article

  • Forum software advice needed

    - by David Thompson
    Hello all... we want to migrate our site's current forum (proprietary built) to a newer, more modern (feature-rich) platform. I've been looking around at the available options and have narrowed it down to vBulletin, Vanilla or Phorum (unless you have another suggestion?). I hope someone here can give me some feedback on their experiences either migrating to a new forum or working deeply with one. The current forum we have has approx. 2.2 million threads in it and is contained in a MySQL database. Data migration is obviously the first issue; is one of the major forum vendors better or worse in this regard? The software needs to be able to be clustered and cached to ensure availability and performance. We want it to be PHP-based and store its data in MySQL. The code needs to be open to allow us to heavily customise the software, both to strip out a lot of stuff and to integrate our site's features. A lot of the forums I've looked at duplicate features of our main site, in particular member management, profiles etc. I realise we'll have to do a good bit of development in removing these and tying it all back to the main site, so we want to find a platform that makes this kind of integration as easy as possible. Finally, there is 'future proofing' the forum (as best as possible) given the above. Which platform will allow us to customise it but also let us keep in step with upgrades? Which forum software has the best track record for bringing new features online in a timely manner? etc. etc. I know it's a big question, but if anyone here has experience with some or all of the above I'd be very grateful.

    Read the article

  • Windows Phone 7 ActiveSync error 86000C09 (My First Post!)

    - by Chris Heacock
    Hello fellow geeks! I'm kicking off this new blog with an issue that was a real nuisance, but was relatively easy to fix. During a recent Exchange 2003 to 2010 migration, one of the users was getting an error on his Windows Phone 7 device. The error code that popped up on the phone on every sync attempt was 86000C09. We tested the following: a different user on the same device WORKED; the problem user on a different device FAILED. That seemed to point (conclusively) at the user's account as the crux of the issue. This error can come up if a user has too many devices syncing, but he had no other phones. We verified that using the following command: Get-ActiveSyncDeviceStatistics -Identity USERID. Turns out, it was the old familiar inheritable permissions issue in Active Directory. :-/ This user was not an admin, nor had he ever been one. HOWEVER, his account was cloned from an ex-admin user, so the unchecked box stayed unchecked. We checked the box and voila, data started flowing to his device(s).

    Here's a refresher on enabling inheritable permissions: open ADUC and enable Advanced Features, then open properties and go to the Security tab for the user in question, click on Advanced, and verify that "Include inheritable permissions from this object's parent" is *checked*.

    You will notice that for certain users, this box keeps getting unchecked. This is normal behavior due to the inbuilt security of Active Directory. People that are in the following groups will have this flag altered by AD: Account Operators, Administrators, Backup Operators, Domain Admins, Domain Controllers, Enterprise Admins, Print Operators, Read-Only Domain Controllers, Replicator, Schema Admins, Server Operators.

    Once the box is checked, permissions will flow and the user will be set correctly. Even if the box becomes unchecked again, they will function normally as they now have the proper permissions configured. You need to perform this same exercise when enabling users for Lync, but that's another blog. :-)   -Chris
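
    As a hedged sketch (the distinguished name is a placeholder and the cmdlets come from the Active Directory PowerShell module): the accounts AD keeps "fixing" like this are the ones stamped adminCount=1 by the AdminSDHolder process, so you can list likely victims and re-enable inheritance without clicking through ADUC.

        Import-Module ActiveDirectory

        # Accounts flagged by AdminSDHolder - cloned ex-admin accounts keep this flag
        Get-ADUser -LDAPFilter "(adminCount=1)" | Select-Object SamAccountName

        # Re-enable "Include inheritable permissions from this object's parent" on one account
        $acl = Get-Acl "AD:\CN=Problem User,OU=Staff,DC=example,DC=com"
        $acl.SetAccessRuleProtection($false, $true)   # $false re-enables inheritance from the parent
        Set-Acl "AD:\CN=Problem User,OU=Staff,DC=example,DC=com" $acl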

    Read the article

  • Partner Webcast – More out of Database Appliance with DB Options - 13 September 2012

    - by Thanos
    The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g—in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and networking that's engineered for simplicity, saving time and money by simplifying deployment, maintenance, and support of database workloads. But that is not all: with support for all Oracle Database Options, Oracle Database Appliance can be the ideal solution for many use cases.

    Feature: Simplifies deployment, maintenance, and support of high-availability database workloads — Benefit: Saves significant time and effort throughout the database administration lifecycle
    Feature: An engineered system of software, server, storage, and networking — Benefit: High availability for a wide range of custom and packaged OLTP and data warehousing application databases
    Feature: Simple one-button installation, full-stack integrated patching and diagnostics — Benefit: Reduces planned and unplanned downtime by automatically monitoring and logging service requests with Oracle Support
    Feature: Built using the world's #1 database — Benefit: Protects databases from server and storage failures with Oracle Real Application Clusters and Automatic Storage Management
    Feature: Unique Pay-As-You-Grow software licensing — Benefit: Reduces cost with flexibility to adjust your software spend as your business grows without the need for any hardware upgrades

    Discover the Oracle Database Appliance value proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency. This webcast is repeated once again for your benefit.

    Agenda: Oracle Database & Engineered Systems Innovation. What's the Oracle Database Appliance? Oracle Database Appliance Value Proposition. Oracle Database Appliance with Database Options. Oracle Database Appliance Partners Business.

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! Oracle Database Appliance is available for purchase at the Oracle Store under Engineered Systems. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit our ISV Migration Center blog regularly or follow us @oracleimc to learn more about Oracle Technologies as well as upcoming partner webcasts and events.

    Read the article

  • Windows Azure Learning Plan - SQL Azure

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with Security for  Windows Azure.   Overview and Training Overview and general  information about SQL Azure - what it is, how it works, and where you can learn more. General Overview (sign-in required, but free) http://social.technet.microsoft.com/wiki/contents/articles/inside-sql-azure.aspx General Guidelines and Limitations http://msdn.microsoft.com/en-us/library/ee336245.aspx Microsoft SQL Azure Documentation http://msdn.microsoft.com/en-us/windowsazure/sqlazure/default.aspx Samples and Learning Sources for online and other SQL Azure Training Free Online Training http://blogs.msdn.com/b/sqlazure/archive/2010/05/06/10007449.aspx 60-minute Overview (webcast) https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032458620&CountryCode=US Architecture SQL Azure Internals and Architectures for Scale Out and other use-cases. SQL Azure Architecture http://social.technet.microsoft.com/wiki/contents/articles/inside-sql-azure.aspx Scale-out Architectures http://tinyurl.com/247zm33 Federation Concepts http://tinyurl.com/34eew2w Use-Cases http://blogical.se/blogs/jahlen/archive/2010/11/23/sql-azure-why-use-it-and-what-makes-it-different-from-sql-server.aspx SQL Azure Security Model (video) http://www.msdev.com/Directory/Description.aspx?EventId=1491 Administration Standard Administrative Tasks and Tools Tools Options http://social.technet.microsoft.com/wiki/contents/articles/overview-of-tools-to-use-with-sql-azure.aspx SQL Azure Migration Wizard http://sqlazuremw.codeplex.com/ Managing Databases and Login Security http://msdn.microsoft.com/en-us/library/ee336235.aspx General Security for SQL Azure http://msdn.microsoft.com/en-us/library/ff394108.aspx Backup and Recovery http://social.technet.microsoft.com/wiki/contents/articles/sql-azure-backup-and-restore-strategy.aspx More Backup and Recovery Options http://social.technet.microsoft.com/wiki/contents/articles/current-options-for-backing-up-data-with-sql-azure.aspx Syncing Large Databases to SQL Azure http://blogs.msdn.com/b/sync/archive/2010/09/24/how-to-sync-large-sql-server-databases-to-sql-azure.aspx Programming Programming Patterns and Architectures for SQL Azure systems. How to Build and Manage a Business Database on SQL Azure http://tinyurl.com/25q5v6g Connection Management http://social.technet.microsoft.com/wiki/contents/articles/sql-azure-connection-management-in-sql-azure.aspx Transact-SQL Supported by SQL Azure http://msdn.microsoft.com/en-us/library/ee336250.aspx

    Read the article

  • Slides and links for Looking at the Clouds through Dirty Windows :-)

    - by Eric Nelson
    Tomorrow (Friday 23/4/2010) I am delivering a session at the Cloud Grid Exchange in London at SkillsMatter (A top training company and superb supporter of development communities). To be perfectly honest – I’m more interested in attending than presenting as the sessions and speaker line up look great. But in the middle of all that I will be doing the following (rather cheekily named) session: Looking at the Clouds through dirty Windows Many developers assume that the Microsoft Windows Azure Platform for Cloud Computing is only relevant if you develop solutions using Microsoft Visual Studio and the .NET Framework. The reality is somewhat different. In the same way that developers can build great applications on Windows Server using a variety of programming languages, developers can do the same for Azure. Java, Tomcat, PHP, Ruby, Python, MySQL and more all work great on Azure. In this session we will take a lap around the services offered by the Azure PaaS and demonstrate just how easy it is to build and deploy applications built in .NET and other technologies. The session will be a mix of slides and demos – currently I plan to demo .NET and Ruby on Rails running on Azure – but I may flex that depending on how the morning sessions go and who turns up. Looking at the clouds through dirty windows View more presentations from Eric Nelson. Links: Getting started: Details on how to sign up for FREE to try out Windows Azure http://bit.ly/azure25  Getting started with Windows Azure UK Site http://bit.ly/startazure UK Azure Site http://bit.ly/landazure UK Community http://ukazure.ning.com Examples of Azure and none .NET technologies: http://ukinterop.cloudapp.net Restlet based, using Windows Azure Storage http://rubyukinterop.cloudapp.net Rails based clone using Windows Azure Storage (down at time of posting) http://rubysqlazure.cloudapp.net Simple rails using SQL Azure http://bookingbug.com Real world “Ruby on Rails on Azure” (Work in progress for conversion to Azure) Domino’s Pizza migration of Java/Tomcat on Solaris to Java/Tomcat on Windows Azure Main Azure Interop site http://www.microsoft.com/WindowsAzure/interop/: Eclipse Tooling http://windowsazure4e.org Java support http://www.windowsazure4j.org/ Rails on Azure skeleton project for Visual Studio http://code.msdn.com/railsonazure Azure Runme utility for spawning processes http://azurerunme.codeplex.com Feedback www.mygreatwindowsazureidea.com

    Read the article

  • Collaborate 2010: Spotlight on Oracle Content Management

    - by [email protected]
    Excitement is building for the Collaborate conference April 18th through the 22nd. Outside of the event being in Las Vegas, which for me often seems to add to the excitement, there will be a great lineup of Oracle Content Management focused sessions. In fact, there are currently over 30 content management sessions scheduled, and attendees will get to hear from customers, partners, as well as Oracle experts. Attendees should expect to hear a lot about Oracle Content Management 11g at Collaborate 2010. Roel Stalman and Andy MacMillan will kick off these discussions on Monday, April 19th as they present Oracle Content Management's product strategy and roadmap (10:45 - 11:45). Monday's lineup also includes sessions on Oracle Imaging and Process Management (I/PM) 11g and Oracle Forms Recognition (2:30 - 3:30), which were both released in January. For those customers using older versions of I/PM or Stellent IBPM, be sure not to miss the "migrating to I/PM 11g" session on Monday as well (1:15 - 2:15) as this should give you some insight into the migration process. Check out the entire list of Oracle Content Management sessions here. Another focus at Collaborate this year is to discuss the benefits of using Oracle Content Management with Oracle Applications - Oracle E-Business Suite, PeopleSoft, and Siebel - so be sure to check out these sessions too: Accelerating Accounts Payable Processes with Integrated Document Imaging(Monday, April 19th, 3:45 - 4:45)Supercharge Your Siebel Sales and Marketing with Integrated Document Management(Tuesday, April 20th, 2:00 - 3:00)Oracle Enterprise 2.0 for Oracle Applications: The Value of an Integrated E2.0 Platform(Tuesday, April 20th, 3:15 - 4:15)Comprehensive Human Resources Automation with Oracle Content Management(Wednesday, April 21st, 1:00 - 2:00) Collaborate is also the perfect opportunity to meet Oracle executives and product experts. Attendees can sign up for 1 on 1 meetings at the event, and there will be someone representing each Oracle Content Management product. These meetings are probably the best way to get your product questions answered in a face-to-face manner. It seems more and more to me that Oracle Content Management customers are viewing Collaborate as "the" conference to attend each year. I hope you have plans to attend and I will see you there.

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new extended events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more details, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Event objects (Events & Actions) and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (eg. a trace that you can see in sys.Traces) and converts it into event session DDL. Can you relate? In some ways, SQL Trace and Extended Events is kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine, for example, SQL:Batch Completed fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData field and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An  EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur when ever the parent Event occurs. Actions can do a number of different things for example, there are Actions that collect additional data and, take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map. Event Mapping The table dbo.trace_xe_event_map exists in the master database with the following structure: Column_name Type trace_event_id smallint package_name nvarchar xe_event_name nvarchar By joining this table sys.trace_events using trace_event_id and to the sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event. 
SELECT     t.trace_event_id,     t.name [event_class],     e.package_name,     e.xe_event_name FROM sys.trace_events t INNER JOIN dbo.trace_xe_event_map e     ON t.trace_event_id = e.trace_event_id There are a couple things you’ll notice as you peruse the output of this query: For the most part, the names of Events are fairly close to the original Event Class; eg. SP:CacheMiss == sp_cache_miss, and so on. We’ve mostly stuck to a one to one mapping between Event Classes and Events, but there are a few cases where we have combined when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a “smarter” implementation for this type of event, you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures. There are some Event Classes that did not make the cut and were not migrated. These fall into two categories; there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn’t migrate them. (You won’t find an Event related to mounting a tape – sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. Doing this is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing so you can assign these tasks to different people if you choose. (If you’re wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.) Action Mapping The table dbo.trace_xe_action_map exists in the master database with the following structure: Column_name Type trace_column_id smallint package_name nvarchar xe_action_name nvarchar You can find more details by joining this to sys.trace_columns on the trace_column_id field. SELECT     c.trace_column_id,     c.name [column_name],     a.package_name,     a.xe_action_name FROM sys.trace_columns c INNER JOIN    dbo.trace_xe_action_map a     ON c.trace_column_id = a.trace_column_id If you examine this list, you’ll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events so there is no need to specify them as Actions. Putting it all together If you’ve spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn’t, then you probably know that the typically set of Columns you find associated with any given Event Class in SQL Profiler is not fix, but is determine by the contents of the table sys.trace_event_bindings. We’ve used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes. 
USE MASTER; GO SELECT DISTINCT    tb.trace_event_id,    te.name AS 'Event Class',    em.package_name AS 'Package',    em.xe_event_name AS 'XEvent Name',    tb.trace_column_id,    tc.name AS 'SQL Trace Column',    am.xe_action_name as 'Extended Events action' FROM (sys.trace_events te LEFT OUTER JOIN dbo.trace_xe_event_map em    ON te.trace_event_id = em.trace_event_id) LEFT OUTER JOIN sys.trace_event_bindings tb    ON em.trace_event_id = tb.trace_event_id LEFT OUTER JOIN sys.trace_columns tc    ON tb.trace_column_id = tc.trace_column_id LEFT OUTER JOIN dbo.trace_xe_action_map am    ON tc.trace_column_id = am.trace_column_id ORDER BY te.name, tc.name As you might imagine, it’s also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session. USE MASTER; GO DECLARE @trace_id int SET @trace_id = 1 SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'    , el.columnid, ec.xe_action_name AS 'action' FROM (sys.fn_trace_geteventinfo(@trace_id) AS el    LEFT OUTER JOIN dbo.trace_xe_event_map AS em       ON el.eventid = em.trace_event_id) LEFT OUTER JOIN dbo.trace_xe_action_map AS ec    ON el.columnid = ec.trace_column_id WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL You’ll notice in the output that the list doesn’t include any of the security audit Event Classes, as I wrote earlier, those were not migrated. But wait…there’s more! If this were an infomercial there’d by some obnoxious guy next to me blogging “Well Mike…that’s pretty neat, but I’m sure you can do more. Can’t you make it even easier to migrate from SQL Trace?”  Needless to say, I’d blog back, in an overly excited way, “You bet I can' obnoxious blogger side-kick!” What I’ve got for you here is a Extended Events Team Blog only special – this tool will not be sold in any store; it’s a special offer for those of you reading the blog. I’ve wrapped all the logic of pulling the configuration information out of an existing trace and and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn’t have to be running) to evaluate; there is no functionality to parse t-sql scripts. I’m not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail. 
Sample code: TraceToExtendedEventDDL   Installing the procedure Just in case you’re not familiar with installing CLR procedures…once you’ve compile the assembly you can load it using a script like this: -- Context to master USE master GO -- Create the assembly from a shared location. CREATE ASSEMBLY TraceToXESessionConverter FROM 'C:\Temp\TraceToXEventSessionConverter.dll' WITH PERMISSION_SET = SAFE GO -- Create a stored procedure from the assembly. CREATE PROCEDURE CreateEventSessionFromTrace @trace_id int, @session_name nvarchar(max) AS EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent GO Enjoy! -Mike

    Read the article
