Search Results

Search found 1818 results on 73 pages for 'migration'.


  • Local Profile Map to New Active Directory Login

    - by user42937
    Preface: I am sure this has been asked somewhere on the site before, but I couldn't find any questions about it, or maybe I am not using the correct verbiage... Our admins are giving us new Active Directory accounts on a different domain. As I am a programmer (a member of IT), we are the group that gets assigned new accounts first to test the migration. When I log in to my local machine using the new account I get a new local profile. Not the biggest deal, but on the new profile I am missing drive mappings, desktop items, wallpaper, etc. Our users are going to throw a fit if there is no way around this. Two questions: I've seen references to NTUSER.DAT and suggestions to copy all user files from "Documents and Settings", but is there a good way, or is it even possible, to associate my existing local profile with the new AD account? Is there anything that our admins can do to prevent this from happening?

    Read the article

  • rsync invocation to replace symlinks pointing to source?

    - by bdbaddog
    Currently I'm moving a big filesystem to a new server as the original fileserver is no longer able to handle the filesystem writes. To make this quick I made symlinks at the target filesystem pointing to the original filesystem. Initially: /company/release (mountpoint of the original filesystem) After migration: /company/release.old (points to original filesystem after automount map update) /company/release (points to new fileserver/filesystem after automount map update) In /company/release there are symlinks like the following: /company/release/product-1.0.tar.gz -> /company/release.old/product-1.0.tar.gz /company/release/product-1.0 -> /company/release.old/product-1.0 (this is a tree of files) Using symlinks allowed me to move the writes to the new filesystem quickly. Now I'd like to slowly migrate the existing files and directories to the new filesystem. The problem I'm running into is that since the symlinks point back at the original files rsync doesn't see any difference and so it doesn't actually copy the file(s) or directory(s) and remove/overwrite the symlinks. Is there a set of rsync flags which will do what I want?
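    There may not be a single rsync flag that forces this; one hedged approach (a sketch only, not from the original post) is to remove the placeholder symlinks in small batches and then copy the real files across from the old mount, where --ignore-existing protects anything already written natively to the new filesystem:

        # Sketch, assuming GNU find and rsync; review the -print output and an
        # rsync --dry-run (-n) before running for real.
        find /company/release -maxdepth 1 -type l -print
        find /company/release -maxdepth 1 -type l -delete
        rsync -av --ignore-existing /company/release.old/ /company/release/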

    Read the article

  • How to put text in the same row but a different column if certain text is present in that row?

    - by melai
    How can I put text in the same row but a different column if certain text is present in that row? My columns are Issue, Area and Correction Done, with rows like: Process changed bin Process skip lap converted to global Security done global migration Process changed bin. How can I code this in a macro? For example: if "correction done" is in the cell, the Issue should be set to Process automatically; if the word "global" is present, the Issue should be Security. I have 500 rows and I want the code to run through row 500.
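    The post does not say which columns hold Issue and the text being checked, so the following macro is only a hedged sketch of the stated rules, assuming Issue is in column A, the checked text is in column B, and the data sits in rows 2 to 500:

        ' Hedged sketch: fill the Issue column from keywords found in column B.
        ' The column letters, row range, and the "done" keyword test are assumptions.
        Sub FillIssues()
            Dim r As Long
            For r = 2 To 500
                If InStr(1, Cells(r, "B").Value, "global", vbTextCompare) > 0 Then
                    Cells(r, "A").Value = "Security"
                ElseIf InStr(1, Cells(r, "B").Value, "done", vbTextCompare) > 0 Then
                    Cells(r, "A").Value = "Process"
                End If
            Next r
        End Sub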

    Read the article

  • Exchange 2010: Import a PST when Local Move Request fails

    - by gravyface
    So the trail of tears continues with my SBS 2003 to 2011 migration: all the mailboxes have been moved from the mailbox store on OLDSERVER to NEWSERVER, with the Local Move Requests completing successfully, except for one. I've logged into that user's machine and exported their mailbox as a PST. I'm about to import it, but it seems to me that because the mailbox is still on OLDSERVER, even with a new Outlook profile pointing to NEWSERVER, it'll push the mail into the current mailbox store on the old server. Please tell me I don't have to blow away her existing mailbox, logon, etc. on the old SBS server: is there a way to change the state from "Legacy" to "User Mailbox" without actually moving the mailbox store? Or do I create a new mailbox for her user on NEWSERVER?

    Read the article

  • Can Google Apps users view Exchange users' public calendars and contacts?

    - by CT
    My company currently uses MS Exchange 2003 for company email, contacts, and calendars. We have approximately 150 users. Construction industry. I would like to look into migrating from Exchange to Google Apps. It will be an easier sell to the powers that be if we can successfully migrate a few smaller departments first rather than moving the entire company at once. I would like to first migrate our field superintendents, who are usually out of the office working remotely. Approx 30 users. Will Google Apps users be able to see our Exchange users' calendars and vice versa? How about public folders? Anyone's migration story is much appreciated. Thank you.

    Read the article

  • Rails on server syntax error?

    - by Danny McClelland
    Hi Everyone, I am trying to get my rails application running on my web server, but when I run the rake db:migrate I get the following error: root@oak [/home/macandco/rails_apps/survey_manager]# rake db:migrate (in /home/macandco/rails_apps/survey_manager) == Baseapp: migrating ======================================================== -- create_table(:settings, {:force=>true}) -> 0.0072s -- create_table(:users) -> 0.0072s -- add_index(:users, :login, {:unique=>true}) -> 0.0097s -- create_table(:profiles) -> 0.0084s -- create_table(:open_id_authentication_associations, {:force=>true}) -> 0.0067s -- create_table(:open_id_authentication_nonces, {:force=>true}) -> 0.0064s -- create_table(:roles) -> 0.0052s -- create_table(:roles_users, {:id=>false}) -> 0.0060s rake aborted! An error has occurred, all later migrations canceled: 555 5.5.2 Syntax error. g9sm2526951gvc.8 Has anyone come across this before? Thanks, Danny Main Migration file class Baseapp < ActiveRecord::Migration def self.up # Create Settings Table create_table :settings, :force => true do |t| t.string :label t.string :identifier t.text :description t.string :field_type, :default => 'string' t.text :value t.timestamps end # Create Users Table create_table :users do |t| t.string :login, :limit => 40 t.string :identity_url t.string :name, :limit => 100, :default => '', :null => true t.string :email, :limit => 100 t.string :mobile t.string :signaturenotes t.string :crypted_password, :limit => 40 t.string :salt, :limit => 40 t.string :remember_token, :limit => 40 t.string :activation_code, :limit => 40 t.string :state, :null => :false, :default => 'passive' t.datetime :remember_token_expires_at t.string :password_reset_code, :default => nil t.datetime :activated_at t.datetime :deleted_at t.timestamps end add_index :users, :login, :unique => true # Create Profile Table create_table :profiles do |t| t.references :user t.string :real_name t.string :location t.string :website t.string :mobile t.timestamps end # Create OpenID Tables create_table :open_id_authentication_associations, :force => true do |t| t.integer :issued, :lifetime t.string :handle, :assoc_type t.binary :server_url, :secret end create_table :open_id_authentication_nonces, :force => true do |t| t.integer :timestamp, :null => false t.string :server_url, :null => true t.string :salt, :null => false end create_table :roles do |t| t.column :name, :string end # generate the join table create_table :roles_users, :id => false do |t| t.column :role_id, :integer t.column :user_id, :integer end # Create admin role and user admin_role = Role.create(:name => 'admin') user = User.create do |u| u.login = 'admin' u.password = u.password_confirmation = 'advices' u.email = '[email protected]' end user.register! user.activate! user.roles << admin_role end def self.down # Drop all BaseApp drop_table :settings drop_table :users drop_table :profiles drop_table :open_id_authentication_associations drop_table :open_id_authentication_nonces drop_table :roles drop_table :roles_users end end
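    The "555 5.5.2 Syntax error" line with the trailing g9sm2526951gvc.8 id reads like an SMTP server reply rather than a Ruby or SQL error, which suggests the failure happens when user.register! / user.activate! try to send a notification email, not in the create_table calls. A hedged sketch (not from the original post) of one way to let the schema part finish while the mail settings are fixed separately:

        # Sketch: switch off outbound mail while the migration seeds the admin user,
        # so a broken SMTP setup cannot abort rake db:migrate.
        class Baseapp < ActiveRecord::Migration
          def self.up
            # ... create_table / add_index calls exactly as in the original migration ...
            ActionMailer::Base.perform_deliveries = false
            admin_role = Role.create(:name => 'admin')
            user = User.create do |u|
              u.login = 'admin'
              u.password = u.password_confirmation = 'advices'
              u.email = '[email protected]'
            end
            user.register!
            user.activate!
            user.roles << admin_role
          end
        end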

    Read the article

  • A methodology that allows for a single Java code base covering many different versions?

    - by Thorbjørn Ravn Andersen
    I work in a small shop where we have a LOT of legacy Cobol code and where a methodology has been adopted to allow us to minimize forking and branching as much as possible. For a given release we have three levels: CORE - bottom layer, this code is common to all releases GROUP - optional code common to several customers. CUSTOMER - optional code specific to a single customer. When a program is needed, it is first searched for in CUSTOMER, then in GROUP and finally in CORE. A given application for us invokes many programs which all are looked for in this sequence (think exe files and PATH under Windows). We also have Java programs interacting with this legacy code, and as the core-group-customer lookup mechanism does not lend itself easily to Java it has tended to grow in a CVS branch for each customer, requiring much too much maintenance. The Java part and the backend part tend to be developed in parallel. I have been assigned to figure out a way to make the two worlds meet. Essentially we want a Java environment which allows us to have a single code base with sources for each release, where we easily can select a group and a customer and work with the application as it goes for that customer, and then easily switch to another codeset and THAT customer. I was thinking of perhaps a scenario with an Eclipse project for each core, customer, and group and then use Project Sets to select those we need for a given scenario. The problem I cannot get my head around is how we would create robust code in the CORE projects which will work regardless of which group and customer is selected. A Factory class which knows which sub class of a passed Class object to invoke instead of each and every new? Others must have had similar code base management problems. Anybody with experiences to share? EDIT: The conclusion to this problem above has been that CVS needs to be replaced with a source code management system better suited for dealing with many branches concurrently and the migration of source from one component to the other while keeping history. Inspired by the recent migration by slf4j and logback we are currently looking at git as it handles branches very well. We've considered subversion and mercurial too but git appears to be better for single location, multibranched projects. I've asked about Perforce in another question, but my personal inclination is towards open source solutions for something as crucial as this. EDIT: After some more pondering, we've found that our actual pain point is that we use branches in CVS, and that branches in CVS are the easiest to work with if you branch ALL files! The revised conclusion is that we can do this with CVS alone, by switching to a forest of java projects, each corresponding to one of the levels above, and use the Eclipse build paths to tie them together so each CUSTOMER version pulls in the appropriate GROUP and CORE project. We still want to switch to a better versioning system but this is so important a decision that we want to delay it as much as possible. EDIT: I now have a proof-of-concept implementation of the CORE-GROUP-CUSTOMER concept using Google Guice 2.0 - the @ImplementedBy tag is just what we need. I wonder what everybody else does? Using if's all over the place? EDIT: Now I also need this functionality for web applications. Guice was until the JSR-330 is in place. Anybody with versioning experience? 
EDIT: JSR-330/299 is now in place with the JEE6 reference implementation Weld based on JBoss Seam and I have reimplemented the proof-of-concept with Weld and can see that if we use @Alternative along with ... in beans.xml we can get the behaviour we desire. I.e. provide a new implementation for a given functionality in CORE without changing a bit in the CORE jars. Initial reading up on the Servlet 3.0 specification indicates that it may support the same functionality for web application resources (not code). We will now do initial testing on the real application.
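    A minimal sketch of the Guice mechanism referred to above (@ImplementedBy supplying a CORE default that a GROUP or CUSTOMER module can override); the class names here are invented for illustration and are not from the original post, and the Weld/@Alternative approach in the last edit is the CDI equivalent of the same idea:

        // Hedged illustration of CORE/GROUP/CUSTOMER overrides with Guice 2.0.
        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.ImplementedBy;
        import com.google.inject.Injector;

        @ImplementedBy(CoreInvoiceFormatter.class)        // CORE default binding
        interface InvoiceFormatter {
            String format(double amount);
        }

        class CoreInvoiceFormatter implements InvoiceFormatter {
            public String format(double amount) { return String.format("%.2f", amount); }
        }

        class CustomerInvoiceFormatter implements InvoiceFormatter {
            public String format(double amount) { return "CUSTOMER: " + String.format("%.2f", amount); }
        }

        // Installed only when the injector is built for that customer's code set;
        // CORE code keeps asking for InvoiceFormatter and never changes.
        class CustomerModule extends AbstractModule {
            @Override protected void configure() {
                bind(InvoiceFormatter.class).to(CustomerInvoiceFormatter.class);
            }
        }

        class Bootstrap {
            public static void main(String[] args) {
                Injector core = Guice.createInjector();
                Injector customer = Guice.createInjector(new CustomerModule());
                System.out.println(core.getInstance(InvoiceFormatter.class).format(10));
                System.out.println(customer.getInstance(InvoiceFormatter.class).format(10));
            }
        }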

    Read the article

  • Authlogic OpenID integration

    - by Craig
    I'm having difficulty getting OpenId authentication working with Authlogic. It appears that the problem arose with changes to the open_id_authentication plugin. From what I've read so far, one needs to switch from using gems to using plugins. Here's what I done thus far to get Authlogic-OpenID integration working: Removed relevant gems: authlogic authlogic-oid rack-openid ruby-openid * Installed, configured, and started the authlogic sample application (http://github.com/binarylogic/authlogic_example)--works as expected. This required: installing the authlogic (2.1.3) gem ($ sudo gem install authlogic) adding a dependency (config.gem "authlogic") to the environment.rb file. added migration to add open-id support to User model; ran migration; columns added as expected made changes to the UsersController and UserSessionsController to use blocks to save each. made changes to new user-sessions view to support open id (f.text_field :openid_identifier) installed open_id_authentication plugin ($ script/plugin install git://github.com/rails/open_id_authentication.git) installed the authlogic-oid plugin ($ script/plugin install git://github.com/binarylogic/authlogic_openid.git) installed the plugin ($ script/plugin install git://github.com/glebm/ruby-openid.git) restarted mongrel (CTRL-C; $ script/server) Mogrel failed to start, returning the following error: /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- rack/openid (MissingSourceFile) from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/plugins/open_id_authentication/lib/open_id_authentication.rb:3 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/plugins/open_id_authentication/init.rb:5:in `evaluate_init_rb' from ./script/../config/../vendor/rails/railties/lib/rails/plugin.rb:146:in `evaluate_init_rb' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings' from ./script/../config/../vendor/rails/railties/lib/rails/plugin.rb:142:in `evaluate_init_rb' from ./script/../config/../vendor/rails/railties/lib/rails/plugin.rb:48:in `load' from ./script/../config/../vendor/rails/railties/lib/rails/plugin/loader.rb:38:in `load_plugins' from ./script/../config/../vendor/rails/railties/lib/rails/plugin/loader.rb:37:in `each' from ./script/../config/../vendor/rails/railties/lib/rails/plugin/loader.rb:37:in `load_plugins' from 
./script/../config/../vendor/rails/railties/lib/initializer.rb:348:in `load_plugins' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:163:in `process' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:113:in `send' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:113:in `run' from /Users/craibuc/NetBeansProjects/authlogic_example/config/environment.rb:13 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require' from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/railties/lib/commands/server.rb:84 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from script/server:3 I suspect this is related the rack-openid gem, but as it was dependent upon the ruby-openid gem, it was removed when the ruby-openid gem was removed. Perhaps this can be installed as a plugin. Any assistance with this matter is greatly appreciated--I'm just about to give up on OpenId integration. * ruby-openid (2.1.2) is installed at /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8. I'm not certain if this is affecting anything. In any case, I'm not sure how to uninstall it or if I should. ** edit ** It appears that there are a number of gems in the /Library/Ruby/Gems/1.8/gems directory that may be causing an issue: authlogic-oid (1.0.4) rack-openid (1.0.3) ruby-openid (2.1.7) Questions: - why doesn't the gem list command list these gems? - Why doesn't the gem uninstall command remove these gems?
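    The MissingSourceFile in the trace above is raised when the open_id_authentication plugin's init.rb requires 'rack/openid', which is normally provided by the rack-openid gem removed in the first step. A hedged sketch (Rails 2.x config.gem syntax; gem versions, and whether to vendor the gems instead, are left open) of re-declaring the two gems the plugin still needs:

        # Sketch for config/environment.rb: re-add the gems that the
        # open_id_authentication plugin requires at load time.
        Rails::Initializer.run do |config|
          config.gem "ruby-openid", :lib => "openid"
          config.gem "rack-openid", :lib => "rack/openid"
        end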

    Read the article

  • How to use AJAX with JSON in Ruby on Rails

    - by fenec
    I am implementing a Facebook application in Rails using the Facebooker plugin, therefore it is very important to use this architecture if I want to update multiple DOM elements on my page. If my code works in a regular Rails application it should work in my Facebook application. I am trying to use AJAX to let the user know that the comment was sent, and to update the comments block. migration: class CreateComments < ActiveRecord::Migration def self.up create_table :comments do |t| t.string :body t.timestamps end end def self.down drop_table :comments end end controller: class CommentsController < ApplicationController def index @comments=Comment.all end def create @comment=Comment.create(params[:comment]) if request.xhr? @comments=Comment.all render :json=>{:ids_to_update=>[:all_comments,:form_message], :all_comments=>render_to_string(:partial=>"comments" ), :form_message=>"Your comment has been added." } else redirect_to comments_url end end end view: <script> function update_count(str,message_id) { len=str.length; if (len < 200) { $(message_id).innerHTML="<span style='color: green'>"+ (200-len)+" remaining</span>"; } else { $(message_id).innerHTML="<span style='color: red'>"+ "Comment too long. Only 200 characters allowed.</span>"; } } function update_multiple(json) { for( var i=0; i<json["ids_to_update"].length; i++ ) { id=json["ids_to_update"][i]; $(id).innerHTML=json[id]; } } </script> <div id="all_comments" > <%= render :partial=>"comments/comments" %> </div> Talk some trash: <br /> <% remote_form_for Comment.new, :url=>comments_url, :success=>"update_multiple(request)" do |f|%> <%= f.text_area :body, :onchange=>"update_count(this.getValue(),'remaining');" , :onkeyup=>"update_count(this.getValue(),'remaining');" %> <br /> <%= f.submit 'Post'%> <% end %> <p id="remaining" >&nbsp;</p> <p id="form_message" >&nbsp;</p> <br><br> <br> If I try to do alert(json) in the first line of the update_multiple function, I get [object Object]. If I try to do alert(json["ids_to_update"][0]) in the first line of the update_multiple function, no dialog box is displayed. The comment gets saved, but nothing is updated. Questions: 1. How can JavaScript and Rails know that I am dealing with JSON objects? 2. How can I debug this problem? 3. How can I get it to work?
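    With the Prototype-based helpers used here, the :success callback string is evaluated with the raw XMLHttpRequest object in scope as request, so update_multiple(request) hands the function the request object rather than a parsed hash. A hedged fix (a sketch, assuming Prototype 1.6+ with String#evalJSON and the controller rendering :json as above) is to parse responseText first:

        <%# Sketch: parse the JSON body before handing it to update_multiple. %>
        <% remote_form_for Comment.new, :url => comments_url,
             :success => "update_multiple(request.responseText.evalJSON())" do |f| %>
          <%= f.text_area :body %>
          <%= f.submit 'Post' %>
        <% end %>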

    Read the article

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day, with the Launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (Released). We have had TFS 2010 in Production for nearly 6 months and have had only minimal problems. Update 12th April 2010  – Added Scott Hanselman’s tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition. Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! @shanselman - Scott Hanselman Same as before, we need to Uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes.   We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup. Done - Snapshot the hyper-v server Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server. Done - Uninstall Visual Studio Team Explorer 2010 RC You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server. Done - Uninstall TFS 2010 RC Done - Install TFS 2010 RTM Done - Configure TFS 2010 RTM Pick the Upgrade option and point it at your existing “tfs_Configuration” database to load all of the existing settings Done - Upgrade the SharePoint Extensions Upgrade Build Servers (Pending) Test the server The back out plan, and you should always have one, is to restore the snapshot. Upgrading to Team Foundation Server 2010 – Done The first thing you need to do is off the TFS server and then log into the Hyper-v server and create a snapshot. Figure: Make sure you turn the server off and delete all old snapshots before you take a new one I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM’ness that is coming next. The upgrade procedure for TFS2010 is to uninstal the old version and install the new one. Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.   Figure: Most of the heavy lifting is done by the Uninstaller, but make sure you have removed any of the client bits first. Specifically Visual Studio 2010 or Team Explorer 2010.  Once the uninstall is complete, this took around 5 minutes for me, you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations. Figure: It is always recommended to install the 64bit version of a server application where possible. 
I do not think it is likely, with SharePoint 2010 and Exchange 2010  and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on out TFS server along with Analysis services and Reporting Services. But our SharePoint server lives elsewhere. Figure: This always confuses people, but in reality it makes sense. Don’t install what you do not need. Every extra you install has an impact of performance. If you are integrating with SharePoint you will need to run this install on every Front end server in your farm and don’t forget to upgrade your Build servers and proxy servers later. Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)   It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers. Figure: Installing Microsoft .NET Framework 4 takes the most time. Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, Add-on’s, Check in Policies and other bits as well. Configure Team Foundation Server 2010 – Done Now you can configure the server. If you have no key you will need to pick “Install a Trial Licence”, but it is only £500, or free with a MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key. Figure: You can pick trial and add your key later using the TFS Server Admin. Here is where the real choices happen. We are doing an Upgrade from a previous version, so I will pick Upgrade the same as all you folks that are using the RC or TFS 2008. Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgraded them to the release.   Once you have entered your database server name you can click “List available databases” and it will show what it can upgrade. Figure: Select your database from the list and at this point, make sure you have a valid backup. At this point you have not made ANY changes to the databases. At this point the configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008 refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want you can pick different options. Figure: Set the application tier account and Authentication method to use. We use NTLM to keep things simple as we host our TFS server externally for our remote developers.  Figure: Setting your TFS server URL’s to be the remote URL’s allows the reports to be accessed without using VPN. Very handy for those remote developers. Figure: Detected the existing Warehouse no problem. Figure: Again we love green ticks. It gives us a warm fuzzy feeling. Figure: The username for connecting to Reporting services should be a domain account (if you are on a domain that is). Figure: Setup the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.   You then need to run all of your readiness checks. These check can save your life! 
It will check all of the settings that you have entered, as well as checking that all the external services are configured and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous versions were not. Microsoft changed the install to two steps, install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install it will complete. If you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.   Figure: Now we have everything set up, the configuration wizard can do its work.  Figure: Took a while on the "Web site" stage at one point, but zipped through after that.  Figure: The last wee bit. TFS needs to do a little tinkering with the data to complete the upgrade. Figure: All upgraded. I am not worried about the yellow triangle as SharePoint was being a little silly: Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException) Exception Stack Trace:    at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) [Info   @16:10:16.307] Benign exception caught as part of verify: Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException) Exception Stack Trace:    at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)    at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify() Inner Exception Details: Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException) Exception Data Dictionary: ResponseStatusCode = InternalServerError I'll look at SharePoint after; probably the SharePoint box just needs a restart or a kick. If there is a problem with SharePoint it will come out in testing, but I will definitely be passing this on to Microsoft.   Upgrading the SharePoint connector to TFS 2010 You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this, uninstall the TFS 2010 RC from them in the same way as the server, and then install just the RTM Extensions.
Figure: Only install the SharePoint Extensions on your SharePoint front end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.   Figure: When you configure SharePoint it uploads all of the solutions and templates. Figure: Everything is uploaded successfully. Figure: TFS even remembered the settings from the previous installation, fantastic.   Upgrading the Team Foundation Build Servers to TFS 2010 Just like on the SharePoint servers you will need to upgrade the Build Server to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server you will probably have some version of Visual Studio installed. You will need to remove this as well. (Coming Soon) Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS2010 If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack: Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that you have the latest version of your applications with SSW Diagnostic, which will check for Service Packs and hot fixes to Visual Studio as well.   Technorati Tags: TFS,TFS2010,TFS 2010,Upgrade

    Read the article

  • Forum software advice needed

    - by David Thompson
    Hello All ... we want to migrate our site's current forum (a proprietary build) to a newer, more modern (feature rich) platform. I've been looking around at the available options and have narrowed it down to vBulletin, Vanilla or Phorum (unless you have another suggestion?). I hope someone here can give me some feedback on their experiences either migrating to a new forum or working deeply with one. The current forum we have has approx 2.2 million threads in it and is contained in a MySQL database. Data migration is obviously the first issue: is one of the major forum vendors better or worse in this regard? The software needs to be able to be clustered and cached to ensure availability and performance. We want it to be PHP based and store its data in MySQL. The code needs to be open to allow us to highly customise the software, both to strip out a lot of stuff and to be able to integrate our site's features. A lot of the forums I've looked at have a lot of features that duplicate our main site, in particular member management, profiles etc. I realise we'll have to do a good bit of development in removing these and tying it all back to the main site, so we want to find a platform that makes this kind of integration as easy as possible. Finally, I guess, there is 'future proofing' the forum (as best as possible) given the above. Which platform will allow us to customise it but also allow us to keep in step with upgrades? Which forum software has the best track record for bringing new features online in a timely manner? etc. etc. I know it's a big question but if anyone here has any experience in some or all of the above I'd be very grateful.

    Read the article

  • Windows Phone 7 ActiveSync error 86000C09 (My First Post!)

    - by Chris Heacock
    Hello fellow geeks! I'm kicking off this new blog with an issue that was a real nuisance, but was relatively easy to fix. During a recent Exchange 2003 to 2010 migration, one of the users was getting an error on his Windows Phone 7 device. The error code that popped up on the phone on every sync attempt was 86000C09. We tested the following: Different user on the same device: WORKED Problem user on a different device: FAILED   Seemed to point (conclusively) at the user's account as the crux of the issue. This error can come up if a user has too many devices syncing, but he had no other phones. We verified that using the following command: Get-ActiveSyncDeviceStatistics -Identity USERID Turns out, it was the old familiar inheritable permissions issue in Active Directory. :-/ This user was not an admin, nor had he ever been one. HOWEVER, his account was cloned from an ex-admin user, so the unchecked box stayed unchecked. We checked the box and voila, data started flowing to his device(s). Here's a refresher on enabling Inheritable permissions: Open ADUC, and enable Advanced Features: Then open properties and go to the Security tab for the user in question: Click on Advanced, and the following screen should pop up: Verify that "Include inheritable permissions from this object's parent" is *checked*.   You will notice that for certain users, this box keeps getting unchecked. This is normal behavior due to the inbuilt security of Active Directory. People that are in the following groups will have this flag altered by AD: Account Operators Administrators Backup Operators Domain Admins Domain Controllers Enterprise Admins Print Operators Read-Only Domain Controllers Replicator Schema Admins Server Operators Once the box is checked, permissions will flow and the user will be set correctly. Even if the box later gets unchecked again, the user will function normally as they now have the proper permissions configured. You need to perform this same exercise when enabling users for Lync, but that's another blog. :-)   -Chris

    Read the article

  • Partner Webcast – More out of Database Appliance with DB Options - 13 September 2012

    - by Thanos
    The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g —in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and networking that's engineered for simplicity; saving time and money by simplifying deployment, maintenance, and support of database workloads. But that is not all, with the support for all Oracle Database Options, Oracle Database Appliance can be the ideal solution for many use cases. Feature Benefit Simplifies deployment, maintenance, and support of high-availability database workloads Saves significant time and effort throughout the database administration lifecycle An engineered system of software, server, storage, and networking High availability for a wide range of custom and packaged OLTP and data warehousing application databases Simple one-button Installation, full-stack integrated patching and diagnostics Reduces planned and unplanned downtime by automatically monitoring and logging service requests with Oracle Support Built using the world’s #1 database Protects databases from server and storage failures with Oracle Real Application Clusters and Automatic Storage Management Unique Pay-As-You-Grow software licensing Reduces cost with flexibility to adjust your software spend as your business grows without the need for any hardware upgrades Discover the Oracle Database Appliance Value Proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency. This webcast is repeated once again for your benefit. Agenda: Oracle Database& Engineered Systems Innovation. What’s the Oracle Database Appliance ? Oracle Database Appliance Value Proposition. Oracle Database Appliance with Database Options Oracle Database Appliance Partners Business Delivery FormatThis FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24hours prior to start time may not receive confirmation to attend. Duration: 1 hour Register Now! Oracle Database Appliance is available for purchase at the Oracle Store under Engineered Systems. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Visit regularly our ISV Migration Center blog Or Follow us @oracleimc to learn more on Oracle Technologies as well as upcoming partner webcasts and events.

    Read the article

  • Windows Azure Learning Plan - SQL Azure

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with Security for  Windows Azure.   Overview and Training Overview and general  information about SQL Azure - what it is, how it works, and where you can learn more. General Overview (sign-in required, but free) http://social.technet.microsoft.com/wiki/contents/articles/inside-sql-azure.aspx General Guidelines and Limitations http://msdn.microsoft.com/en-us/library/ee336245.aspx Microsoft SQL Azure Documentation http://msdn.microsoft.com/en-us/windowsazure/sqlazure/default.aspx Samples and Learning Sources for online and other SQL Azure Training Free Online Training http://blogs.msdn.com/b/sqlazure/archive/2010/05/06/10007449.aspx 60-minute Overview (webcast) https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032458620&CountryCode=US Architecture SQL Azure Internals and Architectures for Scale Out and other use-cases. SQL Azure Architecture http://social.technet.microsoft.com/wiki/contents/articles/inside-sql-azure.aspx Scale-out Architectures http://tinyurl.com/247zm33 Federation Concepts http://tinyurl.com/34eew2w Use-Cases http://blogical.se/blogs/jahlen/archive/2010/11/23/sql-azure-why-use-it-and-what-makes-it-different-from-sql-server.aspx SQL Azure Security Model (video) http://www.msdev.com/Directory/Description.aspx?EventId=1491 Administration Standard Administrative Tasks and Tools Tools Options http://social.technet.microsoft.com/wiki/contents/articles/overview-of-tools-to-use-with-sql-azure.aspx SQL Azure Migration Wizard http://sqlazuremw.codeplex.com/ Managing Databases and Login Security http://msdn.microsoft.com/en-us/library/ee336235.aspx General Security for SQL Azure http://msdn.microsoft.com/en-us/library/ff394108.aspx Backup and Recovery http://social.technet.microsoft.com/wiki/contents/articles/sql-azure-backup-and-restore-strategy.aspx More Backup and Recovery Options http://social.technet.microsoft.com/wiki/contents/articles/current-options-for-backing-up-data-with-sql-azure.aspx Syncing Large Databases to SQL Azure http://blogs.msdn.com/b/sync/archive/2010/09/24/how-to-sync-large-sql-server-databases-to-sql-azure.aspx Programming Programming Patterns and Architectures for SQL Azure systems. How to Build and Manage a Business Database on SQL Azure http://tinyurl.com/25q5v6g Connection Management http://social.technet.microsoft.com/wiki/contents/articles/sql-azure-connection-management-in-sql-azure.aspx Transact-SQL Supported by SQL Azure http://msdn.microsoft.com/en-us/library/ee336250.aspx

    Read the article

  • Slides and links for Looking at the Clouds through Dirty Windows :-)

    - by Eric Nelson
    Tomorrow (Friday 23/4/2010) I am delivering a session at the Cloud Grid Exchange in London at SkillsMatter (A top training company and superb supporter of development communities). To be perfectly honest – I’m more interested in attending than presenting as the sessions and speaker line up look great. But in the middle of all that I will be doing the following (rather cheekily named) session: Looking at the Clouds through dirty Windows Many developers assume that the Microsoft Windows Azure Platform for Cloud Computing is only relevant if you develop solutions using Microsoft Visual Studio and the .NET Framework. The reality is somewhat different. In the same way that developers can build great applications on Windows Server using a variety of programming languages, developers can do the same for Azure. Java, Tomcat, PHP, Ruby, Python, MySQL and more all work great on Azure. In this session we will take a lap around the services offered by the Azure PaaS and demonstrate just how easy it is to build and deploy applications built in .NET and other technologies. The session will be a mix of slides and demos – currently I plan to demo .NET and Ruby on Rails running on Azure – but I may flex that depending on how the morning sessions go and who turns up. Looking at the clouds through dirty windows View more presentations from Eric Nelson. Links: Getting started: Details on how to sign up for FREE to try out Windows Azure http://bit.ly/azure25  Getting started with Windows Azure UK Site http://bit.ly/startazure UK Azure Site http://bit.ly/landazure UK Community http://ukazure.ning.com Examples of Azure and none .NET technologies: http://ukinterop.cloudapp.net Restlet based, using Windows Azure Storage http://rubyukinterop.cloudapp.net Rails based clone using Windows Azure Storage (down at time of posting) http://rubysqlazure.cloudapp.net Simple rails using SQL Azure http://bookingbug.com Real world “Ruby on Rails on Azure” (Work in progress for conversion to Azure) Domino’s Pizza migration of Java/Tomcat on Solaris to Java/Tomcat on Windows Azure Main Azure Interop site http://www.microsoft.com/WindowsAzure/interop/: Eclipse Tooling http://windowsazure4e.org Java support http://www.windowsazure4j.org/ Rails on Azure skeleton project for Visual Studio http://code.msdn.com/railsonazure Azure Runme utility for spawning processes http://azurerunme.codeplex.com Feedback www.mygreatwindowsazureidea.com

    Read the article

  • Collaborate 2010: Spotlight on Oracle Content Management

    - by [email protected]
    Excitement is building for the Collaborate conference April 18th through the 22nd. Outside of the event being in Las Vegas, which for me often seems to add to the excitement, there will be a great lineup of Oracle Content Management focused sessions. In fact, there are currently over 30 content management sessions scheduled, and attendees will get to hear from customers, partners, as well as Oracle experts. Attendees should expect to hear a lot about Oracle Content Management 11g at Collaborate 2010. Roel Stalman and Andy MacMillan will kick off these discussions on Monday, April 19th as they present Oracle Content Management's product strategy and roadmap (10:45 - 11:45). Monday's lineup also includes sessions on Oracle Imaging and Process Management (I/PM) 11g and Oracle Forms Recognition (2:30 - 3:30), which were both released in January. For those customers using older versions of I/PM or Stellent IBPM, be sure not to miss the "migrating to I/PM 11g" session on Monday as well (1:15 - 2:15) as this should give you some insight into the migration process. Check out the entire list of Oracle Content Management sessions here. Another focus at Collaborate this year is to discuss the benefits of using Oracle Content Management with Oracle Applications - Oracle E-Business Suite, PeopleSoft, and Siebel - so be sure to check out these sessions too: Accelerating Accounts Payable Processes with Integrated Document Imaging(Monday, April 19th, 3:45 - 4:45)Supercharge Your Siebel Sales and Marketing with Integrated Document Management(Tuesday, April 20th, 2:00 - 3:00)Oracle Enterprise 2.0 for Oracle Applications: The Value of an Integrated E2.0 Platform(Tuesday, April 20th, 3:15 - 4:15)Comprehensive Human Resources Automation with Oracle Content Management(Wednesday, April 21st, 1:00 - 2:00) Collaborate is also the perfect opportunity to meet Oracle executives and product experts. Attendees can sign up for 1 on 1 meetings at the event, and there will be someone representing each Oracle Content Management product. These meetings are probably the best way to get your product questions answered in a face-to-face manner. It seems more and more to me that Oracle Content Management customers are viewing Collaborate as "the" conference to attend each year. I hope you have plans to attend and I will see you there.

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new extended events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more details, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Event objects (Events & Actions) and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (eg. a trace that you can see in sys.Traces) and converts it into event session DDL. Can you relate? In some ways, SQL Trace and Extended Events is kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine, for example, SQL:Batch Completed fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData field and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An  EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur when ever the parent Event occurs. Actions can do a number of different things for example, there are Actions that collect additional data and, take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map. Event Mapping The table dbo.trace_xe_event_map exists in the master database with the following structure: Column_name Type trace_event_id smallint package_name nvarchar xe_event_name nvarchar By joining this table sys.trace_events using trace_event_id and to the sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event. 
SELECT     t.trace_event_id,     t.name [event_class],     e.package_name,     e.xe_event_name FROM sys.trace_events t INNER JOIN dbo.trace_xe_event_map e     ON t.trace_event_id = e.trace_event_id There are a couple things you'll notice as you peruse the output of this query: For the most part, the names of Events are fairly close to the original Event Class; eg. SP:CacheMiss == sp_cache_miss, and so on. We've mostly stuck to a one to one mapping between Event Classes and Events, but there are a few cases where we have combined when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a "smarter" implementation for this type of event; you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures. There are some Event Classes that did not make the cut and were not migrated. These fall into two categories; there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn't migrate them. (You won't find an Event related to mounting a tape – sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. Doing this is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing so you can assign these tasks to different people if you choose. (If you're wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.) Action Mapping The table dbo.trace_xe_action_map exists in the master database with the following structure: Column_name Type trace_column_id smallint package_name nvarchar xe_action_name nvarchar You can find more details by joining this to sys.trace_columns on the trace_column_id field. SELECT     c.trace_column_id,     c.name [column_name],     a.package_name,     a.xe_action_name FROM sys.trace_columns c INNER JOIN    dbo.trace_xe_action_map a     ON c.trace_column_id = a.trace_column_id If you examine this list, you'll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events so there is no need to specify them as Actions. Putting it all together If you've spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn't, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We've used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.
USE MASTER; GO SELECT DISTINCT    tb.trace_event_id,    te.name AS 'Event Class',    em.package_name AS 'Package',    em.xe_event_name AS 'XEvent Name',    tb.trace_column_id,    tc.name AS 'SQL Trace Column',    am.xe_action_name as 'Extended Events action' FROM (sys.trace_events te LEFT OUTER JOIN dbo.trace_xe_event_map em    ON te.trace_event_id = em.trace_event_id) LEFT OUTER JOIN sys.trace_event_bindings tb    ON em.trace_event_id = tb.trace_event_id LEFT OUTER JOIN sys.trace_columns tc    ON tb.trace_column_id = tc.trace_column_id LEFT OUTER JOIN dbo.trace_xe_action_map am    ON tc.trace_column_id = am.trace_column_id ORDER BY te.name, tc.name As you might imagine, it's also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session. USE MASTER; GO DECLARE @trace_id int SET @trace_id = 1 SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'    , el.columnid, ec.xe_action_name AS 'action' FROM (sys.fn_trace_geteventinfo(@trace_id) AS el    LEFT OUTER JOIN dbo.trace_xe_event_map AS em       ON el.eventid = em.trace_event_id) LEFT OUTER JOIN dbo.trace_xe_action_map AS ec    ON el.columnid = ec.trace_column_id WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL You'll notice in the output that the list doesn't include any of the security audit Event Classes; as I wrote earlier, those were not migrated. But wait…there's more! If this were an infomercial there'd be some obnoxious guy next to me blogging "Well Mike…that's pretty neat, but I'm sure you can do more. Can't you make it even easier to migrate from SQL Trace?"  Needless to say, I'd blog back, in an overly excited way, "You bet I can, obnoxious blogger side-kick!" What I've got for you here is an Extended Events Team Blog-only special – this tool will not be sold in any store; it's a special offer for those of you reading the blog. I've wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn't have to be running) to evaluate; there is no functionality to parse t-sql scripts. I'm not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
Sample code: TraceToExtendedEventDDL   Installing the procedure Just in case you're not familiar with installing CLR procedures…once you've compiled the assembly you can load it using a script like this: -- Context to master USE master GO -- Create the assembly from a shared location. CREATE ASSEMBLY TraceToXESessionConverter FROM 'C:\Temp\TraceToXEventSessionConverter.dll' WITH PERMISSION_SET = SAFE GO -- Create a stored procedure from the assembly. CREATE PROCEDURE CreateEventSessionFromTrace @trace_id int, @session_name nvarchar(max) AS EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent GO Enjoy! -Mike

    Read the article

  • SQLAuthority News – Download Whitepaper – A Case Study on “Hekaton” against RPM – SQL Server 2014 CTP1

    - by Pinal Dave
    In this new world of social media, apps and mobile devices, we are all now getting impatient. Automatic updates have spoiled a few of our habits. When a new feature is released, everybody wants to adopt it immediately and start using it. Though this is true in the world of apps and smartphones, it is still not possible in the developer's world. When new features are around, before we start using them, we need to spend quite a lot of time understanding and testing them. Once we are sold on the feature, we refer it to our manager and eventually the entire organization makes a decision on upgrading to use the new feature. Similarly, when the new feature of In-Memory OLTP was announced, pretty much every SQL Server DBA wanted to implement it on their server. Though the implementation of the feature is not hard, it is not that easy either. One has to do proper research about their own environment and workload before implementing this feature. Microsoft has recently released a case study on the In-Memory OLTP feature. Here is the abstract from the white paper itself: I/O latch can cause session delays that impact application performance. This white paper describes the procedures and common I/O latch issues when migrating to Hekaton in SQL Server 2014. It also includes challenges that occurred during the migration and the performance analysis at different stages. If you are going to implement an In-Memory OLTP database, this is a good case study to refer to. Download the white paper from here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL
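    To give a feel for what a minimal In-Memory OLTP implementation involves, here is a rough sketch in T-SQL; the database, filegroup, path and table names are purely illustrative and are not taken from the white paper:

    -- Assumes a SQL Server 2014 (CTP1 or later) instance; all names below are placeholders.
    -- 1. Add a memory-optimized filegroup and container to an existing database.
    ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE SalesDB ADD FILE (NAME = 'SalesDB_mod', FILENAME = 'C:\Data\SalesDB_mod')
       TO FILEGROUP SalesDB_mod;
    GO
    -- 2. Create a memory-optimized (Hekaton) table.
    USE SalesDB;
    GO
    CREATE TABLE dbo.SalesOrder
    (
       OrderID     int       NOT NULL
          PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
       OrderDate   datetime2 NOT NULL,
       CustomerID  int       NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);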

    Read the article

  • Manipulating Human Tasks (for testing) by Mark Nelson

    - by JuergenKress
    A few months ago, while working on a BPM migration, I had the need to look at the status of human tasks, and to manipulate them – essentially to just have a single user take random actions on them at some interval, to help drive a set of processes that were being tested. To do this, I wrote a little utility called httool. It reuses some of the core domain classes from my custom worklist sample (with minimal changes to make it a remote client instead of a local one). I have not got around to documenting it yet, but it is pretty simple and fairly self-explanatory. So I thought I would go ahead and share it with folks, in case anyone is interested in playing with it. You can get the code from my ci-samples repository on java.net: git clone git://java.net/ci4fmw~ci-samples It is in the httool directory. I do plan to get back to this "one day" and enhance it to be more intelligent – target particular task types, update the payload, follow a set of "rules" about what action to take – so that I can use it for driving more interesting test scenarios. If anyone is feeling generous with their time and interested, please feel free to join the java.net project and hack away to your heart's content. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Mark Nelson,Human Task,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally published in October 2012. One frequent question is "what type of domain should I use to run applications?" There used to be a simple answer: "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC and the introduction of systems like the current SPARC servers, including the T4 and T5 systems, the Oracle SuperCluster T5-8 and the Oracle SuperCluster M6-32, provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, and memory sizes are much larger now, and far more demanding applications are now being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below discuss the criteria for choosing between domain types.

    Review: division of labor and types of domain

    Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with its own role:

    Control domain - the management control point for the server; it runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones.

    I/O domain - a domain that has been assigned physical I/O devices. The devices may be:
    - one or more PCIe root complexes (in which case the domain is also called a root complex domain). The domain has native access to all the devices on the assigned PCIe buses. The devices can be any device type supported by Solaris on the hardware platform.
    - an SR-IOV (Single Root I/O Virtualization) function. SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs), which can be individually assigned directly to domains. SR-IOV devices currently can be Ethernet or InfiniBand devices.
    - direct I/O ownership of one or more PCI devices residing in a PCIe bus slot. The domain has direct access to the individual devices.
    An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices.

    Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It is usually an I/O domain as well, in order for it to have devices to virtualize and serve out.
Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

Device considerations

Consider the following when choosing between virtual devices and physical devices:
- Virtual devices provide the best flexibility - they can be dynamically added to and removed from a running domain, and you can have a large number of them, up to a per-domain device limit.
- Virtual devices are compatible with live migration - domains that exclusively have virtual devices can be live migrated between servers supporting domains.
On the other hand:
- Physical devices provide the best performance - in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster.
- Physical I/O devices do not add load to service domains - all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity.
- Physical I/O devices can be other than network and disk - we virtualize network, disk, and serial console, but physical devices can be the wide range of attachable certified devices, including things like tape and CDROM/DVD devices.
In some cases the lines are now blurred: virtual devices have better performance than previously (starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance), and there is more flexibility with physical devices than before (SR-IOV devices can now be dynamically reconfigured on domains). Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O, and you can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices.

Typical deployment

A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is typically also a service domain in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to be a service domain too, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure. Guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device does not result in an application outage. This also permits "rolling upgrades" in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
Changing application deployment patterns

The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically-scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously. In those engineered systems, I/O domains are used for high-performance applications with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains. They use their I/O connectivity for the performance of their own applications.

However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC.

To recap:
- Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain.
- I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses - for example, 16 buses on the T5-8 - making it possible to configure multiple I/O domains, each owning its own bus.
- Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage.
- The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment; it's an engineered system with specifically configured applications, optimized for performance.
These are recommended "best practices" based on conversations with a number of Oracle architects.
Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

Summary

Higher capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.

    Read the article

  • Partner Webcast – Platform as a Service with Oracle WebLogic and OpenStack

    - by Thanos Terentes Printzios
    Platform as a service is defined as a platform that facilitates the deployment of applications without the complexity of buying and managing the underlying hardware and software and provisioning hosting capabilities. For Java EE, that would mean an elastic Java EE platform, where the user (IT admin) deploys the application, and then the platform itself takes care of meeting the SLA. With a combination of Oracle WebLogic 12c with Dynamic Clusters, Oracle Solaris 11.2 with OpenStack and some scripting, we can completely automate infrastructure and platform provisioning, effectively providing PaaS to the IT users. Join us in this webcast as we explore the use of WebLogic 12c with OpenStack to establish Platform as a Service.
    Agenda:
    - PaaS overview and goals
    - Overview of Solaris 11.2 with OpenStack
    - Deploying WebLogic domain to Solaris 11.2 and creating base image
    - Automating provisioning
    - Solution Demo
    - Summary & Q&A
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.
    Presenter: Jernej Kase – FMW Specialist, Oracle Partner Hub Migration Center
    Date: Thursday, June 26th, 10am CET (9am GMT/11am EEST)
    Duration: 1 hour
    Register Here: http://www.oracle.com/go/?Src=8101420&Act=4&pcode=EMEAPM14056477MPP002
    For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com
    Stay Connected: Oracle Newsletters

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question Received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but haven't found any information on how it does. Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already converted Transactional Data would need to be cleared down) and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation; you need to be migrating directly into your intended environment). In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.) and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions) and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they are only doing work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need and do not do a complete rerun for your subsequent conversion.

    Read the article

  • Java devs: why not use Groovy?

    - by FarmBoy
    OK, so there are quite a few people using Java these days. But as the language nears two decades of age, it isn't exactly the coolest option out there. Many of us are excited about dynamic languages with some functional features like Ruby or Python, even though we spend our days using Java. So why is it that the adoption of Groovy has been so slow? It seems that Groovy offers many of the benefits of Ruby and Python, but it is far easier to transition a Java shop to Groovy. Even if performance were the concern, it seems that many would want to use Groovy for testing the production Java code. Or use Groovy/Grails for internal apps in which performance concerns are minimal. Or for writing one-off scripts to generate code. Yet Groovy languishes outside of Tiobe's top 50 languages, for reasons that are unclear to me. I have been using Groovy and Grails professionally for about four months, and it has been an excellent experience, such that I hate to think about going back to the Java/Spring/Hibernate model. Does anyone have any sense of why we are not seeing more significant migration from Java to Groovy? Note that I'm not asking why Java developers are still using Java for new projects. My question is: why is it that most Java developers are still not using Groovy at all? Edit: I am assuming that all good developers see the utility of dynamic typing and higher-order functions for some programming tasks. (Even if it is deemed inappropriate for production code.)

    Read the article

  • Yay! Oracle Solaris 11.1 Is Here!

    - by rickramsey
    Even the critters are happy. This is no cosmetic release. It's got TONS of new stuff for both system admins and system developers. In the coming weeks and months I'll highlight specific new capabilities, but for now, here are a few resources to get you started.
    What's New (pdf) - Describes enhancements for sysadmins in: Installation, System configuration, Virtualization, Security and Compliance, Networking, Data management, Kernel/platform support, Network drivers, User environment. And for system developers: Preflight Applications Checker, Oracle ExaStack Labs (available to Oracle Partner Network Gold-level members for application certification), Oracle Solaris Studio, Integrated Java Virtual Machine (JVM): Updates are now managed using the Image Packaging System (IPS), Migration guides and technology mapping tables for AIX, HP-UX and Red Hat Linux.
    Download - Free downloads for SPARC and x86 are available, along with instructions and tips for using the new repositories and Image Packaging System.
    Tech Article: How to Upgrade to Oracle Solaris 11.1 - You can upgrade using either Oracle's official Solaris release repository or, if you have a support contract, the Support repository. Peter Dennis explains how.
    Documentation - Superbly written instructions from our dedicated cadre of world-renowned but woefully underpaid technical writers: Getting Started; Installing, Booting, and Updating; Establishing an Oracle Solaris Network; Administering Essential Features; Administering Network Services; Securing the Operating System; Monitoring and Tuning; Creating and Using Virtual Environments; Working with the Desktop; Developing Applications; Reference Manuals; and more.
    Training - And don't forget the new online training courses from Oracle University! I really liked them. Here are my first and second impressions.
    Website Newsletter Facebook Twitter

    Read the article

  • SQLAuthority News – Microsoft SQL Server 2012 Service Pack 1 Released (SP1)

    - by pinaldave
    Last week, I was attending SQLPASS 2012 and I had great fun at the event. During the event the long-awaited SQL Server 2012 Service Pack 1 was released. I am pretty excited about SP1, as new service packs are cumulative updates and upgrade all editions and service levels of SQL Server 2012 to SP1. This service pack contains SQL Server 2012 Cumulative Update 1 (CU1) and Cumulative Update 2 (CU2). The latest SP1 has many new and enhanced features. Here are a few, for example:
    - Cross-Cluster Migration of AlwaysOn Availability Groups for OS Upgrade
    - Selective XML Index
    - DBCC SHOW_STATISTICS works with SELECT permission
    - New function returns statistics properties – sys.dm_db_stats_properties
    - SSMS Complete in Express
    - SlipStream Full Installation
    - Business Intelligence highlights with Office and SharePoint Server 2013
    - Management Object Support Added for Resource Governor DDL
    Please note that the size of the service pack is nearly 1 GB. Here is the link to SQL Server 2012 Service Pack 1. SQL Server Express is the free and feature-rich edition of SQL Server. It is used with lightweight website and desktop applications. Here is the link to SQL Server 2012 EXPRESS Service Pack 1. Here is the question for you – how long have you been using SQL Server 2012? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Pack
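    As a quick illustration of one of the items above, the new sys.dm_db_stats_properties function can be used to see when statistics were last updated and how many modifications have accumulated since; the query below is a simple sketch, not from the original post:

    -- Show last update time, sampled rows and the modification counter
    -- for every statistics object on user tables in the current database.
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           s.name                   AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.rows_sampled,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    ORDER BY sp.last_updated DESC;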

    Read the article
