Search Results

Search found 1693 results on 68 pages for 'sqlalchemy migrate'.


  • Does TFS 2010 lock a project collection when it's being cloned?

    - by Hirvox
    We're planning to migrate a project collection currently hosted on TFS 2010 to TFS 2012. We want to keep the current installation running while resolving any issues that might arise, so we need to copy the current project collection to the new server. However, TFS doesn't allow us to attach a restored database backup directly. The database first must be detached from the original TFS installation. We can get around that limitation by cloning the project collection and detaching the clone, but we're not sure whether that would also impact the original project collection. Does TFS lock the original project collection while it's being cloned?

    Read the article

  • Switching to Chrome from IE

    - by Alan Parrish
    Hi, I work in IT at a school and we recently updated our database software. However, the web interface that the teachers use for registration does not work well with Internet Explorer 7, so we're thinking about switching to Chrome (mostly because I dislike IE in general). The problem is that users are unable to migrate their bookmarks over from IE due to account restrictions. Is there any way to get this working? Some information about the system, in case it helps: almost all the client PCs run Windows XP SP2, my colleagues' machines run Windows 7, I use OS X Snow Leopard, and most of our servers run Windows Server 2003 (with the exception of two: one on Windows 2000 Server and another on 2008 R2). The Active Directory domain controller runs Windows Server 2003.

    Read the article

  • Is there any way to do "mail server parking"?

    - by percyboy
    I am managing a mail server that will be temporarily shut down for three or four days due to data center maintenance. I want to find a solution that (completely or partly) avoids losing mail during this period of unavailability. Because the data volume is huge, it is very hard to migrate it to another data center. One approach I have thought of is to set up a temporary mail server in another data center which, when new mail arrives, automatically sends a reply telling the sender "We are temporarily closed for three or four days. Please send the mail later or contact us by other means." Is this approach possible with existing mail server software? Or is something better available? (A free solution is preferred, since this is only temporary.)
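    A common alternative to an auto-reply is a backup MX that simply queues incoming mail until the primary server returns. This is a minimal, hedged Postfix sketch for a hypothetical domain example.com; the domain, hostnames and retention window are placeholder assumptions, not details from the question.
      # On the temporary server, accept and queue mail for the parked domain
      postconf -e "relay_domains = example.com"
      postconf -e "maximal_queue_lifetime = 7d"   # keep queued mail up to a week (assumed window)
      postfix reload
      # In DNS, publish the temporary server as a lower-priority MX, e.g.:
      #   example.com.  IN MX 10 mail.example.com.
      #   example.com.  IN MX 20 parking.example.net.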

    Read the article

  • SQL Cluster install on Hyper V options

    - by Chris W
    I've been reading up on running a SQL cluster in a Hyper-V environment and there seem to be a couple of options: Install a guest cluster on 2 VMs that are themselves part of a failover cluster. Install a SQL cluster on 2 VMs where the VMs themselves are not part of an underlying cluster. With option 1 it's a little more complex, as there are effectively two clusters in play, but this adds some flexibility in the sense that I'm free to migrate the VMs between the physical blades in their cluster for physical maintenance without affecting the status of the SQL guest cluster running within them. With option 2 the set-up is a bit simpler, as there's only 1 cluster in the mix, but my VMs are anchored to the physical blades they're set up on (I'll ignore the fact that I could manually move the VHDs for the purposes of this question). Are there any other factors I should consider when deciding which option to go for? I'm free to test out both options and probably will do, but if anyone has working experience of these set-ups and can offer some input, that would be great.

    Read the article

  • Converting a RAID 0+1 array to RAID 0

    - by werelord
    I'm currently running RAID 0+1 with four 500 GB drives in the array. I'm looking at migrating the array from 0+1 (stripe + mirror) to 0 only (stripe). The goal is to remove the hard drives from the array in order to put them in a newly purchased Drobo, then copy the data from the remaining striped RAID to that Drobo. Is it sufficient to just remove the drives themselves and change the RAID configuration within the NVIDIA RAID setup? Or is there something more that needs to be done? Does the order matter (i.e. removing the drives first, or changing the RAID configuration first)? Is it possible to migrate the array this way without any loss of data? I'm wary of burning all that data to DVDs (a few hundred GB worth) to back it up. Any other advice from people who may have done this (or have other insight) would be helpful as well.

    Read the article

  • TFS 2012 or TFS Azure (Preview)

    - by Fore
    We want to migrate our current TFS 2010 solution, which is hosted today on one of our own servers, to TFS 2012 hosted somewhere else. We don't want to manage the servers any more, and are therefore looking at alternatives. TFS Preview / Azure, hosted in the cloud, is one alternative, but I'm not happy with forcing users to use a Live ID, and we don't have an AD. My second thought was to create an Azure virtual machine and install and host TFS 2012 there. Are there any downsides to this? Compared to the price of buying a VPS this is cheap, and Azure feels reliable. Do you have any other ideas?

    Read the article

  • Enabling Office SharePoint Server Publishing Infrastructure Breaks Navigation

    - by swagers
    I'm migrating from WSS 3.0 to MOSS 2007; below are the steps I took to migrate. Backed up the content database of our WSS 3.0 site. Restored the database on our MOSS 2007 database server. Created a new web application on our MOSS 2007 server and pointed it at the newly restored database. Everything works correctly on the new server until I enable Office SharePoint Server Publishing Infrastructure, at which point navigation stops working correctly. Where it used to say Home it now says /. When I click a link to any sub site, the top navigation is reduced to a single button that says Error, and any sub site navigation in the side bar also reads Error. When I disable Office SharePoint Server Publishing Infrastructure everything goes back to the way it was.

    Read the article

  • Migration of physical servers to a virtual solution: what do I have to do?

    - by bibarse
    Hello, I'm new to this forum, so please forgive my inexperience and my limited English. I started as a trainee at a company one month ago, and my task is to migrate 3 physical servers to a virtualization technology. The company produces e-learning software, so there is a lot of data such as videos, Flash files and compressed (zip) archives. A quick inventory of the servers: OS: Debian and 2 Red Hat; Apache, PHP/MySQL, Sendmail/Dovecot, and Webmin with Virtualmin templates to create the web sites dynamically, because there is no sysadmin. The future provider will be responsible for securing, updating and creating the virtual machines (outsourcing), using Red Hat OSes. So I would like help choosing a virtualization technology (I prefer KVM or Red Hat RHEV, since VMware is expensive), evaluating the hardware needs (planning for 4 or 5 years of growth), and putting together a good plan so that nothing is forgotten. Thank you for your responses.

    Read the article

  • Git push on localhost with htaccess

    - by Rooneyl
    I am looking into setting up a remote git repo. To start with, I have created it on my Windows machine using XAMPP, following this guide. It all works fine until I try to add some security, as per step 6 of the guide (for when I migrate it to my main web server). I have added passwords using passwd and added an htaccess file to the htdocs folder. This works fine (I have checked in my web browser), but when I try to push I get prompted for my password and then it fails with an error (code 22):
      $ git push origin master
      Password for 'http://git@localhost':
      error: Cannot access URL http://git@localhost/s.git/, return code 22
      fatal: git-http-push failed
    Any ideas?
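    A quick way to separate an Apache authentication problem from a git problem is to request the same URL with curl and double-check the password file Apache reads. This is only a hedged diagnostic sketch: the repository path /s.git and the user git come from the error output above, while the .htpasswd location is an assumption.
      # Recreate the password entry Apache should be checking (file path is assumed)
      htpasswd -c C:/xampp/htdocs/.htpasswd git
      # Request the resource git's HTTP client needs; a 401/403 here points at
      # the Apache/.htaccess configuration rather than at git itself
      curl -v -u git http://localhost/s.git/info/refs
      # If curl authenticates fine, retry the push with the user embedded in the URL
      git remote set-url origin http://git@localhost/s.git
      git push origin master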

    Read the article

  • rsync invocation to replace symlinks pointing to source?

    - by bdbaddog
    Currently I'm moving a big filesystem to a new server, as the original fileserver is no longer able to handle the filesystem writes. To make this quick I made symlinks on the target filesystem pointing to the original filesystem. Initially:
      /company/release (mountpoint of the original filesystem)
    After migration:
      /company/release.old (points to the original filesystem after automount map update)
      /company/release (points to the new fileserver/filesystem after automount map update)
    In /company/release there are symlinks like the following:
      /company/release/product-1.0.tar.gz -> /company/release.old/product-1.0.tar.gz
      /company/release/product-1.0 -> /company/release.old/product-1.0 (this is a tree of files)
    Using symlinks allowed me to move the writes to the new filesystem quickly. Now I'd like to slowly migrate the existing files and directories to the new filesystem. The problem I'm running into is that, since the symlinks point back at the original files, rsync doesn't see any difference, so it doesn't actually copy the file(s) or directory(s) and remove/overwrite the symlinks. Is there a set of rsync flags which will do what I want?
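    One way to see why rsync treats the two trees as identical is to run it in itemized, dry-run mode against a small subtree first. The paths below come from the question; the flag suggestion is only a hedged starting point, not a verified answer.
      # Show exactly what rsync thinks differs, without changing anything yet
      rsync -av --itemize-changes --dry-run \
          /company/release.old/product-1.0/ /company/release/product-1.0/
      # --copy-unsafe-links (or -L) changes how symlinks on the sending side are
      # treated; whether it applies depends on which side actually holds the links
      rsync -av --copy-unsafe-links --dry-run /company/release.old/ /company/release/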

    Read the article

  • We want to set up a low-cost private cloud [closed]

    - by Virtual Jasper
    We are a small company with very limited funds. In order to improve our server reliability, we are looking at migrating to the cloud. We have seen some cloud providers; they charge by resources such as CPU, RAM, disk space, high availability, etc. We have a server team, so we are also considering building a private cloud. We have looked at Windows 8 Server, but it requires license fees, so we are looking at the Linux side, specifically Ubuntu and OpenStack. What is the difference between the Ubuntu and OpenStack solutions? Are both free of software license fees, with payment only for technical support?

    Read the article

  • DNS lookup fails when forwarding to subdomain

    - by Kitaro
    In order to migrate to a new mail server with as few DNS problems and as little downtime as possible, I have set up a second Postfix instance that is currently reachable via an MX record on a subdomain, e.g. the main Postfix accepts mail for [email protected] while the second Postfix also accepts mail for [email protected]. I added a forwarding rule saying that Postfix should forward mail destined for [email protected] to [email protected] (for regular local delivery) and to [email protected]. Local delivery still works as expected, but when trying to forward the mail to the new MX, Postfix appends the domain part to the end of the forwarding address, resulting in [email protected], which of course fails and the mail bounces. Why does Postfix mess with the alias name in that way, and how can I turn that off?
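    The behaviour described, a domain being appended to an alias target, usually comes down to how the alias target is written and to Postfix's handling of unqualified addresses (append_at_myorigin / append_dot_mydomain). The sketch below is a generic virtual-alias setup with fully qualified targets for a hypothetical example.com / new.example.com pair; the addresses are placeholders, not the poster's real ones.
      # /etc/postfix/virtual -- both forwarding targets written fully qualified:
      #   user@example.com    user@example.com, user@new.example.com
      postconf -e "virtual_alias_maps = hash:/etc/postfix/virtual"
      postmap /etc/postfix/virtual
      postfix reload
      # Check the rewriting parameters that add a domain to bare local parts
      postconf append_at_myorigin append_dot_mydomain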

    Read the article

  • Which is the recommended filesystem for VMware Server / ESXi?

    - by elitalon
    We have a couple of servers in the office with VMware Server as the virtualization solution. We are planning an upgrade of our infrastructure. Some servers will remain on VMware Server, but we want to migrate some others to VMware ESXi. In both cases we are doing a fresh install, and I wonder if there are any suggestions or guidelines regarding the host filesystem and its partitions. EDIT: We are using local storage instead of SAN/NAS external storage, because we are not sure it is worth it given our office size/requirements.

    Read the article

  • OS X drive dropped out of RAID 5 array?

    - by user41724
    I had a drive "fail" and drop out of our RAID 5 array. After a reboot the drive came back up, but it's marked "Roaming". How do I re-integrate it back into the RAID 5 set? Right now I have no redundancy, with only 2 disks in the array. The Migrate RAID Set feature of RAID Utility seems to want to create a RAID 0 only. I have provided links to some screen captures: http://tiny.cc/3ns5r Any help would be appreciated. Thanks.

    Read the article

  • Migrating from Desktop PC to real Server

    - by tevlon84
    I am a student working as a part-time administrator at a startup. I have never used a real server (only a desktop PC with Apache). The company I am working for is growing and they want to switch to a real server. My idea would be to use Ubuntu's built-in backup function and use that backup file as the base for the rack server, but I don't know which problems I would run into. Is it a good idea? So basically my question is: What is the easiest way to migrate from a desktop PC to a real rack server (running Ubuntu Server)?
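    Rather than restoring a whole desktop backup onto different hardware, a common approach is a clean Ubuntu Server install followed by copying over only the application data and configuration. The sketch below is a hedged illustration assuming an Apache/MySQL setup reachable over SSH; the hostname, site name and paths are placeholders, not details from the question.
      # On the old desktop machine: dump the databases
      mysqldump --all-databases -u root -p > /tmp/all-databases.sql
      # Copy the web root, the dump and the Apache site configs to the new server
      rsync -av /var/www/ newserver.example.com:/var/www/
      rsync -av /tmp/all-databases.sql /etc/apache2/sites-available/ newserver.example.com:/tmp/migration/
      # On the new server: restore the dump and re-enable the site (site name is a placeholder)
      mysql -u root -p < /tmp/migration/all-databases.sql
      a2ensite mysite && service apache2 reload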

    Read the article

  • Rails on server syntax error?

    - by Danny McClelland
    Hi Everyone, I am trying to get my Rails application running on my web server, but when I run rake db:migrate I get the following error:
      root@oak [/home/macandco/rails_apps/survey_manager]# rake db:migrate
      (in /home/macandco/rails_apps/survey_manager)
      == Baseapp: migrating ========================================================
      -- create_table(:settings, {:force=>true}) -> 0.0072s
      -- create_table(:users) -> 0.0072s
      -- add_index(:users, :login, {:unique=>true}) -> 0.0097s
      -- create_table(:profiles) -> 0.0084s
      -- create_table(:open_id_authentication_associations, {:force=>true}) -> 0.0067s
      -- create_table(:open_id_authentication_nonces, {:force=>true}) -> 0.0064s
      -- create_table(:roles) -> 0.0052s
      -- create_table(:roles_users, {:id=>false}) -> 0.0060s
      rake aborted!
      An error has occurred, all later migrations canceled:
      555 5.5.2 Syntax error. g9sm2526951gvc.8
    Has anyone come across this before? Thanks, Danny
    Main migration file:
      class Baseapp < ActiveRecord::Migration
        def self.up
          # Create Settings Table
          create_table :settings, :force => true do |t|
            t.string :label
            t.string :identifier
            t.text :description
            t.string :field_type, :default => 'string'
            t.text :value
            t.timestamps
          end
          # Create Users Table
          create_table :users do |t|
            t.string :login, :limit => 40
            t.string :identity_url
            t.string :name, :limit => 100, :default => '', :null => true
            t.string :email, :limit => 100
            t.string :mobile
            t.string :signaturenotes
            t.string :crypted_password, :limit => 40
            t.string :salt, :limit => 40
            t.string :remember_token, :limit => 40
            t.string :activation_code, :limit => 40
            t.string :state, :null => :false, :default => 'passive'
            t.datetime :remember_token_expires_at
            t.string :password_reset_code, :default => nil
            t.datetime :activated_at
            t.datetime :deleted_at
            t.timestamps
          end
          add_index :users, :login, :unique => true
          # Create Profile Table
          create_table :profiles do |t|
            t.references :user
            t.string :real_name
            t.string :location
            t.string :website
            t.string :mobile
            t.timestamps
          end
          # Create OpenID Tables
          create_table :open_id_authentication_associations, :force => true do |t|
            t.integer :issued, :lifetime
            t.string :handle, :assoc_type
            t.binary :server_url, :secret
          end
          create_table :open_id_authentication_nonces, :force => true do |t|
            t.integer :timestamp, :null => false
            t.string :server_url, :null => true
            t.string :salt, :null => false
          end
          create_table :roles do |t|
            t.column :name, :string
          end
          # generate the join table
          create_table :roles_users, :id => false do |t|
            t.column :role_id, :integer
            t.column :user_id, :integer
          end
          # Create admin role and user
          admin_role = Role.create(:name => 'admin')
          user = User.create do |u|
            u.login = 'admin'
            u.password = u.password_confirmation = 'advices'
            u.email = '[email protected]'
          end
          user.register!
          user.activate!
          user.roles << admin_role
        end
        def self.down
          # Drop all BaseApp
          drop_table :settings
          drop_table :users
          drop_table :profiles
          drop_table :open_id_authentication_associations
          drop_table :open_id_authentication_nonces
          drop_table :roles
          drop_table :roles_users
        end
      end

    Read the article

  • When using Bundler and Rails 2.3.5 I get uninitialized constant SubdomainFu when migrating

    - by user347480
    Hi, I'm using Bundler with Rails 2.3.5 and I'm trying to make sure everything is working correctly, but when I do a "rake db:migrate --trace" I get this:
      ** Invoke db:migrate (first_time)
      ** Invoke environment (first_time)
      ** Execute environment
      rake aborted!
      uninitialized constant SubdomainFu
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:443:in `load_missing_constant'
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in `const_missing'
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:92:in `const_missing'
      /Users/node/Projects/Race-RX/config/initializers/subdomain_config.rb:1
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load_without_new_constant_marking'
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load'
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load'
      /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:622:in `load_application_initializers'
      /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:621:in `each'
      /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:621:in `load_application_initializers'
      /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:176:in `process'
      /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
      /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
      /Users/node/Projects/Race-RX/config/environment.rb:9
      /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
      /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
      /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
      /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/misc.rake:4
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
      /opt/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain'
      /opt/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
      /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
      /opt/local/bin/rake:19:in `load'
      /opt/local/bin/rake:19
    I don't know what could be causing this. I did, however, put require "rubygems", require "bundler" and Bundler.setup in my environment.rb file, but that doesn't seem to be the problem.
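    When an initializer references a constant that comes from a Bundler-managed gem, a frequent cause under Rails 2.3 is that the rake task is not running inside the bundle, or that the gem is never required. A minimal hedged check, assuming the gem is listed as subdomain-fu in the Gemfile (an assumption, not something the trace confirms):
      # Run the task through Bundler so the Gemfile's load paths are in effect
      bundle exec rake db:migrate --trace
      # Confirm the gem is actually part of the bundle
      bundle show subdomain-fu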

    Read the article

  • Unicorn_init.sh cannot find app root on capistrano cold deploy

    - by oFca
    I am deploying a Rails app, and upon running cap deploy:cold I get an error saying:
      * 2012-11-02 23:53:26 executing `deploy:migrate'
      * executing "cd /home/mr_deployer/apps/prjct_mngr/releases/20121102225224 && bundle exec rake RAILS_ENV=production db:migrate"
        servers: ["xxxxxxxxxx"]
        [xxxxxxxxxx] executing command
        command finished in 7464ms
      * 2012-11-02 23:53:34 executing `deploy:start'
      * executing "/etc/init.d/unicorn_prjct_mngr start"
        servers: ["xxxxxxxxxx"]
        [xxxxxxxxxx] executing command
        ** [out :: xxxxxxxxxx] /etc/init.d/unicorn_prjct_mngr: 33: cd: can't cd to /home/mr_deployer/apps/prjct_mngr/current;
        command finished in 694ms
      failed: "rvm_path=$HOME/.rvm/ $HOME/.rvm/bin/rvm-shell '1.9.3-p125@prjct_mngr' -c '/etc/init.d/unicorn_prjct_mngr start'" on xxxxxxxxxx
    but my app root is there! Why can't it find it? Here's part of my unicorn_init.sh file:
      1 #!/bin/sh
      2 set -e
      3 # Example init script, this can be used with nginx, too,
      4 # since nginx and unicorn accept the same signals
      5
      6 # Feel free to change any of the following variables for your app:
      7 TIMEOUT=${TIMEOUT-60}
      8 APP_ROOT=/home/mr_deployer/apps/prjct_mngr/current
      9 PID=$APP_ROOT/tmp/pids/unicorn.pid
      10 CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
      11 # INIT_CONF=$APP_ROOT/config/init.conf
      12 AS_USER=mr_deployer
      13 action="$1"
      14 set -u
      15
      16 # test -f "$INIT_CONF" && . $INIT_CONF
      17
      18 old_pid="$PID.oldbin"
      19
      20 cd $APP_ROOT || exit 1
      21
      22 sig () {
      23   test -s "$PID" && kill -$1 `cat $PID`
      24 }
      25
      26 oldsig () {
      27   test -s $old_pid && kill -$1 `cat $old_pid`
      28 }
      29 case $action in
      30
      31 start)
      32   sig 0 && echo >&2 "Already running" && exit 0
      33   $CMD
      34   ;;
      35
      36 stop)
      37   sig QUIT && exit 0
      38   echo >&2 "Not running"
      39   ;;
      40
      41 force-stop)
      42   sig TERM && exit 0
      43   echo >&2 "Not running"
      44   ;;
      45
      46 restart|reload)
      47   sig HUP && echo reloaded OK && exit 0
      48   echo >&2 "Couldn't reload, starting '$CMD' instead"
      49   $CMD
      50   ;;
      51
      52 upgrade)
      53   if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
      54   then
      55     n=$TIMEOUT
      56     while test -s $old_pid && test $n -ge 0
      57     do
      58       printf '.' && sleep 1 && n=$(( $n - 1 ))
      59     done
      60     echo
      61
      62     if test $n -lt 0 && test -s $old_pid
      63     then
      64       echo >&2 "$old_pid still exists after $TIMEOUT seconds"
      65       exit 1
      66     fi
      67     exit 0
      68   fi
      69   echo >&2 "Couldn't upgrade, starting '$CMD' instead"
      70   $CMD
      71   ;;
      72
      73 reopen-logs)
      74   sig USR1
      75   ;;
      76
      77 *)
      78   echo >&2 "Usage: $0 <start|stop|restart|upgrade|force-stop|reopen-logs>"
      79   exit 1
      80   ;;
      81 esac
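    The shell reports the failing cd at line 33 of the script ($CMD), and APP_ROOT points at .../current, a symlink that may not exist yet at the deploy:start stage of a cold deploy. A hedged first check on the server (paths taken from the question; the Capistrano task name varies between versions):
      # Does the 'current' symlink exist, and where does it point?
      ls -ld /home/mr_deployer/apps/prjct_mngr/current
      ls -ld /home/mr_deployer/apps/prjct_mngr/releases/
      # If it is missing, let Capistrano create it before starting unicorn
      cap deploy:create_symlink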

    Read the article

  • SharpDX: Render to bitmap using Direct2D 1.1

    - by mwhouser
    I have a command-line application that currently uses SharpDX (Direct2D 1.0) to render to PNG files. This is a window-less application. It currently creates a SharpDX.WIC.WicBitmap and a WicRenderTarget, then renders to that. I then save the WicBitmap to the PNG file. For various reasons, I need to migrate to Direct2D 1.1 to take advantage of some of the effects available in 1.1. I'm trying to get a SharpDX.Direct2D1.Bitmap that I can save as PNG. I cannot use FromWicBitmap because that copies the bitmap; it does not share it. I see CreateSharedBitmap in the Direct2D1 API that takes an IWICBitmapLock. However, I do not see this implemented as a constructor of SharpDX.Direct2D.Bitmap. This is what I'm trying to do:
      // Bunch of setup
      var d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice);
      var d2dDeviceContext = new SharpDX.Direct2D1.DeviceContext(d2dDevice, SharpDX.Direct2D1.DeviceContextOptions.None);
      using (var wicFactory = new SharpDX.WIC.ImagingFactory())
      {
          using (SharpDX.WIC.Bitmap wicBitmap = new SharpDX.WIC.Bitmap(wicFactory, 500, 500, SharpDX.WIC.PixelFormat.Format32bppPBGRA, SharpDX.WIC.BitmapCreateCacheOption.CacheOnDemand))
          {
              var wicLock = wicBitmap.Lock(SharpDX.WIC.BitmapLockFlags.Write);
              var props = new SharpDX.Direct2D1.BitmapProperties1();
              props.BitmapOptions = SharpDX.Direct2D1.BitmapOptions.Target;
              var bitmap = new SharpDX.Direct2D1.Bitmap1(d2dDeviceContext, wicLock, props); // This is not available
              d2dDeviceContext.Target = bitmap;
              // Do the drawing
              // Save the PNG
          }
      }
    Is there a way to do what I'm trying to accomplish?

    Read the article

  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes its own decision about which database to use when it starts out, but as it moves forward it matures and makes decisions based on its experience and the best interests of the organization. Similarly, quite a few organizations make different decisions on databases, like Sybase, MySQL, Oracle or Access, and as time passes they learn that they now want to move to a different platform. Microsoft makes this easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are the different tools released last week to migrate various products to SQL Server. Microsoft SQL Server Migration Assistant v6.0 for Sybase: SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, data migration as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for MySQL: SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from MySQL to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, data migration as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Oracle: SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Oracle to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, data migration as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Access: SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Access to SQL Server. SSMA for Access automates conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Migration

    Read the article

  • Upgrade 10g Osso to 11g OAM (Part 2)

    - by Pankaj Chandiramani
    This is part 2 of http://blogs.oracle.com/pankaj/2010/11/upgrade_10g_osso_to_11g_oam.html. In the last post we saw an overview of upgrading OSSO to OAM 11g; now for some more detail. As we are using the co-existence feature, we have to install the OAM server and upgrade the existing OSSO 10g server to the OAM servers. OAM upgrade steps overview: Pre-req: you already have OAM 11g installed. Upgrade step 1: configure the user store and make it primary. Upgrade step 2: create the policy domain (this is done by UA automatically). Upgrade step 3: migrate partners (this is done by running the Upgrade Assistant). Finally, verify that the upgrade was successful. Details on the UA step: to upgrade the existing OSSO 10g servers to the OAM server, run the UA script in OAM, which copies over all the partner app details from OSSO to OAM 11g. The script is named run_ua.sh; it will ask you to input the Policies.properties from the SSO $OH/sso/config folder of OSSO 10g and other variables such as the DB password. Some pointers: upgrading OSSO to OAM 11g by default enables the coexistence mode on the OAM server. Front-end the OAM server with the same load balancer that is the front end of the OSSO 10g servers. Now OAM and OSSO 10g servers work in co-existence mode: OAM 11g is made to understand the 10g OSSO token format and session handling capabilities so as to co-exist with 10g OSSO servers. How to test? Try to access the partner applications and verify that single sign-on works. Also verify that the user does not have to log in again if already authenticated by either the OAM or the OSSO 10g server. Screenshots and troubleshooting tips to follow.

    Read the article

  • Forum software advice needed

    - by David Thompson
    Hello all... we want to migrate our site's current forum (proprietary, built in-house) to a newer, more modern (feature-rich) platform. I've been looking at the available options and have narrowed them down to vBulletin, Vanilla or Phorum (unless you have another suggestion?). I hope someone here can give me some feedback on their experiences either migrating to a new forum or working deeply with one. The current forum has approx 2.2 million threads in it and is contained in a MySQL database. Data migration is obviously the first issue: is one of the major forum vendors better or worse in this regard? The software needs to be able to be clustered and cached to ensure availability and performance. We want it to be PHP based and store its data in MySQL. The code needs to be open to allow us to customise the software heavily, both to strip out a lot of stuff and to integrate our site's features. A lot of the forums I've looked at duplicate features of our main site, in particular member management, profiles etc. I realise we'll have to do a good bit of development in removing these and tying them back to the main site, so we want a platform that makes this kind of integration as easy as possible. Finally, I guess, there is 'future proofing' the forum (as best as possible) given the above. Which platform will allow us to customise it while still keeping in step with upgrades? Which forum software has the best track record for bringing new features online in a timely manner? etc. I know it's a big question, but if anyone here has experience with some or all of the above I'd be very grateful.

    Read the article

  • Would Java programmers hire C# programmers?

    - by Linx
    I learned and used Java in college. After graduating, I got a job in C#. Two years later, there are a lot more positions in Java. Would I have a good chance of being hired as a Java programmer? What interview questions would I be asked? Update (07/10/2012): Thank you for all your answers and comments. I really appreciate it. I had a chance to work on a Java project for 9 months. It was mixed with Perl, because we were trying to migrate from Perl to Java. Eclipse has definitely improved a lot. I used Maven and Spring MVC. Pretty fun. After that project ended, I did Ruby on Rails. That was a fun year-long project as well. Two years later, I am back to .NET. Overall, being a programmer has been very sweet. Wouldn't trade it for anything else!

    Read the article

  • Random touchpad and keyboard freezes on new installation

    - by ancaleth
    My touchpad and keyboard freeze up on my newly installed Ubuntu 10.10. They remain frozen until I shut down manually. No keys work and the cursor doesn't move; it's like a screenshot. I was using Ubuntu 10.04 via Wubi before on this laptop, where this problem never occurred. (I did not migrate the Wubi install or upgrade to 10.10; it's a fresh start. 64-bit on a Dell Studio, plenty of RAM, plenty of free space on the partition, etc.) I can't say there is a pattern yet: once it happened during a package download with the Update Manager, and once I was just using Firefox with no other program running. In between these crashes the laptop was booted once, updates were installed, Firefox was used, and there weren't any problems. Both crashes should be in the attached kern.log, and I noticed there were some errors before the last crash (at the end, obviously). It seems the wireless was experiencing problems. This wasn't noticed on the user end, since the touchpad and keyboard were already frozen. kern.log: http://paste.ubuntu.com/552617/ How can the freezes be fixed? Edit: I will try Ctrl+Alt+F1 and then Ctrl+Alt+F7 when the next freeze occurs, to see if it works again after this, as suggested here.

    Read the article

  • SQL SERVER – NuoDB in Sixty Seconds – SQL in Sixty Seconds #053

    - by Pinal Dave
    Earlier this week I published a five-part blog series on NuoDB, and it was very well received by the audience. NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. It is an operational NewSQL database built on a patented emergent architecture with full support for SQL and ACID guarantees. In this blog post, I explore how one can download and install the NuoDB database. In this video I explain how one can install NuoDB in a matter of seconds and set up the entire environment in a few seconds more. One can get going with the installation of NuoDB and a sample database in less than 60 seconds in total. Let us see the same concept in the following SQL in Sixty Seconds video: You can download NuoDB and reproduce the same Sixty Seconds experience. Related Tips in SQL in Sixty Seconds: Part 1 – Install NuoDB in 90 Seconds Part 2 – Manage NuoDB Installation Part 3 – Explore NuoDB Database Part 4 – Migrate from SQL Server to NuoDB Part 5 – NuoDB and Third Party Explorer What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity

    Read the article
