Search Results

Search found 4849 results on 194 pages for 'schema migration'.

Page 83/194 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud
    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this – a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I’ll briefly describe our new multitenant architecture and explain its key benefits. Finally I’ll mention some of the major use cases we see for Oracle Multitenant.

    Industry Trends
    We always start by talking to our customers about the pressures and challenges they’re facing and what trends they’re seeing in the industry. Some things don’t change. They face the same pressures and the same requirements as ever: pressure to do more with less; be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There’s no time to stop and retool with new applications. What’s new are the trends. These are the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications into fewer, larger servers. There’s a move to standardized services – even self-service.

    Consolidation
    Consolidation is not new; companies have tried various approaches to consolidation of databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there’s a shared OS overhead, but RDBMS process and memory overhead are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do?
    We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required. There is no tenant isolation. One bad apple can spoil the whole batch.

    New Architecture & Benefits
    In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term “pluggable database”: "pluggable", which is new, and "database", which is familiar. Before we get to the exciting new stuff let’s discuss what hasn’t changed. A pluggable database is a fully functional Oracle database. It’s not watered down in any way. From the perspective of an application or an end user it hasn’t changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – “managing many as one” – we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the “pluggability” capability gives us portability and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.

    Use Cases
    We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones.
    Development / Testing: individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
    Consolidation: disparate applications using fewer, more powerful servers
    Software as a Service: deploying separate copies of identical applications to individual tenants
    Database as a Service: typically self-service provisioning of databases on the private cloud
    Application Distribution: from ISV / installation by customer, eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) - just plug in a PDB!
    High volume data distribution: literally via disk drives in envelopes distributed by truck! - distribution of things like GIS or MDM master databases
    …various others!

    Benefits
    Previous approaches to consolidation have involved a trade-off between reductions in Capital Expenses (CapEx) and Operating Expenses (OpEx), and they’ve usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:
    Minimize CapEx: more applications per server
    Minimize OpEx: manage many as one; standardized procedures and services; rapid provisioning
    Maximize Agility: cloning for development and testing; portability through pluggability; scalability with RAC
    Ease of Adoption: applications run unchanged; it’s a pure deployment choice. Neither the database backend nor the application needs to be changed.

    In future postings I’ll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here. Follow me: I tweet @OraclePDB #OracleMultitenant
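    The post itself contains no SQL, but the unplug/plug workflow it describes maps onto a handful of Oracle 12c statements. The sketch below is only an illustration; the PDB name, admin user and file paths are made up, not taken from the article:

        -- Create a new pluggable database inside a container database.
        CREATE PLUGGABLE DATABASE salespdb
          ADMIN USER sales_admin IDENTIFIED BY sales_pwd
          FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/salespdb/');

        -- Unplug it: the PDB is closed and its metadata written to an XML manifest.
        ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE salespdb UNPLUG INTO '/u01/oradata/salespdb.xml';

        -- In another CDB (for example a different SLA tier), plug it back in and open it.
        CREATE PLUGGABLE DATABASE salespdb USING '/u01/oradata/salespdb.xml' NOCOPY;
        ALTER PLUGGABLE DATABASE salespdb OPEN;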

    Read the article

  • SharePoint: UI for ordering list items

    - by svdoever
    SharePoint list items have, in the base Item template, a field named Order. This field is not shown by default. SharePoint 2007, 2010 and 2013 provide a way to specify the order in a UI, using the _layouts page: {SiteUrl}/_layouts/Reorder.aspx?List={ListId} In SharePoint 2010 and 2013 it is possible to add a custom action to a list. You can add a custom action to order list items as follows (SharePoint 2010 description): Open SharePoint Designer 2010 Navigate to a list Select Custom Actions > List Item Menu Fill in the dialog box: Open List Settings > Advanced Settings > Content Types, and set Allow management of content types to No On List Settings select Column Ordering This results in the following UI in the browser: Selecting the custom Order Items action (under List Tools > Items) results in: You can change your custom action in SharePoint Designer. On the list screen, in the bottom right corner, you can find the custom action: We now need to modify the view to include the order by adding the Order field to the view, and sorting on the Order field. The problem is that the Order field is hidden. It is possible to modify the schema of the Order field to set the Hidden attribute to FALSE. If we don’t want to write code to do it and still want the required result, it is also possible to modify the view through SharePoint Designer: Modify the code of the view: This results in: Note that if you change the view through the web UI these changes are partly lost. If this is a problem, modify the Order field schema for the list.

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database Project (VS 2010). When code is released to live we currently use the Visual Studio Database Project to build a script for all our schema changes. The problem we keep running into is having to alter or add to that script to add or fix data for the deployment. For example, if we add a new non-nullable column to an existing table we need to populate that column with data during the deployment. Other times we may want to create new records in transactional tables (e.g. assign specific users to a new security access). Do Visual Studio Database Projects have a way to store these scripts that only need to be run once and somehow include them in the build? Does it know which scripts need to be run (for example, if we are inserting default data we don't want to do that again a second time)? Or is there a better way to manage these scripts?
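    VS 2010 database projects do support a post-deployment script that runs on every publish, so the usual pattern is to make each data change idempotent rather than strictly "run once". A hedged sketch in T-SQL; the table, column and value names below are invented for illustration, not taken from the question:

        -- Guarded insert: safe to run on every deployment, only takes effect the first time.
        IF NOT EXISTS (SELECT 1 FROM dbo.SecurityAccess WHERE AccessName = 'NewFeature')
        BEGIN
            INSERT INTO dbo.SecurityAccess (AccessName, CreatedOn)
            VALUES ('NewFeature', GETDATE());
        END

        -- Backfill for a column added as nullable first; a later schema change makes it NOT NULL.
        UPDATE dbo.Orders
        SET    Region = 'UNKNOWN'
        WHERE  Region IS NULL;

    The same guard style works for seed data, lookup rows and one-off fixes, and it sidesteps the "has this already run?" bookkeeping.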

    Read the article

  • Entity Object Extension in Oracle Application R12

    - by Manoj Madhusoodanan
    In this blog I will explain how to perform an Entity Object (EO) extension. As a prerequisite, please read my previous blog. I am doing this exercise based on a PL/SQL EO. The following attributes are part of FndUserEO. Here I will add a validation to the UserName attribute: "Length should be > 5". The following steps need to be performed. 1) Download all files of "Entity Object Based on PL/SQL" to JDEV_USER_HOME/myprojects and JDEV_USER_HOME/myclasses. If you want to see the content of the source Java file, decompile it and save it in JDEV_USER_HOME/myprojects. 2) Create a new Entity Object, XXFndUserEO, as follows. Include all attributes of the parent EO. 3) Add the validation code snippet to XXFndUserEOImpl.java as follows. 4) Create the substitution as follows. 5) Migrate the files to $JAVA_TOP: xxcustom.oracle.apps.fnd.user.schema.server.XXFndUserEOImpl.java and xxcustom.oracle.apps.fnd.user.schema.server.XXFndUserEO.xml 6) Migrate the substitution. 7) Bounce the server. 8) Verify that the substitution has been applied properly. Access the Create User page and create a user. You can see the validation message if the user name length is less than 5. Give the User Name as XXCUST4 and verify the table. The FND_USER record has been created successfully.

    Read the article

  • SQL to XML open data made simple

    - by drrwebber
    The perennial question is: how do you easily generate XML from SQL table content? The latest CAM Editor release tackles this head on by providing a powerful and simple toolset. Firstly, you can visually browse your SQL tables and then drag and drop from columns and tables into the XML structure editor. This gives you a code-free method of describing the transformation you require, so you do not need to know about the vagaries of XML and XSD schema syntax. Secondly, you can map directly into existing industry domain XML exchange structures in the XML visual editor; again, no need to wrestle with XSD schema, and you have WYSIWYG visual control over what your output will look like. If you do not have a target XML structure and need to build one from scratch, then the CAM Editor makes this simple. Switch the SQL viewer into designer mode, then take your existing SQL table and drag and drop it into the XML structure editor. The XML wizard tool will automatically take your SQL column names and definitions, create equivalent XML for you, and insert the mappings. Simply save the structure template, and run the Open Data generator menu option, and your XML is built for you. Completely code-free, template-driven development. To see this in action, see our video demonstration links and then download the tools and samples and try it yourself.
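    For comparison, and separate from the CAM Editor's visual workflow, most databases can also emit XML straight from a query using the standard SQL/XML functions. A minimal sketch assuming a hypothetical products table (none of these names come from the article):

        -- Each row becomes a <Product> element with one child element per column.
        SELECT XMLELEMENT("Product",
                 XMLFOREST(p.product_id AS "Id",
                           p.name       AS "Name",
                           p.list_price AS "Price")) AS product_xml
        FROM   products p;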

    Read the article

  • Search multiple tables

    - by gilden
    I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines etc.). There can be any given number of archive tables in my system, each with its own schema. The schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything from all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing) the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return the paginated results. However, this approach has a few problems: substring searching can only count exact results; the 50% rule applies to each table separately, so MySQL may not return important matches or discards common words too naively; it is quite expensive in terms of query numbers and execution time (not an issue right now as there's not a lot of data yet in the tables); normalized data is not even searched (I have different tables for categories, languages and file attachments). My planned solution: create a single table having columns similar to id, table_id, row_id, data. Every time a new row is created/modified/deleted in any of the data tables, this central table also gets updated, with the data column containing a concatenation of all the fields in a row. I could then create a single index for Sphinx and use it for doing searches instead. Are there any more efficient solutions or best practices for how to approach this? Thanks.
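    A minimal sketch of that planned central table and one way to keep it in sync with a trigger (MySQL syntax; the archive1 table, its columns and the table_id value are all assumptions for illustration):

        CREATE TABLE search_index (
            id       INT AUTO_INCREMENT PRIMARY KEY,
            table_id INT NOT NULL,
            row_id   INT NOT NULL,
            data     TEXT,
            UNIQUE KEY uq_source (table_id, row_id)   -- one entry per source row
        );

        -- Keep the central copy current whenever a source row changes.
        DELIMITER //
        CREATE TRIGGER archive1_search_sync
        AFTER UPDATE ON archive1
        FOR EACH ROW
        BEGIN
            REPLACE INTO search_index (table_id, row_id, data)
            VALUES (1, NEW.id, CONCAT_WS(' ', NEW.title, NEW.author, NEW.body));
        END//
        DELIMITER ;

    Equivalent AFTER INSERT and AFTER DELETE triggers, or doing the same writes from the application layer (probably easier here, since the schemas are moderator-editable), complete the picture; Sphinx then indexes only search_index.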

    Read the article

  • What's wrong with performing unit tests against concrete implementations if your frameworks are not going to change?

    - by palm snow
    First, a bit of background: we are re-architecting our product suite that was written 10 years ago and has served its purpose. One thing that we cannot change is the database schema, as we have a 500+ client base using this system. Our DB schema has 150+ tables. We have decided on using Entity Framework 4.1 as the DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go with setting up a full-blown TDD environment. One aspect of setting up unit testing is getting into implementing repositories, unit of work, mocking frameworks, etc. This means there will be cost and investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, that will be mostly hand written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repositories, mocking etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need generic layers with the ability to change them whenever you like; however, it's very likely a YAGNI in our case. What we need is more like integration testing, where we can unit-test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems to be more beneficial in our case. Any thoughts on whether I am missing anything here?

    Read the article

  • Ruby on Rails using MySQL database

    - by Joseph Misiti
    Hey guys, new to Rails, trying to figure out something simple. It seems as though I cannot migrate a very simple MySQL database using the "rake db:migrate" command. Here is the issue: I know Rails defaults to SQLite right now, but I need to use MySQL for a series of reasons. I use the following commands: rails -d mysql MyMoviesSQL; cd MyMoviesSQL; script/generate scaffold Movies title:string rating:integer; rake db:migrate. I never get past here because I see the following error: (in /Users/user/websites/MyMovieSQL) rake aborted! NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8' (See full trace by running task with --trace) Using --trace: XXXXX-macbook-pro:MyMovieSQL user$ rake db:migrate --trace (in /Users/user/websites/MyMovieSQL) ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute db:migrate rake aborted! NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:599:in `configure_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:594:in `connect' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:203:in `initialize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `new' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `mysql_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate' /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116 /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19 I have no clue what is going on. If I need to add a patch because the method does not exist, please tell me which file to add it to, and also how in the future I can figure out which file I need to patch (it looks like it's a method on the Fixnum class). Here is a patch for a problem that looks similar, but it's for a different version of Ruby: http://www.mail-archive.com/[email protected]/msg00250.html Versions: rails 2.3.5, ruby 1.8.6. gem list yields: * LOCAL GEMS * actionmailer (2.3.5, 1.3.6) actionpack (2.3.5, 1.13.6) actionwebservice (1.2.6) activerecord (2.3.5, 1.15.6) activeresource (2.3.5) activesupport (2.3.5, 1.4.4) acts_as_ferret (0.4.1) capistrano (2.0.0) cgi_multipart_eof_fix (2.5.0) daemons (1.0.9) dbi (0.4.3) deprecated (2.0.1) dnssd (0.6.0) fastthread (1.0.1) fcgi (0.8.7) ferret (0.11.4) gem_plugin (0.2.3) highline (1.2.9) hpricot (0.6) libxml-ruby (0.9.5, 0.3.8.4) mongrel (1.1.4) needle (1.3.0) net-sftp (1.1.0) net-ssh (1.1.2) rack (1.0.1) rails (2.3.5) rake (0.8.7, 0.7.3) RedCloth (3.0.4) ruby-openid (1.1.4) ruby-yadis (0.3.4) rubygems-update (1.3.6) rubynode (0.1.3) sqlite3-ruby (1.2.1) termios (0.9.4) thanks in advance

    Read the article

  • subsonic.migrations and Oracle XE

    - by andrecarlucci
    Hello, probably I'm doing something wrong, but here it goes: I'm trying to create a database using subsonic.migrations on Oracle XE version 10.2.0.1.0. I have ODP v 10.2.0.2.20 installed. This is my app.config: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="SubSonicService" type="SubSonic.SubSonicSection, SubSonic" requirePermission="false"/> </configSections> <connectionStrings> <add name="test" connectionString="Data Source=XE; User Id=test; Password=test;"/> </connectionStrings> <SubSonicService defaultProvider="test"> <providers> <clear/> <add name="test" type="SubSonic.OracleDataProvider, SubSonic" connectionStringName="test" generatedNamespace="testdb"/> </providers> </SubSonicService> </configuration> And this is my first migration: public class Migration001_Init : Migration { public override void Up() { //Create the records table TableSchema.Table records = CreateTable("asdf"); records.AddColumn("RecordName"); } public override void Down() { DropTable("asdf"); } } When I run sonic.exe, I get this exception: Setting ConfigPath: 'App.config' Building configuration from D:\Users\carlucci\Documents\Visual Studio 2008\Projects\Wum\Wum.Migration\App.config Adding connection to test ERROR: Trying to execute migrate Error Message: System.Data.OracleClient.OracleException: ORA-02253: constraint specification not allowed here at System.Data.OracleClient.OracleConnection.CheckError(OciErrorHandle errorHandle, Int32 rc) at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, Boolean needRowid, OciRowidDescriptor& rowidDescriptor, ArrayList& resultParameterOrdinals) at System.Data.OracleClient.OracleCommand.ExecuteNonQueryInternal(Boolean needRowid, OciRowidDescriptor& rowidDescriptor) at System.Data.OracleClient.OracleCommand.ExecuteNonQuery() at SubSonic.OracleDataProvider.ExecuteQuery(QueryCommand qry) in D:\@SubSonic\SubSonic\SubSonic\DataProviders\OracleDataProvider.cs:line 350 at SubSonic.DataService.ExecuteQuery(QueryCommand cmd) in D:\@SubSonic\SubSonic\SubSonic\DataProviders\DataService.cs:line 544 at SubSonic.Migrations.Migrator.CreateSchemaInfo(String providerName) in D:\@SubSonic\SubSonic\SubSonic.Migrations\Migrator.cs:line 249 at SubSonic.Migrations.Migrator.GetCurrentVersion(String providerName) in D:\@SubSonic\SubSonic\SubSonic.Migrations\Migrator.cs:line 232 at SubSonic.Migrations.Migrator.Migrate(String providerName, String migrationDirectory, Nullable`1 toVersion) in D:\@SubSonic\SubSonic\SubSonic.Migrations\Migrator.cs:line 50 at SubSonic.SubCommander.Program.Migrate() in D:\@SubSonic\SubSonic\SubCommander\Program.cs:line 264 at SubSonic.SubCommander.Program.Main(String[] args) in D:\@SubSonic\SubSonic\SubCommander\Program.cs:line 90 Execution Time: 379ms What am I doing wrong? Thanks a lot for any help :) Andre Carlucci UPDATE: As pointed out by Anton, the problem is the SubSonic OracleSqlGenerator. It is trying to create the schema table using this SQL: CREATE TABLE SubSonicSchemaInfo ( version int NOT NULL CONSTRAINT DF_SubSonicSchemaInfo_version DEFAULT (0) ) which doesn't work on Oracle. The correct SQL would be: CREATE TABLE SubSonicSchemaInfo ( version int DEFAULT (0), constraint DF_SubSonicSchemaInfo_version primary key (version) ) The funny thing is that since this is the very first SQL executed by SubSonic migrations, NOBODY EVER TESTED it on Oracle.

    Read the article

  • I don't want to convert solution files when switching from Visual Studio 2008 to 2010. How?

    - by Vibhu
    I just got a new work laptop. I want to run Visual Studio 2010 Beta 2. However, the rest of my team is using Visual Studio 2008 with .NET 3.5, and I don't want to check in the solution migration changes to TFS. In fact, I don't want any migration changes at all - I just want to use the old .NET Framework with our old solution, with the new IDE. How can I do this? Is this possible? Thank you!

    Read the article

  • Migrating test cases & defects from Quality Center to TFS 2008/2010

    - by stackoverflowuser
    I'm looking for a tool that can be used to migrate (or, even better, synchronize) test cases and bugs between: TFS 2008 and Quality Center 9.2 (or later); TFS 2010 and Quality Center 9.2 (or later). I am aware of the following tools: Test Case Migrator (Excel/MHT) Tool; TFS Bug Item Synchronizer 2.2 for Quality Center. Also, Shai Raiten mentions on his blog a QC to Team System 2010 migration tool that he has been working on and that is done, but I could not find any link for downloading it: http://blogs.microsoft.co.il/blogs/shair/archive/2009/12/31/quality-center-migration-to-team-system-2010-done.aspx Before jumping into coding my own tool with the TFS SDK and QC components, I need some input from the Stack Overflow community.

    Read the article

  • Rails: creating a custom data type / creating a shorthand

    - by Shyam
    Hi, I am wondering how I could create a custom data type to use within a rake migration file. Example: if you were creating a model, inside the migration file you can add columns. It could look like this: def self.up create_table :products do |t| t.column :name, :string t.timestamps end end I would like to know how to create something like this: t.column :name, :my_custom_data_type The reason for this is to create, for example, a "currency" type, which is nothing more than a decimal with a precision of 8 and a scale of 2. Since I use only MySQL, a solution for this database is sufficient. Thank you for your feedback and comments!
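    For reference, whatever shorthand is used, the "currency" type ultimately has to produce a fixed-point column on the MySQL side. A sketch of the equivalent DDL (the products/price names are illustrative; normally the migration, not hand-written SQL, would generate this):

        CREATE TABLE products (
            id    INT AUTO_INCREMENT PRIMARY KEY,
            name  VARCHAR(255),
            price DECIMAL(8,2)   -- what :decimal, :precision => 8, :scale => 2 maps to
        );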

    Read the article

  • What does this rake db:seed error mean?

    - by Kenji Kina
    I've been trying to solve this problem for a couple of hours but I can't seem to understand what's going on. I'm using Rails 3 beta, and want to seed some data to the database. However, when I try to seed some values through db:seed, I get this error: rake aborted! Attribute(#81402440) expected, got Array(#69024170) The seeds.rb is: DataType.delete_all DataType.create( :name => 'String' ) And I got these classes: class DataType < ActiveRecord::Base has_many :attributes end class Attribute < ActiveRecord::Base belongs_to :data_types end While the migration definition for DataType is merely: class CreateDataTypes < ActiveRecord::Migration def self.up create_table :data_types do |t| t.string :name t.timestamps end end def self.down drop_table :data_types end end Can anyone tell me what I'm doing wrong?

    Read the article

  • Oracle: set timezone for column

    - by dbf
    Hi, I need to do a date-to-timestamp-with-time-zone migration similar to the one described here: http://stackoverflow.com/questions/1664627/migrating-oracle-date-columns-to-timestamp-with-timezone. But I need to make an additional conversion (needed to work correctly with legacy apps): for all dates we need to change the time zone to UTC and set the time to 12:00 PM. Right now dates are stored in the local database (New York) time zone. I need to convert them this way: 25/12/2009 09:12 AM (local time zone) in the date column becomes 25/12/2009 12:00 PM UTC in the timestamp with local time zone column. Could you advise how to set the time zone for a date value in Oracle? (I found only suggestions on how to convert from one time zone to another; for example, in Java there is a setTimeZone method for Calendar objects.) We want to do the conversion this way: rename the old date column to NAME_BAK; create a new timestamp with local time zone column; iterate over the old column and, for not-null values, set the time zone to UTC and the time to 12:00 PM; drop the old column after testing this migration.
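    A minimal sketch of those steps in Oracle SQL (the events/event_date names are assumptions; the key piece is FROM_TZ, which stamps a time zone onto a time-zone-less timestamp instead of converting it):

        ALTER TABLE events RENAME COLUMN event_date TO event_date_bak;
        ALTER TABLE events ADD (event_date TIMESTAMP WITH LOCAL TIME ZONE);

        -- Keep the calendar date, force the time to 12:00 PM, and label the value as UTC.
        UPDATE events
        SET    event_date = FROM_TZ(CAST(TRUNC(event_date_bak) AS TIMESTAMP)
                                    + INTERVAL '12' HOUR, 'UTC')
        WHERE  event_date_bak IS NOT NULL;

        -- After the migration has been verified:
        -- ALTER TABLE events DROP COLUMN event_date_bak;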

    Read the article

  • Convert File System Website to IIS Website

    - by Noah
    We recently migrated from VS 2008 to VS 2010. The migration went fine, except for our web project. Before, in VS 2008, the site showed up as http://localhost/Website. Now, it appears as C:...\Website. It appears that when we did the migration, VS started to treat it as a file system website. I've tried removing the existing site and re-adding it as an existing website, but it still displays it as C:...\Website. Is there any way to convert it back to show it as a http://localhost/website, and run through IIS, as opposed to the default ASP.NET Development Server?

    Read the article

  • Extremely slow insert from Delphi to Remote MySQL Database

    - by MarkRobinson
    Having a major hair-pulling issue with extremely slow inserts from Delphi 2010 to a remote MySQL 5.09 server. So far, I have tried: ADO using the MySQL ODBC Driver; Zeoslib v7 Alpha. I have used batching and direct insert with ADO (using table access), and with Zeos I have used SQL insertion with a Query, then Table direct mode, and also cached updates Table mode using ApplyUpdates and Commit. I have tried both technologies with compression on and off. So far I have seen pretty much the same across the board: 7.5 records per second! Now, I would from this point assume that the remote server is just slow, but MySQL Workbench is amazingly fast, and the Migration Toolkit managed the initial migration very quickly (to be honest, I don't recall how quickly - which kind of means that it was quick). I'm just about to try the MyDAC components as we already use SDAC (wish there was a multi-buy discount or that we'd chosen UniDAC instead now!). Any ideas?
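    Not an answer from the thread, but 7.5 rows per second over a WAN usually points at one network round trip (and often one commit) per row. Whichever Delphi data-access layer ends up being used, the usual first experiment is to batch many rows into one statement and one transaction; a sketch in plain MySQL SQL (table and values are made up):

        START TRANSACTION;

        -- One round trip carries many rows instead of one.
        INSERT INTO customers (name, city) VALUES
            ('Alice', 'London'),
            ('Bob',   'Paris'),
            ('Carol', 'Berlin');
        -- ...repeat multi-row INSERTs in chunks of a few hundred rows...

        COMMIT;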

    Read the article

  • How to migrate existing udp application to raw sockets

    - by osgx
    Hello, is there a tutorial for migrating from plain UDP sockets (Linux, C99/C++, the recv syscall is used) to raw sockets? According to http://aschauf.landshut.org/fh/linux/udp_vs_raw/ch03s04.html a raw socket is much faster than UDP. The application is client-server. The client is proprietary and must use exactly the same protocol as it did with the UDP server, but the server can be a bit faster with raw sockets. What parts of UDP must I implement in the server? Are there any "quick migration" libraries?

    Read the article

  • Ruby on Rails - Currency: commas causing an issue

    - by easement
    Looking on SO, I see that the preferred way to handle currency in RoR is to use decimal(8,2) columns and output them using number_to_currency(). I can get my numbers out of the DB, but I'm having issues getting them in. Inside my update action I have the following line: if @non_labor_expense.update_attributes(params[:non_labor_expense]) puts YAML::dump(params) The dump of params shows the correct value, xx,yyy.zz, but what gets stored in the DB is only xx.00. What do I need to do to take into account that there may be commas, and that a user may not enter .zz (the cents)? Some regex for the comma? How would you handle the decimal if it were .2 versus .20? There has to be a built-in, or at least a better way. My migration (I don't know if this helps): class ChangeExpenseToDec < ActiveRecord::Migration def self.up change_column :non_labor_expenses, :amount, :decimal, :precision => 8, :scale => 2 end def self.down change_column :non_labor_expenses, :amount, :integer end end

    Read the article

  • Is Django's manage.py syncdb or South used to create the test database?

    - by Thierry Lam
    With Django 1.1.1 and South 0.62, running a test from the CLI usually gives the following output: Creating table some_model Installing index for my_app.SomeModel model . ----- Ran 1 test in 1s OK After upgrading to South 0.7, the output shows South's migrations being invoked: Creating table some_model Installing index for my_app.SomeModel model Migrating... Running migrations for my_app: - Migrating forwards to 0001_initial > my_app:0001_initial - Loading initial data for my_app Migrated: - my_app . ----- Ran 1 test in 1s OK To create the test DB, has the test runner always used South migrations in the past (before South 0.7), even if the output did not explicitly show it?

    Read the article

  • Migrate a Django project from MySQL to Oracle

    - by pablo
    Hi, I have a Django 1.1 project that works with a legacy MySQL DB. I'm trying to migrate this project to Oracle (XE and 11g). We have two options for the migration: use SQL Developer to create a migration SQL script, or use Django fixtures. The schema created with the SQL script from SQL Developer doesn't match the schema created by syncdb. For example, Django expects TIMESTAMP columns while SQL Developer creates DATE columns. Using syncdb with Django fixtures could be great, but when trying to load the MySQL fixtures into Oracle, after using syncdb, I'm getting: IntegrityError: ORA-00001: unique constraint (USER.SYS_C004253) violated How can I find out which part causes the integrity error? Thanks
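    A first step (not from the question itself) is simply to ask Oracle's data dictionary which table and columns the system-generated constraint belongs to; from there you can work out which fixture rows collide. A sketch, run as the schema owner:

        SELECT c.table_name,
               c.constraint_type,   -- 'P' = primary key, 'U' = unique
               cc.column_name
        FROM   user_constraints  c
        JOIN   user_cons_columns cc
               ON cc.constraint_name = c.constraint_name
        WHERE  c.constraint_name = 'SYS_C004253';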

    Read the article

  • Easiest way to rename a model using Django/South?

    - by vaughnkoch
    Hi everyone, I've been hunting for an answer to this on South's site, google, and SO, but couldn't find a simple way to do this. I want to rename a Django model using South. Say you have the following: class Foo(models.Model): name = models.CharField() class FooTwo(models.Model): name = models.CharField() foo = models.ForeignKey(Foo) and you want to convert Foo to Bar, namely class Bar(models.Model): name = models.CharField() class FooTwo(models.Model): name = models.CharField() foo = models.ForeignKey(Bar) To keep it simple, I'm just trying to change the name from Foo to Bar, but ignore the 'foo' member in FooTwo for now. What's the easiest way to do this using South? a) I could probably do a data migration, but that seems pretty involved. b) Write a custom migration, e.g. db.rename_table('city_citystate', 'geo_citystate'), but I'm not sure how to fix the foreign key in this case. c) An easier way that you know? Thanks!
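    For what it's worth, at the SQL level option (b) mostly boils down to a table rename; in PostgreSQL and MySQL existing foreign keys keep pointing at the renamed table, so the FooTwo.foo column's data does not need to change (the 'myapp' app label below is an assumption):

        -- Roughly what db.rename_table('myapp_foo', 'myapp_bar') executes.
        ALTER TABLE myapp_foo RENAME TO myapp_bar;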

    Read the article

  • Circular Reference for Parent Link on Work Item cannot be resolved by Retry

    - by Atters
    I receive the following error: OH-TFS-Connector-0051: Operation failed getCollectionMetaData. Server Error : TF201063: Adding a Parent link to work item 1737 would result in a circular relationship. To create this link, evaluate the existing links, and remove one of the other links in the cycle. I have completely flattened out the work items in the source project. When retrying the migration, the timestamp is modified on the pending errors; however, the issues are not resolved. These work items now have no parents or children in the source project. So I'm wondering if the retry list is no longer valid, but there doesn't appear to be a way to have it update. I can run the whole migration again; however, it takes 5-7 hours just to do the work items, so it would be great if there is a quick fix.

    Read the article

  • Porting VS2005 project to VS2008

    - by lucavb
    I need to port a VS2005 project (.NET 2) to VS2008 (.NET 3.5) (or to VS2010 / .NET 4; not yet decided). The project is composed of: resources and configuration files (VS project files, like .settings, .vbproj, .myapp, .config, .xconfig, .Designer.vb); a lot of VB code; .xsc, .xsd, .xss and .xsx files; a lot of Crystal Reports for VS2005; graphical resources. The application takes data from several SQL Server 2005 database instances in order to generate reports. What is the best way to approach a migration activity? Is there an internal migration tool? If yes, what is the best practice for using it? Which kinds of files will be automatically ported to the new VS version? Thanks in advance for all the provided information.

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >