Search Results

Search found 1693 results on 68 pages for 'sqlalchemy migrate'.

Page 18 of 68

  • How do I migrate a ManyToOne to a ManyToMany relationship in Hibernate?

    - by spderosso
    I have an instance field of a class X that is mapped with Hibernate as a many-to-one relationship, e.g.:

        public class X {
            ...
            @ManyToOne(optional=false)
            private Y iField;
            ...
        }

    That is working correctly against a particular schema. I now want to change this instance field iField to a List and a many-to-many relationship:

        public class X {
            ...
            @ManyToMany
            private List<Y> iField;
            ...
        }

    What steps should I follow? Do I have to change the schema, and in which way? In case you need more info, let me know. Thanks!
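
    Since Hibernate persists a @ManyToMany through a join table by default, the schema does have to change. A rough sketch of the migration in SQL, assuming tables x and y with int primary keys named id and an existing foreign key column ifield_id on x (all of these names are hypothetical; substitute your real ones, or name the join table explicitly with @JoinTable):

        -- 1. Create the join table the @ManyToMany mapping will use.
        CREATE TABLE x_y (
            x_id INT NOT NULL REFERENCES x(id),
            y_id INT NOT NULL REFERENCES y(id),
            PRIMARY KEY (x_id, y_id)
        );

        -- 2. Preserve the associations currently held by the old FK column.
        INSERT INTO x_y (x_id, y_id)
        SELECT id, ifield_id FROM x WHERE ifield_id IS NOT NULL;

        -- 3. Drop the now-redundant foreign key column.
        ALTER TABLE x DROP COLUMN ifield_id;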

    Read the article

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables; backups are around 4GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend.

    Transitioning to the hi-lo generator seems to be the best way forward, but I can't find much detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help?

    Update: NHibernate's increment identifier generator works entirely in memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on it allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes.

    For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit-of-work pattern. The hi-lo algorithm sits in between these: generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit-of-work pattern.
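
    As far as I can tell, NHibernate's schema export will create the hi-lo table but won't seed it relative to keys you've already allocated, so seeding is part of the migration. A sketch in T-SQL, using the generator's default table and column names and assuming max_lo is configured to 100 in the mapping (ids then come out as roughly next_hi * 101 + lo); the source table name is a placeholder:

        -- Default names for NHibernate's hilo generator.
        CREATE TABLE hibernate_unique_key (next_hi INT NOT NULL);

        -- Seed next_hi safely above every key the increment generator has
        -- already handed out; extend the SELECT across all tables that
        -- will share this generator.
        INSERT INTO hibernate_unique_key (next_hi)
        SELECT MAX(id) / 101 + 1 FROM my_largest_keyed_table;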

    Read the article

  • Is it a good idea to "migrate business logic code into our domain model"?

    - by Bytecode Ninja
    I am reading Hibernate in Action and the author suggests moving business logic into our domain models (p. 306). For instance, in the example presented by the book, we have three entities named Item, Bid, and User, and the author suggests adding a placeBid(User bidder, BigDecimal amount) method to the Item class. Considering that we usually have a distinct layer for business logic (e.g. Manager or Service classes in Spring) that, among other things, controls transactions, is this really good advice? Isn't it better not to add business logic methods to our entities? Thanks in advance.
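
    For reference, a minimal sketch of what the book's suggestion amounts to: the invariant lives on the entity, while a service class would still own transactions. The Bid constructor and the minimum-amount rule here are illustrative, not the book's exact code:

        import java.math.BigDecimal;
        import java.util.ArrayList;
        import java.util.List;

        class User { }

        class Bid {
            Bid(Item item, User bidder, BigDecimal amount) { /* store fields */ }
        }

        public class Item {
            private BigDecimal minimumAmount;
            private List<Bid> bids = new ArrayList<Bid>();

            // Domain logic on the entity: the rule travels with the data it guards.
            public Bid placeBid(User bidder, BigDecimal amount) {
                if (amount.compareTo(minimumAmount) < 0) {
                    throw new IllegalArgumentException("Bid is below the minimum");
                }
                Bid bid = new Bid(this, bidder, amount);
                bids.add(bid);
                return bid;
            }
        }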

    Read the article

  • Tool to migrate a repo with only read access?

    - by Corazu
    I have a Subversion repository located on our school servers, from a project my friends and I worked on this past semester. We want to take the project, polish it, and make it more of a portfolio item, and I'm not sure the repo will stay intact on the school servers (they usually wipe them after a few semesters). I don't have access to dump the repo, although I've sent an email to see if I can get one, which would be the best solution. I tried svnsync, but the school server doesn't support the replay command (probably turned off), so that won't work for me. Now, theoretically, wouldn't it be possible to (manually or programmatically) check out each revision of the repository and check it in to a new repository on our own server? I think it would work: there are only three of us, and there are no working-copy conflicts to worry about; we just want a copy of all the history in the repo in case we need to go back and look at it. That being said, before I go and reinvent the wheel by writing a script to do it, does something like this already exist? I have to figure there's already a tool for it, or that someone has written a script to do it.

    Read the article

  • How do I migrate from a basic plaintext password authentication to an OAuth based system?

    - by different
    Hello. I found out today that Twitter will be discontinuing basic authentication for its API; the push is now towards OAuth, but I don't have a clue how to use it or whether it's the right path for me. All I want to be able to do is post a tweet linking to the most recently published post when I hit publish. Currently I'm sending the login credentials for my Twitter account as plaintext, which I realise isn't that secure, but as my site is fairly small it isn't an issue, at least for now. I'm using this basic PHP code:

        $status = urlencode(stripslashes(urldecode("Test tweet")));
        $tweetUrl = 'http://www.twitter.com/statuses/update.xml';

        $curl = curl_init();
        curl_setopt($curl, CURLOPT_URL, "$tweetUrl");
        curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 2);
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($curl, CURLOPT_POST, 1);
        curl_setopt($curl, CURLOPT_POSTFIELDS, "status=$status");
        curl_setopt($curl, CURLOPT_USERPWD, "$username:$password");

        $result = curl_exec($curl);
        $resultArray = curl_getinfo($curl);
        if ($resultArray['http_code'] == 200) {
            curl_close($curl);
            $this->redirect("");
        } else {
            curl_close($curl);
            echo 'Could not post to Twitter. Please go back and try again.';
        }

    How do I move from this to an OAuth system? Do I need to?
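
    Yes, you'll need OAuth once basic auth is switched off. For a single-account "post on publish" case you can skip the full browser authorization flow: register an application on dev.twitter.com, generate an access token for your own account there, and sign requests with a library. A sketch using the third-party twitteroauth library; the four credential variables are placeholders you fill in from the app registration page:

        <?php
        require_once 'twitteroauth/twitteroauth.php';

        // Credentials from dev.twitter.com; no username/password needed.
        $connection = new TwitterOAuth($consumerKey, $consumerSecret,
                                       $accessToken, $accessTokenSecret);

        // The library signs the request with OAuth and posts the status.
        $result = $connection->post('statuses/update',
                                    array('status' => 'Test tweet'));

        if ($connection->http_code == 200) {
            $this->redirect("");
        } else {
            echo 'Could not post to Twitter. Please go back and try again.';
        }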

    Read the article

  • How should I migrate DDL changes from one environment to the next?

    - by Rl
    I make DDL changes using SQL Developer's GUI. Problem is, I need to apply those same changes to the test environment. I'm wondering how others handle this issue. Currently I'm having to manually write ALTER statements to bring the test environment into alignment with the development environment, but this is prone to error (doing the same thing twice). In cases where there's no important data in the test environment I usually just blow everything away, export the DDL scripts from dev and run them from scratch in test. I know there are triggers that can store each DDL change, but this is a heavily shared environment and I would like to avoid that if possible. Maybe I should just write the DDL stuff manually rather than using the GUI?
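
    One approach that avoids both hand-written ALTERs and DDL triggers, sketched here: stop treating the GUI as the source of truth, and instead write every change as a numbered script that is checked into version control and run unchanged in each environment. All names below (the script, the table, the bookkeeping) are illustrative:

        -- 042_add_invoice_status.sql: written once, applied to dev, then
        -- test, then production, in order.
        ALTER TABLE invoices ADD (status VARCHAR2(20) DEFAULT 'NEW' NOT NULL);

        -- Record the application so environments can be compared and no
        -- script runs twice (schema_version is a hypothetical table).
        INSERT INTO schema_version (script, applied_at)
        VALUES ('042_add_invoice_status.sql', SYSDATE);
        COMMIT;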

    Read the article

  • Can you use MongoDB map/reduce to migrate data?

    - by Brian Armstrong
    I have a large collection where I want to modify all the documents by populating a field. A simple example might be caching the comment count on each post:

        class Post
          field :comment_count, type: Integer
          has_many :comments
        end

        class Comment
          belongs_to :post
        end

    I can run it serially with something like:

        Post.all.each do |p|
          p.update_attribute :comment_count, p.comments.count
        end

    But it's taking 24 hours to run (it's a large collection). I was wondering if Mongo's map/reduce could be used for this, but I haven't seen a great example yet. I imagine you would map over the comments collection and then store the reduced results in the posts collection. Am I on the right track?
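
    That is the right track. A sketch in the mongo shell, assuming Mongoid's default collection and field names (comments documents carry a post_id); adjust names to your schema:

        // Map each comment to (post_id, 1); reduce by summing.
        var map = function () { emit(this.post_id, 1); };
        var reduce = function (key, values) { return Array.sum(values); };

        // Writes { _id: <post id>, value: <count> } docs to comment_counts.
        db.comments.mapReduce(map, reduce, { out: "comment_counts" });

        // Second pass: copy the counts onto the posts themselves.
        db.comment_counts.find().forEach(function (doc) {
            db.posts.update({ _id: doc._id },
                            { $set: { comment_count: doc.value } });
        });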

    Read the article

  • What next-generation low-level language is the best bet to migrate the code base?

    - by e-satis
    Let's say you have a company running a lot of C/C++, and you want to start planning a migration to new technologies so you don't end up like the COBOL companies of 15 years ago. For now, C/C++ runs more than fine and there are plenty of devs on the market for it. But you want to start thinking about it now because, given the huge running code base and the sensitivity of the data, you feel it could take 5-10 years to move to the next step without overloading the budget and the dev teams. You have heard about D, which is starting to be quite mature, and Go, which promises to become quite popular. What would be your choice, and why?

    Read the article

  • How to migrate Django models from mysql to sqlite (or between any two database systems)?

    - by Daphna Shezaf
    I have a Django deployment in production that uses MySQL. I would like to do further development with SQLite, so I would like to import my existing data into an SQLite database. There is a shell script to convert a general MySQL dump to SQLite, but it didn't work for me (apparently the general problem isn't easy). I figured doing this through the Django models must be much easier. How would you do this? Does anyone have a script for it?
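
    One model-level route, sketched with Django's own serializers (flags per the Django versions of that era; contenttypes is excluded because loaddata tends to collide with the rows syncdb auto-creates):

        # 1. With settings.py still pointing at MySQL, dump everything to JSON.
        python manage.py dumpdata --exclude contenttypes > data.json

        # 2. Switch the DATABASE_* settings to sqlite3, then build the empty schema.
        python manage.py syncdb --noinput

        # 3. Load the dump into the new SQLite database.
        python manage.py loaddata data.json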

    Read the article

  • Migrate/update a Core Data app without erasing user data

    - by treasure
    Hello, I have a very complicated problem that I would like to share with you, and maybe someone can answer it for me. Before I start I have to say that I am very new to this. I have a Core Data iPhone app (much like the Recipes sample app) that uses a pre-populated SQL database. The user can add and edit his own data, but the default data cannot be deleted; the user's data is all saved in the same SQL database. Question: what do I have to do to update some (not all) of the default data stored in the SQL database without touching the user's data? (The model will stay the same, no new entities, etc.) If the user uninstalls the app and then reinstalls the new version everything will be OK, but obviously I don't want to require that. Can someone please help at the coding level?

    Read the article

  • How to migrate a codebase from one svn repo to another preserving history?

    - by chotchki
    I have a branch in a badly structured svn repo that needs to be stripped out and moved to another svn repository (I'm trying to clean it up a bit). If I do an 'svn log' and don't stop on copy/rename, I can see all 3427 commits that I care about. Is there some way to dump the revisions out, short of writing some major scripts? I would follow the advice in this question, but this branch has been moved all over the place and I would like to preserve the moves as well. Any ideas?
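
    If you have filesystem access to the source repository, a dump-and-filter sketch (paths are placeholders; note that svndumpfilter can choke when a kept path was copied from an excluded one, which is why every location the branch has lived at must be listed):

        svnadmin dump /srv/svn/old-repo > old-repo.dump

        svndumpfilter include branches/mybranch old/location/mybranch \
            --drop-empty-revs --renumber-revs < old-repo.dump > branch.dump

        svnadmin create /srv/svn/new-repo
        svnadmin load /srv/svn/new-repo < branch.dump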

    Read the article

  • How to connect to SQLServer 2k5 using Ruby 1.8.7 over W2k3 with active record 2.3.5

    - by Luke
    Hi all, sorry for the blast. I'm trying to connect to SQL Server 2005 using Ruby 1.8.7 on Windows 2003 with ActiveRecord 2.3.5, but when I run 'rake migrate' it throws the following:

        rake migrate --trace
        Hoe.new {...} deprecated. Switch to Hoe.spec.
        Invoke migrate (first_time)
        Invoke environment (first_time)
        Execute environment
        Execute migrate
        rake aborted!
        no such file to load -- odbc
        (...)
        C:/Program Files/test/Rakefile:146
        (...)

    My Rakefile, at the line 146 it mentions, says:

        ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil)

    database.yml has been configured in many ways without success. I've tried setting mode to odbc, configuring a system DSN, and using ActiveRecord's native sqlserver support, but no luck at all. The same Rakefile works fine against Postgres and Oracle with the proper gems installed, of course, but I can't get this to work. Any help will be appreciated. Thanks in advance!
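
    "no such file to load -- odbc" means Ruby's ODBC binding itself is missing, not that the DSN is wrong. A sketch of one setup from that era, after `gem install ruby-odbc activerecord-sqlserver-adapter` (the DSN and credentials are placeholders):

        development:
          adapter: sqlserver
          mode: odbc
          dsn: my_system_dsn   # a System DSN created in the Windows ODBC admin tool
          username: app_user
          password: secret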

    Read the article

  • Trouble using South with Django and Heroku

    - by Dan
    I had an existing Django project that I've just added South to:

    1. I ran syncdb locally.
    2. I ran manage.py schemamigration app_name locally.
    3. I ran manage.py migrate app_name --fake locally.
    4. I committed and pushed to Heroku master.
    5. I ran syncdb on Heroku.
    6. I ran manage.py schemamigration app_name on Heroku.
    7. I ran manage.py migrate app_name on Heroku.

    I then receive this:

        $ heroku run python notecard/manage.py migrate notecards
        Running python notecard/manage.py migrate notecards attached to terminal... up, run.1
        Running migrations for notecards:
         - Migrating forwards to 0005_initial.
         > notecards:0003_initial
        Traceback (most recent call last):
          File "notecard/manage.py", line 14, in <module>
            execute_manager(settings)
          File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager
            utility.execute()
          File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 220, in execute
            output = self.handle(*args, **options)
          File "/app/lib/python2.7/site-packages/south/management/commands/migrate.py", line 105, in handle
            ignore_ghosts = ignore_ghosts,
          File "/app/lib/python2.7/site-packages/south/migration/__init__.py", line 191, in migrate_app
            success = migrator.migrate_many(target, workplan, database)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 221, in migrate_many
            result = migrator.__class__.migrate_many(migrator, target, migrations, database)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 292, in migrate_many
            result = self.migrate(migration, database)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 125, in migrate
            result = self.run(migration)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 99, in run
            return self.run_migration(migration)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 81, in run_migration
            migration_function()
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 57, in <lambda>
            return (lambda: direction(orm))
          File "/app/notecard/notecards/migrations/0003_initial.py", line 15, in forwards
            ('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])),
          File "/app/lib/python2.7/site-packages/south/db/generic.py", line 226, in create_table
            ', '.join([col for col in columns if col]),
          File "/app/lib/python2.7/site-packages/south/db/generic.py", line 150, in execute
            cursor.execute(sql, params)
          File "/app/lib/python2.7/site-packages/django/db/backends/util.py", line 34, in execute
            return self.cursor.execute(sql, params)
          File "/app/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
            return self.cursor.execute(query, args)
        django.db.utils.DatabaseError: relation "notecards_semester" already exists

    I have three models: Section, Semester, and Notecards. I've added one field to the Notecards model and I cannot get it added on Heroku. Thank you.
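
    The error says migration 0003 is trying to CREATE TABLE notecards_semester, which syncdb already created on Heroku, so South is replaying work that's already done. One common way out, as a sketch (double-check which of your migrations correspond to tables that already exist before faking them):

        # Mark the already-applied schema as done without executing it...
        heroku run python notecard/manage.py migrate notecards 0003 --fake

        # ...then run the remaining migrations for real.
        heroku run python notecard/manage.py migrate notecards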

    Read the article

  • We Need More Migration!

    - by rickramsey
    Eva Mendes says, "Oye chico, do you really want to keep your data in that tired legacy file system when it could be enjoying encryption, compression, deduplication, snapshots, remote replication and other benefits provided by ZFS in Oracle Solaris 11? It's really not that hard to cross over. If you know how."

    "I don't know how, me dices? Esta bien, papacito. Go to OTN. Take my word for it. They know how."

    <blushing> Aw shucks, Eva. Anything for you! </blushing>

    The Best Way to Migrate Data From Legacy File Systems to ZFS

    To migrate data from a legacy file system to ZFS in Oracle Solaris 11, you need to install the shadow-migration package and enable the shadowd service. Then follow the simple procedure described by Dominic Kay.

    How to Update to Oracle Solaris 11 Using the Image Packaging System

    Oracle Solaris 11.1 has been released. You can upgrade using either Oracle's official Solaris release repository or, if you have a support contract, the Support repository. Peter Dennis explains how.

    How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11

    How to use the Oracle Solaris 8 P2V (physical to virtual) Archiver tool, which comes with Oracle Solaris Legacy Containers, to migrate a physical Oracle Solaris 8 system with Oracle Database and an Oracle Automatic Storage Management file system into an Oracle Solaris 8 branded zone inside an Oracle Solaris 10 guest domain on top of an Oracle Solaris 11 control domain.

    - Ricardo
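
    For the impatient, the gist of the shadow-migration procedure as a sketch (pool and path names are placeholders; Dominic Kay's article has the full version):

        pkg install shadow-migration
        svcadm enable shadowd

        # Create the new ZFS dataset with its shadow property pointing at
        # the legacy file system; data then migrates in the background and
        # on access.
        zfs create -o shadow=file:///export/old-data rpool/new-data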

    Read the article

  • Different database for Membership and our web data or use just one?

    - by Jesus Rodriguez
    Is it better to keep our membership stuff on the DefaultConnection and create another connection (another database) for our data, or to use just one database for everything? If I have a MyAppContext and I want migrations for that context, it seems I cannot also have migrations for UserContext; in other words, I can only migrate one context. So, with two different databases, I can migrate either the users (though membership migration seems weird) or the web data. Or I can merge UserContext and MyAppContext into one UserAndAppContext and migrate everything in one place, but this mixing also seems weird. What's the normal way to do this, one database or two, and what should be migrated?
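
    A sketch (all names illustrative) of one database holding both contexts, each with its own migration history, which sidesteps the one-context limitation:

        using System.Data.Entity;

        public class UserProfile { public int Id { get; set; } }
        public class Product     { public int Id { get; set; } }

        public class UserContext : DbContext
        {
            public UserContext() : base("DefaultConnection") { }
            public DbSet<UserProfile> Users { get; set; }
        }

        public class MyAppContext : DbContext
        {
            public MyAppContext() : base("DefaultConnection") { }
            public DbSet<Product> Products { get; set; }
        }

        // EF 6 lets each context keep its own migrations folder, so both can
        // be migrated independently even against the same database:
        //   Enable-Migrations -ContextTypeName UserContext  -MigrationsDirectory Migrations\Users
        //   Enable-Migrations -ContextTypeName MyAppContext -MigrationsDirectory Migrations\App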

    Read the article

  • Advisor Webcast: Hyperion Planning: Migrating Business Rules to Calc Manager

    - by inowodwo
    As you may be aware, EPM 11.1.2.1 was the terminal release of Hyperion Business Rules (see Hyperion Business Rules Statement of Direction, Doc ID 1448421.1). This webcast aims to help you migrate from Business Rules to Calc Manager.

    Date: January 10, 2013 at 3:00 pm GMT (London) / 4:00 pm (Berlin, GMT+01:00) / 7:00 am Pacific / 8:00 am Mountain / 10:00 am Eastern

    Topics will include:

    - Calculation Manager in 11.1.2.2
    - Migration considerations
    - How to migrate HBR rules from 11.1.2.1 to Calculation Manager 11.1.2.2
    - How to migrate the security of the Business Rules
    - How to approach troubleshooting and known issues with migration

    For registration details please go to Migrating Business Rules to Calc Manager (Doc ID 1506296.1). Alternatively, to view all upcoming webcasts go to Advisor Webcasts: Current Schedule and Archived Recordings (Doc ID 740966.1) and choose Oracle Business Analytics from the drop-down menu.

    Read the article

  • Rails migration won't run, no error thrown

    - by kouak
    Here's a simple migration I'd like to run:

        class AddTimeOfRevisionToBrandWikis < ActiveRecord::Migration
          def self.up
            add_column :brand_wikis, :time_of_revision, :datetime
          end

          def self.down
            remove_column :brand_wikis, :time_of_revision
          end
        end

    Here's what I get when I try to run it:

        $ rake db:migrate
        (in /Users/kouak/Documents/workspace/wtb)
        You have 1 pending migrations:
          20100404115341 AddTimeOfRevisionToBrandWikis
        Run "rake db:migrate" to update your database then try again.

    What's wrong with rake db:migrate?

    Read the article

  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on HP-UX PA-RISC

    - by John Abraham
    As a follow-up to our original announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following HP-UX platforms:

    - Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher): HP-UX PA-RISC (64-bit) (11.31)
    - Release 12 (12.0.4 and higher, 12.1.1 and higher): HP-UX PA-RISC (64-bit) (11.31)

    This announcement for Oracle E-Business Suite 11i and R12 includes:

    - Real Application Clusters (RAC)
    - Oracle Database Vault
    - Transparent Data Encryption (Column Encryption)
    - TDE Tablespace Encryption
    - Advanced Security Option (ASO)/Advanced Networking Option (ANO)
    - Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances
    - Transportable Database and Transportable Tablespaces
    - Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12

    References:

    - MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0)
    - MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0)
    - MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2
    - MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2
    - MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i
    - MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12
    - MOS Document 761570.1 - Database Preparation Guidelines for an Oracle E-Business Suite Release 12.1.1 Upgrade
    - MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i
    - MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
    - MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i
    - MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12
    - MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i
    - MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12
    - MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option
    - MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    - MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    - MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2
    - MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2
    - MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g
    - MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g

    Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

    Read the article

  • Migration Guide: Migrating to SQL Server 2012 Failover Clustering and Availability Groups from Prior Clustering and Mirroring Deployments

    This paper provides guidance for customers who, prior to SQL Server 2012, deployed SQL Server failover clustering for local high availability and database mirroring for disaster recovery, and who want to migrate to SQL Server AlwaysOn. It describes the corresponding SQL Server AlwaysOn scenario and the migration paths to SQL Server AlwaysOn. It also covers the important knowledge and considerations needed to successfully migrate to an HADR solution based on SQL Server AlwaysOn technology, which implements AlwaysOn Failover Cluster Instances for high availability and AlwaysOn Availability Groups for disaster recovery.

    Read the article

  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, even though the metadata is persisted, connections from the clients hang and eventually time out after 90 seconds. After those 90 seconds, the old mountpoint becomes stale and the mounted files can no longer be accessed. The 90-second grace period seems to be part of the server configuration and not the client configuration. I see this message on the server:

        kernel: NFSD: starting 90-second grace period

    If I restart the NFS client on the client nodes after I migrate (unmounting and then remounting the share), I don't experience the problem, but connections and file transfers are still interrupted. Three questions:

    1. What is the 90-second grace period? What's it there for?
    2. How can I keep the files from going stale on the clients without restarting them after I migrate the NFS server to another node?
    3. Is it actually possible to migrate the NFS server without having large file uploads drop?

    Read the article

  • How To Run Postgres locally

    - by Rohit Rayudu
    I read the Postgres instructions in the Flask docs, which say that to use Postgres you should have the following code:

        from flask import Flask
        from flask.ext.sqlalchemy import SQLAlchemy

        app = Flask(__name__)
        app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://localhost/[YOUR_DB_NAME]'
        db = SQLAlchemy(app)

    How do I know my database name? I wrote db as the name, but I got an error:

        sqlalchemy.exc.OperationalError: (OperationalError) FATAL: database "[db]" does not exist

    I'm running Flask on Heroku, if that helps.
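
    On Heroku you don't pick the database name yourself: the Postgres add-on supplies the whole connection string in the DATABASE_URL environment variable. Locally, you create a database first (for example with `createdb myapp`; the name is your choice) and point the URI at it. A common pattern covering both, as a sketch:

        import os

        # Use Heroku's connection string when present, else the local database.
        app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
            'DATABASE_URL', 'postgresql://localhost/myapp')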

    Read the article

  • Scripting a Windows 2008 Cluster from Windows 2003

    - by glancep
    Our current environment is all Windows 2003. When we migrate a new version of our service to the cluster, we first stop the service with a command like:

        cluster.exe <clusterName> resource "<serviceName>" /offline

    We do the same with /online after the migration to bring the service back. Now we are upgrading our environment to new Windows 2008 servers; however, our build/migrate machine will remain Windows 2003. When issuing the same command from Windows 2003 against Windows 2008, we get:

        System error 1722 has occurred (0x000006ba).
        The RPC server is unavailable.

    We need to be able to remotely administer a Windows 2008 cluster from a Windows 2003 server in an automated fashion (such as with the command-line cluster.exe utility). Is this possible? Thanks, Gideon

    Read the article

  • Migrating data from Oracle database to Pervasive .DAT files

    - by kaychaks
    The requirement is to migrate some tables and their data from an Oracle database server to Pervasive database .DAT files. Those .DAT files will then be used by a Pervasive database server. The restriction is that Oracle cannot migrate directly to the Pervasive DB: it has to generate the .DAT files, and the new .DAT files will then replace the old ones, which the Pervasive DB will use for the new data. I was trying this task with SSIS, exporting the Oracle table to a delimited .txt file and then creating a .DAT file from that text file. I can export the data from Oracle to .txt, but I am not finding any way to convert the .txt to a Pervasive .DAT. Is this the right approach? If not, please help with my problem.

    Read the article
