Search Results

Search found 2075 results on 83 pages for 'migration assistant'.

Page 26/83 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Migrating from CPython to Jython

    - by itsadok
    I'm considering moving my code (around 30K LOC) from CPython to Jython, so that I could have better integration with my Java code. Is there a checklist or a guide I should look at to help me with the migration? Does anyone have experience with doing something similar? From reading the Jython site, most of the problems seem too obscure to bother me. I did notice that: thread safety is an issue; Unicode support seems to be quite different, which may be a problem for me; and MySQLdb doesn't work and needs to be replaced with zxJDBC. Anything else? Related question: What are some strategies to write Python code that works in CPython, Jython and IronPython?

    Read the article
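
    A minimal sketch of the MySQLdb-to-zxJDBC swap mentioned above, assuming MySQL Connector/J is on the Jython classpath; the URL, credentials, table and query are illustrative:

      # zxJDBC exposes JDBC through the Python DB-API 2.0 interface, so most
      # MySQLdb-style code only needs a different connect() call.
      from com.ziclix.python.sql import zxJDBC

      conn = zxJDBC.connect(
          "jdbc:mysql://localhost/mydb",   # JDBC URL (illustrative)
          "user", "password",              # credentials (illustrative)
          "com.mysql.jdbc.Driver")         # driver class from Connector/J
      try:
          cur = conn.cursor()
          cur.execute("SELECT id, name FROM employees WHERE id = ?", (42,))
          for row in cur.fetchall():
              print row                    # Jython 2.x print statement
      finally:
          conn.close()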

  • How do I load a CSV file in Rails from a migration using LOAD DATA LOCAL INFILE?

    - by Chris Drappier
    Hi all, I have my CSV file in my public folder, and I'm trying to load it from a migration, but I get a file-not-found error using this script:

      ActiveRecord::Base.connection.execute(
        "load data local infile '#{RAILS_ROOT}/public/muds_variables.csv' into table muds_variables " +
        "fields terminated by ',' " +
        "lines terminated by '\n' " +
        "(variable_name, definition)")

    I've checked and re-checked the file path, and that's definitely where it lives. I've also tried it using just the file name without any of the path, and a few other combos, but I can't make it work :(. Can anyone help me out with this? Here's the error:

      Mysql::Error: File '/home/chris/rails_projects/muds/public/muds_variables.csv' not found (Errcode: 2): load data local infile '/home/chris/rails_projects/muds/public/muds_variables.csv' into table muds_variables fields terminated by ',' lines terminated by ' ' (variable_name, definition)

    -C

    Read the article
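
    A hedged sketch of the same migration with the newline escaped so that MySQL, not Ruby, interprets it (in the error output the line terminator prints as a blank, which suggests Ruby expanded the "\n"); whether this also cures the Errcode 2 depends on the connection actually allowing LOAD DATA LOCAL, so treat it as a starting point rather than a fix. Table and column names are taken from the question:

      class LoadMudsVariables < ActiveRecord::Migration
        def self.up
          # The doubled backslash leaves a literal \n in the SQL for MySQL
          # to interpret as the line terminator.
          ActiveRecord::Base.connection.execute(<<-SQL)
            LOAD DATA LOCAL INFILE '#{RAILS_ROOT}/public/muds_variables.csv'
            INTO TABLE muds_variables
            FIELDS TERMINATED BY ','
            LINES TERMINATED BY '\\n'
            (variable_name, definition)
          SQL
        end

        def self.down
          execute "DELETE FROM muds_variables"
        end
      end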

  • Deleting QWinWidget

    - by user152508
    Hello, I am working on an MFC-to-Qt migration and I am showing Qt dialogs in my MFC app. Is it OK to deleteLater a QWinWidget in its winEvent handler? The thing is that I want all of my open Qt dialogs in my MFC application to be automatically deleted when the main MFC window is closed, since WM_DESTROY will be sent to all child windows (and the Qt widgets too). So I added the following code to the QWinWidget winEvent handler:

      QWinWidget::winEvent(MSG *message, long *result)
      {
          ........
          if (message->message == WM_DESTROY)
              deleteLater();
          return false;
      }

    Can someone comment on this? Thanks

    Read the article
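
    A related arrangement sometimes used in this situation (hedged; it is not from the question): mark each hosted dialog to delete itself when closed, so the WM_DESTROY handler only has to close it rather than decide ownership:

      // When creating each Qt dialog hosted inside the MFC app:
      dialog->setAttribute(Qt::WA_DeleteOnClose);   // Qt deletes the widget after close()

      // In QWinWidget::winEvent():
      if (message->message == WM_DESTROY)
          close();   // close() triggers the deferred deletion via WA_DeleteOnClose

    Either way, deletion is only queued on the Qt event loop, so nothing on the MFC side should touch the widget after WM_DESTROY has been processed.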

  • Migrating from SQL Server to Firebird: pros and cons

    - by user193655
    I am considering the migration for 4 reasons: 1) SQL Server installation is a nightmare, especially for 1-user software (even if I typically have 3-20 users, sometimes I sell my software to single users: it is incredible to have trouble installing the DB, while installing the application means copying an exe...). (Note my max installation is 100 users, but there is no upper limit.) The software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier. 2) SQL Server runs on Windows Server only. 3) My customers all have the Express edition. 4) I am not using any advanced feature; I am now starting to use FILESTREAM, but the main reason for this is that the Express edition has a 4/10 GB db size limit. So these are all pros of moving to Firebird. Which are the cons? I could also plan to support both platforms, but I fear this will backfire.

    Read the article

  • How do I mock a custom field that is deleted so that South migrations run?

    - by muhuk
    I have removed an app that contained a couple of custom fields from my project. Now when I try to run my migrations I get ImportError, naturally. These fields were very basic customizations like the one below:

      from django.db.models.fields import IntegerField

      class SomeField(IntegerField):
          def get_internal_type(self):
              return "SomeField"

          def db_type(self, connection=None):
              return 'integer'

          def clean(self, value):
              # some custom cleanup
              pass

    So none of them contain any database-level customizations. When I removed this code I created migrations, so the subsequent migrations all ran fine. But when I try to run them on a pre-deletion database I realized my mistake. I can re-create a bare-bones app and make these imports work, but ideally I would like to know whether South has a mechanism to resolve these issues, or whether there are any best practices. It would be cool if I could solve this just by modifying my migrations and not touching the codebase. (Django 1.3, South 0.7.3)

    Read the article
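
    One low-touch option for the situation above, hedged because South freezes the field's import path inside each historical migration: leave a minimal stub module behind at that path so the frozen migrations can still import the class (module and class names below mirror the example in the question; adjust them to the real import path), or alternatively edit the frozen models dictionaries in the old migrations to reference django.db.models.fields.IntegerField directly.

      # myapp/fields.py -- stub kept only so old South migrations still import.
      # It behaves exactly like IntegerField at the database level, matching
      # the original db_type() of 'integer'.
      from django.db.models.fields import IntegerField

      class SomeField(IntegerField):
          """Placeholder for the removed custom field; no custom behaviour."""

          def db_type(self, connection=None):
              return 'integer'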

  • Migrating from Forms to WPF

    - by Jason Williams
    We're considering migrating a WinForms app to WPF, but are just starting on the WPF learning curve now that 4.0 is out. What I'd like to do is migrate our application commands (cut, copy, paste, etc.) to a WPF-like command-binding system, while still running as a WinForms app - but in such a way as to make the migration easy when we go ahead with WPF. The ideal approach would be to implement our commands using the WPF command interfaces, classes and events directly, and simply hook the WinForms events up to them with our own dispatcher. Has anyone tried something like this, or does anyone know if it might be possible?

    Read the article

  • Migrating from Struts2 to Spring MVC

    - by Vincent Ramdhanie
    Scenario: A fairly mature project uses Struts2, Spring and Hibernate. I say mature because it has been going on for a while and there are many Struts actions written already. Suppose we wanted to remove Struts2 from the project and instead depend entirely on Spring MVC, without rewriting the entire project. Is this something that should even be considered? Are there any migration guides out there? Has anyone done this before and would like to warn me against it?

    Read the article
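
    A common pattern for the scenario above is to run both frameworks side by side: the Struts2 filter keeps serving existing *.action URLs while new or converted pages are mapped onto Spring MVC controllers, reusing the Spring beans that are already there. A hedged sketch of one action re-expressed as a Spring MVC 3.x controller; the class, service and view names are illustrative:

      import org.springframework.beans.factory.annotation.Autowired;
      import org.springframework.stereotype.Controller;
      import org.springframework.ui.Model;
      import org.springframework.web.bind.annotation.PathVariable;
      import org.springframework.web.bind.annotation.RequestMapping;

      @Controller
      public class AccountController {

          @Autowired
          private AccountService accountService;   // existing Spring service bean, reused as-is

          // replaces a hypothetical Struts2 "showAccount" action
          @RequestMapping("/accounts/{id}")
          public String show(@PathVariable("id") long id, Model model) {
              model.addAttribute("account", accountService.find(id));
              return "account/show";                // resolved by the existing view resolver
          }
      }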

  • Smartest way to import massive datasets into a Rails application?

    - by williamjones
    I've got multiple massive (multi gigabyte) datasets I need to import into a Rails app. The datasets are currently each in their own database on my development machine, and I need to read from them and create rows in tables in my Rails database based on the information they contain. The tables in my Rails database will not be exactly the same as the tables in the source databases. What's the smartest way to go about this? I was thinking migrations, but I'm not exactly sure how to connect the migration to the databases, and even if that is possible, is that going to be ridiculously slow?

    Read the article
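
    A sketch of one way to approach the import above, as a rake task rather than a migration: point a throwaway model at the legacy database (an extra entry in config/database.yml), stream rows in batches, and insert with raw SQL instead of instantiating an ActiveRecord object per row. The database entry, table names and columns are illustrative, and set_table_name reflects the Rails 2.x/3.0-era API:

      namespace :import do
        desc "Copy rows from the legacy database into the Rails schema"
        task :measurements => :environment do
          # Throwaway model bound to the legacy database
          class LegacyReading < ActiveRecord::Base
            establish_connection :legacy      # entry in config/database.yml
            set_table_name 'readings'         # table name in the legacy database
          end

          conn = ActiveRecord::Base.connection
          LegacyReading.find_each(:batch_size => 1000) do |row|
            conn.execute(
              "INSERT INTO measurements (station_id, value, recorded_at) VALUES (" +
              "#{conn.quote(row.station_id)}, #{conn.quote(row.value)}, #{conn.quote(row.recorded_at)})")
          end
        end
      end

    For multi-gigabyte sources, batching many rows per INSERT statement (or using the database's native bulk loader) is usually much faster than row-at-a-time inserts.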

  • Easy way to convert Crystal Reports to MS SQL Server Reporting Services

    - by scoob
    Is there a way to easily convert Crystal Reports reports to Reporting Services RDL format? We have quite a few reports that will be needing conversion soon. I know about the manual process (which is basically rebuilding all your reports from scratch in SSRS), but my searches pointed to a few possibilities for automatic conversion "acceleration" offered by several consulting firms (as described on http://www.microsoft.com/sql/technologies/reporting/partners/crystal-migration.mspx). Do any of you have any valid experiences or recommendations regarding this particular issue? Are there any tools around that I do not know about?

    Read the article

  • What's the best way to deal with backups for my PHP/MySQL application

    - by spirytus
    I'm creating a PHP application for my client and am now thinking about the best way to do backups, automatically if possible. I don't have much experience in this area, and in case something goes wrong, or if I need to migrate, I would like to have a fast way of getting it all back online. I understand "something goes wrong" is a very wide term, but let's say that someone hacks my site and wipes out the database and all the files. My app is written in PHP/MySQL and I have access to cPanel (hosted with justhost.com, if that makes any difference :). I have used Joomla, and it has JoomlaPack, which does a complete backup almost automatically; in case the site fails, it's easy to revert, or migrate if necessary. Is there anything like that for my configuration that would make reverting/migrating easy?

    Read the article
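
    A rough sketch of the usual low-tech answer for a cPanel host: a nightly cron job that dumps the database, archives the files, and copies both off the server. Every path, credential and hostname below is a placeholder:

      #!/bin/sh
      # nightly-backup.sh -- run from cPanel's Cron Jobs screen
      STAMP=$(date +%Y-%m-%d)
      BACKUP_DIR="$HOME/backups"
      mkdir -p "$BACKUP_DIR"

      # database dump and file archive
      mysqldump --user=dbuser --password=dbpass dbname | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"
      tar czf "$BACKUP_DIR/files-$STAMP.tar.gz" "$HOME/public_html"

      # keep a copy off the server, so a compromised host doesn't take the
      # backups down with it (destination is a placeholder)
      scp "$BACKUP_DIR/db-$STAMP.sql.gz" "$BACKUP_DIR/files-$STAMP.tar.gz" \
          backupuser@offsite.example.com:backups/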

  • Why does schema.rb change (in the eyes of Git) when just running rake db:migrate?!

    - by erskingardner
    This is a little general, I know, but it's been bugging the hell out of me. I've been working on lots of Rails projects remotely with Git, and every time I do a git pull and see that there is some sort of data change (a migration, or a schema.rb change) I run rake db:migrate. These generally run fine and I can continue working. But if you do a git pull and then git status, your working directory is clean (obviously); then do a rake db:migrate (obviously when there are changes) and another git status, and all of a sudden your db/schema.rb has changed. I have been doing a git checkout immediately to reset back to the latest committed version of the schema.rb file, but why should this be necessary?! What is Rails doing? Updating a timestamp? I can't seem to figure out what the diff is, but maybe I'm just missing something?

    Read the article

  • Using Rails, how can I set my primary key to not be an integer-typed column?

    - by Rudd Zwolinski
    I'm using Rails migrations to manage a database schema, and I'm creating a simple table where I'd like to use a non-integer value as the primary key (in particular, a string). To abstract away from my problem, let's say there's a table employees where employees are identified by an alphanumeric string, e.g. "134SNW". I've tried creating the table in a migration like this:

      create_table :employees, {:primary_key => :emp_id} do |t|
        t.string :emp_id
        t.string :first_name
        t.string :last_name
      end

    What this gives me seems to completely ignore the line t.string :emp_id and goes ahead and makes it an integer column. Is there some other way to have Rails generate the PRIMARY KEY constraint (I'm using PostgreSQL) for me, without having to write the SQL in an execute call? NOTE: I know it's not best to use string columns as primary keys, so please no answers just saying to add an integer primary key. I may add one anyway, but this question is still valid.

    Read the article
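
    For the record, the workaround commonly used in that era of Rails (the asker would rather avoid execute, but the :primary_key option only renames the auto-generated serial column): skip the integer id entirely and add the constraint by hand. A hedged sketch for PostgreSQL:

      create_table :employees, :id => false do |t|
        t.string :emp_id, :null => false
        t.string :first_name
        t.string :last_name
      end
      # One line of SQL is still needed for the constraint itself:
      execute "ALTER TABLE employees ADD PRIMARY KEY (emp_id)"

    In the model, set_primary_key :emp_id (or self.primary_key = 'emp_id' in later Rails versions) tells ActiveRecord which column to treat as the key.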

  • How to correctly migrate URLs from a custom ASP.NET solution to WordPress?

    - by Marek
    I have a web site built using ASP.NET with ugly URLs like /DisplayContent.aspx?id=789564. I know how to migrate the database, but the WordPress URLs will be (naturally) different. Can I simply write some mapping, or do I have to include a rewrite rule for each subpage (300 pages) in .htaccess? Should I provide a rewrite rule for each existing page that would transform a full old URL into the known new URL, for example: /DisplayContent.aspx?id=789798 -> /2010-5-10/Title-Of-The-Post Even if I manage to migrate the URLs, the structure of the HTML for the new content will naturally be different. How does this affect SEO? Should I run ASP.NET and WordPress side by side and issue the redirects from the ASP.NET application? What is the most efficient solution to this kind of URL migration without doing PHP programming?

    Read the article
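
    A sketch of what one per-page rule looks like in .htaccess with mod_rewrite, using the example mapping from the question; because the id lives in the query string it has to be matched with RewriteCond, and the trailing "?" strips the old query string from the target. The 300 pairs can be generated once from the old id-to-slug mapping rather than written by hand.

      RewriteEngine On
      # /DisplayContent.aspx?id=789798  ->  /2010-5-10/Title-Of-The-Post
      RewriteCond %{QUERY_STRING} ^id=789798$
      RewriteRule ^DisplayContent\.aspx$ /2010-5-10/Title-Of-The-Post? [R=301,L]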

  • "Android, the most insecure mobile OS there is": who is to blame? A simple migration to Jelly Bean would be enough to solve several problems

    "Android, the most insecure mobile OS there is." Who is to blame? A simple migration to Jelly Bean 4.2 would be enough to solve most of the problems encountered by users of the mobile OS. Google dominates the mobile ecosystem with its mobile OS. Unfortunately, it has become a prime target for hackers, who have made it their favorite playground. A recent report from players in the mobile security field (to which an entire article on DVP was devoted) reports an increase...

    Read the article

  • Debates around the Eclipse 4.x branch: is the Eclipse Foundation failing its migration?

    For several years, two major versions of Eclipse have coexisted: the 3.x branch (historical) and the 4.x branch (previously named e4, built on many new concepts). In 2012, two versions were published: the Eclipse Foundation released Eclipse Juno 3.8/4.2. In practice, version 4.2 is the official version, the one being promoted. The packaged Eclipse distributions (EPP builds) offered on the download page are based on version 4.2. Yet this migration ...

    Read the article

  • Gartner recommends putting a Windows 7 migration strategy in place to avoid a scenario similar to Windows XP's

    Gartner recommends putting a Windows 7 migration strategy in place to avoid a scenario similar to Windows XP's. Even though it had been announced for several years, the end of Windows XP support caught a number of companies off guard, leaving them with several problems to manage. The end of Windows 7 support is scheduled for 2020; however, for the consulting firm Gartner and its research vice president Stephen Kleynhans, it is already time to start thinking about what comes next...

    Read the article

  • Windows 8.1: the SDK and a migration guide for Windows 8 applications also available for download

    Windows 8.1: a new SDK and a migration guide for Windows 8 applications, available for download. Promise kept. Last night Microsoft unveiled the first preview of Windows 8.1 (alias "Blue"), the much-anticipated update to Windows 8 that will be officially available for free to Windows 8 users "a little later in the year". A first tour of the OS shows a few cosmetic novelties, such as two new tile sizes in the Metro interface, now the Modern UI. The "very large" size makes it possible to display more information without having to open the application (today's weather, but also the following days', for example) ....

    Read the article

  • Google adds support for Excel and PowerPoint files to Google Docs to ease migration to its cloud office suite

    Google adds support for Excel and PowerPoint files to Google Docs, to ease migration to its cloud office suite. Update of 22/02/11. Google Docs is more and more popular, including in the enterprise. That, at least, is what studies reveal and what the head of professional products at Google France confirmed to us. The only small problem: while Google can host any type of file (video, text, image, PDF, etc.), it cannot read all of them. In other words, the p...

    Read the article

  • Migration Guide: Migrating to SQL Server 2012 Failover Clustering and Availability Groups from Prior Clustering and Mirroring Deployments

    This paper provides guidance for customers who, prior to SQL Server 2012, deployed SQL Server failover clustering for local high availability and database mirroring for disaster recovery, and who want to migrate to SQL Server AlwaysOn. It describes the corresponding SQL Server AlwaysOn scenario and the migration paths to SQL Server AlwaysOn. It also covers the considerations you must understand in order to successfully migrate to an HADR solution based on SQL Server AlwaysOn technology, which implements AlwaysOn Failover Cluster Instances for high availability and AlwaysOn Availability Groups for disaster recovery.

    Read the article

  • Does Subversion have an analogue to VSS's links?

    - by bta
    I am migrating a Visual SourceSafe code repository to Subversion and I am running into a problem. Here is a simplified layout of our current source code tree (in VSS):

      project_root\
      |-libs\
      |-tools\
      |-arch_1\
      | |-include
      | |-source
      |-arch_2\
        |-include
        |-source

    My problem is in our two arch_ folders. Each arch_ folder will be built for a different hardware architecture, but the contents of the two folders are practically identical. The files in arch_2 are merely VSS links to the files in arch_1, with only a small handful of exceptions. Work is generally checked into and out of the arch_1 folder, and the VSS links make sure that any code checked in here is updated in the arch_2 folder as well. Moving to Subversion, is there anything that will behave like VSS's links? That is, is there a way to have two files in separate folders magically associated with one another such that they will always be in sync with each other (changes to one will affect the other as well)? Note: I know the correct answer here is to fix the build system. The build system on this project was pieced together roughly a decade ago, back when our compiler/build system wasn't intelligent enough to compile the same folder full of source code for two different architectures. Thanks to make and updated compilers, we can re-write the build system to eliminate this dependency on two parallel source folders. However, this will take time that we don't have at the moment (we are losing our license to our VSS server and are being forced to migrate on rather short notice). I am hoping to find a Subversion solution to this problem because at the moment, our time would be much better spent making the migration run smoothly than re-writing the build system (which is next on my to-do list!). Thank you for your help!

    Read the article
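
    The closest Subversion analogue to VSS links is probably file externals (Subversion 1.6 and later): a property on a directory that makes individual files from elsewhere in the same repository appear inside it. A hedged sketch with illustrative file names; each property line is a "repository-URL local-name" pair, and it is worth checking how your client version handles commits made through the external:

      cd working-copy/project_root
      svn propset svn:externals "^/project_root/arch_1/source/driver.c driver.c" arch_2/source
      svn propset svn:externals "^/project_root/arch_1/include/defs.h defs.h" arch_2/include
      svn commit -m "Replace duplicated arch_2 files with file externals"
      svn update   # file externals are fetched on update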

  • Interoperability between Weblogic 10.3.1 and Oracle BPM 10.3.1

    - by alfredozn
    Hi, I'm migrating an ALBPM 6.5 running on WLS 10.0 to Oracle BPM 10.3.1 running on WLS 10.3.1. I ran into problems with the Oracle driver, because the old driver (weblogic.jdbcx.oracle.OracleDataSource) was removed from the server entirely and is no longer supported. Instead I used the thin driver (oracle.jdbc.xa.OracleXADataSource); the database migration executed successfully, but after that, when I try to deploy the engine ear in WebLogic, I get exceptions associated with the driver:

      [ (cont) ] Main: Caused by: weblogic.application.ModuleException: [HTTP:101216]Servlet: "engineStartup" failed to preload on startup in Web application: "/albpmServices/albpm_engine".
      [ (cont) ] Main: fuego.directory.DirectoryRuntimeException: Exception [java.sql.SQLException: Invalid column type].
      [ (cont) ] Main: at fuego.directory.DirectoryRuntimeException.wrapException(DirectoryRuntimeException.java:85)
      [ (cont) ] Main: at fuego.directory.provider.jdbc.oracle.OraclePersistenceManager.mapSQLException(OraclePersistenceManager.java:145)
      [ (cont) ] Main: at fuego.directory.provider.jdbc.datadirect.oracle.DataDirectOraclePersistenceManager.mapSQLException(DataDirectOraclePersistenceManager.java:51)
      [ (cont) ] Main: at fuego.directory.provider.jdbc.JDBCServiceAccessor.mapSQLException(JDBCServiceAccessor.java:78)
      [ (cont) ] Main: at fuego.directory.provider.jdbc.JDBCObjectPropertiesAccessor.fetchAllDirectoryProperties(JDBCObjectPropertiesAccessor.java:442)
      [ (cont) ] Main: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    I was looking around for a solution, but everything points to using the old driver. I don't think it's good practice to force the server to use this driver after Oracle removed it completely. Any suggestions or similar experiences?

    Read the article

  • Migrating complex SVN branch hierarchy to Mercurial

    - by Christian Hang
    Our team has been using SVN for managing an application of decent size, and over time a rather complex hierarchy of branches and tags has built up. It follows the basic standard layout for SVN repositories, but is more nested:

      |-trunk
      |-branches
      | |-releases
      | | |-releaseA
      | | `-releaseB
      | `-features
      |   |-featureX
      |   `-featureY
      `-tags
        |-releaseA
        | |-beta
        | `-RTP
        `-releaseB
          |-beta
          `-RTP

    (The feature branches are obviously temporary branches, but we have to take them into consideration as it won't be feasible to close all of them at once in the near future.) For several reasons, but primarily because merges have been becoming an increasing pain, we are considering switching to Mercurial. The main problem we are currently facing is migrating the existing code base without losing our history. I've tried several migration tools (e.g., yasvn2hg, hg convert and svn2hg), with yasvn2hg being the most promising, but none of them seems able to deal with nested hierarchies; they all assume that branches and tags are organized in a single flat directory each. The choice between named branches or clones as the conversion target of old SVN branches is not a limiting factor in this case, as either solution would be appreciated. We are currently experimenting with both options and how they would fit into our current processes, but haven't decided on one yet. I'd obviously be interested in recommendations or experiences with similar setups concerning that issue as well. So, what is the best way to convert a nested SVN branch hierarchy like this to Mercurial? Converting one branch at a time into a separate repository would be quite annoying, and I am not sure that would be the right approach in the first place, depending on how the tools handle historic merges and whether they need to be aware of all other branches.

    Read the article
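
    For what it's worth, a sketch of the knobs hg convert offers for non-standard layouts; the convert.svn.* options accept only a single directory each, so a nested tree like the one above typically means separate conversion passes (or separate destination repositories) per sub-hierarchy, or flattening the branch directories on the SVN side first. Paths and names below are illustrative:

      # ~/.hgrc (or pass via --config): enable the extension
      # [extensions]
      # convert =

      hg convert \
        --config convert.svn.trunk=trunk \
        --config convert.svn.branches=branches/releases \
        --config convert.svn.tags=tags \
        --branchmap branchmap.txt \
        file:///path/to/svn-repo converted-repo

      # branchmap.txt: "old-name new-name", one pair per line, e.g.
      #   releaseA release-A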

  • How do I programmatically add an article page to a SharePoint site?

    - by soniiic
    I've been given the task of content migration from another CMS system to SharePoint 2010. The data in the old system is fairly easy to capture, and the page hierarchy is simple, so I'm not worried about that. However, I am completely flummoxed about how to even create a page in code. I'm using the Microsoft.SharePoint.Client namespace, as I do not have SharePoint installed on my system and want to code this up as a console application, so I'm using ClientContext. (On the other hand, I am willing to go into other solutions if necessary.) My end-game: to get a page uploaded into some folder hierarchy which uses a master page, has the page title in a header web part, and a big ol' content-editable web part in the body so any user can come along and edit the content. Things I've tried so far: using FileCollection.Add() to add an aspx file to the folder "Site Pages" - this renders the HTML in the browser but doesn't enable any features for the user to edit the page; using ListItemCollection.Add() to add a page to the site, but I didn't know what fields I needed, and I remember it came up with a runtime error saying I should use FileCollection.Add(); uploading to 'Site Pages' instead of 'Pages'; and so many others... ow my head :( The only plausible thing I can see on the net is to use the PublishingPage type along with PublishingWeb. However, PublishingWeb can only be constructed from an SPWeb object, which requires me to be actually hosting the SharePoint application on my workstation. If anyone can lend a hand, that would be greatly appreciated :)

    Read the article

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables; backups are around 4 GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find a lot of detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help? Update: NHibernate's increment identifier generator works entirely in memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on it allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes. For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit-of-work pattern. The hi-lo algorithm sits in between these - generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit-of-work pattern.

    Read the article
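
    For orientation, a sketch of what the switch looks like in an hbm.xml mapping; the names follow NHibernate's defaults (hibernate_unique_key / next_hi) but are configurable, and the entity and column names are illustrative. SchemaExport creates and seeds the table when generating a schema from scratch, but against an existing database it is normally scripted by hand, with next_hi seeded high enough that newly generated keys clear the current maximum; the exact seed depends on max_lo and NHibernate's hi/lo arithmetic, so verify it against the version in use:

      <!-- per-entity id mapping (illustrative entity/column names) -->
      <id name="Id" column="Id" type="Int32">
        <generator class="hilo">
          <param name="table">hibernate_unique_key</param>
          <param name="column">next_hi</param>
          <param name="max_lo">100</param>
        </generator>
      </id>

      <!-- one-off SQL for an existing database (seed value to be verified):
           CREATE TABLE hibernate_unique_key (next_hi INT NOT NULL);
           INSERT INTO hibernate_unique_key (next_hi) VALUES (<seed>);
      -->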
