Search Results

Search found 31483 results on 1260 pages for 'database migration'.

Page 574/1260 | < Previous Page | 570 571 572 573 574 575 576 577 578 579 580 581  | Next Page >

  • How do you use pip, virtualenv and Fabric to handle deployment?

    - by e-satis
    What are your settings, your tricks, and above all, your workflow? These tools are great, but there are still no best practices attached to their usage, so I don't know the most efficient way to use them. Do you use pip bundles or always download? Do you set up Apache/Cherokee/MySQL by hand, or do you have a script for that? Do you put everything in virtualenv and use --no-site-packages? Do you use one virtualenv for several projects? What do you use Fabric for (which parts of your deployment do you script)? Do you put your Fabric scripts on the client or the server? How do you handle database and media file migration? Do you even need a build tool such as SCons? What are the steps of your deployment? How often do you perform each of them? etc.

    Read the article

  • Using RVM with GVim (Cream): rvm command not found.

    - by Alan Peabody
    I am trying to move to GVim (Cream) as my primary editor on Ubuntu. I am using the wonderful rails.vim; however, I am also using RVM. RVM works fine when doing things in a shell, and the Ruby version I would like to use in rails.vim is the version set as default (but not the system version). When I try to run things like :Rgenerate migration migration_name I get: ... Missing Rails 2.3.8 gem. ... If I try :!rvm use default I get: /bin/bash: rvm: command not found. Obviously Cream is not using my bash profile. What can I do to remedy this and get it working? Thanks.

    Read the article

  • Mobile App Data Synchronization

    - by Matt Rogish
    Let's say I have a mobile app that uses an HTML5 SQLite DB (and/or the HTML5 key-value store). Assets (media files, PDFs, etc.) are stored locally on the mobile device. Luckily enough, the mobile device is a read-only copy of the "centralized" storage, so the mobile device won't have to propagate changes upstream. However, as the server changes assets (creates new ones, modifies existing ones, deletes old ones), I need to propagate those changes back to the mobile app. Assume that server changes are grouped into change sets (version number n) that contain some information (added element XYZ, deleted id = 45, etc.) and that the mobile device has limited CPU/bandwidth, so most of the processing has to take place on the server. I can think of a couple of methods to do this. All have trade-offs, and at this point I'm unsure which is the right course of action...

    Method 1: For change set n, store the "diff" of the current n and the previous n-1. When a client with version y asks if there have been any changes, send the change sets from version y up to the current version, e.g. added item 334 (contents: xxx), deleted picture 44, deleted PDF 11, changed 33, added picture 99. Characteristics:
    - Diffs take up space, although in theory they would be kept small. However, all diffs must be kept around indefinitely (should a v1 app not have been updated for a year, v2..v100 must all be applied).
    - High-latency devices (mobile apps) will incur a penalty for receiving lots of small files (assume they cannot be zipped or tarred up into one file).
    - Very few server CPU resources are required, as all the server does is send the client a list of files.
    - "Dumb" - if I change an item in change set 3, and change it to something else in 4, the client is going to perform both actions, even though #3 is rendered moot by #4. Or, if an asset is added in #4 and removed in #5, the client will download a file just to delete it later.

    Method 2: Very similar to Method 1, except on the server, do some sort of a diff between the change sets represented by the app version and the server version. Package that up and send that single change set to the client (a rough sketch of this collapsing step follows below). Characteristics:
    - Client-efficient: the client only has to process one file; duplicate or irrelevant changes are stripped out.
    - Server CPU/space intensive. The change sets must be diffed and then written out to a file that is then sent to the client. This makes diff-server scalability an issue. There are possibly ways to cache the results and re-use them, but in the wild there are likely to be a lot of different versions, so the diff re-use has a limit.
    - The diff algorithm is complicated. The change sets must be structured in such a way that an efficient and effective diff can be performed.

    Method 3: Instead of keeping diffs, write out the entire versioned asset collection to a mobile-database import file. When a client requests an update, send the entire database to the client and have them update their assets appropriately. Characteristics:
    - Conceptually simple -- easy to develop and deploy.
    - Very inefficient, as the client database is restored on every update. If only one new thing was added, the whole database is refreshed.
    - Server space and CPU efficient. Only the latest version of the DB needs to be kept around, and the server just throws the file to the client.

    Others?? Thoughts? Thanks!!
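    For illustration, a rough Ruby sketch of the Method 2 collapsing step (all names and the change-set shape are made up for the example, not taken from any particular framework):

        # Collapse change sets (client_version, current_version] into one net change set.
        # Each change set is an array of operations like {:op => :add, :id => 99, :data => "..."}.
        def collapse_changesets(changesets, client_version, current_version)
          net = {}  # asset id => the last operation the client still needs
          (client_version + 1).upto(current_version) do |v|
            (changesets[v] || []).each do |change|
              current = net[change[:id]]
              if change[:op] == :delete && current && current[:op] == :add
                net.delete(change[:id])                                  # added then deleted: skip entirely
              elsif change[:op] == :modify && current && current[:op] == :add
                net[change[:id]] = current.merge(:data => change[:data]) # still a fresh add for this client
              else
                net[change[:id]] = change                                # later changes override earlier ones
              end
            end
          end
          net.values
        end

        # changesets = { 4 => [{:op => :add, :id => 99, :data => "picture"}],
        #                5 => [{:op => :delete, :id => 99}] }
        # collapse_changesets(changesets, 3, 5)  #=> []  (the add/delete pair cancels out)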

    Read the article

  • SQLAlchemy session management in long-running process

    - by codeape
    Scenario: A .NET-based application server (Wonderware IAS/System Platform) hosts automation objects that communicate with various equipment on the factory floor. CPython is hosted inside this application server (using Python for .NET). The automation objects have scripting functionality built in (using a custom, .NET-based language). These scripts call Python functions. The Python functions are part of a system to track Work-In-Progress on the factory floor. The purpose of the system is to track the produced widgets along the process, ensure that the widgets go through the process in the correct order, and check that certain conditions are met along the process. The widget production history and widget state are stored in a relational database; this is where SQLAlchemy plays its part.

    For example, when a widget passes a scanner, the automation software triggers the following script (written in the application server's custom scripting language):

        ' widget_id and scanner_id provided by automation object
        ' ExecFunction() takes care of calling a CPython function
        retval = ExecFunction("WidgetScanned", widget_id, scanner_id);
        ' if the python function raises an Exception, ErrorOccured will be true
        ' in this case, any errors should cause the production line to stop.
        if (retval.ErrorOccured) then
            ProductionLine.Running = False;
            InformationBoard.DisplayText = "ERROR: " + retval.Exception.Message;
            InformationBoard.SoundAlarm = True
        end if;

    The script calls the WidgetScanned Python function:

        # pywip/functions.py
        from pywip.database import session
        from pywip.model import Widget, WidgetHistoryItem
        from pywip import validation, StatusMessage
        from datetime import datetime

        def WidgetScanned(widget_id, scanner_id):
            widget = session.query(Widget).get(widget_id)
            validation.validate_widget_passed_scanner(widget, scanner_id)  # raises exception on error
            widget.history.append(WidgetHistoryItem(timestamp=datetime.now(), action=u"SCANNED", scanner_id=scanner_id))
            widget.last_scanner = scanner_id
            widget.last_update = datetime.now()
            return StatusMessage("OK")

        # ... there are a dozen similar functions

    My question is: how do I best manage SQLAlchemy sessions in this scenario? The application server is a long-running process, typically running months between restarts. The application server is single-threaded. Currently, I do it the following way: I apply a decorator to the functions I make available to the application server:

        # pywip/iasfunctions.py
        from pywip import functions
        from pywip.database import session  # import added here; the decorator needs the session

        def ias_session_handling(func):
            def _ias_session_handling(*args, **kwargs):
                try:
                    retval = func(*args, **kwargs)
                    session.commit()
                    return retval
                except:
                    session.rollback()
                    raise
            return _ias_session_handling

        # ... actually I populate this module with decorated versions of all the functions
        # in pywip.functions dynamically
        WidgetScanned = ias_session_handling(functions.WidgetScanned)

    Question: Is the decorator above suitable for handling sessions in a long-running process? Should I call session.remove()? The SQLAlchemy session object is a scoped session:

        # pywip/database.py
        from sqlalchemy.orm import scoped_session, sessionmaker

        session = scoped_session(sessionmaker())

    I want to keep the session management out of the basic functions, for two reasons:
    1. There is another family of functions, sequence functions. The sequence functions call several of the basic functions. One sequence function should equal one database transaction.
    2. I need to be able to use the library from other environments: a) from a TurboGears web application, where session management is done by TurboGears; and b) from an IPython shell, where commit/rollback will be explicit.

    (I am truly sorry for the long question. But I felt I needed to explain the scenario. Perhaps not necessary?)

    Read the article

  • How to use acts-as-commentable-with-threading in Rails

    - by alex
    Hi All, I am developing my first Rails site (yup, I am a Rails idiot). I'm writing a blog, and I got to the comments part. I installed acts-as-commentable-with-threading (GitHub); I made and ran the migration as the install instructions said. I have added acts_as_commentable to my Posts model, and I have a Comments controller. When I add

        @comment = Comment.build_from(params[:id], 1, params[:body])

    I get the error:

        undefined method `build_from' for #

    Clearly I am doing something terribly wrong, and I don't really get the example. What should I be feeding to build_from? Can somebody explain this plugin step by step? :) Or is there an easier way to get simple threaded comments?
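    For what it's worth, a hedged sketch of how build_from is usually called in that plugin: it expects the commentable object itself (not an id), a user id, and the comment body -- check the plugin's README to confirm the exact signature. Assuming a Post model and a logged-in user:

        # CommentsController#create (sketch only; names are illustrative)
        post    = Post.find(params[:id])
        comment = Comment.build_from(post, current_user.id, params[:body])
        comment.save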

    Read the article

  • Maven Sonar problem

    - by senzacionale
    I want to use Sonar for analysis, but I can't get any data in localhost:9000.

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <artifactId>KIS</artifactId>
          <groupId>KIS</groupId>
          <version>1.0</version>
          <build>
            <plugins>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-antrun-plugin</artifactId>
                <version>1.4</version>
                <executions>
                  <execution>
                    <id>compile</id>
                    <phase>compile</phase>
                    <configuration>
                      <tasks>
                        <property name="compile_classpath" refid="maven.compile.classpath"/>
                        <property name="runtime_classpath" refid="maven.runtime.classpath"/>
                        <property name="test_classpath" refid="maven.test.classpath"/>
                        <property name="plugin_classpath" refid="maven.plugin.classpath"/>
                        <ant antfile="${basedir}/build.xml">
                          <target name="maven-compile"/>
                        </ant>
                      </tasks>
                    </configuration>
                    <goals>
                      <goal>run</goal>
                    </goals>
                  </execution>
                </executions>
              </plugin>
            </plugins>
          </build>
        </project>

    Output when running Sonar (the jar file is empty):

        [INFO] Executed tasks
        [INFO] [resources:testResources {execution: default-testResources}]
        [WARNING] Using platform encoding (Cp1250 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] skip non existing resourceDirectory J:\ostalo_6i\KIS deploy\ANT\src\test\resources
        [INFO] [compiler:testCompile {execution: default-testCompile}]
        [INFO] No sources to compile
        [INFO] [surefire:test {execution: default-test}]
        [INFO] No tests to run.
        [INFO] [jar:jar {execution: default-jar}]
        [WARNING] JAR will be empty - no content was marked for inclusion!
        [INFO] Building jar: J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar
        [INFO] [install:install {execution: default-install}]
        [INFO] Installing J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar to C:\Documents and Settings\MitjaG\.m2\repository\KIS\KIS\1.0\KIS-1.0.jar
        [INFO] ------------------------------------------------------------------------
        [INFO] Building Unnamed - KIS:KIS:jar:1.0
        [INFO]    task-segment: [sonar:sonar] (aggregator-style)
        [INFO] ------------------------------------------------------------------------
        [INFO] [sonar:sonar {execution: default-cli}]
        [INFO] Sonar host: http://localhost:9000
        [INFO] Sonar version: 2.1.2
        [INFO] [sonar-core:internal {execution: default-internal}]
        [INFO] Database dialect class org.sonar.api.database.dialect.Oracle
        [INFO] -------------  Analyzing Unnamed - KIS:KIS:jar:1.0
        [INFO] Selected quality profile : KIS, language=java
        [INFO] Configure maven plugins...
        [INFO] Sensor SquidSensor...
        [INFO] Sensor SquidSensor done: 16 ms
        [INFO] Sensor JavaSourceImporter...
        [INFO] Sensor JavaSourceImporter done: 0 ms
        [INFO] Sensor AsynchronousMeasuresSensor...
        [INFO] Sensor AsynchronousMeasuresSensor done: 15 ms
        [INFO] Sensor SurefireSensor...
        [INFO] parsing J:\ostalo_6i\KIS deploy\ANT\target\surefire-reports
        [INFO] Sensor SurefireSensor done: 47 ms
        [INFO] Sensor ProfileSensor...
        [INFO] Sensor ProfileSensor done: 16 ms
        [INFO] Sensor ProjectLinksSensor...
        [INFO] Sensor ProjectLinksSensor done: 0 ms
        [INFO] Sensor VersionEventsSensor...
        [INFO] Sensor VersionEventsSensor done: 31 ms
        [INFO] Sensor CpdSensor...
        [INFO] Sensor CpdSensor done: 0 ms
        [INFO] Sensor Maven dependencies...
        [INFO] Sensor Maven dependencies done: 16 ms
        [INFO] Execute decorators...
        [INFO] ANALYSIS SUCCESSFUL, you can browse http://localhost:9000
        [INFO] Database optimization...
        [INFO] Database optimization done: 172 ms
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD SUCCESSFUL
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 6 minutes 16 seconds
        [INFO] Finished at: Fri Jun 11 08:28:26 CEST 2010
        [INFO] Final Memory: 24M/43M
        [INFO] ------------------------------------------------------------------------

    Any idea why? The Java project itself compiles successfully with the Maven antrun plugin.

    Read the article

  • Running Visual Studio 2005, 2008, and 2010 on the same system

    - by thelsdj
    I have around 50 projects in Visual Studio 2005 that I am building a new development machine for, and I'd like to slowly move those projects to VS 2008 but also have 2010 available for select new projects. Can this work? Are there any gotchas for this sort of setup? Any general advice for running multiple versions of Visual Studio on the same system would be greatly appreciated. Specifically, I'm interested in managing a controlled migration of projects to new versions while being able to selectively keep some on old versions.

    Read the article

  • Opening an SQL CE file at runtime with Entity Framework 4

    - by David Veeneman
    I am getting started with Entity Framework 4, and I am creating a demo app as a learning exercise. The app is a simple documentation builder, and it uses a SQL CE store. Each documentation project has its own SQL CE data file, and the user opens one of these files to work on a project. The EDM is very simple. A documentation project is comprised of a list of subjects, each of which has a title, a description, and zero or more notes. So, my entities are Subject, which contains Title and Text properties, and Note, which has Title and Text properties. There is a one-to-many association from Subject to Note.

    I am trying to figure out how to open an SQL CE data file. A data file must match the schema of the SQL CE database created by EF4's Create Database Wizard, and I will implement a New File use case elsewhere in the app to meet that requirement. Right now, I am just trying to get an existing data file open in the app. I have reproduced my existing 'Open File' code below. I have set it up as a static service class called FileServices. The code isn't working quite yet, but there is enough to show what I am trying to do. I am trying to hold the ObjectContext open for entity object updates, disposing of it when the file is closed. So, here is my question: Am I on the right track? What do I need to change to make this code work with EF4? Is there an example of how to do this properly? Thanks for your help. My existing code:

        public static class FileServices
        {
            #region Private Fields

            // Member variables
            private static EntityConnection m_EntityConnection;
            private static ObjectContext m_ObjectContext;

            #endregion

            #region Service Methods

            /// <summary>
            /// Opens an SQL CE database file.
            /// </summary>
            /// <param name="filePath">The path to the SQL CE file to open.</param>
            /// <param name="viewModel">The main window view model.</param>
            public static void OpenSqlCeFile(string filePath, MainWindowViewModel viewModel)
            {
                // Configure an SQL CE connection string
                var sqlCeConnectionString = string.Format("Data Source={0}", filePath);

                // Configure an EDM connection string
                var builder = new EntityConnectionStringBuilder();
                builder.Metadata = "res://*/EF4Model.csdl|res://*/EF4Model.ssdl|res://*/EF4Model.msl";
                builder.Provider = "System.Data.SqlServerCe";
                builder.ProviderConnectionString = sqlCeConnectionString;
                var entityConnectionString = builder.ToString();

                // Connect to the model
                m_EntityConnection = new EntityConnection(entityConnectionString);
                m_EntityConnection.Open();

                // Create an object context
                m_ObjectContext = new Model1Container();

                // Get all Subject data
                IQueryable<Subject> subjects = from s in Subjects
                                               orderby s.Title
                                               select s;

                // Set view model data property
                viewModel.Subjects = new ObservableCollection<Subject>(subjects);
            }

            /// <summary>
            /// Closes an SQL CE database file.
            /// </summary>
            public static void CloseSqlCeFile()
            {
                m_EntityConnection.Close();
                m_ObjectContext.Dispose();
            }

            #endregion
        }

    Read the article

  • What to look for in estimating a PowerBuilder Conversion Project?

    - by tekiegreg
    Hi there, I've been trying to do a spec for a PowerBuilder 9 to 11.5 migration of a relatively complex application. Granted, PowerBuilder is not really my specialty, so I'm having issues trying to justify an estimate for this part of the project (and the PowerBuilder people I've been talking with have had some personal issues lately and are out of communication). These are some of the metrics that we have seen and can evaluate:
    - PBL files
    - Main windows
    - DataWindows
    - Functions (no, we don't have the source available on this project)
    What metrics in particular are helpful, and how long would any given "unit" such as a DataWindow take?

    Read the article

  • SQLServer Binary Data with ActiveRecord and JDBC

    - by John Duff
    I'm using the activerecord-jdbc-adapter with ActiveRecord to access a SQL Server database from a Rails application running under JRuby, and I am having trouble inserting binary data. The exception I am getting is below. Note I just have a blurb for the binary data from the fixtures, which was working fine for MySQL.

        ActiveRecord::StatementInvalid: ActiveRecord::ActiveRecordError: Operand type clash: nvarchar is incompatible with image:
        INSERT INTO blobstorage_datachunks ([id], [datafile_id], [chunk_number], [data])
        VALUES (369397133, 663419003, 0, N'GIF89a@')

    When I created the tables, the migration had binary, and SQL Server used Image instead. We're using Rails 2.3.5 and SQL Server Express 2008. What I'm looking for is a way to get the binary data into SQL Server with ActiveRecord. Thanks in advance for the help.
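    As a point of comparison, a minimal sketch (model and file names are hypothetical, and this does not by itself resolve the nvarchar/image clash) of pushing the binary through the model rather than a fixture, reading the file in binary mode so nothing is treated as text along the way:

        # Sketch: assumes a BlobstorageDatachunk model mapped to blobstorage_datachunks
        data  = File.open("spec/fixtures/sample.gif", "rb") { |f| f.read }
        chunk = BlobstorageDatachunk.new(:datafile_id => 663419003, :chunk_number => 0)
        chunk.data = data
        chunk.save!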

    Read the article

  • Core Data store corruption

    - by sehugg
    A handful of customers for my iPhone app are experiencing Core Data store corruption (I assume so, since the error is "Failed to save to data store: Operation could not be completed. (Cocoa error 259.)"). Has anyone else experienced this kind of store corruption? I am worried since I aim to soon push an update which performs a schema migration, and I am worried that this will expose even more problems. I had assumed that the Core Data/SQLite APIs use atomic operations and are immune to corruption except when the underlying filesystem experiences corruption. Is there a way to reduce/prevent corruption, or at least a good way to reproduce it (I have been unsuccessful thus far)?

    Read the article

  • Get In That DB! Parsing CSV Using Ruby...

    - by keruilin
    I have a CSV file formatted just like this:

        name,color,tasty,qty
        apple,red,true,3
        orange,orange,false,4
        pear,greenish-yellowish,true,1

    As you can see, in the Ruby OO world the columns represent a mix of types -- string, string, boolean, int. Now, ultimately, I want to parse each line in the file, determine the appropriate type, and insert that row into a database via a Rails migration. For example:

        Fruit.create(:name => 'apple', :color => 'red', :tasty => true, :qty => 3)

    Help!
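    A minimal sketch of one way to do this with Ruby's standard CSV library (FasterCSV on 1.8), coercing each field before the create call; the file name and coercion rules are illustrative:

        require 'csv'   # on Ruby 1.8 use `require 'fastercsv'` and FasterCSV instead

        def coerce(value)
          case value
          when /\A\d+\z/          then value.to_i
          when /\A(true|false)\z/ then value == "true"
          else value
          end
        end

        CSV.foreach("fruits.csv", :headers => true) do |row|
          attrs = row.to_hash.inject({}) { |h, (k, v)| h[k.to_sym] = coerce(v); h }
          Fruit.create(attrs)   # e.g. {:name => "apple", :color => "red", :tasty => true, :qty => 3}
        end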

    Read the article

  • Ruby on Rails link_to syntax

    - by mizipzor
    After following a tutorial I've found, I'm now redoing it, without the scaffolding part, to learn it better. However, editing my \app\views\home\index.html.erb to contain:

        <h1>Rails test project</h1>
        <%= link_to "my blog", posts_path %>

    I get an error:

        undefined local variable or method `posts_path' for #<ActionView::Base:0x4e1d954>

    Before I did this, I ran rake db:create, defined a migration class, and ran rake db:migrate, everything without a problem. So the database should contain a posts table. But that link_to command can't seem to find posts_path. That variable (or is it even a function?) is probably defined through the scaffold routine. My question now is: how do I do that manually myself, i.e. define posts_path?
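    For reference, posts_path is a named route helper, not something created by the database; scaffolding defines it by adding a resource route. A sketch of config/routes.rb in Rails 2.x syntax (guessing the version from the tutorial era; in Rails 3+ it would be `resources :posts`):

        # config/routes.rb
        ActionController::Routing::Routes.draw do |map|
          map.resources :posts   # defines posts_path, new_post_path, edit_post_path(id), etc.
        end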

    Read the article

  • Import OLE Object from Access to MySQL

    - by SecretDeveloper
    I have a table in an Access database which contains Product entries; one of the columns has a JPG image stored as an OLE Object. I am trying to import this table to MySQL, but nothing seems to work. I have tried the MySQL migration tool, but that has a known issue with Access and OLE Objects (the issue being it doesn't work and leaves the fields blank). I also tried the suggestion on this site, and while the data is imported, it seems as though the image is getting corrupted in the transfer. When I try to preview the image I just get a binary view; if I save it to disk as a JPG image and try to open it, I get an error stating the image is corrupt. The images in Access are fine and can be previewed. Access is storing the data as an OLE Object, and when I import it to MySQL it is saved in a MediumBlob field. Has anyone had this issue before, and how did they resolve it?
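    One approach people often take (sketched below in Ruby, assuming the OLE wrapper simply precedes a standard JPEG) is to strip the Access OLE header by scanning for the JPEG start-of-image marker before loading the bytes into MySQL; the file names are illustrative:

        # Sketch: strip the Access OLE wrapper from an exported field, keep the raw JPEG (Ruby 1.9+)
        raw   = File.binread("product_image.ole")
        start = raw.index("\xFF\xD8\xFF".force_encoding(Encoding::BINARY))
        if start
          File.binwrite("product_image.jpg", raw[start..-1])
        else
          warn "No JPEG start-of-image marker found; the field may not contain a JPEG"
        end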

    Read the article

  • pylint ignore by directory

    - by Ciantic
    The following is from the pylint docs:

        --ignore=<file>
        Add <file or directory> to the black list. It should be a base name, not a path.
        You may set this option multiple times. [current: %default]

    Yet I'm not having luck getting the directory part to work. I have a directory called migrations, which holds django-south migration files. When I pass --ignore=migrations, it still keeps giving me the errors/warnings for files inside the migrations directory. Could it be that --ignore is not working for directories? If I could even use a regexp to match the ignored files it would work, since django-south files are all named 0001_something, 0002_something...

    Read the article

  • Process taking a long time on Itanium server

    - by Vishwa
    Hi, we have an application which was recently migrated from a PA-RISC server to an Itanium server. After the migration we noticed a significant increase in the time taken to complete a process. When we tracked the time taken by each part, we found that the system time is 7.590000 and the user time is 3.990000, but the elapsed time is 70.434882! Due to this huge elapsed time, the overall performance of the application has dropped significantly. I wrote some small programs to perform I/O operations on Itanium; these scripts run faster on Itanium than on PA-RISC. What could be the reason for the high elapsed time of this process?

    Read the article

  • Question about migrating from ASP.NET user controls to .NET 3.5 Master Page technology

    - by Jim McFetridge
    When migrating from an ASP.NET user control -based page with a header, footer, and menu to a Master Page using the same HTML mark-up, is it normal for CSS or JavaScript behaviors to change slightly? In particular, the submenu bar text now appears run together (which looks like a CSS symptom), and the graphics on the line above it appear to have an incorrect z-order. (The menu operation is JavaScript-based.) (I tried to paste images here but couldn't.) Also, the site is very large and we've not been given permission to redo the menu for the entire site. This is a forward-only migration. (Because I know that someone will ask.) Assuming that there are no changes in scope, what are the things that I should check? Thanks! Jim

    Read the article

  • Sync GIT and ClearCase

    - by Senthil A Kumar
    I am currently working on ClearCase and now migrating to Git. But we need this migration done in a way that all work will be done in Git and the data will be synced back to the ClearCase stream. We will have the same branch names and stream names in both Git and CC, so scripting shouldn't be a problem. The problem here is: can someone suggest the best model to sync CC and Git?
    1. Have all the VOBs in CC as a single repo in Git, and have the major streams in CC as various branches in Git. Single Git repo (VOBs) and many branches (CC streams). This takes up less space, as the VOBs are kept as a single repo with many branches.
    2. Have the important CC branches as independent Git repositories, each repository containing all the CC VOBs. Many Git repos for many CC branches. This will take up more space, as the VOBs will be replicated across repositories.
    Which do you think is the best way to keep it in sync with ClearCase?

    Read the article

  • Robots Crawling Across Namespace?

    - by Codex73
    I migrated a site from one domain to another and also placed a permanent redirect on the old account. My stats logs are capturing this:

        Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
        /libro_metaboforte_chap5.php/members/members/file_chap6.php

    I put the following robots.txt in place, which wasn't present at the time of migration:

        User-agent: *
        Allow: /
        Disallow: /members/
        Disallow: /includes/

    .htaccess file contents:

        DirectoryIndex index.php index.html
        Options +FollowSymlinks
        RewriteEngine On # Turn on the rewriting engine
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !^/store/?$
        RewriteCond %{QUERY_STRING} !.
        RewriteRule ^.+/?$ index.php [QSA,L]
        RewriteCond %{QUERY_STRING} ^curlang=([a-z]*)$
        RewriteRule ^.+/?$ index.php? [QSA,L]

    I will continue to log incoming bot hits. My .htaccess does rewrite; I just added the robots file. The funny part is that the bot is stepping into doubled directories... I don't know if the problem was not having robots.txt in place, or the existing .htaccess doing rewrites?

    Read the article

  • JPA IndirectSet changes not reflected in Spring frontend

    - by Jon
    I'm having an issue with Spring JPA and IndirectSets. I have two entities, Parent and Child, defined below. I have a Spring form in which I'm trying to create a new Child and link it to an existing Parent, then have everything reflected in the database and in the web interface. What's happening is that it gets put into the database, but the UI doesn't seem to agree. The two entities are linked to each other in a OneToMany relationship like so:

        @Entity
        @Table(name = "parent", catalog = "myschema", uniqueConstraints = @UniqueConstraint(columnNames = "ChildLinkID"))
        public class Parent {

            private Integer id;
            private String childLinkID;
            private Set<Child> children = new HashSet<Child>(0);

            @Id
            @GeneratedValue(strategy = IDENTITY)
            @Column(name = "id", unique = true, nullable = false)
            public Integer getId() { return this.id; }

            public void setId(Integer id) { this.id = id; }

            @Column(name = "ChildLinkID", unique = true, nullable = false, length = 6)
            public String getChildLinkID() { return this.childLinkID; }

            public void setChildLinkID(String childLinkID) { this.childLinkID = childLinkID; }

            @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, mappedBy = "parent")
            public Set<Child> getChildren() { return this.children; }

            public void setChildren(Set<Child> children) { this.children = children; }
        }

        @Entity
        @Table(name = "child", catalog = "myschema")
        public class Child {

            private Integer id;
            private Parent parent;

            @Id
            @GeneratedValue(strategy = IDENTITY)
            @Column(name = "id", unique = true, nullable = false)
            public Integer getId() { return this.id; }

            public void setId(Integer id) { this.id = id; }

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "ChildLinkID", referencedColumnName = "ChildLinkID", nullable = false)
            public Parent getParent() { return this.parent; }

            public void setParent(Parent parent) { this.parent = parent; }
        }

    And of course, assorted simple properties on each of them. Now, the problem is that when I edit those simple properties from my Spring interface, everything works beautifully. I can persist new entities of these types, and they'll appear when using the JpaTemplate to do a find on, say, all Parents (getJpaTemplate().find("select p from Parent p")) or on individual entities by ID or another property. The problem I'm running into is that now, I'm trying to create a new Child linked to an existing Parent through a link from the Parent's page. Here are the important bits of the controller (note that I've placed the JPA code in the controller here to make it clearer; the actual JpaDaoSupport is in another class, appropriately tiered):

        protected Object formBackingObject(HttpServletRequest request) throws Exception {
            String parentArg = request.getParameter("parent");
            int parentId = Integer.parseInt(parentArg);
            Parent parent = getJpaTemplate().find(Parent.class, parentId);

            Child child = new Child();
            child.setParent(parent);

            NewChildCommand command = new NewChildCommand();
            command.setChild(child);
            return command;
        }

        protected ModelAndView onSubmit(Object cmd) throws Exception {
            NewChildCommand command = (NewChildCommand) cmd;
            Child child = command.getChild();
            child.getParent().getChildren().add(child);
            getJpaTemplate().merge(child);
            return new ModelAndView(new RedirectView(getSuccessView()));
        }

    Like I said, I can run through the form and fill in the new values for the Child -- the Parent's details aren't even displayed. When it gets back to the controller, it goes through and saves it to the underlying database, but the interface never reflects it. Once I restart the app, it's all there and populated appropriately. What can I do to clear this up? I've tried calling extra merges, tried refreshes (which gave a transaction exception), everything short of just writing my own database access code. I've made sure that every class has an appropriate equals() and hashCode(), I have full JPA debugging on to see that it's making appropriate SQL calls (it doesn't seem to make any new calls to the Child table), and I've stepped through in the debugger (it's all in IndirectSets, as expected, and between saving and displaying the Parent the object takes on a new memory address). What's my next step?

    Read the article

  • Facing a problem with libxml.rb

    - by Rakesh
    Hi folks, I downloaded one of the Rails apps from the server and tried running it. I faced an error on migration:

        D:\Radiant\trunk>rake db:migrate
        (in D:/Radiant/trunk)
        rake aborted!
        126: The specified module could not be found. - D:/Radiant/trunk/config/../vendor/plugins/libxml-ruby/lib/libxml_ruby.so

    I got the Windows version of libxml_ruby.so and tried again. I got an error saying "libxml2.dll" not found. Then I checked the file libxml.rb; it contains the lines below:

        If running on Windows, then add the current directory to the PATH for the current
        process so it can find the pre-built libxml2 and iconv2 shared libraries (dlls).

    Can anybody explain to me what this actually means? Regards, Rakesh S
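    Roughly, that comment means the library prepends its own directory to the process PATH on Windows so the loader can find libxml2.dll and iconv.dll sitting next to libxml_ruby.so. A simplified sketch of the idea (not the gem's exact code):

        # Simplified illustration of what lib/libxml.rb is describing
        if RUBY_PLATFORM =~ /mingw|mswin/
          here = File.expand_path(File.dirname(__FILE__))
          ENV['PATH'] = "#{here};#{ENV['PATH']}"   # so the DLLs beside the .so can be located
        end
        require 'libxml_ruby'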

    Read the article

  • Oracle Unicode problem when NLS_CHARACTERSET is WE8ISO8859P1 and NLS_NCHAR_CHARACTERSET is AL16UTF16, with ColdFusion as the programming language

    - by tsurahman
    I have two Oracle 10g databases, XE and Enterprise. I built a test table in each, tried inserting some Unicode characters from http://www.sustainablegis.com/unicode/, and compared the results (the column data types and the per-database results were shown in screenshots that are not reproduced here). For this test, I use ColdFusion 9 Developer Edition.

        <cfprocessingDirective pageencoding="utf-8">
        <cfset setEncoding("form","utf-8")>
        <form action="" method="post">
            Unicode : <br>
            <textarea name="txaUnicode" id="txaUnicode" cols="50" rows="10"></textarea>
            <br><br>
            Language : <br>
            <input type="Text" name="txtLanguage" id="txtLanguage">
            <br><br>
            <input type="Submit">
        </form>

        <cfset dsn = "theDSN">
        <cfif StructKeyExists(FORM, "FIELDNAMES")>
            <cfquery name="qryInsert" datasource="#dsn#">
                INSERT INTO UNICODE
                (
                    C_VARCHAR2,
                    C_CHAR,
                    C_CLOB,
                    C_NVARCHAR2,
                    LANGUAGE
                )
                VALUES
                (
                    <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#FORM.TXAUNICODE#">,
                    <cfqueryparam cfsqltype="CF_SQL_CHAR" value="#FORM.TXAUNICODE#">,
                    <cfqueryparam cfsqltype="CF_SQL_LONGVARCHAR" value="#FORM.TXAUNICODE#">,
                    <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#FORM.TXAUNICODE#">,
                    <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#FORM.TXTLANGUAGE#">
                )
            </cfquery>
        </cfif>

        <cfquery name="qryUnicode" datasource="#dsn#">
            SELECT * FROM UNICODE ORDER BY LANGUAGE
        </cfquery>

        <table border="1">
            <thead>
                <tr>
                    <th>LANGUAGE</th>
                    <th>C_VARCHAR2</th>
                    <th>C_CHAR</th>
                    <th>C_CLOB</th>
                    <th>C_NVARCHAR2</th>
                </tr>
            </thead>
            <tbody>
                <cfoutput query="qryUnicode">
                    <tr>
                        <td>#qryUnicode.LANGUAGE#</td>
                        <td>#qryUnicode.C_VARCHAR2#</td>
                        <td>#qryUnicode.C_CHAR#</td>
                        <td>#qryUnicode.C_CLOB#</td>
                        <td>#qryUnicode.C_NVARCHAR2#</td>
                    </tr>
                </cfoutput>
            </tbody>
        </table>

    From this guide (http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10749/ch6unicode.htm#i1007297) I think my Enterprise database should produce the same results as XE (at least for the NVARCHAR2 column), since the typical solution from that guide says:
    - Use NCHAR and NVARCHAR2 datatypes to store Unicode characters
    - Keep WE8ISO8859P1 as the database character set
    - Use AL16UTF16 as the national character set
    So, how do I make it work in my Enterprise database too? Thank you :)

    Read the article

  • Ruby gems gone after JRuby install

    - by James
    Today I installed JRuby by downloading it, extracting it to /home/james/jruby-1.4.0, and adding the following lines to .bashrc:

        export JRUBY_HOME=/home/james/jruby-1.4.0
        export PATH=$JRUBY_HOME/bin:$PATH

    Then I installed some JRuby gems via jruby -S gem install ... JRuby works fine, but this seems to have caused two problems:
    1) When I try to run a Ruby (not JRuby) on Rails migration, I see:

        Missing the Rails gem. Please `gem install -v= rails`, update your RAILS_GEM_VERSION setting in config/environment.rb for the Rails version you do have installed, or comment out RAILS_GEM_VERSION to use the latest version installed.

    2) When I do gem list --local, I only see the gems that I've installed for JRuby.
    Launching web applications via ruby script/server succeeds without any warnings. Please help.
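    A quick way to see why the two interpreters disagree is to print the gem search path under each one; a small diagnostic sketch:

        # check_gems.rb -- run as `ruby check_gems.rb` and `jruby check_gems.rb`, then compare
        require 'rubygems'
        puts RUBY_PLATFORM
        puts Gem.path        # the directories this interpreter searches for gems
        begin
          gem 'rails'
          puts "rails gem visible to this interpreter"
        rescue Gem::LoadError
          puts "rails gem NOT visible to this interpreter"
        end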

    Read the article

  • Globalize2 and migrations

    - by diegogs
    Hi, I have used globalize2 to add i18n to an old site. There is already a lot of content in Spanish; however, it isn't stored in the globalize2 tables. Is there a way to convert this content to globalize2 with a migration in Rails? The problem is I can't access the stored content:

        >> Panel.first
        => #<Panel id: 1, name: "RT", description: "asd", proje....
        >> Panel.first.name
        => nil
        >> I18n.locale = nil
        => nil
        >> Panel.first.name
        => nil

    Any ideas?
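    One way people typically backfill globalize2 is a one-off migration (or rake task) that reads the raw column with read_attribute, which bypasses the translation lookup, and writes it back under the right locale so globalize2 stores it in its translation table. A hedged sketch, assuming the legacy content is Spanish and that read_attribute still returns the raw column value in this setup:

        # One-off backfill sketch (untested)
        class BackfillPanelTranslations < ActiveRecord::Migration
          def self.up
            I18n.locale = :es                      # the language the legacy content was written in
            Panel.find(:all).each do |panel|
              panel.name        = panel.read_attribute(:name)
              panel.description = panel.read_attribute(:description)
              panel.save!                          # globalize2 writes these into its translations table
            end
          end

          def self.down
          end
        end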

    Read the article

  • Where are TFS alerts stored in the TFS databases? Receiving duplicate alerts after upgrade from 2008 to 2010

    - by MJ Hufford
    I recently performed a migration-upgrade from TFS 2008 to TFS 2010. Almost everything is working properly now; however, our team is now getting duplicate emails. I'm guessing this is because I used the TFS 2008 power tools to set up alerts. After the upgrade, I installed the TFS 2010 power tools and noticed that no alerts were configured, so I set up new alerts, and now we get duplicates. Is it possible the old alert configuration is still floating around in the database somewhere?

    Read the article
