Search Results

Search found 1682 results on 68 pages for 'tron legacy'.

Page 44/68

  • net.sf.hibernate.PropertyAccessException: Null value was assigned to a property of primitive type

    - by lot
    Hi! What might be the reasons for running into the quoted exception on an HQL select, when there is actually no NULL value in the respective field in the database? I've checked the database multiple times. I've manually submitted the query statement produced by Hibernate (extracted from the debug log); NULL values for the respective field are actually never returned. However, the corresponding column in the database is in fact nullable. Does Hibernate care about the database definition? I know I could simply change the field to a wrapper type, but we have a number of such fields in a legacy application and this only occurs on a single query. I'm just wondering about the actual reason for getting this exception with only a single property. Any hints would be greatly appreciated.
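
    To make the failure mode concrete, a minimal sketch with hypothetical names (not the actual entity): the exception is thrown while Hibernate hydrates a primitive property from a NULL result-set value, which can happen even if the mapped column never stores NULL (for example, when an outer join in the generated SQL finds no matching row), because a primitive has no way to represent NULL while a wrapper type does.

        // Hypothetical entity; field names are illustrative only.
        public class OrderLine {

            private long id;

            // Primitive: if the value hydrated for this property is ever NULL
            // (nullable column, or an outer join that returned no row), Hibernate
            // throws "Null value was assigned to a property of primitive type",
            // because an int cannot hold null.
            private int quantity;

            // Wrapper: the same NULL simply becomes a null reference here.
            private Integer discountPercent;

            public long getId() { return id; }
            public void setId(long id) { this.id = id; }

            public int getQuantity() { return quantity; }
            public void setQuantity(int quantity) { this.quantity = quantity; }

            public Integer getDiscountPercent() { return discountPercent; }
            public void setDiscountPercent(Integer discountPercent) { this.discountPercent = discountPercent; }
        }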

    Read the article

  • Java Application/Thread Server

    - by Manrico Corazzi
    I am looking for something very close to an application server, with these features: it should handle a series of threads/daemons, allowing the user to start, stop and reload each one without affecting the others; it should keep libraries separated between the different threads/daemons; and it should allow some libraries to be shared. Currently we have some legacy code reinventing the wheel... and not a perfectly round-shaped one at that! I thought of using Tomcat, but I don't need a web server, except maybe for a simple back-office user interface (/manager/html). Any suggestions? Is there a non-web application server, or is there a better alternative to Tomcat (more lightweight, for example, or easier to configure)? Thanks in advance.
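
    To make the requirement concrete, here is a minimal sketch (all class names are made up; this is not an existing product): each daemon gets its own thread and its own URLClassLoader, parented to a shared loader for the common libraries, so one daemon can be stopped or reloaded without touching the others.

        import java.net.URL;
        import java.net.URLClassLoader;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical host: per-daemon classloaders isolate libraries, while a shared
        // parent loader exposes the libraries that should be common to all daemons.
        public class DaemonHost {

            private final ClassLoader sharedLibs;                  // visible to every daemon
            private final Map<String, Thread> running = new ConcurrentHashMap<String, Thread>();

            public DaemonHost(URL[] sharedJars) {
                this.sharedLibs = new URLClassLoader(sharedJars, DaemonHost.class.getClassLoader());
            }

            public void start(String name, URL[] daemonJars, String mainClass) throws Exception {
                // Isolated loader: classes loaded here are invisible to other daemons.
                URLClassLoader loader = new URLClassLoader(daemonJars, sharedLibs);
                Runnable daemon = (Runnable) loader.loadClass(mainClass)
                                                   .getDeclaredConstructor()
                                                   .newInstance();
                Thread t = new Thread(daemon, name);
                t.setContextClassLoader(loader);
                t.start();
                running.put(name, t);
            }

            public void stop(String name) throws InterruptedException {
                Thread t = running.remove(name);
                if (t != null) {
                    t.interrupt();          // daemons are expected to exit on interrupt
                    t.join(10000);
                }
            }

            public void reload(String name, URL[] daemonJars, String mainClass) throws Exception {
                stop(name);
                start(name, daemonJars, mainClass);
            }
        }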

    Read the article

  • Rails with DB2 and multiple schemas

    - by GNUMatrix
    I have a 'legacy' DB2 database that has many other applications and users. I'm trying to experiment with a Rails app, and I got everything working great with the ibm_db driver. The problem is that I have some tables like schema1.products and schema1.sales, and other tables like schema2.employees and schema2.payroll. In the ibm_db adapter connection I specify a schema, like schema1 or schema2, and I can work within that one schema, but I need to be able to easily (and transparently) reference both schemas, basically interchangeably. I don't want to break the other apps, and the SQL I would normally write against DB2 doesn't have any of these restrictions (schemas can be mixed in SQL against DB2 without any trouble at all). I would like to just specify table names as "schema1.products", for example, and be done with it, but that doesn't seem to jibe with the "Rails way" of going about it. Suggestions?

    Read the article

  • How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    - by Gweebz
    I am working on a legacy code base with an existing DB schema. The existing code uses SQL and PL/SQL to execute queries on the DB. We have been tasked with making a small part of the project database-engine agnostic (at first; eventually we will change everything). We have chosen to use Hibernate 3.3.2.GA and "*.hbm.xml" mapping files (as opposed to annotations). Unfortunately, it is not feasible to change the existing schema because we cannot regress any legacy features. The problem I am encountering is when I am trying to map a uni-directional, one-to-many relationship where the FK is also part of a composite PK. Here are the classes and the mapping file...

    CompanyEntity.java

        public class CompanyEntity {
            private Integer id;
            private Set<CompanyNameEntity> names;
            ...
        }

    CompanyNameEntity.java

        public class CompanyNameEntity implements Serializable {
            private Integer id;
            private String languageId;
            private String name;
            ...
        }

    CompanyNameEntity.hbm.xml

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="com.example">
            <class name="com.example.CompanyEntity" table="COMPANY">
                <id name="id" column="COMPANY_ID"/>
                <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
                    <key column="COMPANY_ID"/>
                    <one-to-many entity-name="vendorName"/>
                </set>
            </class>
            <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME">
                <composite-id>
                    <key-property name="id" column="COMPANY_ID"/>
                    <key-property name="languageId" column="LANGUAGE_ID"/>
                </composite-id>
                <property name="name" column="NAME" length="255"/>
            </class>
        </hibernate-mapping>

    This code works just fine for SELECT and INSERT of a Company with names. I encountered a problem when I tried to update an existing record: I received a BatchUpdateException, and after looking through the SQL logs I saw Hibernate was trying to do something stupid...

        update COMPANY_NAME set COMPANY_ID=null where COMPANY_ID=?

    Hibernate was trying to dis-associate the child records before updating them. The problem is that this column is part of the PK and not nullable. I found that the quick way to make Hibernate not do this is to add not-null="true" to the "key" element in the parent mapping. So now my mapping looks like this...

    CompanyNameEntity.hbm.xml

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="com.example">
            <class name="com.example.CompanyEntity" table="COMPANY">
                <id name="id" column="COMPANY_ID"/>
                <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
                    <key column="COMPANY_ID" not-null="true"/>
                    <one-to-many entity-name="vendorName"/>
                </set>
            </class>
            <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME">
                <composite-id>
                    <key-property name="id" column="COMPANY_ID"/>
                    <key-property name="languageId" column="LANGUAGE_ID"/>
                </composite-id>
                <property name="name" column="NAME" length="255"/>
            </class>
        </hibernate-mapping>

    This mapping gives the exception...

        org.hibernate.MappingException: Repeated column in mapping for entity: companyName
        column: COMPANY_ID (should be mapped with insert="false" update="false")

    My problem now is that I have tried to add these attributes to the key-property element, but that is not supported by the DTD. I have also tried changing it to a key-many-to-one element, but that didn't work either. So... how can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?
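
    For comparison only, and not as the hbm.xml answer being asked for: the read-only column that the MappingException hints at is easy to see in JPA-annotation form, assuming the composite key were moved into an @Embeddable. A hedged sketch follows; the class and column names mirror the question, everything else is an assumption.

        // Sketch in annotation form; not presented as the fix for the existing *.hbm.xml mapping.
        import java.io.Serializable;
        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.*;

        @Embeddable
        class CompanyNamePk implements Serializable {
            @Column(name = "COMPANY_ID")  Integer companyId;
            @Column(name = "LANGUAGE_ID") String languageId;
            // equals() and hashCode() omitted for brevity; a real composite key needs them
        }

        @Entity
        @Table(name = "COMPANY_NAME")
        class CompanyName {
            @EmbeddedId
            CompanyNamePk pk;            // COMPANY_ID is written through the id...

            @Column(name = "NAME", length = 255)
            String name;
        }

        @Entity
        @Table(name = "COMPANY")
        class Company {
            @Id
            @Column(name = "COMPANY_ID")
            Integer id;

            // ...while the collection's join column is declared read-only, so the
            // association itself generates no INSERT or UPDATE of COMPANY_ID.
            // (The delete-orphan part of the original cascade is omitted here.)
            @OneToMany(cascade = CascadeType.ALL)
            @JoinColumn(name = "COMPANY_ID", nullable = false,
                        insertable = false, updatable = false)
            Set<CompanyName> names = new HashSet<CompanyName>();
        }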

    Read the article

  • VB6 project files and SourceSafe

    - by Andrew
    Part of the application that I am working on is a legacy VB6 Windows Forms application. All the files in the project are under source control (VSS) except the VB6 project file. From what I can establish from the other developers working on the project, the reason for this is that the COM components used in the project have different references on each developer's machine. I want to move the project files into VSS so that when files are added to the project these can be updated in the project files, and other developers (and, more importantly, an automated build script) can get the latest project files from SourceSafe. Does anyone know if/how I can achieve this in such a way as to not corrupt the references to the COM components on different development machines?

    Read the article

  • 302 Redirect to Images in IE8 does not render image

    - by empire29
    I am helping migrate a legacy application. One of the requirements is that we are able to handle requests for old images. What we have is: the new site on new.com; the old site on old.com; links to images (imported content) that point to /imgs/cat.png, although the actual image is hosted on old.com/assets/images/cat.png (for now), e.g. <img src="/imgs/cat.png"/>. I set up a redirect for all png, jpg, jpeg and gif requests that 302-redirects new.com/imgs/(.*).(png|jpg|jpeg|gif) to http://old.com/assets/images/$1.$2. Everything works fine in Chrome, Firefox and IE9; however, it was noted that in IE8 the image does not render. It's possible that it has the same issue in IE7, 6 and 5.5, but I have not been able to test this. Does anyone know why this is happening and how to fix it? I tried setting the Content-Type header on the 302 responses to image/(png|jpg|jpeg|gif) and this did not have any impact. Any insight would be appreciated.

    Read the article

  • disabling transactional fixtures in Rspec has no effect

    - by Dia
    Due to a legacy database I'm using, I'm stuck with MySQL using MyISAM, which means my tables don't support transactions. This is causing the tests to fail, since the table data generated by the tests (I'm using factory_girl for fixtures) is not reverted for each scenario. I discovered that RSpec exposes the config.use_transactional_fixtures setting in spec_helper.rb, which is set to true by default. When I set it to false, I don't see any effect on my tests; they still fail due to duplicate records. Isn't that setting supposed to automatically roll back any changes made to the DB? Or am I supposed to do that manually?

    Read the article

  • Setting environment variables in PowerShell by calling a Python script that prints $env:myVar=myvalue

    - by leeg
    I have some legacy Python scripts that manage my shell environment for all the programs and plugins I am running on Linux (bash) and Windows (cmd.exe). I want to port this to PowerShell. How do I set environment variables in PowerShell by calling a Python script that prints $env:myVar=myvalue, so that the environment variable persists in the PowerShell session? In bash I can use a shell function to call my Python script, which prints export var=value to stdout, and the function will set the environment variables in my shell. This also works in the Windows cmd shell by calling a .bat file. I cannot figure out how to do this in PowerShell. I think it should be something like this:

    setvar.ps1:

        function SETVAR { c:\python26\python.exe varconfig.py }

    varconfig.py:

        import sys
        print >> sys.stdout, '$env:myVar=foo'

    Read the article

  • Implementation code for GetDateFormat Win32 function

    - by morpheous
    I am porting some legacy code from Windows to Linux (Ubuntu Karmic, to be precise). I have come across the Win32 function GetDateFormat(). The statements I need to port over are called like this:

        GetDateFormat(LOCALE_USER_DEFAULT, 0, &datetime, "MMMM", 'January', 31);

    or

        GetDateFormat(LOCALE_USER_DEFAULT, 0, &datetime, "MMMM", 'May', 30);

    where datetime is a SYSTEMTIME struct. Does anyone know where I can get the code for the function - or, failing that, tips on how to "roll my own" equivalent function?

    Read the article

  • Interview question - c#

    - by ltech
    I was tasked to conduct my first interview and would like to pose my question to this world, both for feedback on the question itself and for your own solutions. Question: I have a legacy system with users and files; the info of all files pertaining to a user is stored in a flat file. I want to upgrade this system by storing all the info in a database, design the tables, and create a C# system that will populate the new database as well as FTP the files to a new path. Define the design considerations and develop a prototype. Note: we are looking more for what design one would use and why, rather than code that compiles. If it does compile, then kudos to you and we will give it more weight. @Tim C, I did show the interviewee the file:

        User1234.txt
        UserID=1234
        ParentPath=\\somewhere\nowehere\everywhere\1234
        FileCount=20
        File0=something0.ext
        ..
        File19=something19.ext

    @Tim C, I have never conducted an interview and I followed a script given to me by my senior developer, who was absent.

    Read the article

  • Multiple ports listed in SQL Server connection string

    - by BBlake
    I have a legacy VB6 app where the server name, database name, username, etc. are defined in an INI file, but the port number for the connection string (the default 1433) is hard-coded in the app. It's being moved to a new SQL Server back end that runs off a different port number. I'm trying to avoid having to alter and recompile the application, which entails significant retesting, documentation, etc. I tried altering the INI file so that for the new server I have put in

        SERVERNAME\INSTANCE,NEWPORTNUMBER

    This effectively builds the connection with

        Data Source=SERVERNAME\INSTANCE,NEWPORTNUMBER,1433;

    This appears to work correctly, as it connects to the database when I run the app. It appears to me that the trailing ,1433 portion is being ignored. Is this a valid assumption, or will this cause me some problem I'm not seeing here?

    Read the article

  • Why do derivative trading positions always require C++ knowledge?

    - by Jeffrey
    I've never worked in a trading environment before, and I was curious to see that a few of the trading houses seem to use C# but most of them rely heavily on C++. Why is that? Is it because C++ is better performance-wise? Is it because of legacy code bases? Is it because of cross-platform issues? What about dynamic languages (Ruby, Python)? Are they too slow for this kind of work in terms of performance? Updated: If reliability and performance are important, would Erlang be the "next big thing" in trading platforms?

    Read the article

  • Can I automatically overwrite repository files using svn_load_dirs.pl or similar?

    - by Andy Strang
    I am working with a legacy VSS repository which was transferred over to a new SVN repository a few months ago. In the meantime, before we go live with the SVN repository, we need to bring over all the changes that have happened on the VSS one between then and now. I was looking at different ways to do this, which seem to be things such as:

    1.) svn_load_dirs.pl, then merge the files manually
    2.) svn import straight into the trunk, and merge the files manually
    3.) check out a working copy of my SVN repository, copy in the changed files (which will overwrite some of the ones in my working copy), then commit the changes

    My question is: can any of these options (or any other) be used to automate things so that I don't have to merge the files, and can instead just overwrite them? I think only option 3 would do this, but any help is appreciated.

    Read the article

  • Get Rails to save a record to the database in a non-UTC time

    - by Shaun
    Is there a way to get Rails to save records to the database without it automagically converting the timestamp into UTC before saving? The problem is that I have a few models that pull data from a legacy database that saves everything in Mountain Time and occasionally I have to have my Rails app write to that database. The problem is that every time it does, it converts the time I give it from Mountain Time to UTC, which is 6-7 hours ahead (depending on DST)! Needless to say, this really messes with reporting on that database. If I could get around doing this, I would. Unfortunately, I can't do anything about the fact that this other database uses a different timezone, nor can I really get away from the need for this app to save to that database occasionally. If I could just get Rails to stop trying to help me, it'd be great.

    Read the article

  • MS SQL Server Text Datatype Maxlength = 65,535?

    - by craigmj
    Software I'm working with uses a text field to store XML. From my searches online, the text datatype is supposed to hold 2^31 - 1 characters. Currently SQL Server is truncating the XML at 65,535 characters every time. I know this is caused by SQL Server, because if I add a 65,536th character to the field directly in Management Studio, it states that it will not update because characters will be truncated. Is the max length really 65,535, or could this be because the database was designed in an earlier version of MS SQL Server (2000) and it's using the legacy text datatype instead of 2005's? If this is the case, will altering the datatype to text in SQL Server 2005 fix this issue?

    Read the article

  • Embeddable unit testing framework for mixed Windows app

    - by Andy Dent
    I want to test portions of a very complex app which includes both a major native Windows component and a substantial WPF GUI. Due to complexities I can't detail, it is impossible to run the native portion independently, nor can I isolate the areas I want to test (spare me the lectures; we're talking about a huge legacy code base and we do have refactoring plans). I'm looking for a unit test kit that I can invoke on the native side but that can run with the app launched and the managed portion initialised. That seems to rule out the run-executable feature of the cfix Windows unit test kit. I really like its philosophy, shared with WinUnit, of using DLL compilation as a way to add the reflective capabilities missing in C++ and gain a more NUnit-like experience. Ideally, I want something like WinUnit running within the application code and generating an HTML report. I'm trying to introduce more TDD, and keeping things as lean as possible is important.

    Read the article

  • Usual hibernate performance pitfall

    - by Antoine Claval
    Hi, we have just finished profiling our application (it's beginning to get slow). The problem seems to be "in Hibernate". It's a legacy mapping, which works and does its job, and the relational schema behind it is OK too. But some requests are slow as hell. So, we would appreciate any input on common mistakes made with Hibernate that end up producing slow responses. For example: eager fetching in place of lazy fetching can dramatically change the response time...
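
    To make the eager-versus-lazy example concrete, the most common culprit is the N+1 selects pattern, where a lazy association is touched once per row of a query result. Below is a minimal sketch with made-up Invoice/Customer entities (not the legacy mapping in question):

        import java.util.List;
        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.Id;
        import javax.persistence.ManyToOne;
        import org.hibernate.Session;

        @Entity
        class Customer {
            @Id Long id;
            String name;
            String getName() { return name; }
        }

        @Entity
        class Invoice {
            @Id Long id;

            @ManyToOne(fetch = FetchType.LAZY)   // lazy is usually right; the pitfall is how it is used
            Customer customer;

            Customer getCustomer() { return customer; }
        }

        class FetchExamples {

            // N+1 selects: one query for the invoices, then one extra SELECT per
            // invoice the first time its lazy 'customer' proxy is touched.
            static void nPlusOne(Session session) {
                List<Invoice> invoices =
                        session.createQuery("from Invoice", Invoice.class).list();
                for (Invoice i : invoices) {
                    System.out.println(i.getCustomer().getName()); // may hit the DB each iteration
                }
            }

            // Same data in a single round trip: fetch the association in the query itself.
            static void joinFetch(Session session) {
                List<Invoice> invoices = session
                        .createQuery("select i from Invoice i join fetch i.customer", Invoice.class)
                        .list();
                for (Invoice i : invoices) {
                    System.out.println(i.getCustomer().getName()); // already loaded
                }
            }
        }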

    Read the article

  • How can I determine which dependency would cause a C++ compilation unit to be rebuilt?

    - by Seb Rose
    I have a legacy C++ application with a deep graph of #includes. Changes to any header file often cause recompiles of seemingly unrelated source files. The application is built using a Visual Studio 2005 solution (.sln) file. Can MSBuild be invoked in a way that it reports which dependency (or dependencies) is causing a source file to be recompiled? Is there any other tool that might be able to help? NOTE: I'm only looking for a tool to tell me why a file would be rebuilt, not some retrospective magic telling me why it was rebuilt.

    Read the article

  • Can I use WCF on Visual Studio 2005?

    - by Hemant
    I am about to start a project which consumes third-party web services. Because of a legacy system, I am told that I can only use Visual Studio 2005/.NET 2.0 (though I would have preferred Visual Studio 2008 on .NET 3.5). My understanding is that WCF was released with .NET 3.0. So is there any possibility of using WCF with Visual Studio 2005 by referencing just the WCF assemblies from .NET 3.0? I will then try to convince them that it is just like using an external framework which doesn't disturb anything.

    Read the article

  • Documents stored in SQL table

    - by vradenburg
    I have a legacy FoxPro application which stores documents in an SQL table in a field with the image datatype. FoxPro accesses the image datatype as a "General" field which can be used to store various files. I have a FoxPro control which interfaces with the General field for modifying/viewing the document that was stored. I need to migrate this control to .NET and make it easy for users to view/modify documents of various types. Does anyone have any suggestions on some ways to go about this or know of things that I'll need to consider for the migration to .NET? I'm pretty sure that I'll need to migrate the field to either a varbinary(max) or FileStream data type.

    Read the article

  • Choosing between assembler and COBOL

    - by Azares Cob
    I have to rewrite and greatly modify parts of a legacy COBOL application. The COBOL source code is available (around 100,000 lines of copy-and-pasted code mixed with GOTOs). Some more details on the system: it is a general management system controlling transactions, bank management, customer data and employees of the company I work for. The COBOL-powered database is about 4 terabytes, distributed over 50 old HDDs (but messing around with them is the sysadmins' job). They are using COBOL85 only. Now I have two options: rewrite and refactor 50% of the old COBOL system, or use x86 assembly. Should I use x86 assembler or COBOL?

    Read the article

  • Tool to convert inline C# into a code-behind

    - by Jon Jones
    Hi, I have a number of legacy web controls (.ascx) that contain huge amounts of inline C#. The forms contain a lot of repeated and duplicated code. Our first plan is to move the code into a code-behind per file, then refactor, etc... We're doing this to upgrade the client to the latest version of their CMS. At the moment we are going to have to manually copy and paste hundreds of files, convert the client-side namespace imports into usings, etc... Does anybody PLEASE know of a tool that can do the majority of this work for us? Thanks

    Read the article

  • Unit testing with serialization mock objects in C++

    - by lhumongous
    Greetings, I'm fairly new to TDD and ran across a unit test that I'm not entirely sure how to address. Basically, I'm testing a couple of legacy class methods which read/write a binary stream to a file. The class functions take a serializable object as a parameter, which handles the actual reading/writing to the file. For testing this, I was thinking that I would need a serialization mock object that I would pass to this function. My initial thought was to have the mock object hold onto a char* buffer, dynamically allocating memory and memcpying the data into it. However, it seems like the mock object might be doing too much work, and might be beyond the scope of this particular test. Is my initial approach correct, or can anyone think of another way of correctly testing this? Thanks!

    Read the article

  • How is an SOA architecture really supposed to be implemented?

    - by smaye81
    My project is converting a legacy fat-client desktop application into a web application. The database is not changing as a result. Consequently, we are being forced to call external web services to access data in our own database. Couple this with the fact that some parts of our application are allowed to access the database directly through DAOs (a practice that is much faster and easier). The functionality we're supposed to call web services for is what has been deemed necessary for downstream, dependent systems. Is this really how SOA is supposed to work? Admittedly, this is my first foray into the SOA world, but I have to think this is the completely wrong way to go about it.

    Read the article

  • storing a huge amount of records in the classic ASP cache object is SLOW

    - by aspm
    We have some nasty legacy ASP that is performing like a dog, and I narrowed it down to the fact that we are trying to store 15K+ records in the Application cache object. But that's not the killer: before it stores the record set, it converts the ADO stream to XML and then stores that. This conversion of the huge record set to XML spikes the CPU and causes all kinds of havoc for users when it's happening. And unfortunately we do this XML conversion to read the cache a lot, causing site-wide performance problems. I don't have the resources to convert everything to .NET, so that's out, and I obviously need to use caching, but in this case the caching is hurting instead of helping. Is there a more efficient way to store this data instead of doing this XML conversion to/from the cache every time we read/update it?

    Read the article
