Search Results

Search found 8028 results on 322 pages for 'strategy pattern'.

Page 261/322 | < Previous Page | 257 258 259 260 261 262 263 264 265 266 267 268  | Next Page >

  • factorygirl rails, says "top required" in my spec - don't know how to fix

    - by user924088
    I get the following error message when I run my tests. It says that the problem is in my lecture_spec and that the top is required. I don't know if this has something to do with requiring my spec_helper.rb file.

        1) Lecture has a valid factory
           Failure/Error: FactoryGirl.create(:lecture).should be_valid
           NoMethodError:
             undefined method `after_build=' for #<Lecture:0x007fe7747bce70>
           # ./spec/models/lecture_spec.rb:21:in `block (2 levels) in <top (required)>'

    My factory looks like the following:

        require 'faker'

        FactoryGirl.define do
          factory :question do
            association :lecture
            name        { Faker::Lorem.words(1) }
            description { Faker::Lorem.words(7) }

            factory :question_one do
              answer 1
            end
            factory :question_two do
              answer 2
            end
            factory :question_three do
              answer 3
            end
          end
        end

    And this is my lecture_spec file:

        require 'spec_helper'

        describe Lecture do
          it "has a valid factory" do
            FactoryGirl.create(:lecture).should be_valid
          end
        end

    and this is my lecture factory, where I defined the lecture factory:

        FactoryGirl.define do
          factory :lecture do
            #association :question
            name                   { Faker::Lorem.words(1) }
            description            { Faker::Lorem.words(7) }
            soundfile_file_name    { Faker::Lorem.words(1) }
            soundfile_content_type { Faker::Lorem.words(3) }
            soundfile_file_size    { Faker::Lorem.words(8) }

            after_build do |question|
              [:question_one, :question_two, :question_three].each do |question|
                association :questions, factory: :question, strategy: :build
              end
            end
          end
        end

    Read the article

  • Can this be considered Clean Code / Best Practice?

    - by MRFerocius
    Guys, how are you doing today? I have the following question because I will follow this strategy for all my helpers (to deal with the DB entities). Is this considered a good practice or is it going to be unmaintainable later?

        public class HelperArea : AbstractHelper
        {
            event OperationPerformed<Area> OnAreaInserting;
            event OperationPerformed<Area> OnAreaInserted;
            event OperationPerformed<Area> OnExceptionOccured;

            public void Insert(Area element)
            {
                try
                {
                    if (OnAreaInserting != null) OnAreaInserting(element);

                    DBase.Context.Areas.InsertOnSubmit(new AtlasWFM_Mapping.Mapping.Area
                    {
                        areaDescripcion = element.Description,
                        areaNegocioID   = element.BusinessID,
                        areaGUID        = Guid.NewGuid(),
                        areaEstado      = element.Status,
                    });
                    DBase.Context.SubmitChanges();

                    if (OnAreaInserted != null) OnAreaInserted(element);
                }
                catch (Exception ex)
                {
                    LogManager.ChangeStrategy(LogginStrategies.EVENT_VIEWER);
                    LogManager.LogError(new LogInformation
                    {
                        logErrorType = ErrorType.CRITICAL,
                        logException = ex,
                        logMensaje   = "Error inserting Area"
                    });
                    if (OnExceptionOccured != null) OnExceptionOccured(elemento);
                }
            }
        }

    I want to know if it is a good way to handle the event on the Exception to let subscribers know that there has been an exception inserting that Area. And the way to log the Exception, is it OK to do it this way? Any suggestion to make it better?

    Read the article

  • Jackson object mapping - map incoming JSON field to protected property in base class

    - by Pete
    We use Jersey/Jackson for our REST application. Incoming JSON strings get mapped to the @Entity objects in the backend by Jackson to be persisted. The problem arises from the base class that we use for all entities. It has a protected id property, which we want to exchange via REST as well, so that when we send an object that has dependencies, Hibernate will automatically fetch these dependencies by their ids. However, Jackson does not access the setter, even if we override it in the subclass to be public. We also tried using @JsonSetter but to no avail. Probably Jackson just looks at the base class, sees the ID is not accessible, and skips setting it...

        @MappedSuperclass
        public abstract class AbstractPersistable<PK extends Serializable> implements Persistable<PK> {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private PK id;

            public PK getId() {
                return id;
            }

            protected void setId(final PK id) {
                this.id = id;
            }

    Subclasses:

        public class A extends AbstractPersistable<Long> {
            private String name;
        }

        public class B extends AbstractPersistable<Long> {
            private A a;
            private int value;

            // getter, setter

            // make base class setter accessible
            @Override
            @JsonSetter("id")
            public void setId(Long id) {
                super.setId(id);
            }
        }

    Now if there are some As in our database and we want to create a new B via the REST resource:

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        @Produces(MediaType.APPLICATION_JSON)
        @Transactional
        public Response create(B b) {
            if (b.getA().getId() == null) cry();
        }

    with a JSON String like this: {"a":{"id":"1","name":"foo"},"value":"123"}. The incoming B will have the A reference but without an ID. Is there any way to tell Jackson to either ignore the base class setter or tell it to use the subclass setter instead? I've just found out about @JsonTypeInfo but I'm not sure this is what I need or how to use it. Thanks for any help!
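
    One direction worth trying (a sketch under assumptions, not a confirmed fix for this exact setup): Jackson will use a non-public member if that member carries a Jackson annotation, so annotating the base-class setter itself, rather than only the public override, may be enough to make the id writable for all subclasses. The exact annotation package depends on the Jackson version in use.

        import java.io.Serializable;
        import com.fasterxml.jackson.annotation.JsonProperty;
        // Jackson 1.x users would import org.codehaus.jackson.annotate.JsonProperty instead.

        // Simplified stand-in for the question's base class; only the Jackson-relevant parts shown.
        public abstract class AbstractPersistable<PK extends Serializable> {

            private PK id;

            @JsonProperty("id")           // used for serialization
            public PK getId() {
                return id;
            }

            @JsonProperty("id")           // annotated members are visible to Jackson even when protected
            protected void setId(final PK id) {
                this.id = id;
            }
        }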

    Read the article

  • Method for finding memory leak in large Java heap dumps

    - by Rickard von Essen
    I have to find a memory leak in a Java application. I have some experience with this but would like advice on a methodology/strategy for this. Any reference and advice is welcome.

    About our situation:

    - Heap dumps are larger than 1 GB.
    - We have heap dumps from 5 occasions.
    - We don't have any test case to provoke this. It only happens in the (massive) system test environment after at least a week's usage.
    - The system is built on an internally developed legacy framework with so many design flaws that they are impossible to count.
    - Nobody understands the framework in depth. It has been transferred to one guy in India who barely keeps up with answering e-mails.
    - We have done snapshot heap dumps over time and concluded that there is not a single component increasing over time. It is everything that grows slowly.
    - The above points us in the direction that it is the framework's homegrown ORM system that increases its usage without limits. (This system maps objects to files?! So not really an ORM.)

    Question: What is the methodology that helped you succeed with hunting down leaks in an enterprise-scale application?
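
    As a side note on taking the periodic snapshot dumps mentioned above from inside the running JVM, here is a minimal sketch using the HotSpot-specific diagnostic MBean (an assumption: a Sun/Oracle HotSpot JVM; the class and file names are illustrative only). The resulting .hprof files can then be compared across runs in a tool such as Eclipse MAT or jhat.

        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import com.sun.management.HotSpotDiagnosticMXBean;

        public class HeapDumper {

            // Writes a heap snapshot to the given path; liveObjectsOnly=true forces a GC first
            // and dumps only reachable objects.
            public static void dump(String filePath, boolean liveObjectsOnly) throws Exception {
                MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                        server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
                bean.dumpHeap(filePath, liveObjectsOnly);
            }

            public static void main(String[] args) throws Exception {
                dump("snapshot-" + System.currentTimeMillis() + ".hprof", true);
            }
        }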

    Read the article

  • Delphi dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Update: one problem I found now is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. So we cannot just change the database and the connection code page to Unicode. We also have to modify all datamodules to use the new field type. The modified datamodule, however, will not be backwards compatible.

    Read the article

  • Map enum in JPA with fixed values?

    - by Kartoch
    I'm looking for the different ways to map an enum using JPA. I especially want to set the integer value of each enum entry and to save only the integer value.

        @Entity
        @Table(name = "AUTHORITY_")
        public class Authority implements Serializable {

            public enum Right {
                READ(100), WRITE(200), EDITOR(300);

                private int value;

                Right(int value) { this.value = value; }

                public int getValue() { return value; }
            };

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "AUTHORITY_ID")
            private Long id;

            // the enum to map:
            private Right right;
        }

    A simple solution is to use the Enumerated annotation with EnumType.ORDINAL:

        @Column(name = "RIGHT")
        @Enumerated(EnumType.ORDINAL)
        private Right right;

    But in this case JPA maps the enum index (0, 1, 2) and not the value I want (100, 200, 300). The two solutions I found do not seem simple...

    First solution: a solution proposed here uses @PrePersist and @PostLoad to convert the enum to another field and mark the enum field as transient:

        @Basic
        private int intValueForAnEnum;

        @PrePersist
        void populateDBFields() {
            intValueForAnEnum = right.getValue();
        }

        @PostLoad
        void populateTransientFields() {
            right = Right.valueOf(intValueForAnEnum);
        }

    Second solution: the second solution proposed here uses a generic conversion object, but still seems heavy and Hibernate-oriented (@Type doesn't seem to exist in JEE):

        @Type(
            type = "org.appfuse.tutorial.commons.hibernate.GenericEnumUserType",
            parameters = {
                @Parameter(name = "enumClass",        value = "Authority$Right"),
                @Parameter(name = "identifierMethod", value = "toInt"),
                @Parameter(name = "valueOfMethod",    value = "fromInt")
            }
        )

    Are there any other solutions? I've several ideas in mind but I don't know if they exist in JPA:

    - use the setter and getter methods of the right member of the Authority class when loading and saving the Authority object
    - an equivalent idea would be to tell JPA what the methods of the Right enum are for converting the enum to int and int to enum
    - because I'm using Spring, is there any way to tell JPA to use a specific converter (RightEditor)?
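
    If a JPA 2.1 provider is available, an AttributeConverter is one more option to consider; a minimal sketch, assuming the Authority.Right enum from the question is on the classpath (the converter class name is illustrative):

        import javax.persistence.AttributeConverter;
        import javax.persistence.Converter;

        // Converts Right.READ/WRITE/EDITOR to 100/200/300 and back.
        @Converter(autoApply = true)
        public class RightConverter implements AttributeConverter<Authority.Right, Integer> {

            @Override
            public Integer convertToDatabaseColumn(Authority.Right right) {
                return right == null ? null : right.getValue();
            }

            @Override
            public Authority.Right convertToEntityAttribute(Integer dbValue) {
                if (dbValue == null) {
                    return null;
                }
                for (Authority.Right right : Authority.Right.values()) {
                    if (right.getValue() == dbValue) {
                        return right;
                    }
                }
                throw new IllegalArgumentException("Unknown Right value: " + dbValue);
            }
        }

    With autoApply = true the converter is picked up for every mapped Right attribute; alternatively, annotate the field with @Convert(converter = RightConverter.class).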

    Read the article

  • Dynamic where clause using Linq to SQL in a join query in an MVC application

    - by jhoefnagels
    Dear .Net Linq experts, I am looking for a way to query for products in a catalog using filters on properties which have been assigned to the product based on the category to which the product belongs. So I have the following entities involved:

        Products          [Id, CategoryId]
        Categories        [Id]
        Properties        [Id, CategoryId]
        PropertyValues    [Id, PropertyId]
        ProductProperties [ProductId, PropertyValueId]

    When I add a product to the catalog, multiple ProductProperties will be added based on the category, and I would like to be able to filter all products from a category by selecting values for one or more properties. I will gather all filters, which I will hold in a list, by reading the URL. Now it is time to actually get the products based on multiple properties, and I have been trying to find the right strategy, but until now it does not really work. Is there a way to make this work without writing SQL? I was trying something like this:

        productsInCategory = ProductRepository.Where(p => p.Category.Name == category);

        foreach (PropertyFilter pf in filterList)
        {
            productsInCategory = (from product in productsInCategory
                                  join pp in ProductPropertyRepository on product.Id equals pp.ProductId
                                  where pp.PropertyValueId == pf.ValueId
                                  select product);
        }

    Read the article

  • How do you pass .net objects values around in F#?

    - by Russell
    I am currently learning F# and functional programming in general (from a C# background) and I have a question about using .net CLR objects during my processing. The best way to describe my problem is to give an example:

        let xml =
            new XmlDocument()
            |> fun doc -> doc.Load("report.xml"); doc

        let xsl =
            new XslCompiledTransform()
            |> fun doc -> doc.Load("report.xsl"); doc

        let transformedXml =
            new MemoryStream()
            |> fun mem -> xsl.Transform(xml.CreateNavigator(), null, mem); mem

    This code transforms an XML document with an XSLT document using .net objects. Note that XslCompiledTransform.Load works on an object and returns void. Also, XslCompiledTransform.Transform requires a MemoryStream object and returns void. The strategy used above is to add the object at the end (the "; mem") to return a value and make functional programming work. When we want to do this one step after another, we have a function on each line with a return value at the end:

        let myFunc =
            new XmlDocument("doc")
            |> fun a -> a.Load("report.xml"); a
            |> fun a -> a.AppendChild(new XmlElement("Happy")); a

    Is there a more correct way (in terms of functional programming) to handle .net objects and objects that were created in a more OO environment? The way I returned the value at the end and then had inline functions everywhere feels a bit like a hack and not the correct way to do this. Any help is greatly appreciated!

    Read the article

  • Extract <name> attribute from KML

    - by Ozaki
    I am using OpenLayers for a mapping service, in which I have several KML layers that use KML feeds from the server to populate data on the map. It currently plots images / points / vector lines & shapes. But on these points it will not add a label with the value of the <name> element for the placemark in the KML. What I have currently tried is:

        //////////////////// KML Feed for * Layer //
        var surveylinelayer = new OpenLayers.Layer.Vector("First KML Layer", {
            projection: new OpenLayers.Projection("EPSG:4326"),
            strategies: [new OpenLayers.Strategy.Fixed()],
            protocol: new OpenLayers.Protocol.HTTP({
                url: firstKMLURL,
                format: new OpenLayers.Format.KML({
                    extractStyles: true,
                    extractAttributes: true
                })
            }),
            styleMap: new OpenLayers.StyleMap({ "default": KMLStyle })
        });

    then the style as follows:

        var KMLStyle = new OpenLayers.Style({
            //label: "${name}", // This method will display nothing
            fillOpacity: 1,
            pointRadius: 10,
            fontColor: "#7E3C1C",
            fontSize: "13px",
            fontFamily: "Courier New, monospace",
            fontWeight: "strong",
            labelXOffset: "0",
            labelYOffset: "-15"
        }, {
            // dynamic label
            context: {
                label: function(feature) {
                    return "Feature Name: " + feature.attributes.name; // also displays nothing
                }
            }
        });

    Example of the KML:

        <Placemark>
            <name>POI1</name>
            <Style>
                <LabelStyle>
                    <color>ffffffff</color>
                </LabelStyle>
            </Style>
            <Point>
                <coordinates>0.000,0.000</coordinates>
            </Point>
        </Placemark>

    When debugging I just hit "feature is undefined" and am unsure why it would be undefined in this instance.

    Read the article

  • Getting Visual Studio macros in console app

    - by Paul Steckler
    In a Visual Studio extension, you can get the default include paths for all projects with C# code like:

        String dirs = dte2.get_Properties("Projects", "VCDirectories");

    where dte2 is the Visual Studio application object. Usually, those directories contain macros like $(INCLUDE). You can expand those macros by looking at dte2.Solution.Projects, finding the relevant project in that collection; from the project, look at project.Configurations, find the relevant configuration, and call its Evaluate method. In VS2005/VS2008, there's a .vssettings file that contains the VCDirectories. In VS2010, there's a property sheet with the same information. A console application can just parse those files -- great. But how can you expand the macros? As a first step, I tried instantiating a VCProjectEngine object in a console app, but that just resulted in a COM failure. So I don't know how to instantiate a VCProject object in order to follow the same strategy I used in a VS extension. Where are the macro bindings stored?

    Read the article

  • Mapping issue with multi-field primary keys using hibernate/JPA annotations

    - by Derek Clarkson
    Hi all, I'm stuck with a database which is using multi-field primary keys. I have a situation where I have a master and a details table, where the details table's primary key contains fields which are also foreign keys to the master table. Like this:

        Master primary key fields:
            master_pk_1
        Details primary key fields:
            master_pk_1
            details_pk_2
            details_pk_3

    In the Master class we define the hibernate/JPA annotations like this:

        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "idGenerator")
        @Column(name = "master_pk_1")
        private long masterPk1;

        @OneToMany(cascade = CascadeType.ALL)
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1")
        private List<Details> details = new ArrayList<Details>();

    And in the details class I have defined the id and back reference like this:

        @EmbeddedId
        @AttributeOverrides({
            @AttributeOverride(name = "masterPk1",  column = @Column(name = "master_pk_1")),
            @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")),
            @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2"))
        })
        private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey();

        @ManyToOne
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1", insertable = false)
        private Master master;

    The goal of all of this was that I could create a new master, add some details to it, and when saved, JPA/Hibernate would generate the new id for master in the masterPk1 field and automatically pass it down to the details records, storing it in the matching masterPk1 field in the DetailsPrimaryKey class. At least that's what the documentation I've been looking at implies. What actually happens is that hibernate appears to correctly create and update the records in the database, but it does not pass the key to the details classes in memory. Instead I have to set it manually myself. I also found that without the insertable attribute added to the back reference to master, hibernate would create SQL that had the master_pk_1 field listed twice in the insert statement, resulting in the database throwing an exception. My question is simply: is this arrangement of annotations correct, or is there a better way of doing it?
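
    For reference, one alternative arrangement worth looking at (a sketch only, assuming a JPA 2.0 provider; names are borrowed from the question and the classes are trimmed to the id-related parts) is a derived identity with @MapsId, which tells the provider that the association populates the master part of the embedded id, so the generated key is copied into the details key at flush time:

        import java.io.Serializable;
        import javax.persistence.Column;
        import javax.persistence.Embeddable;
        import javax.persistence.EmbeddedId;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;
        import javax.persistence.MapsId;

        @Entity
        class Master {
            @Id
            @GeneratedValue                 // simplified; the question uses a SEQUENCE generator
            @Column(name = "master_pk_1")
            private long masterPk1;
        }

        @Embeddable
        class DetailsPrimaryKey implements Serializable {
            private long masterPk1;
            private long detailsPk2;
            private long detailsPk3;
            // equals() and hashCode() over all three fields omitted for brevity
        }

        @Entity
        class Details {

            @EmbeddedId
            private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey();

            // @MapsId("masterPk1") maps this association onto the masterPk1 attribute
            // of the embedded id, so no manual copying of the generated key is needed.
            @ManyToOne
            @MapsId("masterPk1")
            @JoinColumn(name = "master_pk_1")
            private Master master;
        }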

    Read the article

  • How to compile Python scripts for use in FORTRAN?

    - by Vincent Poirier
    Hello, although I found many answers and discussions about this question, I am unable to find a solution particular to my situation. Here it is:

    I have a main program written in FORTRAN. I have been given a set of python scripts that are very useful. My goal is to access these python scripts from my main FORTRAN program. Currently, I simply call the scripts from FORTRAN as such:

        CALL SYSTEM ('python pyexample.py')

    Data is read from .dat files and written to .dat files. This is how the python scripts and the main FORTRAN program communicate with each other. I am currently running my code on my local machine. I have python installed with numpy, scipy, etc.

    My problem: The code needs to run on a remote server. For strictly FORTRAN code, I compile the code locally and send the executable to the server, where it waits in a queue. However, the server does not have python installed. The server is being used as a number-crunching station between universities and industry. Installing python along with the necessary modules on the server is not an option. This means that my "CALL SYSTEM ('python pyexample.py')" strategy no longer works.

    Solution?: I found some information on a couple of things in the thread http://stackoverflow.com/questions/138521/is-it-feasible-to-compile-python-to-machine-code

    Shedskin, Psyco, Cython, PyPy, CPython API

    These "modules" (not sure if that's what to call them) seem to compile python scripts to C or C++ code. Apparently not all python features can be translated to C. As well, some of these appear to be experimental. Is it possible to compile my python scripts with my FORTRAN code? There exists f2py, which converts FORTRAN code to python, but it doesn't work the other way around. Any help would be greatly appreciated. Thank you for your time. Vincent

    PS: I'm using python 2.6 on Ubuntu

    Read the article

  • How to do Grouping using JPA annotation with mapping given field

    - by hemal
    I am using JPA annotation mapping with the tables given below, but I have a problem: I am mapping the same join table onto different fields, as shown in ProductImpl.java:

        @Entity
        @Table(name = "Product")
        public class ProductImpl extends SimpleTagGroup implements Product {

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private long id = -1;

            @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
            @JoinTable(name = "ProductTagMapping",
                       joinColumns = @JoinColumn(name = "productId"),
                       inverseJoinColumns = @JoinColumn(name = "tagId"))
            private List<SimpleTag> tags;

            @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
            @JoinTable(name = "ProductTagMapping",
                       joinColumns = @JoinColumn(name = "productId"),
                       inverseJoinColumns = @JoinColumn(name = "tagId"))
            private List<SimpleTag> licenses;

            @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
            @JoinTable(name = "ProductTagMapping",
                       joinColumns = @JoinColumn(name = "productId"),
                       inverseJoinColumns = @JoinColumn(name = "tagId"))
            private List<SimpleTag> os;

    I want to get values like windows and linux in os, and GPLv2 and GPLv3 in licenses, so we are using the TagGroup table. But here I get all tag values in each of the os, licenses and tags fields, so how can I do a group by or something similar with JPA? ProductTagMapping is the mapping table between Tag and TagGroup.

    TagGroup table:

        ID  TAGGROUPNAME
        1   PRODUCTTYPE
        2   LICENSE
        3   TAGS
        4   OS

    SimpleTag table:

        ID  TAGVALUE
        1   Application
        2   Framework
        3   Apache2
        4   GPLv2
        5   GPLv3
        6   learning
        7   Linux
        8   Windows
        9   mature

    Read the article

  • XNA 2D mouse picking

    - by Corndog
    I'm working on a simple 2D real-time strategy game using XNA. Right now I have reached the point where I need to be able to click on the sprite for a unit or building and be able to reference the object associated with that sprite. From the research I have done over the last three days, I have found many references on how to do "mouse picking" in 3D, which does not seem to apply to my situation. I understand that another way to do this is to simply have an array of all "selectable" objects in the world, and when the player clicks on a sprite it checks the mouse location against the locations of all the objects in the array. The problem I have with this approach is that it would become rather slow if the number of units and buildings grows to larger numbers. (It also does not seem very elegant.) So what are some other ways I could do this? (Please note that I have also worked over the ideas of using a hash table to associate the object with the sprite location, and using a 2-dimensional array where each location in the array represents one pixel in the world. Once again, they seem like rather clunky ways of doing things.)

    Read the article

  • Banning by IP with php/mysql

    - by incrediman
    I want to be able to ban users by IP. My idea is to keep a list of IPs as rows in a BannedIPs table (the IP column would be an index). To check users' IPs against the table, I will keep a session variable called $_SESSION['IP'] for each session. If on any request $_SESSION['IP'] doesn't match $_SERVER['REMOTE_ADDR'], I will update $_SESSION['IP'] and check the BannedIPs table to see if the IP is banned. (A flag will also be saved as a session variable specifying whether or not the user is banned.)

    Here are the things I'm wondering:

    - Does that sound like a good strategy with regards to speed and security (would someone be able to get around the IP ban somehow, other than changing IPs)?
    - What's the best way to structure a mysql query that checks to see if a row exists? That is, what's the best way to query the db to see if a row with a certain IP exists (to check if it's banned)?
    - Should I save the IPs as integers or strings?

    Note that...

    - I estimate there will be between 1,000-10,000 banned IPs stored in the database.
    - $_SERVER['REMOTE_ADDR'] is the IP from which the current request was sent.

    Read the article

  • Duplicate column name by JPA with @ElementCollection and @Inheritance

    - by gerry
    I've created the following scenario:

        @javax.persistence.Entity
        @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
        public class MyEntity implements Serializable {

            @Id
            @GeneratedValue
            protected Long id;
            ...

            @ElementCollection
            @CollectionTable(name = "ENTITY_PARAMS")
            @MapKeyColumn(name = "ENTITY_KEY")
            @Column(name = "ENTITY_VALUE")
            protected Map<String, String> parameters;
            ...
        }

    As well as:

        @javax.persistence.Entity
        public class Sensor extends MyEntity {

            @Id
            @GeneratedValue
            protected Long id;
            ...
            // so here "protected Map<String, String> parameters;" is inherited !!!!
            ...
        }

    So running this example, no tables are created and I get the following message:

        WARNUNG: Got SQLException executing statement "CREATE TABLE ENTITY_PARAMS (Entity_ID BIGINT NOT NULL, ENTITY_VALUE VARCHAR(255), ENTITY_KEY VARCHAR(255), Sensor_ID BIGINT NOT NULL, ENTITY_VALUE VARCHAR(255))": com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate column name 'ENTITY_VALUE'

    I also tried overriding the attributes on the Sensor class...

        @AttributeOverrides({
            @AttributeOverride(name = "ENTITY_KEY",   column = @Column(name = "SENSOR_KEY")),
            @AttributeOverride(name = "ENTITY_VALUE", column = @Column(name = "SENSOR_VALUE"))
        })

    ... but I get the same error. Can anybody help me?

    Read the article

  • In Ruby, how to implement global behaviour?

    - by Gordon McAllister
    Hi all, I want to implement the concept of a Workspace. This is a global concept - all other code will interact with one instance of this Workspace. The Workspace will be responsible for maintaining the current system state (i.e. interacting with the system model, persisting the system state, etc.).

    So what's the best design strategy for my Workspace, bearing in mind this will have to be testable (using RSpec now, but happy to look at alternatives)? Having read through some open source projects out there, I've seen 3 strategies, none of which I can identify as "the best practice". They are:

    - Include the Singleton class. But how testable is this? Will the global state of Workspace change between tests?
    - Implement all behaviour as class methods. Again, how do you test this?
    - Implement all behaviour as module methods. Not sure about this one at all!

    Which is best? Or is there another way? Thanks, Gordon

    Read the article

  • executing stored procedure from Spring-Hibernate using Annotations

    - by HanuAthena
    I'm trying to execute a simple stored procedure with Spring/Hibernate using annotations. Here are my code snippets.

    DAO class:

        public class UserDAO extends HibernateDaoSupport {
            public List selectUsers(final String eid) {
                return (List) getHibernateTemplate().execute(new HibernateCallback() {
                    public Object doInHibernate(Session session) throws HibernateException, SQLException {
                        Query q = session.getNamedQuery("SP_APPL_USER");
                        System.out.println(q);
                        q.setString("eid", eid);
                        return q.list();
                    }
                });
            }
        }

    My entity class:

        @Entity
        @Table(name = "APPL_USER")
        @Inheritance(strategy = InheritanceType.SINGLE_TABLE)
        @DiscriminatorFormula(value = "SUBSCRIBER_IND")
        @DiscriminatorValue("N")
        @NamedQuery(name = "req.all", query = "select n from Requestor n")
        @org.hibernate.annotations.NamedNativeQuery(name = "SP_APPL_USER",
            query = "call SP_APPL_USER(?, :eid)", callable = true, readOnly = true,
            resultClass = Requestor.class)
        public class Requestor {

            @Id
            @Column(name = "EMPL_ID")
            public String getEmpid() {
                return empid;
            }

            public void setEmpid(String empid) {
                this.empid = empid;
            }

            @Column(name = "EMPL_FRST_NM")
            public String getFirstname() {
                return firstname;
            }
            ...
        }

        public class Test {
            public static void main(String[] args) {
                ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
                APFUser user = (APFUser) ctx.getBean("apfUser");
                List selectUsers = user.getUserDAO().selectUsers("EMP456");
                System.out.println(selectUsers);
            }
        }

    And the stored procedure:

        create or replace PROCEDURE SP_APPL_USER (p_cursor out sys_refcursor, eid in varchar2) as
            empId varchar2(8);
            fname varchar2(50);
            lname varchar2(50);
        begin
            empId := null;
            fname := null;
            lname := null;
            open p_cursor for
                select l.EMPL_ID, l.EMPL_FRST_NM, l.EMPL_LST_NM
                into empId, fname, lname
                from APPL_USER l
                where l.EMPL_ID = eid;
        end;

    If I enter an invalid EID, an empty list is returned, which is OK. But when a record is there, the following exception is thrown:

        Exception in thread "main" org.springframework.jdbc.BadSqlGrammarException: Hibernate operation: could not execute query; bad SQL grammar [call SP_APPL_USER(?, ?)]; nested exception is java.sql.SQLException: Invalid column name

    Do I need to modify the entity (Requestor.class)? How will the REFCURSOR be converted to the List? The stored procedure is expected to return more than one record.

    Read the article

  • Hibernate one-to-one mapping

    - by Andrey Yaskulsky
    I have a one-to-one hibernate mapping between class Student and class Points:

        @Entity
        @Table(name = "Users")
        public class Student implements IUser {

            @Id
            @Column(name = "id")
            private int id;

            @Column(name = "name")
            private String name;

            @Column(name = "password")
            private String password;

            @OneToOne(fetch = FetchType.EAGER, mappedBy = "student")
            private Points points;

            @Column(name = "type")
            private int type = getType();

            // gets and sets...
        }

        @Entity
        @Table(name = "Points")
        public class Points {

            @GenericGenerator(name = "generator", strategy = "foreign",
                parameters = @Parameter(name = "property", value = "student"))
            @Id
            @GeneratedValue(generator = "generator")
            @Column(name = "id", unique = true, nullable = false)
            private int Id;

            @OneToOne
            @PrimaryKeyJoinColumn
            private Student student;

            // gets and sets
        }

    And then I do:

        Student student = new Student();
        student.setId(1);
        student.setName("Andrew");
        student.setPassword("Password");

        Points points = new Points();
        points.setPoints(0.99);

        student.setPoints(points);
        points.setStudent(student);

        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        session.beginTransaction();
        session.save(student);
        session.getTransaction().commit();

    Hibernate saves the student in the table but does not save the corresponding points. Is this OK? Should I save the points separately?
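
    One thing to check (a minimal sketch, an assumption on my part rather than something stated in the question): without a cascade on the inverse side, session.save(student) will not touch Points, so either adding a cascade or saving points explicitly is the usual adjustment.

        import javax.persistence.CascadeType;
        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.Id;
        import javax.persistence.OneToOne;
        import javax.persistence.Table;

        @Entity
        @Table(name = "Users")
        public class Student {

            @Id
            private int id;

            // cascade = CascadeType.ALL makes session.save(student) also insert the Points row;
            // without it, Points must be saved in a separate call.
            @OneToOne(fetch = FetchType.EAGER, mappedBy = "student", cascade = CascadeType.ALL)
            private Points points;
        }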

    Read the article

  • Delphi 2009 dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Update: one problem I found now is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. This looks like a lot of work lies ahead. While we could try to avoid persistent fields (and add calculated fields at run time), we would of course prefer a solution which does not require so many changes in existing units and DFM files.

    Read the article

  • Cascading persist and existing object

    - by user322061
    Hello, I am working with JPA and I would like to persist an object (Action) composed of an object (Domain). Here is the Action class code:

        @Entity(name = "action")
        @Table(name = "action")
        public class Action {

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Column(name = "num")
            private int num;

            @OneToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE, CascadeType.REFRESH })
            @JoinColumn(name = "domain_num")
            private Domain domain;

            @Column(name = "name")
            private String name;

            @Column(name = "description")
            private String description;

            public Action() {
            }

            public Action(Domain domain, String name, String description) {
                super();
                this.domain = domain;
                this.name = name;
                this.description = description;
            }

            public int getNum() {
                return num;
            }

            public Domain getDomain() {
                return domain;
            }

            public String getName() {
                return name;
            }

            public String getDescription() {
                return description;
            }
        }

    When I persist an action with a new Domain, it works: Action and Domain are persisted. But if I try to persist an Action with an existing Domain, I get this error:

        javax.persistence.EntityExistsException:
        Exception Description: Cannot persist detached object [isd.pacepersistence.common.Domain@1716286].
        Class> isd.pacepersistence.common.Domain Primary Key> [8]

    How can I persist my Action and automatically persist a Domain if it does not exist? If it exists, how can I just persist the Action and link it with the existing Domain? Best Regards, FF
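
    One commonly suggested direction (a sketch under assumptions, not a verified fix for this exact provider): since the mapping already cascades MERGE, merging the Action lets the provider re-attach an existing detached Domain instead of trying to persist it again, while a brand-new Domain is still saved through the cascade. The DAO class and injected EntityManager below are hypothetical.

        import javax.persistence.EntityManager;

        public class ActionDao {

            private final EntityManager em;

            public ActionDao(EntityManager em) {
                this.em = em;
            }

            // merge() returns a managed copy; with CascadeType.MERGE on 'domain',
            // an existing Domain is merged (re-attached) rather than re-persisted,
            // and a Domain without an id is inserted.
            public Action save(Action action) {
                return em.merge(action);
            }
        }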

    Read the article

  • Testing system where App-level and Request-level IoC containers exist

    - by Bobby
    My team is in the process of developing a system where we're using Unity as our IoC container; and to provide NHibernate ISessions (units of work) over each HTTP request, we're using Unity's child container feature to create a child container for each request and sticking the ISession in there. We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there) and are now trying to decide on a unit testing strategy.

    Right now, the application-level container itself lives in the HttpApplication, and the request container lives in HttpContext.Current. Obviously, neither exists during testing. The pain increases because we decided to use service location from our domain layer to "lazily" resolve dependencies from the container. So now we have more components wanting to talk to the container. We are also using MSTest, which presents some concurrency dilemmas during testing as well.

    So we're wondering: what do the bright folks out there in the SO community do to tackle this predicament? How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during test has the flexibility to build up and tear down the containers consistently, and have the service location bits get to those precise containers? I hope the question is clear, thanks!

    Read the article

  • Passing custom Python objects to nosetests

    - by Rob
    I am attempting to re-organize our test libraries for automation and nose seems really promising. My question is: what is the best strategy for passing Python objects into nose tests?

    Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:

        testlib
        \- testmoda
        \- testmodb
        \- testmodc

    In some cases the test module (i.e. testmoda) is nothing but test_something(), test_something2() functions, while in some cases we have a TestModB class in testmodb with test_anotherthing1(), test_anotherthing2() functions. The cool thing is that nose easily finds both. Most of those test functions are request factory stuff that can easily share a single connection to our server farm. Thus we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc.

    Currently we don't use nose; instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object. Maintaining those scripts and the connection minutia is painful. I'd like to take free advantage of nose's beautiful discovery functionality, passing in a connection object of my choosing. Thanks in advance! Rob

    P.S. The connection objects are not pickle-able. :(

    Read the article

  • How do I process a nested list?

    - by ddbeck
    Suppose I have a bulleted list like this:

        * list item 1
        * list item 2 (a parent)
        ** list item 3 (a child of list item 2)
        ** list item 4 (a child of list item 2 as well)
        *** list item 5 (a child of list item 4 and a grand-child of list item 2)
        * list item 6

    I'd like to parse that into a nested list or some other data structure which makes the parent-child relationship between elements explicit (rather than depending on their contents and relative position). For example, here's a list of tuples containing an item and a list of its children (and so forth):

        [('list item 1',),
         ('list item 2', [('list item 3',),
                          ('list item 4', [('list item 5',)])]),
         ('list item 6',)]

    I've attempted to do this with plain Python and some experimentation with Pyparsing, but I'm not making progress. I'm left with two major questions:

    - What's the strategy I need to employ to make this work? I know recursion is part of the solution, but I'm having a hard time making the connection between this and, say, a Fibonacci sequence.
    - I'm certain I'm not the first person to have done this, but I don't know the terminology of the problem to make fruitful searches for more information on this topic. What problems are related to this, so that I can learn more about solving these kinds of problems in general?

    Read the article

  • how to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing for storage and performance optimization), the consequence to me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku.

    What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is:

        git push -f heroku local-topic-branch:refs/heads/master

    What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know!

    The caveat for this, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like Github) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku, so that "git push" always backs up everything to origin and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary.

    Can anybody tell me if this would work?

        [remote "heroku"]
            url = git@heroku.com:my-app.git
            push = +refs/heads/*:refs/heads/master

    I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like Github, for backup and cloning of all my work.

    Footnote: This question is similar to, but not quite the same as, http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku

    Read the article
