Search Results

Search found 2703 results on 109 pages for 'generation'.


  • Ruby or Rails reporting tools?

    - by Anushank
    I am looking for a report generation tool in Ruby or Rails that lets the user define a template and then fetches data into it. I have been looking through "The Ruby Box" reporting section. There are two reporting tools I have looked at:

    - Thin Reports: it is really good. You can create your own report template with the template editor, then produce PDF reports using the thinreports gem.
    - ODF Report: you can create a template ODF file using OpenOffice or MS Word, and you can use that template to generate the report.

    Both of these solutions lack the ability to draw charts. Does anyone know of similar reporting tools that can draw charts within a given report? I have also tried the RTF Ruby library. It works, but shares the limitation that it cannot draw charts and graphs.

    The minimum requirements are: the ability to create customizable templates (e.g. design layout, set font size, color, embed images, etc.); the ability to draw tables and charts; templates in DOCX, Excel, XML, or any other common file format; and report output in DOCX or RTF format. Thanks.
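
    A minimal sketch of the template-driven workflow with the odf-report gem (assuming its add_field/add_table API; the file names and data below are made up). Charts would still have to be rendered separately and embedded as images:

        require 'odf-report'  # gem 'odf-report'

        # hypothetical data to fill into the template
        sales = [{ name: 'Widget', total: 120 }, { name: 'Gadget', total: 80 }]

        report = ODFReport::Report.new('template.odt') do |r|
          r.add_field(:title, 'Weekly Sales')
          r.add_table('SALES', sales) do |t|
            t.add_column(:name)  { |item| item[:name] }
            t.add_column(:total) { |item| item[:total] }
          end
        end

        report.generate('weekly_sales.odt')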

    Read the article

  • Maven doesn't compile target/hibernate3/generated-sources

    - by mmm
    Can someone tell me how to configure Maven so that it also compiles sources from the target/hibernate3/generated-sources directory? I have already read this and other posts, but they don't seem to solve my problem (which indeed seems trivial).

    I have used the bottom-up Hibernate configuration approach for the cfg.xml, hbm.xml, and POJO generation (i.e. auto-generated the complete Hibernate configuration from an existing database schema). I am also using only the standard Maven and hibernate3-plugin directory layouts. Yet when I execute mvn compile on the command line, with my sources in src/main/java and the generated sources in target/hibernate3/generated-sources, only the ones from src/main/java get compiled and copied into target/classes.

    I would rather not generate sources into src/main/java, as I'd like mvn clean to clean them. I'd like to solve the problem using the command line, plugins, and pom.xml only. Is there a way to configure the maven-compiler-plugin to do so? Or is there another way? Regards, and thanks for any help.
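
    The usual fix is to register the generated directory as an extra source root with the build-helper-maven-plugin -- a sketch for pom.xml (the plugin version is an assumption; pick whatever is current):

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>build-helper-maven-plugin</artifactId>
          <version>1.5</version><!-- assumed version -->
          <executions>
            <execution>
              <id>add-hibernate-sources</id>
              <phase>generate-sources</phase>
              <goals>
                <goal>add-source</goal>
              </goals>
              <configuration>
                <sources>
                  <source>${project.build.directory}/hibernate3/generated-sources</source>
                </sources>
              </configuration>
            </execution>
          </executions>
        </plugin>

    With the directory registered as a source root, mvn compile picks the generated classes up like any other sources, and mvn clean still removes them because they live under target/.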

    Read the article

  • DDD and Client/Server apps

    - by Christophe Herreman
    I was wondering if any of you have successfully implemented DDD in a client/server app and would like to share some experiences. We are currently working on a smart client in Flex and a backend in Java. On the server we have a service layer exposed to the client that offers CRUD operations among some other service methods. I understand that in DDD these services should be repositories, and services should be used to handle use cases that do not fit inside a repository. Right now we mimic these services on the client behind an interface and inject implementations (web services, RMI, etc.) via an IoC container. So some questions arise:

    - Should the server expose repositories to the client, or do we need some sort of facade (one that can handle security, for instance)?
    - Should the client implement repositories (and DDD in general), knowing that in the client most of the logic is view-related and the real business logic lives on the server? All communication with the server happens asynchronously, and we have a single-threaded programming model on the client.
    - How about mapping client to server objects and vice versa? We tried DTOs but reverted to exposing the state of our objects and mapping directly to them. (I know this is considered bad practice, but it saves us an incredible amount of time.)

    In general I think a new generation of applications is coming with the growth of Flex, Silverlight, and JavaFX, and I'm curious how DDD fits into this.

    Read the article

  • Credit card system implementation?

    - by Mark
    My site is going to have a credit system that basically works a lot like a credit card. Each user has an unlimited credit limit, but at the end of each week they have to pay it off. For example, a user might make several purchases between March 1st and 7th; at the end of March 7th they would be emailed an invoice that lists all their purchases during the week and a total that is due by the 14th. If they don't pay it off, their account is simply deactivated until they do.

    I'm just trying to wrap my head around how to implement this. I have a list of all their purchases, that's not a problem; I'm just trying to figure out what to do with it. At the end of the 7th day I could set up a cronjob to generate an invoice, which would basically have an id and a due date, and then I would need another many-to-many table to link all the purchases to the invoice.

    Then when a user adds money to their account, I guess it's applied against their current outstanding invoice? And what if they don't pay off their invoice by the time a new invoice rolls around, so now they have two outstanding ones -- how do I know which to apply the payment against? Or do I make the cronjob check for any previous outstanding invoices, cancel them, and add a new item to the new invoice as "balance forward (+ interest)"?

    How would you apply the money against an invoice? Would each payment have to be linked to an invoice, or could I just deposit it to their account credit and then somehow figure out what's been paid and what hasn't? What if they pay in advance, before their invoice has been generated? Do I deduct it from the invoice upon generation, or at the end of the week when it's due? There are so many ways to do this... Can anyone describe what approach they would take?
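
    One way to keep the "which invoice does this payment apply to" question tractable is to make payments first-class rows and link them to invoices explicitly, applying unallocated credit to the oldest open invoice first. A hypothetical MySQL sketch (all table and column names are made up):

        -- one invoice per user per billing week
        CREATE TABLE invoices (
            id       INT PRIMARY KEY AUTO_INCREMENT,
            user_id  INT NOT NULL,
            due_date DATE NOT NULL,
            status   ENUM('open', 'paid', 'cancelled') NOT NULL DEFAULT 'open'
        );

        -- many-to-many link between purchases and the invoice that billed them
        CREATE TABLE invoice_items (
            invoice_id  INT NOT NULL,
            purchase_id INT NOT NULL,
            amount      DECIMAL(10,2) NOT NULL,
            PRIMARY KEY (invoice_id, purchase_id)
        );

        -- a NULL invoice_id means unapplied credit (e.g. paid in advance);
        -- the weekly cronjob allocates it, oldest open invoice first
        CREATE TABLE payments (
            id         INT PRIMARY KEY AUTO_INCREMENT,
            user_id    INT NOT NULL,
            invoice_id INT NULL,
            amount     DECIMAL(10,2) NOT NULL,
            paid_at    DATETIME NOT NULL
        );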

    Read the article

  • fluent nHibernate mapping of subclassed structure

    - by Codezy
    I have a workflow class that has a collection of phases, and each phase has a collection of tasks. You can design a workflow that will be used by many engagements. When used in an engagement, I want to be able to add properties to each class (workflow, phase, and task). For example, a task in the designer does not have people assigned, but a task in an engagement would need extra properties like who is assigned to it.

    I have tried many different approaches using subclasses or interfaces, but I just can't get it to map the way I want. Currently I have the engagement-level versions as subclasses, but I can't get engagement phases to map to engagement workflows.

        Public Class WorkflowMapping
            Inherits ClassMap(Of Workflow)
            Sub New()
                Id(Function(x As Workflow) x.Id).Column("Workflow_Id").GeneratedBy.Identity()
                Map(Function(x As Workflow) x.Description)
                Map(Function(x As Workflow) x.Generation)
                Map(Function(x As Workflow) x.IsActive)
                HasMany(Function(x As Workflow) x.Phases).Cascade.All()
            End Sub
        End Class

        Public Class EngagementWorkflowMapping
            Inherits SubclassMap(Of EngagementWorkflow)
            Sub New()
                Map(Function(x As EngagementWorkflow) x.ClientNo)
                Map(Function(x As EngagementWorkflow) x.EngagementNo)
            End Sub
        End Class

    How would you approach mapping this in Fluent (or hbm) so that you could load just the workflow base class when designing the flow, or the engagement subclass versions of each when being used by an engagement?

    Read the article

  • RegAsm for Class Library Used in VB6 Application

    - by michael.lukatchik
    To be short and to the point: I've built a C# class library that is both COM-visible and registered for COM interop. I've compiled the library, which resulted in the generation of .dll and .tlb files.

    I have another machine that's running a VB6 application, so I copied the .dll and .tlb files over to the C:\Windows\system32 folder on that machine. I then registered those files using the following:

        C:\Windows\Microsoft.NET\Framework\v2.0.50727\RegAsm C:\Windows\system32\TestClass.dll /tlb:TestClass.tlb

    After the files were registered successfully, I added a project reference to the TestClass.tlb file from inside my VB6 app, then tried to invoke a method in my newly referenced class like so:

        Dim myObject As TestNamespace.TestClass
        Set myObject = New TestNamespace.TestClass
        MsgBox (myObject.TestMethod())

    It doesn't work, and I receive a -2147024894 automation error. I've read that I shouldn't install the DLL into a private folder like system32, and that I should either register it in the GAC or register it in another location using the /codebase option:

        C:\Windows\Microsoft.NET\Framework\v2.0.50727\RegAsm C:\TestClass.dll /tlb:TestClass.tlb /codebase

    Is there any reason I shouldn't be using system32? Past devs who have worked on this project have placed assembly files used by this VB6 project into system32, and there haven't seemed to be any issues. When I register my DLL in the system32 location, I get the automation error. When I register my DLL in another location (e.g. C:\), the method call into my class library from VB6 works as expected. What gives?

    I should mention that we will NOT be using the GAC to register any DLLs. That's just the way it is. Any help is appreciated. Mike

    Read the article

  • Visual Studio 2005 to VS 2008

    - by Adi
    Hi all, I am a newbie at working in the VS IDE and have little experience of how the different libraries and files are linked in it. I have to build an OpenCV project, made in VS2005 by one of my colleagues, in VS2008. The project is for blob detection. The following is what he has to say in the readme:

    Steps to use the library (using MSVC++ SP5):

    1. Open the project of the library and build it.
    2. In the project where the library should be used, add:
       2.1 In "Project/Settings/C++/Preprocessor/Additional Include directories", add the directory where the blob library is stored.
       2.2 In "Project/Settings/Link/Input/Additional library path", add the directory where the blob library is stored, and in "Object/Library modules" add the cvblobslib.lib file.
    3. Include the file "BlobResult.h" where you want to use blob variables.
    4. To see an example of using the blob library, see the file example.txt inside the zip file.

    NOTE: Verify that in the project where cvblobslib.lib is used, the MFC runtime libraries are not mixed:
    1. Check "Project-Settings-C/C++-Code Generation-Use run-time library" of your project and set it to Debug Multithreaded DLL (debug version) or to Multithreaded DLL (release version).
    2. Check in "Project-Settings-General" how it uses the MFC. It should be "Use MFC in a shared DLL".

    NOTE: The library can be compiled and used in .NET using these steps, but the menu options may differ a little.
    NOTE 2: In the .NET version, the character sets must be equal in the .lib and in the project. [OpenCV yahoo group: Msg 35500]

    Can anyone explain to me how to go about doing this in VS2008? I would also appreciate it if someone could explain how the different libraries are linked, what Debug and Release are, and what everything in a Visual Studio project folder is for. Thanks in advance, Aditya

    Read the article

  • Idiomatic Scala way to deal with base vs derived class field names?

    - by Gregor Scheidt
    Consider the following base and derived classes in Scala:

        abstract class Base( val x : String )

        final class Derived( x : String ) extends Base( "Base's " + x ) {
          override def toString = x
        }

    Here, the identifier 'x' of the Derived class parameter shadows the field of the Base class, so invoking toString like this:

        println( new Derived( "string" ).toString )

    returns the Derived value and gives the result "string". So a reference to the 'x' parameter prompts the compiler to automatically generate a field on Derived, which is served up in the call to toString. This is usually very convenient, but it leads to a replication of the field (I'm now storing the field on both Base and Derived), which may be undesirable. To avoid this replication, I can rename the Derived class parameter from 'x' to something else, like '_x':

        abstract class Base( val x : String )

        final class Derived( _x : String ) extends Base( "Base's " + _x ) {
          override def toString = x
        }

    Now a call to toString returns "Base's string", which is what I want. Unfortunately, the code now looks somewhat ugly, and using named parameters to initialize the class also becomes less elegant:

        new Derived( _x = "string" )

    There is also a risk of forgetting to give the derived classes' initialization parameters different names and inadvertently referring to the wrong field (undesirable since the Base class might actually hold a different value). Is there a better way?

    Edit: To clarify, I really only want the Base values; the Derived ones just seem necessary for initializing the Base ones. The example only references them to illustrate the ensuing issues. It might be nice to have a way to suppress automatic field generation if the derived class would otherwise end up hiding a base class field.
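
    One workaround sketch (my own naming, not a standard idiom): make the primary constructor private and route construction through a companion apply, so the renamed parameter never leaks into the public API:

        abstract class Base(val x: String)

        object Derived {
          // callers can still write Derived(x = "string")
          def apply(x: String): Derived = new Derived(x)
        }

        final class Derived private (_x: String) extends Base("Base's " + _x) {
          override def toString = x // resolves to Base.x; no duplicate field on Derived
        }

    Since Derived's body never references _x after the superclass constructor call, the compiler has no reason to generate a second field for it.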

    Read the article

  • change custom mapping - sharp architecture/ fluent nhibernate

    - by csetzkorn
    I am using Sharp Architecture, which also deploys Fluent NHibernate. The DB schema SQL is generated during testing like this:

        [TestFixture]
        [Category("DB Tests")]
        public class MappingIntegrationTests
        {
            [SetUp]
            public virtual void SetUp()
            {
                string[] mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies();
                configuration = NHibernateSession.Init(
                    new SimpleSessionStorage(),
                    mappingAssemblies,
                    new AutoPersistenceModelGenerator().Generate(),
                    "../../../../app/XXX.Web/NHibernate.config");
            }

            [TearDown]
            public virtual void TearDown()
            {
                NHibernateSession.CloseAllSessions();
                NHibernateSession.Reset();
            }

            [Test]
            public void CanConfirmDatabaseMatchesMappings()
            {
                var allClassMetadata = NHibernateSession.GetDefaultSessionFactory().GetAllClassMetadata();
                foreach (var entry in allClassMetadata)
                {
                    NHibernateSession.Current.CreateCriteria(entry.Value.GetMappedClass(EntityMode.Poco))
                        .SetMaxResults(0).List();
                }
            }

            /// <summary>
            /// Generates and outputs the database schema SQL to the console
            /// </summary>
            [Test]
            public void CanGenerateDatabaseSchema()
            {
                System.IO.TextWriter writeFile = new StreamWriter(@"d:/XXXSqlCreate.sql");
                var session = NHibernateSession.GetDefaultSessionFactory().OpenSession();
                new SchemaExport(configuration).Execute(true, false, false, session.Connection, writeFile);
            }

            private Configuration configuration;
        }

    I am trying to use an auto-mapping override to change the standard mapping of strings from NVARCHAR(255) to varchar(max):

        using FluentNHibernate.Automapping;
        using xxx.Core;
        using SharpArch.Data.NHibernate.FluentNHibernate;
        using FluentNHibernate.Automapping.Alterations;

        namespace xxx.Data.NHibernateMaps
        {
            public class x : IAutoMappingOverride<x>
            {
                public void Override(AutoMapping<x> mapping)
                {
                    mapping.Map(x => x.text, "text").CustomSqlType("varchar(max)");
                    mapping.Map(x => x.url, "url").CustomSqlType("varchar(max)");
                }
            }
        }

    This is not picked up during the SQL schema generation. I also tried:

        mapping.Map(x => x.text, "text").Length(100000);

    Any ideas? Thanks. Christian
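
    If the goal is really "every string column becomes varchar(max)", a Fluent NHibernate property convention may be less fragile than per-class overrides -- a sketch (assuming the FluentNHibernate.Conventions API, and that the convention is registered where the auto-persistence model is built):

        using FluentNHibernate.Conventions;
        using FluentNHibernate.Conventions.Instances;

        // applied to every mapped property during model generation
        public class LongTextConvention : IPropertyConvention
        {
            public void Apply(IPropertyInstance instance)
            {
                if (instance.Property.PropertyType == typeof(string))
                    instance.CustomSqlType("varchar(max)");
            }
        }

    Conventions usually need to be added explicitly, e.g. via .Conventions.Add<LongTextConvention>() in the AutoPersistenceModelGenerator; if the override above is silently ignored, it is also worth checking that the overrides assembly is actually handed to the model (e.g. with UseOverridesFromAssemblyOf).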

    Read the article

  • Should the entity framework + self tracking entities be saving me time

    - by sipwiz
    I've been using the Entity Framework in combination with the self-tracking entity code generation templates for my latest Silverlight-to-WCF application. It's the first time I've used the Entity Framework in a real project, and my hope was that I would save myself a lot of time and effort by being able to automatically update the whole data access layer of my project when my database schema changed. Happily, I've found that to be the case: updating my database schema by adding a new table, changing column names, adding new columns, etc. can be propagated to my business object classes by using the update-from-database option on the Entity Framework model.

    Where I'm hurting is the CRUD operations within my WCF service in response to actions on my Silverlight client. I use the same self-tracking Entity Framework business objects in my Silverlight app, but I find I'm continually having to fight against problems such as foreign key associations not being handled correctly when updating an object, or the change tracker getting confused about the state of an object at the Silverlight end and the data access operation within the WCF layer throwing a wobbly. It's got to the point where I have now spent more time dealing with these quirks than I did on my previous project, where I used LINQ to SQL as the starting point for rolling my own business objects.

    Is it just me being hopeless, or is the self-tracking entities approach something that should be avoided until it's more mature?

    Read the article

  • Apache Tomcat 7 + CouchDB on the same host

    - by demotics2002
    I couldn't find any guide on the internet about how to make them work together. I found some CouchDB tutorials, but they mostly have the web pages hosted in CouchDB's own web server. My requirements:

    1. Use Tomcat 7 (or another version) - I will be using JSP for the website. It has some features that require file upload, processing of files, file generation, etc., which will require Java. It also has an admin console that will require the next item.
    2. ExtJS (maybe V4) - I will need this in the admin console page for RESTful access to CouchDB and for other UI components (sorry, but I am not considering jQuery at the moment because I am already familiar with ExtJS).
    3. CouchDB - because the client needs a dynamic structure of data.

    Now my question is how to make Tomcat and CouchDB run on the same host (and port, of course)? As much as possible I would like to avoid making my pages do cross-domain JS calls. Worst case, I may have to create a servlet that overrides PUT/GET/POST/DELETE and calls CouchDB (either by using a driver or an HTTP client).
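
    A rough sketch of that servlet-proxy idea (the class name, URL mapping, and CouchDB port are hypothetical; only GET is handled and there is no error handling), which keeps every request on Tomcat's own host and port so the browser never makes a cross-domain call:

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // mapped to /couch/* in web.xml; forwards each request to local CouchDB
        public class CouchProxyServlet extends HttpServlet {
            private static final String COUCH = "http://localhost:5984"; // assumed CouchDB address

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws java.io.IOException {
                String target = COUCH + req.getPathInfo()
                        + (req.getQueryString() != null ? "?" + req.getQueryString() : "");
                HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                resp.setStatus(conn.getResponseCode());
                resp.setContentType(conn.getContentType());
                // stream CouchDB's response body straight back to the browser
                try (InputStream in = conn.getInputStream(); OutputStream out = resp.getOutputStream()) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                }
            }
        }

    The same pattern extends to doPut/doPost/doDelete by copying the request method and body through as well.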

    Read the article

  • Transaction Isolation Level of Serializable not working for me

    - by Shahriar
    I have a website which is used by all branches of a store. What it does is record customer purchases in a table called myTransactions, which has a column named SerialNumber. For each purchase I create a record in the transactions table and assign a serial to it. The stored procedure that does this calls a UDF to get a new serial number before inserting the record, like below:

        CREATE PROCEDURE mytransaction_Insert
        AS
        BEGIN
            INSERT INTO myTransactions (column1, column2, column3, ..., SerialNumber)
            VALUES (value1, value2, value3, ..., getTransactionNSerialNumber())
        END

        CREATE FUNCTION getTransactionNSerialNumber()
        RETURNS INT
        AS
        BEGIN
            RETURN ISNULL((SELECT TOP (1) SerialNumber
                           FROM myTransactions WITH (READUNCOMMITTED)
                           ORDER BY SerialNumber DESC), 0) + 1
        END

    The website is being used by many users in different stores at the same time, and it is creating many duplicate (identical) SerialNumbers. So I added a SQL transaction at the READ COMMITTED level and still got duplicate serial numbers. I changed it to SERIALIZABLE in order to lock the resources, and I not only got duplicate transaction numbers (HOW?!) but also sporadic deadlocks between calls to the same stored procedure. This is what I tried (with omissions of try/catch blocks and rollbacks):

        CREATE PROCEDURE mytransaction_Insert
        AS
        BEGIN
            SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
            BEGIN TRANSACTION ins
                INSERT INTO myTransactions (column1, column2, column3, ..., SerialNumber)
                VALUES (value1, value2, value3, ..., getTransactionNSerialNumber())
            COMMIT TRANSACTION ins
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        END

    I even copied the function that gets the serial number directly into the stored procedure instead of using the UDF call, and still got duplicate serial numbers. So, how can a stored procedure create something like the C# lock() {} block?

    By the way, I have to implement the transaction serial number using the same pattern: I can't change the SerialNumber to an identity field or anything else, and for various reasons I need to generate the serial number inside the database and can't move its generation to the application level. Thank you.
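
    For what it's worth, the usual T-SQL equivalent of lock() here is to read the current maximum with UPDLOCK and HOLDLOCK hints inside the same transaction as the INSERT, so a second caller blocks until the first commits -- a sketch (@value1..@value3 stand in for the procedure's real parameters):

        CREATE PROCEDURE mytransaction_Insert
        AS
        BEGIN
            BEGIN TRANSACTION
                DECLARE @serial INT

                -- UPDLOCK + HOLDLOCK takes and keeps a range lock on the max row,
                -- so concurrent callers wait here instead of reading the same value
                SELECT @serial = ISNULL(MAX(SerialNumber), 0) + 1
                FROM myTransactions WITH (UPDLOCK, HOLDLOCK)

                INSERT INTO myTransactions (column1, column2, column3, SerialNumber)
                VALUES (@value1, @value2, @value3, @serial)
            COMMIT TRANSACTION
        END

    Note also that the READUNCOMMITTED hint in the original function works against the SERIALIZABLE attempt: a table hint overrides the session isolation level for that read, which would explain duplicates even under SERIALIZABLE.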

    Read the article

  • Make seems to think a prerequisite is an intermediate file, removes it

    - by James
    For starters, this exercise in GNU make was admittedly just that: an exercise rather than a practicality, since a simple bash script would have sufficed. However, it brought up interesting behavior I don't quite understand.

    I've written a seemingly simple Makefile to handle generation of SSL key/cert pairs as necessary for MySQL. My goal was for make <name> to result in <name>-key.pem, <name>-cert.pem, and any other necessary files (specifically, the CA pair if any of it is missing or needs updating, which leads into another interesting follow-up exercise of handling reverse deps to reissue any certs that had been signed by a missing/updated CA cert).

    After executing all rules as expected, make seems to be too aggressive at identifying intermediate files for removal; it removes a file I thought would be "safe" since it should have been generated as a prereq to the main rule I'm invoking. (Humbly translated, I have likely misinterpreted make's documented behavior to suit my expectation, but don't understand how. ;-)

    Edited (thanks, Chris!): adding %-cert.pem to .PRECIOUS does, of course, prevent the deletion. (I had been using the wrong syntax.)

    Makefile:

        OPENSSL = /usr/bin/openssl  # Corrected, thanks Chris!

        .PHONY: clean

        default: ca

        clean:
        	rm -I *.pem

        %: %-key.pem %-cert.pem
        	@# Placeholder (to make this implicit create a rule and not cancel one)

        Makefile:
        	@# Prevent the catch-all from matching Makefile

        ca-cert.pem: ca-key.pem
        	$(OPENSSL) req -new -x509 -nodes -days 1000 -key ca-key.pem > $@

        %-key.pem:
        	$(OPENSSL) genrsa 2048 > $@

        %-csr.pem: %-key.pem
        	$(OPENSSL) req -new -days 1000 -nodes -key $< > $@

        %-cert.pem: %-csr.pem ca-cert.pem ca-key.pem
        	$(OPENSSL) x509 -req -in $< -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > $@

    Output:

        $ make host1
        /usr/bin/openssl genrsa 2048 > ca-key.pem
        /usr/bin/openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem
        /usr/bin/openssl genrsa 2048 > host1-key.pem
        /usr/bin/openssl req -new -days 1000 -nodes -key host1-key.pem > host1-csr.pem
        /usr/bin/openssl x509 -req -in host1-csr.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > host1-cert.pem
        rm host1-csr.pem host1-cert.pem

    This is driving me crazy, and I'll happily try any suggestions and post results. If I'm just totally noobing out on this one, feel free to jibe away. You can't possibly hurt my feelings. :)
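
    A minimal sketch of the fix mentioned in the edit above, plus the blanket alternative (both are documented GNU make features):

        # explicitly mark the patterns worth keeping
        .PRECIOUS: %-csr.pem %-cert.pem

        # or the blanket form: with no prerequisites, .SECONDARY treats every
        # target as secondary, so nothing is auto-deleted as an intermediate
        .SECONDARY:

    .SECONDARY is often preferred over .PRECIOUS because precious files are also kept when make is interrupted mid-recipe, which can leave half-written files behind.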

    Read the article

  • Need Advice on designing ATL inproc Server (dll) that serves as both a source and a sink of events.

    - by Andrew
    Hi, I need to design an ATL in-proc server that, besides exposing methods and properties, can both fire events (source) and serve as a sink for a third-party COM control that fires events. I would assume that this is a fairly common requirement. I can also foresee several "gotchas" that I would like to read up on before commencing the design. My questions/concerns are:

    - Can someone point me to an example?
    - Which threading model should I use?
    - Should I have a separate COM object for the sink?
    - Should I, and how do I, protect certain memory? For example, my server will receive data from the third-party control. It will save this and, in some cases, fire an event to interested clients. The interested clients will request the data through a standard method or property.

    I did try to research this myself. I can find many examples of COM servers that are sources, and some that are sinks, but never both. The only post I did find was this: http://www.generation-nt.com/us/atl-control-an-event-source-sink-help-9098542.html, which strongly advocates putting the sink on a separate COM object. Any leads, tutorials, or ideas would be much appreciated. Thanks, Andrew

    Read the article

  • HTML Doctype Setting / IE Quirks Mode

    - by Frederico
    I'm working within the parameters of an unreachable developer, who has created an HTML generation system for our products. Whenever a new page is generated, he places:

        <!-- updated page at 05/MAY/2010 02:58.58 -->
        <!-- You must use the template manager to modify the formatting of this page. -->

    resulting in my code looking like:

        <!-- updated page at 05/MAY/2010 02:55.30 -->
        <!-- You must use the template manager to modify the formatting of this page. -->
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html lang="en" xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">

    I don't believe IE reads this doctype at all, as when looking at the developer screen it renders in quirks mode. Is there any other way to force IE out of this horrible quirks mode? I've been trying to reach the developer, but he has been rather unavailable. Thank you in advance for any help you might have to offer.
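
    If the generator comments are indeed what is tripping IE into quirks mode, the two usual escape hatches are getting the DOCTYPE emitted first, or (IE8 and later) forcing the engine with an X-UA-Compatible hint -- a sketch:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <!-- generator comments moved below the DOCTYPE -->
        <html lang="en">
        <head>
            <!-- IE8+ only: force the most recent rendering engine -->
            <meta http-equiv="X-UA-Compatible" content="IE=edge" />
        </head>

    The same X-UA-Compatible value can also be sent as an HTTP response header, which sidesteps the generated markup entirely.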

    Read the article

  • Sort on DataView does not work if DataTable has zero rows

    - by BigBlondeViking
    We have a WPF app that has a DataGrid inside a ListView.

        private DataTable table_;

    We do a bunch of dynamic column generation (depending on the report we are showing). We then run a query and fill the DataTable row by row; this query may or may not return data (that's not the problem -- an empty grid is expected). We set the ListView's ItemsSource to the DefaultView of the DataTable:

        lv.ItemsSource = table_.DefaultView;

    We then (looking at the user's past usage of the app) set the sort on the column. The Sort method is below:

        private void Sort(string sortBy, ListSortDirection direction)
        {
            var dataView = CollectionViewSource.GetDefaultView(lv.ItemsSource);
            dataView.SortDescriptions.Clear();
            var sd = new SortDescription(sortBy, direction);
            dataView.SortDescriptions.Add(sd);
            dataView.Refresh();
        }

    In the zero-DataTable-rows scenario, the sort does not "hold", and if we dynamically add rows they will not be in sorted order. If the DataTable has at least one row when the sort is applied and we dynamically add rows to the DataTable, the rows come in sorted correctly. I have built a standalone app that replicates this. It is an annoyance, and I could add a check to see if the DataTable was empty and re-sort. Does anyone know what's going on here, and am I doing something wrong? FYI: what we based this on comes from MSDN as well: http://msdn.microsoft.com/en-us/library/ms745786.aspx
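
    One workaround worth trying (a sketch; the assumption is that sorting at the ADO.NET layer sidesteps whatever the collection view does with an empty source): set the sort expression on the DataView itself, since a DataView keeps its Sort expression regardless of the current row count, so later rows land in order:

        private void Sort(string sortBy, ListSortDirection direction)
        {
            // sort on the DataView instead of the WPF collection view;
            // the expression persists even while the table has zero rows
            var view = (DataView)lv.ItemsSource;
            view.Sort = sortBy + (direction == ListSortDirection.Descending ? " DESC" : " ASC");
        }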

    Read the article

  • Makefile trickery using VPATH and include.

    - by roe
    Hi, I'm playing around with makefiles and the VPATH variable. Basically, I'm grabbing source files from a few different places (specified by the VPATH) and compiling them into the current directory, using simply a list of the .o files that I want.

    So far so good. Now I'm generating dependency information into a file called '.depend' and including that. GNU make will attempt to use the rules defined so far to create the included file if it doesn't exist, so that's OK. Basically, my makefile looks like this:

        VPATH = A/source:B/source:C/source
        objects = first.o second.o third.o

        executable: $(objects)

        .depend: $(objects:.o=.c)
        	$(CC) -MM $^ > $@

        include .depend

    Now for the real question: can I suppress the generation of the .depend file in any way? I'm currently working in a ClearCase environment -- sloooow -- so I'd prefer to have more control over when the dependency information gets updated.

    It's more or less an academic exercise, as I could just wrap the thing in a script which touches the .depend file before executing make (thus making it more recent than any source file), but it'd be interesting to know if I can somehow suppress it using 'pure' make. I cannot remove the dependency on the source files (i.e. using simply .depend:), as I'm depending on the $^ variable to do the VPATH resolution for me.

    If there'd be any way to only update dependencies as a result of updated #include directives, that'd be even better of course. But I'm not holding my breath for that one. :)
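
    One standard trick (a sketch; both MAKECMDGOALS and -include are documented GNU make features) is to skip the include, and therefore the regeneration, for goals that don't need dependency info:

        # only include (and thus possibly regenerate) .depend when the goal needs it
        ifeq ($(filter clean,$(MAKECMDGOALS)),)
        -include .depend
        endif

    The leading '-' additionally silences the warning when .depend doesn't exist yet; make still remakes an included file that is out of date, so the guard on MAKECMDGOALS is what actually suppresses the regeneration.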

    Read the article

  • Understanding CSRF - Simple Question

    - by byronh
    I know this might make me seem like an idiot: I've read everything there is to read about CSRF, and I still don't understand how using a 'challenge token' adds any sort of prevention. Please help me clarify the basic concept; none of the articles and posts I read here on SO seemed to really explicitly state what value you're comparing with what. From OWASP:

    "In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires."

    If I understand the process correctly, this is what happens: I log in at http://example.com and a session/cookie is created containing this random token. Then, every form includes a hidden input also containing this random value from the session, which is compared with the session/cookie upon form submission.

    But what does that accomplish? Aren't you just taking session data, putting it in the page, and then comparing it with the exact same session data? Seems like circular reasoning. These articles keep talking about following the "same-origin policy", but that makes no sense, because all CSRF attacks ARE of the same origin as the user, just tricking the user into doing actions he/she didn't intend.

    Is there any alternative other than appending the token to every single URL as a query string? That seems very ugly and impractical, and it makes bookmarking harder for the user.
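
    For context, the comparison isn't circular because the two values arrive by different routes: the cookie is attached automatically to any request, including a forged one, while the hidden form field has to be read out of the page, which a cross-site attacker cannot do. A minimal sketch of both halves (illustrative Python; function and key names are made up):

        import hmac
        import secrets

        def issue_token(session):
            """Called when rendering a form: store the token server-side and
            embed the same value as a hidden <input> in the page."""
            token = secrets.token_hex(32)
            session["csrf_token"] = token
            return token

        def validate(session, submitted_token):
            """Called on form submission. A forged cross-site request carries
            the session cookie automatically but has no way to learn the
            token, so this comparison fails for it."""
            expected = session.get("csrf_token", "")
            return hmac.compare_digest(expected, submitted_token or "")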

    Read the article

  • Outputting value of TrueClass / FalseClass to integer or string

    - by Nick Gorbikoff
    Hello. I'm trying to figure out if there is an easy way to do the following, short of adding a to_i method to TrueClass/FalseClass. Here is the dilemma: I have a boolean field in my Rails app that is stored as a TINYINT in MySQL. However, I need to generate XML based on the data in MySQL and send it to a customer whose SOAP service requires the field in question to have 0 or 1 as its value. So at XML generation time I need to convert my false to 0 and my true to 1 (which is how they are stored in the DB).

    Since true and false lack a to_i method, I could write an if statement that generates either 1 or 0 depending on the true/false state. However, I have about 10 of these indicators, and creating an if/else for each is not very DRY. So what do you recommend I do?

    I could add a to_i method to the TrueClass/FalseClass classes, but I'm not sure where I should scope it in my Rails app. Just inside this particular model, or somewhere else?
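
    The core-extension route is small enough to live in an initializer, which scopes it app-wide rather than hiding it in one model -- a sketch (the file path and name are my own suggestion):

        # config/initializers/boolean_to_i.rb  -- hypothetical file name
        class TrueClass
          def to_i
            1
          end
        end

        class FalseClass
          def to_i
            0
          end
        end

    After that, flag.to_i yields 1 or 0 anywhere in the app; the one-off alternative without monkey-patching is simply flag ? 1 : 0 at each call site.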

    Read the article

  • What data structure to use / data persistence

    - by Dave
    I have an app where I need one table of information with the following fields:

    - field 1: int or char
    - field 2: string (max 10 chars)
    - field 3: string (max 20 chars)
    - field 4: float

    I need the program to filter on field 1 based upon a segmented control, and select a field 2 value from a picker. From this data I need to look up field 4 to use in a calculation. Total records will be about 200; I never see it going above 400-500.

    I am going to use a singleton, which I am able to do; I just need help with the structure for this and with data persistence. What type of data structure should I use, and should I use NSNumber, NSString, etc. or old data types like float, char, etc.? I thought about a struct put into an array, but there is probably a better way. This is new to me, so any help or references to examples would be great. I also thought about a plist or dictionary, but it looks like that is just a lookup key and a single value, which obviously won't work. Core Data looked like overkill to me.

    Also, with any recommendation, how should I get the initial data into it? I want the user to be able to edit and add to the database. Sorry for the old terms, you can see what generation I am from... Thanks in advance!!!!

    Read the article

  • Syncing two separate structures to the same master data

    - by Mike Burton
    I've got multiple structures to maintain in my application. All link to the same records, and one of them could be considered the "master" in that it reflects actual relationships held in files on disk. The other structures are used to "call out" elements of the main design for purchase and work orders. I'm struggling to come up with a pattern that deals appropriately with changes to the master data. As an example, the following trees might refer to the same data:

        A |_ B |_ C |_ D |_ E |_ B |_ C |_ D

        A |_ B E C |_ D

        A |_ B C D E

    These secondary structures follow internal rules, but their overall structure is usually user-determined. In all cases (including the master), any element can be used in multiple locations and in multiple trees. When I add a child to any element in the tree, I want to either automatically build the secondary structure for each instance of the "master" element, or at least advertise the situation to the user and allow them to manually generate the data required for the secondary trees.

    Is there any pattern which might apply to this situation? I've been treating it as a view problem, but it turns out to be more complicated than that when you look at the initial generation of the data.

    Read the article

  • Compiling and linking libcurl to create a stand alone dll

    - by Haraldo
    Hi, I've managed to compile a DLL with the necessary linked libraries (*.lib) and with CURL_STATICLIB set in the preprocessor section, among other settings. I'm using the "libcurl-7.19.3-win32-ssl-msvc.zip" package and compiling with VS 2008 Express. This has been the first version I managed to get compiled properly with no link issues, etc.

    The problem I have now is that my DLL needs libcurl.dll to function, and this is not OK: my DLL needs to be independent. I have no idea how to implement this; it's taken all day just to get what I've got compiled. My settings:

    - Runtime library is set to Multi-threaded DLL (debug/release respectively) under C/C++ - Code Generation.
    - A number of preprocessor symbols are set, CURL_STATICLIB being one of them.
    - Configuration Type is set to Dynamic Library.
    - Use of MFC is set to "Use MFC in a static library".
    - Additional Library Directories is set to the lib folders (debug/release respectively).

    I've noticed there is a curllib_static.lib file, which I've tried instead of curllib.lib as an additional dependency, but it only compiles with the latter. This is driving me nuts! So I guess I need some guidance as to how to make my DLL completely static so it doesn't have any dependencies. I notice my DLL currently depends on:

        CURLLIB.DLL
        MSVCR90D.DLL

    As I'm pretty new to C++, it could be a setting I'm missing in VS 2008, but I'm not sure. One person said I should be using a static library with *.a files (libcurl.a), etc., but when I do this I get link errors which I haven't been able to resolve. Any guidance here would be much appreciated.
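
    For orientation, a sketch of what a static-libcurl consumer typically looks like on Windows/MSVC (the exact .lib names vary between libcurl packages, and the extra system libraries listed are an assumption to verify against the link errors you actually get):

        // define before including curl.h, or in the project's preprocessor settings,
        // so the headers don't declare imports from libcurl.dll
        #define CURL_STATICLIB
        #include <curl/curl.h>

        // the static curl library itself -- name varies by package
        // (curllib_static.lib, libcurl.lib, ...)
        #pragma comment(lib, "curllib_static.lib")

        // system libraries that static libcurl builds commonly pull in
        #pragma comment(lib, "ws2_32.lib")
        #pragma comment(lib, "wldap32.lib")
        #pragma comment(lib, "winmm.lib")

    Note that MSVCR90D.DLL in the dependency list is the debug C runtime: a release build with the static runtime (/MT instead of /MD) removes that dependency, though mixing /MT with shared-MFC settings is its own source of link errors.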

    Read the article

  • Using Zend Framework and Doctrine with independent modular structures

    - by stefax
    I've seen a lot of articles about integrating ZF and Doctrine. There is also a proposal for ZF here, but there are always two possible structures: either all models go into one top-level model directory, or into a module-related model directory.

        application
        |-- Bootstrap.php
        |-- configs
        |-- controllers
        |-- models          <-- EITHER HERE
        |-- modules
        |   `-- examplemodule
        |       |-- controllers
        |       |-- models  <-- OR HERE
        |       `-- views
        `-- views

    For our projects I see problems with either option:

    1. One directory (application/models): in a complex system there will be hundreds of files after a short time, above all when you have the table classes too (e.g. User.php and UserTable.php).
    2. Module-based model directories (application/modules/examplemodule/models): in many cases we use models in multiple modules at the same time. The "User", for example, is required in the "game" module, the "administration" module, and so on.

    Is there a way to use some kind of subdirectory layout under the top-level "models" directory to get some grouping? It should be completely independent of the module structure:

        application
        |-- Bootstrap.php
        ...
        |-- models
        |   |-- user
        |   |   |-- User.php
        |   |   |-- Friend.php
        |   |   `-- (other user-related models)
        |   `-- game
        |       |-- Game.php
        |       |-- Score.php
        |       `-- ...
        ...

    Any solution should support autoloading and class generation from YAML files. Any ideas, links, or solutions? Thanks!
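
    One possibility (low confidence -- a sketch against Zend Framework 1's resource autoloader; the resource-type names and class prefixes here are my own invention) is to map each model subdirectory to its own class prefix in the bootstrap:

        <?php
        class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
        {
            protected function _initAutoload()
            {
                $loader = new Zend_Loader_Autoloader_Resource(array(
                    'basePath'  => APPLICATION_PATH,
                    'namespace' => '',
                ));

                // e.g. class Model_User_Friend => models/user/Friend.php
                //      class Model_Game_Score  => models/game/Score.php
                $loader->addResourceType('model_user', 'models/user', 'Model_User');
                $loader->addResourceType('model_game', 'models/game', 'Model_Game');

                return $loader;
            }
        }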

    Read the article

  • Eclipselink update existing tables

    - by javydreamercsw
    Maybe I've got it wrong, but I thought JPA was able to update an existing table (when the model changed by adding a column); however, it is not working in my case. I can see in the logs EclipseLink attempting to create the table and failing because it already exists; instead of trying an ALTER to add the column, it just keeps going. My persistence.xml properties:

        <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/jwrestling"/>
        <property name="javax.persistence.jdbc.password" value="password"/>
        <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
        <property name="javax.persistence.jdbc.user" value="user"/>
        <property name="eclipselink.ddl-generation" value="create-tables"/>
        <property name="eclipselink.logging.logger" value="org.eclipse.persistence.logging.DefaultSessionLog"/>
        <property name="eclipselink.logging.level" value="INFO"/>

    And here's the log for the table with the change (the ONLINE column was added):

        [EL Warning]: 2010-05-31 14:39:06.044--ServerSession(16053322)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.1.0.v20100517-r7246): org.eclipse.persistence.exceptions.DatabaseException
        Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'account' already exists
        Error Code: 1050
        Call: CREATE TABLE account (ID INTEGER NOT NULL, USERNAME VARCHAR(32) NOT NULL, SECURITY_KEY VARCHAR(255) NOT NULL, EMAIL VARCHAR(64) NOT NULL, STATUS VARCHAR(8) NOT NULL, TIMEDATE DATETIME NOT NULL, PASSWORD VARCHAR(255) NOT NULL, ONLINE TINYINT(1) default 0 NOT NULL, PRIMARY KEY (ID))
        [EL Warning]: 2010-05-31 14:39:06.074--ServerSession(16053322)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.1.0.v20100517-r7246): org.eclipse.persistence.exceptions.DatabaseException

    After this it continues with the rest of the tables. Am I doing something wrong, or is this a bug?
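
    For reference, with ddl-generation set to create-tables EclipseLink only ever issues CREATE statements and skips tables that already exist, so new columns are never added. Later EclipseLink releases (2.4+, if memory serves -- not the 2.1.0 shown in the log) add a value that extends existing tables:

        <!-- requires an EclipseLink version that supports it; on older versions
             the only choices are create-tables and drop-and-create-tables -->
        <property name="eclipselink.ddl-generation" value="create-or-extend-tables"/>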

    Read the article

  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello experts. I am working on a mid-to-large-size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight, and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure, and patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the following factors, in priority order:

    1. Speed of development, with above-average performance
    2. Maintenance
    3. Future support and stability of the technology
    4. Performance

    Limitations: (1) as we need to strictly stay with Microsoft, we cannot use NHibernate or any ORM other than EF 4.0; (2) we can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there is no per-copy licensing issue.

    Questions: I have read so many articles about EF 4.0; on the surface it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or whichever you feel is best?

    I also need to answer questions like how long it would take to port the application from EF 4.0 to ADO.NET if we get stuck in the future on some EF 4.0 feature or hit a serious performance issue; and in the reverse case, if we go ahead and choose ADO.NET, how long it would take to switch to EF 4.0.

    Lastly, as I was going through the articles, I found that the code-only approach (with POCO classes) seems best suited to our requirements, as switching from one technology to the other is really easy. Please share your thoughts on the same, and please give guidance on the above questions.

    Read the article
