Search Results

Search found 81 results on 4 pages for 'persistance'.

Page 3/4 | < Previous Page | 1 2 3 4  | Next Page >

  • RESTful application validation. Mix of frontend/backend validation. How?

    - by Julian Davchev
    Hi. I'm using a RESTful service for all backend persistence and operations. I pass data from the frontend (by frontend I don't mean the client side, but the part that consumes the REST API) to the service, and I get back either success or a set of validation errors. The thing is, I also have checks that must be validated on the frontend, such as CSRF tokens and captchas. The only reasonable approach seems to be merging the validation results from the token/captcha checks with the validation errors coming back from REST. The awkward part is automating this, because I don't want the form field names to map 1:1 to the backend field names used by the REST documents. Any pointers or ideas are more than welcome.
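
    A minimal illustrative sketch (in C#, with hypothetical names - the question does not state a language) of merging the two error sources through an explicit backend-to-form field-name map:

        using System.Collections.Generic;

        // Illustrative only: merges locally detected errors (CSRF token, captcha)
        // with validation errors returned by the REST backend, translating backend
        // field names into form field names through an explicit map.
        public class ValidationMerger
        {
            private readonly IDictionary<string, string> _backendToFormField;

            public ValidationMerger(IDictionary<string, string> backendToFormField)
            {
                _backendToFormField = backendToFormField;
            }

            public IDictionary<string, IList<string>> Merge(
                IDictionary<string, IList<string>> localErrors,          // keyed by form field name
                IEnumerable<KeyValuePair<string, string>> backendErrors) // backend field name -> message
            {
                var merged = new Dictionary<string, IList<string>>(localErrors);
                foreach (var error in backendErrors)
                {
                    string formField;
                    if (!_backendToFormField.TryGetValue(error.Key, out formField))
                        formField = error.Key;                           // fall back to the raw backend name
                    if (!merged.ContainsKey(formField))
                        merged[formField] = new List<string>();
                    merged[formField].Add(error.Value);
                }
                return merged;
            }
        }

    The map keeps the form field names decoupled from the REST document field names, which is the 1:1 mapping concern raised above.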

    Read the article

  • Can I write to different jetty databases using JPA that is using the same "entity class"

    - by Per
    I am using the Java Persistence API and its EntityManager class, and I have it set up to store a class object that is written to the database. My problem is that I want to write to different databases using the same storage class. My solution was to write a StorageManagerfactory that holds all the EntityManagers in a Map. That looked good until I inspected the databases and realised that all the information (regardless of the Map, which returns the correct value) was being written to the same database (one of those initialised in the Map). So my question is: can I write to different databases using JPA with the same storage class (the class holding the structure of my database)? Thanks

    Read the article

  • Embedded analog of CouchDB, same as sqlite for SQL Server

    - by Mike Chaliy
    I like the idea of document-oriented databases like CouchDB, and I am looking for a simple embedded analog. My requirements are just:

    - persistent storage for schema-less data;
    - some simple in-process querying;
    - transactions and versioning (nice to have);
    - a Ruby API;
    - map/reduce (also nice to have);
    - should work on shared hosting.

    What I do not need are REST/HTTP interfaces (I will use it in-process). I also do not need any of the scalability features.

    Read the article

  • How to boot Linux from a 16gb USB flash drive

    - by Chris Harris
    I'm trying to install Linux on a single partition of a USB flash drive that's larger than 4 GB. The first place I went to is http://pendrivelinux.com. I can follow its instructions for installing Xubuntu 9.04 perfectly, but they break down when I try to scale beyond 4 GB. Several other tools (UNetbootin and usb-creator) follow a very similar formula. I figured out that a big problem of mine is that all of these tools assume the USB drive is formatted as FAT32, which unfortunately cannot hold a single file larger than 4 GB. This matters because I want to use just one partition, so that my persistence file, casper-rw, looks like one big partition to the OS once I've booted off the USB drive. I then tried following a myriad of instructions involving formatting the drive as one large ext2 filesystem and using extlinux to create a single bootable ext2 file system. This doesn't work for me: after about 20 attempts at verifying and slightly tweaking the formula, I cannot seem to get a "good" bootable ext2 file system built. I'm not entirely sure what's going on, but no matter how hard I try, the ext2 file system will not remain coherent after copying the Linux ISO contents over, copying the MBR, and executing extlinux to create the ext bootloader. Every time, after I follow these steps (in any order) and reboot, I get an unbootable USB drive. If I then mount the drive under Linux again, I see a mess of a file system (inodes have clearly been corrupted somewhere along the way). I suspected that the USB drive wasn't being fully flushed, so I tried the "sync" and "umount" commands before rebooting, which didn't change anything. I have several possible questions, but let's start with the obvious one: is there something I'm missing to create a bootable ext2 USB flash drive that's large (e.g. 16 GB)?

    Read the article

  • Redis connection issue

    - by mre
    We are currently experiencing a lot of Redis errors with the message

        Unable to connect: read error on connection, trying next server

    We run Redis on FreeBSD using phpredis, and we have a hard time reproducing the error on Ubuntu, so that might be a hint. There's a long-running issue on this topic on GitHub. Basically we get a socket from the operating system with a call to connect(host, port, timeout) in phpredis, but when we then do a select(db_index), we get an exception. Could there be an issue with persistence? I assume that connect() does nothing in the background and select() tries to access a connection that is actually closed. We don't run into a timeout, and we tried tuning TIME_WAIT without success. Any other ideas on where the problem might come from? What is the best way to track the issue down - dtrace maybe?

    Update: we are currently looking into our BGSAVE settings. Interestingly, it takes half a second or more to fork the process that regularly writes the data to disk (persistence), and maybe Redis can't respond to connect() requests during that timespan.

    Read the article

  • Multi-partition USB stick

    - by nightcracker
    In my freelance job as "the dude that fixes your computer" I have an extremely handy tool: a bootable USB stick with an Ubuntu LiveCD that lets me recover and investigate in a known, working environment. Now I want to reformat this USB stick and reinstall it with casper-rw persistence. I did this a few times before with a FAT-formatted USB stick, and it was a horror: the drive corrupted constantly because people accidentally removed the stick, the computer didn't shut down properly, etc. What I want now is a multi-partition USB stick, so I can put Ubuntu on an ext partition but still store some Windows files on it via a secondary FAT partition. However, I read somewhere that Windows will only check the first partition on a USB stick, which conflicts with the bootable Linux partition coming first. Is this possible in some way?

    EDIT: Perhaps it wasn't clear what the problem is. I read somewhere that Windows will only recognize the first partition on a USB stick, but I want two partitions, an ext partition and a FAT partition. No issue so far, except that in order to be bootable the ext partition must be the first one!

    Read the article

  • C# design: how to elegantly wrap a DAL class

    - by guazz
    I have an application which uses MyGeneration's dOOdads ORM to generate its Data Access Layer. dOOdads works by generating a persistence class for each table in the database. It works like so:

        // Load and save
        Employees emps = new Employees();
        if (emps.LoadByPrimaryKey(42))
        {
            emps.LastName = "Just Got Married";
            emps.Save();
        }

        // Add a new record
        Employees emps = new Employees();
        emps.AddNew();
        emps.FirstName = "Mr.";
        emps.LastName = "dOOdad";
        emps.Save();
        // After save the identity column is already here for me.
        int i = emps.EmployeeID;

        // Dynamic query - all employees with 'A' in their last name
        Employees emps = new Employees();
        emps.Where.LastName.Value = "%A%";
        emps.Where.LastName.Operator = WhereParameter.Operand.Like;
        emps.Query.Load();

    For the above example (i.e. the Employees DAL object) I would like to know the best method/technique for abstracting away some of the implementation details from my classes. I don't believe an Employee class should have Employees (the DAL) specifics in its methods - or perhaps this is acceptable? Is it possible to implement some form of repository pattern? Bear in mind that this is a high-volume, performance-critical application. Thanks, j
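
    A minimal sketch of one way to put a repository in front of the generated class, using only the dOOdads calls shown above (the Employee domain type and the interface names are illustrative assumptions):

        using System.Collections.Generic;

        // Plain domain type: knows nothing about dOOdads.
        public class Employee
        {
            public int EmployeeID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
        }

        public interface IEmployeeRepository
        {
            Employee GetById(int id);
            void Save(Employee employee);
        }

        // The only class that touches the generated Employees DAL object.
        public class EmployeeRepository : IEmployeeRepository
        {
            public Employee GetById(int id)
            {
                var emps = new Employees();
                return emps.LoadByPrimaryKey(id)
                    ? new Employee { EmployeeID = emps.EmployeeID, FirstName = emps.FirstName, LastName = emps.LastName }
                    : null;
            }

            public void Save(Employee employee)
            {
                var emps = new Employees();
                if (employee.EmployeeID == 0)
                    emps.AddNew();                              // new record
                else
                    emps.LoadByPrimaryKey(employee.EmployeeID); // existing record
                emps.FirstName = employee.FirstName;
                emps.LastName = employee.LastName;
                emps.Save();
                employee.EmployeeID = emps.EmployeeID;          // pick up the generated identity
            }
        }

    The dynamic-query API can be exposed the same way through intention-revealing methods (e.g. FindByLastNameLike) instead of leaking Where objects; whether the extra mapping step is acceptable in the hot path is something to measure rather than assume.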

    Read the article

  • How can I implement NHibernate session per request without a dependency on NHibernate?

    - by Ben
    I've raised this question before but am still struggling to find an example that I can get my head around (please don't just tell me to look at the S#arp Architecture project without at least some directions). So far I have achieved near persistence ignorance in my web project. My repository classes (in my data project) take an ISession in the constructor:

        public class ProductRepository : IProductRepository
        {
            private ISession _session;

            public ProductRepository(ISession session)
            {
                _session = session;
            }
            // ...
        }

    In my Global.asax I expose the current session and create and dispose sessions on BeginRequest and EndRequest (this is where I have the dependency on NHibernate):

        public static ISessionFactory SessionFactory = CreateSessionFactory();

        private static ISessionFactory CreateSessionFactory()
        {
            return new Configuration()
                .Configure()
                .BuildSessionFactory();
        }

        protected MvcApplication()
        {
            BeginRequest += delegate
            {
                CurrentSessionContext.Bind(SessionFactory.OpenSession());
            };
            EndRequest += delegate
            {
                CurrentSessionContext.Unbind(SessionFactory).Dispose();
            };
        }

    And finally my StructureMap registry:

        public AppRegistry()
        {
            For<ISession>().TheDefault
                .Is.ConstructedBy(x => MvcApplication.SessionFactory.GetCurrentSession());
            For<IProductRepository>().Use<ProductRepository>();
        }

    It would seem I need my own generic implementations of ISession and ISessionFactory that I can use in my web project and inject into my repositories? I'm a little stuck, so any help would be appreciated. Thanks, Ben
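
    One commonly suggested direction, sketched here with hypothetical names: let the web project depend only on small interfaces it owns, and keep the NHibernate-backed implementations (and the NHibernate reference) in the data/infrastructure project.

        using System;
        using NHibernate;

        // Seen by the web project: no NHibernate types.
        public interface IUnitOfWork : IDisposable
        {
            void Commit();
        }

        public interface IUnitOfWorkFactory
        {
            IUnitOfWork Begin();
        }

        // Lives in the data project next to the repositories.
        public class NHibernateUnitOfWork : IUnitOfWork
        {
            private readonly ISession _session;
            private readonly ITransaction _transaction;

            public NHibernateUnitOfWork(ISessionFactory sessionFactory)
            {
                _session = sessionFactory.OpenSession();
                _transaction = _session.BeginTransaction();
            }

            // Repositories (which already depend on NHibernate) can reach the session.
            public ISession Session { get { return _session; } }

            public void Commit()
            {
                _transaction.Commit();
            }

            public void Dispose()
            {
                _transaction.Dispose();
                _session.Dispose();
            }
        }

    Global.asax (or an HttpModule) would then call only IUnitOfWorkFactory.Begin() and Dispose() per request, with StructureMap wiring in the NHibernate implementation, so the web project never references NHibernate directly.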

    Read the article

  • dynamically horizontally scalable key-value store

    - by Zubair
    Hi, is there a key-value store that will give me the following:

    - allow me to simply add and remove nodes, redistributing the data automatically;
    - allow me to remove nodes and still have 2 extra data nodes to provide redundancy;
    - allow me to store text or images up to 1 GB in size;
    - can store small-size data up to 100 TB of data in total;
    - fast (so queries can be performed on top of it);
    - makes all of this transparent to the client;
    - works on Ubuntu/FreeBSD or Mac;
    - free or open source.

    I basically want something I can use as a "single" system, without having to worry about having memcached, a DB, and several storage components - so yes, I do want a database "silver bullet", you could say. Thanks, Zubair

    Answers so far:

    - MogileFS on top of BackBlaze - as far as I can see this is just a filesystem, and after some research it only seems appropriate for large image files.
    - Tokyo Tyrant - needs LightCloud. This doesn't auto-scale as you add new nodes. I did look into it, and it seems very fast for queries that fit onto a single node, though.
    - Riak - this is one I am looking into myself, but I don't have any results yet.
    - Amazon S3 - is anyone using this as their sole persistence layer in production? From what I have seen it seems to be used for storage of images, as complex queries are too expensive.
    - Cassandra (suggested by @shaman) - definitely one I am looking into.

    So far it seems there is no database or key-value store that fulfils the criteria I mentioned - not even after offering a bounty of 100 points did the question get answered!

    Read the article

  • ASP.NET MVC and NHibernate coupling

    - by Ben
    I have just started learning NHibernate. Over the past few months I have been using IoC/DI (StructureMap) and the repository pattern, and it has made my applications much more loosely coupled and easier to test. When switching my persistence layer to NHibernate I decided to stick with my repositories. Currently I am creating a new session on each method call, but of course this means I cannot benefit from lazy loading. I therefore want to implement session-per-request, but in doing so my web project becomes dependent on NHibernate (perhaps this is not such a bad thing?). I was planning to inject ISession into my repositories and create and dispose sessions in the BeginRequest/EndRequest events (see http://ayende.com/Blog/archive/2009/08/05/do-you-need-a-framework.aspx). Is this a good approach? Presumably I cannot use session-per-request without having a reference to NHibernate in my web project? Having the web project depend on NHibernate prompts my next (few) questions: why even bother with the repository? Since my web app calls services that talk to the repositories, why not ditch the repositories and just put my NHibernate persistence code inside the services? And finally, is there really any need to split things out into so many projects - are a web project and an infrastructure project sufficient? I realise I have veered off a bit from my original question, but everyone seems to have their own opinion on these topics. Some people use the repository pattern with NHibernate, some don't. Some people keep their mapping files with the related classes, others have a separate project for them. Many thanks, Ben

    Read the article

  • Moq, a translator and an expression

    - by jeriley
    I'm working with an expression inside a mocked "get service" and ran into a rather annoying issue. In order for this test to run correctly and the get service to return what it should, there's a translator in between that takes what you've asked for, sends it off, and gets back what you -really- want. So, thinking this was easy, I attempted this (fakeList holds the TEntity objects used by the UI, translated from TEnterpriseObject, the actual persistence type):

        mockGet.Setup(mock => mock.Get(It.IsAny<Expression<Func<TEnterpriseObject, bool>>>())).Returns(
            (Expression<Func<TEnterpriseObject, bool>> expression) =>
            {
                var items = new List<TEnterpriseObject>();
                var translator = (IEntityTranslator<TEntity, TEnterpriseObject>)
                    ObjectFactory.GetInstance(typeof(IEntityTranslator<TEntity, TEnterpriseObject>));
                fakeList.ForEach(fake => items.Add(translator.ToEnterpriseObject(fake)));
                items = items.Where(expression);
                var result = new List<TEnterpriseObject>(items);
                fakeList.Clear();
                result.ForEach(item => translator.ToEntity(item));
                return items;
            });

    I'm getting the red squiggly under items.Where(expression) - it says the type arguments cannot be inferred from the usage (confused between Func<TEnterpriseObject, bool> and Func<TEnterpriseObject, int, bool>). A far simpler version works great:

        mockGet.Setup(mock => mock.Get(It.IsAny<Expression<Func<TEntity, bool>>>())).Returns(
            (Expression<Func<TEntity, bool>> expression) => fakeList.AsQueryable().Where(expression));

    So I'm not sure what I'm missing... ideas?
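
    For context, the inference error comes from the fact that Enumerable.Where (what a List<T> exposes) takes a compiled Func<T, bool>, while only Queryable.Where accepts an Expression<Func<T, bool>>. A small sketch of the two ways the captured expression can be applied to the in-memory list from the snippet above (variable names reused from that snippet):

        // Either compile the expression tree into a Func<TEnterpriseObject, bool>...
        var filtered = items.Where(expression.Compile()).ToList();

        // ...or query the list through IQueryable, as the simpler TEntity setup already does.
        var filteredViaQueryable = items.AsQueryable().Where(expression).ToList();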

    Read the article

  • Using Sub-Types And Return Types in Scala to Process a Generic Object Into a Specific One

    - by pr1001
    I think this is about covariance, but I'm weak on the topic... I have a generic Event class used for things like database persistence, let's say like this:

        class Event(
            subject: Long,
            verb: String,
            directobject: Option[Long],
            indirectobject: Option[Long],
            timestamp: Long) {
          def getSubject = subject
          def getVerb = verb
          def getDirectObject = directobject
          def getIndirectObject = indirectobject
          def getTimestamp = timestamp
        }

    However, I have lots of different event verbs and I want to use pattern matching and such with these different event types, so I will create some corresponding case classes:

        trait EventCC
        case class Login(user: Long, timestamp: Long) extends EventCC
        case class Follow(
          follower: Long,
          followee: Long,
          timestamp: Long
        ) extends EventCC

    Now, the question is how I can easily convert generic Events to the specific case classes. This is my first stab at it:

        def event2CC[T <: EventCC](event: Event): T = event.getVerb match {
          case "login" => Login(event.getSubject, event.getTimestamp)
          case "follow" => Follow(
            event.getSubject,
            event.getDirectObject.getOrElse(0),
            event.getTimestamp
          )
          // ...
        }

    Unfortunately, this is wrong:

        <console>:11: error: type mismatch;
         found   : Login
         required: T
               case "login" => Login(event.getSubject, event.getTimestamp)
                                    ^
        <console>:12: error: type mismatch;
         found   : Follow
         required: T
               case "follow" => Follow(event.getSubject, event.getDirectObject.getOrElse(0), event.getTimestamp)

    Could someone with greater type-fu than me explain 1) whether what I want to do is possible (or reasonable, for that matter), and 2) if so, how to fix event2CC. Thanks!

    Read the article

  • Fetching Core Data for Tableview on iPhone - Tutorial leaves me with crashes when adding items to a

    - by Gordon Fontenot
    I've been following the Core Data tutorial on Apple's developer site, and all is good until I have to add something to the fetched store. After a successful build and run, I get this error when I try to add a new item to the list:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (0) must be equal to the number of rows contained in that section before the update (0), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted).

    Because the fetch itself goes through fine, and because replacing the fetching with eventList = [[NSMutableArray alloc] init] works as expected (without persistence, of course), I'm led to believe the problem comes from not creating the mutable array correctly. Here's the problematic part of the code:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event"
                                                  inManagedObjectContext:managedObjectContext];
        [request setEntity:entity];

        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"creationDate" ascending:NO];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
        [request setSortDescriptors:sortDescriptors];
        [sortDescriptors release];
        [sortDescriptor release];

        NSError *error;
        NSMutableArray *mutableFetchResults =
            [[managedObjectContext executeFetchRequest:request error:&error] mutableCopy];
        if (mutableFetchResults = nil) {
            // Handle the error
        }
        [self setEventList:mutableFetchResults];
        [mutableFetchResults release];
        [request release];

    I have tried switching the NSArrays in the second chunk out for NSMutableArrays, but I still get the same error. For reference, the section of code that throws the error when I try adding an entry is here:

        [eventList insertObject:event atIndex:0];
        NSIndexPath *indexPath = [NSIndexPath indexPathForRow:0 inSection:0];
        [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                              withRowAnimation:UITableViewRowAnimationFade];
        [self.tableView scrollToRowAtIndexPath:[NSIndexPath indexPathForRow:0 inSection:0]
                              atScrollPosition:UITableViewScrollPositionTop animated:YES];

    It errors out at the insertRowsAtIndexPaths call. Thanks in advance for any help.

    Read the article

  • How to bundle extension methods requiring configuration in a library

    - by Greg
    Hi, I would like to develop a library that I can reuse to add various methods for navigating/searching a graph (nodes/relationships, or if you like vertices/edges). The generic requirements would be:

    - There are existing classes in the main project that already implement the equivalent of the graph class (which contains the lists of nodes/relationships), the node class and the relationship class (which links nodes together). The main project likely already has persistence mechanisms for this data (e.g. these classes might be built using Entity Framework for persistence).
    - Methods would need to be added to each of these three classes: (a) graph class - methods like "search all nodes"; (b) node class - methods such as "find all children to depth i"; (c) relationship class - methods like "return relationship type", "get parent node", "get child node".
    - I assume there would need to be a way to tell the library (with the extending methods) the class names used for the graph/node/relationship types, as different projects might use different names. To some extent it would need to work like a generic collection, where you pass the classes to the collection so it knows what they are.
    - There would also need to be a way to tell the library which node property to use for equality checks (e.g. for a graph of webpages the equality field might be the URI path).

    I'm assuming that abstract base classes wouldn't really work, as they would tie usage down to the same persistence approach and the same class names. What I really want is the ability, for any project with "graph-like" characteristics, to add graph searching/walking methods to it.
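
    A minimal sketch of one possible shape for such a library in C# - the interface, method and parameter names below are illustrative assumptions, not part of the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // The consuming project implements these thin interfaces on its existing
        // (e.g. Entity Framework) classes; the library depends only on the abstractions.
        public interface IRelationship<TNode>
        {
            string RelationshipType { get; }
            TNode Parent { get; }
            TNode Child { get; }
        }

        public interface IGraph<TNode, TRel> where TRel : IRelationship<TNode>
        {
            IEnumerable<TNode> Nodes { get; }
            IEnumerable<TRel> Relationships { get; }
        }

        public static class GraphExtensions
        {
            // "Find all children to depth i", parameterised by a key selector so the
            // caller decides which property (e.g. a URI) identifies a node.
            public static IEnumerable<TNode> ChildrenToDepth<TNode, TRel>(
                this IGraph<TNode, TRel> graph, TNode start, int depth,
                Func<TNode, object> key) where TRel : IRelationship<TNode>
            {
                var frontier = new List<TNode> { start };
                for (int level = 0; level < depth && frontier.Count > 0; level++)
                {
                    // Expand one level: children of every node in the current frontier.
                    // (Cycles are only bounded by depth in this simplified sketch.)
                    frontier = graph.Relationships
                        .Where(r => frontier.Any(n => Equals(key(r.Parent), key(n))))
                        .Select(r => r.Child)
                        .ToList();
                    foreach (var child in frontier)
                        yield return child;
                }
            }
        }

    Because equality is expressed as a delegate rather than baked into an abstract base class, the same extension methods work against any persistence approach and any class names.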

    Read the article

  • EMF ecore and xsd out of sync, how to resolve ?

    - by SeB
    Hi there. My application uses a model based on an XSD that has been converted to an Ecore model before generating the Java classes. One of my team members modified the .ecore metamodel in a previous version, changing the name of an attribute that used to be generated, but not the extended metadata that specifies the element name used for XML persistence:

        <eStructuralFeatures xsi:type="ecore:EReference" name="javaDocsAndUserApi" upperBound="-1"
            eType="#//JavaDocsAndUserApi" containment="true" resolveProxies="false">
          <eAnnotations source="http:///org/eclipse/emf/ecore/util/ExtendedMetaData">
            <details key="kind" value="element"/>
            <details key="name" value="docsAndUserApi"/>
          </eAnnotations>
        </eStructuralFeatures>

    So we have an attribute named javaDocsAndUserApi and a persisted element named docsAndUserApi, and of course if I rename the attribute in the XSD to javaDocsAndUserApi, the Ecore transformation will generate metadata named javaDocsAndUserApi as well, which breaks compatibility with previously persisted models. I have looked at the XSD authoring guide to find an ecore:som_attribute that would allow me to specify which key to use in the XSD, to force the metadata to be named docsAndUserApi during the XSD-to-Ecore transformation, but did not find anything. Does anybody have an idea to help me? Thank you.

    Read the article

  • Fastest way to learn Flex and Java EE?

    - by LostWebNewbie
    OK, so two friends and I have to build a web app, and we think it's a good opportunity to learn Java EE and Flex. The thing is, we have very little knowledge of them and only three months to do it (it doesn't have to be super complicated). So my question is: what, in your opinion, would be the fastest way to learn them both? I guess we need to know some JSP, servlets, JPA (?), Flex, maybe JavaScript+CSS. Anything else, like EJB? Should we also learn Spring (or Struts)? Obviously reading books would be a good idea, but I bet we won't make the deadline if we try to read them all.

    Edit: I know the basics of JSP/servlets (I read Head First Servlets & JSP) but have only done one project so far (a semi-decent hangman game with JSP/servlets and JPA for persistence); that's about it. Flex - I'm just starting; I know the very basics of MXML and AS3. As for why: 1) we need to do a project for the university, and 2) I was thinking about becoming a web developer (yup, Java EE + Flex) after graduation, so this is the perfect opportunity to learn them.

    Read the article

  • Domain model for an online WYSIWYG webpage generator / runtime

    - by CharlieBrown
    Hi all, I'm using C#, MVC, NHibernate and StructureMap as my IoC container, and I need some ideas regarding my domain model. The application I'm working on has two parts: an authoring part and a runtime part. The idea is to let the user create a webpage (mostly a form, actually) in authoring by choosing from a set of predefined controls. That webpage will later be used as a form in a call-center environment (the runtime part), or perhaps in an intranet portal, etc. - basically something similar to what a CMS does. The difference is, of course, that the webpage/form the author generates will be used and filled in at runtime, and that authors should be able to create the webpage they want without limitations. I have a draft working model that allows a RunController to iterate over the Controls collection of the ScriptPage (my class for the "generated webpage") and uses partial views to render each of them. It works reasonably well. Basically I have a common ScriptControl class, and I can then create, for example, a TextInputControl or a DropDownControl by inheriting from that base class. I can also figure out the authoring part of the app, although that will surely be fun in itself. :) The biggest problem I have now is persistence. In order to stay flexible, I want to be able to add more controls, and template controls (think of an Address composite control), in separate DLLs, so I don't think a relational model that handles every possible control is the way to go. My current thinking is a kind of ObjectStore: binary-serializing the ScriptPage object that contains the List collection of controls and deserializing it at runtime, but I'm not sure how well that will work with NHibernate and what the performance will be. Serializing a small "page" with 10 controls results in 7964 bytes, for example. Any ideas out there? Thanks in advance, and excuse the length. ;)
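
    A minimal sketch of the ObjectStore idea, assuming ScriptPage and every control class are marked [Serializable]; the resulting byte[] could then be mapped by NHibernate as a single blob column (the helper class name is hypothetical):

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        public static class PageBlobSerializer
        {
            public static byte[] Serialize(ScriptPage page)
            {
                var formatter = new BinaryFormatter();
                using (var stream = new MemoryStream())
                {
                    // ScriptPage, its control list and every derived control
                    // must be [Serializable] for this to work.
                    formatter.Serialize(stream, page);
                    return stream.ToArray();
                }
            }

            public static ScriptPage Deserialize(byte[] blob)
            {
                var formatter = new BinaryFormatter();
                using (var stream = new MemoryStream(blob))
                {
                    return (ScriptPage)formatter.Deserialize(stream);
                }
            }
        }

    Controls shipped later in separate DLLs deserialize fine as long as their assemblies are loadable at runtime; type and version changes to already-persisted pages are the main risk to plan for.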

    Read the article

  • NHibernate Master-Detail and Detail Deletion.

    - by JMSA
    Role.cs:

        public class Role
        {
            public virtual string RoleName { get; set; }
            public virtual bool IsActive { get; set; }
            public virtual IList<Permission> PermissionItems { get; set; }
        }

    Role.hbm.xml:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="POCO" namespace="POCO">
          <class name="Role" table="Role">
            <id name="ID" column="ID">
              <generator class="native" />
            </id>
            <property name="RoleName" column="RoleName" />
            <property name="IsActive" column="IsActive" type="System.Boolean" />
            <bag name="PermissionItems" table="Permission" cascade="all" inverse="true">
              <key column="RoleID"/>
              <one-to-many class="Permission" />
            </bag>
          </class>
        </hibernate-mapping>

    Permission.cs:

        public class Permission
        {
            public virtual string MenuItemKey { get; set; }
            public virtual int RoleID { get; set; }
            public virtual Role Role { get; set; }
        }

    Permission.hbm.xml:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="POCO" namespace="POCO">
          <class name="Permission" table="Permission">
            <id name="ID" column="ID">
              <generator class="native"/>
            </id>
            <property name="MenuItemKey" column="MenuItemKey" />
            <property name="RoleID" column="RoleID" />
            <many-to-one name="Role" column="RoleID" not-null="true" cascade="all">
            </many-to-one>
          </class>
        </hibernate-mapping>

    Suppose I have saved some permissions in the database using this code:

        RoleRepository roleRep = new RoleRepository();
        Role role = new Role();
        role.Permissions = Permission.GetList();
        role.SaveOrUpdate();

    Now I need this code to delete all Permissions, since role.Permissions == null. Here is the code:

        RoleRepository roleRep = new RoleRepository();
        Role role = roleRep.Get(roleId);
        role.Permissions = null;
        role.SaveOrUpdate();

    But this is not happening. What is the correct way to cope with this situation? What/how should I change - the hbm files or the persistence code?
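
    For reference, the behaviour being asked about usually hinges on the cascade style. A hedged sketch, assuming the <bag> mapping is switched from cascade="all" to cascade="all-delete-orphan" and the mapped collection is cleared rather than replaced with null:

        // Assumes Role.hbm.xml's <bag name="PermissionItems"> uses cascade="all-delete-orphan".
        // Setting the collection reference to null does not orphan the children;
        // clearing the persistent collection instance does.
        RoleRepository roleRep = new RoleRepository();
        Role role = roleRep.Get(roleId);
        role.PermissionItems.Clear();
        role.SaveOrUpdate();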

    Read the article

  • Flex+JPA/Hibernate+BlazeDS+MySQL how to debug this monster?!

    - by Zenzen
    OK, so I'm making a "simple" web app using the technologies in the title. I recently found http://www.adobe.com/devnet/flex/articles/flex_hibernate.html, so I'm following it and trying to apply it to my app, the only difference being that I'm working on a Mac and using MAMP for the database (so no command line for me). The thing is, I'm having some trouble retrieving from/connecting to the database. I have the remoting-config.xml, persistence.xml, a News.java class (my entity), a NewsService.java class and a News.as class - all just like in the tutorial. I have, of course, this line in one of my MXML files:

        <mx:RemoteObject id="loaderService" destination="newsService"
            result="handleLoadResult(event)" fault="handleFault(event)"
            showBusyCursor="true" />

    And my remoting-config.xml looks like this (well, part of it):

        <destination id="newsService">
            <properties><source>com.gamelist.news.NewsService</source></properties>
        </destination>

    NewsService has a method:

        public List<News> getLatestNews() {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT);
            EntityManager em = emf.createEntityManager();
            Query findLatestQuery = em.createNamedQuery("news.findLatest");
            List<News> news = findLatestQuery.getResultList();
            return news;
        }

    And the named query is on the News class:

        @Entity
        @Table(name="GLT_NEWS")
        @NamedQueries({
            @NamedQuery(name="news.findLatest", query="from GLT_NEWS order by new_date_added limit 5 ")
        })

    The handleLoadResult function looks like this:

        private function handleLoadResult(ev:ResultEvent):void {
            newsList = ev.result as ArrayCollection;
            newsRecords = newsList.length;
        }

    where:

        [Bindable]
        private var newsList:ArrayCollection = new ArrayCollection();

    But when I trigger loaderService.getLatestNews(), nothing happens; newsList is empty. A few things I need to point out: 1) as I said, I didn't install MySQL manually but am using MAMP (yes, the server is running) - could this cause trouble? 2) I already have a "gladm" database with a "GLT_NEWS" table containing all the fields - is this bad? Basically the question is: how am I supposed to debug this thing so I can find the mistake I'm making? I know that loadData() is executed (I did a trace()), but I have no idea what happens with loaderService.getLatestNews()...

    EDIT: OK, so I see I'm getting an error in the fault handler which says "Error: Client.Error.MessageSend - Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Status 404: url: 'http://localhost:8080/WebContent/messagebroker/amf'".

    Read the article

  • Public class: The best way to store and access NSMutableDictionary?

    - by meridimus
    I have a class to help me store persistent data across sessions. The problem is that I want to keep a running copy of the property-list ("plist") file in an NSMutableDictionary for the lifetime of the Persistence class, so I can read and edit the values and write them back when I need to. The trouble is that, as the methods are publicly defined, I cannot seem to access the declared NSMutableDictionary without errors. The particular warning I get on compilation is:

        warning: 'Persistence' may not respond to '+saveData'

    This renders my whole approach unusable until I work out the problem. Here is my full Persistence class (please note, it's unfinished, so it's just to show the problem).

    Persistence.h:

        #import <UIKit/UIKit.h>

        #define kSaveFilename @"saveData.plist"

        @interface Persistence : NSObject {
            NSMutableDictionary *saveData;
        }

        @property (nonatomic, retain) NSMutableDictionary *saveData;

        + (NSString *)dataFilePath;
        + (NSDictionary *)getSaveWithCampaign:(NSUInteger)campaign andLevel:(NSUInteger)level;
        + (void)writeSaveWithCampaign:(NSUInteger)campaign andLevel:(NSUInteger)level withData:(NSDictionary *)saveData;
        + (NSString *)makeCampaign:(NSUInteger)campaign andLevelKey:(NSUInteger)level;

        @end

    Persistence.m:

        #import "Persistence.h"

        @implementation Persistence

        @synthesize saveData;

        + (NSString *)dataFilePath
        {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            return [documentsDirectory stringByAppendingPathComponent:kSaveFilename];
        }

        + (NSDictionary *)getSaveWithCampaign:(NSUInteger)campaign andLevel:(NSUInteger)level
        {
            NSString *filePath = [self dataFilePath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
                NSLog(@"File found");
                [[self saveData] setDictionary:[[NSDictionary alloc] initWithContentsOfFile:filePath]];
                // This is where the warning "'Persistence' may not respond to '+saveData'" occurs
                NSString *campaignAndLevelKey = [self makeCampaign:campaign andLevelKey:level];
                NSDictionary *campaignAndLevelData = [[self saveData] objectForKey:campaignAndLevelKey];
                return campaignAndLevelData;
            } else {
                return nil;
            }
        }

        + (void)writeSaveWithCampaign:(NSUInteger)campaign andLevel:(NSUInteger)level withData:(NSDictionary *)saveData
        {
            NSString *campaignAndLevelKey = [self makeCampaign:campaign andLevelKey:level];
            NSDictionary *saveDataWithKey = [[NSDictionary alloc] initWithObjectsAndKeys:saveData, campaignAndLevelKey, nil];
            //[campaignAndLevelKey release];
            [saveDataWithKey writeToFile:[self dataFilePath] atomically:YES];
        }

        + (NSString *)makeCampaign:(NSUInteger)campaign andLevelKey:(NSUInteger)level
        {
            return [[NSString stringWithFormat:@"%d - ", campaign+1] stringByAppendingString:[NSString stringWithFormat:@"%d", level+1]];
        }

        @end

    I call this class like any other, by including the header file at my desired location:

        #import "Persistence.h"

    Then I call the method itself like so:

        NSDictionary *tempSaveData = [[NSDictionary alloc] [Persistence getSaveWithCampaign:currentCampaign andLevel:currentLevel]];

    Read the article

  • [NHibernate and ASP.NET MVC] How can I implement a robust session-per-request pattern in my project,

    - by Guillaume Gervais
    I'm currently building an ASP.NET MVC project with NHibernate as its persistence layer. Some functionality has been implemented so far, but it only uses local NHibernate sessions: each method that accesses the database (read or write) instantiates its own NHibernate session inside a "using()" block. The problem is that I want to leverage NHibernate's lazy-loading capabilities to improve the performance of my project. This implies an NHibernate session that stays open for the whole request, until the view is rendered. Furthermore, simultaneous requests must be supported (multiple sessions at the same time). How can I achieve that as cleanly as possible? I searched the web a little and learned about the session-per-request pattern. Most of the implementations I saw use some sort of Http* object (HttpContext, etc.) to store the session. Also, using the Application_BeginRequest/Application_EndRequest functions is complicated, since they fire for every HTTP request (aspx files, css files, js files, etc.), when I only want to instantiate a session once per request. My concern is that I don't want my views or controllers to have access to NHibernate sessions (or, more generally, NHibernate namespaces and code). That means I do not want to handle sessions at the controller level, nor at the view level. I have a few options in mind - which one seems best?

    - Use interceptors (like in Grails) that get triggered before and after the controller action; these would open and close sessions/transactions. Is this possible in the ASP.NET MVC world?
    - Use the CurrentSessionContext singleton provided by NHibernate in a web context. Using this page as an example, I think this is quite promising, but it still requires filters at the controller level.
    - Use HttpContext.Current.Items to store the request's session (a sketch of this option follows below). This, coupled with a few lines of code in Global.asax.cs, can easily provide a session at the request level. However, it means introducing dependencies between NHibernate and HttpContext.

    Thank you very much!
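
    A minimal sketch of the third option, assuming an IHttpModule registered in web.config (module and key names here are hypothetical); repositories receive ISession from the IoC container, which resolves it through GetCurrentSession, so views and controllers never touch NHibernate:

        using System.Web;
        using NHibernate;

        public class NHibernateSessionModule : IHttpModule
        {
            private const string SessionKey = "nhibernate.current_session"; // hypothetical key

            public void Init(HttpApplication application)
            {
                // Dispose the session only if one was actually opened for this request,
                // so requests for css/js/static files pay nothing.
                application.EndRequest += (sender, e) =>
                {
                    var session = HttpContext.Current.Items[SessionKey] as ISession;
                    if (session != null)
                        session.Dispose();
                };
            }

            public void Dispose() { }

            // The IoC registration points at this: the session is created lazily,
            // at most once per request, and stored in HttpContext.Items.
            public static ISession GetCurrentSession(ISessionFactory sessionFactory)
            {
                var session = HttpContext.Current.Items[SessionKey] as ISession;
                if (session == null)
                {
                    session = sessionFactory.OpenSession();
                    HttpContext.Current.Items[SessionKey] = session;
                }
                return session;
            }
        }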

    Read the article

  • Object as itemValue in <f:selectItems>

    - by Ehsun
    Is it possible to have objects as the itemValue of the <f:selectItems> tag? For example, I have a class Foo:

        public class Foo {
            private int id;
            private String name;
            private Date date;
        }

    and another class Bar, plus a backing bean BarBean:

        public class Bar {
            private Foo foos;
        }

        public class BarBean {
            private Set<Foo> foos;
        }

    Now, in the BarBean bean, I need a select menu to get the Foo of the current Bar from the user, like this:

        <h:selectOneMenu value="#{barBean.bar.foo}" required="true">
            <f:selectItems value="#{barBean.foos}" var="foo" itemLabel="#{foo.name}" itemValue="#{foo}" />
        </h:selectOneMenu>

    Edited - my converter:

        package ir.khorasancustoms.g2g.converters;

        import ir.khorasancustoms.g2g.persistance.CatalogValue;

        import java.util.ResourceBundle;

        import javax.faces.application.FacesMessage;
        import javax.faces.component.UIComponent;
        import javax.faces.context.FacesContext;
        import javax.faces.convert.Converter;
        import javax.faces.convert.FacesConverter;

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;
        import org.hibernate.cfg.Configuration;

        @FacesConverter("ir.khorasancustoms.CatalogValueConverter")
        public class CatalogValueConverter implements Converter {

            @Override
            public Object getAsObject(FacesContext context, UIComponent component, String value) {
                SessionFactory factory = new Configuration().configure().buildSessionFactory();
                Session session = factory.openSession();
                try {
                    int id = Integer.parseInt(value);
                    CatalogValue catalogValue = (CatalogValue) session.load(CatalogValue.class, id);
                    return catalogValue;
                } catch (Exception ex) {
                    Transaction tx = session.getTransaction();
                    if (tx.isActive()) {
                        tx.rollback();
                    }
                    ResourceBundle rb = ResourceBundle.getBundle("application");
                    String message = rb.getString("databaseConnectionFailed");
                    FacesContext.getCurrentInstance().addMessage(null,
                        new FacesMessage(FacesMessage.SEVERITY_FATAL, message, message));
                } finally {
                    session.close();
                }
                return null;
            }

            @Override
            public String getAsString(FacesContext context, UIComponent component, Object value) {
                return ((CatalogValue) value).getId() + "";
            }
        }

    and my facelet:

        <h:outputText value="#{lbls.paymentUnit}:"/>
        <h:selectOneMenu id="paymentUnit" label="#{lbls.paymentUnit}"
                         value="#{price.price.ctvUnit}" required="true">
            <f:selectItems value="#{price.paymentUnits}"/>
            <f:converter converterId="ir.khorasancustoms.CatalogValueConverter"/>
        </h:selectOneMenu>
        <h:message for="paymentUnit" infoClass="info" errorClass="error"
                   warnClass="warning" fatalClass="fatal"/>

    Read the article

  • multiple-inheritance substitution

    - by Luigi
    I want to write a (framework-specific) module that wraps and extends the Facebook PHP SDK (https://github.com/facebook/php-sdk/). My problem is how to organize the classes in a nice way. Getting into the details, the Facebook PHP SDK consists of two classes:

    - BaseFacebook - an abstract class with all the stuff the SDK does
    - Facebook - extends BaseFacebook and implements the parent's abstract persistence-related methods with default session usage

    Now I have some functionality to add:

    - a Facebook class substitute, integrated with the framework session class
    - shorthand methods that run the API calls I use most (through BaseFacebook::api())
    - authorization methods, so I don't have to rewrite this logic every time
    - configuration, pulled from framework classes instead of passed as parameters
    - caching, integrated with the framework cache module

    I know something has gone very wrong, because I have too much inheritance and it doesn't look very normal. Wrapping everything in one "complex extension" class also seems too much. I think I should have a few classes working together - but then I get into problems like this: if the cache class doesn't extend and override BaseFacebook::api(), the shorthand and authentication classes won't be able to use the caching. Maybe some kind of pattern would be right here? How would you organize these classes and their dependencies?

    EDIT 04.07.2012 - bits of code related to the topic. This is the base class of the Facebook PHP SDK:

        abstract class BaseFacebook
        {
            // ... some methods

            public function api(/* polymorphic */) {
                // ... method that makes api calls
            }

            public function getUser() {
                // ... tries to get user id from session
            }

            // ... other methods

            abstract protected function setPersistentData($key, $value);
            abstract protected function getPersistentData($key, $default = false);
            // ... few more abstract methods
        }

    Normally the Facebook class extends it and implements those abstract methods. I replaced it with my substitute, a Facebook_Session class:

        class Facebook_Session extends BaseFacebook
        {
            protected function setPersistentData($key, $value) {
                // ... method body
            }

            protected function getPersistentData($key, $default = false) {
                // ... method body
            }

            // ... implementation of other abstract functions from BaseFacebook
        }

    OK, then I extend this further with shorthand methods and configuration variables:

        class Facebook_Custom extends Facebook_Session
        {
            public function __construct() {
                // ... call parent's constructor with parameters from framework config
            }

            public function api_batch() {
                // ... a wrapper for parent's api() method
                return $this->api('/?batch=' . json_encode($calls), 'POST');
            }

            public function redirect_to_auth_dialog() {
                // method body
            }

            // ... more methods like this, for common queries / authorization
        }

    I'm not sure this isn't already too much for a single class (authorization / shorthand methods / configuration). Then comes another extending layer - the cache:

        class Facebook_Cache extends Facebook_Custom
        {
            public function api() {
                $cache_file_identifier = $this->getUser();
                if (/* $cache_file_identifier is not null and we found a valid file with a cached query result */) {
                    // return the result
                } else {
                    try {
                        // call Facebook_Custom::api, cache and return the result
                    } catch (FacebookApiException $e) {
                        // if the Access Token is expired, force refreshing it
                        parent::redirect_to_auth_dialog();
                    }
                }
            }

            // .. some other stuff related to caching
        }

    Now this pretty much works: a new instance of Facebook_Cache gives me all the functionality, and the shorthand methods from Facebook_Custom use caching because Facebook_Cache overrides api(). But here is what is bothering me:

    - I think it's too much inheritance.
    - It's all very tightly coupled - for instance, I had to specify Facebook_Custom::api instead of parent::api to avoid an api() method loop when extending with Facebook_Cache.
    - Overall mess and ugliness.

    So again, this works, but I'm asking about patterns / ways of doing this in a cleaner and smarter way.

    Read the article

  • Why can't a Java servlet send out an object?

    - by Frank
    I use the following method to send an object out from a servlet:

        public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
            String Full_URL = request.getRequestURL().append("?" + request.getQueryString()).toString();
            String Contact_Id = request.getParameter("Contact_Id");
            String Time_Stamp = Get_Date_Format(6),
                   query = "select from " + Contact_Info_Entry.class.getName()
                         + " where Contact_Id == '" + Contact_Id + "' order by Contact_Id desc";
            PersistenceManager pm = null;
            try {
                pm = PMF.get().getPersistenceManager();
                // note that this returns a list; there could be multiple, DataStore does not ensure uniqueness for non-primary key fields
                List<Contact_Info_Entry> results = (List<Contact_Info_Entry>) pm.newQuery(query).execute();
                Write_Serialized_XML(response.getOutputStream(), results.get(0));
            } catch (Exception e) {
                Send_Email(Email_From, Email_To, "Check_License_Servlet Error [ " + Time_Stamp + " ]",
                           new Text(e.toString() + "\n" + Get_Stack_Trace(e)), null);
            } finally {
                pm.close();
            }
        }

        /** Writes the object and CLOSES the stream. Uses the persistence delegate registered in this class.
         * @param os The stream to write to.
         * @param o The object to be serialized. */
        public static void writeXMLObject(OutputStream os, Object o) {
            // Classloader reference must be set since NetBeans uses another class loader to load the bean, which will fail in some circumstances.
            ClassLoader oldClassLoader = Thread.currentThread().getContextClassLoader();
            Thread.currentThread().setContextClassLoader(Check_License_Servlet.class.getClassLoader());
            XMLEncoder encoder = new XMLEncoder(os);
            encoder.setExceptionListener(new ExceptionListener() {
                public void exceptionThrown(Exception e) {
                    e.printStackTrace();
                }
            });
            encoder.writeObject(o);
            encoder.flush();
            encoder.close();
            Thread.currentThread().setContextClassLoader(oldClassLoader);
        }

        private static ByteArrayOutputStream writeOutputStream = new ByteArrayOutputStream(16384);

        /** Writes an object to XML.
         * @param out The object out to write to. [ Will not be closed. ]
         * @param o The object to write. */
        public static synchronized void writeAsXML(ObjectOutput out, Object o) throws IOException {
            writeOutputStream.reset();
            writeXMLObject(writeOutputStream, o);
            byte[] Bt_1 = writeOutputStream.toByteArray();
            byte[] Bt_2 = new Des_Encrypter().encrypt(Bt_1, Key);
            out.writeInt(Bt_2.length);
            out.write(Bt_2);
            out.flush();
            out.close();
        }

        public static synchronized void Write_Serialized_XML(OutputStream Output_Stream, Object o) throws IOException {
            writeAsXML(new ObjectOutputStream(Output_Stream), o);
        }

    At the receiving end the code looks like this:

        File_Url = "http://" + Site_Url + App_Dir + File_Name;
        try {
            Contact_Info_Entry Online_Contact_Entry = (Contact_Info_Entry) Read_Serialized_XML(new URL(File_Url));
        } catch (Exception e) {
            e.printStackTrace();
        }

        private static byte[] readBuf = new byte[16384];

        public static synchronized Object readAsXML(ObjectInput in) throws IOException {
            // Classloader reference must be set since NetBeans uses another class loader to load the bean, which will fail under some circumstances.
            ClassLoader oldClassLoader = Thread.currentThread().getContextClassLoader();
            Thread.currentThread().setContextClassLoader(Tool_Lib_Simple.class.getClassLoader());
            int length = in.readInt();
            readBuf = new byte[length];
            in.readFully(readBuf, 0, length);
            byte Bt[] = new Des_Encrypter().decrypt(readBuf, Key);
            XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(Bt, 0, Bt.length));
            Object o = dec.readObject();
            Thread.currentThread().setContextClassLoader(oldClassLoader);
            in.close();
            return o;
        }

        public static synchronized Object Read_Serialized_XML(URL File_Url) throws IOException {
            return readAsXML(new ObjectInputStream(File_Url.openStream()));
        }

    But I can't get the object in the Java app at the receiving end. Why? The error messages look like this:

        java.lang.ClassNotFoundException: PayPal_Monitor.Contact_Info_Entry
        Continuing ...
        java.lang.NullPointerException: target should not be null
        Continuing ...
        java.lang.NullPointerException: target should not be null
        Continuing ...
        java.lang.NullPointerException: target should not be null
        Continuing ...

    Read the article

  • Deleting unreferenced child records with NHibernate

    - by Chev
    Hi there. I am working on an MVC app using NHibernate as the ORM (NCommon framework). I have parent/child entities: Product, Vendor and ProductVendors, with a one-to-many relationship between them (Product has a Product.ProductVendors collection). I currently retrieve a Product, eager-load the children and send them down the wire to my ASP.NET MVC client. A user then modifies the list of vendors and posts the updated Product back, and I use a custom model binder to generate the modified Product entity. Updating the Product and inserting new ProductVendors works fine. My problem is that dereferenced ProductVendors are not cascade-deleted when I call Product.ProductVendors.Clear() and then _productRepository.Save(product). The problem seems to be with attaching the detached instance. Here are my mapping files:

    Product:

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id">
          <generator class="guid.comb" />
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property name="Name" type="String" length="250" />

    ProductVendors:

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id">
          <generator class="guid.comb" />
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property name="Price" />
        <many-to-one name="Product" class="Product" column="ProductId" lazy="false" not-null="true" />
        <many-to-one name="Vendor" class="Vendor" column="VendorId" lazy="false" not-null="true" />

    Custom model binder:

        using System;
        using Test.Web.Mvc;
        using Test.Domain;

        namespace Spoked.MVC
        {
            public class ProductUpdateModelBinder : DefaultModelBinder
            {
                private readonly ProductSystem ProductSystem;

                public ProductUpdateModelBinder(ProductSystem productSystem)
                {
                    ProductSystem = productSystem;
                }

                protected override void OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
                {
                    var product = bindingContext.Model as Product;
                    if (product != null)
                    {
                        product.Category = ProductSystem.GetCategory(new Guid(bindingContext.ValueProvider["Category"].AttemptedValue));
                        product.Brand = ProductSystem.GetBrand(new Guid(bindingContext.ValueProvider["Brand"].AttemptedValue));
                        product.ProductVendors.Clear();
                        if (bindingContext.ValueProvider["ProductVendors"] != null)
                        {
                            string[] productVendorIds = bindingContext.ValueProvider["ProductVendors"].AttemptedValue.Split(',');
                            foreach (string id in productVendorIds)
                            {
                                product.AddProductVendor(ProductSystem.GetVendor(new Guid(id)), 90m);
                            }
                        }
                    }
                }
            }
        }

    Controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Update(Product product)
        {
            using (var scope = new UnitOfWorkScope())
            {
                //product.ProductVendors.Clear();
                _productRepository.Save(product);
                scope.Commit();
            }

            using (new UnitOfWorkScope())
            {
                IList<Vendor> availableVendors = _productSystem.GetAvailableVendors(product);
                productDetailEditViewModel = new ProductDetailEditViewModel(product,
                    _categoryRepository.Select(x => x).ToList(),
                    _brandRepository.Select(x => x).ToList(),
                    availableVendors);
            }

            return RedirectToAction("Edit", "Products", new { id = product.Id.ToString() });
        }

    The following test does pass, though:

        [Test]
        [NUnit.Framework.Category("ProductTests")]
        public void Can_Delete_Product_Vendors_By_Dereferencing()
        {
            Product product;

            using (UnitOfWorkScope scope = new UnitOfWorkScope())
            {
                Console.Out.WriteLine("Selecting...");
                product = _productRepository.First();
                Console.Out.WriteLine("Adding Product Vendor...");
                product.AddProductVendor(_vendorRepository.First(), 0m);
                scope.Commit();
            }

            Console.Out.WriteLine("About to delete Product Vendors...");

            using (UnitOfWorkScope scope = new UnitOfWorkScope())
            {
                Console.Out.WriteLine("Clearing Product Vendor...");
                _productRepository.Save(product); // seems to be needed to attach entity to the persistence manager
                product.ProductVendors.Clear();
                scope.Commit();
            }
        }

    Going nuts here, as I almost have a very nice solution between MVC, custom model binders and NHibernate. I'm just not seeing my deletes cascaded. Any help greatly appreciated. Chev

    Read the article
