Search Results

Search found 64715 results on 2589 pages for 'spring data rest'.

Page 88 of 2589

  • How to get the set of beans that are to be created in Spring?

    - by cyborg
    So here's the scenario: I have a Spring XML configuration with some lazy beans, some non-lazy beans and some beans that depend on other beans. Eventually Spring will resolve all this so that only the beans that are meant to be created are created. The question: how can I programmatically tell what this set is? When I use context.getBean(name), that initializes the bean. BeanDefinition.isLazyInit() will only tell me how I defined the bean. Any other ideas? ETA: In DefaultListableBeanFactory: public void preInstantiateSingletons() throws BeansException { if (this.logger.isInfoEnabled()) { this.logger.info("Pre-instantiating singletons in " + this); } synchronized (this.beanDefinitionMap) { for (Iterator it = this.beanDefinitionNames.iterator(); it.hasNext();) { String beanName = (String) it.next(); RootBeanDefinition bd = getMergedLocalBeanDefinition(beanName); if (!bd.isAbstract() && bd.isSingleton() && !bd.isLazyInit()) { if (isFactoryBean(beanName)) { FactoryBean factory = (FactoryBean) getBean(FACTORY_BEAN_PREFIX + beanName); if (factory instanceof SmartFactoryBean && ((SmartFactoryBean) factory).isEagerInit()) { getBean(beanName); } } else { getBean(beanName); } } } } } This is where the set of eagerly instantiated beans is initialized. While initializing this set, any beans referenced by members of the set will also be created, even if they are not in the set themselves. From looking through the source it does not look like there's going to be any easy way to answer my question.
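
    A minimal sketch of one way to approximate the eager set without triggering instantiation, applying the same abstract/singleton/lazy-init test as preInstantiateSingletons() to the bean definitions (the FactoryBean isEagerInit check is left out because it would require creating the factory; class and method names are from the standard Spring BeanFactory API):

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;

public class EagerBeanLister {

    // Names of the beans that preInstantiateSingletons() would create eagerly:
    // non-abstract, singleton-scoped definitions that are not marked lazy-init.
    public static List<String> eagerSingletonNames(ConfigurableListableBeanFactory factory) {
        List<String> names = new ArrayList<String>();
        for (String name : factory.getBeanDefinitionNames()) {
            BeanDefinition bd = factory.getBeanDefinition(name);
            if (!bd.isAbstract() && bd.isSingleton() && !bd.isLazyInit()) {
                names.add(name);
            }
        }
        return names;
    }
}
```

    This only yields the root set; as the question observes, beans referenced by these definitions (through property values or constructor arguments) get created too, so the full set would require walking those references as well. To inspect the definitions before anything is instantiated, they can be loaded into a plain DefaultListableBeanFactory with an XmlBeanDefinitionReader rather than a fully refreshed application context.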

    Read the article

  • Can I add a Spring MVC filter using Jetty with a jar file?

    - by Juan Manuel
    I have a simple web application disguised as a Java application (as in, it's a .jar instead of a .war), and I'd like to use a filter for my requests. If it were a .war, I could initialize it with a WebAppContext and specify a web.xml file where I'd have my filter declaration like this: <filter> <filter-name>myFilter</filter-name> <filter-class>MyFilterClass</filter-class> </filter> <filter-mapping> <filter-name>myFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> However, I'm using a simple Context to initialize my application with Spring. Server server = new Server(8082); Context root = new Context(server, "/", Context.SESSIONS); DispatcherServlet dispatcherServlet = new DispatcherServlet(); dispatcherServlet.setContextConfigLocation("classpath:application-context.xml"); root.addServlet(new ServletHolder(dispatcherServlet), "/*"); server.start(); Is there a way to programmatically specify filters for the Spring servlet, without using a web.xml file?
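
    A minimal sketch of registering the filter programmatically on the same Jetty Context the question uses; MyFilterClass is the filter from the web.xml snippet (a javax.servlet.Filter), and the FilterHolder constructor and Handler.ALL dispatch constant are assumed from the Jetty 6 org.mortbay API shown here, so they may differ in other Jetty versions:

```java
import org.mortbay.jetty.Handler;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.servlet.Context;
import org.mortbay.jetty.servlet.FilterHolder;
import org.mortbay.jetty.servlet.ServletHolder;
import org.springframework.web.servlet.DispatcherServlet;

public class EmbeddedServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8082);
        Context root = new Context(server, "/", Context.SESSIONS);

        // Programmatic equivalent of the <filter>/<filter-mapping> entries in web.xml.
        root.addFilter(new FilterHolder(new MyFilterClass()), "/*", Handler.ALL);

        DispatcherServlet dispatcherServlet = new DispatcherServlet();
        dispatcherServlet.setContextConfigLocation("classpath:application-context.xml");
        root.addServlet(new ServletHolder(dispatcherServlet), "/*");

        server.start();
    }
}
```

    If the filter should itself be a Spring-managed bean, org.springframework.web.filter.DelegatingFilterProxy can be registered the same way and will delegate to a filter bean defined in the Spring context.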

    Read the article

  • Spring Security or the BCrypt algorithm: which one is good for an accounts-like project?

    - by Ranjith Kumar Nethaji
    I am using Spring Security for hashing my password. Is it safe? I am using Spring Security for the first time. My code is here: <security:http auto-config="true"> <security:intercept-url pattern="/welcome*" access="ROLE_USER" /> <security:form-login login-page="/login" default-target-url="/welcome" authentication-failure-url="/loginfailed" /> <security:logout logout-success-url="/logout" /> </security:http> <authentication-manager> <authentication-provider> <password-encoder hash="sha" /> <user-service> <user name="k" password="7c4a8d09ca3762af61e59520943dc26494f8941b" authorities="ROLE_USER" /> </user-service> </authentication-provider> </authentication-manager> I haven't used the BCrypt algorithm. What is your feedback on both? Any recommendations?
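
    For comparison, a minimal sketch of BCrypt hashing with Spring Security's BCryptPasswordEncoder (available from Spring Security 3.1 onwards); unlike a plain SHA-1 hash, it generates a random salt per password and is deliberately slow to brute-force:

```java
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

public class BCryptDemo {
    public static void main(String[] args) {
        BCryptPasswordEncoder encoder = new BCryptPasswordEncoder();

        // Hash the raw password; a random salt is generated and embedded in the result.
        String hash = encoder.encode("secret");

        // Verify a login attempt against the stored hash.
        boolean ok = encoder.matches("secret", hash);
        System.out.println(hash + " matches: " + ok);
    }
}
```

    The <password-encoder> element in the configuration above can then point at such an encoder bean through its ref attribute instead of hash="sha".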

    Read the article

  • Speaking at Microsoft's Dutch DevDays

    - by gsusx
    Last week I had the pleasure of presenting two sessions at Microsoft's Dutch DevDays in The Hague. On Tuesday I presented a session about how to implement real-world RESTful service patterns using WCF, WCF Data Services and ASP.NET MVC 2. During that session I showed a total of 15 small demos that highlighted how to implement key aspects of RESTful solutions such as security, low-REST clients, URI modeling, validation, error handling, etc. As part of those demos I used the OAuth implementation created...(read more)

    Read the article

  • DonXml does WCF in NYC

    - by gsusx
    Tomorrow is WCF day in New York City!!!!! My good friend and Tellago's CTO Don Demsak will be doing a session on WCF Data and RIA Services at the WCF fire-starter event to be hosted at the Microsoft offices in New York City. Don has an encyclopedic knowledge of both technologies and will be sharing lots of best practices learned from applying these technologies in large service-oriented environments. In addition to Don, my crazy Cuban friend Miguel Castro will also be presenting three sessions at the...(read more)

    Read the article

  • Data Structures for Logic Games / Deduction Rules / Sufficient Set of Clues?

    - by taserian
    I've been cogitating about developing a logic game similar to Einstein's Puzzle, which would have different sets of clues for every new game replay. What data structures would you use to handle the different entities (pets, colors of houses, nationalities, etc.), deduction rules, etc. to guarantee that the clues you provide point to a unique solution? I'm having a hard time thinking about how to get the deduction rules to play along with the possible clues; any insight would be appreciated.
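
    A minimal sketch of one common representation for this kind of puzzle (all names are illustrative, not from the question): a boolean possibility grid per attribute category, where clues eliminate combinations and the clue set is sufficient once every house has exactly one candidate left:

```java
import java.util.Arrays;

public class PossibilityGrid {
    // possible[house][value] == true means "value is still a candidate for this house"
    // for one attribute category, e.g. pet or nationality in an Einstein-style puzzle.
    private final boolean[][] possible;

    public PossibilityGrid(int houses, int values) {
        possible = new boolean[houses][values];
        for (boolean[] row : possible) {
            Arrays.fill(row, true);
        }
    }

    // A "negative" clue such as "the Norwegian does not live in house 3".
    public void eliminate(int house, int value) {
        possible[house][value] = false;
    }

    // The clue set points to a unique solution for this attribute
    // only if every house has exactly one candidate value remaining.
    public boolean isUniquelyDetermined() {
        for (boolean[] row : possible) {
            int candidates = 0;
            for (boolean b : row) {
                if (b) candidates++;
            }
            if (candidates != 1) return false;
        }
        return true;
    }
}
```

    A clue generator can then keep drawing random candidate clues, applying each one to the grids, until isUniquelyDetermined() holds for every attribute, which guarantees the published clue set has exactly one solution.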

    Read the article

  • How do we keep dependent data structures up to date?

    - by Geo
    Suppose you have a parse tree, an abstract syntax tree, and a control flow graph, each one logically derived from the one before. In principle it is easy to construct each graph given the parse tree, but how can we manage the complexity of updating the graphs when the parse tree is modified? We know exactly how the tree has been modified, but how can the change be propagated to the other trees in a way that doesn't become difficult to manage? Naturally the dependent graph can be updated by simply reconstructing it from scratch every time the first graph changes, but then there would be no way of knowing the details of the changes in the dependent graph. I currently have four ways to attempt to solve this problem, but each one has difficulties.

    1. Nodes of the dependent tree each observe the relevant nodes of the original tree, updating themselves and the observer lists of original tree nodes as necessary. The conceptual complexity of this can become daunting.
    2. Each node of the original tree has a list of the dependent tree nodes that specifically depend upon it, and when the node changes it sets a flag on the dependent nodes to mark them as dirty, including the parents of the dependent nodes all the way down to the root. After each change we run an algorithm that is much like the algorithm for constructing the dependent graph from scratch, but it skips over any clean node and reconstructs each dirty node, keeping track of whether the reconstructed node is actually different from the dirty node. This can also get tricky.
    3. We can represent the logical connection between the original graph and the dependent graph as a data structure, like a list of constraints, perhaps designed using a declarative language. When the original graph changes we need only scan the list to discover which constraints are violated and how the dependent tree needs to change to correct the violation, all encoded as data.
    4. We can reconstruct the dependent graph from scratch as though there were no existing dependent graph, and then compare the existing graph and the new graph to discover how it has changed. I'm sure this is the easiest way because I know there are algorithms available for detecting differences, but they are all quite computationally expensive and in principle it seems unnecessary, so I'm deliberately avoiding this option.

    What is the right way to deal with these sorts of problems? Surely there must be a design pattern that makes this whole thing almost easy. It would be nice to have a good solution for every problem of this general description. Does this class of problem have a name?
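
    A minimal sketch of the second approach listed above (dirty flags pushed to the root of the dependent structure, then a selective rebuild); the class and method names are illustrative, not from any particular framework:

```java
import java.util.ArrayList;
import java.util.List;

// A node in the dependent graph, e.g. an AST node derived from parse-tree nodes.
class DependentNode {
    DependentNode parent;
    List<DependentNode> children = new ArrayList<>();
    boolean dirty;

    // Called by the original (parse-tree) node when it changes:
    // mark this node and everything up to the root as dirty.
    void markDirty() {
        for (DependentNode n = this; n != null && !n.dirty; n = n.parent) {
            n.dirty = true;
        }
    }

    // Selective rebuild: clean subtrees are skipped entirely,
    // dirty nodes are reconstructed from their source nodes.
    void rebuildIfDirty() {
        if (!dirty) {
            return;
        }
        reconstructFromSource();
        for (DependentNode child : children) {
            child.rebuildIfDirty();
        }
        dirty = false;
    }

    void reconstructFromSource() {
        // Re-derive this node from the corresponding original-tree nodes,
        // recording whether the result actually differs from the old value.
    }
}
```

    The early exit in markDirty relies on the invariant that a dirty node's ancestors are already dirty, so propagation stops as soon as it meets a node that has been marked before.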

    Read the article

  • Attachment handling for web application with Jackrabbit

    - by Andrea Girardi
    I need to manage attachments in my Spring web application and I thought of using an open source repository. My app is a job approval system using the J2EE / Spring 3 Framework and a Postgres DB, allowing users to track a job right through every step of the approval process. It is a fully managed, collaborative system that operates from a central server and is accessed by a standard internet browser. A user should be able to add an attachment to a request or an approval step, so I thought of using Jackrabbit with a Postgres database persistence manager. I took a look at this post: http://onjava.com/pub/a/onjava/2006/10/04/what-is-java-content-repository.html?page=1 It's really interesting, but I have some questions about this kind of solution: standalone Jackrabbit uses an embedded Derby database for persistence; is that enough for professional use of the repository with more than 50 requests/day (with attachments)? Is there a reason I should use another database manager for persistence instead of the default one?
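
    A minimal sketch of storing one attachment through the standard JCR 2.0 API that Jackrabbit implements (the /attachments path, folder layout and the way the Session is obtained are assumptions; nt:file, nt:resource, jcr:content, jcr:data and jcr:mimeType are the standard JCR names):

```java
import java.io.InputStream;

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class AttachmentStore {

    // Stores one uploaded file under /attachments/<requestId>/<fileName>.
    public void saveAttachment(Session session, String requestId, String fileName,
                               String mimeType, InputStream data) throws RepositoryException {
        Node root = session.getRootNode();
        Node attachments = root.hasNode("attachments")
                ? root.getNode("attachments")
                : root.addNode("attachments", "nt:folder");
        Node request = attachments.hasNode(requestId)
                ? attachments.getNode(requestId)
                : attachments.addNode(requestId, "nt:folder");

        Node file = request.addNode(fileName, "nt:file");
        Node content = file.addNode("jcr:content", "nt:resource");
        content.setProperty("jcr:data", session.getValueFactory().createBinary(data));
        content.setProperty("jcr:mimeType", mimeType);

        session.save(); // persists through whichever persistence manager is configured
    }
}
```

    On the persistence question: at roughly 50 requests a day either the embedded Derby default or a PostgreSQL-backed persistence manager should cope; the usual reason to switch is operational (shared backups, existing database tooling) rather than raw throughput.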

    Read the article

  • Building a website without (JSP/FreeMarker) templates [on hold]

    - by Ismail Marmoush
    My web app is supposed to work in one page, something like asana.com, and I want to make the whole website free of templates, meaning I would serve data and have JS/mobile apps call it, or even let other developers create new interfaces for it. So is it acceptable to have such a design for this kind of problem? Or do you think I would eventually have to use JSPs/FreeMarker for certain cases? I found something once I started asking the right questions; here it is on the wiki: Single Page Application
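
    That design (the server exposes only data endpoints and JS or mobile clients render them) works without JSP/FreeMarker. A minimal sketch of a JSON endpoint with Spring MVC, assuming Jackson is on the classpath and <mvc:annotation-driven/> (or the equivalent) is configured; the class, field and path names are made up:

```java
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class TaskApiController {

    public static class Task {
        public long id;
        public String title;
    }

    // Returns JSON (via Jackson) instead of rendering a server-side template;
    // a JS single-page app or a mobile client consumes this directly.
    @RequestMapping("/api/tasks/{id}")
    @ResponseBody
    public Task getTask(@PathVariable("id") long id) {
        Task t = new Task();
        t.id = id;
        t.title = "example task";
        return t;
    }
}
```

    The only server-rendered artifact is then the single HTML shell that bootstraps the JavaScript client, so templates are reduced to one static page or dropped entirely.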

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and, subsequently, a world format. If I were to handle the game "world" being created just as a layered set of structures, either in top or side views, it would be fairly simple to do most things. But, since this editor is meant for 3rd parties, I have no clue how big a world one will want to make, and I need to keep in mind that eventually it will become simply too much to check, handling and comparing stuff that is happening completely away from the player position. I know the solution for this is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But, while I know theoretically what to do, I really have a few questions I'd hoped to get answered for some hints about the topic.

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks with equal sizes, or would you let the user subdivide areas by taste with irregularly sized rectangles?
    2. In the case of even grids, would you use some kind of block/chunk neighbouring system to check when the player crosses the limit, or just put all those in a simple array?
    3. Given that a region is a different data structure than its owner "game world", when streaming a region, would you deliver the objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?
    4. Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node hold the layers as children, and replicate in each layer node a node per region? Or the opposite: the parent node owns all the possible regions, and each region has multiple layers as children? Or would you just put the region logic outside the graph completely (compatible with the first suggestion in Q3)?
    5. When I say virtually infinite worlds, I mean it of course under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it's sane to think beyond that? Because I think it's okay to stick to this limit, since it will never be reached so easily.
    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload. The problem here is, I will need some kind of warps/teleports built in for my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load into memory any region that can be teleported to by a warp within a radius from the player?

    Sorry for the huge question, any answers are helpful!
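
    A minimal sketch of the even-grid option with watcher-driven streaming (chunk size, load radius and all names are illustrative assumptions; persisting unloaded chunks is only marked as a comment):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ChunkStreamer {
    static final int CHUNK_SIZE = 256; // world units per chunk (arbitrary choice)
    static final int LOAD_RADIUS = 2;  // chunks kept loaded around each watcher

    // Loaded chunks keyed by their integer grid coordinates packed into one long.
    private final Map<Long, Chunk> loaded = new HashMap<>();

    static long key(int cx, int cy) {
        return ((long) cx << 32) | (cy & 0xffffffffL);
    }

    // Called on watcher movement: load chunks near watchers, unload the rest.
    public void update(Iterable<float[]> watcherPositions) {
        Set<Long> wanted = new HashSet<>();
        for (float[] pos : watcherPositions) {
            int cx = (int) Math.floor(pos[0] / CHUNK_SIZE);
            int cy = (int) Math.floor(pos[1] / CHUNK_SIZE);
            for (int dx = -LOAD_RADIUS; dx <= LOAD_RADIUS; dx++) {
                for (int dy = -LOAD_RADIUS; dy <= LOAD_RADIUS; dy++) {
                    wanted.add(key(cx + dx, cy + dy));
                }
            }
        }
        for (Long k : wanted) {
            loaded.computeIfAbsent(k, kk -> Chunk.load(kk));
        }
        // Chunks no watcher needs any more should be persisted here, then dropped.
        loaded.keySet().removeIf(k -> !wanted.contains(k));
    }

    static class Chunk {
        static Chunk load(long key) {
            return new Chunk(); // read this region's layers and objects from storage
        }
    }
}
```

    Because the wanted set is recomputed from watcher positions on every update, a teleport simply moves the watcher and the destination chunks come in on the next pass; for a smoother arrival, the warp target can be pre-warmed by temporarily adding a second watcher there before the jump.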

    Read the article

  • How to survive if you can only do things your way as a programmer?

    - by niceguyjava
    I hate Hibernate, I hate Spring and I am the kind of programmer who likes to do things his way. I hate micro-management and other people making decisions for me about what framework I should use, what patterns I should apply (hate patterns too) and what architecture I should design. I consider myself a successful programmer and have a decent financial situation due to my performance in past jobs, but I just can't take the standard Java jobs out there. I really love to design things from scratch and hate when I have to maintain other people's bad code, design and architecture, which is the majority of what you find out there for sure. Does anybody relate to that? What do you guys recommend? Open my own company, do consulting, or just keep looking hard until I find a job that suits my preferences, as hard as that may be with all the Hibernate and Spring crap out there?

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected]. WebLogic Partner Community: for regular information become a member of the WebLogic Partner Community, please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Why do static data members have to be defined outside the class separately in C++ (unlike Java)?

    - by iammilind
    class A { static int foo () {} // ok static int x; // <--- needed to be defined separately in .cpp file }; I don't see a need for having A::x defined separately in a .cpp file (or the same file for templates). Why can't A::x be declared and defined at the same time? Has it been forbidden for historical reasons? My main question is, would it affect any functionality if static data members were declared and defined at the same time (as in Java)?

    Read the article

  • Propel-load-data is causing an error

    - by Jon Winstanley
    I am trying to load fixtures but my project is erroring at the CLI and starting the indexer process. I have tried:

    - Rebuilding the schema and model
    - Emptying the database and starting again
    - Clearing the cache
    - Validating the YML file and trying much simpler data-dumps

    My platform is Symfony 1.0 on Windows. Someone also seems to have had the same issue in the past. C:\web\my_project>symfony propel-load-data backend >> propel load data from "C:\web\my_project\data\fixtures" PHP Warning: session_start(): Cannot send session cookie - headers already sent by (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77 Warning: session_start(): Cannot send session cookie - headers already sent by (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77 PHP Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77 Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77

    Read the article

  • java.lang.NoClassDefFoundError: org/springframework/transaction/interceptor/TransactionInterceptor

    - by user1137146
    I am trying to integrate spring 3.1.1 with hibernate 4.0. This is my dispatcher-servlet.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:context="http://www.springframework.org/schema/context" xmlns:jee="http://www.springframework.org/schema/jee" xmlns:lang="http://www.springframework.org/schema/lang" xmlns:p="http://www.springframework.org/schema/p" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:util="http://www.springframework.org/schema/util" xmlns:mvc="http://www.springframework.org/schema/mvc" xsi:schemaLocation="http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.1.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.1.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd"> <context:component-scan base-package="com.future.controllers" /> <context:annotation-config /> <context:component-scan base-package="com.future.services.menu" /> <context:component-scan base-package="com.future.dao" /> <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="com.mysql.jdbc.Driver" p:url="jdbc:mysql://localhost:3306/bar_visitor2" p:username="root" p:password=""/> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" /> <property name="prefix" value="/WEB-INF/views/" /> <property name="suffix" value=".jsp" /> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="configLocation"> <value>classpath:hibernate.cfg.xml</value> </property> </bean> <tx:annotation-driven /> <bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory" /> </bean> When I try to use @Transactional annotation I am getting an error java.lang.NoClassDefFoundError: org/springframework/transaction/interceptor/TransactionInterceptor. I checked my classpath and there is TransactionInterceptor.class. What am I doing wrong? Should I add something? Edit This is my lib folder:
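
    TransactionInterceptor is packaged in the spring-tx module, so the usual cause of this NoClassDefFoundError is that spring-tx (or the aopalliance jar its proxying relies on) is missing from the deployed lib folder at runtime even though the class is visible at compile time. If the project were Maven-based, a sketch of the dependencies (versions chosen to match the 3.1.1 release mentioned above, and otherwise an assumption) would look like this:

```xml
<!-- TransactionInterceptor lives in spring-tx; aopalliance provides the
     MethodInterceptor interface it implements. -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-tx</artifactId>
    <version>3.1.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>aopalliance</groupId>
    <artifactId>aopalliance</artifactId>
    <version>1.0</version>
</dependency>
```

    For a plain lib folder, the equivalent is making sure the spring-tx and aopalliance jars are actually copied into WEB-INF/lib of the deployed WAR, not just referenced in the IDE build path.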

    Read the article

  • How does LinqPad support WCF Data Services?

    - by user341127
    LinqPad supports WCF Data Services. If you assign a URL, such as http://services.odata.org/Northwind/Northwind.svc/, it will list all available data objects and you can query them. I guess LinqPad generates all available data classes at run time via Reflection.Emit. I am wondering if someone can show me how to do so. Or maybe someone has done it before. Any feedback is appreciated. Ying

    Read the article

  • Test data generators / quickest route to generating solid, non-repetitive, but not-real database sample data?

    - by Jamo
    I need to build a quick feasibility test / proof-of-concept of a remote database for a client, which will be populated with mostly typical Company and People data (names, addresses, etc.); 150K records or so. The sample databases mentioned here were helpful: http://stackoverflow.com/questions/57068/good-databases-with-sample-data ...but I'd like to be able to generate sample data like this easily on less-typical datasets as well. Anyone have any recommendations for off-the-shelf (or off-the-web) solutions?
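
    When nothing off-the-shelf fits a less-typical schema, a tiny generator is often the quickest route. A minimal sketch (field list, value pools and record count are placeholder assumptions) that uses a fixed seed so the data set is reproducible while still looking varied:

```java
import java.util.Random;

public class SampleDataGenerator {
    private static final String[] FIRST = {"Alice", "Bob", "Carla", "Deepak", "Elena"};
    private static final String[] LAST = {"Ng", "Smith", "Okafor", "Larsen", "Ruiz"};
    private static final String[] STREET = {"Oak St", "Main St", "Hill Rd", "Lake Ave"};

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed: same data set on every run
        for (int i = 0; i < 150_000; i++) {
            String name = FIRST[rnd.nextInt(FIRST.length)] + " " + LAST[rnd.nextInt(LAST.length)];
            String address = (1 + rnd.nextInt(9999)) + " " + STREET[rnd.nextInt(STREET.length)];
            // Append the row counter so emails stay unique even when the pools collide.
            String email = "user" + i + "@example.com";
            System.out.printf("%d,%s,%s,%s%n", i, name, address, email);
        }
    }
}
```

    The CSV output can be bulk-loaded into the target database, and new columns for a less-typical dataset are just another value pool or formatted random field.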

    Read the article

  • Resources related to data-mining and gaming on social networks

    - by darren
    Hi all, I'm interested in the problem of pattern mining among players of social networking games, for example detecting cheaters in a game, given a company's user database. So far I have been following the usual recipe for a data mining project:

    1. construct a data warehouse that aggregates significant information
    2. select a classifier, and train it with a subsection of records from the warehouse
    3. validate the classifier with another test set
    4. lather, rinse, repeat

    Surprisingly, I've found very little in this area regarding literature, best practices, etc. I am hoping to crowdsource the information-gathering problem here. Specifically, what I'm looking for:

    - What classifiers have worked well for this type of pattern mining (it seems highly temporal: users playing games, users receiving rewards, users transferring prizes, etc.)?
    - Are there any highly agreed-upon attributes specific to social networking / gaming data?
    - What is a practical amount of information that should be considered? One problem I've run into is data overload, where queries and data cleansing may take days to complete.
    - Related to the point above, what hardware resources are required to produce results? I've found it difficult to estimate the amount of computing power I will require for production use. It has become apparent that a white box in the corner does not have enough horse-power for such a project. Are companies generally resorting to cloud solutions? Are they buying clusters?

    Basically, any resources (theoretical, academic, or practical) about implementing a social networking / gaming pattern-mining program would be very much appreciated. Thanks.

    Read the article

  • improve my code for collapsing a list of data.frames

    - by romunov
    Dear StackOverFlowers (flowers in short), I have a list of data.frames (walk.sample) that I would like to collapse into a single (giant) data.frame. While collapsing, I would like to mark (adding another column) which rows have came from which element of the list. This is what I've got so far. This is the data.frame that needs to be collapsed/stacked. > walk.sample [[1]] walker x y 1073 3 228.8756 -726.9198 1086 3 226.7393 -722.5561 1081 3 219.8005 -728.3990 1089 3 225.2239 -727.7422 1032 3 233.1753 -731.5526 [[2]] walker x y 1008 3 205.9104 -775.7488 1022 3 208.3638 -723.8616 1072 3 233.8807 -718.0974 1064 3 217.0028 -689.7917 1026 3 234.1824 -723.7423 [[3]] [1] 3 [[4]] walker x y 546 2 629.9041 831.0852 524 2 627.8698 873.3774 578 2 572.3312 838.7587 513 2 633.0598 871.7559 538 2 636.3088 836.6325 1079 3 206.3683 -729.6257 1095 3 239.9884 -748.2637 1005 3 197.2960 -780.4704 1045 3 245.1900 -694.3566 1026 3 234.1824 -723.7423 I have written a function to add a column that denote from which element the rows came followed by appending it to an existing data.frame. collapseToDataFrame <- function(x) { # collapse list to a dataframe with a twist walk.df <- data.frame() for (i in 1:length(x)) { n.rows <- nrow(x[[i]]) if (length(x[[i]])>1) { temp.df <- cbind(x[[i]], rep(i, n.rows)) names(temp.df) <- c("walker", "x", "y", "session") walk.df <- rbind(walk.df, temp.df) } else { cat("Empty list", "\n") } } return(walk.df) } > collapseToDataFrame(walk.sample) Empty list Empty list walker x y session 3 1 -604.5055 -123.18759 1 60 1 -562.0078 -61.24912 1 84 1 -594.4661 -57.20730 1 9 1 -604.2893 -110.09168 1 43 1 -632.2491 -54.52548 1 1028 3 240.3905 -724.67284 1 1040 3 232.5545 -681.61225 1 1073 3 228.8756 -726.91980 1 1091 3 209.0373 -740.96173 1 1036 3 248.7123 -694.47380 1 I'm curious whether this can be done more elegantly, with perhaps do.call() or some other more generic function?

    Read the article

  • VBA: Sorting the data in a listbox; sort works but data in listbox not changed

    - by Mike Clemens
    A listbox is passed, the data is placed in an array, the array is sorted and then the data is placed back in the listbox. The part that doesn't work is putting the data back in the listbox. It's like the listbox is being passed by value instead of by ref. Here's the sub that does the sort and the line of code that calls the sort sub. Private Sub SortListBox(ByRef LB As MSForms.ListBox) Dim First As Integer Dim Last As Integer Dim NumItems As Integer Dim i As Integer Dim j As Integer Dim Temp As String Dim TempArray() As Variant ReDim TempArray(LB.ListCount) First = LBound(TempArray) ' this works correctly Last = UBound(TempArray) - 1 ' this works correctly For i = First To Last TempArray(i) = LB.List(i) ' this works correctly Next i For i = First To Last For j = i + 1 To Last If TempArray(i) > TempArray(j) Then Temp = TempArray(j) TempArray(j) = TempArray(i) TempArray(i) = Temp End If Next j Next i ' data is now sorted LB.Clear ' this doesn't clear the items in the listbox For i = First To Last LB.AddItem TempArray(i) ' this doesn't work either Next i End Sub Private Sub InitializeForm() ' There's code here to put data in the list box Call SortListBox(FieldSelect.CompleteList) End Sub Thanks for your help.

    Read the article

  • Problem with Core Data migration mapping model

    - by dpratt
    I have an iphone app that uses Core Data to do storage. I have successfully deployed it, and now I'm working on the second version. I've run into a problem with the data model that will require a few very simple data transformations at the time that the persistent store gets upgraded, so I can't just use the default inferred mapping model. My object model is stored in an .xcdatamodeld bundle, with versions 1.0 and 1.1 next to each other. Version 1.1 is set as the active version. Everything works fine when I use the default migration behavior and set NSInferMappingModelAutomaticallyOption to YES. My sqlite storage gets upgraded from the 1.0 version of the model, and everything is good except for, of course, the few transformations I need done. As an additional experimental step, I added a new Mapping Model to the core data model bundle, and have made no changes to what xcode generated. When I run my app (with an older version of the data store), I get the following * Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Object's persistent store is not reachable from this NSManagedObjectContext's coordinator' What am I doing wrong? Here's my code for to get the managed object model and the persistent store coordinator. - (NSPersistentStoreCoordinator *)persistentStoreCoordinator { if (_persistentStoreCoordinator != nil) { return _persistentStoreCoordinator; } _persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]]; NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"gti_store.sqlite"]]; NSError *error; NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil]; if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:options error:&error]) { NSLog(@"Eror creating persistent store coodinator - %@", [error localizedDescription]); } return _persistentStoreCoordinator; } - (NSManagedObjectModel *)managedObjectModel { if(_managedObjectModel == nil) { _managedObjectModel = [[NSManagedObjectModel mergedModelFromBundles:nil] retain]; NSDictionary *entities = [_managedObjectModel entitiesByName]; //add a sort descriptor to the 'Foo' fetched property so that it can have an ordering - you can't add these from the graphical core data modeler NSEntityDescription *entity = [entities objectForKey:@"Foo"]; NSFetchedPropertyDescription *fetchedProp = [[entity propertiesByName] objectForKey:@"orderedBar"]; NSSortDescriptor* sortDescriptor = [[[NSSortDescriptor alloc] initWithKey:@"index" ascending:YES] autorelease]; NSArray* sortDescriptors = [NSArray arrayWithObjects:sortDescriptor, nil]; [[fetchedProp fetchRequest] setSortDescriptors:sortDescriptors]; } return _managedObjectModel; }

    Read the article

  • Visibility of Class field-data of Mouse Clicked ImageButton located within WrapPanel

    - by Bill
    I am attempting to obtain the class data behind a mouse-clicked ImageButton, where the ImageButton is located within a WrapPanel filled with ImageButtons. The problem I am having is getting visibility of the field data within the class behind the ImageButton. Although I can see the class, I can neither see nor access the field data. Can anyone please point me in the right direction? // Handles the ImageButton mouseClick event within the WrapPanel. private void SolarSystem_Click(Object sender, RoutedEventArgs e) { FrameworkElement fe = e.OriginalSource as FrameworkElement; SelectedPlanet PlanetSelected = new SelectedPlanet(fe); PlanetSelected.Owner = this; MessageBox.Show(PlanetSelected.PlanetName); } // Used to initiate instance of Class and some field data. public SelectedPlanet(FrameworkElement fe) { InitializeComponent(); string sPlanetName = ((PlanetClass)(fe)).PlanetName; return sPlanetName } // Class Data public class PlanetClass { string planetName; public PlanetClass(string planetName) { PlanetName = planetName; } public string PlanetName { set { planetName = value; } get { return planetName; } } }

    Read the article

  • Generic Data Structure Description Language

    - by Jon Purdy
    I am wondering whether there exists any declarative language for arbitrarily describing the format and semantics of a data structure, that can be compiled to a specific implementation of that structure in any of a set of target languages. That is, something like a generic data definition language but geared toward describing arbitrary data structures such as vectors, lists, trees, etc., and the semantics of operations on those structures. I ask because I had an idea for a feasible implementation of this concept, and I'm just wondering whether it's worth it, and, consequently, whether it's been done before. Another, slightly more abstract question: is there any real difference between the normative specification of a data structure (what it does) and its implementation (how it does it)?

    Read the article

  • How does COBOL store and retrieve data?

    - by controlfreak123
    I'm starting to learn about COBOL. I have some experience writing programs that deal with SQL databases, and I guess I'm confused about how COBOL stores and retrieves data that is stored on a mainframe, for example. I know that it's not like relational databases, but every example program I've seen takes data straight from the command line, and I know that's not how real-world COBOL programs process data. Can someone explain, or show me a good resource that can explain it?

    Read the article
