Search Results

Search found 12107 results on 485 pages for 'pinned objects'.


  • Variant datatype library for C

    - by Joey Adams
    Is there a decent open-source C library for storing and manipulating dynamically-typed variables (a.k.a. variants)? I'm primarily interested in atomic values (int8, int16, int32, uint, strings, blobs, etc.), while JSON-style arrays and objects, as well as custom objects, would also be nice. A major case where such a library would be useful is in working with SQL databases. The most obvious feature of such a library would be a single type for all supported values, e.g.:

        struct Variant {
            enum Type type;
            union {
                int8_t  int8_;
                int16_t int16_;
                /* ... */
            };
        };

    Other features might include converting Variant objects to/from C structures (using a binding table), converting values to/from strings, and integration with an existing database library such as SQLite. Note: I do not believe this question is a duplicate of http://stackoverflow.com/questions/649649/any-library-for-generic-datatypes-in-c , which refers to "queues, trees, maps, lists". What I'm talking about focuses more on making work with SQL databases roughly as smooth as it is in interpreted languages.
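
    A minimal sketch of the tagged-union idea above, with a hypothetical constructor and print helper (all names are illustrative, not from an existing library):

        #include <stdint.h>
        #include <stdio.h>

        enum Type { TYPE_NULL, TYPE_INT32, TYPE_STRING };

        struct Variant {
            enum Type type;
            union {
                int32_t     int32_;
                const char *string_;
            };
        };

        /* Construct an int32 variant. */
        static struct Variant variant_int32(int32_t v) {
            struct Variant var;
            var.type = TYPE_INT32;
            var.int32_ = v;
            return var;
        }

        /* Dispatch on the tag, the way a to-string conversion would. */
        static void variant_print(const struct Variant *v) {
            switch (v->type) {
            case TYPE_INT32:  printf("%d\n", v->int32_);  break;
            case TYPE_STRING: printf("%s\n", v->string_); break;
            default:          printf("null\n");           break;
            }
        }

        int main(void) {
            struct Variant v = variant_int32(42);
            variant_print(&v);  /* prints 42 */
            return 0;
        }

    The anonymous union needs C11 (or the common GNU extension), matching the declaration in the question.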


  • JAXB - Beans to XSD or XSD to beans?

    - by bajafresh4life
    I have an existing data model that I would like to express in terms of XML. It looks like I have two options if I'm to use JAXB:

    1. Create an XSD that mirrors my data model, and use xjc to create binding objects. Marshalling and unmarshalling will involve creating a "mapping" class that takes my existing data objects and maps them to the objects that xjc created. For example, in my data model I have a Doc class, and JAXB would create another Doc class with basically the same exact fields; I would have to map from my Doc class to xjc's Doc class.
    2. Annotate my existing data model with JAXB annotations, and use schemagen to generate an XSD from my annotated classes.

    I can see advantages and disadvantages of both approaches. It seems that most people using JAXB start with the XSD file, and it makes sense that the XSD should be the gold-standard truth, since it expresses the data model in a truly cross-platform way. I'm inclined to start with the XSD first, but it seems icky that I have to write and maintain a separate mapping class that shuttles data between my world and JAXB's world. Any recommendations?
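
    For the second option, a minimal sketch of what the annotated class might look like (the Doc fields here are assumed for illustration):

        import javax.xml.bind.annotation.XmlElement;
        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement(name = "doc")
        public class Doc {
            private String title;
            private String body;

            @XmlElement
            public String getTitle() { return title; }
            public void setTitle(String title) { this.title = title; }

            @XmlElement
            public String getBody() { return body; }
            public void setBody(String body) { this.body = body; }
        }

    Running schemagen over classes annotated like this produces the XSD, so the Java model stays the single source of truth and no mapping class is needed.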


  • LINQ to SQL DataContext: cannot set load options after results have been returned

    - by David Liddle
    I have two tables, A and B, with a one-to-many relationship. On some pages I would like to get a list of A objects only; on other pages I would like to load A objects with their B objects attached. The latter can be handled by setting the load options:

        DataLoadOptions options = new DataLoadOptions();
        options.LoadWith<A>(a => a.B);
        dataContext.LoadOptions = options;

    The trouble occurs when I first view all A's with load options, then go to edit a single A (which does not use load options), and after editing return to the previous page. I understand why the error occurs, but I am not sure how best to get around it. I would like the DataContext to be loaded up per request, and I thought I was achieving this by using StructureMap to load my DataContext on a per-request basis. This is all part of an n-tier application where my controllers call services, which in turn call repositories:

        ForRequestedType<MyDataContext>()
            .CacheBy(InstanceScope.PerRequest)
            .TheDefault.Is.Object(new MyDataContext());

        ForRequestedType<IAService>()
            .TheDefault.Is.OfConcreteType<AService>();

        ForRequestedType<IARepository>()
            .TheDefault.Is.OfConcreteType<ARepository>();

    Here is a brief outline of my repository:

        public class ARepository : IARepository
        {
            private MyDataContext db;

            public ARepository(MyDataContext context)
            {
                db = context;
            }

            public void SetLoadOptions(DataLoadOptions options)
            {
                db.LoadOptions = options;
            }

            public IQueryable<A> Get()
            {
                return from a in db.A select a;
            }
        }

    So my service layer, on View All, sets the load options and then gets all A's. On editing an A, my service layer should spin up a new DataContext and just fetch the list of A's. When SQL profiling, I can see that when I go to the Edit page it is still requesting A with B objects.
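
    One thing worth checking in the registration above: Object(new MyDataContext()) hands StructureMap a single pre-built instance, so every request shares one DataContext regardless of the caching policy (and in StructureMap 2.x, InstanceScope.PerRequest means "per request to the container", i.e. transient, not per web request). A sketch of a genuinely per-web-request registration, assuming the StructureMap 2.x API:

        // Let the container construct a fresh context, scoped to the
        // HTTP request rather than pinned to one shared instance.
        ForRequestedType<MyDataContext>()
            .CacheBy(InstanceScope.HttpContext)
            .TheDefault.Is.ConstructedBy(() => new MyDataContext());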


  • How to deal with recursive dependencies between static libraries using the binutils linker?

    - by Jack Lloyd
    I'm porting an existing system from Windows to Linux. The build is structured with multiple static libraries. I ran into a linking error where a symbol (defined in libA) could not be found in an object from libB. The linker line looked like:

        g++ test_obj.o -lA -lB -o test

    The problem, of course, is that by the time the linker finds it needs the symbol from libA, it has already passed libA by, and it does not rescan, so it simply errors out even though the symbol is there for the taking. My initial idea was simply to swap the link order (to -lB -lA) so that libA is scanned afterwards and any symbols missing from libB are picked up from libA. But then I found there is actually a recursive dependency between libA and libB! I'm assuming the Visual C++ linker handles this in some way (does it rescan by default?). Ways of dealing with this I've considered:

    - Use shared objects. Unfortunately this is undesirable because it requires PIC compilation (this is performance-sensitive code, and losing %ebx to hold the GOT would really hurt), and shared objects aren't otherwise needed.
    - Build one mega ar archive of all of the objects, avoiding the problem.
    - Restructure the code to avoid the recursive dependency (which is obviously the Right Thing to do, but I'm trying to do this port with minimal changes).

    Do you have other ideas to deal with this? Is there some way I can convince the binutils linker to perform rescans of libraries it has already looked at when it is missing a symbol?
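
    For what it's worth, GNU ld does support exactly this via archive groups: everything between --start-group and --end-group is rescanned repeatedly until no new undefined references can be resolved. A sketch against the link line above:

        # ld rescans the archives inside the group until no further
        # undefined symbols are resolved, so A <-> B cycles link fine
        # (at some cost in link time).
        g++ test_obj.o -Wl,--start-group -lA -lB -Wl,--end-group -o test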


  • Django Forms - changing how a multiple select widget renders

    - by John
    Hi, in my model I have a many-to-many field:

        mentors = models.ManyToManyField(MentorArea, verbose_name='Areas', blank=True)

    In my form I want to render this as: a drop-down box listing all MentorArea objects not yet associated with the object; next to that, an Add button which calls a JavaScript function to add the selected area to the object; then under that, a ul list showing each selected MentorArea object with an x next to it, which again calls a JavaScript function to remove the MentorArea from the object. I know that to change how a field is rendered you create a custom widget and override its render function, and I have done that to create the Add button:

        class AreaWidget(widgets.Select):
            def render(self, name, value, attrs=None, choices=()):
                jquery = u'''
                <input class="button def" type="button" value="Add" id="Add Area" />'''
                output = super(AreaWidget, self).render(name, value, attrs, choices)
                return output + mark_safe(jquery)

    However, I don't know how to list the currently selected areas underneath. Can anyone help me? Also, what is the best way to filter the list down so that it only shows MentorArea objects which have not been added? I currently have the field as:

        mentors = forms.ModelMultipleChoiceField(queryset=MentorArea.objects.all(),
                                                 widget=AreaWidget, required=False)

    but this shows all MentorArea objects, whether or not they have been added. Thanks
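
    On the filtering question, one approach is to narrow the queryset per instance in the form's __init__; a minimal sketch, assuming a ModelForm that is given the instance being edited (the Mentee model name is hypothetical):

        from django import forms

        class MentorForm(forms.ModelForm):
            class Meta:
                model = Mentee  # hypothetical model holding `mentors`

            def __init__(self, *args, **kwargs):
                super(MentorForm, self).__init__(*args, **kwargs)
                if self.instance.pk:
                    # hide areas already attached to this object
                    chosen = self.instance.mentors.values_list('pk', flat=True)
                    self.fields['mentors'].queryset = \
                        MentorArea.objects.exclude(pk__in=chosen)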


  • Detecting a double-freed object: to release or not to release?

    - by mongeta
    Hello, if we have this code in our interface (.h) file:

        NSString *fieldNameToStoreModel;
        NSFetchedResultsController *fetchedResultsController;
        NSManagedObjectContext *managedObjectContext;
        DataEntered *dataEntered;

    then in our implementation (.m) file we must have:

        - (void)dealloc {
            [fieldNameToStoreModel release];
            [fetchedResultsController release];
            [managedObjectContext release];
            [dataEntered release];
            [super dealloc];
        }

    The four objects are assigned from a previous UIViewController, like this:

        UIViewController *detailViewController;
        detailViewController = [[CarModelSelectViewController alloc] initWithStyle:UITableViewStylePlain];
        ((CarModelSelectViewController *)detailViewController).dataEntered = self.dataEntered;
        ((CarModelSelectViewController *)detailViewController).managedObjectContext = self.managedObjectContext;
        ((CarModelSelectViewController *)detailViewController).fieldNameToStoreModel = self.fieldNameToStoreModel;
        [self.navigationController pushViewController:detailViewController animated:YES];
        [detailViewController release];

    The objects that now live in the new UIViewController are the same objects as in the previous UIViewController, so can't I release them in the new UIViewController? The problem is that sometimes my app crashes when I leave the new UIViewController and go back to the previous one, though not always. Normally the error I get is a double-freed object. I've used malloc_error_break, but I'm still not sure which object it is. Sometimes I can go from the previous UIViewController to the next one and come back four or five times before the double free appears. If I don't release any object, everything works and Instruments says there are no memory leaks... So, the final question: should I release those objects here or not? Thanks, m.
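
    Whether those releases are balanced depends on how the properties are declared. A sketch of retain-based declarations under manual reference counting that would make the dealloc above correct (the attributes shown are an assumption, not from the question):

        // In the new controller's .h: retain/copy properties take ownership
        // of whatever is assigned through the dot syntax, so releasing them
        // in dealloc is balanced.
        @property (nonatomic, retain) DataEntered *dataEntered;
        @property (nonatomic, retain) NSManagedObjectContext *managedObjectContext;
        @property (nonatomic, copy)   NSString *fieldNameToStoreModel;

        // If these were plain ivar assignments (no retain), the releases in
        // dealloc would over-release objects still owned by the previous
        // controller: a classic source of intermittent double-free crashes.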


  • Implementing the ‘defer’ statement from Go in Objective-C?

    - by zoul
    Hello! Today I read about the defer statement in the Go language:

        A defer statement pushes a function call onto a list. The list of saved
        calls is executed after the surrounding function returns. Defer is
        commonly used to simplify functions that perform various clean-up actions.

    I thought it would be fun to implement something like this in Objective-C. Do you have some idea how to do it? I thought about dispatch finalizers, autoreleased objects and C++ destructors.

    Autoreleased objects (the factory body below is filled in to make the idea concrete; the original sketch elided it):

        @interface Defer : NSObject {
            dispatch_block_t block;  // run when the object is deallocated
        }
        + (id) withCode: (dispatch_block_t) block;
        @end

        @implementation Defer
        + (id) withCode: (dispatch_block_t) aBlock {
            Defer *defer = [[[Defer alloc] init] autorelease];
            defer->block = [aBlock copy];
            return defer;
        }

        - (void) dealloc {
            block();
            [block release];
            [super dealloc];
        }
        @end

        #define defer(__x) [Defer withCode:^{__x}]

        - (void) function {
            defer(NSLog(@"Done"));
            …
        }

    Autoreleased objects seem like the only solution that would last at least to the end of the function, as the other solutions would trigger when the current scope ends. On the other hand they could stay in memory much longer, which would be asking for trouble. Dispatch finalizers were my first thought, because blocks live on the stack and therefore I could easily make something execute when the stack unrolls. But after a peek in the documentation it doesn't look like I can attach a simple "destructor" function to a block, can I? C++ destructors are about the same thing: I would create a stack-based object with a block to be executed when the destructor runs. This would have the ugly disadvantage of turning plain .m files into Objective-C++. I don't really think about using this stuff in production, I'm just interested in various solutions. Can you come up with something working, without obvious disadvantages? Both scope-based and function-based solutions would be interesting.
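
    One scope-based possibility not listed above is the GCC/Clang cleanup variable attribute, which runs a function when the annotated variable goes out of scope; a minimal sketch under manual reference counting:

        #import <Foundation/Foundation.h>

        // Called automatically when the annotated variable leaves scope.
        static void run_deferred(dispatch_block_t *block) {
            (*block)();
        }

        #define DEFER_CONCAT_(a, b) a##b
        #define DEFER_CONCAT(a, b)  DEFER_CONCAT_(a, b)
        #define scope_defer(__x) \
            __attribute__((cleanup(run_deferred))) \
            dispatch_block_t DEFER_CONCAT(__defer_, __LINE__) = ^{ __x; }

        void function(void) {
            scope_defer(NSLog(@"Done"));
            // ... "Done" is logged when this scope exits. Note the semantics
            // are scope-based (like C++ destructors), not function-based
            // like Go's defer.
        }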


  • Django: accessing data passed to a form

    - by realshadow
    Hey, I have got a ChoiceField in my form where I display filtered data. To filter the data I need two arguments. The first one is not a problem, because I can take it directly from an object, but the second one is generated dynamically. Here is some code:

        class GroupAdd(forms.Form):
            def __init__(self, *args, **kwargs):
                self.pid = kwargs.pop('parent_id', None)
                super(GroupAdd, self).__init__(*args, **kwargs)

            parent_id = forms.IntegerField(widget=forms.HiddenInput)
            choices = forms.ChoiceField(
                choices=[
                    [group.node_id, group.name]
                    for group in Objtree.objects.filter(
                        type_id=ObjtreeTypes.objects.values_list('type_id').filter(name='group'),
                        parent_id=50
                    ).distinct()
                ] + [[0, 'Add a new one']],
                widget=forms.Select(attrs={'id': 'group_select'})
            )

    I would like to change the parent_id that is passed into Objtree.objects.filter. As you can see I tried in the __init__ function, as well as with kwargs['initial']['parent_id'] and then referring to it through self, but that doesn't work, since it's out of scope; that was pretty much my last effort. I need to access it either through the initial parameter or directly through the parent_id field, since it already holds its value (passed through initial). Any help is appreciated, as I am running out of ideas.
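
    The snag is that the class-level choices list is evaluated once, at class-definition time, before any parent_id exists. A sketch of building the choices per instance inside __init__ instead (same query as above, with the dynamic id):

        class GroupAdd(forms.Form):
            parent_id = forms.IntegerField(widget=forms.HiddenInput)
            choices = forms.ChoiceField(
                choices=[],
                widget=forms.Select(attrs={'id': 'group_select'}))

            def __init__(self, *args, **kwargs):
                pid = kwargs.pop('parent_id', None)
                super(GroupAdd, self).__init__(*args, **kwargs)
                groups = Objtree.objects.filter(
                    type_id=ObjtreeTypes.objects.values_list('type_id').filter(name='group'),
                    parent_id=pid,
                ).distinct()
                # self.fields is per-instance, so these choices do not leak
                # between form instances the way class-level choices do
                self.fields['choices'].choices = \
                    [[g.node_id, g.name] for g in groups] + [[0, 'Add a new one']]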


  • Is There a Time at which to ignore IDisposable.Dispose?

    - by Mystagogue
    Certainly we should call Dispose() on IDisposable objects as soon as we don't need them (which is often simply at the end of a "using" statement). If we don't take that precaution then bad things, from subtle to show-stopping, might happen. But what about "the last moment" before process termination? If your IDisposables have not been explicitly disposed by that point in time, isn't it true that it no longer matters? I ask because unmanaged resources, beneath the CLR, are represented by kernel objects, and Win32 process termination will free all unmanaged resources / kernel objects anyway. Said differently, no resources will remain "leaked" after the process terminates (regardless of whether Dispose() was called on lingering IDisposables). Can anyone think of a case where process termination would still leave a leaked resource, simply because Dispose() was not explicitly called on one or more IDisposables? Please do not misunderstand this question: I am not trying to justify ignoring IDisposables. The question is purely technical-theoretical. EDIT: And what about Mono running on Linux? Is process termination there just as "reliable" at cleaning up unmanaged "leaks"?
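
    One caveat worth illustrating: process exit does reclaim kernel objects, but work that Dispose() would have performed is simply lost. A minimal C# sketch where skipping Dispose loses data even though nothing "leaks":

        using System.IO;

        class Program
        {
            static void Main()
            {
                var writer = new StreamWriter("log.txt");
                writer.Write("important line");
                // Process exits here without Dispose()/Flush(): the OS closes
                // the file handle, so no kernel object leaks, but the buffered
                // text may never reach the disk. Reclaiming the handle is not
                // the same as completing the object's pending work.
            }
        }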


  • Aren't Information Expert / Tell Don't Ask at odds with Single Responsibility Principle?

    - by moffdub
    It is probably just me, which is why I'm asking the question. Information Expert, Tell Don't Ask, and SRP are often mentioned together as best practices. But I think they are at odds. Here is what I'm talking about.

    Code that favors SRP but violates Tell Don't Ask / Info Expert:

        Customer bob = ...;
        // TransferObjectFactory has to use Customer's accessors to do its work,
        // violates Tell Don't Ask
        CustomerDTO dto = TransferObjectFactory.createFrom(bob);

    Code that favors Tell Don't Ask / Info Expert but violates SRP:

        Customer bob = ...;
        // Now Customer is doing more than just representing the domain concept
        // of Customer, violates SRP
        CustomerDTO dto = bob.toDTO();

    If they are indeed at odds, that's a vindication of my OCD. Otherwise, please fill me in on how these practices can co-exist peacefully. Thank you.

    Edit: someone wants a definition of the terms.

    - Information Expert: objects that have the data needed for an operation should host the operation
    - Tell Don't Ask: don't ask objects for data in order to do work; tell the objects to do the work
    - Single Responsibility Principle: each object should have a narrowly defined responsibility


  • Silverlight Data Access - how to keep the gruntwork on the server

    - by akaphenom
    What technologies are used / recommended for HTTP RPC calls from Silverlight? My server-side stack is JBoss (servlets / JSON-RPC [jabsorb]), and we have a ton of business logic (object creation, validation, persistence, server-side events) in place that I still want to take advantage of. This is our first attempt at bringing an applet-style RIA to our product, and ideally we keep both HTML and Silverlight versions. For better or worse the powers that be have pushed us down the Silverlight path, and while Flex / JavaFX / Silverlight is an interesting debate, that question is removed from the equation. We just have to find a way to get Silverlight to behave with our classes. Should I be defining .NET class representations of our JSON objects and the methodology to serialize / deserialize them? I.e., for "blah.com/dispenseRpc?servlet=xxxx&p1=blah&p2=blahblah", creating functions that invoke the web request and convert the incoming response string to objects? Another way would be to reverse-engineer the .NET WCF (or whatever) communications and implement a handler on the Java side that invokes the correct server-side code and returns what .NET expects back. But that sounds much trickier.
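
    A minimal sketch of the first approach from Silverlight, using WebClient plus DataContractJsonSerializer (the endpoint and the JSON shape mirrored by ServerResult are assumptions for illustration):

        using System;
        using System.IO;
        using System.Net;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        [DataContract]
        public class ServerResult  // mirrors the JSON the servlet returns
        {
            [DataMember(Name = "name")]
            public string Name { get; set; }
        }

        public static class RpcClient
        {
            public static void GetResult(Action<ServerResult> callback)
            {
                var client = new WebClient();
                client.DownloadStringCompleted += (s, e) =>
                {
                    // Deserialize the JSON payload into the .NET mirror class.
                    var bytes = Encoding.UTF8.GetBytes(e.Result);
                    var serializer = new DataContractJsonSerializer(typeof(ServerResult));
                    using (var stream = new MemoryStream(bytes))
                        callback((ServerResult)serializer.ReadObject(stream));
                };
                client.DownloadStringAsync(
                    new Uri("http://blah.com/dispenseRpc?servlet=xxxx&p1=blah"));
            }
        }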


  • Requested Service not found - .NET Remoting

    - by bharat
    I am getting this exception most of the time:

        System.Runtime.Remoting.RemotingException occurred
          Message="Object '/55337266_9751_4f58_8446_c54ff254222e/rkutlpt5hvsxipmzhb+jkqyl_98.rem' has been disconnected or does not exist at the server."
          Source="mscorlib"
          StackTrace:
            Server stack trace:
               at System.Runtime.Remoting.Channels.ChannelServices.CheckDisconnectedOrCreateWellKnownObject(IMessage msg)
               at System.Runtime.Remoting.Channels.ChannelServices.DispatchMessage(IServerChannelSinkStack sinkStack, IMessage msg, IMessage& replyMsg)
            Exception rethrown at [0]:
               at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
               at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
               at Common.Interface.Repository.NA.INRR.GetNRun(Int32 NetworkRunId)
               at Module.NA.ViewModel.NRDViewModel.get_AS()
          InnerException:

    This is my remoting setup method:

        string tcpURL;
        TcpChannel channel;

        channel = new TcpChannel();
        ChannelServices.RegisterChannel(channel, false);

        // Remote domain objects
        tcpURL = string.Format("tcp://{0}:{1}/DomainComposition", ServerName, TcpPort);
        RemotingConfiguration.RegisterWellKnownClientType(typeof(DomainComposition), tcpURL);

        // Remote repository objects
        tcpURL = string.Format("tcp://{0}:{1}/RepositoryComposition", ServerName, TcpPort);
        RemotingConfiguration.RegisterWellKnownClientType(typeof(RepositoryComposition), tcpURL);

        // Remote utility objects
        tcpURL = string.Format("tcp://{0}:{1}/UtilityComposition", ServerName, TcpPort);
        RemotingConfiguration.RegisterWellKnownClientType(typeof(UtilityComposition), tcpURL);

        this.Domain = new DomainComposition();
        this.Repository = new RepositoryComposition();
        this.Utility = new UtilityComposition();

    How do I check whether the object has been disconnected, and then re-initiate the service?
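
    There is no direct "is it disconnected?" query on the client side; one common sketch is to catch the RemotingException and rebuild the proxy (names adapted from the stack trace above, so adjust to wherever GetNRun actually lives):

        try
        {
            nRun = this.Repository.GetNRun(networkRunId);
        }
        catch (System.Runtime.Remoting.RemotingException)
        {
            // The server-side object was disconnected, typically because its
            // lifetime lease expired; rebuild the proxy and retry once.
            this.Repository = new RepositoryComposition();
            nRun = this.Repository.GetNRun(networkRunId);
        }

    The longer-term fix is usually lease configuration on the server object, e.g. overriding MarshalByRefObject.InitializeLifetimeService (returning null keeps the object alive for the life of the host).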


  • How do I copy or move an NSManagedObject from one context to another?

    - by Aeonaut
    I have what I assume is a fairly standard setup, with one scratchpad MOC which is never saved (containing a bunch of objects downloaded from the web) and another permanent MOC which persists objects. When the user selects an object from scratchMOC to add to her library, I want to either 1) remove the object from scratchMOC and insert it into permanentMOC, or 2) copy the object into permanentMOC. The Core Data FAQ says I can copy an object like this:

        NSManagedObjectID *objectID = [managedObject objectID];
        NSManagedObject *copy = [context2 objectWithID:objectID];

    (In this case, context2 would be permanentMOC.) However, when I do this, the copied object is faulted; the data is initially unresolved. When it does get resolved later, all of the values are nil; none of the data (attributes or relationships) from the original managedObject is actually copied or referenced. Therefore I can't see any difference between using this objectWithID: method and just inserting an entirely new object into permanentMOC using insertNewObjectForEntityForName:. I realize I can create a new object in permanentMOC and manually copy each key-value pair from the old object, but I'm not very happy with that solution. (I have a number of different managed objects with this problem, so I don't want to have to write and maintain copy methods for all of them as I continue developing.) Is there a better way?
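
    A generic attribute copy can be written once against the entity description, so no per-class copy methods are needed; a minimal sketch (relationships would still need separate handling):

        - (NSManagedObject *)cloneObject:(NSManagedObject *)source
                             intoContext:(NSManagedObjectContext *)permanentMOC
        {
            NSEntityDescription *entity = [source entity];
            NSManagedObject *clone =
                [NSEntityDescription insertNewObjectForEntityForName:[entity name]
                                              inManagedObjectContext:permanentMOC];
            // Copy every attribute by name; works for any entity.
            NSArray *keys = [[entity attributesByName] allKeys];
            [clone setValuesForKeysWithDictionary:
                       [source dictionaryWithValuesForKeys:keys]];
            return clone;
        }

    (The nil values from objectWithID: are expected here: the scratch context was never saved, so the fault in the other context has no persistent store row to load from.)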


  • Why am I getting an error on Heroku that suggests I need to migrate my app to Bamboo?

    - by user242065
    When I type git push heroku master, this is what happens:

        @68-185-86-134:sample_app git push heroku master
        Counting objects: 110, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (94/94), done.
        Writing objects: 100% (110/110), 87.48 KiB, done.
        Total 110 (delta 19), reused 0 (delta 0)

        -----> Heroku receiving push
        -----> Rails app detected
         !     This version of Rails is only supported on the Bamboo stack
         !     Please migrate your app to Bamboo and push again.
         !     See http://docs.heroku.com/bamboo for more information
         !     Heroku push rejected, incompatible Rails version

        error: hooks/pre-receive exited with error code 1
        To [email protected]:blazing-frost-89.git
         ! [remote rejected] master -> master (pre-receive hook declined)
        error: failed to push some refs to '[email protected]:blazing-frost-89.git'

    My .gems file:

        rails --version 2.3.8

    My .git/config file:

        [core]
            repositoryformatversion = 0
            filemode = true
            bare = false
            logallrefupdates = true
            ignorecase = true
        [remote "origin"]
            url = [email protected]:csmeder/sample_app.git
            fetch = +refs/heads/*:refs/remotes/origin/*
        [remote "heroku"]
            url = [email protected]:blazing-frost-89.git
            fetch = +refs/heads/*:refs/remotes/heroku/*


  • SQL Express under IIS 7.5

    - by fampinheiro
    I'm developing a web service that accesses a SQL Express database. It works very well under the Visual Studio host, but when I deploy it to IIS 7.5 I get this exception. Please help me.

        Stack Trace:
        System.Data.EntityException: The underlying provider failed on Open. --->
        System.Data.SqlClient.SqlException: Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed.
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
           at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)
           at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
           at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
           at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
           at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
           at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
           at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
           at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
           at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
           at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
           at System.Data.SqlClient.SqlConnection.Open()
           at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure)
           --- End of inner exception stack trace ---
           at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure)
           at System.Data.EntityClient.EntityConnection.Open()
           at System.Data.Objects.ObjectContext.EnsureConnection()
           at System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption)
           at System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator()
           at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source)
           at WSCinema.CinemaService.Movie() in D:\Documents\My Dropbox\Projects\sd.v0910\trab3\code\WSCinema\CinemaService.asmx.cs:line 46
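
    The "failure in retrieving the user's local application data path" message is the classic symptom of SQL Express user instances ("User Instance=True" in the connection string) running under an IIS application pool identity that has no loaded user profile. Two common mitigations, sketched under that assumption: remove the User Instance flag from the connection string and attach the database to the main SQL Express instance, or keep user instances and enable profile loading for the pool:

        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /processModel.loadUserProfile:true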


  • Makefile trickery using VPATH and include.

    - by roe
    Hi, I'm playing around with makefiles and the VPATH variable. Basically, I'm grabbing source files from a few different places (specified by VPATH) and compiling them into the current directory, using simply a list of the .o files I want. So far so good. Now I'm generating dependency information into a file called '.depend' and including that. GNU make will attempt to use the rules defined so far to create the included file if it doesn't exist, so that's OK. Basically, my makefile looks like this:

        VPATH = A/source:B/source:C/source
        objects = first.o second.o third.o

        executable: $(objects)

        .depend: $(objects:.o=.c)
        	$(CC) -MM $^ > $@

        include .depend

    Now for the real question: can I suppress the generation of the .depend file in any way? I'm currently working in a ClearCase environment (sloooow), so I'd prefer to have more control over when the dependency information is updated. It's more or less an academic exercise, as I could just wrap the whole thing in a script which touches the .depend file before executing make (thus making it more recent than any source file), but it'd be interesting to know if I can somehow suppress it using 'pure' make. I cannot remove the dependency on the source files (i.e. using simply `.depend:`), as I'm depending on the $^ variable to do the VPATH resolution for me. If there were a way to only update dependencies as a result of updated #include directives, that'd be even better, of course... but I'm not holding my breath for that one. :)
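
    One "pure make" sketch: give the dependency file no rule of its own and refresh it only through an explicit phony goal, so make never regenerates it behind your back (the goal name `depend` is arbitrary):

        # -include tolerates a missing .depend instead of erroring out;
        # with no rule named .depend, make leaves the file alone otherwise.
        -include .depend

        .PHONY: depend
        depend: $(objects:.o=.c)
        	$(CC) -MM $^ > .depend

    (The recipe line must start with a tab.) VPATH resolution through $^ still works in the explicit rule, and dependencies are only rebuilt when you run `make depend`.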


  • git pull fails after another member pushes something

    - by naiad
    Here's the story: we have a GitHub account. I clone the repository, then I can work with it, commit things, push things, etc. I use Linux with the command line and git version 1.7.7.3. Then another user, using Eclipse with the eGit 1.1.0 plugin, pushes something, and it appears on the GitHub web pages as the last commit. But when I try to pull:

        $ git pull
        remote: Counting objects: 13, done.
        remote: Compressing objects: 100% (6/6), done.
        remote: Total 9 (delta 2), reused 7 (delta 0)
        Unpacking objects: 100% (9/9), done.
        error: unable to find 3e6c5386cab0c605877f296642d5183f582964b6
        fatal: object 3e6c5386cab0c605877f296642d5183f582964b6 not found

    "3e6c5386cab0c605877f296642d5183f582964b6" is the commit hash of the last commit, done by the other user. There's no problem at all browsing it through the web page, but for me it's impossible to pull it. It's strange, because my command-line output mentions that commit hash, so my git knows it is the last commit on GitHub, yet it cannot pull it! Maybe the git protocol used by eGit is incompatible with console git 1.7.7.3?


  • Handling incremental Data Modeling Changes in Functional Programming

    - by Adam Gent
    Most of the problems I have to solve in my job as a developer have to do with data modeling. For example, in an OOP web-application world I often have to change the data properties of an object to meet new requirements. If I'm lucky I don't even need to programmatically add new "behavior" code (functions, methods); instead I can declaratively add validation and even UI options by annotating the property (Java). In functional programming it seems that adding new data properties requires lots of code changes because of pattern matching and data constructors (Haskell, ML). How do I minimize this problem? It seems to be a recognized one, as Xavier Leroy states nicely on page 24 of "Objects and Classes vs. Modules". To summarize for those who don't have a PostScript viewer: it basically says FP languages are better than OOP languages for adding new behavior over existing data objects, but OOP languages are better for adding new data objects/properties. Are there any design patterns used in FP languages to help mitigate this problem? I have read Philip Wadler's recommendation of using monads to help with this modularity problem, but I'm not sure I understand how.
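
    One small mitigation in Haskell is to lean on record accessors rather than positional pattern matching, so adding a field only touches construction sites; a sketch:

        -- Positional matching breaks at every use site when a field is added:
        --   describe (Customer name email) = ...
        -- Accessor style keeps existing functions compiling unchanged.

        data Customer = Customer
            { name  :: String
            , email :: String
            -- adding  phone :: String  here leaves `describe` untouched
            } deriving Show

        describe :: Customer -> String
        describe c = name c ++ " <" ++ email c ++ ">"

        main :: IO ()
        main = putStrLn (describe (Customer { name = "Bob", email = "bob@example.com" }))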


  • JSON Serialization of a Django inherited model

    - by Simon Morris
    Hello, I have the following Django models:

        class ConfigurationItem(models.Model):
            path = models.CharField('Path', max_length=1024)
            name = models.CharField('Name', max_length=1024, blank=True)
            description = models.CharField('Description', max_length=1024, blank=True)
            active = models.BooleanField('Active', default=True)
            is_leaf = models.BooleanField('Is a Leaf item', default=True)

        class Location(ConfigurationItem):
            address = models.CharField(max_length=1024, blank=True)
            phoneNumber = models.CharField(max_length=255, blank=True)
            url = models.URLField(blank=True)
            read_acl = models.ManyToManyField(Group, default=None)
            write_acl = models.ManyToManyField(Group, default=None)
            alert_group = models.EmailField(blank=True)

    The full model file is here if it helps. You can see that Location is a child class of ConfigurationItem. I'm trying to use JSON serialization, using either django.core.serializers.serializer or the WadofStuff serializer. Both serializers give me the same problem:

        >>> from cmdb.models import *
        >>> from django.core import serializers
        >>> serializers.serialize('json', [ConfigurationItem.objects.get(id=7)])
        '[{"pk": 7, "model": "cmdb.configurationitem", "fields": {"is_leaf": true, "extension_attribute_10": "", "name": "", "date_modified": "2010-05-19 14:42:53", "extension_attribute_11": false, "extension_attribute_5": "", "extension_attribute_2": "", "extension_attribute_3": "", "extension_attribute_1": "", "extension_attribute_6": "", "extension_attribute_7": "", "extension_attribute_4": "", "date_created": "2010-05-19 14:42:53", "active": true, "path": "/Locations/London", "extension_attribute_8": "", "extension_attribute_9": "", "description": ""}}]'
        >>> serializers.serialize('json', [Location.objects.get(id=7)])
        '[{"pk": 7, "model": "cmdb.location", "fields": {"write_acl": [], "url": "", "phoneNumber": "", "address": "", "read_acl": [], "alert_group": ""}}]'

    The problem is that serializing the Location model only gives me the fields directly defined on it, not the fields inherited from its parent. Is there a way of altering this behaviour, or should I be looking at building a dictionary of objects and using simplejson to format the output? Thanks in advance ~sm
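
    With multi-table inheritance the child's _meta already spans the inherited columns, so one workaround sketch is to build plain dicts with django.forms.models.model_to_dict and serialize those (the simplejson import matches the Django era of the question; depending on version, ManyToMany values may need converting to pk lists first):

        from django.forms.models import model_to_dict
        from django.utils import simplejson

        def serialize_locations(queryset):
            # model_to_dict walks Location._meta.fields, which includes the
            # fields inherited from ConfigurationItem, unlike serializers.serialize.
            return simplejson.dumps(
                [model_to_dict(obj) for obj in queryset],
                default=str)  # stringify datetimes and other non-JSON values

        # usage: serialize_locations(Location.objects.filter(active=True))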


  • PHPUnit - multiple stubs of same class

    - by keithjgrant
    I'm building unit tests for class Foo, and I'm fairly new to unit testing. A key component of the class is an instance of BarCollection, which contains a number of Bar objects. One method in Foo iterates through the collection and calls a couple of methods on each Bar object. I want to use stub objects to generate a series of responses for my test class. How do I make the Bar stub class return different values as I iterate? I'm trying to do something along these lines:

        $stubs = array();
        foreach ($array as $value) {
            // ($barStub creation not shown in the original post)
            $barStub->expects($this->any())
                    ->method('GetValue')
                    ->will($this->returnValue($value));
            $stubs[] = $barStub;
        }
        // populate stubs into `Foo`
        // assert results from `Foo->someMethod()`

    So Foo->someMethod() will produce data based on the results it receives from the Bar objects. But this gives me the following error whenever the array is longer than one:

        There was 1 failure:
        1) testMyTest(FooTest) with data set #2 (array(0.5, 0.5))
        Expectation failed for method name is equal to <string:GetValue>
        when invoked zero or more times.
        Mocked method does not exist.
        /usr/share/php/PHPUnit/Framework/MockObject/Mock.php(193) : eval()'d code:25

    One thought I had was to use ->will($this->returnCallback()) to invoke a callback method, but I don't know how to indicate to the callback which Bar object is making the call (and consequently what response to give). Another idea is to use the onConsecutiveCalls() method, or something like it, to tell my stub to return 1 the first time, 2 the second time, etc., but I'm not sure exactly how to do that. I'm also concerned that if my class ever does anything other than ordered iteration on the collection, I won't have a way to test it.
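
    For the onConsecutiveCalls idea, a minimal sketch of one stub whose GetValue answer changes on each call (the getMock call assumes the classic PHPUnit 3.x API):

        $barStub = $this->getMock('Bar');
        $barStub->expects($this->any())
                ->method('GetValue')
                ->will($this->onConsecutiveCalls(0.5, 1.0, 2.0));

        // First call returns 0.5, the second 1.0, the third 2.0. This suits a
        // single shared stub; the one-mock-per-Bar loop above is the safer
        // choice if iteration order might change, since each stub's value
        // travels with its object.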


  • Problem with interface implementation in partial classes.

    - by Bas
    I have a question regarding a problem with L2S, an autogenerated DataContext, and the use of partial classes. I have abstracted my DataContext, and for every table I use I'm implementing a class with an interface. In the code below you can see the interface and two partial classes. The first class is just there to make sure the class in the autogenerated DataContext inherits Interface. The other, autogenerated class makes sure the property from Interface is implemented.

        namespace PartialProject.objects
        {
            public interface Interface
            {
                Interface Instance { get; }
            }

            // To make sure the autogenerated code inherits Interface
            public partial class Class : Interface
            {
            }

            // This is autogenerated
            public partial class Class
            {
                public Class Instance
                {
                    get { return this.Instance; }
                }
            }
        }

    Now my problem is that the property implemented in the autogenerated class gives the following error:

        Property 'Instance' cannot implement property from interface
        'PartialProject.objects.Interface'. Type should be
        'PartialProject.objects.Interface'.

    Any idea how this error can be resolved? Keep in mind that I can't edit anything in the autogenerated code. Thanks in advance!
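
    Since the generated property returns Class rather than Interface, one sketch of a fix that leaves the generated half untouched is an explicit interface implementation in the hand-written partial:

        // The explicit implementation satisfies the interface; the
        // autogenerated Class-typed property remains the public surface.
        public partial class Class : Interface
        {
            Interface Interface.Instance
            {
                get { return this.Instance; }
            }
        }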


  • How would I go about sharing variables in a class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua. I've been working on implementing Lua scripting for logic in a game engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables; Lua is exceedingly flexible about this, so I don't foresee any issues.

    Anyway, currently the game-engine side of things sits in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far I'm keeping two separate tables of objects in place: Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good; C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring.

    What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using luabind or some equivalent, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?
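
    The metatable version of the old tag approach boils down to __index/__newindex metamethods on a userdata that wraps a pointer to the C struct; a compact Lua 5.1 sketch (the GameObject fields are assumed for illustration):

        #include <lua.h>
        #include <lauxlib.h>
        #include <string.h>

        typedef struct GameObject { float x, y; } GameObject;

        /* __index: obj.x in Lua reads straight from the engine's struct. */
        static int obj_index(lua_State *L) {
            GameObject *o = *(GameObject **)luaL_checkudata(L, 1, "GameObject");
            const char *key = luaL_checkstring(L, 2);
            if (strcmp(key, "x") == 0) { lua_pushnumber(L, o->x); return 1; }
            if (strcmp(key, "y") == 0) { lua_pushnumber(L, o->y); return 1; }
            return 0;  /* unknown fields read as nil */
        }

        /* __newindex: obj.x = v in Lua writes straight into the struct. */
        static int obj_newindex(lua_State *L) {
            GameObject *o = *(GameObject **)luaL_checkudata(L, 1, "GameObject");
            const char *key = luaL_checkstring(L, 2);
            float v = (float)luaL_checknumber(L, 3);
            if      (strcmp(key, "x") == 0) o->x = v;
            else if (strcmp(key, "y") == 0) o->y = v;
            return 0;
        }

        /* Push a proxy for an engine-owned object (call at spawn time). */
        static void push_gameobject(lua_State *L, GameObject *obj) {
            GameObject **p = (GameObject **)lua_newuserdata(L, sizeof *p);
            *p = obj;  /* alias the engine's struct, so writes are shared */
            if (luaL_newmetatable(L, "GameObject")) {
                lua_pushcfunction(L, obj_index);    lua_setfield(L, -2, "__index");
                lua_pushcfunction(L, obj_newindex); lua_setfield(L, -2, "__newindex");
            }
            lua_setmetatable(L, -2);
        }

    The metatable is registered once and reused for every object, so the per-spawn cost is just the userdata allocation.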


  • Show me your LINQ to SQL architectures!

    - by Brad Heller
    I've been using LINQ to SQL for a new implementation I've been working on. I have about 5000 lines of code and am a little way from a solid demo. I've been pretty satisfied with LINQ to SQL so far: the tools are excellent and pretty painless, and it lets you get a DAL up and running quickly. That said, there are some major drawbacks that I just keep hitting over and over again, namely how to handle separation of concerns between my DAL and my business layer, and juggling that with different data contexts. Here is the architecture I've been using:

    My repositories do all my data access and they return LINQ to SQL objects. Each of my LINQ to SQL objects implements an IDetachable interface. A typical implementation looks like this:

        partial class PaymentDetail : IDetachable
        {
            #region IDetachable Members

            public bool IsAttached
            {
                get { return PropertyChanging != null; }
            }

            public void Detach()
            {
                if (IsAttached)
                {
                    PropertyChanged = null;
                    PropertyChanging = null;
                    Transaction.Detach();
                }
            }

            #endregion
        }

    Every time I do a DAL operation in my repository I "detach" when I'm done with the object (and it should theoretically detach from any child objects) to remove the DataContext's context. Like I said, this works pretty well, but there are some edge cases that seem to be a big pain in the ass. For instance, my Transaction object has many PaymentDetails. Even when there are no PaymentDetails in that collection, it's still attached to the DataContext's context! Thus, if I try to update (I update by Attach()ing to the object and then SubmitChanges()), I get that dreaded "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." message. Anyway, I'm starting to doubt that this technology was a good gamble. Has anyone got a decent architecture that they're willing to share? I'd really love to use this technology, but I feel like I spend a third of my time just debugging its quirks!


  • Flex 3: should I provide prepared data to my component, or have it process the data before display?

    - by grapkulec
    I'm starting to learn a little Flex, just for fun and maybe to prove that I can still learn something new :) I have an idea for a project, and one of its parts is a tree component which could display data in different ways depending on configuration.

    The idea: there is a list of objects having properties like id, date, time, name, description. Sometimes the list should be displayed like this:

        first level: date
        second level: time
        third level: name

    and sometimes like this:

        first level: year
        second level: month
        third level: day
        fourth level: time and name

    By level I mean level of nesting, of course. So we can have years, which have months, which have days, which have hours, and so forth.

    The problem: what is the best way to do this? I mean, should I prepare the data for the different ways of nesting outside the component, or even outside Flex? I can do it at the web-service level in C#, where I plan to have the database access layer, and send Flex nice, ready-to-display XML or an array of objects. But I wonder if that won't cause additional and maybe unnecessary network traffic. I tried to hack some code in my component to convert my data objects into XML or an ArrayCollection, but I don't know enough Flex and got stuck on eliminating duplicates and on getting specific data by some key value. Usually to do such things I have the STL with maps, sets and vectors, and I find Flex arrays and even Dictionary a little confusing (I've read the language reference and googled without any significant luck).

    The question: so, to sum things up, should I give my tree component data prepared just for the chosen type of display, or should I try to do it internally inside the component (or some helper class written in ActionScript)?


  • Django URL user id versus UserProfile id problem

    - by dana
    Hello there, I have a mini community where each user can search for and find another user's profile. UserProfile is a model class, indexed differently from the User model class (a user id is not equal to its userprofile id). But I cannot see a user's profile by typing the corresponding id into the URL; I only ever see the profile of the currently logged-in user. Why is that? I'd also like to have the username (also a primary key of the user table) in my URL, and NOT the id (a number). The guilty part of the code is below: what can I replace request.user with so that it will actually display the user I searched for, and not the one currently logged in?

        def profile_view(request, id):
            u = UserProfile.objects.get(pk=id)
            cv = UserProfile.objects.filter(created_by=request.user)
            blog = New.objects.filter(created_by=request.user)
            return render_to_response('profile/publicProfile.html', {
                'u': u,
                'cv': cv,
                'blog': blog,
            }, context_instance=RequestContext(request))

    In the urls of the accounts app:

        url(r'^profile_view/(?P<id>\d+)/$', profile_view, name='profile_view'),

    And in the template:

        <h3>Recent Entries:</h3>
        {% load pagination_tags %}
        {% autopaginate list 10 %}
        {% paginate %}
        {% for object in list %}
            <li>{{ object.post }} <br />
                Voted: {{ vote.count }} times.<br />
                {% for reply in object.reply_set.all %}
                    {{ reply.reply }} <br />
                {% endfor %}
                <a href=''> {{ object.created_by }}</a> <br />
                {{ object.date }} <br />
                <a href="/vote/save_vote/{{ object.id }}/">Vote this</a>
                <a href="/replies/save_reply/{{ object.id }}/">Comment</a>
            </li>
        {% endfor %}

    Thanks in advance!
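
    A sketch of the view filtering by the profile being viewed rather than the logged-in user (this assumes UserProfile has a `user` foreign key back to User; adjust to the actual field name):

        def profile_view(request, id):
            u = UserProfile.objects.get(pk=id)
            # filter by the searched-for profile's user, not request.user
            cv = UserProfile.objects.filter(created_by=u.user)
            blog = New.objects.filter(created_by=u.user)
            return render_to_response('profile/publicProfile.html',
                {'u': u, 'cv': cv, 'blog': blog},
                context_instance=RequestContext(request))

    For username-based URLs, the pattern can capture a string instead of \d+, e.g. r'^profile/(?P<username>[\w.@+-]+)/$', and the view can then look the profile up through the username.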

