Search Results

Search found 13429 results on 538 pages for 'model reject'.


  • Java classloader delegation model?

    - by Tony
    When loadClass() is called on a class loader, does the class loader first check whether the class has already been loaded, or does it delegate that check directly to its parent class loader? The Java API says: "When requested to find a class or resource, a ClassLoader instance will delegate the search for the class or resource to its parent class loader before attempting to find the class or resource itself." But the chapter on class loaders in the book Java Reflection in Action says: "The class loader calls findLoadedClass to check if the class has been loaded already. If a class loader does not find a loaded class, it calls loadClass on the parent class loader." Which is correct?

    Read the article

  • RESTful API: How to model JSON representation?

    - by Jan P.
    I am designing a RESTful API for a booking application. You can request a list of accommodations. And that's where I don't really know how to design the JSON representation. This is my XML representation: <?xml version="1.0" encoding="utf-8"?> <accommodations> <accommodation> <name>...</name> <category>couch</category> </accommodation> <accommodation> <name>...</name> <category>room</category> </accommodation> </accommodations> My first try to convert this to JSON resulted in this output (1): { "0": { "name": "...", "category": "couch" }, "1": { "name": "...", "category": "room" } } But as I looked at how other APIs did it, I found something looking more like this (2): [ { "name": "...", "category": "couch" }, { "name": "...", "category": "room" } ] I know version 1 is an object, and version 2 an array. But which one is better in this case?
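
    For what it's worth, most JSON serializers map a collection of resources to option 2, a top-level array, by default. A minimal C# sketch of that behaviour, assuming Json.NET as the serializer (the library choice and the Accommodation class are illustrative assumptions, not something the question specifies):

        using System.Collections.Generic;
        using Newtonsoft.Json;

        // Hypothetical DTO mirroring the <accommodation> element.
        public class Accommodation
        {
            public string Name { get; set; }
            public string Category { get; set; }
        }

        public static class AccommodationListDemo
        {
            public static string Serialize()
            {
                var accommodations = new List<Accommodation>
                {
                    new Accommodation { Name = "...", Category = "couch" },
                    new Accommodation { Name = "...", Category = "room" }
                };

                // A List<T> serializes to a JSON array (version 2 in the question):
                // [{"Name":"...","Category":"couch"},{"Name":"...","Category":"room"}]
                return JsonConvert.SerializeObject(accommodations);
            }
        }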

    Read the article

  • Avoiding repetition with libraries that use a setup + execute model

    - by lijie
    Some libraries offer the ability to separate setup and execution, especially if the setup portion has undesirable characteristics such as unbounded latency. If the program needs to have this reflected in its structure, then it is natural to have: void setupXXX(...); // which calls the setup stuff void doXXX(...); // which calls the execute stuff The problem with this is that the structure of setupXXX and doXXX is going to be quite similar (at least textually -- control flow will probably be more complex in doXXX). I'm wondering if there are any ways to avoid this. Example: let's say we're doing signal processing: filtering with a known kernel in the frequency domain. So setupXXX and doXXX would probably be something like... void doFilter(FilterStuff *c) { for (int i = 0; i < c->N; ++i) { doFFT(c->x[i], c->fft_forward_setup, c->tmp); doMultiplyVector(c->tmp, c->filter); doFFT(c->tmp, c->fft_inverse_setup, c->x[i]); } } void setupFilter(FilterStuff *c) { setupFFT(..., &(c->fft_forward_setup)); // assign the kernel to c->filter ... setupFFT(..., &(c->fft_inverse_setup)); }
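
    One commonly suggested way to avoid writing the same structure twice is to describe each stage once, with both a setup and an execute step, and let a single driver run the list for either phase. A hedged C# sketch of the idea (the interface and class names are illustrative, not from the question):

        using System.Collections.Generic;

        // Each stage knows how to prepare itself and how to run;
        // the pipeline structure is written down only once.
        public interface IStage
        {
            void Setup();
            void Execute();
        }

        public sealed class Pipeline
        {
            private readonly List<IStage> stages = new List<IStage>();

            public Pipeline Add(IStage stage) { stages.Add(stage); return this; }

            // setupXXX and doXXX collapse into two walks over the same description.
            public void SetupAll() { foreach (var stage in stages) stage.Setup(); }
            public void ExecuteAll() { foreach (var stage in stages) stage.Execute(); }
        }

    In the filter example, the forward FFT, the multiply and the inverse FFT would each become one stage, so adding or reordering a step no longer has to be done in two places.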

    Read the article

  • Is AppFog's pricing model sustainable?

    - by Kyle Finley
    I was looking at AppFog's pricing and they appear to be giving 2 GB of RAM away for free to non-paying customers. This seems unprecedented for PaaS providers: providers like Heroku and App Engine remove the app from memory if it has been inactive for a certain amount of time. Does Cloud Foundry work similarly? Am I wrong in assuming that in a few years AppFog's servers will be filled with inactive, non-paying applications?

    Read the article

  • Limiting records in a model action...

    - by bgadoci
    How do I limit the number of records that I am outputting with the following code to only 3 records: User.rb def workouts_on_which_i_commented comments.map{|x|x.workout}.uniq end def comment_stream workouts_on_which_i_commented.map do |w| w.comments end.flatten.sort{|x,y| y.created_at <=> x.created_at} end html.erb file <% current_user.comment_stream.each do |comment| %> ... <% end %> UPDATE: I'm using Rails 2.3.9

    Read the article

  • How to dynamically set validation attributes on a model in MVC 2?

    - by Tar
    Let's say I have the following model: public class Person { [NameIsValid] public string Name { get; set;} public string LastName { get; set; } } I created a custom attribute, NameIsValid, for this model. Let's say for ViewA I need the custom attribute validation in the model, but for ViewB I don't need this custom validation attribute. How can I dynamically set or remove the custom attribute from the model when needed? Thanks!
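
    Since attributes are fixed at compile time, one commonly suggested workaround is a separate view model per view rather than toggling the attribute at runtime. A hedged sketch (the stand-in attribute body and the view model names are illustrative assumptions):

        using System.ComponentModel.DataAnnotations;

        // Stand-in for the question's custom attribute; the real implementation lives in the asker's code.
        public class NameIsValidAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                return !string.IsNullOrEmpty(value as string); // illustrative rule only
            }
        }

        // Bound by ViewA: the custom name validation applies.
        public class PersonViewModelA
        {
            [NameIsValid]
            public string Name { get; set; }
            public string LastName { get; set; }
        }

        // Bound by ViewB: same shape, no custom validation.
        public class PersonViewModelB
        {
            public string Name { get; set; }
            public string LastName { get; set; }
        }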

    Read the article

  • How to save the values of one model in another?

    - by ragupathi
    I have a User model and a Language model, where the Language model contains different languages. I want the user to select languages from that model, and the selection should be stored for the corresponding user. Consider there are five languages A, B, C, D, E and the user has to select from them. Suppose user 1 selects A and C whereas user 2 selects B and D; then those languages have to be stored for each user. How can I do this? Please help me.

    Read the article

  • counter_cache not updating on the model after save

    - by sehnsucht
    I am using a counter_cache to let MySQL do some of the bookkeeping for me: class Container has_many :items end class Item belongs_to :container, :counter_cache => true end Now, if I do this: container = Container.find(57) item = Item.new item.container = container item.save in the SQL log there will be an INSERT followed by something like: UPDATE `containers` SET `items_count` = COALESCE(`items_count`, 0) + 1 WHERE `containers`.`id` = 57 which is what I expected it to do. However, container[:items_count] will be stale! ...unless I call container.reload to pick up the updated value, which in my mind sort of defeats part of the purpose of using :counter_cache instead of a custom-built one, especially since I may not actually want a reload before I try to access the items_count attribute. (My models are pretty code-heavy because of the nature of the domain logic, so I sometimes have to save and create multiple things in one controller call.) I understand I can tinker with callbacks myself, but this seems to me a fairly basic expectation of the simple feature. Again, if I have to write additional code to make it fully work, it might be easier to just implement a custom counter. What am I doing/assuming wrong?

    Read the article

  • Rails 2.3 using another model's named_scope or alternative

    - by mustafi
    Hi, let's say I have two models like so: class Comment < ActiveRecord::Base belongs_to :user named_scope :about_x, :conditions => "comments.text like '%x%'" end class User < ActiveRecord::Base has_many :comments end I would like to use the models so that I can return all the users and all comments with text like '%x%': all_user_comments_about_x = User.comments.about_x How do I proceed? Thank you

    Read the article

  • Acr.ExtDirect – Part 1 – Method Resolvers

    - by Allan Ritchie
    One of the most important things about any open source library, in my opinion, is to be as open as possible while avoiding having your library become invasive to your code/business model design. I personally could never stand marking my business and/or data access code with attributes everywhere. XML also isn’t really a favourite with many people these days since it comes with a startup performance hit and requires runtime compiling. I find that there is a whole ton of communication libraries out there currently requiring this (i.e. WCF, RIA, etc). Even though Acr.ExtDirect comes with its own set of attributes, you can piggy-back the [ServiceContract] & [OperationContract] attributes from WCF if you choose. It goes beyond that though; there are 2 other “out-of-the-box” implementations – Convention based & XML Configuration.

    Convention – I don’t actually recommend using this one since it opens up all of your public instance methods to remote execution calls.
    XML Configuration – This isn’t so bad, but it requires you to enter all of your methods and their operation types into the Castle XML configuration and, as I said earlier, XML isn’t the fav these days.

    So what are your options if you don’t like attributes, convention, or XML Configuration? Well, Acr.ExtDirect has its own extension base to give the API a list of methods and components to make available for remote execution.

        public interface IDirectMethodResolver {

            bool IsServiceType(ComponentModel model, Type type);
            string GetNamespace(ComponentModel model);
            string[] GetDirectMethodNames(ComponentModel model);
            DirectMethodType GetMethodType(ComponentModel model, MethodInfo method);
        }

    Now to implement our own method resolver:

        public class TestResolver : IDirectMethodResolver {

            #region IDirectMethodResolver Members

            /// <summary>
            /// Determine if you are calling a service
            /// </summary>
            /// <param name="model"></param>
            /// <param name="type"></param>
            /// <returns></returns>
            public bool IsServiceType(ComponentModel model, Type type) {
                return (type.Namespace == "MyBLL.Data");
            }

            /// <summary>
            /// Return the calling name for the client side
            /// </summary>
            /// <param name="model"></param>
            /// <returns></returns>
            public string GetNamespace(ComponentModel model) {
                return model.Name;
            }

            public string[] GetDirectMethodNames(ComponentModel model) {
                switch (model.Name) {
                    case "Products":
                        return new[] {
                            "GetProducts",
                            "LoadProduct",
                            "Save",
                            "Update"
                        };

                    case "Categories":
                        return new[] {
                            "GetProducts"
                        };

                    default:
                        throw new ArgumentException("Invalid type");
                }
            }

            public DirectMethodType GetMethodType(ComponentModel model, MethodInfo method) {
                if (method.Name.StartsWith("Save") || method.Name.StartsWith("Update"))
                    return DirectMethodType.FormSubmit;

                else if (method.Name.StartsWith("Load"))
                    return DirectMethodType.FormLoad;

                else
                    return DirectMethodType.Direct;
            }

            #endregion
        }

    And there you have it, your own custom method resolver. Pretty easy and pretty open ended!

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both.

    Contents
    - Why this is the most important part...
    - Understanding the classification and standard rights model
    - Identifying business use cases
    - Creating an effective IRM classification model
    - One single classification across the entire business
    - A context for each and every possible granular use case
    - What makes a good context?
    - Deciding on the use of roles in the context
    - Reviewing the features and security for context roles
    - Summary

    Why this is the most important part...

    Now the real work begins: installing and getting an IRM system running is as simple as following instructions. However, to actually have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily work flows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.

    - Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
    - K.I.S.S. (Keep It Simple Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    - Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions.

    Understanding the classification and standard rights model

    The goal of any successful IRM deployment is to balance the increase in security the technology brings against the risk of over complicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:

    - A group of related documents
    - The people that use the documents
    - The roles that these people perform
    - The rights that these people need to perform their role

    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts against which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

    Identifying business use cases

    Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them. A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article); then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

    Creating an effective IRM classification model

    Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

    The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

    One single classification across the entire business

    Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this.

    - Document security classification decisions are simple. You only have one context to choose from!
    - User provisioning is simple: just make sure everyone has a role in the only context in the business.
    - Administration is very low: if you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

    There are however some obvious downsides to this model.

    - All users have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is.
    - You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
    - Changing a user's role affects every single document ever secured.

    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case

    Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge amount of contexts. For example, let's just look at the resulting contexts for one engineering group.

    - Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Restricted External - Engineering Group 1 - Research
    - Q1FY2010 Restricted External - Engineering Group 1 - Design
    - Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential External - Engineering Group 1 - Research
    - Q1FY2010 Confidential External - Engineering Group 1 - Design
    - Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret External - Engineering Group 1 - Research
    - Q1FY2010 Top Secret External - Engineering Group 1 - Design
    - Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

    Now multiply the above by 6 for each engineering group: 18 contexts. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model?

    - You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
    - Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise.

    The disadvantages of such a classification model are significant...

    - Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    - From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. They may be able to print content from one but not another, and be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
    - Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
    - Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts do not give fine enough granularity.

    What makes a good context?

    Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

    - What is the topic?
    - Who are the legitimate contributors on this topic?
    - Who are the authorized readership?

    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

    Deciding on the use of roles in the context

    Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon. For each context, identify:

    Administrative roles
    - Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
    - Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
    - Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

    Document related roles
    - Contributors: the people who create and edit documents in this context.
    - Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
    - Readers: the people who read documents from this context.

    Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM.

    Reviewing the features and security for context roles

    At this point we have decided on a classification of information, and we understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity.

    Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

    The big print issue...

    Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain; it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding about whether to give print rights.

    - Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
    - Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk.
    - Print activity is audited, therefore you can monitor and react to users abusing print rights.

    Summary

    In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity to deliver them at the right level. In another article I'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • Sharing data between the graphics and physics engines in a game?

    - by PolGraphic
    I'm writing a game engine that consists of a few modules. Two of them are the graphics engine and the physics engine. I wonder if it's a good solution to share data between them? The two ways (sharing or not) look like this: Without sharing data GraphicsModel{ //some common for graphics and physics data like position //some only graphic data //like textures and detailed model's vertices that physics doesn't need }; PhysicsModel{ //some common for graphics and physics data like position //some only physics data //usually my physics data contains A LOT more information than graphics data } engine3D->createModel3D(...); physicsEngine->createModel3D(...); //connect graphics and physics data //e.g. update graphics model's position when the physics model's position changes I see two main problems: A lot of redundant data (like two positions for both physics and graphics data) Problem with updating data (I have to manually update graphics data when physics data changes) With sharing data Model{ //some common for graphics and physics data like position }; GraphicModel : public Model{ //some only graphics data //like textures and detailed model's vertices that physics doesn't need }; PhysicsModel : public Model{ //some only physics data //usually my physics data contains A LOT more information than graphics data } model = engine3D->createModel3D(...); physicsEngine->assignModel3D(&model); //will cast to //PhysicsModel for its purposes?? //when physics changes anything (like position) in model //(which it treats like PhysicsModel), the position for graphics data //will change as well (because it's the same model) Problems here: physicsEngine cannot create new objects, just "assign" existing ones from engine3D (somehow it looks less independent to me) Casting data in the assignModel3D function physicsEngine and graphicsEngine must be careful - they cannot delete data when they no longer need it (because the other one may still need it), but that's a rare situation. Moreover, they can just delete the pointer, not the object. Or we can assume that graphicsEngine will delete objects, physicsEngine just pointers to them. Which way is better? Which will produce more problems in the future? I like the second solution more, but I wonder why most graphics and physics engines prefer the first one (maybe because they normally make only a graphics or only a physics engine and somebody else connects them in the game?). Do they have any more hidden pros & cons?

    Read the article

  • How do I integrate production database hot fixes into a shared database development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, SQL Data Compare from RedGate, Mercurial repositories, TeamCity and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system, we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release. Separate from the above, we have database hot fixes that will need to be applied in between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts and run them on PRODUCTION. If we were in a dedicated database per developer model, we could simply automatically push hgprod back to hgdev and merge in the hot fix change (through TeamCity monitoring for hgprod checkins) and then developers would pick it up and merge it to their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server. My only workaround so far is to just have OPS assign a developer the hot fix ticket with a script attached, and then we run their hot fixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article

  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. What I expected to see was: But what I got was this: I'm really confused. Why is the mesh I created showing up in the bone list? And what's Root Node? Here are the .blend and .fbx files: here or here. Thanks.

    Read the article

  • Let a model instance choose an appropriate view class using a category. Is it good design?

    - by Denis Mikhaylov
    Assume I have an abstract base model class called MoneySource, and two subclasses, BankCard and CellularAccount. In MoneySourceListViewController I want to display a list of them, but with a different ListItemView for each MoneySource subclass. What if I define a category on MoneySource @interface MoneySource (ListItemView) - (Class)listItemViewClass; @end And then override it for each concrete subclass of MoneySource, returning the suitable view class. @implementation CellularAccount (ListItemView) - (Class)listItemViewClass { return [CellularAccountListView class]; } @end @implementation BankCard (ListItemView) - (Class)listItemViewClass { return [BankCardListView class]; } @end so I can ask the model object about its view, without violating MVC principles, and avoiding class introspection or if constructions. Thank you!

    Read the article

  • Core Data Migration - "Can't add source store" error

    - by Tofrizer
    Hi, in my iPhone app I'm using Core Data and I've made changes to my data model that cannot be automatically migrated over (i.e. added new relationships). I added the data model version (Design - Data Model - Add Model Version) and applied my new data model changes to the new version 2. I then created a mapping object model and set the Source and Destination models to their correct data models (old and new respectively). When I run the app and call the persistentStoreCoordinator, my app barfs with the following: 2010-02-27 02:40:30.922 XXXX[73578:20b] Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 UserInfo=0xfc2240 "Operation could not be completed. (Cocoa error 134110.)", { NSUnderlyingError = Error Domain=NSCocoaErrorDomain Code=134130 UserInfo=0xfbb3a0 "Operation could not be completed. (Cocoa error 134130.)"; reason = "Can't add source store"; } FWIW (not much, I think) I've also made the usual code changes in persistentStoreCoordinator to use the NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption (for future data model changes that can be automatically migrated). More relevantly, my managedObjectModel is created by calling initWithContentsOfURL where the file/resource type is "momd". I've tried updating both the source and destination model in the mapping model (Design - Mapping Model - Update XXX Model) as well as deleting the mapping model and recreating it. I've cleaned and rebuilt, but all to no avail. I still get the above error message. Any pointers/thoughts on how I can further debug or resolve this problem, please? I haven't posted any code snippets because this feels much more like a build environment issue (and my code is very standard - just the usual Core Data code to handle migrations using a mapping model, but I'm happy to show the code if it helps). Appreciate any help. Thanks

    Read the article

  • Box2dx: Usage of World.QueryAABB?

    - by Rosarch
    I'm using Box2dx with C#/XNA. I'm trying to write a function that determines if a body could exist in a given point without colliding with anything: /// <summary> /// Can gameObject exist with start Point without colliding with anything? /// </summary> internal bool IsAvailableArea(GameObjectModel model, Vector2 point) { Vector2 originalPosition = model.Body.Position; model.Body.Position = point; // less risky would be to use a deep clone AABB collisionBox; model.Body.GetFixtureList().GetAABB(out collisionBox); // how is this supposed to work? physicsWorld.QueryAABB(x => true, ref collisionBox); model.Body.Position = originalPosition; return true; } Is there a better way to go about doing this? How is World.QueryAABB supposed to work? Here is an earlier attempt. It is broken; it always returns false. /// <summary> /// Can gameObject exist with start Point without colliding with anything? /// </summary> internal bool IsAvailableArea(GameObjectModel model, Vector2 point) { Vector2 originalPosition = model.Body.Position; model.Body.Position = point; // less risky would be to use a deep clone AABB collisionBox; model.Body.GetFixtureList().GetAABB(out collisionBox); ICollection<GameObjectController> gameObjects = worldQueryEngine.GameObjectsForPredicate(x => ! x.Model.Passable); foreach (GameObjectController controller in gameObjects) { AABB potentialCollidingBox; controller.Model.Body.GetFixtureList().GetAABB(out potentialCollidingBox); if (AABB.TestOverlap(ref collisionBox, ref potentialCollidingBox)) { model.Body.Position = originalPosition; return false; // there is something that will collide at this point } } model.Body.Position = originalPosition; return true; }
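
    A hedged sketch of one way to use the callback itself to answer the question, relying only on the calls already shown above and assuming QueryAABB invokes the delegate for each fixture whose AABB overlaps the query box and stops early when the delegate returns false (standard Box2D query semantics):

        /// <summary>
        /// Can gameObject exist at point without colliding with anything?
        /// </summary>
        internal bool IsAvailableArea(GameObjectModel model, Vector2 point)
        {
            // Capture the AABB the body would occupy at the candidate point.
            Vector2 originalPosition = model.Body.Position;
            model.Body.Position = point; // as in the question; a deep clone would be less risky
            AABB collisionBox;
            model.Body.GetFixtureList().GetAABB(out collisionBox);

            // Move the body back before querying so it does not report itself.
            // Caveat: if its original position overlaps the candidate area, it still will.
            model.Body.Position = originalPosition;

            bool overlapFound = false;
            physicsWorld.QueryAABB(fixture =>
            {
                overlapFound = true; // something already occupies this region
                return false;        // assumption: returning false stops the query early
            }, ref collisionBox);

            return !overlapFound;
        }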

    Read the article

  • Big smart ViewModels, dumb Views, and any model, the best MVVM approach?

    - by Edward Tanguay
    The following code is a refactoring of my previous MVVM approach (Fat Models, skinny ViewModels and dumb Views, the best MVVM approach?) in which I moved the logic and INotifyPropertyChanged implementation from the model back up into the ViewModel. This makes more sense, since as was pointed out, you often you have to use models that you either can't change or don't want to change and so your MVVM approach should be able to work with any model class as it happens to exist. This example still allows you to view the live data from your model in design mode in Visual Studio and Expression Blend which I think is significant since you could have a mock data store that the designer connects to which has e.g. the smallest and largest strings that the UI can possibly encounter so that he can adjust the design based on those extremes. Questions: I'm a bit surprised that I even have to "put a timer" in my ViewModel since it seems like that is a function of INotifyPropertyChanged, it seems redundant, but it was the only way I could get the XAML UI to constantly (once per second) reflect the state of my model. So it would be interesting to hear anyone who may have taken this approach if you encountered any disadvantages down the road, e.g. with threading or performance. The following code will work if you just copy the XAML and code behind into a new WPF project. XAML: <Window x:Class="TestMvvm73892.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:TestMvvm73892" Title="Window1" Height="300" Width="300"> <Window.Resources> <ObjectDataProvider x:Key="DataSourceCustomer" ObjectType="{x:Type local:CustomerViewModel}" MethodName="GetCustomerViewModel"/> </Window.Resources> <DockPanel DataContext="{StaticResource DataSourceCustomer}"> <StackPanel DockPanel.Dock="Top" Orientation="Horizontal"> <TextBlock Text="{Binding Path=FirstName}"/> <TextBlock Text=" "/> <TextBlock Text="{Binding Path=LastName}"/> </StackPanel> <StackPanel DockPanel.Dock="Top" Orientation="Horizontal"> <TextBlock Text="{Binding Path=TimeOfMostRecentActivity}"/> </StackPanel> </DockPanel> </Window> Code Behind: using System; using System.Windows; using System.ComponentModel; using System.Threading; namespace TestMvvm73892 { public partial class Window1 : Window { public Window1() { InitializeComponent(); } } //view model public class CustomerViewModel : INotifyPropertyChanged { private string _firstName; private string _lastName; private DateTime _timeOfMostRecentActivity; private Timer _timer; public string FirstName { get { return _firstName; } set { _firstName = value; this.RaisePropertyChanged("FirstName"); } } public string LastName { get { return _lastName; } set { _lastName = value; this.RaisePropertyChanged("LastName"); } } public DateTime TimeOfMostRecentActivity { get { return _timeOfMostRecentActivity; } set { _timeOfMostRecentActivity = value; this.RaisePropertyChanged("TimeOfMostRecentActivity"); } } public CustomerViewModel() { _timer = new Timer(CheckForChangesInModel, null, 0, 1000); } private void CheckForChangesInModel(object state) { Customer currentCustomer = CustomerViewModel.GetCurrentCustomer(); MapFieldsFromModeltoViewModel(currentCustomer, this); } public static CustomerViewModel GetCustomerViewModel() { CustomerViewModel customerViewModel = new CustomerViewModel(); Customer currentCustomer = CustomerViewModel.GetCurrentCustomer(); MapFieldsFromModeltoViewModel(currentCustomer, customerViewModel); 
return customerViewModel; } public static void MapFieldsFromModeltoViewModel(Customer model, CustomerViewModel viewModel) { viewModel.FirstName = model.FirstName; viewModel.LastName = model.LastName; viewModel.TimeOfMostRecentActivity = model.TimeOfMostRecentActivity; } public static Customer GetCurrentCustomer() { return Customer.GetCurrentCustomer(); } //INotifyPropertyChanged implementation public event PropertyChangedEventHandler PropertyChanged; private void RaisePropertyChanged(string property) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(property)); } } } //model public class Customer { public string FirstName { get; set; } public string LastName { get; set; } public DateTime TimeOfMostRecentActivity { get; set; } public static Customer GetCurrentCustomer() { return new Customer { FirstName = "Jim", LastName = "Smith", TimeOfMostRecentActivity = DateTime.Now }; } } }

    Read the article

  • After extending the User model in Django, how do you create a ModelForm?

    - by mlissner
    I extended the User model in Django to include several other variables, such as location and employer. Now I'm trying to create a form that has the following fields: First name (from User) Last name (from User) Location (from UserProfile, which extends User via a foreign key) Employer (also from UserProfile) I have created a ModelForm: from django.forms import ModelForm from django.contrib import auth from alert.userHandling.models import UserProfile class ProfileForm(ModelForm): class Meta: # model = auth.models.User # this gives me the User fields model = UserProfile # this gives me the UserProfile fields So, my question is, how can I create a ModelForm that has access to all of the fields, whether they are from the User model or the UserProfile model? Hope this makes sense. I'll be happy to clarify if there are any questions.

    Read the article

  • Should I design the application or model (database) first?

    - by YonahW
    I am getting ready to start building a new web project in my spare time to bring to fruition an idea that has been bouncing around my head for a while. I have never settled whether I am better off first building the model and then the consuming application, or the other way around. What are the best practices? What would you build first and why? I imagine that in general the application should drive the model; however, the application, like many websites, really doesn't do much without the model. For some reason I find it easier at times to think in terms of the model, since the application is really just actions on the model. Is this a poor way of thinking about things? What advantages/disadvantages does each option have?

    Read the article

  • How can I not render a button in my view if a given property of my model has no value?

    - by Lee Warner
    I'm new to web development. In my view, I want to conditionally show a button if Model.TemplateLocation (which is a string) is not null or empty. Below is the code that is currently rendering the button: <div class="WPButton MyButton"> <%=Html.ActionLink(Model.TemplateLinkName, "DownloadTemplate", "ExternalData", new ArgsDownloadTemplate { Path = Model.TemplateLocation, FileName = Model.TemplateFileNameForDownload }, new{})%> </div> Can I wrap some C# code in <% %> tags to test Model.TemplateLocation's value before I render that? I was told to look into @style = "display:none" somehow. Could that be what I need?
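
    A minimal sketch of the usual approach in an ASPX view: wrap the markup in a server-side if so the button is only rendered when the property has a value (rather than emitting it and hiding it with display:none):

        <% if (!string.IsNullOrEmpty(Model.TemplateLocation)) { %>
            <div class="WPButton MyButton">
                <%=Html.ActionLink(Model.TemplateLinkName, "DownloadTemplate", "ExternalData",
                    new ArgsDownloadTemplate { Path = Model.TemplateLocation, FileName = Model.TemplateFileNameForDownload },
                    new{})%>
            </div>
        <% } %>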

    Read the article

  • What is good practice in .NET system architecture design concerning multiple models and aggregates

    - by BuzzBubba
    I'm designing a larger enterprise architecture and I'm in doubt about how to separate the models and design them. There are several points I'd like suggestions for: the models to define, and the way to define them. Currently my idea is to define: Core (domain) model Repositories to get data to that domain model from a database or other store Business logic model that would contain business logic, validation logic and more specific forms of data retrieval methods View models prepared for specifically formatted data output that would be parsed by views of different kinds (web, Silverlight, etc). For the first model I'm puzzled at what to use and how to define the model. Should this model's entities contain collections, and in what form? IList, IEnumerable or IQueryable collections? - I'm thinking of immutable collections, which IEnumerable is, but I'd like to avoid huge data collections and to offer my Business logic layer access with LINQ expressions so that query trees get executed at the Data level and retrieve only the really required data, for situations like the one where I'm retrieving a very specific subset of elements amongst thousands or hundreds of thousands. What if I have an item with several thousands of bids? I can't just make an IEnumerable collection of those on the model and then retrieve an item list in some Repository method or even Business model method. Should it be IQueryable so that I actually pass my queries to the Repository all the way from the Business logic model layer? Should I just avoid collections in my domain model? Should I avoid only some collections? Should I separate the Domain model and the Business logic model or integrate them? Data would be dealt with through repositories which would use Domain model classes. Should repositories be used directly, using only classes from the domain model as data containers? This is an example of what I had in mind. So, my Domain objects would look like (e.g.) public class Item { public string ItemName { get; set; } public int Price { get; set; } public bool Available { get; set; } private IList<Bid> _bids; public IQueryable<Bid> Bids { get { return _bids.AsQueryable(); } private set { _bids = value; } } public AddNewBid(Bid newBid) { _bids.Add(new Bid {.... } } Where Bid would be defined as a normal class. Repositories would be defined as data retrieval factories and used to get data into another (Business logic) model, which would again be used to get data to ViewModels, which would then be rendered by different consumers. I would define IQueryable interfaces for all aggregating collections to get flexibility and minimize data retrieved from the real data store. Or should I make the Domain Model "anemic", with pure data store entities and all collections defined for the business logic model? One of the most important questions is: where to have IQueryable typed collections? All the way from Repositories to the Business model, or not at all - exposing only solid IList and IEnumerable from Repositories and dealing with more specific queries inside the Business model, but having finer grained methods for data retrieval within Repositories. So, what do you think? Have any suggestions?
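
    As a small illustration of the trade-off weighed above, here are the two repository shapes side by side; a hedged C# sketch in which the interface and method names are illustrative, reusing the Item and Bid entities from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        // Option A: expose IQueryable and let the business layer compose queries.
        // The expression tree executes at the data level, but query concerns leak upward.
        public interface IItemRepositoryQueryable
        {
            IQueryable<Item> Items { get; }
        }

        // Option B: keep IQueryable inside the repository and expose finer grained,
        // intention revealing methods that return materialized collections.
        public interface IItemRepository
        {
            IList<Item> FindItems(Expression<Func<Item, bool>> predicate, int maxResults);
            IList<Bid> TopBidsForItem(Item item, int count);
        }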

    Read the article
