Search Results

Search found 28476 results on 1140 pages for 'information architecture'.


  • Accessing Repositories from Domain

    - by Paul T Davies
    Say we have a task logging system; when a task is logged, the user specifies a category and the task defaults to a status of 'Outstanding'. Assume in this instance that Category and Status have to be implemented as entities. Normally I would do this:

    Application Layer:

        public class TaskService
        {
            //...
            public void Add(Guid categoryId, string description)
            {
                var category = _categoryRepository.GetById(categoryId);
                var status = _statusRepository.GetById(Constants.Status.OutstandingId);
                var task = Task.Create(category, status, description);
                _taskRepository.Save(task);
            }
        }

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, Status status, string description)
            {
                return new Task { Category = category, Status = status, Description = description };
            }
        }

    I do it like this because I am consistently told that entities should not access the repositories, but it would make much more sense to me if I did this:

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                return new Task { Category = category, Status = _statusRepository.GetById(Constants.Status.OutstandingId), Description = description };
            }
        }

    The status repository is dependency injected anyway, so there is no real dependency, and this feels more to me like it is the domain that is making the decision that a task defaults to outstanding. The previous version feels like it is the application layer making that decision. And why are repository contracts often in the domain if this should not be a possibility? Here is a more extreme example, where the domain decides urgency:

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                var task = new Task { Category = category, Status = _statusRepository.GetById(Constants.Status.OutstandingId), Description = description };
                if (someCondition)
                {
                    if (someValue > anotherValue)
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.UrgentId);
                    }
                    else
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.SemiUrgentId);
                    }
                }
                else
                {
                    task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId);
                }
                return task;
            }
        }

    There is no way you would want to pass in all possible versions of Urgency, and no way you would want to calculate this business logic in the application layer, so surely this would be the most appropriate way? So is this a valid reason to access repositories from the domain?
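    A commonly suggested compromise, sketched here against the question's own types (IStatusRepository is an assumed interface name, not from the post): move the defaulting rule into a domain-layer factory with the repository injected, so the domain still owns the decision while the Task entity stays repository-free.

        public class TaskFactory
        {
            private readonly IStatusRepository _statusRepository;

            public TaskFactory(IStatusRepository statusRepository)
            {
                _statusRepository = statusRepository;
            }

            // The domain owns the rule "new tasks default to Outstanding";
            // the Task entity itself never touches a repository.
            public Task Create(Category category, string description)
            {
                var status = _statusRepository.GetById(Constants.Status.OutstandingId);
                return Task.Create(category, status, description);
            }
        }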

    Read the article

  • C# Role Provider for multiple applications

    - by Juventus18
    I'm making a custom RoleProvider that I would like to use across multiple applications in the same application pool. For the administration of roles (create new role, add users to role, etc.) I would like to create a master application that I could log in to and set the roles for each additional application. So for example, I might have AppA and AppB in my organization, and I need to make an application called AppRoleManager that can set roles for AppA and AppB. I am having trouble implementing my custom RoleProvider because it uses an initialize method that gets the application name from the config file, but I need the application name to be a variable (i.e. "AppA" or "AppB") passed as a parameter. I thought about just implementing the required methods, and then also having additional methods that take the application name as a parameter, but that seems clunky, i.e.:

        public override void CreateRole(string roleName)
        {
            // uses the ApplicationName property of this, which is set in web.config
            // creates role in db
        }

        public void CreateRole(string applicationName, string roleName)
        {
            // creates role in db with specified params
        }

    Also, I would prefer if people were prevented from calling CreateRole(string roleName), because the current instance of the class might have a different applicationName value than intended (what should I do here? throw NotImplementedException?). I tried just writing the class without inheriting RoleProvider, but it is required by the framework. Any general ideas on how to structure this project? I was thinking of making a wrapper class that uses the role provider and explicitly sets the application name before (and after) any calls to the provider, something like this:

        static class RoleProviderWrapper
        {
            public static void CreateRole(string pApplicationName, string pRoleName)
            {
                Roles.Provider.ApplicationName = pApplicationName;
                Roles.Provider.CreateRole(pRoleName);
                Roles.Provider.ApplicationName = "Generic";
            }
        }

    Is this my best bet?
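    One caution about the wrapper above: Roles.Provider is shared, so two concurrent requests can interleave the ApplicationName writes. A hedged alternative sketch (the class name, helper names and the "RolesDb" connection string entry are assumptions) is to keep one initialized provider per application and never mutate shared state:

        using System.Collections.Concurrent;
        using System.Collections.Specialized;
        using System.Web.Security;

        static class MultiAppRoles
        {
            private static readonly ConcurrentDictionary<string, SqlRoleProvider> Providers =
                new ConcurrentDictionary<string, SqlRoleProvider>();

            private static SqlRoleProvider For(string applicationName)
            {
                // Build each application's provider once; no ApplicationName juggling afterwards.
                return Providers.GetOrAdd(applicationName, name =>
                {
                    var provider = new SqlRoleProvider();
                    var config = new NameValueCollection
                    {
                        { "applicationName", name },
                        { "connectionStringName", "RolesDb" } // assumed connection string entry
                    };
                    provider.Initialize("Roles_" + name, config);
                    return provider;
                });
            }

            public static void CreateRole(string applicationName, string roleName)
            {
                For(applicationName).CreateRole(roleName);
            }
        }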

    Read the article

  • Finite state machine in C++

    - by Electro
    So, I've read a lot about using FSMs to do game state management: things like what an FSM is, and using a stack or set of states for building one. I've gone through all that. But I'm stuck at writing an actual, well-designed implementation of an FSM for that purpose. Specifically, how does one cleanly resolve the problem of transitioning between states, (how) should a state be able to use data from other states, and so on. Does anyone have any tips on designing and writing an implementation in C++, or better yet, code examples?
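    Not the only design, but one common shape: states as polymorphic objects with Enter/Exit/Update hooks, managed by a stack so transitions are explicit pushes, pops and replacements, and any data a state needs is handed over at transition time rather than dug out of the previous state. A minimal sketch (in C# for brevity; the structure maps directly onto C++ virtual classes):

        using System.Collections.Generic;

        public abstract class GameState
        {
            // Payload carries data handed over from the previous state.
            public virtual void Enter(object payload) { }
            public virtual void Exit() { }
            public abstract void Update(float dt);
        }

        public class StateMachine
        {
            private readonly Stack<GameState> _stack = new Stack<GameState>();

            public void Push(GameState next, object payload = null)
            {
                next.Enter(payload);
                _stack.Push(next);
            }

            public void Pop()
            {
                if (_stack.Count > 0) _stack.Pop().Exit();
            }

            // A "transition" is just pop-then-push, so it can never half-happen.
            public void Replace(GameState next, object payload = null)
            {
                Pop();
                Push(next, payload);
            }

            public void Update(float dt)
            {
                if (_stack.Count > 0) _stack.Peek().Update(dt);
            }
        }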

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state and the referenced customer event to determine which field activity type to generate.

    Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), Start/Stop (STSP). Note the field values/codes defined for each event.

    CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.

    So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state and the referenced user-defined customer event.

    All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service. Basically, the Post Cancel Algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.

    So what exactly was happening? Now we come to the actual question: what is the impact of having a user-defined customer event?

    System-defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.

    There are a few programs which have routines to first validate the completion of disconnection field activities which were raised as a result of customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment.
    The Post Cancel Algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description): "This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter." Notice the underlined text. This algorithm implicitly checks for Field Activities having completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

    Now if we look back at the customer's issue, we can see that the Post Cancel Algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

    To conclude, if you introduce new customer events that extend or simulate base customer events (the ones that are included in the base product), ensure that there is no other impact, either direct or indirect, on other business functions that the application has to offer.

    Read the article

  • Building a database class in PHP

    - by Sprottenwels
    I wonder if I should write a database class for my application, and if so, how to accomplish it? Over there on SO, a guy mentioned it should be written as an abstract class. However, I can't understand why this would be a benefit. Do I understand correctly that if I wrote an abstract class, every other class whose methods need a database connection could simply extend this abstract class and have its own database object? If so, how is this different from a "normal" class where I could instantiate a database object? Another method would be to completely forget about my own class and to instantiate a mysqli object on demand. What do you recommend?
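    For contrast, a hedged sketch of the composition alternative, shown in C# with the generic ADO.NET interfaces (the PHP shape, with an injected mysqli instance, is analogous; all names here are illustrative): classes that need the database receive a connection factory instead of extending a database base class, which keeps them uncoupled and easy to fake in tests.

        using System.Data;

        public interface IConnectionFactory
        {
            // Contract: returns an already-opened connection.
            IDbConnection Open();
        }

        public class UserRepository
        {
            private readonly IConnectionFactory _connections;

            public UserRepository(IConnectionFactory connections)
            {
                _connections = connections;
            }

            public string GetNickname(int userId)
            {
                using (var conn = _connections.Open())
                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "SELECT nickname FROM users WHERE id = @id";
                    var p = cmd.CreateParameter();
                    p.ParameterName = "@id";
                    p.Value = userId;
                    cmd.Parameters.Add(p);
                    return (string)cmd.ExecuteScalar();
                }
            }
        }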

    Read the article

  • Working on someone else's code

    - by Xavi Valero
    I have hardly a year's experience in coding. Since I started working, most of the time I have been working on someone else's code, either adding new features over the existing ones or modifying the existing features. The guy who wrote the actual code doesn't work in my company any more. I am having a hard time understanding his code and doing my tasks. Whenever I have tried modifying the code, I have in some way messed up working features. What should I keep in mind while working on someone else's code?

    Read the article

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake3 is handled using a keybinding relay, whereby each keypress is matched against a 'binding' which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing your input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command. This seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed at a later date is a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe for journaling purposes? Thanks for any insights!
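    For concreteness, a hedged toy sketch of the relay idea (names and types are illustrative, not Quake3's actual code): bindings map keys to command strings, and each press is queued with its timestamp so the consumer can apply it "as of" the time it happened, which is also what makes rebinding, scripted multi-commands and journaling nearly free.

        using System;
        using System.Collections.Generic;

        public class BindingRelay
        {
            private readonly Dictionary<ConsoleKey, string> _bindings = new Dictionary<ConsoleKey, string>();
            private readonly Queue<(double Time, string Command)> _pending = new Queue<(double, string)>();

            public void Bind(ConsoleKey key, string command) => _bindings[key] = command;

            public void OnKeyDown(ConsoleKey key, double timestamp)
            {
                if (_bindings.TryGetValue(key, out var command))
                    _pending.Enqueue((timestamp, command)); // e.g. "+fire" at t = 12.437
            }

            public void Drain(Action<double, string> execute)
            {
                while (_pending.Count > 0)
                {
                    var (time, command) = _pending.Dequeue();
                    execute(time, command); // interpreter applies the command as of its timestamp
                }
            }
        }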

    Read the article

  • What to choose API based server or Socket based server for data driven application

    - by Imdad
    I am working on a project which has a desktop application for Mac/Cocoa, a native application for iPhone and another native application for iPad. All the applications do almost the same thing. They are data-driven applications. Every communication with the server is made via a RESTful API developed in PHP. When a user logs in, a lot of data is fetched from the server, and to remain in sync with the server, polling is done. As there is a lot of data to poll, it makes the applications slower and unreliable. A possible solution that comes to my mind is to use a socket-based server. My question is: will it reasonably improve the performance? And which (socket) technology will be good as a server-side solution for a data-driven application? I have heard a lot about Node.js. Please give your suggestions.

    Read the article

  • Staying OO and Testable while working with a database

    - by Adam Backstrom
    What are some OOP strategies for working with a database but keeping things testable? Say I have a User class and my production environment works against MySQL. I see a couple of possible approaches, shown here using PHP:

    Pass in a $data_source with interfaces for load() and save(), to abstract the backend source of data. When testing, pass a different data store.

        $user = new User( $mysql_data_source );
        $user->load( 'bob' );
        $user->setNickname( 'Robby' );
        $user->save();

    Use a factory that accesses the database and passes the result row to User's constructor. When testing, manually generate the $row parameter, or mock the object in UserFactory::$data_source. (How might I save changes to the record?)

        class UserFactory {
            static $data_source;
            public static function fetch( $username ) {
                $row = self::$data_source->get( [params] );
                $user = new User( $row );
                return $user;
            }
        }

    I have Design Patterns and Clean Code here next to me, but I'm struggling to find applicable concepts.
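    A hedged C# rendering of the first approach, to make the seam explicit (interface and class names are illustrative): the entity talks only to a data-source interface, and tests substitute an in-memory fake for the MySQL-backed implementation.

        using System.Collections.Generic;

        public interface IUserDataSource
        {
            IDictionary<string, string> Load(string username);
            void Save(string username, IDictionary<string, string> fields);
        }

        // Production would wrap MySQL; tests use this in-memory fake instead.
        public class InMemoryUserDataSource : IUserDataSource
        {
            private readonly Dictionary<string, IDictionary<string, string>> _rows =
                new Dictionary<string, IDictionary<string, string>>();

            public IDictionary<string, string> Load(string username) => _rows[username];
            public void Save(string username, IDictionary<string, string> fields) => _rows[username] = fields;
        }

        public class User
        {
            private readonly IUserDataSource _dataSource;
            private IDictionary<string, string> _fields = new Dictionary<string, string>();
            private string _username;

            public User(IUserDataSource dataSource) { _dataSource = dataSource; }

            public void Load(string username) { _username = username; _fields = _dataSource.Load(username); }
            public void SetNickname(string nickname) { _fields["nickname"] = nickname; }
            public void Save() { _dataSource.Save(_username, _fields); }
        }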

    Read the article

  • What is enterprise software, exactly?

    - by good_computer
    I don't understand the difference between "normal" software and enterprise software. Even after reading these... "Enterprise Software" on Wikipedia "Enterprise Software Is Sexy Again" on Techcrunch "The Great Enterprise Software Swindle" on Coding Horror I can't really wrap my head around the real differences. Is there any difference at all between the two? Why do people say enterprise software sucks?

    Read the article

  • Is it possible to migrate struts/spring based application to GWT?

    - by Satish Pandey
    I am using the combination of Spring, Spring Security, Struts and iBatis in my application. Now I am looking to migrate the Struts UI to GWT, so the new combination must be Spring, Spring Security, GWT and iBatis. I applied a layered approach when developing my application, and in the Controller/UI layer I am using Struts. I want to replace Struts with GWT in the Controller/UI layer. Is it possible to use GWT without affecting the other layers (DAO/BL/SL)?

    Read the article

  • How to design application for scaling the application?

    - by Muhammad
    I have one application which handles events from hardware connected to the same computer's PCIe slots. The maximum number of PCIe slots on the motherboard is two, and I have utilized both slots. Now, for scaling the application, I need either more PCIe slots in the same computer, or to use another computer. So consider that I am using another computer with the same application and hardware connected to its PCIe slots. My problem is that I want to design an application over it which can access both computers' hardware devices and do the processing on them. The processed data should be sent back to the respective PC's hardware. Please refer to the attached diagram for the expansion.

    Read the article

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    Given that Oracle Utilities Application Framework 4 is available as part of Mobile Work Force Management and other products progressively, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry, and it is on a new security feature called Login Id.

    In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths. It is very common to hear that email addresses are being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in your preferred security store. You then configured your J2EE Web Application Server to use the alias as credentials. This was sometimes a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky. If not, some Java code had to be written to implement the solution.

    In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication but the short userid for authorization and auditing. The user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for future or changed functionality. You will see reference to this fact in the blog entries I will be composing over the next few months.

    We have also thought about flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default, for backward compatibility) or can be different. Both the Login Id and Userid have to be unique. This avoids sharing of credentials and is also backward compatible. You can manually enter the Login Id or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, then we will not autogenerate a short userid, as the rules for this can vary from site to site. You have a number of options there. Most identity provisioning tools can generate a short userid at user creation time, and this can be used. If you do not use provisioning tools, then you can write a class extension using the SDK to autogenerate the userid based upon your site's preference. When we designed the feature there were lots of styles of generating userids (random, initials and surname, numbers, etc.). We could not really see a clear winner in that respect, so we just allowed the extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way; this is why we released an Oracle Identity Manager integration with the framework.

    The Login Id is also case-sensitive, which was not supported for the userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.

    Read the article

  • Get entities ids from two similar collections using one method

    - by Patryk Roszczyniala
    I've got two maps:

        Map<Integer, ZooEntity> zoos;
        Map<Integer, List<ZooEntity>> groupOfZoos;

    These operations will return collections of values:

        Collection<ZooEntity> cz = zoos.values();
        Collection<List<ZooEntity>> czList = groupOfZoos.values();

    What I want to achieve is to get a list of all zoo ids: List<Integer> zooIds = cz ids + czList ids. Of course I can create two methods to do what I want:

        public List<Integer> getIdsFromFlatList(Collection<ZooEntity> list) {
            List<Integer> ids = new ArrayList<Integer>();
            for (ZooEntity z : list) {
                ids.add(z.getId());
            }
            return ids;
        }

        public List<Integer> getIdsFromNestedList(Collection<List<ZooEntity>> list) {
            List<Integer> ids = new ArrayList<Integer>();
            for (List<ZooEntity> zList : list) {
                for (ZooEntity z : zList) {
                    ids.add(z.getId());
                }
            }
            return ids;
        }

    As you can see, those two methods are very similar, and here is my question: is it good to create one method (for example using generics) which will get ids from those two collections (zoos and groupOfZoos)? If yes, what should it look like? If no, what is the best solution? BTW, this is only an example. I've got a very similar problem at work and I want to do it in a pretty way (I can't change the entities; I can change only the getIds...() methods).
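    A hedged sketch of one answer, written in C# (in Java, a small helper that flattens the nested collection into a single Iterable plays the same role as SelectMany): write the id extraction once over a flat sequence, and flatten the nested collection at the call site.

        using System.Collections.Generic;
        using System.Linq;

        public class ZooEntity
        {
            public int Id { get; set; }
        }

        public static class ZooIds
        {
            // One method over a flat sequence replaces both loops.
            public static List<int> GetIds(IEnumerable<ZooEntity> zoos) =>
                zoos.Select(z => z.Id).ToList();
        }

        // Call sites:
        //   var flatIds   = ZooIds.GetIds(zoos.Values);
        //   var nestedIds = ZooIds.GetIds(groupOfZoos.Values.SelectMany(list => list));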

    Read the article

  • Generating Report for NUnit

    - by thangchung
    All source code for this post can be found on my GitHub. Some time ago, I received a request from people asking how they can generate reports of test results using NUnit. In fact, I had never done this. In my little programming world, I only cared about the test results, red-green-refactor, and that was it. The question was quite unexpected. I knew that I could use NCover to generate reports, but NCover's reports are too simple; they do not give details on the number of test cases, test methods, ... So I began looking into creating an interesting report for NUnit.

    I was lucky to find an open source project here. Its authors call it NUnit2Report, but one disadvantage is that it only runs on .NET 1.0, which is really old compared to the current version, 4.0. I downloaded the preview, but I could not run it. I had to open its source code, and found that it uses XSLT to convert the NUnit output from XML to HTML. Nothing really special, because I also knew that after an NUnit run, an output file with an XML extension is created. The author simply uses this file to convert to HTML using XSLT. And I decided to convert the project to .NET 4.0, because then I would not have to code from scratch. The conversion took me some time, but I was lucky that I finally got what I wanted. Thanks to Gilles for this OSS; I will send him a mail to thank him for his efforts in putting this out as open source.

    Now I will show you how to do it. I used NAnt for the automated build and NUnit for running the test cases, and I use the Selenium testing framework. After writing three test cases using Selenium, I ran NUnit and got the following results: 1 failure and 2 successes. In the bin directory of this project there will be the NUnit output file as shown below. Then I created a build file, and a bat file for easy running (PowerShell could be used here as well). Double-click the bat file to create a report, and finally open the index.html file in the folder to view the report. As everyone can see, the test cases are divided very clearly, which meets my requirements. This is really good.

    Once again I really thank Gilles for NUnit2Report. You can contact him via the mail address [email protected] or the website http://nunit2report.sourceforge.net. It is really useful for reporting to QA. Hopefully this post will help anyone interested in generating reports for NUnit.
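    The core of the trick is small enough to sketch. NUnit leaves a TestResult.xml behind after a run, and applying an XSLT stylesheet (such as the one NUnit2Report ships) takes a few lines with the standard BCL API; the file names below are assumptions.

        using System.Xml.Xsl;

        class ReportBuilder
        {
            static void Main()
            {
                var xslt = new XslCompiledTransform();
                // Stylesheet taken from NUnit2Report (file name assumed).
                xslt.Load("NUnit-HTML.xslt");
                // NUnit's XML output in, HTML report out.
                xslt.Transform("TestResult.xml", "index.html");
            }
        }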

    Read the article

  • How to apply Data Oriented Design with Object Oriented Programming?

    - by Pombal
    I've read lots of articles about Data Oriented Design (DOD) and I understand it, but I can't design an Object Oriented Programming (OOP) system with DOD in mind; I think my OOP education is blocking me. How should I think in order to mix the two? The objective is to have a nice OOP interface while using DOD behind the scenes. I saw this too, but it didn't help much: http://stackoverflow.com/questions/3872354/how-to-apply-dop-and-keep-a-nice-user-interface
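    One hedged sketch of the usual compromise (names here are illustrative): keep the data in contiguous parallel arrays, which is the DOD part, and expose a lightweight handle struct so callers still get object-style syntax, which is the OOP part.

        public class ParticleSystem
        {
            private readonly float[] _x;
            private readonly float[] _y;
            private readonly float[] _vx;
            private readonly float[] _vy;
            public readonly int Count;

            public ParticleSystem(int count)
            {
                Count = count;
                _x = new float[count]; _y = new float[count];
                _vx = new float[count]; _vy = new float[count];
            }

            // The hot loop touches plain arrays, not objects: cache-friendly.
            public void Update(float dt)
            {
                for (int i = 0; i < Count; i++)
                {
                    _x[i] += _vx[i] * dt;
                    _y[i] += _vy[i] * dt;
                }
            }

            // OO-feeling facade: a lightweight handle, no per-particle allocation.
            public Particle this[int index] => new Particle(this, index);

            public readonly struct Particle
            {
                private readonly ParticleSystem _owner;
                private readonly int _index;
                public Particle(ParticleSystem owner, int index) { _owner = owner; _index = index; }
                public float X { get => _owner._x[_index]; set => _owner._x[_index] = value; }
                public float Y { get => _owner._y[_index]; set => _owner._y[_index] = value; }
            }
        }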

    Read the article

  • Access Control and Accessibility in Oracle IRM 11g

    - by martin.abrahams
    A recurring theme you'll find throughout this blog is that IRM needs to balance security with usability and manageability. One of the innovations in Oracle IRM 11g typifies this, as we have introduced a new right that may be included in any role - Accessibility. When creating or modifying a role, you simply select Accessibility along with Open, Print, Edit or whatever rights you want to include in the role. You might, for example, have parallel roles of Reader and Reader with Accessibility and Contributor and Contributor with Accessibility. The effect of the Accessibility right is to relax some of the protection of content in use such that selected users can use accessibility tools. For example, a user with the Accessibility right would be able to use the screen magnification tool, which IRM would ordinarily prevent because it involves screen capture. This new right makes it easy for you to apply security to documents yet, subject to suitable approval processes, cater for the fact that a subset of users might be disproportionately inconvenienced by some of the normal usage constraints. Rather than make those users put up with the restrictions, or perhaps exempt them from using sealed documents altogether, this new right allows you to accommodate them in a controlled manner, and to balance security with corporate accessibility goals.

    Read the article

  • How do you develop web applications? [closed]

    - by ck3g
    How do you and/or your team develop your web applications? Language, framework or platform doesn't matter; I would like to know about the structure of your environment. For example: using an IDE on a workstation with project files on a remote host accessed via SFTP, so files are saved directly on the remote host; all files are local and are uploaded to the remote host on save; files are local, the web server runs on the local computer, and testing happens at localhost; etc. You could also write about the benefits of your approach; this will be useful for me. Thanks. upd: There must be a question, so here it is: which approach is the best, in your opinion?

    Read the article

  • What is involved for a simple UDP game?

    - by acidzombie24
    I once tried to write a simple game with UDP in a week as a throwaway test. It went horribly; I threw it away early. The main problem I had was restoring the game state of all players/enemies/objects to an old state and fast-forwarding the game to the point in time the player is playing (e.g. half a second before a jump; a little early or late can make the player miss the jump). Maybe this method is not the easiest way? I suspect it is, but I designed it wrong from the beginning and realized it at the end of the 2nd day (so I didn't learn too much or waste that much time). For myself and others: what is involved in a simple UDP game, and how do I write one? Or how do I solve the prediction problem of restoring to a state properly? I'll mark this as CW because I know there will be lots of helpful answers.
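    For the rewind/fast-forward part specifically, a hedged sketch of the usual structure: keep a ring buffer of per-tick snapshots, and when a correction arrives for an old tick, restore that snapshot and re-simulate the saved inputs up to the present. Everything here (the state type, the simulate callback) is an assumption standing in for a real game.

        using System;

        public class Rollback<TState>
        {
            private readonly TState[] _history;
            private readonly Func<TState, int, TState> _simulateTick; // applies the stored inputs for one tick

            public Rollback(int capacity, Func<TState, int, TState> simulateTick)
            {
                _history = new TState[capacity];
                _simulateTick = simulateTick;
            }

            public void Record(int tick, TState state) => _history[tick % _history.Length] = state;

            // Called when the server's authoritative state for oldTick disagrees with ours.
            public TState ResimulateFrom(int oldTick, int currentTick, TState corrected)
            {
                var state = corrected;
                for (int t = oldTick; t < currentTick; t++)
                {
                    state = _simulateTick(state, t); // replay saved local inputs on top of the correction
                    Record(t + 1, state);
                }
                return state;
            }
        }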

    Read the article

  • Keeping game model and graphics/animation separate but in sync

    - by AJM
    Suppose I'm building a chess game where I want to have animations. Pieces glide to their new squares when moved. Pieces perform attack animations when capturing other pieces. I'm not sure how to effectively separate the data and logic needed for these animations and the actual game model (in the MVC sense). The pieces themselves should ideally not have to worry about their pixel coordinates or current animation frame. At the same time, many changes to the model are effectively driven by animations. A moved piece changes its position after (before?) its sprite is done gliding. A piece is removed from the board after the capturing piece is finished its attack animation. How would you suggest I manage the game model, the graphics and animations, and their relationships? For example, where would the animations "live"? How would animations be created and managed in response to player moves? How would animations drive updates to the game model, or how would the game model drive animations?
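    One hedged way to keep the model pixel-free (names illustrative, not a definitive design): the controller translates a validated move into an animation whose completion callback commits the model change, so the model never knows about frames and the view never decides rules.

        using System;

        public class MoveAnimation
        {
            private readonly float _duration;
            private float _elapsed;
            private readonly Action _onComplete; // e.g. () => board.ApplyMove(move)

            public MoveAnimation(float duration, Action onComplete)
            {
                _duration = duration;
                _onComplete = onComplete;
            }

            // Returns true while still running; commits the model change exactly once at the end.
            public bool Update(float dt)
            {
                _elapsed += dt;
                if (_elapsed < _duration) return true;
                _onComplete();
                return false;
            }
        }

        // Controller side, each frame:
        //   animations.RemoveAll(a => !a.Update(dt));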

    Read the article

  • Diagram to show code responsibility

    - by Mike Samuel
    Does anyone know how to visually diagram the ways in which the flow of control in code passes between code produced by different groups, and how that affects the amount of code that needs to be carefully written/reviewed/tested for system properties to hold? What I am trying to help people visualize are arguments of the form: for property P to hold, n_d developers have to write application code, Ca, without certain kinds of errors, and n_m maintainers have to make sure that the code continues to not have these kinds of errors over the project lifetime. We could reduce the error rate by educating the n_d developers and n_m maintainers. For us to be confident that the property holds, n_s specialists still need to test or check |Ca| lines of code and continue to test/check the changes by the n_m maintainers. Alternatively, we could be confident that P holds if all code paths that could violate P went through tool code, Ct, written by our specialists. In our case: test suites alone cannot give confidence that P holds; n_d » n_s; n_m » n_s; and |Ca| » |Ct|; so writing and maintaining Ct is economical, frees up our developers to worry about other things, and reduces the ongoing education commitment by our specialists. Or: those conditions do not hold, so focusing on education and testing is preferable.

    Example 1: As a concrete example, suppose we want to ensure that our web service only produces valid JSON output. Our web service provides several query and mutation operators that can be composed in interesting ways. We could try to educate everyone who maintains those operations about the JSON syntax, the importance of conformance, and the libraries available, so that when they write to an output buffer, every possible sequence of appends results in syntactically valid JSON. Alternatively, we don't expose an output stream handle to application code, and instead expose a JSON sink, so that every code path that writes a response is channeled through a JSON sink that is written and maintained by a specialist who knows JSON syntax and can use well-written libraries to produce only valid output.

    Example 2: We need to make sure that a service that receives a URL from an untrusted source and tries to fetch its content does not end up revealing sensitive files from the file system, like file:///etc/passwd. If there is a single standard way that any developer familiar with the application language's libraries would use to fetch URLs, which has file-system access turned off by default, then simply educating developers about the standard mechanism, and testing that file probing fails for some inputs, will probably be sufficient.
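    To make Example 1 concrete, a hedged toy version of such a sink (deliberately minimal; a real one would also escape control characters and enforce nesting rules): application code receives this object instead of the raw stream, so every response path can only emit well-formed JSON.

        public sealed class JsonSink
        {
            private readonly System.Text.StringBuilder _out = new System.Text.StringBuilder();
            private bool _needsComma;

            public JsonSink BeginObject() { _out.Append('{'); _needsComma = false; return this; }
            public JsonSink EndObject() { _out.Append('}'); _needsComma = true; return this; }

            public JsonSink Field(string name, string value)
            {
                if (_needsComma) _out.Append(',');
                AppendString(name); _out.Append(':'); AppendString(value);
                _needsComma = true;
                return this;
            }

            // Simplified escaping: quotes and backslashes only.
            private void AppendString(string s)
            {
                _out.Append('"');
                foreach (var c in s)
                {
                    if (c == '"' || c == '\\') _out.Append('\\');
                    _out.Append(c);
                }
                _out.Append('"');
            }

            public override string ToString() => _out.ToString();
        }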

    Read the article

  • CacheAdapter 2.4 – Bug fixes and minor functional update

    - by Glav
    Note: If you are unfamiliar with the CacheAdapter library and what it does, you can read all about its awesome ability to utilise memory, Asp.Net Web, Windows Azure AppFabric and memcached caching implementations via a single unified, simple to use API from here and here. The CacheAdapter library is receiving an update to version 2.4 and is currently available on Nuget here.

    Update: The CacheAdapter has actually just had a minor revision to 2.4.1. This significantly increases the performance and reliability in the memcached scenario under more extreme loads. General to moderate usage won't see any noticeable difference though.

    Bugs: This latest version fixes a bug that is only present in the memcached implementation and is only seen at rare, intermittent times (making it particularly hard to find). The bug is where a cache node would be removed from the farm when errors in deserialization of cached objects occurred due to serialised data not being read from the stream in its entirety. The code also contains enhancements to better surface serialization exceptions to aid in the debugging process. This is also specifically targeted at the memcached implementation. This is important when moving from something like the memory or Asp.Net Web caching mechanisms to memcached, where the serialization rules are not as lenient. There are a few other minor bug fixes, code cleanup and a little refactoring.

    Minor feature addition: In addition to this bug fix, many people have asked for a single setting to either enable or disable the cache. In this version, you can disable the cache by setting the IsCacheEnabled flag to false in the application configuration file, something like the example below:

        <Glav.CacheAdapter.MainConfig>
            <setting name="CacheToUse" serializeAs="String">
                <value>memcached</value>
            </setting>
            <setting name="DistributedCacheServers" serializeAs="String">
                <value>localhost:11211</value>
            </setting>
            <setting name="IsCacheEnabled" serializeAs="String">
                <value>False</value>
            </setting>
        </Glav.CacheAdapter.MainConfig>

    Your reasons to use this feature may vary (perhaps some performance testing or problem diagnosis). At any rate, disabling the cache will cause every attempt to retrieve data from the cache to result in a cache miss, returning null. If you are using the ICacheProvider with the delegate/Func<T> syntax to populate the cache, the delegate method will get executed every single time. For example, when the cache is disabled, the following delegate/Func<T> code will be executed every time:

        var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddHours(1), () =>
        {
            // With the cache disabled, this data access code is executed on every
            // attempt to get this data via the CacheProvider.
            var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 };
            return someData;
        });

    One final note: if you access the cache directly via the ICache instance, instead of the higher level ICacheProvider API, you bypass this setting and still access the underlying cache implementation. Only the ICacheProvider instance observes the IsCacheEnabled setting.

    Thanks to those individuals who have used this library and provided feedback. If you have any suggestions or ideas, please submit them to the issue register on bitbucket (which is where you can grab all the source code from too).

    Read the article

  • How to store character moves (sprite animations)?

    - by Saad
    So I'm thinking about making a small rpg, mainly to test out different design patterns I've been learning about. But the one question that I'm not too sure on how to approach is how to store an array of character moves in the best way possible. So let's say I have arrays of different sprites. This is how I'm thinking about implementing it: array attack = new array (10); array attack2 = new array(5); (loop) //blit some image attack.push(imageInstance); (end loop) Now every time I want the animation I call on attack or attack2; is there a better structure? The problem with this is let's say there are 100 different attacks, and a player can have up to 10 attacks equipped. So how do I tell which attack the user has; should I use a hash map?
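    A hedged sketch of one tidier structure (C# here; names are illustrative): keep every animation in one shared name-to-animation table, and have a character store only the names of its equipped attacks, so "which attacks does the player have" is a small list and playback is a dictionary lookup rather than per-character arrays of images.

        using System.Collections.Generic;

        public class Animation
        {
            private readonly List<string> _frames; // image ids/handles; strings here for brevity
            private int _current;

            public Animation(IEnumerable<string> frames) { _frames = new List<string>(frames); }

            public string NextFrame()
            {
                var frame = _frames[_current];
                _current = (_current + 1) % _frames.Count;
                return frame;
            }
        }

        public class Character
        {
            // All 100 attacks live in one shared table...
            public static readonly Dictionary<string, Animation> AttackLibrary =
                new Dictionary<string, Animation>();

            // ...and a character only stores the names of its (up to 10) equipped ones.
            public List<string> EquippedAttacks = new List<string>();

            public Animation Attack(string name) => AttackLibrary[name];
        }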

    Read the article

  • How can I separate the user interface from the business logic while still maintaining efficiency?

    - by Uri
    Let's say that I want to show a form that represents 10 different objects in a combobox. For example, I want the user to pick one hamburger from 10 different ones that contain tomatoes. Since I want to separate UI and logic, I'd have to pass the form a string representation of the hamburgers in order to display them in the combobox. Otherwise, the UI would have to dig into the objects' fields. Then the user would pick a hamburger from the combobox and submit it back to the controller. Now the controller would have to find again said hamburger based on the string representation used by the form (maybe an ID?). Isn't that incredibly inefficient? You already had the objects you wanted to pick one from. If you submitted the whole objects to the form, and then it returned a specific object, you wouldn't have to re-find it later on, since the form would already return a reference to that object. Moreover, if I'm wrong and you actually should send the whole object to the form, how can I isolate UI from logic?
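    A hedged sketch of the usual answer (types and names illustrative): hand the view only (id, label) pairs and keep an id-to-object dictionary on the controller side, so the "find it again" step is a constant-time lookup rather than a search, and the view still never sees domain objects.

        using System.Collections.Generic;
        using System.Linq;

        public class Hamburger
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class HamburgerController
        {
            private readonly Dictionary<int, Hamburger> _byId;

            public HamburgerController(IEnumerable<Hamburger> withTomatoes)
            {
                _byId = withTomatoes.ToDictionary(h => h.Id);
            }

            // What the form gets: ids and display strings only, no domain objects.
            public IEnumerable<KeyValuePair<int, string>> ComboItems() =>
                _byId.Select(p => new KeyValuePair<int, string>(p.Key, p.Value.Name));

            // What the form posts back: just the id; resolving it is O(1).
            public Hamburger Selected(int id) => _byId[id];
        }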

    Read the article

  • Best way to store a large amount of game objects and update the ones onscreen

    - by user3002473
    Good afternoon guys! I'm a young beginner game developer working on my first large scale game project, and I've run into a situation where I'm not quite sure what the best solution may be (if there is a lone solution). The question may be vague (if anyone can think of a better title after having read the question, please edit it) or broad, but I'm not quite sure what to do and I thought it would help just to discuss the problem with people more educated in the field. Before we get started, here are some of the questions I've looked at for help in the past: Best way to keep track of game objects; Elegant way to simulate large amounts of entities within a game world; What is the most efficient container to store dynamic game objects in? I've also read articles about different data structures commonly used in games to store game objects, such as this one about slot maps, but none of them are really what I'm looking for. Also, if it helps at all, I'm using Python 3 to design the game. It has to be Python 3; if I could, I would use C++ or Unityscript or something else, but I'm restricted to having to use Python 3.

    My game will be a form of side scroller shooter game. In said game the player will traverse large rooms with large amounts of enemies and other game objects to update (think some of the larger areas in Cave Story or Iji). The player obviously can't see the entire room all at once, so there is a viewport that follows the player around and renders only a selection of the room and the game objects that it contains. This is not a foreign concept. The part that's getting me confused has to do with how certain game objects are updated. Some of them are to be updated constantly, regardless of whether or not they can be seen. Other objects, however, are only to be updated when they are onscreen (for example, an enemy would only be updated to react to the player when it is onscreen or when it is in a certain range of the screen).

    Another problem is that game objects have to be easily referable by other game objects; something that happens in the player's update() method may affect another object in the world. Collision detection in games is always a serious problem; I need a way of containing the game objects that minimizes the number of checks when testing for collisions against one another. The final problem is that of creating and destroying game objects, which I think is pretty self-explanatory.

    To store the game objects, I've considered a number of different methods. The original method I had was to simply store all the objects in a hash table by an id. This method is simple, and decently fast, as it allows all the objects to be looked up with O(1) complexity, and also allows them to be deleted fairly easily. Hash collisions would not be a major problem; I wasn't originally planning on using computer-generated ids to store the game objects, I was going to rely on ids given to them by the game designer (such names would be strings like 'Player' or 'EnemyWeapon4'), and even if I did use computer-generated ids, with a decent hashing algorithm the chances of collisions would be around 1 in 4 billion. The problem with using a hash table, however, is that it is inefficient in checking to see what objects are in range of the viewport.
    Considering the fact that certain game objects move (as well as the viewport itself), the only solution I could think of in order to only update objects that are in the viewport would be to iterate through every object in the hash table and check if it is in the viewport or not, updating only the ones that are in the valid area. This would be incredibly slow in scenarios where the number of game objects exceeds 500, or even 200.

    The second solution was to store everything in a 2-d list. The world is partitioned up into cells (a tilemap, essentially), where each cell or tile is the same size and is square. Each cell would contain a list of the game objects that are currently occupying it (each game object would be inserted into a cell depending on the center of the object's collision mask). A 2-d list would allow me to take the top-left and bottom-right corners of the viewport and easily grab a rectangular area of the grid containing only the cells containing entities that are in valid range to be updated. This method also solves the problem of collision detection; when I take an entity I can find the cell that it is currently in, then check only against entities in its cell and the 8 cells around it. One problem with this system, however, is that it prohibits easy lookup of game objects. One solution I had would be to simultaneously keep a hash table that would contain all the positions of the objects in the 2-d list, indexed by the id of said object. The major problem with a 2-d list is that it would need to be rebuilt every single game frame (along with the hash table of object positions), which may be a serious detriment to game speed.

    Both systems have ups and downs and seem to solve some of each other's problems, however using them both together doesn't seem like the best solution either. If anyone has any thoughts, ideas, suggestions, comments, opinions or solutions on new data structures or better implementations of the existing data structures I have in mind, please post; any and all criticism and help is welcome. Thanks in advance!

    EDIT: Please don't close the question because it has a bad title, I'm just bad with names!
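    One hedged observation on the rebuild-every-frame worry: the grid does not have to be rebuilt at all if each entity re-registers only when it crosses a cell boundary, which most entities do not do on most frames. A sketch of that incremental spatial grid (in C# here, though the question requires Python 3; the shape ports directly to dicts and sets):

        using System;
        using System.Collections.Generic;

        public class SpatialGrid<T>
        {
            private readonly Dictionary<(int, int), HashSet<T>> _cells =
                new Dictionary<(int, int), HashSet<T>>();
            private readonly Dictionary<T, (int, int)> _where = new Dictionary<T, (int, int)>();
            private readonly float _cellSize;

            public SpatialGrid(float cellSize) { _cellSize = cellSize; }

            private (int, int) CellOf(float x, float y) =>
                ((int)Math.Floor(x / _cellSize), (int)Math.Floor(y / _cellSize));

            // Call on spawn and whenever an entity moves.
            public void Place(T entity, float x, float y)
            {
                var cell = CellOf(x, y);
                if (_where.TryGetValue(entity, out var old))
                {
                    if (old == cell) return;       // most frames: same cell, no work at all
                    _cells[old].Remove(entity);
                }
                if (!_cells.TryGetValue(cell, out var set))
                    _cells[cell] = set = new HashSet<T>();
                set.Add(entity);
                _where[entity] = cell;
            }

            public void Remove(T entity)
            {
                if (_where.TryGetValue(entity, out var cell))
                {
                    _cells[cell].Remove(entity);
                    _where.Remove(entity);
                }
            }

            // Viewport query: enumerate entities in an inclusive rectangle of cells.
            public IEnumerable<T> InCells(int x0, int y0, int x1, int y1)
            {
                for (int x = x0; x <= x1; x++)
                    for (int y = y0; y <= y1; y++)
                        if (_cells.TryGetValue((x, y), out var set))
                            foreach (var e in set) yield return e;
            }
        }

    The id-to-object hash table from the first approach can sit alongside this untouched, since the grid stores references to the same entities.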

    Read the article
