Search Results

Search found 18314 results on 733 pages for 'document architecture'.


  • Maintain order of messages via proxies to app servers

    - by David Turner
    Hi, I am receiving messages from a 3rd party via an HTTP POST, and it is important that the order in which the messages hit our infrastructure is maintained through the load balancers and proxies until they hit our application server. Quick diagram (proxies in place due to security requirements): [ACE load balancer] - [2 proxies] - [Application Servers] or maybe [ACE load balancer] - [2 proxies] - [ACE load balancer] - [Application Servers] My idea was that I would set up the load balancers in active-passive mode, to force all messages to use one proxy, and then both the proxies would hit a second load balancer that would be configured in active-passive to hit one application server. Whilst the above is not ideal, it does give me resilience, and once the message is in my app servers, I enter a stateless world, and load balance across both nodes of my cluster. However, I am concerned that even a single proxy could send messages out of order; perhaps if 2 messages are received very close together, message 2 might get processed faster than message 1. Is this possible? Likely? Is there a simple open source proxy (mod_proxy?) that can be easily configured to just pass messages through it, and that guarantees to send the messages through in the order they are received? If so which, and finally links to how I should configure it, would be great. In fact any links to articles around avoiding "out of order" messages using hardware would be gratefully received. Thanks. PS: for those that are interested, the app is a Java Spring Integration application currently on an application server.
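
    The question is specifically about guarantees in the network path, but the same risk can also be neutralised at the application level. Below is a hedged sketch of that alternative technique (a resequencer), in C# and with entirely hypothetical names: it assumes the 3rd party can stamp each POST with a sequence number, and a small reorder buffer on the app server then releases messages strictly in order regardless of which proxy or node they arrived through. It illustrates the idea only; it is not a recommendation over fixing the proxy path.

    using System.Collections.Generic;

    public sealed class Resequencer<T>
    {
        private readonly SortedDictionary<long, T> pending = new SortedDictionary<long, T>();
        private long nextExpected;

        public Resequencer(long firstSequenceNumber) { nextExpected = firstSequenceNumber; }

        // Buffers the message and returns whatever is now releasable, in sequence order.
        // Not thread-safe: callers must synchronise if several request threads feed it.
        public List<T> Accept(long sequenceNumber, T message)
        {
            pending[sequenceNumber] = message;
            var ready = new List<T>();
            while (pending.ContainsKey(nextExpected))
            {
                ready.Add(pending[nextExpected]);
                pending.Remove(nextExpected);
                nextExpected++;
            }
            return ready;
        }
    }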

    Read the article

  • How to leverage concurrency checking with EF 4.0 POCO Self Tracking Entities in a N-Tier scenario?

    - by Mark Lindell
    I'm using VS2010 RC with the POCO self-tracking T4 templates. In my WCF update service method I am using something similar to the following: using (var context = new MyContext()) { context.MyObjects.ApplyChanges(myObject); context.SaveChanges(); } This works fine until I set ConcurrencyMode=Fixed on the entity, and then I get an exception. It appears as if the context does not know about the previous values, as the SQL statement is using the changed entity's value in the WHERE clause. What is the correct approach when using ConcurrencyMode=Fixed?
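
    One way to at least surface the conflict cleanly, sketched below under the assumption that MyContext derives from ObjectContext as the standard self-tracking-entity templates generate it (and reusing the MyObjects/myObject names from the question), is to catch OptimisticConcurrencyException (System.Data) around SaveChanges and reload the store values before rethrowing. This is only a sketch of the failure-handling side, not a fix for the missing original values themselves, which typically have to round-trip with the entity's change tracker.

    using (var context = new MyContext())
    {
        try
        {
            context.MyObjects.ApplyChanges(myObject);
            context.SaveChanges();
        }
        catch (OptimisticConcurrencyException)
        {
            // The row changed between the client's original read and this save.
            // Reload current store values and rethrow so the caller can re-fetch
            // the entity and reapply or merge its edits.
            context.Refresh(RefreshMode.StoreWins, myObject);   // System.Data.Objects
            throw;
        }
    }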

    Read the article

  • C#: Inheritance, Overriding, and Hiding

    - by Rosarch
    I'm having difficulty with an architectural decision for my C# XNA game. The basic entity in the world, such as a tree, zombie, or the player, is represented as a GameObject. Each GameObject is composed of at least a GameObjectController, GameObjectModel, and GameObjectView. These three are enough for simple entities, like inanimate trees or rocks. However, as I try to keep the functionality as factored out as possible, the inheritance begins to feel unwieldy. Syntactically, I'm not even sure how best to accomplish my goals. Here is the GameObjectController: public class GameObjectController { protected GameObjectModel model; protected GameObjectView view; public GameObjectController(GameObjectManager gameObjectManager) { this.gameObjectManager = gameObjectManager; model = new GameObjectModel(this); view = new GameObjectView(this); } public GameObjectManager GameObjectManager { get { return gameObjectManager; } } public virtual GameObjectView View { get { return view; } } public virtual GameObjectModel Model { get { return model; } } public virtual void Update(long tick) { } } I want to specify that each subclass of GameObjectController will have accessible at least a GameObjectView and GameObjectModel. If subclasses are fine using those classes, but perhaps are overriding for a more sophisticated Update() method, I don't want them to have to duplicate the code to produce those dependencies. So, the GameObjectController constructor sets those objects up. However, some objects do want to override the model and view. This is where the trouble comes in. Some objects need to fight, so they are CombatantGameObjects: public class CombatantGameObject : GameObjectController { protected new readonly CombatantGameModel model; public new virtual CombatantGameModel Model { get { return model; } } protected readonly CombatEngine combatEngine; public CombatantGameObject(GameObjectManager gameObjectManager, CombatEngine combatEngine) : base(gameObjectManager) { model = new CombatantGameModel(this); this.combatEngine = combatEngine; } public override void Update(long tick) { if (model.Health <= 0) { gameObjectManager.RemoveFromWorld(this); } base.Update(tick); } } Still pretty simple. Is my use of new to hide instance variables correct? Note that I'm assigning CombatantObjectController.model here, even though GameObjectController.Model was already set. And, combatants don't need any special view functionality, so they leave GameObjectController.View alone. Then I get down to the PlayerController, at which a bug is found. public class PlayerController : CombatantGameObject { private readonly IInputReader inputReader; private new readonly PlayerModel model; public new PlayerModel Model { get { return model; } } private float lastInventoryIndexAt; private float lastThrowAt; public PlayerController(GameObjectManager gameObjectManager, IInputReader inputReader, CombatEngine combatEngine) : base(gameObjectManager, combatEngine) { this.inputReader = inputReader; model = new PlayerModel(this); Model.Health = Constants.PLAYER_HEALTH; } public override void Update(long tick) { if (Model.Health <= 0) { gameObjectManager.RemoveFromWorld(this); for (int i = 0; i < 10; i++) { Debug.WriteLine("YOU DEAD SON!!!"); } return; } UpdateFromInput(tick); // .... } } The first time that this line is executed, I get a null reference exception: model.Body.ApplyImpulse(movementImpulse, model.Position); model.Position looks at model.Body, which is null. 
    This is a function that initializes GameObjects before they are deployed into the world: public void Initialize(GameObjectController controller, IDictionary<string, string> data, WorldState worldState) { controller.View.read(data); controller.View.createSpriteAnimations(data, _assets); controller.Model.read(data); SetUpPhysics(controller, worldState, controller.Model.BoundingCircleRadius, Single.Parse(data["x"]), Single.Parse(data["y"]), bool.Parse(data["isBullet"])); } Every object is passed as a GameObjectController. Does that mean that if the object is really a PlayerController, controller.Model will refer to the base's GameObjectModel and not the PlayerController's overridden PlayerModel?
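
    A self-contained sketch (simplified stand-in classes, not the asker's real ones) shows why the answer to that last question is yes: a member introduced with new is bound by the compile-time type of the reference, so code that only sees a GameObjectController goes through the base Model and the base model field, which the PlayerController constructor never touched.

    using System;

    class GameObjectModel { }
    class PlayerModel : GameObjectModel { }

    class GameObjectController
    {
        protected GameObjectModel model = new GameObjectModel();
        public virtual GameObjectModel Model { get { return model; } }
    }

    class PlayerController : GameObjectController
    {
        private new readonly PlayerModel model = new PlayerModel();   // hides the base field
        public new PlayerModel Model { get { return model; } }        // hides, does not override
    }

    class Program
    {
        static void Main()
        {
            GameObjectController controller = new PlayerController();

            // "new" members are resolved against the compile-time type, so this goes
            // through the base virtual property and prints GameObjectModel:
            Console.WriteLine(controller.Model.GetType().Name);

            // Only a cast to the derived type reaches the hiding member:
            Console.WriteLine(((PlayerController)controller).Model.GetType().Name);
        }
    }

    The usual ways out are to have the derived constructor assign its more specific model into a single protected base field (so the virtual base property returns the right object), or to make the controller generic over its model type; hiding with new only works when nothing ever talks to the object through the base type.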

    Read the article

  • Application Architect vs. Systems Architect vs. Enterprise Architect?

    - by iaman00b
    So many buzzwords. Not sure if I need to start playing BS Bingo or not. And I'm not trying to be cynical. But I've heard many people with these various titles. There never seems to be a clear delineation between the three. Or there's a lot of domain crossover between the three. Actually, another I've seen while looking around here on Stack Overflow has been "Solutions Architect" as well. But that one doesn't seem to be so prevalent in other places. There are questions here and there with vague answers. But I'd like definitive answers to this. Please assume I'm still relatively new to software stuff and that I'm trying to map out a career path. Oh, and please be gentle folks; this most definitely is not a duplicate question. Neither is it an aggregate. So kindly leave it alone. xP

    Read the article

  • MVC architectural question - Where should payment processing go?

    - by Keltex
    This question is related to my ASP.NET MVC 2 development, but it could apply to any MVC environment; it's a question of where the logic should go. So let's say I have a controller that takes an online payment, such as a shopping cart application, and I have the method that accepts the customer's credit card information: public class CartController : Controller { CartRepository cartRepository = new CartRepository(); [HttpPost] public ActionResult Payment(PaymentViewModel rec) { if(!ModelState.IsValid) { return View(rec); } // process payment here return RedirectToAction("Receipt"); } } At the // process payment here comment, where should the payment processing be handled: in the controller? By the repository? Someplace else?
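
    One commonly suggested arrangement, sketched below with hypothetical names (IPaymentService, PaymentResult), is to keep the controller thin and put the gateway call behind a service interface: the repository stays responsible for persistence, the service owns the payment logic, and the controller only translates the result back into ModelState. Constructor injection here assumes a DI-capable controller factory; this is a sketch, not a prescription.

    public interface IPaymentService
    {
        PaymentResult Charge(PaymentViewModel payment);
    }

    public class PaymentResult
    {
        public bool Succeeded { get; set; }
        public string Error { get; set; }
    }

    public class CartController : Controller
    {
        private readonly CartRepository cartRepository = new CartRepository();
        private readonly IPaymentService paymentService;

        public CartController(IPaymentService paymentService)
        {
            this.paymentService = paymentService;
        }

        [HttpPost]
        public ActionResult Payment(PaymentViewModel rec)
        {
            if (!ModelState.IsValid)
            {
                return View(rec);
            }

            PaymentResult result = paymentService.Charge(rec);
            if (!result.Succeeded)
            {
                // Gateway declined or failed; surface it like any other validation problem.
                ModelState.AddModelError("", result.Error);
                return View(rec);
            }

            return RedirectToAction("Receipt");
        }
    }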

    Read the article

  • Validation in n-tier ASP.NET MVC applications

    - by sTodorov
    Dear Stack Overflow gurus, I am looking for some practical/theoretical information regarding best practices for validation in ASP.NET MVC n-tier applications. I am working on a .NET application divided into the following layers: UI - MVC 3; BLL - all business rules, decoupled from the data access and UI layers through interfaces; DAL - data access with the repository pattern, EF4 and POCOs. Now, I am looking for a nice, clean and transparent way to specify my validation rules. Here are some thoughts on the matter so far: UI validation should only be responsible for user input and its validity. BLL validation should handle the validity of the data with regard to the application's business rules. My main concern is how to bind the BLL and UI validation in the most efficient way. One thing I would like to avoid is having the UI check a collection of validation errors and manually add errors to the ModelState. Furthermore, I do not want to pass the ModelState to the BLL to be populated in there. I will appreciate any thoughts on the matter. P.S. Should this question be marked as a discussion?
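
    One pattern that avoids both of the things the question rules out, sketched below with hypothetical types (ValidationError, BusinessRuleException, OrderViewModel, orderService), is to have the BLL report rule violations as plain data keyed by property name, and give the UI layer a tiny helper that copies them into ModelState. The BLL never sees ModelState, and the UI never walks a validation collection by hand beyond this one helper.

    public class ValidationError
    {
        public string PropertyName { get; set; }
        public string Message { get; set; }
    }

    public class BusinessRuleException : Exception
    {
        public IList<ValidationError> Errors { get; private set; }
        public BusinessRuleException(IList<ValidationError> errors) { Errors = errors; }
    }

    // UI-layer helper
    public static class ModelStateExtensions
    {
        public static void AddBusinessErrors(this ModelStateDictionary modelState,
                                             IEnumerable<ValidationError> errors)
        {
            foreach (ValidationError error in errors)
            {
                modelState.AddModelError(error.PropertyName ?? string.Empty, error.Message);
            }
        }
    }

    // Controller usage
    [HttpPost]
    public ActionResult Create(OrderViewModel model)
    {
        if (!ModelState.IsValid) return View(model);     // input-level validation only

        try
        {
            orderService.PlaceOrder(model.ToOrder());     // business rules enforced in the BLL
            return RedirectToAction("Index");
        }
        catch (BusinessRuleException ex)
        {
            ModelState.AddBusinessErrors(ex.Errors);
            return View(model);
        }
    }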

    Read the article

  • ASP.NET 4.0 default.aspx problem on IIS6

    - by Dimonina
    I installed .NET Framework 4 on my Windows 2003 Enterprise x64 server and wrote a simple ASP.NET 4.0 application (default.aspx page only). The application works great if the request is to default.aspx, but not to the root of the site: contoso.com/ - doesn't work (gets a 404 error); contoso.com/default.aspx - works. Default.aspx is in the list of default documents in IIS. Please help.

    Read the article

  • ASP.NET MVC: Where do you convert from Entities to ViewModels?

    - by Pino
    The title pretty much explains it all; it's the last thing I'm trying to work into our project. We are structured with a Service Library which contains a function like so: /// <summary> /// Returns a single category based on the specified ID. /// </summary> public Category GetCategory(int CategoryID) { var RetVal = _session.Single<Category>(x => x.ID == CategoryID); return RetVal; } Now Category is an entity (we are using Entity Framework) and we need to convert that to a CategoryViewModel. Now, how would people structure this? Would you make sure the service function returned a CategoryViewModel? Have the controller pull the data from the service and then call another function to convert it to a view model?
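
    A common compromise, sketched below with assumed CategoryViewModel properties, is to let the service keep returning entities and give the conversion its own small mapper (here an extension method) that the controller composes with the service call. The view model shape stays a UI concern, and the service layer stays persistence-oriented.

    public static class CategoryMappings
    {
        public static CategoryViewModel ToViewModel(this Category category)
        {
            return new CategoryViewModel
            {
                ID = category.ID,
                Name = category.Name      // assumed property, for illustration only
            };
        }
    }

    public class CategoryController : Controller
    {
        private readonly CategoryService categoryService;   // the Service Library class

        public CategoryController(CategoryService categoryService)
        {
            this.categoryService = categoryService;
        }

        public ActionResult Details(int id)
        {
            CategoryViewModel model = categoryService.GetCategory(id).ToViewModel();
            return View(model);
        }
    }

    Mapping libraries such as AutoMapper can replace the hand-written extension method once the number of view models grows; the placement question stays the same either way.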

    Read the article

  • Realtime diagnostics

    - by Ion Todirel
    I have an application which has a loop, part of a "Scheduler", which runs at all times and is the heart of the application. Pretty much like a game loop, just that my application is a WPF application and it's not a game. Naturally the application does logging at many points, but the Scheduler does some sensitive monitoring, and sometimes it's impossible just from the logs to tell what may have gone wrong (and by wrong I don't mean exceptions) or what the current status is. Because the Scheduler's inner loop runs at short intervals, you can't do file I/O-based logging (or use the Event Viewer) in there. First, you need to watch it in real time, and secondly the log file would grow in size very fast. So I was thinking of ways to show this data to the user in real time; some things I considered: Display the data in real time in the UI. Use AllocConsole/WriteConsole to display this information in a console. Use a different console application which would display this information, communicating between the Scheduler and the console app using pipes or other IPC techniques. Use Windows' Performance Monitor and somehow feed it with this information. ETW. Displaying in the UI would have its issues. First, it doesn't integrate with the UI I had in mind for my application, and I don't want to complicate the UI just for this. This diagnostics view would only be needed rarely. Secondly, there is going to be some non-trivial data protection, as the Scheduler has its own thread. A separate console window would probably work, but I'm still worried it may be too heavyweight. Allocating my own console, as this is a Windows app, would probably be better than a different console application (3), as I don't need to worry about IPC and non-blocking communication. However, a user could close the console I allocated, and it would be problematic in that case. With a separate process you don't have to worry about it. Assuming there is an API for Performance Monitor, it wouldn't be integrated too well with my app or apparent to the users. Using ETW also doesn't solve anything by itself; just a random idea, I still need to display this information somehow. What do others think? Are there other ways I missed?
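
    Whichever display option wins, the Scheduler's hot loop only needs a cheap, thread-safe place to drop messages; the UI panel, an allocated console, or a separate process can then drain that buffer on its own schedule. A minimal sketch of such a bounded buffer (hypothetical, not tied to any of the five options above):

    using System.Collections.Generic;

    public sealed class DiagnosticsBuffer
    {
        private readonly object gate = new object();
        private readonly Queue<string> entries = new Queue<string>();
        private readonly int capacity;

        public DiagnosticsBuffer(int capacity) { this.capacity = capacity; }

        // Called from the Scheduler's loop; drops the oldest entry when full,
        // so the buffer never grows the way a log file would.
        public void Post(string message)
        {
            lock (gate)
            {
                if (entries.Count == capacity) entries.Dequeue();
                entries.Enqueue(message);
            }
        }

        // Called from the display side, e.g. on a timer, and clears what it takes.
        public string[] Drain()
        {
            lock (gate)
            {
                string[] snapshot = entries.ToArray();
                entries.Clear();
                return snapshot;
            }
        }
    }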

    Read the article

  • Empty data problem - data layer or DAL?

    - by luckyluke
    I'm designing the new app now and giving the following question a lot of thought. I consume a lot of data from the warehouse, and the entities have a lot of dictionary-based values (currency, country, tax-whatever data) - dimensions. I cannot be assured, though, that there won't be nulls. So I am thinking: create an empty value in each of the dictionaries with a special keyID - i.e. -1; have the ETL (SSIS) do the correct stuff and insert -1 where it needs to; let the DAL know that -1 is special (static const, whatever); don't bother in the code to check for nullness of dictionary entries because THEY will always have a value. But maybe I should be thinking: import data AS IS; let the DAL do the thinking using the empty record pattern; still don't care in the code because the business layer will have what it needs from the DAL. I think this is more of an approach thing, but maybe I am missing something important here... What do you think? Am I clear? Please don't confuse it with the empty record problem. I do use the emptyCustomer thing all the time, and other defaults too.
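
    The second option usually amounts to the Null Object pattern: the DAL substitutes a well-known "unknown" member whenever a dimension key is missing, so business code never checks for null and the -1 convention never leaks past the DAL. A hedged sketch with a hypothetical Currency dimension:

    using System.Collections.Generic;

    public class Currency
    {
        public static readonly Currency Unknown = new Currency { ID = -1, Code = "N/A" };

        public int ID { get; set; }
        public string Code { get; set; }
    }

    public class CurrencyRepository
    {
        private readonly IDictionary<int, Currency> cache;

        public CurrencyRepository(IDictionary<int, Currency> cache) { this.cache = cache; }

        public Currency GetByKey(int? key)
        {
            Currency currency;
            if (key.HasValue && cache.TryGetValue(key.Value, out currency))
            {
                return currency;
            }
            return Currency.Unknown;   // callers never see null
        }
    }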

    Read the article

  • What is the relationship between a Turing machine and a modern computer?

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine. I just cannot build a bridge from a conceptual Turing machine to a real modern computer. Could someone help me build this bridge? Below is my current understanding. I think the computer is a big general-purpose Turing machine. Each program we write is a small specific-purpose Turing machine. The classical Turing machine does its job based on its input and its current internal state, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space, there are areas for the stack, heap, and code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our specific-purpose Turing machine (our program). The code area represents the logic of this small Turing machine. And various I/O devices supply input to this Turing machine.

    Read the article

  • C# Process flow - Datastream, XML and datagrid

    - by Farstucker
    I'm looking for some advice/suggestions on how I should set up the workflow of a small application I'm building. When the application is launched, the datagrid will be populated from the XML file. Once running, the application will receive a datastream that I hope to use to update the file and the datagrid. So I'm curious what you would suggest for the workflow (i.e., split the data from the data stream and simultaneously populate the file and grid, or would you suggest populating the XML file first and setting up a timer to have the grid read the file?). I'm really looking for optimal performance.

    Read the article

  • Which layer should create DataContext?

    - by Kevin
    I have a problem deciding which layer in my system should create the DataContext. I have read a book saying that if you do not pass the same DataContext object for all the database updates, you will sometimes get an exception thrown from the DataContext. That's why I initially create a new instance of the DataContext in the business layer and pass it into the data access layer, so that the same DataContext is used for all the updates. But this leads to one design problem: if I want to change my DAL to something other than LINQ to SQL in the future, I need to rewrite the code in the business layer as well. Please give me some advice on this. Thanks. Example code: 'Business Layer Public Sub SaveData(name As String) Using ts As New TransactionScope() Using db As New MyDataContext() DAL.Insert(db,name) DAL.Insert(db,name) End Using ts.Complete() End Using End Sub 'Data Access Layer Public Sub Insert(db As MyDataContext, name As String) db.TableAInsert(name) End Sub
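
    One way to keep the "same DataContext everywhere" rule without welding the business layer to LINQ to SQL is to hide the DataContext behind a unit-of-work abstraction that the business layer obtains from a factory. A hedged C# sketch with hypothetical names (IUnitOfWork, IUnitOfWorkFactory, NameService), leaving out the TransactionScope from the question for brevity:

    public interface IUnitOfWork : IDisposable
    {
        void Insert(string name);
        void Commit();
    }

    public interface IUnitOfWorkFactory
    {
        IUnitOfWork Create();
    }

    // DAL: the only place that knows about LINQ to SQL
    public class LinqToSqlUnitOfWork : IUnitOfWork
    {
        private readonly MyDataContext db = new MyDataContext();

        public void Insert(string name) { db.TableAInsert(name); }
        public void Commit() { db.SubmitChanges(); }
        public void Dispose() { db.Dispose(); }
    }

    // Business layer: owns the transaction boundary, never sees the DataContext type
    public class NameService
    {
        private readonly IUnitOfWorkFactory factory;

        public NameService(IUnitOfWorkFactory factory) { this.factory = factory; }

        public void SaveData(string name)
        {
            using (IUnitOfWork uow = factory.Create())
            {
                uow.Insert(name);
                uow.Insert(name);
                uow.Commit();
            }
        }
    }

    Swapping the DAL later means writing another IUnitOfWork implementation; SaveData does not change.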

    Read the article

  • How much business logic belongs in the RIA Services layer?

    - by jkohlhepp
    I have been experimenting recently with Silverlight, RIA Services, and Entity Framework using .NET 4.0. I'm trying to figure out if that stack makes sense for use in any of my upcoming projects. It certainly seems like these technologies can be very productive for developing applications, but I'm struggling to decide how an application on top of this stack should be architected. The main issue I have is that in most of the demos I've seen most of the business logic ends up as DataAnnotations and custom validations in the RIA Services domain service class. This seems inappropriate to me. I view the domain service as basically a glorified web service that happens to make it easy to push information to the client. But most of what I've seen seems to orient the domain service as the main source of business logic in the application. So, my questions: What is the best location for business logic (rules, validations, behaviors, authorization) in an application using this stack? Are there any guidelines published at an architectural level for using this stack? My questions pertain to large, complex, and long-lived applications. Obviously for an application of only a few screens this is less of a concern. Edit: Another thing I meant to mention is that obviously you can make the domain service class stupid, but then you lose a lot of the automagic entity information (e.g. validations) being pushed to the client. And then if you lose that is there any point to using RIA services?

    Read the article

  • Considerations when architecting an extensible application using MEF

    - by Dan Bryant
    I've begun experimenting with dependency injection (in particular, MEF) for one of my projects, which has a number of different extensibility points. I'm starting to get a feel for what I can do with MEF, but I'd like to hear from others who have more experience with the technology. A few specific cases: My main use case at the moment is exposing various singleton-like services that my extensions make use of. My Framework assembly exposes service interfaces and my Engine assembly contains concrete implementations. This works well, but I may not want to allow all of my extensions to have access to all of my services. Is there a good way within MEF to limit which particular imports I allow a newly instantiated extension to resolve? This particular application has extension objects that I repeatedly instantiate. I can import multiple types of Controllers and Machines, which are instantiated in different combinations for a Project. I couldn't find a good way to do this with MEF, so I'm doing my own type discovery and instantiation. Is there a good way to do this within MEF or other DI frameworks? I welcome input on any other things to watch out for or surprising capabilities you've discovered that have changed the way you architect.
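
    For the basic discovery side, a minimal MEF sketch (System.ComponentModel.Composition, .NET 4.0) is shown below; IMachine, DrillPress and the plug-in directory are hypothetical names. The host fills its imports from catalogs instead of newing extensions up directly, and NonShared parts give a fresh instance per export request. Restricting what a given extension may import is usually approached by composing each extension against its own container or filtered catalog rather than the application-wide one, so only the services deliberately placed there are visible; that filtering, and container lifetime management, are left out of the sketch.

    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    public interface IMachine
    {
        void Run();
    }

    [Export(typeof(IMachine))]
    [PartCreationPolicy(CreationPolicy.NonShared)]   // new instance per export request
    public class DrillPress : IMachine
    {
        public void Run() { /* ... */ }
    }

    public class ExtensionHost
    {
        [ImportMany(typeof(IMachine))]
        public IEnumerable<IMachine> Machines { get; set; }

        public void Compose(string pluginDirectory)
        {
            // Everything exported from the host assembly plus a drop-in plug-in folder.
            var catalog = new AggregateCatalog(
                new AssemblyCatalog(typeof(ExtensionHost).Assembly),
                new DirectoryCatalog(pluginDirectory));

            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);   // fills Machines from whatever the catalogs expose
        }
    }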

    Read the article

  • Need some help/advice on WCF Per-Call Service and NServiceBus interop.

    - by Alexey
    I have a WCF per-call service which provides data for clients and at the same time is integrated with NServiceBus. All stateful objects are stored in a Unity container which is integrated into a custom service host. NServiceBus is configured in the service host and uses the same container as the service instances. Every client has its own instance context (described by Juval Lowy in his book in the chapter about Durable Services). If I need to send a request over the bus, I just use some kind of dispatcher and wait for the response using Thread.Sleep(). Since services are per-call this is OK, AFAIK. But I am confused a bit about messages from the bus that the service must handle and provide to clients. For some data, like stock quotes, I just update some kind of stateful object and then, when clients invoke GetQuotesData(), provide data from this object. But there are numerous service messages like "new quote added" and so on. At this moment I have an idea to implement something like a "postman daemon" =)) and store this type of messages in the instance context. Then the client will invoke "GetMail()", receive those messages and parse them. The problem is that NServiceBus messages are "interface based" and I can't pass them over WCF, so I need to convert them to types derived from some abstract class. Dunno what is the best way to handle this situation. Will be very grateful for any advice on this. Thanks in advance.
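
    For that last point, one hedged sketch of the translation step: INewQuoteAdded stands in for one of the interface-based bus messages, QuoteNotificationDto for the WCF-friendly shape handed to clients via "GetMail()", and InstanceContextStore for the per-client mailbox. All three names are hypothetical, and the wiring of the handler to the bus is omitted.

    using System.Runtime.Serialization;

    public interface INewQuoteAdded          // bus-side message contract
    {
        string Symbol { get; set; }
        decimal Price { get; set; }
    }

    [DataContract]
    public class QuoteNotificationDto        // WCF-side DTO, safe to serialize to clients
    {
        [DataMember] public string Symbol { get; set; }
        [DataMember] public decimal Price { get; set; }
    }

    public class NewQuoteAddedHandler        // invoked when the bus delivers the message
    {
        private readonly InstanceContextStore store;   // hypothetical per-client mailbox

        public NewQuoteAddedHandler(InstanceContextStore store) { this.store = store; }

        public void Handle(INewQuoteAdded message)
        {
            store.Enqueue(new QuoteNotificationDto
            {
                Symbol = message.Symbol,
                Price = message.Price
            });
        }
    }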

    Read the article

  • What is the big deal with IQueryable?

    - by jjr2527
    I've seen a lot of people talking about IQueryable and I haven't quite picked up on what all the buzz is about. I always work with generic Lists and find they are very rich in the way you can "query" them and work with them, even run LINQ queries against them. So I'm wondering if there is a good reason to start considering a different default collection in my projects.
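
    The practical difference shows up when the source is a remote LINQ provider rather than an in-memory collection. A short sketch (NorthwindDataContext and its Products table are assumed purely for illustration): operators on an IQueryable build an expression tree that the provider translates, so the filtering happens in the database, while the same operators on a List run LINQ to Objects over rows that have already been fetched.

    using (var db = new NorthwindDataContext())
    {
        // Deferred: becomes a single SQL query with a WHERE clause when enumerated.
        IQueryable<Product> cheapQuery = db.Products.Where(p => p.UnitPrice < 10m);

        // Materializes the whole table first, then filters in memory.
        List<Product> allProducts = db.Products.ToList();
        IEnumerable<Product> cheapInMemory = allProducts.Where(p => p.UnitPrice < 10m);

        // Further composition on the IQueryable is still translated as one database query.
        List<Product> firstTen = cheapQuery.OrderBy(p => p.ProductName).Take(10).ToList();
    }

    For collections that already live in memory, List<T> is fine; IQueryable<T> earns its keep when the query should be shipped to a provider instead of executed over data that has already crossed the wire.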

    Read the article

  • Mimic remote API or extend existing django model

    - by drozzy
    I am in the process of designing a client for a RESTful web service. What is the best way to go about representing the remote resource locally in my django application? For example, if the API exposes resources such as: List of Cars; Car Detail; Car Search; Dealership summary. So far I have thought of two different approaches to take: Try to wrangle django's models.Model to mimic the native feel of it. So I could try to get some class called Car to have methods like Car.objects.all() and such. This kind of breaks down on Car Search resources. Implement a Data Access Layer class, with custom methods like: Car.get_all() Car.get(id) CarSearch.search("blah") So I will be creating some custom-looking classes. Has anyone encountered a similar problem? Perhaps working with some external APIs (e.g. Twitter)? Any advice is welcome. PS: Please let me know if some part of the question is confusing, as I had trouble putting it in precise terms.

    Read the article

  • How do you use Linq2Sql in your applications?

    - by this.__curious_geek
    I'm currently migrating to Linq2Sql and all my future projects will be done in Linq2Sql. Having said that, I researched a lot on how to properly plug Linq2Sql into application design: what to put at what layer? Should I use DTOs over Linq2Sql entities? I did not find any rock-solid material that really settled on one single approach; everyone had their own opinion, and I found all of them justified by their arguments. I'm looking forward to your ideas on how to integrate/use Linq2Sql in projects. My priorities are maintainability (it should be maintainable, even when multiple people work on the same project) and scalability (it should have scope for evolution). Thanks.

    Read the article

  • What is the best folder structure in TFS for Reporting Services projects?

    - by Dave
    Hi, I'm looking for some help on deciding a useful folder structure strategy for Reporting Services projects in TFS. Does anyone have suggestions on which way I should structure TFS? Should it be a project per report, or should it be one reporting project with multiple folders under the main one that contain all the report projects? i.e.
    Scenario 1 (separate project for each report project):
    $ReportProject1
    $ReportProject2
    $ReportProject3
    Scenario 2 (main report project in TFS and subfolders with report projects):
    $ReportingServices
        Src
            Project1
                ReportProject1 files
            Project2
                ReportProject2 files
            Project3
                ReportProject3 files

    Read the article

  • Providing multi-version databases for backward compatibility for production applications/databases.

    - by JavaRocky
    How can I manage multiple versions of a database easily? I have some data (exposed as views that select data originating in tables from other schemas), which other databases may reference using various means including database synonyms and links. I wish to provide a sort of interface/guarantee for future applications/databases which use this data, in case I need to update the views for correctness or applicability inside my database. How can I achieve this in a maintained, controlled and easy way? I am using Oracle 10g, if that matters.

    Read the article
