Search Results

Search found 46436 results on 1858 pages for 'web architecture'.

  • Best Practices for MVC Architecture

    - by Mystere Man
    There are a number of questions on Stack Overflow regarding MVC best practices, but most of those seem to revolve around things like using Dependency Injection, creating helper functions, or the do's and don'ts of what to do in views and controllers. My question is more about how to architect an MVC application. For example, we are encouraged to use DI with the Repository pattern to decouple data access from the controller, yet very little is said about HOW to do that specifically for MVC. Where would we place the Repository classes, for instance? They don't seem to belong to the model specifically, since the model should likewise be relatively decoupled from the actual data access technologies. A second question involves how to structure the layers or tiers. Most example applications (Nerd Dinner, Music Store, etc.) seem to use a single-tier, two-layer approach (not counting tests) that typically has controllers directly calling L2S or EF code. If I want to create a multi-tier/layer application, what are some of the best practices there in regard to MVC? This question is one part standard Q&A and one part best practices, so it could go either here or on programmers.se; I am marking it CW. If you feel it would be better suited to programmers.se then it can be migrated. EDIT: What happened to the Community Wiki option? It seems to be gone.
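
    One common way to lay this out (a sketch only, under assumed project names like Core/Data/Web; nothing here is prescribed by ASP.NET MVC itself) is to define the repository interface next to the domain model and keep the EF/L2S-backed implementation in a separate data project, so the controller only ever sees the interface:

    // Core project: domain model plus repository contract; no EF/L2S reference.
    public class Dinner
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }

    public interface IDinnerRepository
    {
        Dinner GetById(int id);
        void Add(Dinner dinner);
    }

    // Data project: references Core and the data-access technology (EF here).
    public class EfDinnerRepository : IDinnerRepository
    {
        private readonly System.Data.Entity.DbContext _context;
        public EfDinnerRepository(System.Data.Entity.DbContext context) { _context = context; }
        public Dinner GetById(int id) { return _context.Set<Dinner>().Find(id); }
        public void Add(Dinner dinner) { _context.Set<Dinner>().Add(dinner); }
    }

    // Web project: the controller depends only on the interface; a DI-aware
    // controller factory maps IDinnerRepository to EfDinnerRepository at runtime.
    public class DinnersController : System.Web.Mvc.Controller
    {
        private readonly IDinnerRepository _repository;
        public DinnersController(IDinnerRepository repository) { _repository = repository; }
        public System.Web.Mvc.ActionResult Details(int id) { return View(_repository.GetById(id)); }
    }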

  • How does one qualify as a Web UI Developer?

    - by Duralumin
    I have about 20 years of experience with programming, most of it on the job, and right now I define myself as a Web Developer, because I think about half my expertise lies in the very broad "web" field, both server side and client side, and because in recent years I've mostly been doing web development. I know my JavaScript, jQuery, jQuery UI, HTML 4-5, CSS 2-3 and some frameworks like Backbone.js and AngularJS. Since university I've always been interested in man-machine interaction, UI and UX. Recently I saw the label "Web UI Developer" tossed around, and I thought that would be something I would like to qualify for - and I'd really like to qualify with confidence. I couldn't find any certificate or anything similar, and I don't think there are any. Is the only way to qualify as a Web UI Developer having a job as one? What are the skills I need to have, and what resources can I use to acquire them?

  • Web publishing system with code highlighting

    - by Dragos Toader
    I'd like to publish some of the many programs I've written on the web. Is there a Linux web publishing application (CMS/blog/RoR app) with syntax highlighting that can display C++, Python, Bash scripts, SQL, VBA, awk, Erlang, Java, makefiles, Ruby, Pascal and other languages? The more syntax-definition files it supports, the better. The extensions I have in TextPad (for which I have syntax-highlighting .syn files) are .as, .asm, .asp, .awk, .bas, .bat, .c, .conf, .cpp, .cs, .ctl, .dfm, .dsc, .erl, .fnc, .h, .hpp, .inf, .ini, .jav, .java, .mak, .nsh, .nsi, .ora, .pas, .pkb, .pks, .pl, .prc, .py, .reg, .rsp, .sh, .sql, .syn, .tcl, .trg, .vw, .xml, .xsl, .xslfo

  • Architecture strategies for a complex competition scoring system

    - by mikewassmer
    Competition description:
    - There are about 10 teams competing against each other over a 6-week period.
    - Each team's total score (out of 1000 total available points) is based on the total of its scores in about 25,000 different scoring elements. Most scoring elements are worth a small fraction of a point, and there will be about 10 x 25,000 = 250,000 total raw input data points.
    - The points for some scoring elements are awarded at frequent, regular time intervals during the competition. The points for other scoring elements are awarded at irregular time intervals or at just one moment in time.
    - There are about 20 different types of scoring elements. Each of the 20 types has a different set of inputs, a different algorithm for calculating the earned score from the raw inputs, and a different number of total available points. The simplest algorithms require one input and one simple calculation. The most complex algorithms consist of hundreds or thousands of raw inputs and a more complicated calculation.
    - Some types of raw inputs are automatically generated; other types are manually entered. All raw inputs are subject to possible manual retroactive adjustments by competition officials.

    Primary requirements:
    - The scoring system UI for competitors and other competition followers will show current and historical total team scores, team standings, team scores by scoring element, raw input data (at several levels of aggregation, e.g. daily, weekly, etc.), and other metrics.
    - There will be charts, tables, and other widgets for displaying historical raw data inputs and scores.
    - There will be a quasi-real-time dashboard that will show current scores and raw data inputs.
    - Aggregate scores should be updated/refreshed whenever new raw data inputs arrive or existing raw data inputs are adjusted.
    - There will be a "scorekeeper UI" for manually entering new inputs, manually adjusting existing inputs, and manually adjusting calculated scores.

    Decisions:
    - Should the scoring calculations be performed on the database layer (T-SQL/SQL Server, in my case) or on the application layer (C#/ASP.NET MVC, in my case)?
    - What are some recommended approaches for calculating updated total team scores whenever new raw inputs arrive? Calculating each team's total score from scratch every time a new input arrives will probably slow the system to a crawl. I've considered some kind of "diff" approach (see the sketch after this list), but that approach may pose problems for ad hoc queries and some aggregates.
    - I'm trying to draw some sports analogies, but it's tough because most games consist of no more than 20 or 30 scoring elements per game (I'm thinking of a high-scoring baseball game; football and soccer have fewer scoring events per game). Perhaps a financial balance sheet analogy makes more sense, because a financial "bottom line" may be calculated from 250,000 or more transactions.
    - Should I be making heavy use of caching for this application?
    - Are there any obvious approaches or similar case studies that I may be overlooking?
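
    A minimal sketch of the "diff" idea mentioned above, assuming the application layer owns the aggregation (all class and member names here are illustrative, not from the question): keep a running per-team, per-element score and push only the delta into the team total whenever an input arrives or is retroactively adjusted, instead of recomputing 25,000 elements from scratch.

    using System.Collections.Generic;

    public class ScoreAggregator
    {
        // "teamId:elementId" -> current earned score for that scoring element
        private readonly Dictionary<string, decimal> _elementScores = new Dictionary<string, decimal>();
        // teamId -> current total team score
        private readonly Dictionary<int, decimal> _teamTotals = new Dictionary<int, decimal>();

        // Call after an element's score has been recalculated from its raw inputs
        // (whether the inputs are new or retroactively adjusted).
        public void ApplyElementScore(int teamId, int elementId, decimal newScore)
        {
            string key = teamId + ":" + elementId;

            decimal oldScore;
            _elementScores.TryGetValue(key, out oldScore);
            _elementScores[key] = newScore;

            decimal total;
            _teamTotals.TryGetValue(teamId, out total);
            _teamTotals[teamId] = total + (newScore - oldScore);   // only the delta moves the total
        }

        public decimal GetTeamTotal(int teamId)
        {
            decimal total;
            return _teamTotals.TryGetValue(teamId, out total) ? total : 0m;
        }
    }

    The same delta can also be appended to a history table so the historical charts and the dashboard read precomputed aggregates rather than the 250,000 raw rows.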

  • SharePoint 2010 Workflow for Multiple Items (Architecture)

    - by erobillard
    I had the question today of whether SharePoint 2010 supports workflow on multiple items, since Groove's workflow apparently supported multiple items and that model disappeared when Groove Workspaces were amalgamated into SharePoint Sites and SharePoint Workspace (the client utility). It's a great question; the short answer is that yes, it's possible. You could brute-force it in 2007 and that strategy should still carry over to 2010, and three new features (that I can think of) support multi-item scenarios... (read more)

  • Collision Detection Game Design and Architecture

    - by Chompas
    I've been reading some articles about collision detection. My question here is about design ideas for it. Basically I have a C++ game with a main loop and entities that have an update method; based on keyboard input, these characters update their positions. My question is not about how to detect collisions, it's about the best way to structure the implementation. The game has a main character but also enemies that have to collide with each other and with him, so I'm not sure where to run all the collision checks, and whether the right way is simply to check everything against everything. Thanks in advance.
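
    A sketch of one common shape for this (written in C# to match the other snippets on this page, but the structure is the same in C++; all names are illustrative): entities register with a central collision system, and the main loop calls it once per frame after all updates, checking each pair once. A spatial partition (grid, quadtree) can later replace the brute-force pair loop without changing the callers.

    using System.Collections.Generic;
    using System.Drawing;

    public interface ICollidable
    {
        Rectangle Bounds { get; }                 // axis-aligned bounding box
        void OnCollision(ICollidable other);      // entity-specific reaction
    }

    public class CollisionSystem
    {
        private readonly List<ICollidable> _entities = new List<ICollidable>();

        public void Register(ICollidable entity) { _entities.Add(entity); }
        public void Unregister(ICollidable entity) { _entities.Remove(entity); }

        // Called from the main loop once per frame, after every entity has updated.
        public void CheckCollisions()
        {
            for (int i = 0; i < _entities.Count; i++)
            {
                for (int j = i + 1; j < _entities.Count; j++)    // each pair exactly once
                {
                    if (_entities[i].Bounds.IntersectsWith(_entities[j].Bounds))
                    {
                        _entities[i].OnCollision(_entities[j]);
                        _entities[j].OnCollision(_entities[i]);
                    }
                }
            }
        }
    }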

  • Role of systems in entity systems architecture

    - by bio595
    I've been reading a lot about entity component systems and I find the idea of an entity just being an ID quite interesting. However, I don't completely understand how this works with the components aspect or the systems aspect. A component is just a data object managed by some relevant system. A collision system uses some BoundsComponent together with a spatial data structure to determine whether collisions have happened. All good so far, but what if multiple systems need access to the same component? Where should the data live? An input system could modify an entity's BoundsComponent, but the physics system(s) need access to the same component, as does some rendering system. Also, how are entities constructed? One of the advantages I've read so much about is flexibility in entity construction. Are systems intrinsically tied to a component? If I want to introduce some new component, do I also have to introduce a new system or modify an existing one? Another thing I've read often is that the 'type' of an entity is inferred from which components it has. If my entity is just an ID, how can I know that my robot entity needs to be moved or rendered and thus modified by some system? Sorry for the long post (or at least it seems so from my phone screen)!
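
    One answer to "where does the data live" is: in neither system. Here is a sketch (type names invented) where components sit in per-type stores that any system can query, so the input, physics and rendering systems all read and write the same BoundsComponent store, and an entity is nothing more than the int key used to look its components up:

    using System.Collections.Generic;

    public struct BoundsComponent { public float X, Y, Width, Height; }
    public struct VelocityComponent { public float Dx, Dy; }

    // One store per component type; systems ask for the stores they care about.
    public class ComponentStore<T> where T : struct
    {
        private readonly Dictionary<int, T> _byEntity = new Dictionary<int, T>();
        public void Set(int entity, T component) { _byEntity[entity] = component; }
        public bool TryGet(int entity, out T component) { return _byEntity.TryGetValue(entity, out component); }
        public IEnumerable<int> Entities { get { return _byEntity.Keys; } }
    }

    // The "type" of an entity is implicit: anything with both velocity and bounds
    // gets moved, whatever else it may be (robot, bullet, ...).
    public class MovementSystem
    {
        public void Update(ComponentStore<BoundsComponent> bounds,
                           ComponentStore<VelocityComponent> velocities, float dt)
        {
            foreach (int entity in velocities.Entities)
            {
                VelocityComponent v;
                BoundsComponent b;
                if (velocities.TryGet(entity, out v) && bounds.TryGet(entity, out b))
                {
                    b.X += v.Dx * dt;
                    b.Y += v.Dy * dt;
                    bounds.Set(entity, b);       // the render system reads this same store
                }
            }
        }
    }

    Entity construction is then just allocating a new ID and calling Set on whichever stores that entity needs; a new component type only requires a new system (or a change to one) when some behaviour has to act on it.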

  • .NET search engine architecture and technology choice

    - by shrivb
    I am in the process of designing a search engine for an ASP.NET site. The site currently uses Microsoft Indexing Server (MIS) to index and search content ranging from simple text files to MS Office documents to PDFs. MIS is also used to crawl file servers and, in tandem with Index Server Companion, to crawl content from external sites. I intend to replace MIS with the indexer/crawler I am trying to build. Since my platform is completely on the Microsoft stack, I can't afford to have a Java application server; thus Solr, and effectively SolrNet, is ruled out. With this being the context, I have a couple of questions.

    1. Technology choice: I did my initial investigation and looked at Lucene.Net. There seem to be two issues with using Lucene.Net. First, it can't crawl external content; there doesn't seem to be a direct port of Nutch to .NET. Second, since it is just an indexer, it can't parse the various document types; the parsing is left to the developer. So, what would be the best technology choice on the .NET platform to achieve indexing and crawling? Are there any .NET open source libraries available for document parsing?

    2. Architectural pattern: Is there any general architectural pattern or best practice that should be followed in designing such a search engine?

    Thanks in advance.
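
    Whatever libraries end up doing the work, one way to keep the crawling, parsing and indexing concerns replaceable is to put each behind its own small interface. The sketch below is only a shape (all interface and class names are invented here, and no specific library API is shown); a Lucene.Net-based indexer or an IFilter/PDF text extractor would each sit behind one of these interfaces.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    public class CrawledItem
    {
        public string Id { get; set; }
        public string ContentType { get; set; }   // "application/pdf", "text/plain", ...
        public Stream Content { get; set; }
    }

    public interface ICrawler
    {
        IEnumerable<CrawledItem> Crawl(Uri source);    // file share, external site, ...
    }

    public interface IDocumentParser
    {
        bool CanParse(string contentType);
        string ExtractText(Stream content);
    }

    public interface ISearchIndexer
    {
        void AddOrUpdate(string documentId, string text);
    }

    public class IndexingPipeline
    {
        private readonly IEnumerable<IDocumentParser> _parsers;
        private readonly ISearchIndexer _indexer;

        public IndexingPipeline(IEnumerable<IDocumentParser> parsers, ISearchIndexer indexer)
        {
            _parsers = parsers;
            _indexer = indexer;
        }

        public void Run(ICrawler crawler, Uri source)
        {
            foreach (var item in crawler.Crawl(source))
            {
                // First parser that understands the content type wins.
                var parser = _parsers.FirstOrDefault(p => p.CanParse(item.ContentType));
                if (parser != null)
                    _indexer.AddOrUpdate(item.Id, parser.ExtractText(item.Content));
            }
        }
    }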

  • Where should an NHibernate IPostInsertEventListener go in a 3-tier architecture?

    - by Quintin Par
    I have an IPostInsertEventListener implementation, declared like public class NHibernateEventListener : IPostInsertEventListener, IPostUpdateEventListener, which catches some entity inserts and does an additional insert into another table for ETL. Essentially it requires references to NHibernate, the Domain entities and Repository<>. Now, where should I add this class? If I add it to ApplicationServices, I'll end up referencing NHibernate at that layer. If I add it to the Data layer, I'll have to reference Domain (circular). How do I go about implementing this class following S#arp principles? Any thoughts?
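
    A sketch of one way to break the cycle (names are invented, and the synchronous OnPostInsert signature shown is the one from older NHibernate versions): define a small handler interface in the Domain (or ApplicationServices) layer and let the NHibernate-specific listener, which lives in the Data layer next to the other NHibernate code, delegate to it. The Data layer then only points "inwards", and nothing outside it references NHibernate.

    // Domain (or ApplicationServices) layer - no NHibernate reference.
    public interface IEntityInsertedHandler
    {
        void Handle(object entity);          // e.g. write the extra ETL row via a repository
    }

    // Data layer - references NHibernate and Domain, never the other way around.
    public class EtlPostInsertListener : NHibernate.Event.IPostInsertEventListener
    {
        private readonly IEntityInsertedHandler _handler;

        // Resolved by the IoC container when the listener is registered
        // in the NHibernate configuration.
        public EtlPostInsertListener(IEntityInsertedHandler handler)
        {
            _handler = handler;
        }

        public void OnPostInsert(NHibernate.Event.PostInsertEvent @event)
        {
            _handler.Handle(@event.Entity);
        }
    }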

  • Software Architecture - From design to successful implementation

    - by user20358
    As the subject says: once a software architect puts down the high-level design and approach for software that is to be developed from scratch, how does the team ensure that it is implemented successfully? To my mind, the following things need to be done:
    - Proper understanding of requirements
    - Setting down coding practices and guidelines
    - Regular code reviews to ensure the guidelines are being adhered to
    - Revisiting the requirements phase and making necessary changes to the design based on client input, if there are any changes to requirements
    - Proper documentation of what is being done in code
    - Proper documentation of requirements and changes to them
    - Last but not least, implementing the design via object-oriented code where appropriate
    Did I miss anything? I would love to hear about any mistakes that you have learned from in your project experience - what went wrong, and what could have been done better. Thanks for taking the time.

  • Architecture: Tracking lead origin when data is submitted by a server

    - by Kevin
    I'm looking for some assistance in determining the least complex strategy for tracking leads on an affiliate's website. The idea is to make the affiliate's integration with my application as easy as possible. I've run into theoretical barriers, so I'm here to explore other options.

    Application overview: This is a lead aggregation/distribution platform; we will be focusing on the affiliate portion of the website. Essentially, affiliates sign up, enter marketing campaigns and sell us their conversions.

    Problem to be solved: We want to track a lead's origin and other events on the affiliate site. We want to know what pages, ads, and forms they viewed before they converted. This can easily be solved with pixel tracking - very straightforward.

    Theoretical issues: I thought I would ask affiliates to place the pixel where I could log impressions, and set a third-party cookie when the pixel is first called; then I could associate future impressions with this cookie. The problem is that when the visitor converts on the affiliate's site and I receive their information via HTTP POST from the affiliate's server, I wouldn't be able to access the cookie and associate it with the lead record, unless the lead lands on my processor via a redirect and is then redirected back to the affiliate's landing page. I don't want to force the affiliates to submit their forms directly to my tracking site, so allowing them to make an HTTP POST from their server-side form processor would be ideal. I've considered writing JavaScript to set a first-party cookie, but this seems to make things more complicated for the affiliate. I also considered having the affiliate submit the lead's data via a conversion pixel. This seems to be the most ideal scenario so far, as almost all pixels are as easy as copy/paste. The only complication comes from the conversion pixel, which would submit all of the lead information; since the request would come from the visitor's machine, I could access my third-party cookie.

  • Turn-based JRPG battle system architecture resources

    - by BenoitRen
    For the past few months I've been busy programming a 2D JRPG (Japanese-style RPG) in C++ using the SDL library. The exploration mode is more or less done; now I'm tackling the battle mode. I have been unable to find any resources about how a classic turn-based JRPG battle system is structured - all I find are discussions about damage formulas. I've tried Googling, searching gamedev.net's message board, and crawling through C++-related questions here on Stack Exchange. I've also tried reading the source code of existing open-source RPGs, but without a guide of some sort it's like trying to find a needle in a haystack. I'm not looking for a set of rules like D&D or anything similar; I'm talking purely about code and object structure design. A battle system asks the player for input using menus, then the battle turn is executed as the heroes and the enemies carry out their actions. Can anyone point me in the right direction? Thanks in advance.
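
    For what it's worth, here is a minimal sketch of one common structure (in C# to match the rest of this page, but it maps directly to C++; every name is invented): each combatant contributes one command per round, the commands are ordered by an initiative rule, and then they are resolved one by one against the shared battle state. The menu code only builds commands; it never touches the resolution logic.

    using System.Collections.Generic;
    using System.Linq;

    public class Combatant
    {
        public string Name { get; set; }
        public int Hp { get; set; }
        public int Speed { get; set; }
        public bool IsAlive { get { return Hp > 0; } }
    }

    public class BattleState
    {
        public List<Combatant> Heroes { get; set; }
        public List<Combatant> Enemies { get; set; }
        public bool IsOver
        {
            get { return !Heroes.Any(c => c.IsAlive) || !Enemies.Any(c => c.IsAlive); }
        }
    }

    // One command per combatant per round: attack, item, spell, flee, ...
    public interface IBattleCommand
    {
        Combatant Actor { get; }
        void Execute(BattleState state);
    }

    public class BattleLoop
    {
        // The menu UI supplies the heroes' commands; an AI supplies the enemies'.
        public void RunRound(BattleState state, IEnumerable<IBattleCommand> commands)
        {
            var ordered = commands
                .Where(c => c.Actor.IsAlive)
                .OrderByDescending(c => c.Actor.Speed);   // simple initiative rule

            foreach (var command in ordered)
            {
                if (!command.Actor.IsAlive) continue;     // actor may have been KO'd this round
                command.Execute(state);
                if (state.IsOver) return;
            }
        }
    }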

  • Modular enterprise architecture using MVC and Orchard CMS

    - by MrJD
    I'm making a large-scale MVC application using Orchard, and I'm going to be separating my logic into modules. I'm also trying to heavily decouple the application for maximum extensibility and testability. I have a rudimentary understanding of IoC, the Repository pattern, the Unit of Work pattern and the Service Layer pattern. I've made myself a diagram, and I'm wondering whether it is correct and whether there is anything I have missed regarding an extensible application. Note that each module is a separate project.

    Update: I have many UI modules that use the DB module; that's why they've been split up. There are other services the UI modules will use. The UI modules have been split up because they will be built over time, independently of each other.

  • Architecture of an action multiplayer game from scratch

    - by lcf
    Not sure whether this is a good place to ask (do point me to a better one if it's not), but since what we're developing is a game, here it goes. This is a "real-time" action multiplayer game. I have familiarized myself with concepts like lag compensation, view interpolation and input prediction - pretty much everything I need for this - and I have prepared a set of prototypes to confirm that I understood everything correctly. My question is about the situation where the game engine must be rewound to the past to find out whether there was a "hit" (sometimes this may involve recomputing the whole world from that moment in the past up to the present). I already have a piece of code that does this, but it's not as neat as I need it to be. The domain logic of the app (the physics of the game) must be separated from the presentation (rendering) and the infrastructure tools (e.g. the specifics of talking to the remote server). How do I organize all this? Is there any worthy implementation with open sources I can take a look at? What I'm thinking is something like this:

    Render / User Input -> Game Engine (this is the so-called service layer) -> Processing User Commands & Remote Server -> Domain (Physics)

    How would you add into this scheme the concept of "ticks" or "interactions", with the possibility to rewind and recalculate "the game"? Remember, I cannot change the Domain/Physics, only the Game Engine. Should I store an array of "world states"? Should they be just some representations of the world, optimized for this purpose somehow (how?), or should they be actual instances of the world (i.e. including behavior and all that)? Has anybody had similar experience? (I've never worked on a game before, if that matters.)
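
    A sketch of one way to organize the rewind inside the Game Engine layer without touching the Domain (all interfaces and names here are invented): the engine keeps a bounded history of world snapshots plus the inputs applied at each tick; when a late "hit" arrives it restores the snapshot for that tick and replays the stored inputs forward to the present. Whether a snapshot is a full clone of the world or a slimmer serialized representation is hidden behind the Clone call, so both of the options in the question fit this shape.

    using System.Collections.Generic;

    public interface IWorldState
    {
        IWorldState Clone();                              // deep copy (or snapshot) for the history
        void ApplyInputs(IEnumerable<IPlayerInput> inputs, float dt);
    }

    public interface IPlayerInput
    {
        int Tick { get; }
    }

    public class RewindableEngine
    {
        private const int HistoryTicks = 64;              // e.g. ~1 second of history at 60 Hz
        private readonly Dictionary<int, IWorldState> _snapshots = new Dictionary<int, IWorldState>();
        private readonly Dictionary<int, List<IPlayerInput>> _inputsByTick = new Dictionary<int, List<IPlayerInput>>();

        public IWorldState Current { get; private set; }
        public int CurrentTick { get; private set; }

        public RewindableEngine(IWorldState initial) { Current = initial; }

        public void Step(List<IPlayerInput> inputs, float dt)
        {
            _snapshots[CurrentTick] = Current.Clone();    // remember the world as it was at this tick
            _inputsByTick[CurrentTick] = inputs;
            Current.ApplyInputs(inputs, dt);
            CurrentTick++;
            _snapshots.Remove(CurrentTick - HistoryTicks);    // drop history older than the window
            _inputsByTick.Remove(CurrentTick - HistoryTicks);
        }

        // Called when a "hit" (or any late input) turns out to belong to a past tick.
        // A fuller version would also refresh the snapshots between pastTick and now.
        public void RewindAndReplay(int pastTick, IPlayerInput lateInput, float dt)
        {
            Current = _snapshots[pastTick].Clone();
            _inputsByTick[lateInput.Tick].Add(lateInput);
            for (int tick = pastTick; tick < CurrentTick; tick++)
                Current.ApplyInputs(_inputsByTick[tick], dt);
        }
    }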

  • Architecture for a template-building, WYSIWYG application

    - by Sam Selikoff
    I'm building a WYSIWYG designer in Ember.js. The designer will allow users to create campaigns - think MailChimp. To build a campaign, a user chooses an existing template. The template has a defined layout. The user is then taken to the designer, where he can edit the text and style, and additionally change some layout options. I've been thinking about how best to structure this app, and there are a few hurdles. Specifically, the output of the campaign will be dynamic: eventually it will be published somewhere, and when the consumers (not my users, but the people clicking on the campaign that my user created) visit the campaign, certain pieces of data will change depending on the type of consumer viewing it. That means the ultimate output of the designer will be a dynamic site. The data that is dynamic for this site - the end product - will not be manipulated by the user in the designer. The data that will be manipulated by the user in the designer is things like copy, styles, layout options, etc. I'll call the first set of variables server-side data, and the second client-side data. It seems, then, that the process will go something like this:
    - I'll need to create templates for this designer that have two kinds of dynamic segments. For instance, the server-side data could be Liquid expressions, and the client-side data Handlebars expressions.
    - When the user creates a campaign, I would compile the template on the back end using some dummy data for the server-side variables, and serve up a Handlebars template to the Ember app.
    - The user would then edit the template, and the Ember app would save all his edits to the JS variables that were powering the template. This way he'd be able to preview the template.
    - When he saves, he'll send back the selected template, along with all the data and options he's chosen.
    - When it comes time to publish, the back-end system will have to do two things: compile the template with Handlebars using the campaign data, and then compile the template with Liquid using the server-side data.
    Is my thinking roughly accurate about this, or is there a simpler way? (A rough sketch of the two-pass idea follows.)
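
    Purely as an illustration of why the two passes don't interfere (this is not the real Liquid or Handlebars API - just placeholder substitution with two deliberately different delimiter styles): because the server-side and client-side segments use delimiters that never overlap, either pass can run first, e.g. dummy server data plus real design data for the designer preview, and real server data at publish/view time.

    using System.Collections.Generic;
    using System.Text.RegularExpressions;

    public static class TwoPassTemplate
    {
        // Server-side segments written as <%= name %>; client-side segments as {{name}}.
        public static string RenderServerSide(string template, IDictionary<string, string> serverData)
        {
            return Regex.Replace(template, @"<%=\s*(\w+)\s*%>", m =>
            {
                string value;
                return serverData.TryGetValue(m.Groups[1].Value, out value) ? value : m.Value;
            });
        }

        public static string RenderClientSide(string template, IDictionary<string, string> designData)
        {
            return Regex.Replace(template, @"\{\{(\w+)\}\}", m =>
            {
                string value;
                return designData.TryGetValue(m.Groups[1].Value, out value) ? value : m.Value;
            });
        }
    }

    // In the designer:    RenderServerSide(raw, dummyServerData) -> editable client-side template.
    // At publish/view:    RenderServerSide(RenderClientSide(raw, savedDesignData), dataForThisConsumer).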

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer that renders everything. I'm currently aiming for the second, for the following reasons:
    - The list can be sorted so that each shader is only used once. Otherwise each object would have to bind its shader, because it can't know whether it is already active.
    - The objects can be sorted and grouped.
    - It's easier to swap APIs. With a few macro lines, it can be easy to switch between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - It's easier to manage rendering code.
    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.

    First idea: the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I'm worried that this may end up as a lot of extra pointer weight, though I could sort the list of pointers every so often.

    Second idea: the entire list of entities is passed to the renderer on each render call. The renderer then sorts the list (each call, or maybe just once?) and takes what it wants. That's a lot of passing and/or sorting, however.

    Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
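
    A sketch of the first idea (names invented; the shader-binding call is left as a comment because it is API-specific): components register with the central renderer, which keeps them sorted by a shader/material key and only rebinds when the key changes, so each shader is bound once per frame.

    using System.Collections.Generic;

    public interface IRenderComponent
    {
        int ShaderKey { get; }                     // whatever identifies the shader/material
        void Draw();                               // issue the draw calls for this component
    }

    public class Renderer
    {
        private readonly List<IRenderComponent> _components = new List<IRenderComponent>();
        private bool _dirty;

        public void Register(IRenderComponent component)     // called from the component's constructor
        {
            _components.Add(component);
            _dirty = true;
        }

        public void Unregister(IRenderComponent component)
        {
            _components.Remove(component);
        }

        public void RenderFrame()
        {
            if (_dirty)                                      // re-sort only when the set changed
            {
                _components.Sort((a, b) => a.ShaderKey.CompareTo(b.ShaderKey));
                _dirty = false;
            }

            int boundShader = -1;
            foreach (var component in _components)
            {
                if (component.ShaderKey != boundShader)
                {
                    // BindShader(component.ShaderKey);      // GL/DX-specific call goes here
                    boundShader = component.ShaderKey;
                }
                component.Draw();
            }
        }
    }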

  • Help me choose between two software architectures

    - by alex
    // stupid title, but I could not think of anything smarter
    I have some code (see below - sorry it's long, but it's very, very simple):

    namespace Option1
    {
        class AuxClass1
        {
            string _field1;
            public string Field1
            {
                get { return _field1; }
                set { _field1 = value; }
            }
            // other fields; maybe many fields, maybe several properties

            public void Method1() { /* some action */ }
            public void Method2() { /* some action 2 */ }
        }

        class MainClass
        {
            AuxClass1 _auxClass;
            public AuxClass1 AuxClass
            {
                get { return _auxClass; }
                set { _auxClass = value; }
            }

            public MainClass()
            {
                _auxClass = new AuxClass1();
            }
        }
    }

    namespace Option2
    {
        class AuxClass1
        {
            string _field1;
            public string Field1
            {
                get { return _field1; }
                set { _field1 = value; }
            }
            // other fields; maybe many fields, maybe several properties

            public void Method1() { /* some action */ }
            public void Method2() { /* some action 2 */ }
        }

        class MainClass
        {
            AuxClass1 _auxClass;
            public string Field1
            {
                get { return _auxClass.Field1; }
                set { _auxClass.Field1 = value; }
            }

            public void Method1() { _auxClass.Method1(); }
            public void Method2() { _auxClass.Method2(); }

            public MainClass()
            {
                _auxClass = new AuxClass1();
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // Option1
            Option1.MainClass mainClass1 = new Option1.MainClass();
            mainClass1.AuxClass.Field1 = "string1";
            mainClass1.AuxClass.Method1();
            mainClass1.AuxClass.Method2();

            // Option2
            Option2.MainClass mainClass2 = new Option2.MainClass();
            mainClass2.Field1 = "string2";
            mainClass2.Method1();
            mainClass2.Method2();

            Console.ReadKey();
        }
    }

    Which option (Option1 or Option2) do you prefer? In which cases should I use Option1 or Option2? Is there a special name for Option1 or Option2 (composition, aggregation)?

  • Bringing in New Architecture During Maintenance on Legacy Systems

    - by Mike L.
    I have been tasked with adding some new features to a legacy ASP.NET MVC2 project. The codebase is a disaster and I want to write these new features with some thought behind the implementation and not just throw these new features into the mess. I would like to introduce things like dependency injection and the orchestrator pattern; just to the code that I am going to write. I don't have enough time to try to refactor the entire system. Is it OK to not be consistent with the rest of the codebase and add new features following different design principles? Should I not introduce new patterns and just get the features implemented? I feel like it might be confusing to the next person to see parts of the system using a design that other parts are not following.

  • Architecture: am I doing things right?

    - by Jeremy D
    I'm trying to use a '~classic' layered architecture with .NET and Entity Framework. We are starting from a legacy database which is a bit of a mess:
    - Inconsistent naming
    - Unneeded views (views referencing other views, select * views, etc.)
    - Aggregated columns
    - Potatoes and carrots in the same table
    - etc.
    So I ended up fully isolating my database structure from my domain model. To do so, the EF entities are hidden from the presentation layer. The goal is to permit easier database refactoring while lowering its impact on the applications. I'm now facing a lot of challenges and I'm starting to ask myself if I'm doing things right. My domain model is highly volatile; it keeps evolving with the apps as new field needs arise. Its complexity keeps rising, and the classes it contains are starting to get a lot of properties. Creating an include strategy and reprojecting it onto EF is very tricky (my domain objects don't have any kind of lazy/eager-loading relationship properties): DomainInclude<Domain.Model.Bar>.Include("Customers").Include("Customers.Friends") has to be translated into IFooContext.Bars.Include(...).Include(...).Where(...). Some frameworks also break the isolation (DevExpress grids need either XPO or IQueryable for filtering and paging large data sets). I'm starting to ask myself:
    - Is the isolation of the EF auto-generated entities an unneeded cost? Should I allow frameworks to hit IQueryable? Slippery slope to hell? (It's really hard to isolate the DevExpress framework - any successful experience?)
    - Is the high volatility of my domain model normal?
    Did you have similar difficulties? Any advice based on experience?
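
    For the include-strategy part, a small sketch of how the translation can stay mechanical (class names are invented; the string-based Include() extension shown is the one recent EF versions provide for IQueryable<T>): the domain layer only accumulates path strings, and the EF-facing repository applies them just before the Where/projection.

    using System.Collections.Generic;
    using System.Data.Entity;     // string-based Include() extension for IQueryable<T>
    using System.Linq;

    // Domain layer: a technology-agnostic description of what must be loaded.
    public class IncludeSpec<TDomain>
    {
        private readonly List<string> _paths = new List<string>();
        public IEnumerable<string> Paths { get { return _paths; } }
        public IncludeSpec<TDomain> Include(string path) { _paths.Add(path); return this; }
    }

    // Data layer: applies the paths to the EF query before materializing/projecting.
    public static class EfIncludeTranslator
    {
        public static IQueryable<TEntity> ApplyIncludes<TEntity, TDomain>(
            IQueryable<TEntity> query, IncludeSpec<TDomain> spec) where TEntity : class
        {
            foreach (var path in spec.Paths)
                query = query.Include(path);          // e.g. "Customers.Friends"
            return query;
        }
    }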

  • C# class architecture for REST services

    - by user15370
    Hi. I am integrating with a set of REST services exposed by our partner. The unit of integration is at the project level, meaning that for each project created on our partner's side of the fence they will expose a unique set of REST services. To be more clear, assume there are two projects - project1 and project2. The REST services available to access the project data would then be:

    /project1/search/getstuff?etc...
    /project1/analysis/getstuff?etc...
    /project1/cluster/getstuff?etc...
    /project2/search/getstuff?etc...
    /project2/analysis/getstuff?etc...
    /project2/cluster/getstuff?etc...

    My task is to wrap these services in a C# class to be used by our app developer. I want to make it simple for the app developer and am thinking of providing something like the following class:

    class ProjectClient
    {
        SearchClient _searchclient;
        AnalysisClient _analysisclient;
        ClusterClient _clusterclient;

        string Project { get; set; }

        ProjectClient(string _project)
        {
            Project = _project;
        }
    }

    SearchClient, AnalysisClient and ClusterClient are my classes supporting the respective services shown above. The problem with this approach is that ProjectClient will need to provide public methods for each of the APIs exposed by SearchClient, etc.:

    public void SearchGetStuff()
    {
        _searchclient.getStuff();
    }

    Any suggestions on how I could architect this better?
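
    One alternative (a sketch only - it assumes SearchClient, AnalysisClient and ClusterClient can take the project name in their constructors, which is an assumption, not something stated above) is to expose the three area clients as read-only properties instead of re-wrapping every one of their methods. Adding a method to SearchClient then needs no change to ProjectClient.

    public class ProjectClient
    {
        public string Project { get; private set; }

        // The app developer reaches the services as client.Search.GetStuff(...) etc.
        public SearchClient Search { get; private set; }
        public AnalysisClient Analysis { get; private set; }
        public ClusterClient Cluster { get; private set; }

        public ProjectClient(string project)
        {
            Project = project;
            // Each area client is assumed to build URLs of the form "/{project}/{area}/...".
            Search = new SearchClient(project);
            Analysis = new AnalysisClient(project);
            Cluster = new ClusterClient(project);
        }
    }

    // Usage by the app developer:
    //   var client = new ProjectClient("project1");
    //   client.Search.GetStuff(...);
    //   client.Analysis.GetStuff(...);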

  • Software architecture for two similar classes which require different input parameters for the same method

    - by I Like to Code
    I am writing code to simulate a supply chain. The supply chain can be simulated in either an intermediate-stocking or a cross-docking configuration, so I wrote two simulator objects, IstockSimulator and XdockSimulator. Since the two objects share certain behaviors (e.g. making shipments, demand arriving), I wrote an abstract simulator object AbstractSimulator which is a parent class of the two simulator objects. The abstract simulator object has a method runSimulation() which takes an input parameter of class SimulationParameters. Up till now, the simulation parameters only contain fields that are common to both simulator objects, such as randomSeed, simulationStartPeriod and simulationEndPeriod. However, I now want to include fields that are specific to the type of simulation being run, i.e. an IstockSimulationParameters class for an intermediate-stocking simulation, and an XdockSimulationParameters class for a cross-docking simulation. My current idea is to take the runSimulation() method out of the AbstractSimulator class, and instead put a runSimulation(IstockSimulationParameters) method in the IstockSimulator class and a runSimulation(XdockSimulationParameters) method in the XdockSimulator class. I am worried, however, that this approach will lead to code duplication. What should I do?
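
    One option that keeps the shared logic in one place is to make the base class generic in its parameter type. A sketch under assumed member names (the period loop, ReorderPoint and DockDoorCount below are invented just to show where the configuration-specific fields would be used):

    public abstract class SimulationParameters
    {
        public int RandomSeed { get; set; }
        public int SimulationStartPeriod { get; set; }
        public int SimulationEndPeriod { get; set; }
    }

    public class IstockSimulationParameters : SimulationParameters
    {
        public int ReorderPoint { get; set; }             // hypothetical istock-only field
    }

    public class XdockSimulationParameters : SimulationParameters
    {
        public int DockDoorCount { get; set; }            // hypothetical xdock-only field
    }

    public abstract class AbstractSimulator<TParams> where TParams : SimulationParameters
    {
        // Shared behaviour (shipments, demand arrivals, ...) stays here, written once.
        public void RunSimulation(TParams parameters)
        {
            for (int t = parameters.SimulationStartPeriod; t <= parameters.SimulationEndPeriod; t++)
                StepPeriod(t, parameters);
        }

        protected abstract void StepPeriod(int period, TParams parameters);
    }

    public class IstockSimulator : AbstractSimulator<IstockSimulationParameters>
    {
        protected override void StepPeriod(int period, IstockSimulationParameters p)
        {
            // intermediate-stocking logic can use p.ReorderPoint here
        }
    }

    public class XdockSimulator : AbstractSimulator<XdockSimulationParameters>
    {
        protected override void StepPeriod(int period, XdockSimulationParameters p)
        {
            // cross-docking logic can use p.DockDoorCount here
        }
    }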

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrence of event type T since date D." We are trying to make two basic decisions:

    What to store? Storing every event vs. only storing aggregates:
    - (Event log style) log every event and count them later, vs.
    - (Time-series style) store a single aggregated "count of event E for date D" for every day

    Where to store the data:
    - In a relational database (particularly MySQL)
    - In a non-relational (NoSQL) database
    - In flat log files (collected centrally over the network via syslog-ng)

    What is standard practice, and where can I read more about comparing the different types of systems?

    Additional details:
    - The total event stream is large, potentially hundreds of thousands of entries per day
    - But our current need is only to count certain types of events within it
    - We don't necessarily need real-time access to the raw data or aggregation results

    IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX Way, but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.

  • PyQt application architecture

    - by L. De Leo
    I'm trying to give a sound structure to a PyQt application that implements a card game. So far I have the following classes:
    - Ui_Game: this describes the UI, of course, and is responsible for reacting to the events emitted by my CardWidget instances
    - MainController: this is responsible for managing the whole application: setup and all the subsequent states of the application (like starting a new hand, displaying notification of state changes on the UI, or ending the game)
    - GameEngine: this is a set of classes that implement the whole game logic

    Now, the way I concretely coded this in Python is the following:

    class CardWidget(QtGui.QLabel):
        def __init__(self, filename, *args, **kwargs):
            QtGui.QLabel.__init__(self, *args, **kwargs)
            self.setPixmap(QtGui.QPixmap(':/res/res/' + filename))

        def mouseReleaseEvent(self, ev):
            self.emit(QtCore.SIGNAL('card_clicked'), self)

    class Ui_Game(QtGui.QWidget):
        def __init__(self, window, *args, **kwargs):
            QtGui.QWidget.__init__(self, *args, **kwargs)
            self.setupUi(window)
            self.controller = None

        def place_card(self, card):
            cards_on_table = self.played_cards.count() + 1
            print cards_on_table
            if cards_on_table <= 2:
                self.played_cards.addWidget(card)
            if cards_on_table == 2:
                self.controller.play_hand()

    class MainController(object):
        def __init__(self):
            self.app = QtGui.QApplication(sys.argv)
            self.window = QtGui.QMainWindow()
            self.ui = Ui_Game(self.window)
            self.ui.controller = self
            self.game_setup()

    Is there a better way than injecting the controller into the Ui_Game class via Ui_Game.controller? Or am I totally off track here?

  • Gathering application architecture

    - by userbb
    Suppose there is a system for gathering info about system activities. There is a client part with an interface, and there are agent parts that are installed on each machine. I estimate that there could be at most 20 computers now; later it could be more, like 50. My candidate solutions:
    1. Each agent stores data in a local database, e.g. SQLite. There is also a service that a client can use to query the data, so if a client wants to display data for 50 computers, it sends a query to 50 computers. I'm on this solution now, but maybe it's totally wrong.
    2. Each agent stores data in a local database (I don't know a good one for that). There is also a server (main database), and the local databases are synchronized with the server. In this case a client connects to the main database to display data.
    3. Each agent sends data in real time to the main database - the same as point 2, but with no sync.
    4. Like point 3, but the agent buffers data in a local database and sends it in small chunks to the main database.
    What is the best approach?

  • Mobile (Client) to Amazon S3 (Server) - Architecture

    - by wasabii
    Let's start off with the problem statement: my iOS application has a login form. When the user logs in, a call is made to my API and access is granted or denied. If access was granted, I want the user to be able to upload pictures to his account and/or manage them. As storage I've picked Amazon S3, and I figured it'd be a good idea to have one bucket called "myappphotos", for instance, which contains lots of folders. The folder names are hashes of a user's email and a secret key, so every user has his own, unique folder in my Amazon S3 bucket. Since I've only recently started working with AWS, here's my question: what are the best practices for setting up a system like this? I want the user to be able to upload pictures directly to Amazon S3, but of course I cannot hard-code the access key. So I need my API to somehow talk to Amazon and request an access token of sorts - only for the particular folder that belongs to the user I'm making the request for. Can anyone help me out and/or point me to some sources where a similar problem has been addressed? I don't think I'm the first one, and the Amazon documentation is so extensive that I don't really know where to start looking. Thanks a lot!
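
    One hedged possibility (sketched against the AWS SDK for .NET; the bucket name follows the question, everything else - class names, expiry, content type - is illustrative): after the login call succeeds, the API, which is the only place holding AWS credentials, hands the app a short-lived pre-signed PUT URL scoped to that user's folder (key prefix). The iOS app then uploads directly to S3 with a plain HTTP PUT to that URL, and no access key ever ships with the client.

    using System;
    using Amazon.S3;
    using Amazon.S3.Model;

    public class PhotoUploadUrlService
    {
        private readonly IAmazonS3 _s3;   // constructed server-side with the real credentials

        public PhotoUploadUrlService(IAmazonS3 s3) { _s3 = s3; }

        public string CreateUploadUrl(string userFolderHash, string fileName)
        {
            var request = new GetPreSignedUrlRequest
            {
                BucketName = "myappphotos",
                Key = userFolderHash + "/" + fileName,     // one "folder" (key prefix) per user
                Verb = HttpVerb.PUT,
                Expires = DateTime.UtcNow.AddMinutes(15),  // short-lived on purpose
                ContentType = "image/jpeg"
            };
            return _s3.GetPreSignedURL(request);           // the app PUTs the photo body to this URL
        }
    }

    Temporary credentials from AWS STS are the other common route when the app needs broader, multi-request access to its prefix rather than one URL per upload.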
