Search Results

Search found 18314 results on 733 pages for 'document architecture'.


  • How to design a high-level application protocol for metadata syncing between devices and server?

    - by Jaanus
    I am looking for guidance on how best to think about designing a high-level application protocol to sync metadata between end-user devices and a server.

    My goal: the user can interact with the application data on any device, or on the web. The purpose of this protocol is to communicate changes made on one endpoint to the other endpoints through the server, and to ensure all devices maintain a consistent picture of the application data. If the user makes changes on one device or on the web, the protocol pushes the data to the central repository, from where other devices can pull it.

    Some other design thoughts:
    - I call it "metadata syncing" because the payloads will be quite small, in the form of object IDs and small metadata about those IDs. When client endpoints retrieve new metadata over this protocol, they fetch the actual object data from an external source based on that metadata. Fetching the "real" object data is out of scope; I'm only talking about metadata syncing here.
    - I'm using HTTP for transport and JSON as the payload container. The question is basically about how best to design the JSON payload schema.
    - I want this to be easy to implement and maintain on the web and across desktop and mobile devices. The best approach seems to be simple timer- or event-based HTTP request/response without any persistent channels. Also, you should not need a PhD to read it, and I want my spec to fit on 2 pages, not 200.
    - Authentication and security are out of scope for this question: assume that the requests are secure and authenticated.
    - The goal is eventual consistency of data on devices; it is not entirely realtime. For example, the user can make changes on one device while offline. When going online again, the user would perform a "sync" operation to push local changes and retrieve remote changes.

    Having said that, the protocol should support both of these modes of operation:
    - Starting from scratch on a device, it should be able to pull the whole metadata picture and "sync as you go".
    - When looking at the data on two devices side by side and making changes, it should be easy to push those changes as short individual messages which the other device can receive near-realtime (subject to when it decides to contact the server for a sync).

    As a concrete example, you can think of Dropbox (it is not what I'm working on, but it helps to understand the model): on a range of devices, the user can manage files and folders: move them around, create new ones, remove old ones, etc. In my context the "metadata" would be the file and folder structure, but not the actual file contents. Metadata fields would be something like file/folder name and time of modification (all devices should see the same time of modification). Another example is IMAP. I have not read the protocol, but my goals (minus actual message bodies) are the same.

    It feels like there are two grand approaches to how this is done:
    - Transactional messages: each change in the system is expressed as a delta, and endpoints communicate with those deltas. Example: DVCS changesets.
    - REST: communicating the object graph as a whole or in part, without worrying so much about the individual atomic changes.

    What I would like in the answers:
    - Is there anything important I left out above? Constraints, goals?
    - What is some good background reading on this? (I realize this is what many computer science courses talk about at great length and in detail... I am hoping to short-circuit it by looking at some crash course or nuggets.)
    - What are some good examples of such protocols that I could model after, or even use out of the box? (I mention Dropbox and IMAP above... I should probably read the IMAP RFC.)
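
    A hedged sketch of one possible payload shape, expressed here as C# DTOs that would serialize to JSON (every name below is an assumption for illustration, not part of the question): the client sends the last sync token it has seen plus its local changes, and the server replies with any newer changes and a new token. The same message shape can serve both the full "sync as you go" pull (empty token, paged results) and small near-realtime pushes.

        using System;
        using System.Collections.Generic;

        public class MetadataChange
        {
            public string ObjectId { get; set; }       // ID of the object the metadata describes
            public string ChangeType { get; set; }     // "create" | "update" | "delete"
            public Dictionary<string, string> Fields { get; set; }  // e.g. name
            public DateTime ModifiedAt { get; set; }   // server-assigned, so all devices agree
        }

        public class SyncRequest                       // e.g. POST /sync
        {
            public string DeviceId { get; set; }
            public string LastSyncToken { get; set; }  // empty on a device starting from scratch
            public List<MetadataChange> LocalChanges { get; set; }
        }

        public class SyncResponse
        {
            public string NewSyncToken { get; set; }   // opaque server-side cursor
            public List<MetadataChange> RemoteChanges { get; set; }
            public bool MoreAvailable { get; set; }    // page through on the initial sync
        }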

    Read the article

  • Allow for modular development while still running in same JVM?

    - by Marcus
    Our current app runs in a single JVM. We are now splitting the app into separate logical services where each service runs in its own JVM. The split is being done to allow a single service to be modified and deployed without impacting the entire system; this reduces the need to QA the entire system - we just need to QA the interaction with the service being changed. For inter-service communication we use a combination of REST, an MQ system bus, and database views. What I don't like about this:
    - REST means we have to marshal data to/from XML
    - DB views couple the systems together, which defeats the whole concept of separate services
    - The MQ / system bus is added complexity
    - There is inevitably some code duplication between services
    - We have to set up n JBoss server configurations, do n deployments, maintain n setup scripts, etc., etc.
    Is there a better way to structure an internal application to allow modular development and deployment while allowing the app to run in a single JVM (and achieving the associated benefits)?

    Read the article

  • One executable with cmd-line params or just many satellite executables?

    - by Nikos Baxevanis
    I'm designing an application back-end. For now, it is a .NET process (a console application) which hosts various communication frameworks such as Agatha and NServiceBus. I need to periodically update my datastore with values coming from the application while it's running. I found three possible ways:
    - Accept command-line arguments, so I can call my console app with -update.
    - On startup, have a background thread periodically invoke the update method.
    - Create an updater.exe app which does the updates, but then I will have code duplication, since in some way it will need to query the data from the source in order to save it to the datastore.
    Which one is better?
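
    A hedged sketch combining the first two options (the method names and the 15-minute interval are assumptions, not from the question): the same executable performs a one-off update when called with -update, and otherwise starts a background timer alongside the normal hosts.

        using System;
        using System.Linq;
        using System.Threading;

        class Program
        {
            static void Main(string[] args)
            {
                if (args.Contains("-update", StringComparer.OrdinalIgnoreCase))
                {
                    RunUpdate();                     // one-off run, e.g. from a scheduled task
                    return;
                }

                // Normal start-up: host Agatha / NServiceBus here, then update periodically.
                using (var timer = new Timer(_ => RunUpdate(), null,
                                             TimeSpan.Zero, TimeSpan.FromMinutes(15)))
                {
                    Console.WriteLine("Back-end running. Press Enter to exit.");
                    Console.ReadLine();
                }
            }

            static void RunUpdate()
            {
                // query the running application's values and write them to the datastore
            }
        }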

    Read the article

  • Does MVC replace traditional manually created UI, BLL, DAL ?

    - by used2could
    I'm used to creating the UI, BLL, and DAL by hand (sometimes I've used LINQ to SQL or SubSonic for the DAL). I've done several small projects using MVC since its release. On these projects I've still continued to write a BLL and DAL by hand and then incorporate those into the MVC models/controllers. Looking to optimize my time on projects, this seems like overkill and a potential waste of time. My question is: would it be acceptable to roll a DAL such as SubSonic and use it directly in the models/controllers of my MVC web app? The models and controllers would then act as the BLL. I just see this as a major time saver, not having to worry about another tier. (Agree ? "Please state why" : "Make argument")

    Read the article

  • ASP.NET MVC & Web Services

    - by ANaimi
    Hello, Does adding a Web Service to my ASP.NET MVC project break the whole concept of MVC? That Web Service (WCF) depends on the Model layer from my MVC project to communicate with the back-end (so it looks to me like it needs to be part of the MVC solution). Should I add this to the Controller or Model layer?

    Read the article

  • Using pointers, references, handles to generic datatypes, as generic and flexible as possible

    - by Patrick
    In my application I have lots of different data types, e.g. Car, Bicycle, Person, ... (they're actually other data types, but this is just for the example). Since I also have quite some 'generic' code in my application, and the application was originally written in C, pointers to Car, Bicycle, Person, ... are often passed as void pointers to these generic modules, together with an identification of the type, like this:

        Car myCar;
        ShowNiceDialog((void *)&myCar, DATATYPE_CAR);

    The ShowNiceDialog method then uses meta-information (functions that map DATATYPE_CAR to interfaces that get the actual data out of Car) to get information about the car, based on the given data type. That way, the generic logic only has to be written once, and not again for every new data type. Of course, in C++ you could make this much easier by using a common root class, like this:

        class RootClass
        {
        public:
            virtual string getName() const = 0;
        };

        class Car : public RootClass { ... };

        void ShowNiceDialog(RootClass *root);

    The problem is that in some cases we don't want to store the data type in a class, but in a totally different format to save memory. In some cases we have hundreds of millions of instances that we need to manage in the application, and we don't want to make a full class for every instance. Suppose we have a data type with two characteristics:
    - a quantity (double, 8 bytes)
    - a boolean (1 byte)
    Although we only need 9 bytes to store this information, putting it in a class means that we need at least 16 bytes (because of padding), and with the v-pointer we possibly even need 24 bytes. For hundreds of millions of instances, every byte counts (I have a 64-bit variant of the application and in some cases it needs 6 GB of memory). The void-pointer approach has the advantage that we can encode almost anything in a void pointer and decide how to use it when we want information from it (use it as a real pointer, as an index, ...), but at the cost of type safety. Templated solutions don't help, since the generic logic forms quite a big part of the application and we don't want to templatize all of it. Additionally, the data model can be extended at run time, which also means that templates won't help. Are there better (and type-safer) ways to handle this than a void pointer? Any references to frameworks, whitepapers, or research material regarding this?

    Read the article

  • Null-free "maps": Is a callback solution slower than tryGet()?

    - by David Moles
    In comments to "How to implement List, Set, and Map in null free design?", Steven Sudit and I got into a discussion about using a callback, with handlers for "found" and "not found" situations, vs. a tryGet() method, taking an out parameter and returning a boolean indicating whether the out parameter had been populated. Steven maintained that the callback approach was more complex and almost certain to be slower; I maintained that the complexity was no greater and the performance at worst the same. But code speaks louder than words, so I thought I'd implement both and see what I got. The original question was fairly theoretical with regard to language ("And for argument sake, let's say this language don't even have null") -- I've used Java here because that's what I've got handy. Java doesn't have out parameters, but it doesn't have first-class functions either, so style-wise, it should suck equally for both approaches. (Digression: As far as complexity goes: I like the callback design because it inherently forces the user of the API to handle both cases, whereas the tryGet() design requires callers to perform their own boilerplate conditional check, which they could forget or get wrong. But having now implemented both, I can see why the tryGet() design looks simpler, at least in the short term.) First, the callback example: class CallbackMap<K, V> { private final Map<K, V> backingMap; public CallbackMap(Map<K, V> backingMap) { this.backingMap = backingMap; } void lookup(K key, Callback<K, V> handler) { V val = backingMap.get(key); if (val == null) { handler.handleMissing(key); } else { handler.handleFound(key, val); } } } interface Callback<K, V> { void handleFound(K key, V value); void handleMissing(K key); } class CallbackExample { private final Map<String, String> map; private final List<String> found; private final List<String> missing; private Callback<String, String> handler; public CallbackExample(Map<String, String> map) { this.map = map; found = new ArrayList<String>(map.size()); missing = new ArrayList<String>(map.size()); handler = new Callback<String, String>() { public void handleFound(String key, String value) { found.add(key + ": " + value); } public void handleMissing(String key) { missing.add(key); } }; } void test() { CallbackMap<String, String> cbMap = new CallbackMap<String, String>(map); for (int i = 0, count = map.size(); i < count; i++) { String key = "key" + i; cbMap.lookup(key, handler); } System.out.println(found.size() + " found"); System.out.println(missing.size() + " missing"); } } Now, the tryGet() example -- as best I understand the pattern (and I might well be wrong): class TryGetMap<K, V> { private final Map<K, V> backingMap; public TryGetMap(Map<K, V> backingMap) { this.backingMap = backingMap; } boolean tryGet(K key, OutParameter<V> valueParam) { V val = backingMap.get(key); if (val == null) { return false; } valueParam.value = val; return true; } } class OutParameter<V> { V value; } class TryGetExample { private final Map<String, String> map; private final List<String> found; private final List<String> missing; public TryGetExample(Map<String, String> map) { this.map = map; found = new ArrayList<String>(map.size()); missing = new ArrayList<String>(map.size()); } void test() { TryGetMap<String, String> tgMap = new TryGetMap<String, String>(map); for (int i = 0, count = map.size(); i < count; i++) { String key = "key" + i; OutParameter<String> out = new OutParameter<String>(); if (tgMap.tryGet(key, out)) { found.add(key + ": " + out.value); } else { 
missing.add(key); } } System.out.println(found.size() + " found"); System.out.println(missing.size() + " missing"); } } And finally, the performance test code: public static void main(String[] args) { int size = 200000; Map<String, String> map = new HashMap<String, String>(); for (int i = 0; i < size; i++) { String val = (i % 5 == 0) ? null : "value" + i; map.put("key" + i, val); } long totalCallback = 0; long totalTryGet = 0; int iterations = 20; for (int i = 0; i < iterations; i++) { { TryGetExample tryGet = new TryGetExample(map); long tryGetStart = System.currentTimeMillis(); tryGet.test(); totalTryGet += (System.currentTimeMillis() - tryGetStart); } System.gc(); { CallbackExample callback = new CallbackExample(map); long callbackStart = System.currentTimeMillis(); callback.test(); totalCallback += (System.currentTimeMillis() - callbackStart); } System.gc(); } System.out.println("Avg. callback: " + (totalCallback / iterations)); System.out.println("Avg. tryGet(): " + (totalTryGet / iterations)); } On my first attempt, I got 50% worse performance for callback than for tryGet(), which really surprised me. But, on a hunch, I added some garbage collection, and the performance penalty vanished. This fits with my instinct, which is that we're basically talking about taking the same number of method calls, conditional checks, etc. and rearranging them. But then, I wrote the code, so I might well have written a suboptimal or subconsicously penalized tryGet() implementation. Thoughts?

    Read the article

  • How do CUDA devices handle immediate operands?

    - by Jack Lloyd
    Compiling CUDA code with immediate (integer) operands, are they held in the instruction stream, or are they placed into memory? Specifically I'm thinking about 24 or 32 bit unsigned integer operands. I haven't been able to find information about this in any of the CUDA documentation I've examined so far. So references to any documents on specific uarch details like this would be perfect, as I don't currently have a good model for how CUDA works at this level.

    Read the article

  • How to extend a large website to an iPhone app?

    - by xoail
    I am trying to create an iPhone app for a large website (as big as amazon.com), and it involves using cookies and whatnot to get authenticated via the Apache interceptor and access the web services exposed by the main website. For that, I am looking for strategies to go about developing it. I am new to iPhone development and I am mostly looking for some architectural guidance. Does anyone know how services like eBay and Amazon work seamlessly across the website and the iPhone app?

    Read the article

  • c# multi threaded file processing

    - by user177883
    There is a folder that contains thousands of small text files. I aim to parse and process all of them while more files are being populated into the folder. My intention is to multithread this operation, as the single-threaded prototype took 6 minutes to process 1000 files. I'd like to have reader and writer thread(s) as follows: while the reader thread(s) are reading the files, I'd like to have writer thread(s) process them. Once the reader has started reading a file, I'd like to mark it as being processed, for example by renaming it; once it's read, rename it to completed. How should I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure should I use that would avoid locks? Do you have a better approach to this scheme that you'd like to share?
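
    A hedged sketch of the queue-based approach (the folder path, the ".processing"/".completed" suffixes, and the worker count are assumptions): one reader task claims each file by renaming it and queues it; a few worker tasks consume the queue; BlockingCollection handles the locking internally.

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        class FolderProcessor
        {
            static void Main()
            {
                var queue = new BlockingCollection<string>(100);   // bounded to limit memory use
                string folder = @"C:\incoming";                    // placeholder path

                // Reader: claim each file by renaming it, then queue it for processing.
                var reader = Task.Run(() =>
                {
                    foreach (string file in Directory.EnumerateFiles(folder, "*.txt"))
                    {
                        string claimed = file + ".processing";
                        try { File.Move(file, claimed); }
                        catch (IOException) { continue; }          // another pass already claimed it
                        queue.Add(claimed);
                    }
                    queue.CompleteAdding();
                });

                // Writers: parse each claimed file, then mark it completed.
                var workers = new Task[4];
                for (int i = 0; i < workers.Length; i++)
                {
                    workers[i] = Task.Run(() =>
                    {
                        foreach (string claimed in queue.GetConsumingEnumerable())
                        {
                            string text = File.ReadAllText(claimed);
                            // ... parse and process 'text' here ...
                            File.Move(claimed, claimed.Replace(".processing", ".completed"));
                        }
                    });
                }

                reader.Wait();
                Task.WaitAll(workers);
            }
        }

    Re-running the reader pass on a timer (or from a FileSystemWatcher event) would pick up files that arrive later; since only un-suffixed files are ever claimed, nothing gets processed twice.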

    Read the article

  • Spring security oauth2 provider to secure non-spring api

    - by user1241320
    I'm trying to set up an OAuth 2.0 provider that should "secure" our RESTful API using spring-security-oauth. Being a Spring fan, I thought it could be the quicker solution. The main point is that this RESTful thing is not a Spring-based webapp. My boss says the OAuth provider should be a separate application, but I'm starting to doubt that (I got this impression from reading the spring-security-oauth documentation). I'm also new here, so I haven't really got my hands into this other (Jersey-powered) RESTful API (the core of our business). Any help/hint will be much appreciated.

    Read the article

  • Can I Automap a tree hierarchy with Fluent NHibernate?

    - by NakChak
    Is it possible to auto map a simple nested object structure? Something like this:

        public class Employee : Entity
        {
            public Employee()
            {
                this.Manages = new List<Employee>();
            }

            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
            public virtual bool IsLineManager { get; set; }
            public virtual Employee Manager { get; set; }
            public virtual IList<Employee> Manages { get; set; }
        }

    It causes the following error at run time:

        Repeated column in mapping for collection: SharpKtulu.Core.Employee.Manages column: EmployeeFk

    Is it possible to automap this sort of structure, or do I have to override the auto mapper for it?
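
    For what it's worth, a rough sketch of the override route, under the assumption that both ends of the self-reference should share one foreign-key column (the "ManagerFk" column name is invented, and the exact Fluent NHibernate calls may differ between versions):

        var model = AutoMap.AssemblyOf<Employee>()
            .Override<Employee>(map =>
            {
                // Both sides of the parent/child relationship map to the same column,
                // and the collection is marked inverse so only Manager writes it.
                map.References(x => x.Manager).Column("ManagerFk");
                map.HasMany(x => x.Manages).KeyColumn("ManagerFk").Inverse();
            });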

    Read the article

  • Looking for a wsdl-based scaffolding framework.

    - by daniel.balla
    I am looking for a WSDL scaffolding framework, or even better, a POCO scaffolding framework. I need something like Dynamic Data, but running on WSDL metadata instead of a database. I am currently using Office InfoPath, which generates decent forms based on WSDL, but I would like to have this in my app, generated at runtime. Any ideas would be appreciated. Thanks.

    Read the article

  • Business Object design

    - by Dan
    I have a question about how I set up my BOs. I set up the BOs to contain all of the properties of the object as well as the business logic to satisfy the business rules. I decided to make all of the methods static, but I'm not sure that was the right decision. Someone told me to split my BOs into an entity object of just properties and a BO of just the methods that implement the business rules, and not to make those methods static. Does anyone have experience with the way I've set this up? Any examples of how it might work better for future growth? Thanks!
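
    A minimal sketch of the split that was suggested (the Order/OrderService names are made up for illustration): a plain entity that only carries state, and a separate business class with instance methods, which keeps the rules testable and lets them be swapped or mocked later.

        using System;

        public class Order                      // entity: properties only
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
            public bool IsApproved { get; set; }
        }

        public class OrderService               // business object: rules only, non-static
        {
            private readonly decimal approvalLimit;

            public OrderService(decimal approvalLimit)
            {
                this.approvalLimit = approvalLimit;
            }

            public void Approve(Order order)
            {
                if (order.Total > approvalLimit)
                    throw new InvalidOperationException("Order exceeds the approval limit.");
                order.IsApproved = true;
            }
        }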

    Read the article

  • I wrote my own web framework: now, how do I keep it in sync with applications? Must I use versions?

    - by Daniel Koch
    ... and I built the first web application using it; now I'm going to create the second. In this first web application I enhanced the framework's core library with new things and promptly updated the framework branch. I'm using Bazaar to keep the framework and the web application committed. In the beginning the application was a full branch of the framework source tree; now I'm updating the framework manually at every change to core files (copying changed files from the web app to the framework's branch). With this second web application that I'm going to create, I need to know which versions (or revisions) the application is based on. If I find a bug in this version, I can fix it and then sync the files with the first web application without worrying: the functions will be the same in that application. If I'm going to make changes in the core (new behavior, new functions in the library, or something new in the source tree), that must be named as a "new version". What's the best way to do this? Because I'm using a distributed version control system (Bazaar), I'm not dealing with VERSIONS, but with revision numbers that change every time. Please refresh my mind with new ideas.

    Read the article

  • MyController class must produce a class according to the enum type

    - by programmerist
    GenoTipController must produce a class according to the enum type. I have 3 classes: _Company, _Muayene, _Radyoloji. I also have a CompanyView class with a GetPersonel method. If you look at GenoTipController, my code needs refactoring. I need a class to be produced according to the enum type. For example, for DataModelType.Radyoloji it must return radyoloji = new Radyoloji(). Must everything be one switch/case?

        public class GenoTipController
        {
            public _Company GenerateCompany(DataModelType modeltype)
            {
                _Company company = null;
                switch (modeltype)
                {
                    case DataModelType.Radyoloji: break;
                    case DataModelType.Satis: break;
                    case DataModelType.Muayene: break;
                    case DataModelType.Company: company = new Company(); break;
                    default: break;
                }
                return company;
            }

            public _Muayene GenerateMuayene(DataModelType modeltype)
            {
                _Muayene muayene = null;
                switch (modeltype)
                {
                    case DataModelType.Radyoloji: break;
                    case DataModelType.Satis: break;
                    case DataModelType.Muayene: muayene = new Muayene(); break;
                    case DataModelType.Company: break;
                    default: break;
                }
                return muayene;
            }

            public _Radyoloji GenerateRadyoloji(DataModelType modeltype)
            {
                _Radyoloji radyoloji = null;
                switch (modeltype)
                {
                    case DataModelType.Radyoloji: radyoloji = new Radyoloji(); break;
                    case DataModelType.Satis: break;
                    case DataModelType.Muayene: break;
                    case DataModelType.Company: break;
                    default: break;
                }
                return radyoloji;
            }
        }

        public class CompanyView
        {
            public static List GetPersonel()
            {
                GenoTipController controller = new GenoTipController();
                _Company company = controller.GenerateCompany(DataModelType.Company);
                return company.GetPersonel();
            }
        }

        public enum DataModelType
        {
            Radyoloji,
            Satis,
            Muayene,
            Company
        }
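
    One hedged way to collapse the three switches into a single registration table (all names below other than the poster's existing types are invented; it assumes Company, Muayene, and Radyoloji derive from _Company, _Muayene, and _Radyoloji respectively and have parameterless constructors):

        using System;
        using System.Collections.Generic;

        public class GenoTipFactory
        {
            // One place to register which concrete type each enum value produces.
            private static readonly Dictionary<DataModelType, Func<object>> creators =
                new Dictionary<DataModelType, Func<object>>
                {
                    { DataModelType.Company,   () => new Company()   },
                    { DataModelType.Muayene,   () => new Muayene()   },
                    { DataModelType.Radyoloji, () => new Radyoloji() }
                };

            public T Create<T>(DataModelType modelType) where T : class
            {
                Func<object> creator;
                if (!creators.TryGetValue(modelType, out creator))
                    return null;               // or throw, depending on the desired contract
                return creator() as T;         // null if the registered type is not a T
            }
        }

        // Usage, mirroring CompanyView.GetPersonel():
        //   var factory = new GenoTipFactory();
        //   _Company company = factory.Create<_Company>(DataModelType.Company);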

    Read the article

  • List DOM Documents attributes and methods using Javascript

    - by EddyR
    Just wondering if it's possible to print and list all the methods and attributes available on the DOM document itself using JavaScript? So I would get something like:

        Document.doctype
        Document.implementation
        Document.documentElement
        Document.createElement
        Document.createDocumentFragment
        Document.createTextNode
        Document.createComment
        Document.createProcessingInstruction
        etc...

    I want to do this to test on different browsers and not have to wade through mountains of technical documents from each vendor to get accurate information.

    Read the article

  • Which is the better C# class design for dealing with read+write versus readonly

    - by DanM
    I'm contemplating two different class designs for handling a situation where some repositories are read-only while others are read-write. (I don't foresee any need for a write-only repository.)

    Class Design 1 -- provide all functionality in a base class, then expose the applicable functionality publicly in subclasses:

        public abstract class RepositoryBase
        {
            protected virtual void SelectBase() { /* implementation... */ }
            protected virtual void InsertBase() { /* implementation... */ }
            protected virtual void UpdateBase() { /* implementation... */ }
            protected virtual void DeleteBase() { /* implementation... */ }
        }

        public class ReadOnlyRepository : RepositoryBase
        {
            public void Select() { SelectBase(); }
        }

        public class ReadWriteRepository : RepositoryBase
        {
            public void Select() { SelectBase(); }
            public void Insert() { InsertBase(); }
            public void Update() { UpdateBase(); }
            public void Delete() { DeleteBase(); }
        }

    Class Design 2 -- read-write class inherits from read-only class:

        public class ReadOnlyRepository
        {
            public void Select() { /* implementation... */ }
        }

        public class ReadWriteRepository : ReadOnlyRepository
        {
            public void Insert() { /* implementation... */ }
            public void Update() { /* implementation... */ }
            public void Delete() { /* implementation... */ }
        }

    Is one of these designs clearly stronger than the other? If so, which one and why? P.S. If this sounds like a homework question, it's not, but feel free to use it as one if you want :)
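
    A third arrangement worth weighing, sketched here only as a suggestion (the SqlRepository name is invented): express the two capability sets as interfaces, so a single concrete class can be handed out as read-only or read-write depending on what the consumer is allowed to do.

        public interface IReadOnlyRepository
        {
            void Select();
        }

        public interface IReadWriteRepository : IReadOnlyRepository
        {
            void Insert();
            void Update();
            void Delete();
        }

        public class SqlRepository : IReadWriteRepository   // hypothetical concrete class
        {
            public void Select() { /* implementation... */ }
            public void Insert() { /* implementation... */ }
            public void Update() { /* implementation... */ }
            public void Delete() { /* implementation... */ }
        }

        // Consumers that must not write are given an IReadOnlyRepository reference,
        // and the compiler keeps them from calling the write methods.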

    Read the article

  • Of Models / Entities and N-tier applications

    - by Jonn
    I discovered only a month ago the folly of directly accessing entities/models from the data access layer of an n-tier app. After reading about ViewModels while studying ASP.NET MVC, I've come to understand that to make a truly extensible application, the model that the UI layer interacts with must be different from the one that the data access layer has access to. But what about the business layer? Should I have a different set of models for my business layer as well? For true separation of concerns, should I have a specific set of models that are relevant only to my business layer, so as not to mess around with any entities (possibly generated by, for example, Entity Framework or EJB) in the DAL, or would that be overkill?
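
    A small sketch of the kind of separation in question (all names are made up for illustration): the data-access entity never reaches the view; the controller copies only what the view needs into a view model. Whether the business layer also deserves its own model types is exactly the judgment call being asked about, and the same mapping pattern would apply there.

        public class CustomerEntity          // lives in the DAL (e.g. generated by an ORM)
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string PasswordHash { get; set; }   // must never leak to the UI
        }

        public class CustomerViewModel       // lives in the UI layer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class CustomerMapper   // the boundary between the layers
        {
            public static CustomerViewModel ToViewModel(CustomerEntity entity)
            {
                return new CustomerViewModel { Id = entity.Id, Name = entity.Name };
            }
        }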

    Read the article

  • java Database framework comparison

    - by user293655
    Hi, I want to create an application that synchronizes a database to multiple databases (of various types). I'm looking for a framework suitable for doing this. I was looking for something that just gets an object holding the data (like a ResultSet) and then copies that object to the destination database, or compares two sets of data. Any ideas? Thanks.

    Read the article

  • Platform Framework and Middle-ware

    - by Walidix
    I really want to pin down the meanings of these terms. This is what I know about them:
    - Platform: an environment that allows an application to run, usually composed of hardware and software, e.g. your machine with your OS installed on it. However, some software stacks are also called platforms, like the Java Platform.
    - Framework: design-oriented. That is, it defines a skeleton for a type of application. It also saves time by offering, through an API, the common functionality that such applications must implement. Moreover, a framework includes software components that manage the application life-cycle.
    - Middleware: software that offers services to an application in order to let it interoperate with other applications or support different platforms. Middleware can be part of a framework.
    Is there something wrong or missing?

    Read the article

  • Framework or design pattern for mailing all users of a webapp

    - by Todd Owen
    My app takes care of user registration (with the option to receive email announcements), and can easily handle the actual template-based rendering of email for a given user. JavaMail provides the mail transport layer. But how should I design the application layer between the business objects (e.g. User) and the mail transport? The straightforward approach would be a simple, synchronous loop: iterate through the users, queue the emails, and be done with it. "Queue" might mean sending them straight to the MTA (mail server), or to an in-memory queue to be consumed by another thread. However, I also plan to implement features like throttling the rate of emails, processing bounced emails (NDRs), and maintaining status across application restarts. My intuition is that a good design would decouple this from both the business layer and the mail transport layer as much as possible. I wondered if others had solved this problem before, but after much searching I haven't found any Java libraries which seem to fit this problem. Standalone mail apps such as James or list servers are too large in scope; packages like Spring's MailSender or Commons Email are too small in scope (being basically drop-in replacements for JavaMail). For other languages I haven't found anything appropriate either. I'm curious about how other developers have gone about adding bulk mailing to their applications.

    Read the article
