Search Results

Search found 17047 results on 682 pages for 'architecture design patt'.


  • How to structure Javascript programs in complex web applications?

    - by mixedpickles
    Hi there. I have a problem that is not easily described. I'm writing a web application that makes heavy use of jQuery and Ajax calls. I don't have experience designing the architecture of JavaScript programs, and I realize that my program is not well structured: I have too many identifiers referring to (more or less) the same thing. Let's look at an arbitrary UI widget as an example. The event handlers use DOM elements as parameters; the DOM element represents a widget in the browser. A lot of the time I use jQuery objects (basically wrappers around DOM elements) to do something with the widget - sometimes transiently, sometimes stored in a variable for later use. The Ajax calls use string identifiers for these widgets, which are processed server side. Besides that, I have a widget class whose instances represent a widget; it is instantiated through the new operator. So I have four different identifiers for the same thing, which need to be kept in sync until the page is loaded anew. This does not seem like a good thing. Any advice?
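
    One way to collapse the four identifiers into one (a hedged sketch, assuming jQuery; the URL and names are illustrative) is a wrapper object that owns all the representations, so only the Widget instance ever needs to be kept in sync:

        // One Widget object owns the string id, the jQuery wrapper and the
        // DOM element; everything else passes the Widget around.
        function Widget(id) {
            this.id = id;                    // string identifier sent in ajax calls
            this.$el = $('#' + id);          // jQuery wrapper, created once
            this.el = this.$el.get(0);       // underlying DOM element
            this.$el.data('widget', this);   // reverse lookup: DOM -> Widget
        }

        // Event handlers recover the instance instead of passing DOM elements:
        Widget.fromElement = function (el) {
            return $(el).data('widget');
        };

        // The ajax layer only ever sees the string id:
        Widget.prototype.save = function (state) {
            $.post('/widgets/update', { id: this.id, state: state });
        };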


  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given the task of developing an application to automate some aspects of stock trading. While working on the initial architecture, a database dilemma emerged. What I need is a fast database engine that can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is advice on choosing the most suitable database engine, and pointers to manuals or books that describe high-load database development. Specifics of the project are as follows: OS: Windows NT family (Server 2008 / 7). Primary platform: .NET with C#. Database structure: one table to hold primary items, and two or three tables with foreign keys to the first table to hold additional information. SELECT requirements: super-fast selection by foreign key, and by a combination of foreign key and one of the columns (presumably DATETIME). INSERT requirements: the faster the better :) If there would be a significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all of the above, please advise me on the best database for this project. Links or references to manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows into 2 tables approximately once every 30-50 milliseconds, and I'll need to run SELECTs with 0-2 WHERE clauses at a similar rate.
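
    For scale: 3-5 rows every 30-50 ms is roughly 60-170 inserts per second, which is well within reach of any mainstream engine (including the MSSQL you already know) as long as the tables are indexed for the two SELECT shapes described. A sketch of such a schema in T-SQL (table and column names are hypothetical):

        CREATE TABLE Items (
            Id        INT IDENTITY PRIMARY KEY,
            CreatedAt DATETIME NOT NULL
        );

        CREATE TABLE ItemDetails (
            Id         INT IDENTITY PRIMARY KEY,
            ItemId     INT NOT NULL REFERENCES Items(Id),
            RecordedAt DATETIME NOT NULL,
            Value      DECIMAL(18,6) NOT NULL
        );

        -- one composite index covers both "by foreign key" and
        -- "by foreign key + DATETIME" lookups
        CREATE INDEX IX_ItemDetails_ItemId_RecordedAt
            ON ItemDetails (ItemId, RecordedAt);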


  • Error message when running "make" command: /usr/bin/ld: i386 architecture of input file is incompatible with i386:x86-64 output

    - by user784637
    I am unable to create a working executable by running the make command in a tree previously built on an i386 machine. I'm getting an error message of the form:

        me@me-desktop:~$ make
        /usr/bin/ld: i386 architecture of input file `../../Lib/libProgram.a(something.o)' is incompatible with i386:x86-64 output

    I've been told and reassured that this program has been tested and successfully compiled on 64-bit Fedora. I'm running a 64-bit machine (uname -m reports x86_64) with Ubuntu 10.04.3 LTS (lucid), g++ 4.4.3 (Ubuntu 4.4.3-4ubuntu5) and libtool 2.2.6b. Any clues as to what is going wrong?
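
    The linker is complaining because the tree still contains objects and static archives built for 32-bit i386 on the previous machine; ld cannot mix those into an x86-64 link, and make will not rebuild them just because the architecture changed. A sketch of the usual remedy, assuming the tree has standard clean targets:

        # inspect what an offending archive was actually built for
        objdump -f ../../Lib/libProgram.a | head

        # throw away every stale 32-bit object and archive, then rebuild
        make clean          # or 'make distclean' followed by ./configure
        make

    If the tree has no clean target, delete the old .o and .a files by hand before re-running make.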


  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls: HMAC_Update() / HMAC_Final() (RIPEMD-160) and EVP_CipherUpdate() / EVP_CipherFinal() (CBC Blowfish). These functions take an unsigned char * to the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming. My beliefs/reasoning are based on two properties: (1) std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture; (2) encryption algorithms are, either by design or by implementation, endian neutral. I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers faced with similar problems.
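
    That reasoning holds: HMAC and the EVP cipher interface are defined over a sequence of bytes, so the same byte sequence produces the same output regardless of host byte order; endianness only matters if you serialize multi-byte integers yourself. A minimal sketch using OpenSSL's one-shot HMAC (1.0-era API; function and variable names here are illustrative):

        #include <openssl/evp.h>
        #include <openssl/hmac.h>
        #include <string>

        // HMAC-RIPEMD160 over the raw bytes of a UTF-8 string; the host CPU's
        // byte order never enters the picture because input is consumed byte-wise.
        std::string hmac_ripemd160(const std::string& key, const std::string& data) {
            unsigned char md[EVP_MAX_MD_SIZE];
            unsigned int len = 0;
            HMAC(EVP_ripemd160(),
                 key.data(), static_cast<int>(key.size()),
                 reinterpret_cast<const unsigned char*>(data.data()), data.size(),
                 md, &len);
            return std::string(reinterpret_cast<char*>(md), len);
        }

    The same bytes in on a big-endian or little-endian machine yield the same MAC out, which is exactly what the QEMU tests should confirm.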


  • Is a Mission Oriented Architecture (MOA) a better way to describe things than SOA?

    - by Brian Langbecker
    I might sound like a troll, but I would like to seriously understand this more deeply. The place I work at has started to use the term MOA versus SOA, as we believe it drives more clarity, and we want to compare it to the true goals of SOA. A Mission Oriented Architecture is an approach whereby an application is broken down into various business mission elements, with the database, file assets, and batch and real-time functionality all tightly coupled in terms of delivering that piece of the functionality. The mission allows the developers to focus on a specific piece of functionality to get it right, and to build it so that piece can scale as an independent entity within the overall application. By tightly coupling the data, file assets and business logic, you achieve the goal of working on a very large problem in bite-size pieces. Some definitions of SOA confuse it with what is essentially a method call on a web service rather than a true "service". As an architect, I have always found it fun getting everyone on the same page regarding SOA. Is it better to call it a "mission" rather than a "service"?


  • Assuming "clean code/architecture" is there a difference in "effort" between PHP or Java/J2EE web application development?

    - by PhD
    A client asked us to estimate effort when selecting PHP as the implementation language for his next web-based application. We spent about a week exploring PHP - prototyping, testing, etc. We are quite new to this language - we may have hacked around in it in the past, but let's say PHP noobs yet application-development experts (for lack of a better, less flattering term :). It seems that if we write clean, maintainable code, follow separation of concerns, and use enterprise architecture patterns (DAOs, etc.), the effort of creating an object-oriented, PHP-based web application is about the same as for a Java-based one. Here's our equation for estimating the effort (development/delivery time): ConstructionEffort = f(analysis, design, coding, testing, review, deployment). We were specifically comparing effort estimates for creating an enterprise application with the following: PHP + CakePHP/CodeIgniter (should we have considered others?) versus Java + Spring + Restlet. It's an end-to-end application - client: JavaScript/jQuery + HTML/CSS; middle tier/business logic: (still evaluating PHP/Java); database: MySQL. The effort estimates of the 1st and 3rd tiers are constant and relatively independent of the middle tier's technology. At a high level, with an initial breakdown of the requested features into user stories, as well as a high-level SWAG on the sheer number of classes/SLOC required, PHP doesn't seem to differ much from what is required in Java. Is this correct? We are basing our initial estimates on the initial prototyping/coding we've done with PHP - we are currently disregarding fluency with the language as a factor, since that will be an initial hurdle and not a long-term impediment IMHO (we also have sufficient time to become quite fluent with PHP). I'm interested in the programmers' perspective on the effort of creating similar applications in either language, to justify choosing one over the other. Are we missing something here? It seems we are going against the popular belief that PHP is quicker to market (or, being very fluent in Java, we have our vision clouded). From what we've played around with, it doesn't seem to offer any savings in coding/programming effort.
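
    To make the comparison concrete, this is the kind of layered code we mean - a simplified sketch of a PHP DAO (hypothetical names, PDO assumed), which is structurally almost line-for-line what the equivalent Spring/JDBC DAO would be:

        <?php
        // UserDao.php - constructor-injected connection, one method per query;
        // the shape (and therefore the effort) mirrors the Java counterpart.
        class UserDao
        {
            private $pdo;

            public function __construct(PDO $pdo)
            {
                $this->pdo = $pdo;
            }

            public function findById($id)
            {
                $stmt = $this->pdo->prepare('SELECT id, name FROM users WHERE id = :id');
                $stmt->execute(array(':id' => $id));
                $row = $stmt->fetch(PDO::FETCH_ASSOC);
                return $row !== false ? $row : null;
            }
        }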


  • How to remove all associated files and configuration settings of an app installed through 'force architecture' command

    - by Mysterio
    A few weeks ago I installed a 32-bit .deb file through the 'force architecture' command (on my 64-bit notebook). The procedure was unsuccessful, and I used the apt-get purge command to uninstall the app. It seems there are some leftovers of the app which have now broken system update. Synaptic recommended a sudo apt-get install -f, which I ran in the terminal with this initial response:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          libntfs10
        Use 'apt-get autoremove' to remove them.
        The following packages will be REMOVED:
          crossplatformui
        0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]?

    I chose 'Y' and then got this response:

        (Reading database ... 187616 files and directories currently installed.)
        Removing crossplatformui ...
        ztemtvcdromd: no process found
        dpkg: error processing crossplatformui (--remove):
          subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
          crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It seems the app I installed, crossplatformui, is still on my system and has caused the update manager to stop running with a partial upgrade warning. What do I do now?
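
    The removal fails because the package's post-removal script exits non-zero (it tries to stop a ztemtvcdromd daemon that isn't running), which aborts dpkg. A common workaround - a sketch, assuming the standard dpkg maintainer-script layout - is to neutralize that script and purge again:

        # replace the failing post-removal script with a no-op
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/crossplatformui.postrm
        sudo chmod +x /var/lib/dpkg/info/crossplatformui.postrm

        # now the purge can complete, and held-back updates can proceed
        sudo dpkg --purge crossplatformui
        sudo apt-get install -f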


  • How to build an API on top of an existing Rails app with NodeJs and what architecture to use?

    - by javiayala
    The explanation: I was recently hired by a company that has an old RoR 2.3 application with more than 100k users, a strong SEO strategy with more than 170k indexed URLs, native Android and iOS applications, and other custom-made mobile and web applications that rely on a not-so-good API from the same RoR app. They recently merged with a company from another country as a strategy to grow the business and the profit. The other company has almost the same stats, a similar strategy and mobile apps. We have just decided that we need to merge the data from both companies and start a new app from scratch, since the RoR app is too old and heavily patched, and the other company's app was built with a custom PHP framework without any documentation. The only good news is that both databases are MySQL and have a similar structure. The challenge: I need to build a new version that can handle a lot of traffic, preserves the SEO strategies of both companies, serves 2 different domains, and has a strong API that can support legacy mobile apps from both companies and be ready for a new set of native apps. I want to use RoR 3.2 for the main web apps and NodeJS with a RESTful API. I know that I need to be very careful with the mobile apps and handle multiple versions of the API. I also think I need to create a service that can handle a lot of IO requests, since the app is heavily used to create orders for restaurants at certain times of the day. The questions: With all this in mind: What type of architecture do you recommend? What gems or Node packages do you think will work best? How do I build a new Rails app and keep using the same database structure (see the sketch below)? Should I use NodeJS to build the API, or just build a new service in Ruby? I know I'm asking a lot, but please help by answering any topic you can or by pointing me in the right direction. All your comments and feedback will be extremely appreciated! Thanks!
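
    On the third question, ActiveRecord can sit on the legacy schema without migrations; a hedged sketch (database and table names hypothetical) for Rails 3.2:

        # config/database.yml would gain a second entry, e.g.:
        #   legacy:
        #     adapter:  mysql2
        #     database: oldapp_production

        # app/models/legacy_order.rb - maps onto the existing table as-is
        class LegacyOrder < ActiveRecord::Base
          establish_connection :legacy     # second connection, legacy schema
          self.table_name  = 'orders'      # keep the old table name
          self.primary_key = 'order_id'    # only if it deviates from Rails defaults
        end

    This lets the new app read the merged data in place while you migrate piece by piece.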


  • Data Access Layer, Best Practices

    - by labratmatt
    I'm looking for input on the best way to refactor the data access layer (DAL) in my PHP-based web app. I follow an MVC pattern: PHP/HTML/CSS/etc. views on the front end, PHP controllers/services in the middle, and a PHP DAL sitting on top of a relational database in the model. Pretty standard stuff. Things are working fine, but my DAL is getting large (code smell?) and becoming a bit unwieldy. It contains almost all of the logic for interfacing with my database and is full of functions that look like this: function getUser($user_id) { global $pdo; $stmt = $pdo->prepare('SELECT id, name FROM users WHERE user_id = :user_id'); $stmt->execute(array(':user_id' => $user_id)); return $stmt->fetchAll(PDO::FETCH_ASSOC); } Notes: the logic in my controllers only interacts with the model through functions like the above in the DAL. I am not using a framework (I'm of the opinion that PHP is a templating language and there's no need to inject complexity via a framework). I generally use PHP as a procedural language and tend to shy away from its OOP approach (I enjoy OOP development but prefer to keep that complexity out of PHP). What approaches have you taken when your DAL has reached this point? Do I have a fundamental design problem? Do I simply need to chop my DAL into a number of smaller, logically divided files? Thanks.
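
    On the chopping-up option, one common layout - a sketch with hypothetical file names - keeps the procedural style but gives each entity its own file behind a thin loader, so existing call sites don't change:

        <?php
        // dal/dal.php - thin loader; callers keep requiring a single file
        require_once __DIR__ . '/users.php';
        require_once __DIR__ . '/orders.php';   // one file per entity/table

        <?php
        // dal/users.php - every function touching the users table lives here
        function getUser($user_id)
        {
            global $pdo;   // PDO connection created at bootstrap
            $stmt = $pdo->prepare('SELECT id, name FROM users WHERE user_id = :user_id');
            $stmt->execute(array(':user_id' => $user_id));
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }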


  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases - they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem with a particular class which contains a map of other objects: public class Book { // Book description in various languages, indexed by ISO language codes private Map<String,BookDescription> descriptions; } JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe) and persists them when the Book object is persisted. If I were to introduce a "data access layer" and write a class like BookDao, encapsulating all the JDO code within it, wouldn't JDO's transparent loading of the child objects be circumventing the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring that way would make manipulating the model needlessly complicated. So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?
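
    For comparison, here is roughly what the thin wrapper would look like - a hedged sketch using the javax.jdo API (class names hypothetical). Note that the ORM still loads and persists the descriptions map transparently; the DAO only bounds where persistence calls appear, it does not intercept the object graph:

        import javax.jdo.PersistenceManager;
        import javax.jdo.PersistenceManagerFactory;

        public class BookDao {
            private final PersistenceManagerFactory pmf;

            public BookDao(PersistenceManagerFactory pmf) {
                this.pmf = pmf;
            }

            public Book findById(Object id) {
                PersistenceManager pm = pmf.getPersistenceManager();
                try {
                    // NB: once the PM closes, lazy fields can no longer load;
                    // a real app would scope the PM to the unit of work, which
                    // is exactly the complication the question is about.
                    return pm.getObjectById(Book.class, id);
                } finally {
                    pm.close();
                }
            }

            public void save(Book book) {
                PersistenceManager pm = pmf.getPersistenceManager();
                try {
                    pm.makePersistent(book);   // descriptions ride along transparently
                } finally {
                    pm.close();
                }
            }
        }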


  • Should frontend and backend be handled by different controllers?

    - by DR
    In my previous learning projects I always used a single controller, but now I wonder if that is good practice or even always possible. In all RESTful Rails tutorials the controllers have a show, an edit and an index view. If an authorized user is logged on, the edit view becomes available and the index view shows additional data-manipulation controls, like a delete button or a link to the edit view. Now I have a Rails application which falls exactly into this pattern, but the index view is not reusable: the normal user sees a flashy index page with lots of pictures, a complex layout and no JavaScript requirement; the admin index has a completely different, minimalistic design, a jQuery table and lots of additional data. I'm not sure how to handle this case. I can think of the following: (1) single controller, single view: the view is split into two large blocks/partials using an if statement; (2) single controller, two views: index and index_admin; (3) two different controllers: BookController and BookAdminController. None of these solutions seems perfect, but for now I'm inclined to use the third option, as sketched below. What's the preferred way to do this?
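
    A sketch of the third option using a routing namespace, which is the conventional Rails way to split the two (the auth filter is hypothetical):

        # config/routes.rb
        resources :books, :only => [:index, :show]   # public, flashy index
        namespace :admin do
          resources :books                           # full CRUD, minimal index
        end

        # app/controllers/admin/books_controller.rb
        class Admin::BooksController < ApplicationController
          before_filter :require_admin               # hypothetical auth filter
          def index
            @books = Book.all                        # renders admin/books/index
          end
        end

    Both controllers stay thin and share the Book model; each index template can diverge freely.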


  • Java plugin framework choice

    - by Marcus
    We're trying to determine how to implement a simple plugin framework for a service we are implementing that allows different types of calculators to be "plugged in". After reading a number of posts about Java plugin frameworks, it seems the most common options are: OSGi, "rolling your own" plugin framework, the Java Plugin Framework (JPF), and the Java Simple Plugin Framework (JSPF). OSGi seems to be more than we need. "Rolling your own" is OK, but it would be nice to reuse a common library. So we're down to JPF and JSPF. JPF seems not to be in active development right now. JSPF seems very simple and really all we need; however, I haven't heard much about it - I've only seen one post on StackOverflow about it. Does anyone else have any experience with JSPF? Or any other comments on this design choice? Update: there isn't necessarily a correct answer to this, but we're going to go with Pavol's idea, as we need just a really, really simple solution. Thanks EoH for the nice guide.
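
    If "rolling your own" stays on the table: since Java 6 the JDK itself ships a minimal plugin mechanism, java.util.ServiceLoader, which may already cover the "plug in a calculator" case with no third-party library. A sketch (the Calculator interface is hypothetical):

        package com.example.calc;

        import java.util.ServiceLoader;

        public interface Calculator {
            String name();
            double calculate(double input);
        }

        // Each plugin jar lists its implementation class in a resource file
        // named META-INF/services/com.example.calc.Calculator, and the host
        // discovers all of them off the classpath:
        class PluginHost {
            public static void main(String[] args) {
                for (Calculator c : ServiceLoader.load(Calculator.class)) {
                    System.out.println(c.name() + " -> " + c.calculate(42.0));
                }
            }
        }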


  • Avoiding mass propagation of properties and events for exposure to ViewModels.

    - by firoso
    I have an MVVM application I am developing that is to the point where I'm ready to start putting together a user interface (my client code is largely functional). I'm now running into the issue of getting my application data to where it can be consumed by the view model and then bound to the view. Unfortunately, it seems that I've either made a few structural oversights, or I'm just going to have to face the reality that I need to propagate events and raise excessive numbers of change notifications so that view models know their properties have changed. Let me give an example of my issue: I have a class "Unit" contained in a class "Test", contained in a class "Session", contained in a class "TestManager", which is contained in "TestDataModel", which is utilized by "TestViewModel", which is databound to by my "TestView"... WHOA. Now consider that Unit (the bottom of the hierarchy) has a property called "Results" that is updated periodically. I want to expose that to my view model and databind it to my view, but the only way I can think to do this is to propagate events WAY up the chain that say "I've been updated!" and then request the new value... This seems like an awful way to do it. Alternatively, I could register a static event and raise it, and have the appropriate "unit view model" grab the event and request the update. This SEEMS better... but... static events? Is that a taboo idea? Also, an expression like TestDataModel.TestManager.Session.Test.Unit.Results[i] seems REALLY gross to have on a view model. I know this all reeks of a design issue, but I can't figure out what I did wrong. Should I be using more singleton/container-controlled-lifetime objects? Register object instances with static helper containers? Obviously these are hard questions to answer without being intimate with the existing structure, but if you've run into situations like this, what did you do to refactor? Should I just live with this, add mass events, and propagate them?
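
    One way out of both the event bubbling and the static-event worry is a small messenger/mediator - MVVM Light's Messenger is a ready-made one - so publisher and subscriber share only a message type, not a position in the hierarchy. A minimal hand-rolled sketch (the message type is hypothetical; not thread-safe):

        using System;
        using System.Collections.Generic;

        // Unit publishes; UnitViewModel subscribes; neither knows about
        // Test/Session/TestManager in between.
        public static class Messenger
        {
            private static readonly Dictionary<Type, List<Delegate>> Handlers =
                new Dictionary<Type, List<Delegate>>();

            public static void Subscribe<T>(Action<T> handler)
            {
                List<Delegate> list;
                if (!Handlers.TryGetValue(typeof(T), out list))
                    Handlers[typeof(T)] = list = new List<Delegate>();
                list.Add(handler);
            }

            public static void Publish<T>(T message)
            {
                List<Delegate> list;
                if (Handlers.TryGetValue(typeof(T), out list))
                    foreach (Action<T> h in list) h(message);
            }
        }

        public class ResultUpdated   // hypothetical message type
        {
            public string UnitId;
            public double Result;
        }

        // Unit:          Messenger.Publish(new ResultUpdated { UnitId = id, Result = r });
        // UnitViewModel: Messenger.Subscribe<ResultUpdated>(m => UpdateResult(m));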


  • First Time Architecting?

    - by cam
    I was recently given the task of rebuilding an existing RIA. The new RIA I've designed is based on Silverlight, with a WCF service connecting to MS SQL Server. This is my first time doing something like this, so I'm not sure how to design the entire thing. Basically, the client can look through graphs of "stocks" (choosing different time periods, settings, etc.). I've essentially written the whole application, but I'm not sure how to put it together. The graphs are supposed to be based directly on the database, and to create the data points on the graph some calculations need to be done (not very expensive ones). The problem I'm having is deciding where to put the calculations (client- or server-side? Or half and half?). What factors should I consider to help me decide where the calculations should be done? And how can I go about optimizing this (caching, etc.)? Obviously this is a very broad subject, so I'm not expecting an immediate answer, but any help, pointers in the right direction, or resources would be appreciated.


  • Data ownership and performance

    - by Ami
    We're designing a new application, and we ran into some architectural questions when thinking about data ownership. We broke the system down into components, for example Customer and Order. Each component/module is responsible for a specific business domain; e.g., Customer deals with CRUD of customers and business processes centered around customers (registering a new customer, blocking a customer account, etc.). Each module owns a set of database tables, and only that module may access them. If another module needs data owned by a different module, it retrieves it by requesting it from that module. So far so good. The question is how to deal with scenarios such as a report that needs to show all the customers, and for each customer all his orders. In such a case we need to get all the customers from the Customer module, iterate over them, and for each one get all the data from the Order module. Performance won't be good... Obviously it would be much better to have a stored procedure join the customers and orders tables, but that would also mean direct access to data owned by another module, creating the coupling and dependencies we wish to avoid. This is a simplified example; we're dealing with an enterprise application with a lot of business entities and relationships, and my goal is to keep it clean and as loosely coupled as possible. I foresee many changes to the data schema in the future, and possibly splitting the system into several completely separate systems. I wish to have a design that allows this to be done relatively easily. Thanks!
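
    One middle ground that preserves ownership is to give the owning module a batch query, so the report costs two module calls instead of one per customer (avoiding the N+1 pattern). A hypothetical sketch, in C# for illustration:

        using System.Collections.Generic;

        public class Order { public int CustomerId; /* ... */ }

        // Owned by the Order module: one call serves many customers, so the
        // report never touches the Order module's tables directly.
        public interface IOrderService
        {
            // orders grouped by customer id
            IDictionary<int, IList<Order>> GetOrdersForCustomers(IEnumerable<int> customerIds);
        }

        // Report flow (pseudocode in comments):
        //   var customers  = customerService.GetAllCustomers();
        //   var byCustomer = orderService.GetOrdersForCustomers(ids of customers);
        // Internally the Order module is free to implement this as a single
        // "... WHERE customer_id IN (...)" query against the tables it owns.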


  • Portal And Content - Content Integration - Best Practices

    - by Stefan Krantz
    Lately we have seen an increase in projects that have failed to achieve either user-friendly content integration or satisfactory performance. Our intention is to close any knowledge gap that our previous posts might have left you with, so this post repeats some recommendations and references back to older, useful posts. Moreover, it should help you understand, from the ground up, how to design, architect and implement business-enabled, responsive and performant portals with complex requirements for business-centric information publishing.

    Design the Information Model

    The key to successful portal deployments is information modeling. It is essential to understand the use case you are designing for, so here is a set of questions to ask yourself or your customer:

    - Who will own the content, IT or Business? Answer: Business
    - Who will publish the content, IT or Business? Answer: Business
    - Will there be multiple publishers? Answer: Yes
    - Are the publishers computer scientists? Answer: No
    - How often does the information change - daily, weekly, monthly? Answer: Daily, weekly

    If your answers match at least two of these, we strongly recommend designing your content along the following principles:

    - Divide your pages into logical sections, where each section is marked with its purpose
    - Assign capabilities to each section: does it contain text, images, formatting, and/or is it static or populated through other contextual information
    - Select the editor/design element type for each section:
      - WYSIWYG - rich text
      - Plain Text - non-formatted text
      - Image - image object
      - Static List - static list of formatted information
      - Dynamic Data List - information assembled from multiple data files through a CMIS query

    Based on the required elements in the design, you then create the data model in WebCenter Content - Site Studio by building a Region Definition structure that matches your design requirements. For more information on creating a Region Definition, see the Region Definition post (instruction 7 there has the details).

    Each Region Definition can then be used to instantiate data files; a data file holds the actual data for each element in the Region Definition. Another way to look at it: the Region Definition acts as an extension of the WebCenter Content metadata model for each data file item.

    Design content templates

    With a solid, dependable information model in place, we can proceed to template creation and page design. This phase focuses on placing the content sections from the Region Definition on the page via a Content Presenter template. Remember: by creating Content Presenter templates you leverage the latest and most integrated technology WebCenter has to offer. This phase is much easier, since you already have the information model and the design wireframes to base the logic on; still, a few considerations deserve attention:

    - Base the template on ADF, making only the necessary exceptions to markup when required
    - Leverage ADF design components for tabs, accordions and similar widgets, so the design of the content-published areas matches the design areas based on custom ADF task flows
    - There is no performance penalty for using metadata or Region Definition based data
    - All data access, regardless of type (metadata or XML data), goes through the Content Presenter node

    Applied examples of how to access data:

    - A metadata property of a document: #{node.propertyMap['myProp'].value} - myProp can be, for instance, dDocName, dDocTitle, xComments or any other available metadata field
    - Element data from a data file's XML: #{node.propertyMap['[Region Definition Name]:[Element name]'].asTextHtml} - Region Definition Name is the region definition the current data file instantiates; Element name is the element whose value you want to grab from the data file

    I recommend reading these posts on the content template topic: CMIS queries and template creation (see instruction 9 for details) and Static List template rendering. For more information on templates: Single Item Content Template, Multi Item Content Template, Expression Language.

    Internationalization Considerations

    When integrating content assets via the Content Presenter, the content item/data file is wired to the page. It is also common at this stage for a content item/data file to support only one language, since it is neither practical nor business friendly to mix languages in a complex structure. That leaves a very common dilemma - building a complete new portal for each locale - which is not a good option! With a little information modeling and a clear naming convention, this can be addressed: make sure all content items/data files are named with a predictable convention, such as "Content1_EN" for the English rendition and "Content1_ES" for the Spanish rendition. This way, through simple, non-complex customizations, the actual content item/data file can be switched dynamically just before rendering. Following this approach not only enables a simple mechanism for internationalized content, it also preserves the Content Presenter's support for business-accessible, run-time publishing of information on existing and new pages. I recommend this post on the topic: Internationalize with Content Presenter.

    Integrate with Review & Approval processes

    Today the review and approval functionality is configured through WebCenter Content - Criteria Workflows. Criteria Workflows use the metadata of the checked-in document to evaluate whether the document falls under a review/approval process. For instance, if a Criteria Workflow is configured to trigger on documents with Version = "2" or higher and Content Type = "Instructions", any matching content item version will, on check-in, enter the workflow before being released for general access. A few things to consider when configuring Criteria Workflows:

    - Do not trigger on version 1 for content items that are data files - if you do, you will not only approve an empty document, you will also leave a Content Presenter pointing at a non-existing document, since the document only becomes available after the workflow completes successfully
    - Approval workflows sometimes require more complex criteria; in that case the metadata driving the criteria should be populated automatically, which can be achieved through several approaches, including Content Profiles
    - Criteria Workflows are configured and managed in the WebCenter Content Administration Applets, where you can configure one or more workflows

    Once Criteria Workflows are configured, the Content Presenter supports editors with the approval process directly inline in the portal's "Contribution mode". In addition to approve/reject actions and task details, the Content Presenter natively lets the user view the current and future version of the change he/she is approving.

    Architectural recommendations

    - To support review & approval processes, minimize the number of data files per page
    - Each CMIS query can consume significant time depending on its complexity - minimize the number of CMIS queries per page
    - Use Content Presenter templates based on ADF - this minimizes design considerations and optimizes the use of caching
    - Implement each page with as few data files as possible - this simplifies the publishing and release processes and increases performance
    - A named data file (node), or a list of named nodes, integrated into a page performs better than querying for data
    - A named data file (node), or a list of named nodes, integrated into a page enables business-centric page creation and publishing and reduces the need for IT department involvement

    Summary

    Just because an architectural decision solves a business problem does not mean it is the right one: when designing portals, all parts of the architecture have to work in harmony rather than against each other. The most technically complex solution is not always the best, since it will most likely defeat business accessibility, performance or both. The best approach is to design first for simplicity, so that even a non-technical user can operate the result; then consider the performance impact; and finally look at the technology challenges these choices bring, working around them first with out-of-the-box features and only then designing and developing functions to cover the shortcomings.


  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown, both in terms of user base and functionality, to the point where it's becoming evident that some of the admin tasks should be separate from the public website. I was wondering what the best way to do this would be. For example, the site has a large social component and a public sales interface, but there are also back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer-relations tools in the admin section that I would like not to be affected by spikes in public traffic (or to affect public-facing response times). The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies I'm evaluating: (1) Create a slave database of the public-facing database on another machine; extract all the model and library code so it can be shared between the applications; create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure it's supposed to be used this way (most of the time I've seen it used for scaling out the read capabilities of the same application, rather than for multiple different ones). I'm also worried about potential latency issues if the slave is not on the same network. (2) Create new, more task/department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back, and it seemed to advocate this for distributed systems. (Alternatively, in some cases basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data-synchronization issues and the massive re-architecting this would entail. (3) Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status; would it make sense to have that on a completely separate system that sends the data to the public one, while the user/group admin functionality runs on a separate system sharing the database? The downside is that this seems to keep many of the concerns I have with the first two, especially the re-architecting. I'm sure the answers will be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.


  • A format for storing personal contacts in a database

    - by Gart
    I'm thinking about the best way to store personal contacts in a database for a business application. The traditional and straightforward approach would be to create a table with columns for each element: name, telephone number, job title, address, etc. However, there are known industry standards for this kind of data, like for example vCard, hCard, vCard-RDF/XML, or even the Windows Contacts XML schema. Utilizing a standard format would offer some benefits, like interoperability with other systems. But how can I decide which method to use? The requirement is mainly to store the data; search and ordering queries are unlikely but possible. The volume of data is 100,000 records at maximum. My database engine supports native XML columns, so I have been thinking of using some XML-based format to store the personal contacts. It would then be possible to use XML indexes on this data if searching and ordering is needed. Is this a good approach? Which contacts format and schema would you recommend? Edited after first answers: Here is why I think the straightforward approach is bad. It is due to the nature of this kind of data - it is not that simple. Personal contacts are not well-structured data; they might be called semi-structured. Each contact may have different data fields, maybe even fields I cannot anticipate. In my opinion, each piece of this data should be treated as important information; i.e., no piece of data should be discarded just because there was no relevant column in the database. If we take it further, assuming that no data may be lost, we could create a big text column named Comment or Description or Other and put there everything that cannot be fitted into table columns. But then the data loses structure - which might be bad. If we want structured data, then according to database design principles the data should be decomposed into entities, with relations established between the entities. But this adds complexity: there are just too many entities, and lots of design decisions have to be made, like "How do we store addresses? Personal names? Phone numbers? How do we encode home phone numbers versus mobile numbers? What about other contact info?" The relations between entities are complex and numerous, and each relation is a table in the database; each relation needs to be documented in the design papers. That is a lot of work. But it is possible to avoid the complexity entirely: just document that the data is stored according to such-and-such standard schema, period. Then anybody reading that document should easily understand what it is all about. Finally, this is all about using an industry standard. The standard is, hopefully, designed by clever people who anticipated and described the structure of personal contact information much better than I ever could. Why should we all reinvent the wheel? It's much easier to use a standard schema. The problem is, there are just too many standards - it's not easy to decide which one to use!
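
    For concreteness, this is what one record looks like in the plain-text vCard 3.0 flavor (illustrative values); the RDF/XML and Windows Contacts variants carry the same fields in XML form, which is what would go into a native XML column:

        BEGIN:VCARD
        VERSION:3.0
        N:Doe;John;;;
        FN:John Doe
        ORG:Example Corp
        TITLE:Purchasing Manager
        TEL;TYPE=WORK,VOICE:+1-555-0100
        EMAIL;TYPE=INTERNET:john.doe@example.com
        ADR;TYPE=WORK:;;123 Main St;Springfield;IL;62701;USA
        END:VCARD

    Note how optional, repeatable properties (extra TEL or EMAIL lines, X- extensions) give exactly the semi-structured flexibility described above without inventing a schema.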


  • BlackBerry - GUI design prototyping mockup (Visio stencils)

    - by Max Gontar
    Hi! I just want to share some information (and ask for some challenge). My company has published a set of Visio 2003 stencils for BlackBerry GUI prototyping: RIM BlackBerry 8300 (BlackBerry OS 4.5), RIM BlackBerry 9000 (BlackBerry OS 4.6), RIM BlackBerry 9500 (BlackBerry OS 4.7), and RIM BlackBerry 9700 (BlackBerry OS 5.0). It's free; however, to download it you will need to fill in an email form. Question: are there other sets like this, so we could learn and make ours better (or just throw it away)? If this is too "self-promotional", feel free to close :) Thank you!


  • WPF touch-screen ListBox control design

    - by pnb
    I need to allow the user to select an item in a ListBox on mouse down, and when the user drags up or down I need to scroll the items in the ListBox. Since the user interacts with the ListBox through a touch screen, I want something like the iPod behaviour: the user should be able to touch/grab an item and drag it up or down to scroll the ListBox.
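
    A rough sketch of the grab-and-drag part in code-behind. Assumptions: CanContentScroll="False" is set on the ListBox so the scroll offset is in pixels, the internal ScrollViewer has been located once via VisualTreeHelper into a scrollViewer field, and a real version would add a small movement threshold to distinguish taps (select) from drags (scroll):

        // Fields in the Window's code-behind (System.Windows.Input types):
        private Point dragStart;
        private double startOffset;
        private bool dragging;

        private void ListBox_PreviewMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
        {
            dragStart = e.GetPosition(listBox);          // where the finger landed
            startOffset = scrollViewer.VerticalOffset;   // offset at grab time
            dragging = true;
        }

        private void ListBox_PreviewMouseMove(object sender, MouseEventArgs e)
        {
            if (!dragging) return;
            double delta = dragStart.Y - e.GetPosition(listBox).Y;
            // pixels, because CanContentScroll is False
            scrollViewer.ScrollToVerticalOffset(startOffset + delta);
        }

        private void ListBox_PreviewMouseLeftButtonUp(object sender, MouseButtonEventArgs e)
        {
            dragging = false;
        }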


  • Using Unit of Work design pattern / NHibernate Sessions in an MVVM WPF

    - by Echiban
    I think I am stuck in analysis paralysis. Please help! I currently have a project that: uses NHibernate on SQLite; implements the Repository and Unit of Work pattern (http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/04/10/nhibernate-and-the-unit-of-work-pattern.aspx); and follows an MVVM strategy in a WPF app. My Unit of Work implementation supports one NHibernate session at a time. I thought at the time that this made sense; it hides the inner workings of the NHibernate session from the ViewModel. Now, according to Oren Eini (Ayende) in http://msdn.microsoft.com/en-us/magazine/ee819139.aspx, NHibernate sessions should be created/disposed when the view associated with the presenter/viewmodel is disposed. He convinces the audience with reasons why you want neither one session per windowed app nor a session created/disposed per transaction. This unfortunately poses a problem, because my UI can easily have 10+ views/viewmodels present at once. He presents this using an MVP strategy, but does his advice translate to MVVM? Does this mean I should scrap the Unit of Work and have view models create NHibernate sessions directly, as sketched below? Should a WPF app only have one working session at a time? If so, when should I create/dispose an NHibernate session? And I still haven't considered how NHibernate stateless sessions fit into all this! My brain is going to explode. Please help!
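
    Translated to MVVM, "session per presenter" becomes "session per view model": each view model opens its own ISession from the application-wide session factory and disposes it when its view goes away. A hedged sketch (class names hypothetical; the ISessionFactory remains the single long-lived object):

        using System;
        using NHibernate;

        // One ISession per view model: created when the view appears,
        // disposed when it closes - Ayende's MVP advice, mapped one-to-one.
        public class DocumentViewModel : IDisposable
        {
            private readonly ISession session;

            public DocumentViewModel(ISessionFactory sessionFactory)
            {
                session = sessionFactory.OpenSession();
            }

            public void Save(object entity)
            {
                using (var tx = session.BeginTransaction())
                {
                    session.SaveOrUpdate(entity);
                    tx.Commit();
                }
            }

            public void Dispose()
            {
                session.Dispose();   // invoked when the view/view model is closed
            }
        }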


  • Self hosted WCF ServiceHost/WebServiceHost Concurrency/Performance Design Options (.NET 3.5)

    - by Kyle
    So I'll be providing a few functions via a self-hosted (in a Windows service) WebServiceHost (I'm not sure how to process HTTP GET/POST with a plain ServiceHost), one of which may be called a large amount of the time. This function will also rely on a connection in the appdomain (hosted by the Windows service so it can stay alive over multiple requests). I have the following concerns and would be oh so thankful for any input/thoughts/comments: (1) Concurrent access - how does the WebServiceHost handle a bunch of concurrent requests? Are they queued and processed sequentially, or are new instances of the contracts automagically created (see the sketch below)? (2) WebServiceHost-to-WindowsService communication - I need some form of communication from the WebServiceHost to the hosting Windows service for things like requesting a new session if one does not exist. Perhaps a class which extends WebServiceHost with events the Windows service subscribes to... (unless there is another way to set off an event in the Windows service when a request is made...). (3) Multiple WebServiceHosts or contracts - would there be any real performance gain in running multiple WebServiceHost instances in different threads (one per endpoint, perhaps)? A better understanding of the first point would probably help here. (4) WSDL - I'm not sure why (probably just need to do more reading), but I'm not sure how to get the WebServiceHost base endpoint to respond with a WSDL document describing the available contract. Not required, as all the operations will be done via GET requests which will not likely change, but it would be nice to have... That's about it for the moment ;) I've been reading a lot on WCF and wish I'd gotten into it long ago, but I'm definitely still learning.
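
    On the first concern: instancing and concurrency are controlled by attributes on the service class, not by the host. A sketch (service name and URI are illustrative):

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IQuoteService
        {
            [OperationContract]
            [WebGet(UriTemplate = "quote/{symbol}")]
            string GetQuote(string symbol);
        }

        // PerCall: every HTTP request gets its own service instance on its own
        // thread-pool thread, so requests are not queued behind one another.
        // ConcurrencyMode only becomes significant if you switch to a shared
        // instance (InstanceContextMode.Single).
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class QuoteService : IQuoteService
        {
            public string GetQuote(string symbol)
            {
                return symbol + ": 42.0";   // placeholder body
            }
        }

        // Inside the Windows service's OnStart (sketch):
        //   var host = new WebServiceHost(typeof(QuoteService),
        //                                 new Uri("http://localhost:8000/"));
        //   host.Open();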


  • MVVM Light Toolkit throws a System.IO.FileLoadException

    - by joebeazelman
    I'm running VS 2010 along with Expression Blend 4 beta. I created an MVVM Light project from the supplied templates, and I get a System.IO.FileLoadException when I try to view MainWindow.xaml in the VS 2010 designer window. The template already references System.Windows.Interactivity. Here are the details of the exception:

        System.IO.FileLoadException: Could not load file or assembly 'System.Windows.Interactivity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)
          at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
          at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
          at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks)
          at System.Reflection.Assembly.Load(AssemblyName assemblyRef)
          at MS.Internal.Package.VSIsolationProviderService.RemoteReferenceProxy.VsReflectionResolver.GetRuntimeAssembly(Assembly reflectionAssembly)
          at Microsoft.Windows.Design.Metadata.ReflectionMetadataContext.CachingReflectionResolver.GetRuntimeAssembly(Assembly reflectionAssembly)
          at Microsoft.Windows.Design.Metadata.ReflectionMetadataContext.Microsoft.Windows.Design.Metadata.IReflectionResolver.GetRuntimeAssembly(Assembly reflectionAssembly)
          at MS.Internal.Metadata.ClrAssembly.GetRuntimeMetadata(Object reflectionMetadata)
          at Microsoft.Windows.Design.Metadata.AttributeTableContainer.d_c.MoveNext()
          at Microsoft.Windows.Design.Metadata.AttributeTableContainer.GetAttributes(Assembly assembly, Type attributeType, Func`2 reflectionMapper)
          at MS.Internal.Metadata.ClrAssembly.GetAttributes(ITypeMetadata attributeType)
          at MS.Internal.Design.Metadata.Xaml.XamlAssembly.get_XmlNamespaceCompatibilityMappings()
          at Microsoft.Windows.Design.Metadata.Xaml.XamlExtensionImplementations.GetXmlNamespaceCompatibilityMappings(IAssemblyMetadata sourceAssembly)
          at Microsoft.Windows.Design.Metadata.Xaml.XamlExtensions.GetXmlNamespaceCompatibilityMappings(IAssemblyMetadata source)
          at MS.Internal.Design.Metadata.ReflectionProjectNode.BuildSubsumption()
          at MS.Internal.Design.Metadata.ReflectionProjectNode.SubsumingNamespace(Identifier identifier)
          at MS.Internal.Design.Markup.XmlElement.BuildScope(PrefixScope parentScope, IParseContext context)
          at MS.Internal.Design.Markup.XmlElement.ConvertToXaml(XamlElement parent, PrefixScope parentScope, IParseContext context, IMarkupSourceProvider provider)
          at MS.Internal.Design.DocumentModel.DocumentTrees.Markup.XamlSourceDocument.FullParse(Boolean convertToXamlWithErrors)
          at MS.Internal.Design.DocumentModel.DocumentTrees.Markup.XamlSourceDocument.get_RootItem()
          at Microsoft.Windows.Design.DocumentModel.Trees.ModifiableDocumentTree.get_ModifiableRootItem()
          at Microsoft.Windows.Design.DocumentModel.MarkupDocumentManagerBase.get_LoadState()
          at MS.Internal.Host.PersistenceSubsystem.Load()
          at MS.Internal.Host.Designer.Load()
          at MS.Internal.Designer.VSDesigner.Load()
          at MS.Internal.Designer.VSIsolatedDesigner.VSIsolatedView.Load()
          at MS.Internal.Designer.VSIsolatedDesigner.VSIsolatedDesignerFactory.Load(IsolatedView view)
          at MS.Internal.Host.Isolation.IsolatedDesigner.BootstrapProxy.LoadDesigner(IsolatedDesignerFactory factory, IsolatedView view)
          at MS.Internal.Host.Isolation.IsolatedDesigner.BootstrapProxy.LoadDesigner(IsolatedDesignerFactory factory, IsolatedView view)
          at MS.Internal.Host.Isolation.IsolatedDesigner.Load()
          at MS.Internal.Designer.DesignerPane.LoadDesignerView()

        System.NotSupportedException: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information.
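
    The inner NotSupportedException points at the actual cause: a referenced assembly (typically a System.Windows.Interactivity.dll that arrived in a downloaded zip) carries the "downloaded from the Internet" zone marker, and the .NET 4 designer refuses to load it. Two common fixes, hedged: unblock the DLL (right-click it in Explorer, Properties, Unblock, then rebuild), or enable the switch the message itself names for the designer process by merging this into devenv.exe.config:

        <configuration>
          <runtime>
            <!-- Opt in to loading assemblies that carry the Internet-zone
                 marker (see the fwlink URL in the exception message). -->
            <loadFromRemoteSources enabled="true"/>
          </runtime>
        </configuration>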


  • Design: an array of "enemy" objects for game AI

    - by Meko
    Hi. I made a shoot-em-up-like game, but I have only one enemy which follows me on screen. I want lots of enemies, so that, say, every 10 seconds another group of 5 or 10 of them crosses the screen together. I currently do:

        ArrayList<Enemies> enemies = new ArrayList<Enemies>();
        for (Enemies e : enemies) {
            e.draw(g);
        }

    Is creating an ArrayList and drawing it each frame a good approach? And do I have to plan the enemies' movements in my code? I want them to appear at different positions - e.g. the first 5 enemies come in from the top of the screen, then the next 5 or 10 come in from the left side, and so on. What is the best solution for this?
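
    Yes - a list that is updated and drawn once per frame is the standard approach. Spawning waves is then just a timer in the update loop that adds several enemies at planned positions. A sketch (the Enemies(x, y) constructor and e.update() are hypothetical - adapt to however your enemy class stores position and movement):

        // Fields of the game class; 'now' is System.currentTimeMillis().
        long lastSpawn = 0;
        boolean fromTop = true;

        void update(long now) {                          // called once per frame
            if (now - lastSpawn >= 10000) {              // new wave every 10 s
                for (int i = 0; i < 5; i++) {
                    int x = fromTop ? 60 + i * 70 : -40; // spread along the top edge,
                    int y = fromTop ? -40 : 60 + i * 70; // or down the left edge
                    enemies.add(new Enemies(x, y));      // hypothetical constructor
                }
                fromTop = !fromTop;                      // alternate entry side per wave
                lastSpawn = now;
            }
            for (Enemies e : enemies) {
                e.update();                              // hypothetical per-frame movement
            }
        }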

