Search Results

Search found 5597 results on 224 pages for 'restful architecture'.

Page 27 of 224

  • Need some suggestions on my software's architecture. [Code review]

    - by Sergio Tapia
    I'm making an open source C# library for other developers to use. My key concern is ease of use: intuitive names, intuitive method usage and so on. This is the first time I've done something with other people in mind, so I'm really concerned about the quality of the architecture. Plus, I wouldn't mind learning a thing or two. :) I have three classes: Downloader, Parser and Movie. I was thinking it would be best to expose only the Movie class of my library and keep Downloader and Parser hidden from invocation. Ultimately, I see my library being used like this:

```csharp
using FreeIMDB;

public void Test()
{
    var MyMovie = Movie.FindMovie("The Matrix");
    // Now MyMovie would have all its fields set and ready for the big show.
}
```

    Can you review how I'm planning this and point out any wrong judgement calls I've made and where I could improve? Remember, my main concern is ease of use.

    Movie.cs

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;

namespace FreeIMDB
{
    public class Movie
    {
        public Image Poster { get; set; }
        public string Title { get; set; }
        public DateTime ReleaseDate { get; set; }
        public string Rating { get; set; }
        public string Director { get; set; }
        public List<string> Writers { get; set; }
        public List<string> Genres { get; set; }
        public string Tagline { get; set; }
        public string Plot { get; set; }
        public List<string> Cast { get; set; }
        public string Runtime { get; set; }
        public string Country { get; set; }
        public string Language { get; set; }

        public Movie FindMovie(string Title)
        {
            Movie film = new Movie();
            Parser parser = Parser.FromMovieTitle(Title);
            film.Poster = parser.Poster();
            film.Title = parser.Title();
            film.ReleaseDate = parser.ReleaseDate();
            // And so on and so forth.
        }

        public Movie FindKnownMovie(string ID)
        {
            Movie film = new Movie();
            Parser parser = Parser.FromMovieID(ID);
            film.Poster = parser.Poster();
            film.Title = parser.Title();
            film.ReleaseDate = parser.ReleaseDate();
            // And so on and so forth.
        }
    }
}
```

    Parser.cs

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using HtmlAgilityPack;

namespace FreeIMDB
{
    /// <summary>
    /// Provides a simple and intuitive way to search for movies and actors on IMDB.
    /// </summary>
    class Parser
    {
        private Downloader downloader = new Downloader();
        private HtmlDocument Page;

        #region "Page Loader Events"

        private Parser()
        {
        }

        public static Parser FromMovieTitle(string MovieTitle)
        {
            var newParser = new Parser();
            newParser.Page = newParser.downloader.FindMovie(MovieTitle);
            return newParser;
        }

        public static Parser FromActorName(string ActorName)
        {
            var newParser = new Parser();
            newParser.Page = newParser.downloader.FindActor(ActorName);
            return newParser;
        }

        public static Parser FromMovieID(string MovieID)
        {
            var newParser = new Parser();
            newParser.Page = newParser.downloader.FindKnownMovie(MovieID);
            return newParser;
        }

        public static Parser FromActorID(string ActorID)
        {
            var newParser = new Parser();
            newParser.Page = newParser.downloader.FindKnownActor(ActorID);
            return newParser;
        }

        #endregion

        #region "Page Parsing Methods"

        public string Poster()
        {
            // Logic to scrape the poster URL from the Page field of this instance.
            return null;
        }

        public string Title()
        {
            return null;
        }

        public DateTime ReleaseDate()
        {
            return null;
        }

        #endregion
    }
}
```

    Do you guys think I'm heading down a good path, or am I setting myself up for a world of hurt later on? My original thought was to separate the downloading, the parsing and the actual populating so that the library stays easily extensible. Imagine if one day the website changed its HTML; I would then only have to modify the parsing class without touching the Downloader.cs or Movie.cs class. Thanks for reading and for helping!
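
    One way to make the intended call style above actually compile is to turn the two lookup methods into static factories that return the populated instance. The sketch below is only a suggestion against the classes in the question, not part of the original post:

```csharp
// Hypothetical revision of the two methods in Movie.cs: static, and returning
// the populated Movie so that Movie.FindMovie("The Matrix") works as shown above.
public static Movie FindMovie(string title)
{
    Parser parser = Parser.FromMovieTitle(title);
    var film = new Movie
    {
        Title = parser.Title(),
        ReleaseDate = parser.ReleaseDate()
        // ...and so on for the remaining properties.
    };
    return film;
}

public static Movie FindKnownMovie(string id)
{
    Parser parser = Parser.FromMovieID(id);
    var film = new Movie
    {
        Title = parser.Title(),
        ReleaseDate = parser.ReleaseDate()
        // ...and so on for the remaining properties.
    };
    return film;
}
```

    Keeping Parser and Downloader internal to the assembly while exposing only these static entry points would preserve the stated goal of a single, easy-to-use public class.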

    Read the article

  • Python MySQL wrong architecture error

    - by phoebebright
    I've been at this for some time and read many sites on the subject. I suspect I have junk lying about causing this problem, but where? This is the error when I import MySQLdb in Python:

```
>>> import MySQLdb
/Library/Python/2.6/site-packages/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg/_mysql.py:3: UserWarning: Module _mysql was already imported from /Library/Python/2.6/site-packages/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg/_mysql.pyc, but /Users/phoebebr/Downloads/MySQL-python-1.2.3c1 is being added to sys.path
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "MySQLdb/__init__.py", line 19, in <module>
    import _mysql
  File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 7, in <module>
  File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 6, in __bootstrap__
ImportError: dlopen(/Users/phoebebr/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so, 2): no suitable image found. Did find:
  /Users/phoebebr/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so: mach-o, but wrong architecture
```

    I'm trying for 64-bit, so I checked here:

```
file $(which python)
/usr/bin/python: Mach-O universal binary with 3 architectures
/usr/bin/python (for architecture x86_64):  Mach-O 64-bit executable x86_64
/usr/bin/python (for architecture i386):    Mach-O executable i386
/usr/bin/python (for architecture ppc7400): Mach-O executable ppc

file $(which mysql)
/usr/local/mysql/bin/mysql: Mach-O 64-bit executable x86_64
```

    I have set my default version of Python to 2.6:

```
python
Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
```

    I have tried deleting the build directory and running python setup.py clean, and I renamed Python/2.5/site-packages so it could not pick that up. I've run out of ideas. Any suggestions?

    Read the article

  • Service Oriented Architecture & Domain-Driven Design

    - by Michael
    I've always developed code in an SOA type of way. This year I've been trying to do more DDD, but I keep getting the feeling that I'm not getting it. At work our systems are load balanced and designed not to have state. The architecture is: Website == Physical Layer == Main Service == Physical Layer == Server 1 / Service 2 / Service 3 / Service 4. Only Server 1, Service 2, Service 3 and Service 4 can talk to the database, and the Main Service calls the correct service based on the products ordered. Every physical layer is load balanced too. Now when I develop a new service, I try to think DDD in that service even though it doesn't really feel like it fits. I use good DDD principles like entities, value types, repositories, aggregates, factories and so on. I've even tried using ORMs, but they just don't seem to fit in a stateless architecture. I know there are ways around it, for example using IStatelessSession instead of ISession with NHibernate; even so, ORMs just feel like they don't fit. I've noticed I really only use some of the concepts and patterns DDD has taught me, but the overall architecture is still SOA. I am starting to think DDD doesn't fit in large systems, though I do think some of its patterns and concepts do. Like I said, maybe I'm just not grasping DDD, or maybe I'm over-analyzing my designs? Maybe by using the patterns and concepts DDD has taught me I am using DDD? Not sure if there is really a question in this post; it's more the thoughts I've had while trying to figure out where DDD fits in overall systems and how scalable it truly is. The truth is, I don't think I really even know what DDD is.
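
    To make the DDD vocabulary above concrete, here is a small, hypothetical C# sketch of a value object plus an entity and repository interface of the kind the question mentions; the names and members are invented for illustration, not taken from the poster's system:

```csharp
using System;

// Value object: defined entirely by its values, immutable, no identity of its own.
public sealed class Money : IEquatable<Money>
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public bool Equals(Money other) =>
        other != null && Amount == other.Amount && Currency == other.Currency;

    public override bool Equals(object obj) => Equals(obj as Money);
    public override int GetHashCode() =>
        Amount.GetHashCode() ^ (Currency ?? string.Empty).GetHashCode();
}

// Entity (aggregate root) with identity, and a repository that hides how it is
// loaded or stored -- which is what lets the service layer itself stay stateless.
public class Order
{
    public Guid Id { get; set; }
    public Money Total { get; set; }
}

public interface IOrderRepository
{
    Order FindById(Guid id);
    void Save(Order order);
}
```

    Patterns like these can be used inside an individual service without requiring the overall system to stop being SOA, which is roughly the situation the question describes.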

    Read the article

  • how to tune libstdc++ to the native architecture when building gcc

    - by John D
    I recently found that when I build my C++ software, I get about a 10% speedup by using the g++ -march=native option. When compiling gcc and libstdc++, is it possible to tune the libstdc++ library to the native architecture as well? I couldn't find any mention of this in the gcc installation/configuration documentation. (I'm building gcc 4.6.2 on Linux Mint 11 with an Intel Core i7 Sandy Bridge-E processor.)

    Read the article

  • VMware or Xen support for AIX on pSeries architecture

    - by A.Rashad
    I tried to find explicit confirmation on the VMware website of whether there is any chance we could virtualize AIX running on pSeries architecture (P5, P6 and P7), but in vain. So far we have only one product available, which is PowerVM (an IBM product), but we are trying to find alternative solutions so we can evaluate the pros and cons before taking any action. Xen does mention support for PowerPC, but for Linux, not AIX. I hope someone can give some insight on this matter.

    Read the article

  • Duplication of Architecture State means physically extra?

    - by Doopy Doo
    Hyper-Threading Technology makes a single physical processor appear as two logical processors; the physical execution resources are shared and the architecture state is duplicated for the two logical processors. So, does this mean that there are physically two sets of basic registers (the next-instruction pointer, processor registers like AX, BX, CX, etc.) embedded in the microprocessor chip, or is the architecture state merely made to look like two sets by some low-level duplication done by software or the OS?

    Read the article

  • Frank Buytendijk on Prahalad, Business Best Practices

    - by Bob Rhubart
      In his video on the questionable value of some business best practices, Frank Buytendijk mentions a recent HBR article by business guru C.K. Prahalad. I just learned that Prahalad passed away this past weekend at the age of 68. (Information Week obit) A couple of years ago I had the good fortune to attend Mr. Prahalad's keynote address at a Gartner event. He had an audience of software architects absolutely mesmerized as he discussed technology's role in the changing nature of business competition. The often dysfunctional relationship between IT and business has been, and will probably always be, a hot-button issue. But during Prahalad's keynote there was a palpable sense that the largely technical audience was having some kind of breakthrough, that they had achieved a new level of understanding about the importance of the relationship between the two camps. Fortunately, Prahalad leaves behind a significant body of work that will remain a valuable resource as business, and the technology that supports it, continues to evolve. Technorati Tags: business best practices, enterprise architecture, prahalad, oracle del.icio.us Tags: business best practices, enterprise architecture, prahalad, oracle

    Read the article

  • Why does TDD work?

    - by CesarGon
    Test-driven development (TDD) is big these days. I often see it recommended as a solution for a wide range of problems here on Programmers SE and in other venues. I wonder why it works. From an engineering point of view, it puzzles me for two reasons:

    1. The "write test + refactor till pass" approach looks incredibly anti-engineering. If civil engineers used that approach for bridge construction, or car designers for their cars, they would be reshaping their bridges or cars at very high cost, and the result would be a patched-up mess with no well-thought-out architecture. The "refactor till pass" guideline is often taken as a mandate to forget architectural design and do whatever is necessary to comply with the test; in other words, the test, rather than the user, sets the requirement. In this situation, how can we guarantee good "ilities" in the outcome, i.e. a final result that is not only correct but also extensible, robust, easy to use, reliable, safe, secure, etc.? This is what architecture usually does.

    2. Testing cannot guarantee that a system works; it can only show that it doesn't. In other words, testing may show you that a system contains defects if it fails a test, but a system that passes all tests is not necessarily safer than a system that fails them. Test coverage, test quality and other factors are crucial here. The false sense of safety that an "all green" outcome produces in many people has been reported in the civil and aerospace industries as extremely dangerous, because it may be interpreted as "the system is fine" when it really means "the system is as good as our testing strategy". Often, the testing strategy itself is not checked. Or, who tests the tests?

    I would like to see answers containing reasons why TDD in software engineering is a good practice, and why the issues that I have explained above are not relevant (or not relevant enough) in the case of software. Thank you.
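
    For readers unfamiliar with the cycle the question refers to, here is a minimal, hypothetical illustration of "write a failing test first, then make it pass" in NUnit-style C#; the class, numbers and test name are invented purely for the example:

```csharp
using NUnit.Framework;

// Step 1: write a failing test that pins down the requirement.
[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_AppliesTenPercentDiscount_WhenOrderExceedsThreshold()
    {
        var calculator = new PriceCalculator(discountThreshold: 100m, discountRate: 0.10m);
        Assert.AreEqual(108m, calculator.Total(120m));
    }
}

// Step 2: write the simplest code that makes the test pass, then refactor
// while keeping the test green.
public class PriceCalculator
{
    private readonly decimal threshold;
    private readonly decimal rate;

    public PriceCalculator(decimal discountThreshold, decimal discountRate)
    {
        threshold = discountThreshold;
        rate = discountRate;
    }

    public decimal Total(decimal orderAmount)
    {
        return orderAmount > threshold
            ? orderAmount * (1 - rate)
            : orderAmount;
    }
}
```

    The test here plays the role the question objects to: it, rather than a separate design document, states the requirement the code must meet.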

    Read the article

  • Architectural and Design Challenges with SOA

    With all of the hype about service-oriented architecture (SOA), primarily through the use of web services, not much has been said about the potential issues of using SOA in the design of an application. I am personally a fan of SOA, but it is not the solution for every application. Proper evaluation should be done on all requirements and use cases prior to deciding to go down the SOA road. It is important to consider how your application/service will handle the following perils as it executes.

    Example challenges of SOA:

    - Network connectivity issues
    - Handling connectivity issues
    - Longer processing/transaction times

    How many of us have had issues visiting our favorite web sites from time to time? The same issue will occur when using a service-based architecture, especially if it is implemented using web services. Forcing applications to access services via a network connection introduces a lot of new failure points to the application. Potential failure points include: DNS issues, network hardware issues, remote server issues, and the lack of physical network connections. When network connectivity issues do occur, how the service clients are implemented is very important. Should the client wait and poll the service until it is accessible again? If so, what is the maximum wait time or number of attempts it should retry?

    The fact that services are distributed across a network also reduces the responsiveness of client applications, because processing time must now include the time to send messages to and receive messages from the called services. This could add anywhere from milliseconds to minutes to each request, depending on network load and server usage at the service provider. If speed is a highly desirable quality attribute, then I would consider creating components that are hosted where the client application is located.

    References: Rader, Dave. (2002). Overcoming Web Services Challenges with Smart Design: http://soa.sys-con.com/node/39458
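
    As an illustration of the retry questions raised above (maximum wait time, number of attempts), here is a hedged C# sketch of a bounded client-side retry policy; the service call, limits and names are invented for the example and are not from the article:

```csharp
using System;
using System.Threading;

public static class RetryHelper
{
    // Calls an operation up to maxAttempts times, waiting delay between attempts.
    // Illustrative only: a real policy would also need backoff, timeouts and logging.
    public static T Invoke<T>(Func<T> operation, int maxAttempts, TimeSpan delay)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                Thread.Sleep(delay); // wait before polling the service again
            }
        }
    }
}

// Example usage with a hypothetical service proxy:
// var order = RetryHelper.Invoke(() => orderService.GetOrder(42),
//                                maxAttempts: 3, delay: TimeSpan.FromSeconds(2));
```

    The key design decision the article points at is exactly the two parameters shown here: how long to wait and how many times to try before giving up.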

    Read the article

  • How do you properly organize a commercial game?

    - by Reactorcore
    For the past months I've been studying programming and I've finally learned how to code, but one thing that is confusing me is how to properly organize the design of a game project, code-wise. The game I'm building is a pretty standard commercial game. It has the basic components of a normal game: a world, characters and items interacting with each other, and all of this is run by a game manager. Basically you play as a hero in a world and do stuff: fight, explore and interact. Think of your standard adventure game that starts off with an intro, goes to the menu system, then gets into the game and back to the menu. Pretty much like 99% of commercial games or otherwise serious game projects. That's what I'm aiming at. The problem is: How do you properly code a commercial game architecture? How do you organize it? How do you make it not become unmaintainable spaghetti code? What specific things should you keep in mind when building this, code-wise? How you can help me: a) Please tell me how you code your own game projects. What is your thought process when designing the architecture? b) Recommend books, blogs, tutorials, videos or anything else on how to organize a commercial video game. c) Give hints and tips on dos/don'ts when building a game, code-wise. Please help!
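
    One hedged illustration of the "intro → menu → game → menu" flow described above is a small state machine owned by the game manager; everything here (interface, state names, the engine calling Update once per frame) is an assumption made for the sketch, not a prescription:

```csharp
using System.Collections.Generic;

// Each screen (intro, menu, gameplay) implements one interface so the
// manager never needs to know their internals.
public interface IGameState
{
    void Update(float deltaTime);
    void Draw();
}

public class GameManager
{
    private readonly Dictionary<string, IGameState> states = new Dictionary<string, IGameState>();
    private IGameState current;

    public void Register(string name, IGameState state) => states[name] = state;

    public void SwitchTo(string name) => current = states[name];

    // Called once per frame by the engine's main loop.
    public void Update(float deltaTime)
    {
        current?.Update(deltaTime);
        current?.Draw();
    }
}

// Usage idea: register IntroState, MenuState and PlayState, then SwitchTo("menu")
// when the intro finishes -- each state stays small and testable on its own.
```

    Keeping each screen behind a common interface is one simple way to stop the game manager from growing into the spaghetti the question worries about.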

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a service-oriented architectural design approach into their existing enterprise systems, it creates the need for a standardized integration technology. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence, an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of the ESB, a new emphasis is now being placed on approaches that can be used to determine which web services should be built, and in what order. SOA Magazine published an article that identified ten common methods for identifying and defining services.

    SOA's ten common methods for service identification and definition:

    1. Business Process Decomposition
    2. Business Functions
    3. Business Entity Objects
    4. Ownership and Responsibility
    5. Goal-Driven
    6. Component-Based
    7. Existing Supply (Bottom-Up)
    8. Front-Office Application Usage Analysis
    9. Infrastructure
    10. Non-Functional Requirements

    Each of these methods has various pros and cons with regard to its use within the design process. I personally feel that during a design process, multiple methodologies should be used in order to accurately define a design for a system or enterprise system. Personally, I like to create a custom cocktail derived from combining these methodologies in order to ensure that my design fits the project's and business's needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and I routinely use them in my designs.

    Works Cited:
    Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, 12 10). Ten Ways to Identify Services. Retrieved from SOA Magazine: http://www.soamag.com/I13/1207-1.php
    Progress.com. (2011, 10 30). ESB Architecture and Lifecycle Definition. Retrieved from Progress.com: http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article

  • Hadoop in a RESTful Java Web Application - Conflicting URI templates

    - by user1231583
    I have a small Java web application in which I am using Jersey 1.12 and the Hadoop 1.0.0 JAR file (hadoop-core-1.0.0.jar). When I deploy my application to my JBoss 5.0 server, the log file records the following error:

```
SEVERE: Conflicting URI templates. The URI template / for root resource class
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods
and the URI template / transform to the same regular expression (/.*)?
```

    To make sure my code is not the problem, I have created a fresh web application that contains nothing but the Jersey and Hadoop JAR files along with a small stub. My web.xml is as follows:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <servlet>
        <servlet-name>ServletAdaptor</servlet-name>
        <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>ServletAdaptor</servlet-name>
        <url-pattern>/mytest/*</url-pattern>
    </servlet-mapping>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
</web-app>
```

    My simple RESTful stub is as follows:

```java
import javax.ws.rs.core.Context;
import javax.ws.rs.core.UriInfo;
import javax.ws.rs.Path;

@Path("/mytest")
public class MyRest {

    @Context
    private UriInfo context;

    public MyRest() {
    }
}
```

    In my regular application, when I remove the Hadoop JAR files (and the code that is using Hadoop), everything works as I would expect: the deployment is successful and the remaining RESTful services work. I have also tried the Hadoop 1.0.1 JAR files and had the same problem with the conflicting URI template in the NamenodeWebHdfsMethods class. Any suggestions or tips for solving this problem would be greatly appreciated.

    Read the article

  • Client for restful web service

    - by Ish
    I'd like to create an HTTP-centric client for a RESTful web service created using CXF. To that end: Does anyone know the (Maven) dependencies for ONLY the CXF clients (Proxy & HTTP)? Is there any advantage to using CXF's built-in clients over, say, Apache HttpClient?

    Read the article

  • RESTful service description

    - by Anax
    From what I understand, I need to use WADL to describe a RESTful web service. Still, I have read many answers in relevant posts where users are strongly opposed to the use of WADL. What are the disadvantages of WADL? Is there any alternative solution?

    Read the article

  • RESTful API: How to model 'request new password'?

    - by Jan P.
    I am designing a RESTful API for a booking application and was quite happy to see I could map all details of the application to the 4 HTTP methods:

    /users - GET, POST
    /users/({id}|myself) - GET, POST, PUT, DELETE
    /users/({id}|myself)/bookings - GET, POST
    /users/({id}|myself)/bookings/{id} - GET, POST, PUT, DELETE

    Example: updating my own user uses a PUT to /users/myself. But now I found out that one thing is missing: the possibility to request a new password if I forgot my old one. Any idea how I could add this?
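
    One common way to model this, offered here only as a suggestion in the same notation as above (the paths are invented, not part of the question), is to treat the reset request itself as a resource:

    /password-resets - POST (submit the account's e-mail; the server issues and mails a reset token)
    /password-resets/{token} - GET, PUT (verify the token, then set the new password)

    This keeps the operation within the existing four verbs instead of inventing a new action on the user resource.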

    Read the article

  • nodejs daemon wrong architecture

    - by Greg Pagendam-Turner
    I'm trying to run 'dali', a Highcharts exporter for Node.js, on my Mac under OS X Mountain Lion, and I'm getting the following error:

```
module.js:485
process.dlopen(filename, module.exports);
^
Error: dlopen(/Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node, 1): no suitable image found. Did find:
  /Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node: mach-o, but wrong architecture
    at Object.Module._extensions..node (module.js:485:11)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:362:17)
    at require (module.js:378:17)
    at Object.<anonymous> (/Users/greg/node_modules/daemon/lib/daemon.js:12:11)
    at Module._compile (module.js:449:26)
    at Object.Module._extensions..js (module.js:467:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
```

    The key part is: "wrong architecture". If I run:

```
lipo -info /Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node
```

    it returns:

```
Non-fat file: /Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node is architecture: i386
```

    I'm guessing an x64 version is required. How do I get and install the 64-bit version of this lib?

    Read the article

  • gcc architecture question

    - by Andy
    Hi, I'm compiling my program with the architecture set to -mtune=i386. However, I'm also linking statically against several libs (libpng, zlib, jpeglib, vorbisfile, libogg). I've built these libs on my own using configure and make, so I guess they were built with the architecture set to my system's architecture, which would be i686. But I don't want that! I want my program to run on i386 too, so I need to make sure that all the libs I'm statically linking against are built for i386 as well. So my question: Is there a convenient way to build libpng/zlib/jpeglib/vorbisfile/libogg etc. for i386, or do I have to modify all of their makefiles manually and make sure that -mtune is set to i386? Thanks for the help! Andy

    Read the article

  • Flex Framework vs. Micro-Architecture

    - by droboZ
    I'm in the process of choosing a framework for my flex development, and one of the questions that was asked about a framework was "is this a framework or a micro-architecture"? Can someone clarify what's the difference? What exactly is a framework, and when can we start calling what we have a framework? I work with FlexBuilder3 (now called FlashBuilder4) and have a lot of standard things that I do for almost all projects, and components that I created for easy re-use. Some are very very small, but the benefit of a 1-liner has been immense for me instead of repeating the code over and over. So in the framework/micro-architecture scheme, can I say that these are my internal in-house framework or are they part of a micro-architecture? Trying to understand this topic better.

    Read the article

  • What should one keep in mind when switching from traditional to RESTful routing in Rails?

    - by Brian Holder-Chow
    What should one keep in mind when switching from traditional to RESTful routing in Rails? From a typical Rails routes.rb file:

```ruby
# This is a legacy wild controller route that's not recommended for RESTful applications.
# Note: This route will make all actions in every controller accessible via GET requests.
match ':controller(/:action(/:id))(.:format)'
```

    As switching away from this means that I will have to create routes for each controller individually, does anyone have any advice on the best way to migrate this safely?

    Read the article

  • Pirates, Treasure Chests and Architectural Mapping

    Pirate 1: Why do pirates create treasure maps?
    Pirate 2: I do not know.
    Pirate 1: So they can find their gold.

    Yes, that was a bad joke, but it does illustrate a point. Pirates are known for drawing treasure maps to their most prized possessions. These documents detail the decisions pirates made in order to hide and later find their chests of gold. The map allows them to retrace the steps they took originally to hide their treasure so that they may return to it. As software engineers, programmers, and architects we need to treat software implementations much like our treasure chest.

    Why is software like a treasure chest?

    - It cost money, time, and resources to develop (usually).
    - It can make or save money, time, and resources (hopefully).

    If we operate under the assumption that software is like a treasure chest, then wouldn't it make sense to document the steps, rationale, concerns, and decisions about how it was designed? Pirates are notorious for documenting where they hide their treasure. Shouldn't we, as creators of software, do the same? Documenting our design decisions and the rationale behind them helps others understand and maintain implemented systems. This can only be done if the design decisions are correctly mapped to their corresponding implementation. That allows architectural decisions to be traced from the conceptual model, through the architectural design, and finally to the implementation.

    Mapping gives software professionals a method to trace the reasons why specific areas of code were developed versus other options. Just like the pirates, we need to be able to trace our steps from the start of a project to its implementation, so that we understand why specific choices were made. The traceability of a software implementation that actually maps back to its originating design decisions is invaluable for ensuring that architectural drift and erosion do not take place. Drift and erosion are prevented by allowing others to understand the rationale for why an implementation was created in a specific manner or methodology.

    The process of mapping distinct design concerns/decisions to the locations of their implementation is called traceability. In this context traceability is defined as a method for connecting distinctive software artifacts. This process allows architectural design models and decisions to be directly connected with their physical implementation. The process of mapping architectural design concerns to a software implementation can be very complex; however, most design decisions can be placed in a few generalized categories.

    Commonly mapped design decisions:

    - Design rationale
    - Components and connectors
    - Interfaces
    - Behaviors/properties

    Design rationale is one of the hardest categories to map directly to an implementation. Typically this rationale is mapped or documented in code via comments. These comments consist of general design decisions and reasoning, because they do not directly refer to a specific part of an application; they tend to focus on the higher-level concerns. Components and connectors can be mapped directly to architectural concerns: typically, concerns subdivide an application into distinct functional areas, and these functional areas can then map directly back to their originating concerns. Interfaces can be mapped back to design concerns in one of two ways: interfaces that pertain to specific function definitions can be directly mapped back to their originating concern(s), while more complicated interfaces require additional analysis to ensure that the proper mappings are created. Depending on the complexity, some behaviors/properties can be translated directly into a generic implementation structure that is ready for business logic, and some behaviors can even be translated directly into an actual implementation, depending on the complexity and the architectural tools used.

    Mapping design concerns to an implementation is a lot of work to maintain, but it is doable. In order to ensure that concerns are mapped correctly and that an implementation correctly reflects its design concerns, one of two standard approaches is usually used.

    All changes come from the architecture: By forcing all application changes to come through the architectural model prior to implementation, the existing mappings can be used to locate where in the implementation changes need to occur.

    Allow changes from the implementation or the architecture: By allowing changes to come from the implementation and/or the architecture, the other side must be kept in sync. This methodology is more complex than the previous approach. One reason to justify the added complexity is that this approach tends to detect and prevent architectural drift and erosion. Additionally, this approach is usually supported by tooling because of the complexity.

    Reference: Taylor, R. N., Medvidovic, N., & Dashofy, E. M. (2009). Software Architecture: Foundations, Theory, and Practice. Hoboken, NJ: John Wiley & Sons.

    Read the article
