Search Results

Search found 21023 results on 841 pages for 'computer architecture'.

Page 15/841 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Mac computer is not seeing the Ubuntu (computer) Samba share in Shared

    - by Mirage
    I have Ubuntu with Samba installed. Initially my Windows machines were not able to see Ubuntu in my network list. After searching a lot I found that I had to add the line "ldap ssl = No" to smb.conf, and it worked; I don't know why. Now my Mac is also not able to see Ubuntu, but if I click on Connect to Server and use smb://servername then the connection is established. Is there anything I can do so that the Mac can see Ubuntu under Shared and I don't need to use the Connect to Server route?
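
    For reference, a minimal sketch of the smb.conf fragment described above (the comment placement is illustrative):

        # /etc/samba/smb.conf (fragment)
        [global]
            # the line that made the share visible to Windows
            ldap ssl = No

    One common way to make a Samba host show up under Shared in the Finder sidebar is to advertise it over Bonjour with Avahi. This part is an assumption, not something from the original post; it presumes the avahi-daemon package is installed, with the file saved as /etc/avahi/services/smb.service:

        <?xml version="1.0" standalone='no'?>
        <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
        <service-group>
          <name replace-wildcards="yes">%h</name>
          <service>
            <type>_smb._tcp</type>
            <port>445</port>
          </service>
        </service-group>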

    Read the article

  • Computer Comparison - which is "better"

    - by David Murdoch
    A company I work with recently replaced their old server and gave it to me. Their old server is a Dell PowerEdge 2600. I've been playing with the machine and even installed Windows Server 2008 on it... and it seems to run pretty well. Here are the specs for the two machines: Dev machine: AMD Athlon64 3000+ 2.38 GHz (overclocked from 1.8 GHz [@ 280x8.5]; it is stable-ish); Memory (RAM): 1x1GB OCZ PC3200 (dual-channel); 300GB HD; OS: Windows XP Pro (32-bit); SuperPi 1M digit test: 40 seconds. Dell PowerEdge 2600 server: Intel Xeon CPU 2.8GHz; Memory (RAM): 512MBx2 (PC2700, not dual-channel); 68GB HD (RAID 5); OS: Windows Server 2000 (32-bit); SuperPi 1M digit test: 56 seconds [using 1 processor] (themes and Aero Glass UI turned off, of course). I use my computer to regularly run Photoshop CS5, Illustrator CS5, Flash CS5, 5 browsers (Chrome, FF, IE, Safari, Opera), iTunes, Visual Studio 2010, and Kaspersky Internet Security 2010 [sometimes simultaneously :-) ]. The SuperPi test has my dev machine coming in about 30% faster than the server machine... though this could be due to the server running "Vista" with background processes prioritized. Do you think it would be realistic/advantageous for me to move from my dev machine to the Dell PowerEdge 2600? Is it possible to install additional DVD drives/burners on the server? Can I install my internal 300 GB hard drive in the server? Can I add some USB 2.0 ports? Note: I'll probably install Win XP Pro on the dev machine if I do switch. If not, are there any creative and useful ways for me to take advantage of this server (with the goal of faster computing)?

    Read the article

  • computer build for extreme tabbed browsing

    - by David Berger
    I'm interested in building or buying a task-specific computer for my brother. His requirements are ridiculously simple: the machine has to be able to wait in hundreds of web-based virtual waiting rooms at once and not crash. To be competitive, he needs to be able to enter the waiting rooms and auto-refresh them faster. My question is, what priority do I give the different specs? My initial surmise is this: Connection speed (nothing to do with my build, but I kind of think this will be more beneficial than anything I build for him); Memory size -- I don't usually see Firefox taking up more than a gig, even when heavily tabbed, but I think one gig for the operating system and two gigs for the browser are necessary; Processor speed -- I think the processor will affect performance, but even something out of date will do what he needs; Memory speed/RAM bus -- I doubt this will matter much, but it seems just on this side of irrelevant. Everything else is a non-issue for him. Does this seem to stack up correctly? Also, since he's looking to stay on the cheaper side, and I might end up recommending a refurb to him, is there anything particularly egregious that Vista would do if it came pre-installed? If I build it myself, I'll give him Linux, but if I have it shipped to him, I'm not sure I could walk him through the install process for Linux; I probably could walk him through the process to upgrade to Windows 7, if it were somehow worth it.

    Read the article

  • SQLAuthority News – 2 Whitepapers Announced – AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution

    - by pinaldave
    Understanding AlwaysOn architecture is extremely important when building a solution with failover clusters and availability groups. Microsoft has just released two very important white papers related to this subject. Both white papers are written by top experts in the industry and have been reviewed by an excellent panel of experts. Every time I talk with organizations that are adopting SQL Server 2012, they are always excited about the new AlwaysOn feature. One of the requests I often hear is for detailed documentation that can help enterprises build a robust high availability and disaster recovery solution. I believe the following two white papers now satisfy that request. AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups: SQL Server 2012 AlwaysOn Availability Groups provides a unified high availability and disaster recovery (HADR) solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like quorum configuration considerations, the steps required to build the environment, and a workflow that shows how to handle disaster recovery. AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using Failover Cluster Instances and Availability Groups: SQL Server 2012 AlwaysOn Failover Cluster Instances (FCI) and AlwaysOn Availability Groups provide a comprehensive high availability and disaster recovery solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like asymmetric storage considerations, quorum model selection, quorum votes, the steps required to build the environment, and a workflow. Even if you are not going to implement the AlwaysOn feature, these two white papers are still great reference material to review, as they will give you a complete idea of what it takes to implement an AlwaysOn architecture and what kind of effort is needed. One should at least bookmark the above two white papers for future reference. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: AlwaysOn

    Read the article

  • Podcast Show Notes: By Any Other Name: Governance and Architecture

    - by Bob Rhubart
    The OTN ArchBeat Podcast returns from a brief summer hiatus with a three-part conversation about IT architecture and governance. My guest for this conversation is Eric Stephens, an Oracle enterprise architect and a frequent guest on this program. Joining Eric on the panel is Tim Hall, Senior Director of product management for the Oracle Enterprise Repository, Oracle Service Registry, and Oracle Application Integration Architecture. Tim made his first appearance on ArchBeat as a panelist on the recent program featuring Thomas Erl. The Conversation: Listen to Part 1: Why it's important to revive the dormant conversation about IT governance. Listen to Part 2 (Sept 19): Balancing functional, technical, and operational requirements to meet the challenge of defining appropriate governance "guardrails." Listen to Part 3 (Sept 26): Bringing IT architecture out of the ivory tower to make governance a less intimidating, more collaborative process. Additional Resources: Leveraging Governance to Sustain Enterprise Architecture Efforts, an Oracle white paper by Eric Stephens. SOA, Cloud, and Service Technologies, a transcript of an ArchBeat interview with Thomas Erl, Tim Hall, and Demed L'Her, in which Tim says the following about governance: "For a long time people have argued that SOA governance is sort of an awkward name; no one wanted to be audited. There's 50% of the world that thinks, yes, we're going to have a top-down initiative to address this, and there's 50% of the world that says it feels like a heavyweight process that I want no part of. So what I think we should do is change the name…"

    Read the article

  • A Great Work: ADF Architecture TV

    - by mustafakaya
    I would like to share information about Oracle ADF Product Management's great work: ADF Architecture TV. This channel covers various subjects, such as what you will need before starting a new ADF (or any software) project, how to select team members' skills, and how to design and implement ADF projects. When developing with a new technology, one of the challenges for technical staff is to learn both the features of the technology and how to implement them, and also to consider the broader concepts of design, engineering and architecture. Many an IT project has come undone because IT staff were focused on the nitty-gritty details of writing software rather than looking at the "bigger picture" of how it will all go together. Oracle's "ADF Architecture TV" plans to address this issue by focusing on architectural issues and developer guidelines for writing ADF software solutions. The goal: to give ADF developers an understanding of the decisions you need to make to build a successful ADF application, potential architectural blueprints to choose from when putting the ADF application together, and potential best practices to take back to your development team. You can click here for ADF Architecture TV.

    Read the article

  • How to prepare for the GRE Computer Science Subject Test?

    - by Maddy.Shik
    How do I prepare for the GRE Computer Science subject test? Are there any standard text books I should follow? I want to score as competitively as possible. What are some good references? Is there anything that top schools like CMU, MIT, and Stanford would expect? For example, Cormen et al. is considered very good for algorithms. Please tell me the standard text books for each subject covered by the test, like Computer Architecture, Database Design, Operating Systems, Discrete Maths etc.

    Read the article

  • Why does a computer science degree matter to a professional programmer?

    - by P.Brian.Mackey
    I have a degree in computer science. It has been great for opening doors and getting a job. As far as helping me in the professional field of C# .NET programming (the most popular platform and language in the area I work, if not the entire United States, on hands-down the most popular OS in the world), it's hardly useful. Why do you think it helps you as a programmer in your professional career (outside spouting off Prim's algorithm to impress some interviewer)? In today's world, adaptation, a quick mind, strong communication, and OO and fundamental design skills enable a developer to write software that a customer will accept. These skills are only skimmed over in the CS program. In my mind, reading a 500-page C# book by Wrox offers a far more usable skillset than 4 years of the comp-sci math-blaster courses. Many disagree. So, why does a computer science degree matter?

    Read the article

  • How do I apply a computer science degree to web development?

    - by T. Webster
    I'm a web programmer, but I haven't found many opportunities to take advantage of a formal education in computer science. Maybe I'm not looking in the right places, but it seems to me like most of the web jobs I come across are CRUD, web forms, and data grids. For these jobs a formal CS background doesn't seem necessary, and you could do fine with O'Reilly cookbooks in jQuery, CSS 3, PHP, SQL, or ASP.NET MVC. What kinds of web developer jobs exist that really let you apply your computer science background? Do I need to branch out into other areas of programming to take full advantage of my degree?

    Read the article

  • Obj-C component-based game architecture and message forwarding

    - by WanderWeird
    Hello, I've been trying to implement a simple component-based game object architecture using Objective-C, much along the lines of the article 'Evolve Your Hierarchy' by Mick West. To this end, I've successfully used some ideas as outlined in the article 'Objective-C Message Forwarding' by Mike Ash, that is to say using the -(id)forwardingTargetForSelector: method. The basic setup is that I have a container GameObject class, which contains three instances of component classes as instance variables: GCPositioning, GCRigidBody, and GCRendering. The -(id)forwardingTargetForSelector: method returns whichever component will respond to the relevant selector, determined using the -(BOOL)respondsToSelector: method. All this, in a way, works like a charm: I can call a method on the GameObject instance whose implementation is found in one of the components, and it works. Of course, the problem is that the compiler gives 'may not respond to ...' warnings for each call. Now, my question is, how do I avoid this? And specifically given that the point is that each instance of GameObject will have a different set of components? Maybe a way to register methods with the container objects, on an object-per-object basis? Such as, can I create some kind of -(void)registerMethodWithGameObject: method, and how would I do that? Now, it may or may not be obvious that I'm fairly new to Cocoa and Objective-C, and just horsing around, basically, and this whole thing may be very alien here. Of course, though I would very much like to know of a solution to my specific issue, anyone who would care to explain a more elegant way of doing this would additionally be very welcome. Much appreciated, -Bastiaan

    Read the article

  • P6 Architecture - Register renaming aside, do the limited user registers result in more ops spent

    - by mrjoltcola
    I'm studying JIT design with regard to dynamic language VM implementation. I haven't done much assembly since the 8086/8088 days, just a little here or there, so be nice if I'm out of sorts. As I understand it, the x86 (IA-32) architecture still has the same basic limited register set today that it always did, while the internal register count has grown tremendously; but these internal registers are not generally available, and are used with register renaming to achieve parallel pipelining of code that otherwise could not be parallelized. I understand this optimization pretty well, but my feeling is, while these optimizations help in overall throughput and for parallel algorithms, the limited register set we are still stuck with results in more register-spilling overhead, such that if x86 had double or quadruple the registers available to us, there might be significantly fewer push/pop opcodes in a typical instruction stream? Or are there other processor optimizations that also optimize this away that I am unaware of? Basically, if I've got a unit of code that has 4 registers to work with for integer work, but my unit has a dozen variables, I've got potentially a push/pop for every 2 or so instructions. Any references to studies, or better yet, personal experiences?

    Read the article

  • General N-Tier Architecture Question

    - by whatispunk
    In an N-Tier app you're supposed to have a business logic layer and a data access layer. Is it bad to simply have two assemblies, BusinessLogicLayer.dll and DataAccessLayer.dll, to handle all this logic? How do you actually represent these layers? It seems silly, the way I've seen it, to have a BusinessLogic class library containing classes like CustomerBusinessLogic.cs, OrderBusinessLogic.cs, etc., each calling their appropriately named cousin in the DataAccessLayer class library, i.e. CustomerDataAccess.cs, OrderDataAccess.cs. I want to create a web app using MVP and it doesn't seem so cut and dried as this. There are lots of opinions about where the business logic is supposed to be put in MVP and I'm not sure I've found a really great answer yet. I want this project to be easily testable, and I am trying to adhere to TDD methodologies as best I can. I intend to use MSTest and Rhino Mocks for testing. I was thinking of something like the following for my architecture: I'd use LINQ-to-SQL to talk to the database, WCF services to define data contract interfaces for the business logic layer, and then MVP with ASP.NET Forms for the UI/BLL. Now, this isn't the start of this project; most of the LINQ stuff is already done, so it's staying. The WCF service would replace the existing DataAccessLayer assembly and the UI/BLL would replace the BusinessLogicLayer assembly, etc. This sort of makes sense in my head, but it's getting really late. Anyone that's traveled down this path have any guidance? Good links? Warnings? Thanks!
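
    As a sketch of one common way to keep such a layering testable with MSTest and Rhino Mocks (the names here are hypothetical, not from the question): the business logic class depends on an interface rather than on the DataAccessLayer assembly directly, so the data access can be stubbed in tests.

        // BLL depends on an abstraction; the DAL provides the implementation.
        public interface IOrderRepository
        {
            Order GetById(int id);
        }

        public class OrderService
        {
            private readonly IOrderRepository repository;

            public OrderService(IOrderRepository repository)
            {
                this.repository = repository;
            }

            public bool CanShip(int orderId)
            {
                // The business rule lives here, away from data access and UI.
                Order order = repository.GetById(orderId);
                return order != null && order.IsPaid;
            }
        }

        public class Order
        {
            public int Id { get; set; }
            public bool IsPaid { get; set; }
        }

    In a test, Rhino Mocks can stub IOrderRepository, so no database is needed to exercise the rule; whether the implementation lives in one assembly or two becomes a packaging detail.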

    Read the article

  • Change custom mapping - Sharp Architecture / Fluent NHibernate

    - by csetzkorn
    I am using the Sharp Architecture framework, which also deploys Fluent NHibernate. The DB schema SQL code is generated during testing like this:

        [TestFixture]
        [Category("DB Tests")]
        public class MappingIntegrationTests
        {
            [SetUp]
            public virtual void SetUp()
            {
                string[] mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies();
                configuration = NHibernateSession.Init(
                    new SimpleSessionStorage(),
                    mappingAssemblies,
                    new AutoPersistenceModelGenerator().Generate(),
                    "../../../../app/XXX.Web/NHibernate.config");
            }

            [TearDown]
            public virtual void TearDown()
            {
                NHibernateSession.CloseAllSessions();
                NHibernateSession.Reset();
            }

            [Test]
            public void CanConfirmDatabaseMatchesMappings()
            {
                var allClassMetadata = NHibernateSession.GetDefaultSessionFactory().GetAllClassMetadata();
                foreach (var entry in allClassMetadata)
                {
                    NHibernateSession.Current.CreateCriteria(entry.Value.GetMappedClass(EntityMode.Poco))
                        .SetMaxResults(0).List();
                }
            }

            /// <summary>
            /// Generates and outputs the database schema SQL to the console
            /// </summary>
            [Test]
            public void CanGenerateDatabaseSchema()
            {
                System.IO.TextWriter writeFile = new StreamWriter(@"d:/XXXSqlCreate.sql");
                var session = NHibernateSession.GetDefaultSessionFactory().OpenSession();
                new SchemaExport(configuration).Execute(true, false, false, session.Connection, writeFile);
            }

            private Configuration configuration;
        }

    I am trying to use:

        using FluentNHibernate.Automapping;
        using xxx.Core;
        using SharpArch.Data.NHibernate.FluentNHibernate;
        using FluentNHibernate.Automapping.Alterations;

        namespace xxx.Data.NHibernateMaps
        {
            public class x : IAutoMappingOverride<x>
            {
                public void Override(AutoMapping<x> mapping)
                {
                    mapping.Map(x => x.text, "text").CustomSqlType("varchar(max)");
                    mapping.Map(x => x.url, "url").CustomSqlType("varchar(max)");
                }
            }
        }

    to change the standard mapping of strings from NVARCHAR(255) to varchar(max). This is not picked up during the SQL schema generation. I also tried:

        mapping.Map(x => x.text, "text").Length(100000);

    Any ideas? Thanks. Christian
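
    If the goal is to change the default SQL type for every string property, rather than overriding each one, a possible alternative is a Fluent NHibernate property convention. This is a sketch; whether it fits this particular S#arp Architecture setup is an assumption:

        using FluentNHibernate.Conventions;
        using FluentNHibernate.Conventions.Instances;

        // Maps all string properties to varchar(max) instead of the default nvarchar(255).
        public class LongStringConvention : IPropertyConvention
        {
            public void Apply(IPropertyInstance instance)
            {
                if (instance.Property.PropertyType == typeof(string))
                    instance.CustomSqlType("varchar(max)");
            }
        }

    The convention would then be registered on the auto-persistence model, e.g. via Conventions.Add<LongStringConvention>() wherever the model is built.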

    Read the article

  • Objective-C architecture question

    - by thekevinscott
    Hey folks, I'm currently teaching myself Objective-C. I've gone through the tutorials, but I find I learn best when slogging through a project of my own, so I've embarked on making a backgammon app. Now that I'm partway in, I'm realizing there are some overall architecture things I just don't understand. I've established a "player" class, a "piece" class, and a "board" class. A piece theoretically belongs to both a player and the board. For instance, a player has a color, and every turn makes a move, so the player owns his pieces. At the same time, when moving a piece, it has to check whether it's a valid move, whether there are pieces on the board, etc. From my reading it seems like it's frowned upon to reach across classes. For instance, when a player makes a move, where should the function live that moves the piece? Should it exist on the board? This would be my instinct, as the board should decide whether a move is valid or not; but the piece needs to initiate that query, as it's the one being moved, no? Any info to help a noob would be super appreciated. Thanks guys!

    Read the article

  • Best architecture for a social media app

    - by Sky
    Hey guys, I'm working on a promising project that develops a new social media app for web and mobile. We are at the beginning, defining functionality. Nevertheless, I'm thinking ahead on architecture. So I'm asking: 1 - What's the best platform to develop the core of this application, which will have a REST API interface? 2 - What's the best database that will scale and grow with my application? As far as I researched, these were the answers I found most interesting: For the database: Cassandra NoSQL DB, with amazing scalability, amazing write performance, and good read performance (to be improved in 0.6). I think I will choose that one. ZooKeeper for transactions on Cassandra. I think those 2 technologies are really good for that purpose. What do you think, guys? On the front end that will serve the REST API, I don't have a final candidate. For this one I have questions based on Performance x Scalability x Fast Development/Maintenance. Java or .NET: as far as I researched, they bring the best balance of these requisites. Python, Perl and Rails have the best (Fast Development/Maintenance), but fall short on all the others. C or C++ I don't even consider, because their (Fast Development/Maintenance) is awful... So what do you guys think about it?

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with, and in a couple of months is expected to grow to 50M users (not concurrent users, though). I would like to have an architecture that can be scaled out easily without much effort. In order to explain, I would like to use a trivial scenario. Let's say user entities and services such as CreateUser, AuthenticateUser etc. are simple method calls for the page controllers. But once the traffic increases, for example, authenticating users (or similar services related to user entities) has to be moved out to a different internal server to spread the load. But at the same time, using RPC calls over the network when the user count is 40K would be overkill. My proposal was to use IPC initially, and when we need to scale out we can internally switch to TCP-based RPC calls so that it can easily scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on. If we have a proper design that can encapsulate the above approach, it would be easy for us to scale out services onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: the database scaling is definitely the second-phase optimization, so we have already made architectural design decisions to easily partition data when traffic increases. The primary bottleneck will be the application servers over this time period.
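
    A minimal sketch of the kind of seam described above (hypothetical names; the wire protocol is invented for illustration): page controllers depend only on an interface, an in-process implementation serves the small-scale case, and a named-pipe host exposes the same service so a TCP host can replace it later.

        using System;
        using System.IO;
        using System.IO.Pipes;

        public interface IAuthService
        {
            bool AuthenticateUser(string userName, string password);
        }

        // In-process implementation used while everything runs on one box.
        public class LocalAuthService : IAuthService
        {
            public bool AuthenticateUser(string userName, string password)
            {
                // A real credential check would go here.
                return userName == "demo" && password == "demo";
            }
        }

        // Minimal named-pipe host exposing the same service over IPC.
        public static class AuthPipeServer
        {
            public static void ServeOnce(IAuthService service)
            {
                using (var pipe = new NamedPipeServerStream("auth", PipeDirection.InOut))
                {
                    pipe.WaitForConnection();
                    using (var reader = new StreamReader(pipe))
                    using (var writer = new StreamWriter(pipe) { AutoFlush = true })
                    {
                        // Toy protocol: "user|password" in, "True"/"False" out.
                        string[] parts = reader.ReadLine().Split('|');
                        writer.WriteLine(service.AuthenticateUser(parts[0], parts[1]));
                    }
                }
            }
        }

    Swapping in a TcpListener-based host later would not touch the controllers, only the hosting code.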

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several instances of the same ASP.NET application (one per customer) based on our custom framework (libraries). Each application uses its own database (the Initial Catalog, in connection string terms). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y. I have several general questions about how to design the architecture: (1) Can the workflow database be shared by all the applications? (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host? (3) How should the workflow engine communicate with the applications? Should each application call a WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine calls? How then will the workflow engine identify the applications? Both cases probably require some application identifier as a parameter in the API calls? (4) We would also like to store some information in the application databases based on the workflow states. Is it possible? Thanks for suggestions!
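
    On the varying-initial-settings point, a sketch of how WF4 can pass per-application settings as workflow input arguments, so the same workflow definition serves every application (the activity and argument names here are hypothetical):

        using System;
        using System.Activities;
        using System.Collections.Generic;

        // One workflow definition for all applications; only the inputs differ.
        public class NotifyUser : CodeActivity
        {
            public InArgument<string> Recipient { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                // A real activity would send the e-mail here.
                Console.WriteLine("Would e-mail: " + Recipient.Get(context));
            }
        }

        class Program
        {
            static void Main()
            {
                // Per-application configuration supplied at invocation time.
                WorkflowInvoker.Invoke(new NotifyUser(),
                    new Dictionary<string, object> { { "Recipient", "userX@example.com" } });
            }
        }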

    Read the article

  • Commercial Website architecture question

    - by Maxime ARNSTAMM
    Hello everyone, I have to write an architecture case study but there are some things that I don't know, so I'd like some pointers on the following: The website must handle 5k simultaneous users. The backend is composed of a commercial software package, some web services, some message queues, and a database. I want to recommend using Spring for the backend, to deal with the different elements and to expose some REST services. I also want to recommend Wicket for the front end (not the point here). What I don't know is: must I install the front and the back on the same Tomcat server or on two different ones? I am tempted to put two servers for the front, with a load balancer (no need for session replication in this case). But if I have two front servers, must I have two back servers? I don't want to create some kind of bottleneck. Based on what I read on this blog, a really huge load is handled by only one Tomcat for the first website mentioned. But I cannot find any info on this, so I can't tell if it seems plausible. If you can enlighten me so I can go on with my case study, that would be really helpful. Thanks :)

    Read the article

  • Architecture for new ASP.NET web application

    - by Anders Abel
    I'm maintaining an application which currently is just a web service (built with WCF) and a database backend. The web service is built in layers, with a LINQ-to-SQL data access part and core functionality in their own assembly, and on top of that the web service assembly which contains the WCF code. The core assembly also handles all business logic rules (very few, actually). The customer now wants a web interface for the application instead of just accessing it through other applications which consume the web service. I'm quite lost on modern web application design, so I would like some advice on what architecture and frameworks to use for the web application. The web application will be using the same core assembly with business rules and the LINQ-to-SQL data access layer as the web service. Some concepts I've thought about are: ASP.NET MVC; WebForms; AJAX controls, possibly letting the AJAX controls access the existing web service through JSON. Are there any more concepts I should look into? Which one is the best for a fresh project? The development tools are Visual Studio 2008 Team Edition for Developers targeting .NET 3.5. An upgrade to Visual Studio 2010 Premium (or maybe even Ultimate) is possible if it gives any benefits.
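
    To make the MVC option concrete, a sketch (with hypothetical names; OrderService stands in for whatever the core assembly exposes) of a controller that reuses the existing core assembly, so the business rules stay where they are today:

        using System.Web.Mvc;

        public class OrdersController : Controller
        {
            // Class from the shared core assembly; wiring it up via a
            // DI-aware controller factory is assumed here.
            private readonly OrderService orders;

            public OrdersController(OrderService orders)
            {
                this.orders = orders;
            }

            public ActionResult Details(int id)
            {
                // The controller only adapts HTTP to the core API.
                return View(orders.GetOrder(id));
            }
        }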

    Read the article

  • Determine target architecture of binary file in Linux (library or executable)

    - by Fernando Miguélez
    We have an issue related to a Java application running under a (rather old) FC3 on an Advantech POS board with a Via C3 processor. The Java application has several compiled shared libs that are accessed via JNI. The Via C3 processor is supposed to be i686-compatible. Some time ago, after installing Ubuntu 6.10 on a Mini-ITX board with the same processor, I found out that the previous statement is not 100% true. The Ubuntu kernel hung on startup due to the C3 processor lacking some specific, optional instructions of the i686 set. These instructions, missing in the C3 implementation of the i686 set, are used by default by the GCC compiler when i686 optimizations are enabled. The solution in this case was to go with the i386-compiled version of the Ubuntu distribution. The base problem with the Java application is that the FC3 distribution was installed on the HD by cloning from an image of the HD of another PC, this time an Intel P4. Afterwards the distribution needed some hacking to get it running, such as replacing some packages (such as the kernel one) with the i386-compiled versions. The problem is that after working for a while the system completely hangs without a trace. I am afraid that some i686 code is left somewhere in the system and could be executed randomly at any time (for example after recovering from suspend mode or something like that). My question is: is there any tool or way to find out what specific architecture a binary file (executable or library) targets, given that "file" does not provide that much information?

    Read the article

  • Importing data from third party datasource (open architecture design)

    - by mare
    How would you design an application (classes, interfaces in a class library) in .NET when we have a fixed database design on our side and we need to support imports of data from third-party data sources, which will most likely be in XML? For instance, let us say we have a Products table in our DB which has the columns Id, Title, Description, TaxLevel, Price, and on the other side we have, for instance, Products: ProductId, ProdTitle, Text, BasicPrice, Quantity. Currently I do it like this: have the third-party XML converted to classes and XSDs, and then deserialize its contents into strongly typed objects (what we get as a result of this process is classes like ThirdPartyProduct, ThirdPartyClassification, etc.). Then I have methods like InsertProduct(ThirdPartyProduct newproduct). I do not use interfaces at the moment but I would like to. What I would like is to implement something like

        public class Contoso_ProductSynchronization : ProductSynchronization
        {
            InsertProduct(ContosoProduct p)
        }

    where ProductSynchronization will be an interface or abstract class. There will most likely be many implementations of ProductSynchronization. I cannot hardcode the types - classes like ContosoProduct, NorthwindProduct might be created from the third-party XMLs (so preferably I would continue to use deserialization). Hopefully someone will understand what I'm trying to explain here. Just imagine you are the seller and you have numerous providers, each one using their own proprietary XML format. I don't mind the development, which will of course be needed every time a new format appears, because it will only require 10-20 methods to be implemented; I just want the architecture to be open and to support that.
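
    One possible shape for this (a sketch; the generic base class is an assumption layered on the question's own ContosoProduct example): each provider gets a strongly typed synchronizer that maps its deserialized type onto the fixed Products schema.

        // Fixed internal schema (columns from the question).
        public class Product
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public string Description { get; set; }
            public int TaxLevel { get; set; }
            public decimal Price { get; set; }
        }

        // Generated from the provider's XSD (per the question).
        public class ContosoProduct
        {
            public int ProductId { get; set; }
            public string ProdTitle { get; set; }
            public string Text { get; set; }
            public decimal BasicPrice { get; set; }
            public int Quantity { get; set; }
        }

        public abstract class ProductSynchronization<TExternal>
        {
            // Each provider implements only the mapping.
            protected abstract Product Map(TExternal external);

            public void InsertProduct(TExternal external)
            {
                Product product = Map(external);
                // ... persist product into the fixed Products table ...
            }
        }

        public class Contoso_ProductSynchronization : ProductSynchronization<ContosoProduct>
        {
            protected override Product Map(ContosoProduct p)
            {
                return new Product { Title = p.ProdTitle, Description = p.Text, Price = p.BasicPrice };
            }
        }

    Deserialization stays as-is; each new provider format only requires a new Map implementation.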

    Read the article
