Search Results

Search found 23827 results on 954 pages for 'software architecture'.


  • What are some respectable online colleges to get my BS in Software Engineering? [closed]

    - by Charity
    I have an AA in Social Science and want to earn my BS in Software Engineering. However, I work full time and have a family to support, so my only option is online. I'm seriously considering Colorado Technical University. They promote a program called Bachelor of Science in Software Engineering on their website and in Google searches; however, while I'm filling out the application, the program is actually called Bachelor of Science in Information Technology with a Software Systems Engineering specialization. This raises a red flag for me. I spent the past week looking online at all kinds of schools and would prefer a "brick and mortar" school's online program, but those only seem to be open to international students, which I am not. Living in Colorado Springs, CO (and being prior Army), I know there are tons of government DoD contractors, Lockheed Martin, Boeing, etc. that need software engineers, and I'm just not sure which school they would like to see me coming from. Not only a reputable school, but also one that has great programs, will teach me real-world situations, and will actually prepare me for my career. I would greatly appreciate any and all information or help you can offer.

    Read the article

  • Why is it that software is still easily pirated today?

    - by mohabitar
    I've always been curious about this. Now, I wouldn't call myself a programmer yet, but I'm learning, so maybe the answer to this is obvious. It just seems a little hard to believe that with all of our technological advances and the billions of dollars spent on engineering the most unbelievable and mind-blowing software, we still have no other means of protecting against piracy than a "serial number/activation key". I'm sure a ton of money, maybe even billions, went into creating Windows 7 or Office and even Snow Leopard, yet I can get them for free in less than 20 minutes. Same for all of Adobe's products, which are probably the easiest. I guess my question is: can there exist a foolproof and hack-proof method of protecting your software against piracy? If not realistically, is it at least theoretically possible? Or no matter what mechanisms these companies deploy, can hackers always find a way around them? EDIT: So apparently the answer is no. There's pretty much no way. And I'm sure these big companies have realized this as well. Should they adopt another sales model rather than charging a crapload for their software (I know it's justified and they put a lot of hard work into their software, but it's still a lot of money)? Are there any alternative solutions that would benefit both the company and the user (e.g. if you purchase our product, we'll apply $X to your account that will go toward future purchases from our company)?
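
    To make the asymmetry concrete, here is a minimal C# sketch of a client-side key check; the scheme (digits summing to a multiple of 7) is invented for illustration. Whatever the rule, the check compiles down to a branch running on the attacker's machine, which is exactly why it can always be patched out:

        using System;
        using System.Linq;

        class LicenseCheck
        {
            // Hypothetical rule: a key is "valid" if its digits sum to a multiple of 7.
            // Real schemes are fancier, but they all reduce to a branch like the one below.
            static bool IsValidKey(string key)
            {
                int sum = key.Where(char.IsDigit).Sum(c => c - '0');
                return sum % 7 == 0;
            }

            static void Main(string[] args)
            {
                // A cracker doesn't need to derive valid keys; flipping this one
                // branch in the compiled binary (or patching IsValidKey to return
                // true) defeats the scheme, which is why activation moved server-side.
                if (!IsValidKey(args.Length > 0 ? args[0] : ""))
                {
                    Console.WriteLine("Invalid key.");
                    return;
                }
                Console.WriteLine("Welcome!");
            }
        }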

    Read the article

  • Obj-C component-based game architecture and message forwarding

    - by WanderWeird
    Hello, I've been trying to implement a simple component-based game object architecture in Objective-C, much along the lines of the article 'Evolve Your Hierarchy' by Mick West. To this end, I've successfully used some ideas outlined in the article 'Objective-C Message Forwarding' by Mike Ash, that is to say, the -(id)forwardingTargetForSelector: method. The basic setup is that I have a container GameObject class that holds three component instances as instance variables: GCPositioning, GCRigidBody, and GCRendering. The -(id)forwardingTargetForSelector: method returns whichever component will respond to the relevant selector, determined using the -(BOOL)respondsToSelector: method. All this, in a way, works like a charm: I can call a method on the GameObject instance whose implementation is found in one of the components, and it works. Of course, the problem is that the compiler gives 'may not respond to ...' warnings for each call. Now, my question is, how do I avoid this? Specifically given that the point is for each GameObject instance to have a different set of components? Maybe a way to register methods with the container objects, on an object-per-object basis? Such as, can I create some kind of -(void)registerMethodWithGameObject: method, and how would I do that? Now, it may or may not be obvious that I'm fairly new to Cocoa and Objective-C, and just horsing around, basically, and this whole thing may be very alien here. Of course, though I would very much like to know of a solution to my specific issue, anyone who would care to explain a more elegant way of doing this would additionally be very welcome. Much appreciated, -Bastiaan
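
    The question is Objective-C, but the "register capabilities per object" idea is language-agnostic. Here is a hedged C# sketch of the same composition pattern (all type names invented): the container is queried for a capability instead of silently forwarding unknown messages, so there is nothing for a compiler to warn about:

        using System;
        using System.Collections.Generic;

        interface IComponent { }
        interface IMovable : IComponent { void MoveTo(float x, float y); }

        class GameObject
        {
            private readonly List<IComponent> components = new List<IComponent>();

            public void Add(IComponent component) { components.Add(component); }

            // Analogue of forwardingTargetForSelector:: return the first
            // component that "responds to" the requested capability.
            public T Get<T>() where T : class, IComponent
            {
                foreach (var c in components)
                {
                    var match = c as T;
                    if (match != null) return match;
                }
                return null;
            }
        }

        class Positioning : IMovable
        {
            public void MoveTo(float x, float y)
            {
                Console.WriteLine("moved to ({0}, {1})", x, y);
            }
        }

        class Program
        {
            static void Main()
            {
                var obj = new GameObject();
                obj.Add(new Positioning());
                var movable = obj.Get<IMovable>();   // typed lookup, no compiler warning
                if (movable != null) movable.MoveTo(3f, 4f);
            }
        }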

    Read the article

  • P6 Architecture - Register renaming aside, does the limited user register set result in more ops spent?

    - by mrjoltcola
    I'm studying JIT design with regard to dynamic language VM implementation. I haven't done much assembly since the 8086/8088 days, just a little here or there, so be nice if I'm out of sorts. As I understand it, the x86 (IA-32) architecture still has the same basic limited register set today that it always did, while the internal register count has grown tremendously; but these internal registers are not generally available, and are used with register renaming to achieve parallel pipelining of code that otherwise could not be parallelized. I understand this optimization pretty well, but my feeling is that while these optimizations help overall throughput and parallel algorithms, the limited register set we are still stuck with results in more register-spilling overhead, such that if x86 had double or quadruple the registers available to us, there might be significantly fewer push/pop opcodes in a typical instruction stream. Or are there other processor optimizations that also optimize this away that I am unaware of? Basically, if I've got a unit of code that has 4 registers to work with for integer work, but my unit has a dozen variables, I've got potentially a push/pop for every 2 or so instructions. Any references to studies, or better yet, personal experiences?

    Read the article

  • General N-Tier Architecture Question

    - by whatispunk
    In an N-Tier app you're supposed to have a business logic layer and a data access layer. Is it bad to simply have two assemblies, BusinessLogicLayer.dll and DataAccessLayer.dll, to handle all this logic? How do you actually represent these layers? It seems silly, the way I've seen it, to have a BusinessLogic class library containing classes like CustomerBusinessLogic.cs, OrderBusinessLogic.cs, etc., each calling their appropriately named cousin in the DataAccessLayer class library, i.e. CustomerDataAccess.cs, OrderDataAccess.cs. I want to create a web app using MVP, and it doesn't seem as cut and dried as this. There are lots of opinions about where the business logic is supposed to go in MVP, and I'm not sure I've found a really great answer yet. I want this project to be easily testable, and I am trying to adhere to TDD methodologies as best I can. I intend to use MSTest and Rhino Mocks for testing. I was thinking of something like the following for my architecture: I'd use LINQ-to-SQL to talk to the database, WCF services to define data contract interfaces for the business logic layer, and then MVP with ASP.NET Forms for the UI/BLL. Now, this isn't the start of this project; most of the LINQ stuff is already done, so that part is staying. The WCF service would replace the existing DataAccessLayer assembly and the UI/BLL would replace the BusinessLogicLayer assembly, etc. This sort of makes sense in my head, but it's getting really late. Anyone that's traveled down this path have any guidance? Good links? Warnings? Thanks!
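
    One conventional way to keep such a split testable without naming every class after its cousin is to put the data access behind interfaces and inject them into the business classes. A hedged C# sketch (names invented; any mocking framework, Rhino Mocks included, can stub the interface):

        using System.Collections.Generic;
        using System.Linq;

        public class Customer
        {
            public int Id { get; set; }
            public bool IsActive { get; set; }
        }

        // Lives conceptually in the DAL; implemented there with LINQ-to-SQL.
        public interface ICustomerRepository
        {
            IEnumerable<Customer> GetAll();
        }

        // Lives in the BLL; never touches the database directly.
        public class CustomerService
        {
            private readonly ICustomerRepository repository;

            public CustomerService(ICustomerRepository repository)
            {
                this.repository = repository;
            }

            public List<Customer> GetActiveCustomers()
            {
                // Business rule stays here; storage details stay behind the interface.
                return repository.GetAll().Where(c => c.IsActive).ToList();
            }
        }

    In a unit test, Rhino Mocks (or a hand-rolled fake) supplies ICustomerRepository, so the business rule is exercised without a database; only integration tests need the real LINQ-to-SQL implementation.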

    Read the article

  • change custom mapping - sharp architecture/ fluent nhibernate

    - by csetzkorn
    I am using Sharp Architecture, which also deploys Fluent NHibernate (FNH). The DB schema SQL is generated during testing like this:

        [TestFixture]
        [Category("DB Tests")]
        public class MappingIntegrationTests
        {
            [SetUp]
            public virtual void SetUp()
            {
                string[] mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies();
                configuration = NHibernateSession.Init(
                    new SimpleSessionStorage(),
                    mappingAssemblies,
                    new AutoPersistenceModelGenerator().Generate(),
                    "../../../../app/XXX.Web/NHibernate.config");
            }

            [TearDown]
            public virtual void TearDown()
            {
                NHibernateSession.CloseAllSessions();
                NHibernateSession.Reset();
            }

            [Test]
            public void CanConfirmDatabaseMatchesMappings()
            {
                var allClassMetadata = NHibernateSession.GetDefaultSessionFactory().GetAllClassMetadata();
                foreach (var entry in allClassMetadata)
                {
                    NHibernateSession.Current.CreateCriteria(entry.Value.GetMappedClass(EntityMode.Poco))
                        .SetMaxResults(0).List();
                }
            }

            /// <summary>
            /// Generates and outputs the database schema SQL to the console
            /// </summary>
            [Test]
            public void CanGenerateDatabaseSchema()
            {
                System.IO.TextWriter writeFile = new StreamWriter(@"d:/XXXSqlCreate.sql");
                var session = NHibernateSession.GetDefaultSessionFactory().OpenSession();
                new SchemaExport(configuration).Execute(true, false, false, session.Connection, writeFile);
            }

            private Configuration configuration;
        }

    I am trying to use an auto-mapping override to change the standard mapping of strings from NVARCHAR(255) to varchar(max):

        using FluentNHibernate.Automapping;
        using xxx.Core;
        using SharpArch.Data.NHibernate.FluentNHibernate;
        using FluentNHibernate.Automapping.Alterations;

        namespace xxx.Data.NHibernateMaps
        {
            public class x : IAutoMappingOverride<x>
            {
                public void Override(AutoMapping<x> mapping)
                {
                    mapping.Map(x => x.text, "text").CustomSqlType("varchar(max)");
                    mapping.Map(x => x.url, "url").CustomSqlType("varchar(max)");
                }
            }
        }

    This is not picked up during the SQL schema generation. I also tried:

        mapping.Map(x => x.text, "text").Length(100000);

    Any ideas? Thanks. Christian
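
    A frequent cause of exactly this symptom is that the override class is never registered with the auto persistence model, so schema generation never sees it. Fluent NHibernate only applies IAutoMappingOverride<T> implementations if the model is told which assembly to scan for them. A hedged sketch of where that line would go, reusing the question's type names (the entity type in AssemblyOf<> is a placeholder):

        using FluentNHibernate.Automapping;

        public class AutoPersistenceModelGenerator
        {
            public AutoPersistenceModel Generate()
            {
                return AutoMap.AssemblyOf<SomeEntity>()      // assembly holding the xxx.Core entities
                    // Without this, IAutoMappingOverride<T> classes are ignored and
                    // string columns fall back to the default NVARCHAR(255):
                    .UseOverridesFromAssemblyOf<x>();        // assembly holding xxx.Data.NHibernateMaps
            }
        }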

    Read the article

  • Objective-C architecture question

    - by thekevinscott
    Hey folks, I'm currently teaching myself Objective-C. I've gone through the tutorials, but I find I learn best when slogging through a project of my own, so I've embarked on making a backgammon app. Now that I'm partway in, I'm realizing there are some overall architecture things I just don't understand. I've established a "player" class, a "piece" class, and a "board" class. A piece theoretically belongs to both a player and the board. For instance, a player has a color and makes a move every turn, so the player owns his pieces. At the same time, when moving a piece, it has to be checked whether the move is valid, whether there are pieces on the board, etc. From my reading it seems like it's frowned upon to reach across classes. For instance, when a player makes a move, where should the function live that moves the piece? Should it exist on the board? This would be my instinct, as the board should decide whether a move is valid or not; but the piece needs to initiate that query, as it's the one being moved, no? Any info to help a noob would be super appreciated. Thanks guys!
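
    The usual answer is that the board owns the rules and the player merely initiates the move; the piece stays a dumb data holder. A hedged, language-agnostic sketch of that split (in C#, since the idea isn't Objective-C-specific; all names invented):

        using System;

        class Piece
        {
            public int Point { get; set; }
        }

        class Board
        {
            public bool TryMove(Piece piece, int toPoint)
            {
                if (!IsLegal(piece, toPoint)) return false;
                piece.Point = toPoint;     // the board mutates the piece it just validated
                return true;
            }

            private bool IsLegal(Piece piece, int toPoint)
            {
                // Real backgammon rules (direction, blots, the bar) would live here.
                return toPoint >= 0 && toPoint < 24;
            }
        }

        class Player
        {
            public void MakeMove(Board board, Piece piece, int roll)
            {
                // The player asks; the board decides. The piece never has to
                // reach "across" to query the board itself.
                if (!board.TryMove(piece, piece.Point + roll))
                    Console.WriteLine("Illegal move, try again.");
            }
        }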

    Read the article

  • Best architecture for a social media app

    - by Sky
    Hey guys, I'm working on a promising project that develops a new social media app for web and mobile. We are at the beginning, defining functionalities. Nevertheless, I'm thinking ahead about architecture. So I'm asking: 1 - What's the best platform to develop the core of this application, which will have a REST API interface? 2 - What's the best database that will scale and grow with my application? As far as I've researched, these were the answers I found most interesting. For the database: Cassandra NoSQL DB - amazing scalability, amazing write performance, good read performance (to be improved in 0.6); I think I will choose that one. ZooKeeper for transactions on Cassandra. I think those two technologies are really good for that purpose. What do you think, guys? For the front end that will serve the REST API, I don't have a final candidate. Here I have questions balancing Performance x Scalability x Fast Development/Maintenance. Java or .NET, as far as I've researched, bring the best balance of these requirements. Python, Perl and Rails have the best Fast Development/Maintenance, but suck at all the rest. C or C++ I don't even consider, because their Fast Development/Maintenance sucks... So what do you guys think about it?

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with, and in a couple of months is expected to grow to 50M users (not concurrent users, though). I would like to have an architecture that can be scaled out easily without much effort. In order to explain, I would like to use a trivial scenario. Let's say User entities and services such as CreateUser, AuthenticateUser etc. are simple method calls for the page controllers. But once the traffic increases, for example, authenticating users (or similar services related to user entities) has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is 40K would be overkill. My proposal was to use IPC initially, and when we need to scale out we can internally switch to TCP-based RPC calls so that it can easily scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving on to a TcpListener later on. If we have a proper design that can encapsulate the above approach, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: database scaling is definitely the second-phase optimization, so we have already made architectural design decisions to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
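
    Since the stack is .NET, a hedged alternative to hand-rolling the pipe/TCP switch: define each service once as a WCF contract and let the binding decide the transport - NetNamedPipeBinding while the service is on-box, NetTcpBinding once it moves to its own server. A minimal sketch (contract and addresses invented; in practice the choice would live in app.config, not code):

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;

        [ServiceContract]
        public interface IAuthenticationService
        {
            [OperationContract]
            bool AuthenticateUser(string userName, string password);
        }

        class Client
        {
            static void Main()
            {
                bool serviceIsLocal = true;   // would come from configuration

                // Same contract either way; only transport and address differ.
                Binding binding = serviceIsLocal
                    ? (Binding)new NetNamedPipeBinding()
                    : new NetTcpBinding();
                string address = serviceIsLocal
                    ? "net.pipe://localhost/auth"
                    : "net.tcp://authserver:9000/auth";

                var factory = new ChannelFactory<IAuthenticationService>(binding, address);
                IAuthenticationService proxy = factory.CreateChannel();
                Console.WriteLine(proxy.AuthenticateUser("alice", "secret"));
            }
        }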

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several instances of the same ASP.NET application (one per customer), based on our custom framework (libraries). Each application uses its own database (Initial Catalog in connection-string terms). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y. I have several general questions on how to design the architecture: (1) Can the workflow database be shared by all the applications? (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host? (3) How should the workflow engine communicate with the applications? Should each application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API for the workflow engine to call? How would the workflow engine then identify applications? Both cases probably require some application identifier as a parameter in API calls? (4) We would also like to store some information in the application databases based on workflow states. Is that possible? Thanks for suggestions!
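
    On the "same workflow, varying settings" point, a hedged WF4 sketch: keep one workflow definition and pass the per-application variation (application id, recipient) as input arguments at invocation time. The activity and argument names are invented:

        using System;
        using System.Activities;
        using System.Collections.Generic;

        // A trivial activity standing in for the real business workflow.
        class NotifyUser : CodeActivity
        {
            public InArgument<string> ApplicationId { get; set; }
            public InArgument<string> Recipient { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                Console.WriteLine("[{0}] e-mail sent to {1}",
                    ApplicationId.Get(context), Recipient.Get(context));
            }
        }

        class Program
        {
            static void Main()
            {
                // One workflow definition; the per-application variation
                // travels in the inputs instead of being baked into the workflow.
                var inputs = new Dictionary<string, object>
                {
                    { "ApplicationId", "customer-A" },
                    { "Recipient", "userX" }      // would be "userY" for another customer
                };
                WorkflowInvoker.Invoke(new NotifyUser(), inputs);
            }
        }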

    Read the article

  • Commercial Website architecture question

    - by Maxime ARNSTAMM
    Hello everyone, I have to write an architecture case study, but there are some things that I don't know, so I'd like some pointers on the following: the website must handle 5k simultaneous users, and the backend is composed of a commercial software package, some web services, some message queues, and a database. I want to recommend Spring for the backend, to deal with the different elements and to expose some REST services. I also want to recommend Wicket for the front end (not the point here). What I don't know is: must I install the front and back ends on the same Tomcat server, or on two different ones? I am tempted to put two servers on the front, with a load balancer (no need for session replication in this case). But if I have two front servers, must I have two back servers? I don't want to create some kind of bottleneck. Based on what I read on this blog, a really huge load is handled by only one Tomcat for the first website mentioned. But I cannot find any info on this, so I can't tell whether it is plausible. If you can enlighten me so I can go on with my case study, that would be really helpful. Thanks :)

    Read the article

  • Architecture for new ASP.NET web application

    - by Anders Abel
    I'm maintaining an application which currently is just a web service (built with WCF) and a database backend. The web service is built in layers, with a LINQ-to-SQL data access part and core functionality in its own assembly, and on top of that the web service assembly containing the WCF code. The core assembly also handles all business logic rules (very few, actually). The customer now wants a web interface for the application instead of just accessing it through other applications which consume the web service. I'm quite lost on modern web application design, so I would like some advice on what architecture and frameworks to use for the web application. The web application will use the same core assembly with business rules and the LINQ-to-SQL data access layer as the web service. Some concepts I've thought about are: ASP.NET MVC; WebForms; AJAX controls, possibly letting the AJAX controls access the existing web service through JSON. Are there any more concepts I should look into? Which one is best for a fresh project? The development tools are Visual Studio 2008 Team Edition for Developers, targeting .NET 3.5. An upgrade to Visual Studio 2010 Premium (or maybe even Ultimate) is possible if it gives any benefits.

    Read the article

  • Determine target architecture of binary file in Linux (library or executable)

    - by Fernando Miguélez
    We have an issue related to a Java application running under a (rather old) FC3 on an Advantech POS board with a VIA C3 processor. The Java application has several compiled shared libs that are accessed via JNI. The VIA C3 processor is supposed to be i686-compatible. Some time ago, after installing Ubuntu 6.10 on a Mini-ITX board with the same processor, I found out that the previous statement is not 100% true. The Ubuntu kernel hung on startup due to the lack of some specific and optional instructions of the i686 set in the C3 processor. These instructions, missing in the C3's implementation of the i686 set, are used by default by the GCC compiler when i686 optimizations are enabled. The solution in this case was to go with an i386-compiled version of the Ubuntu distribution. The base problem with the Java application is that the FC3 distribution was installed on the HD by cloning from an image of the HD of another PC, this time an Intel P4. Afterwards the distribution needed some hacking to get it running, such as replacing some packages (the kernel one, for example) with their i386-compiled versions. The problem is that after working for a while the system completely hangs without a trace. I am afraid that some i686 code is left somewhere in the system and could be executed randomly at any time (for example after recovering from suspend mode or something like that). My question is: is there any tool or way to find out which specific architecture a binary file (executable or library) targets, given that "file" does not provide that much information?
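
    A hedged sketch of the usual command-line approach (paths are placeholders): readelf and objdump report the ELF machine type, and since the i686-only instructions a VIA C3 chokes on are chiefly the CMOVcc family, grepping a disassembly for them flags suspect binaries:

        # ELF header: class (32/64-bit) and machine type
        readelf -h /usr/lib/libsuspect.so | grep -E 'Class|Machine'

        # objdump's one-line file format / architecture summary
        objdump -f /usr/lib/libsuspect.so

        # i386 vs i686 is not recorded in the header, so look for
        # i686-only opcodes; a non-zero count means CMOV is present:
        objdump -d /usr/lib/libsuspect.so | grep -c 'cmov'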

    Read the article

  • Importing data from third party datasource (open architecture design )

    - by mare
    How would you design an application (classes, interfaces in a class library) in .NET when we have a fixed database design on our side and we need to support imports of data from third-party data sources, which will most likely be in XML? For instance, let us say we have a Products table in our DB which has the columns Id, Title, Description, TaxLevel, Price, and on the other side we have, for instance, Products: ProductId, ProdTitle, Text, BasicPrice, Quantity. Currently I do it like this: convert the third-party XML to XSDs and generated classes, and then deserialize its contents into strongly typed objects (what we get as a result of this process is classes like ThirdPartyProduct, ThirdPartyClassification, etc.). Then I have methods like InsertProduct(ThirdPartyProduct newproduct). I do not use interfaces at the moment, but I would like to. What I would like is to implement something like public class Contoso_ProductSynchronization : ProductSynchronization, with InsertProduct(ContosoProduct p), where ProductSynchronization will be an interface or abstract class. There will most likely be many implementations of ProductSynchronization. I cannot hardcode the types - classes like ContosoProduct and NorthwindProduct might be created from the third-party XMLs (so preferably I would continue to use deserialization). Hopefully someone will understand what I'm trying to explain here. Just imagine you are the seller: you have numerous providers, and each one uses their own proprietary XML format. I don't mind the development, which will of course be needed every time a new format appears, because it will only require 10-20 methods to be implemented; I just want the architecture to be open and to support that.
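
    A hedged sketch of that open shape: a generic base class owns the XmlSerializer plumbing, and each provider ships one small subclass that maps its deserialized type onto the fixed Products schema. All names are illustrative:

        using System.IO;
        using System.Xml.Serialization;

        // Fixed internal shape (mirrors the Products table).
        public class Product
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public string Description { get; set; }
            public decimal Price { get; set; }
        }

        // One implementation per third-party feed format.
        public abstract class ProductSynchronization<TExternal>
        {
            public void Import(Stream xml)
            {
                var serializer = new XmlSerializer(typeof(TExternal));
                var external = (TExternal)serializer.Deserialize(xml);
                InsertProduct(MapToProduct(external));
            }

            // The only part each provider-specific subclass must write.
            protected abstract Product MapToProduct(TExternal external);

            protected virtual void InsertProduct(Product p)
            {
                // write to the Products table here
            }
        }

        public class ContosoProduct                  // shape dictated by Contoso's XSD
        {
            public string ProdTitle { get; set; }
            public string Text { get; set; }
            public decimal BasicPrice { get; set; }
        }

        public class ContosoProductSynchronization : ProductSynchronization<ContosoProduct>
        {
            protected override Product MapToProduct(ContosoProduct p)
            {
                return new Product { Title = p.ProdTitle, Description = p.Text, Price = p.BasicPrice };
            }
        }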

    Read the article

  • What pattern is layered architecture in ASP.NET?

    - by haansi
    Hi, I am an ASP.NET developer and don't know much about patterns and architecture. I will be very thankful if you can guide me here. In my web applications I use four layers: a web site project (web forms, user controls and master pages, each with their code-behind .cs files); a CustomTypesLayer class library (custom types, enumerations, DTOs, constructors, getters/setters and validations); a BusinessLogicLayer class library (all business logic and rules, and all calls to DAL functions); and a DataAccessLayer class library (just classes communicating with the database). My user interface calls only the BusinessLogicLayer. The BusinessLogicLayer does processing itself and calls DataAccessLayer functions for data. Web forms do not call the DAL directly. The CustomTypesLayer is shared by all layers. Please guide me: is this approach a pattern? I thought it might be MVC or MVP, but the pages have their code-behind files as well, which is confusing me. If it is not a pattern, is it close to some pattern? Please guide. Thanks

    Read the article

  • Entity and N-Tier architecture in C#

    - by acadia
    Hello, I have three tables as shown below:

        Emp
        ---
        empID int
        empName
        deptID

        empDetails
        ----------
        empDetailsID int
        empID int

        empDocuments
        ------------
        docID
        empID
        docName
        docType

    I am creating an entity class so that I can use an n-tier architecture to do database transactions etc. in C#. I started creating a class for it as shown below:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace employee
        {
            class emp
            {
                public int EmpID { get; set; }
                public string EmpName { get; set; }
                public int DeptID { get; set; }
            }
        }

    My question is: as empDetails and empDocuments are related to emp by empID, how do I include those in my emp class? I would appreciate it if you could direct me to an example. Thanks
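
    A hedged sketch of the usual answer: each one-to-many relationship becomes a collection property on the parent entity, so a loaded emp carries its details and documents with it. The child classes are inferred from the table columns above; names are otherwise invented:

        using System.Collections.Generic;

        namespace employee
        {
            public class EmpDetails
            {
                public int EmpDetailsID { get; set; }
                public int EmpID { get; set; }
            }

            public class EmpDocument
            {
                public int DocID { get; set; }
                public int EmpID { get; set; }
                public string DocName { get; set; }
                public string DocType { get; set; }
            }

            public class Emp
            {
                public Emp()
                {
                    // Initialize so callers can Add() without null checks.
                    Details = new List<EmpDetails>();
                    Documents = new List<EmpDocument>();
                }

                public int EmpID { get; set; }
                public string EmpName { get; set; }
                public int DeptID { get; set; }

                // One emp row owns many empDetails / empDocuments rows.
                public IList<EmpDetails> Details { get; private set; }
                public IList<EmpDocument> Documents { get; private set; }
            }
        }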

    Read the article

  • Pitfalls of the Architecture - Database based HTTP Request/Response Parsing

    - by Sam
    We have a current eCommerce site that runs on ASP.NET, and we hired a consultant to develop a new site based on SOA. The new site architecture is as follows. Web application: a single-page web application (built on javascript/jquery templates; it does not use any MVVM framework) with some javascript thrown all over the place. Service layer: a very, very light service layer that does nothing other than call a single stored procedure and pass in the entire HTTP request. Database: the entire site content is in the database. The database does the heavy lifting of parsing the request and, based on the HTTP method and some input parameters, calls the appropriate stored procedures or views and renders the result as JSON/XML. We have been told by them that this is built on the latest and greatest technologies. I have a lot of concerns; a few of them are: load on the database; SEO for a single-page application, as this is a public-facing website; scalability; whether this is actually SOA; cross-browser compatibility (the site does not work in < IE9); and whether this is a realistic implementation of a single-page application. I know something is not right, but I just need to validate my concerns here. Please help me.

    Read the article

  • Reporting system for organization. Architecture advise required

    - by Andrew Florko
    We have several legacy and third-party systems in the organization that use several RDBMS vendors (and more specific data storages). Cross-system data reporting (as well as extra reports that are not implemented in the third-party systems) is required, with charts and population of templates (Word, Excel). The reporting system is envisioned as an intranet web site with per-user access to reports. We expect ~50 reports per day. Would you suggest using BizTalk or any other integration software, given that the commercial department doesn't plan to buy anything expensive? Would you suggest creating a centralized data storage for reporting that is populated regularly, or relying on on-demand services that provide up-to-date data on request? Thank you in advance!

    Read the article

  • Application Architecture using WCF and System.AddIn

    - by Silverhalide
    A little background -- we're designing an application that uses a client/server architecture consisting of: a server which loads server-side modules, potentially developed by other teams; and a client which loads corresponding client-side modules (also potentially developed by those other teams; each client module corresponds to a server module). The client side communicates with the server side for general coordination as well as module-specific tasks. (At this point, I think that means the client talks to the server, and client modules talk to server modules.) The environment is .NET 3.5, and the client side is WPF. The deployment scenario introduces the potential to upgrade the server, any server-side module, the client, and any client-side module independently. However, being able to "work" using mismatched versions is required. I'm therefore concerned about versioning issues. My thinking so far: a Windows Service for the server. Using System.AddIn for the server to load and communicate with the server modules will give us the greatest flexibility in terms of version compatibility between server and server modules. The server and each server module vend WCF services for communication to the client side; communication between the server and a server module, or between two server modules, uses the AddIn contracts. (One advantage of this is that a module can expose a different interface within the server than outside it.) Similarly, the client uses System.AddIn to find, load, and communicate with the client modules. Client communication with client modules is via the AddIn interface; communications from the client and from client modules to the server side are via WCF. For maximum resilience, each module will run in a separate app-domain. In general, the system has modest performance requirements, so marshalling and crossing process boundaries is not expected to be a performance concern. (The performance requirement is basically summed up by: don't get in the way of the other parts of the system not described here.) My questions are around the idea of having two different communication and versioning models to work with, which will be an added burden on our developers. System.AddIn seems quite powerful, but also a little unwieldy. (I'm also unsure of Microsoft's commitment to it in the future.) On the other hand, I'm not thrilled with WCF's versioning capabilities. I have a feeling that it would be possible to implement the System.AddIn view/adapter/contract system within WCF, but being fairly new to both technologies, I would have no idea where to start. So... Am I on the right track here? Am I doing this the hard way? Are there gotchas I need to be aware of on this road? Thanks.
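
    For reference, a hedged sketch of the host-side System.AddIn calls the server described above would make to discover and activate modules in isolated app-domains (the pipeline folder path and the host view interface are assumptions; the host view would really live in its own pipeline assembly):

        using System;
        using System.AddIn.Hosting;

        // Host view of the add-in (belongs in the HostView pipeline assembly).
        public interface IServerModule
        {
            void Start();
        }

        class ModuleLoader
        {
            static void Main()
            {
                string pipelineRoot = @"C:\MyApp\Pipeline";   // assumed add-in pipeline folder

                // Rebuild the pipeline cache, then discover add-ins matching the host view.
                string[] warnings = AddInStore.Update(pipelineRoot);
                foreach (var w in warnings) Console.WriteLine(w);

                var tokens = AddInStore.FindAddIns(typeof(IServerModule), pipelineRoot);
                foreach (AddInToken token in tokens)
                {
                    // Activate<T> gives each module its own AppDomain, as the design
                    // calls for; version mismatches are absorbed by the adapter layers.
                    IServerModule module = token.Activate<IServerModule>(AddInSecurityLevel.FullTrust);
                    module.Start();
                }
            }
        }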

    Read the article

  • What is a good software development plan?

    - by Totophil
    Whilst browsing through answers on SO, I came across something that is, in my view, one of the more frequent software development management misconceptions: "[a software development] plan is a reasonably detailed description of all the activities you need to undertake". Hence the question: what is a good software development plan? Can it be boiled down to just a work breakdown structure? Is the WBS the single most important thing for a software development plan anyway?

    Read the article

  • Architecture Suggestions/Recommendations for a Web Application with Sub-Apps

    - by user579218
    Hello. I’m starting to plan the architecture for a big web application, and I wanted to get suggestions and/or recommendations on where to begin and which technologies and/or frameworks to use. The application will be an intranet-based web site using Windows authentication, running on IIS and using SQL Server and ASP.NET. It’ll need to be structured as a main/shell application with sub-applications that are “pluggable” based on some configuration settings. The main or shell application provides the overall user interface structure - header/footer, dynamically built tabs for each available sub-app, and a content area in which the sub-application is loaded when the user clicks on the sub-application’s tab. So, on start-up of the main/shell application, configuration information will be queried from a database and, based on the user and which of the sub-apps are available, the main or shell app will dynamically build tabs (or buttons or something) as a way to access each individual application. On start-up, the content area will be populated with the “home” sub-app, and clicking on a sub-app tab will populate the content area with the sub-app corresponding to the tab. For example, we’re going to have a reports application, a display application, and probably a couple of other distinct applications. On startup of the main/shell application, after determining who the user is, the main app will query the database to determine which sub-apps the user can use and build out the UI. Then the user can navigate between available sub-apps and do their work in each. Finally, the entire app and all sub-apps need a layered design with presentation, service, business, and data access layers, as well as cross-cutting objects for things such as logging, exception handling, etc. Anyway, my questions revolve around where to begin to plan something like this. What technologies/frameworks would work best in developing a solution? MVC? MVP? WCSF? EF? NHibernate? Enterprise Library? Repository pattern? Others? I know all these technologies/frameworks are not used for the same purpose, but knowing which ones to focus on is a little overwhelming. Which ones would be the best choice(s) for a solution? Which ones work well together for an end-to-end design? How would one structure the VS project for something like this? Thanks!
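
    A hedged sketch of the config-driven shell piece (table shape and type names invented): tab definitions come from the database, and each sub-app is resolved from its type name by reflection, which keeps the shell ignorant of the individual applications:

        using System;
        using System.Collections.Generic;

        public interface ISubApplication
        {
            string TabTitle { get; }
            void LoadInto(object contentArea);   // in ASP.NET this would host a control/view
        }

        public class SubAppRecord     // one row of the sub-app configuration table
        {
            public string TabTitle { get; set; }
            public string TypeName { get; set; }   // e.g. "Reports.ReportsSubApp, Reports"
        }

        public class Shell
        {
            public IList<ISubApplication> BuildTabs(IEnumerable<SubAppRecord> rows)
            {
                // rows come from the DB, already filtered by the user's permissions.
                var apps = new List<ISubApplication>();
                foreach (var row in rows)
                {
                    var type = Type.GetType(row.TypeName, true);   // throw if misconfigured
                    apps.Add((ISubApplication)Activator.CreateInstance(type));
                }
                return apps;   // the shell renders one tab per entry
            }
        }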

    Read the article

  • Automated Error Reporting = More Robust Software

    - by Laila
    I would like to tell you how to revolutionize your software development process </marketing hyperbole> On a more serious note, we (Red Gate's .NET development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (and one of our developers, Alex Davies) just wanted to take a quick moment to share the love. I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge - encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is... tricky. Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem - which was most of the time - we had to enter into a time-consuming to-and-fro conversation with the end-user to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow. To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: as soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we use SmartAssembly's Automated Error Reporting, we avoid having to spend weeks building an exception reporting mechanism. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs. Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback. Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year, and found that 40% of software developers never really considered deploying error reporting. Considering that we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting.
Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machine. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.
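
    For the curious: at its core, automated error reporting is a process-wide exception hook plus an upload. A generic hedged C# sketch of the hook - not SmartAssembly's actual API, which injects its reporting into the assembly at build time:

        using System;
        using System.IO;

        static class CrashReporter
        {
            public static void Install()
            {
                // Catch any exception that would otherwise kill the process silently.
                AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
                {
                    var ex = e.ExceptionObject as Exception;
                    // A real reporter would package this up (stack trace, environment,
                    // and in SmartAssembly's case local variable values) and offer to
                    // send it to a collection server instead of writing a local file.
                    File.AppendAllText("crash-report.txt",
                        DateTime.UtcNow + Environment.NewLine + ex + Environment.NewLine);
                };
            }
        }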

    Read the article

  • What quality standards to consider for software development process?

    - by Ron-Damon
    Hi, I'm looking for metrics/standards to evaluate a given software development process. I'm NOT looking to evaluate the SOFTWARE itself (through SQuaRE and such); I'm trying to evaluate the software development PROCESS. So my question is whether you could give me some pointers to such a standard, considering that the "evaluation objectives" would be documentation quality, how good the customer relationship is, how effective the process is, etc. Very much like ISO 9000, and in a sense like CMMI, but much more lightweight, concrete and process-oriented rather than company-oriented. Please help; I'm trying to establish the advantages of our development process as formally as I can.

    Read the article

  • Isn't GPL enough to make a software free as in free speech?

    - by user61852
    I have read people disputing the claim that a certain piece of software is free as in free speech, even when it is licensed under the GPL. Some say Java isn't free because to obtain a professional certification you must get it from Oracle. Some say the Java JDK is not free to redistribute. Some people even say OpenJDK is not free or open. But Java is officially GPL. Doesn't the GPL explicitly mean you are free to redistribute? Isn't the GPL enough to make a piece of software free as in free speech? How can Java be both GPL and not free as in free speech? Is there any license that truly makes software free beyond any possible subjective point of view? EDIT: This question is not about names or trademarks; it's about the code.

    Read the article
