Search Results

Search found 23827 results on 954 pages for 'software architecture'.

Page 83/954 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.
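    One everyday .NET analogue to the array example above is a Code Contracts precondition: a small, machine-checked piece of specification that lives alongside the code. The sketch below is only an illustration and is not from the article itself; the SafeIndexing class and GetValueAt method are made-up names, and it assumes .NET 4's System.Diagnostics.Contracts with the contract tools enabled.

        using System;
        using System.Diagnostics.Contracts;

        public static class SafeIndexing
        {
            // The preconditions are a machine-checked fragment of the specification:
            // callers must supply a non-null array and an index that is valid for it.
            public static int GetValueAt(int[] values, int index)
            {
                Contract.Requires(values != null);
                Contract.Requires(index >= 0 && index < values.Length);

                return values[index];
            }

            static void Main()
            {
                var twoElements = new[] { 10, 20 };

                Console.WriteLine(GetValueAt(twoElements, 1));   // fine
                // GetValueAt(twoElements, 4) would violate the precondition; the static
                // checker (or a runtime contract failure) flags it before it becomes the
                // kind of bug the type system alone cannot catch.
            }
        }

    This is still a long way from a dependent type - the constraint lives on the method rather than in the type of the array - but it shows the direction the article describes.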

    Read the article

  • Agile SOA Governance: SO-Aware and Visual Studio Integration

    - by gsusx
    One of the major limitations of traditional SOA governance platforms is the lack of integration with the development process. Tools like HP-Systinet or SOA Software are designed around models in which architects dictate the governance procedures and policies and the rest of the team members follow along. Consequently, those procedures are frequently rejected by developers and testers, given that they can't incorporate them into their daily activities. Having SOA governance products...(read more)

    Read the article

  • Database Trends & Applications column: Database Benchmarking from A to Z

    - by KKline
    Have you heard of the monthly print and web magazine Database Trends & Applications (DBTA)? Did you know I'm the regular columnist covering SQL Server ? For the past six months, I've been writing a series of articles about database benchmarking culminating in the latest article discussing my three favorite database benchmarking tools: the free, open-source HammerDB, the native SQL Server Distributed Replay Utility, and the commercial Benchmark Factory from Dell / Quest Software. Wondering what...(read more)

    Read the article

  • We are hiring (take a minute to read this, is not another BS talk ;) )

    - by gsusx
    I really wanted to wait until our new website was out to blog about this but I hope you can put up with the ugly website for a few more days :). Tellago keeps growing and, after a quick break at the beginning of the year, we are back in hiring mode :). We are currently expanding our teams in the United States and Argentina and have various positions open in the following categories. .NET developers: If you are an exceptional .NET programmer with a passion for creating great software solutions working...(read more)

    Read the article

  • Good design cannot be over-design

    - by ??? Shengyuan Lu
    Many engineers set out to build "flexible" systems, stuffing the design with patterns and interfaces. Eventually too many interfaces and complex inheritance hierarchies mess up the system. In most cases I think improper design causes the mess, not over-design; if a design is sound, it is hard for it to be "too much". Alternatively, if we don't have enough skill to achieve a flexible design, we should choose a plain and practical design instead. What's your opinion of my understanding?

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate) and the latter as a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and Organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much cost the air conditioning is because you have a team of system administrators? 
This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect, all are demand-driven. The general process is to lay out a matrix of: components units cost per unit and then multiply that times the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy. Simple Estimation The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create an “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element Azure Component Unit of Measure Cost Per Unit Estimated Use of Component Total Cost Per Component Cumulative Cost               Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type. Measure and Project A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK) which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of API’s greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into ay of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually smaller, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model. 
Sample and Estimate Using standard statistical and other predictive math, once the application is deployed you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, and using methods like regression and so on project out into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the larger the sample (in this case I recommend the entire population, not a smaller sample) is key. References and Tools Articles: http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx http://technet.microsoft.com/en-us/magazine/gg213848.aspx http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/ http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx   Other Tools: http://cloud-assessment.com/ http://communities.quest.com/community/cloud_tools
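    As a rough illustration of the matrix described above (components, units and cost per unit, multiplied by estimated usage), here is a minimal sketch of the simple-estimation spreadsheet as code. The component names echo the billing categories mentioned in the post, but every price and usage figure below is a placeholder invented for the example - the real numbers must always come from the Azure pricing pages linked above.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One row of the estimation matrix: an Azure component, its unit of
        // measure, the cost per unit and the estimated monthly usage.
        class CostLine
        {
            public string Component;
            public string Unit;
            public decimal CostPerUnit;
            public decimal EstimatedUnits;
            public decimal Total { get { return CostPerUnit * EstimatedUnits; } }
        }

        class CostModel
        {
            static void Main()
            {
                // Placeholder figures only - not actual Azure prices.
                var lines = new List<CostLine>
                {
                    new CostLine { Component = "Compute instances",    Unit = "instance-hours", CostPerUnit = 0.12m, EstimatedUnits = 2 * 730 },
                    new CostLine { Component = "Storage",              Unit = "GB-months",      CostPerUnit = 0.15m, EstimatedUnits = 50 },
                    new CostLine { Component = "Storage transactions", Unit = "per 10,000",     CostPerUnit = 0.01m, EstimatedUnits = 500 },
                    new CostLine { Component = "Bandwidth",            Unit = "GB out",         CostPerUnit = 0.15m, EstimatedUnits = 100 },
                    new CostLine { Component = "SQL Azure database",   Unit = "DB-months",      CostPerUnit = 9.99m, EstimatedUnits = 1 }
                };

                foreach (var line in lines)
                    Console.WriteLine("{0,-22} {1,10:C}", line.Component, line.Total);

                Console.WriteLine("Estimated monthly total: {0:C}", lines.Sum(l => l.Total));
            }
        }

    The same structure works for all three methods in the post: only the source of the EstimatedUnits column changes, from guesses, to instrumented test runs, to the figures on the actual monthly bill.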

    Read the article

  • Automating custom software installation in a zone

    - by mgerdts
    In Solaris 11, the internals of zone installation are quite different than they were in Solaris 10.  This difference allows the administrator far greater control of what software is installed in a zone.  The rules in Solaris 10 are simple and inflexible: if it is installed in the global zone and is not specifically excluded by package metadata from being installed in a zone, it is installed in the zone.  In Solaris 11, the rules are still simple, but are much more flexible:  the packages you tell it to install and the packages on which they depend will be installed. So, where does the default list of packages come from?  From the AI (auto installer) manifest, of course.  The default AI manifest is /usr/share/auto_install/manifest/zone_default.xml.  Within that file you will find:             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data> So, the default installation will install pkg:/group/system/solaris-small-server.  Cool.  What is that?  You can figure out what is in the package by looking for it in the repository with your web browser (click the manifest link), or use pkg(1).  In this case, it is a group package (pkg:/group/), so we know that it just has a bunch of dependencies to name the packages that really wants installed. $ pkg contents -t depend -o fmri -s fmri -r solaris-small-server FMRI compress/bzip2 compress/gzip compress/p7zip ... terminal/luit terminal/resize text/doctools text/doctools/ja text/less text/spelling-utilities web/wget If you would like to see the entire manifest from the command line, use pkg contents -r -m solaris-small-server. Let's suppose that you want to install a zone that also has mercurial and a full-fledged installation of vim rather than just the minimal vim-core that is part of solaris-small-server.  That's pretty easy. First, copy the default AI manifest somewhere where you will edit it and make it writable. # cp /usr/share/auto_install/manifest/zone_default.xml ~/myzone-ai.xml # chmod 644 ~/myzone-ai.xml Next, edit the file, changing the software_data section as follows:             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>                 <name>pkg:/developer/versioning/mercurial</name>                <name>pkg:/editor/vim</name>             </software_data> To figure out  the names of the packages, either search the repository using your browser, or use a command like pkg search hg. Now we are all ready to install the zone.  If it has not yet been configured, that must be done as well. # zonecfg -z myzone 'create; set zonepath=/zones/myzone' # zoneadm -z myzone install -m ~/myzone-ai.xml A ZFS file system has been created for this zone. Progress being logged to /var/log/zones/zoneadm.20111113T004303Z.myzone.install Image: Preparing at /zones/myzone/root. Install Log: /system/volatile/install.15496/install_log AI Manifest: /tmp/manifest.xml.XfaWpE SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: myzone Installation: Starting ... Creating IPS image Installing packages from: solaris origin: http://localhost:1008/solaris/54453f3545de891d4daa841ddb3c844fe8804f55/ DOWNLOAD PKGS FILES XFER (MB) Completed 169/169 34047/34047 185.6/185.6 PHASE ACTIONS Install Phase 46498/46498 PHASE ITEMS Package State Update Phase 169/169 Image State Update Phase 2/2 Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. 
Done: Installation completed in 531.813 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. Log saved in non-global zone as /zones/myzone/root/var/log/zones/zoneadm.20111113T004303Z.myzone.install Now, for a few things that I've seen people trip over: Ignore that bit about man pages - it's wrong.  Man pages are already installed so long as the right facet is set properly.  And that's a topic for another blog entry. If you boot the zone then just use zlogin myzone, you will see that services you care about haven't started and that svc:/milestone/config:default is starting.  That is because you have not yet logged into the console with zlogin -C myzone. If the zone has been booted for more than a very short while when you first connect to the zone console, it will seem like the console is hung.  That's not really the case - hit ^L (control-L) to refresh the sysconfig(1M) screen that is prompting you for information.

    Read the article

  • Programming is easy, Designing is hard

    - by Rachel
    I work as a programmer, and I feel that if design documents are properly in place and requirements are clearly specified, then programming is not that difficult. But when I think in terms of designing a piece of software, it gives me chills; I think that is the truly difficult part. I want to develop my design skills, so how should I go about it? Are there any books, blogs, websites or other approaches that the SO community can suggest? Update: By design I mean the design of the overall application or of the particular problem at hand, not UI design.

    Read the article

  • Why is x86 ugly? aka Why is x86 considered inferior when compared to others?

    - by claws
    Hello, recently I've been reading some SO archives and encountered statements against the x86 architecture. http://stackoverflow.com/questions/2667256/why-do-we-need-different-cpu-architecture-for-server-mini-mainframe-mixed-cor says "PC architecture is a mess, any OS developer would tell you that." http://stackoverflow.com/questions/82432/is-learning-assembly-language-worth-the-effort says "Realize that the x86 architecture is horrible at best." http://forums.anandtech.com/showthread.php?t=976577 says "Most colleges teach assembly on something like MIPS because it's much simpler to understand; x86 assembly is really ugly." And there are many more comments like "Compared to most architectures, x86 sucks pretty badly", "It's definitely the conventional wisdom that x86 is inferior to MIPS, SPARC, and PowerPC" and "x86 is ugly". I tried searching but didn't find any reasons. I don't find x86 bad, probably because it is the only architecture I'm familiar with. Can someone kindly give me reasons for considering x86 ugly/bad/inferior compared to other architectures?

    Read the article

  • Are there too many qualified software development engineers chasing too few jobs?

    - by T Gregory
    I am trying to write this question in a non-argumentative way, but it is quite emotionally charged for some, so please bear with me. In the U.S., we hear constantly from CEOs that they cannot find enough qualified software engineers. In fact, it is the position of the U.S. government that demand for software engineering talent outpaces supply. This position can be clearly seen in the granting of tens of thousands of H1B visas, but also in the following excerpt from the official 2010-11 Bureau of Labor Statistics Occupational Outlook Handbook: Employment of computer software engineers is expected to increase by 32 percent from 2008-2018, which is much faster than the average for all occupations. In addition, this occupation will see a large number of new jobs, with more than 295,000 created between 2008 and 2018. Demand for computer software engineers will increase as computer networking continues to grow. For example, expanding Internet technologies have spurred demand for computer software engineers who can develop Internet, intranet, and World Wide Web applications. Likewise, electronic data-processing systems in business, telecommunications, healthcare, government, and other settings continue to become more sophisticated and complex. Implementing, safeguarding, and updating computer systems and resolving problems will fuel the demand for growing numbers of systems software engineers. New growth areas will also continue to arise from rapidly evolving technologies. The increasing uses of the Internet, the proliferation of Web sites, and mobile technology such as the wireless Internet have created a demand for a wide variety of new products. As more software is offered over the Internet, and as businesses demand customized software to meet their specific needs, applications and systems software engineers will be needed in greater numbers. In addition, the growing use of handheld computers will create demand for new mobile applications and software systems. As these devices become a larger part of the business environment, it will be necessary to integrate current computer systems with this new, more mobile technology. However, from the the employee side of the equation, we often hear the opposite. Many of the stories of SDEs with graduate degrees and decades of experience on the unemployment line, or the big tech interview war stories, are anecdotal, for sure. But, there is one piece of data that is neither anecdotal nor transitory, and that is the aggregate decisions of millions of undergraduates of what degree to pursue. Here, a different picture emerges from the data, and that picture is not good for the software profession. According the most recent Taulbee Survey from Computer Research Association, undergrad degree production in CS and CE has fallen nearly 60% since 2004. (Undergrad enrollments have ticked up in the past two years, but only modestly). Here we see that a basic disconnect between what corporate CEOs and the US government are saying and what potential employees really think about job prospects in software engineering. So my questions are these. Who are we to believe? Is there an acute talent shortage, or is there a long-term structural oversupply in the SDE labor market? Can anyone provide reliable data on long-term unemployment among SDEs? How many are leaving the profession due to lack of work? Real data is most helpful. Thanks.

    Read the article

  • A first look at ConfORM - Part 1

    - by thangchung
    All source code for this post can be found here. Have you ever heard of ConfORM? I read about it three months ago, when I wrote a post about NHibernate and Autofac. At that time the project had only just started and was still in beta, so I did not pay it much attention. But recently, while reading Jason Dentler's book NHibernate 3.0 Cookbook, I started looking at it again - the author mentions quite a lot of OSS projects in his book - and so I reviewed ConfORM once more. I have joined the ConfORM development group on Google and read some articles about it. Fabio Maulo has put a lot of work into this OSS project, and I hope it will become a great fit for NHibernate (he is, after all, a NHibernate contributor). So what is ConfORM? It stands for "Configuration ORM", and it uses a number of heuristics to identify entities from C# code. Development today is mostly model first, so the first thing to build is the entity model. This is really important - we can see it as the heart of business software. Then we have to tell the database about the entities in this model: the database schema is generated from the entity model, and from that point on we are no longer interested in the database at all, only in the entity model. Fluent NHibernate is really good - I like that OSS project - and Sharp Architecture does a very good job of integrating Fluent NHibernate into applications. The multiple-database technique in Sharp Architecture is truly awesome: given a configuration, a connection string and a DLL containing the entity model, it creates a SessionFactory and caches it in memory. Because the number of SessionFactories can grow very large and fill up memory, it also provides a way of caching a SessionFactory in a file. This post will not completely explain how to build a multiple-database model; I have simply tried to pull together a number of posts from the community and apply some of my own knowledge to build a session management model for ConfORM. Like Fluent NHibernate, ConfORM also supports mapping at the interface level - see this to understand it. So the first thing is to build the entity model, and here is the one I will use for this article: a simple model for managing news and polls. It may be too simple for some readers, but I don't want to bring extra complexity into this post. I will then write some code to build a super type for the entity model.
public interface IEntity<TId>    {        TId Id { get; set; }    } public abstract class EntityBase<TId> : IEntity<TId>    {        public virtual TId Id { get; set; }         public override bool Equals(object obj)        {            return Equals(obj as EntityBase<TId>);        }         private static bool IsTransient(EntityBase<TId> obj)        {            return obj != null &&            Equals(obj.Id, default(TId));        }         private Type GetUnproxiedType()        {            return GetType();        }         public virtual bool Equals(EntityBase<TId> other)        {            if (other == null)                return false;            if (ReferenceEquals(this, other))                return true;            if (!IsTransient(this) &&            !IsTransient(other) &&            Equals(Id, other.Id))            {                var otherType = other.GetUnproxiedType();                var thisType = GetUnproxiedType();                return thisType.IsAssignableFrom(otherType) ||                otherType.IsAssignableFrom(thisType);            }            return false;        }         public override int GetHashCode()        {            if (Equals(Id, default(TId)))                return base.GetHashCode();            return Id.GetHashCode();        }    } Database schema will be created as:The next step is to build the ConORM builder to create a NHibernate Configuration. Patrick have a excellent article about it at here. Contract of it below: public interface IConfigBuilder    {        Configuration BuildConfiguration(string connectionString, string sessionFactoryName);    } The idea here is that I will pass in a connection string and a set of the DLL containing the Entity Model and it makes me a NHibernate Configuration (shame that I stole this ideas of Sharp Architecture). 
And here is its code: public abstract class ConfORMConfigBuilder : RootObject, IConfigBuilder    {        private static IConfigurator _configurator;         protected IEnumerable<Type> DomainTypes;         private readonly IEnumerable<string> _assemblies;         protected ConfORMConfigBuilder(IEnumerable<string> assemblies)            : this(new Configurator(), assemblies)        {            _assemblies = assemblies;        }         protected ConfORMConfigBuilder(IConfigurator configurator, IEnumerable<string> assemblies)        {            _configurator = configurator;            _assemblies = assemblies;        }         public abstract void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString);         protected abstract HbmMapping GetMapping();         public Configuration BuildConfiguration(string connectionString, string sessionFactoryName)        {            Contract.Requires(!string.IsNullOrEmpty(connectionString), "ConnectionString is null or empty");            Contract.Requires(!string.IsNullOrEmpty(sessionFactoryName), "SessionFactory name is null or empty");            Contract.Requires(_configurator != null, "Configurator is null");             return CatchExceptionHelper.TryCatchFunction(                () =>                {                    DomainTypes = GetTypeOfEntities(_assemblies);                     if (DomainTypes == null)                        throw new Exception("Type of domains is null");                     var configure = new Configuration();                    configure.SessionFactoryName(sessionFactoryName);                     configure.Proxy(p => p.ProxyFactoryFactory<ProxyFactoryFactory>());                    configure.DataBaseIntegration(db => GetDatabaseIntegration(db, connectionString));                     if (_configurator.GetAppSettingString("IsCreateNewDatabase").ConvertToBoolean())                    {                        configure.SetProperty("hbm2ddl.auto", "create-drop");                    }                     configure.Properties.Add("default_schema", _configurator.GetAppSettingString("DefaultSchema"));                    configure.AddDeserializedMapping(GetMapping(),                                                     _configurator.GetAppSettingString("DocumentFileName"));                     SchemaMetadataUpdater.QuoteTableAndColumns(configure);                     return configure;                }, Logger);        }         protected IEnumerable<Type> GetTypeOfEntities(IEnumerable<string> assemblies)        {            var type = typeof(EntityBase<Guid>);            var domainTypes = new List<Type>();             foreach (var assembly in assemblies)            {                var realAssembly = Assembly.LoadFrom(assembly);                 if (realAssembly == null)                    throw new NullReferenceException();                 domainTypes.AddRange(realAssembly.GetTypes().Where(                    t =>                    {                        if (t.BaseType != null)                            return string.Compare(t.BaseType.FullName,                                          type.FullName) == 0;                        return false;                    }));            }             return domainTypes;        }    } I do not want to dependency on any RDBMS, so I made a builder as an abstract class, and so I will create a concrete instance for SQL Server 2008 as follows: public class SqlServerConfORMConfigBuilder : ConfORMConfigBuilder    {        public 
SqlServerConfORMConfigBuilder(IEnumerable<string> assemblies)            : base(assemblies)        {        }         public override void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString)        {            dBIntegration.Dialect<MsSql2008Dialect>();            dBIntegration.Driver<SqlClientDriver>();            dBIntegration.KeywordsAutoImport = Hbm2DDLKeyWords.AutoQuote;            dBIntegration.IsolationLevel = IsolationLevel.ReadCommitted;            dBIntegration.ConnectionString = connectionString;            dBIntegration.LogSqlInConsole = true;            dBIntegration.Timeout = 10;            dBIntegration.LogFormatedSql = true;            dBIntegration.HqlToSqlSubstitutions = "true 1, false 0, yes 'Y', no 'N'";        }         protected override HbmMapping GetMapping()        {            var orm = new ObjectRelationalMapper();             orm.Patterns.PoidStrategies.Add(new GuidPoidPattern());             var patternsAppliers = new CoolPatternsAppliersHolder(orm);            //patternsAppliers.Merge(new DatePropertyByNameApplier()).Merge(new MsSQL2008DateTimeApplier());            patternsAppliers.Merge(new ManyToOneColumnNamingApplier());            patternsAppliers.Merge(new OneToManyKeyColumnNamingApplier(orm));             var mapper = new Mapper(orm, patternsAppliers);             var entities = new List<Type>();             DomainDefinition(orm);            Customize(mapper);             entities.AddRange(DomainTypes);             return mapper.CompileMappingFor(entities);        }         private void DomainDefinition(IObjectRelationalMapper orm)        {            orm.TablePerClassHierarchy(new[] { typeof(EntityBase<Guid>) });            orm.TablePerClass(DomainTypes);             orm.OneToOne<News, Poll>();            orm.ManyToOne<Category, News>();             orm.Cascade<Category, News>(Cascade.All);            orm.Cascade<News, Poll>(Cascade.All);            orm.Cascade<User, Poll>(Cascade.All);        }         private static void Customize(Mapper mapper)        {            CustomizeRelations(mapper);            CustomizeTables(mapper);            CustomizeColumns(mapper);        }         private static void CustomizeRelations(Mapper mapper)        {        }         private static void CustomizeTables(Mapper mapper)        {        }         private static void CustomizeColumns(Mapper mapper)        {            mapper.Class<Category>(                cm =>                {                    cm.Property(x => x.Name, m => m.NotNullable(true));                    cm.Property(x => x.CreatedDate, m => m.NotNullable(true));                });             mapper.Class<News>(                cm =>                {                    cm.Property(x => x.Title, m => m.NotNullable(true));                    cm.Property(x => x.ShortDescription, m => m.NotNullable(true));                    cm.Property(x => x.Content, m => m.NotNullable(true));                });             mapper.Class<Poll>(                cm =>                {                    cm.Property(x => x.Value, m => m.NotNullable(true));                    cm.Property(x => x.VoteDate, m => m.NotNullable(true));                    cm.Property(x => x.WhoVote, m => m.NotNullable(true));                });             mapper.Class<User>(                cm =>                {                    cm.Property(x => x.UserName, m => m.NotNullable(true));                    cm.Property(x => x.Password, m => m.NotNullable(true));                });        }    } As you 
can see that we can do so many things in this class, such as custom entity relationships, custom binding on the columns, custom table name, ... Here I only made two so-Appliers for OneToMany and ManyToOne relationships, you can refer to it here public class ManyToOneColumnNamingApplier : IPatternApplier<PropertyPath, IManyToOneMapper>    {        #region IPatternApplier<PropertyPath,IManyToOneMapper> Members         public void Apply(PropertyPath subject, IManyToOneMapper applyTo)        {            applyTo.Column(subject.ToColumnName() + "Id");        }         #endregion         #region IPattern<PropertyPath> Members         public bool Match(PropertyPath subject)        {            return subject != null;        }         #endregion    } public class OneToManyKeyColumnNamingApplier : OneToManyPattern, IPatternApplier<PropertyPath, ICollectionPropertiesMapper>    {        public OneToManyKeyColumnNamingApplier(IDomainInspector domainInspector) : base(domainInspector) { }         #region Implementation of IPattern<PropertyPath>         public bool Match(PropertyPath subject)        {            return Match(subject.LocalMember);        }         #endregion Implementation of IPattern<PropertyPath>         #region Implementation of IPatternApplier<PropertyPath,ICollectionPropertiesMapper>         public void Apply(PropertyPath subject, ICollectionPropertiesMapper applyTo)        {            applyTo.Key(km => km.Column(GetKeyColumnName(subject)));        }         #endregion Implementation of IPatternApplier<PropertyPath,ICollectionPropertiesMapper>         protected virtual string GetKeyColumnName(PropertyPath subject)        {            Type propertyType = subject.LocalMember.GetPropertyOrFieldType();            Type childType = propertyType.DetermineCollectionElementType();            var entity = subject.GetContainerEntity(DomainInspector);            var parentPropertyInChild = childType.GetFirstPropertyOfType(entity);            var baseName = parentPropertyInChild == null ? subject.PreviousPath == null ? entity.Name : entity.Name + subject.PreviousPath : parentPropertyInChild.Name;            return GetKeyColumnName(baseName);        }         protected virtual string GetKeyColumnName(string baseName)        {            return string.Format("{0}Id", baseName);        }    } Everyone also can download the ConfORM source at google code and see example inside it. Next part I will write about multiple database factory. Hope you enjoy about it. happy coding and see you next part.
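    As a rough sketch of how the builder above might be consumed (this usage example is mine, not from the post; the assembly path, connection string and session factory name are placeholders), the IConfigBuilder contract boils down to something like this:

        using NHibernate;

        class Program
        {
            static void Main()
            {
                // Paths and connection string are placeholders for illustration only.
                var assemblies = new[] { @"C:\MyApp\MyApp.Domain.dll" };
                IConfigBuilder builder = new SqlServerConfORMConfigBuilder(assemblies);

                var configuration = builder.BuildConfiguration(
                    "Data Source=.;Initial Catalog=NewsAndPolls;Integrated Security=True",
                    "NewsAndPollsFactory");

                // From here on it is plain NHibernate: build the factory once, cache it,
                // then open sessions from it as needed.
                ISessionFactory sessionFactory = configuration.BuildSessionFactory();
                using (ISession session = sessionFactory.OpenSession())
                {
                    // work with the Category, News, Poll and User entities here
                }
            }
        }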

    Read the article

  • SRs @ Oracle: How do I License Thee?

    - by [email protected]
    With the release of the new Sun Ray product last week comes the advent of a different software licensing model. Where Sun had initially taken the approach of '1 desktop device = one license', we later changed things to be '1 concurrent connection to the server software = one license', and while there were ways to tell how many connections there were at a time, it wasn't the easiest thing to do.  And, when should you measure concurrency?  At your busiest time, of course... but when might that be?  9:00 Monday morning this week might yield a different result than 9:00 Monday morning last week.In the acquisition of this desktop virtualization product suite Oracle has changed things to be, in typical Oracle fashion, simpler.  There are now two choices for customers around licensing: Named User licenses and Per Device licenses.Here's how they work, and some examples:The Rules1) A Sun Ray device, and PC running the Desktop Access Client (DAC), are both considered unique devices.OR, 2) Any user running a session on either a Sun Ray or an DAC is still just one user.So, you have a choice of path to go down.Some Examples:Here are 6 use cases I can think of right now that will help you choose the Oracle server software licensing model that is right for your business:Case 1If I have 100 Sun Rays for 100 users, and 20 of them use DAC at home that is 100 user licenses.If I have 100 Sun Rays for 100 users, and 20 of them use DAC at home that is 120 device licenses.Two cases using the same metrics - different licensing models and therefore different results.Case 2If I have 100 Sun Rays for 200 users, and 20 of them use DAC at home that is 200 user licenses.If I have 100 Sun Rays for 200 users, and 20 of them use DAC at home that is 120 device licenses.Same metrics - very different results.Case 3If I have 100 Sun Rays for 50 users, and 20 of them use DAC at home that is 50 user licenses.If I have 100 Sun Rays for 50 users, and 20 of them use DAC at home that is 120 device licenses.Same metrics - but again - very different results.Based on the way your business operates you should be able to see which of the two licensing models is most advantageous to you.Got questions?  I'll try to help.(Thanks to Brad Lackey for the clarifications!)
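    A quick back-of-the-envelope calculator for comparing the two models with your own numbers, following the rules exactly as stated above (this is only my own sketch, not an official Oracle tool):

        using System;

        class SunRayLicenseEstimate
        {
            // Per Device: every Sun Ray and every PC running the Desktop Access
            // Client (DAC) counts as a unique device.
            static int PerDeviceLicenses(int sunRays, int dacPcs)
            {
                return sunRays + dacPcs;
            }

            // Named User: each person is one license, no matter how many
            // Sun Rays or DAC sessions they use.
            static int NamedUserLicenses(int users)
            {
                return users;
            }

            static void Report(int users, int sunRays, int dacPcs)
            {
                Console.WriteLine("{0} users, {1} Sun Rays, {2} DAC PCs: {3} named-user vs {4} per-device licenses",
                    users, sunRays, dacPcs, NamedUserLicenses(users), PerDeviceLicenses(sunRays, dacPcs));
            }

            static void Main()
            {
                Report(users: 100, sunRays: 100, dacPcs: 20);  // Case 1: 100 vs 120
                Report(users: 200, sunRays: 100, dacPcs: 20);  // Case 2: 200 vs 120
                Report(users:  50, sunRays: 100, dacPcs: 20);  // Case 3:  50 vs 120
            }
        }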

    Read the article

  • ubuntu 12.10 not updating

    - by gunjan parashar
    I have upgraded to Ubuntu 12.10 from Ubuntu 12.04, and since then it is not updating. Software Updater gives the following errors:

    W:Failed to fetch http://archive.canonical.com/ubuntu/dists/precise/Release.gpg Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-updates/Release.gpg Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-backports/Release.gpg Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-security/Release.gpg Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-proposed/Release.gpg Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/restricted/source/Sources Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/main/source/Sources Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/multiverse/source/Sources Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/universe/source/Sources Unable to connect to 10.4.42.15:8080:
    W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-proposed/universe/i18n/Translation-en Unable to connect to 10.4.42.15:8080:
    E:Some index files failed to download. They have been ignored, or old ones used instead.

    Along with this, I am not able to install anything from the Software Center; it just asks to use this source, then keeps querying software sources, and nothing happens after that. Please help me out; 12.10 has become a real problem for me, and forgive my poor English.

    Read the article

  • How common is prototyping as the first stage of development?

    - by EpsilonVector
    I've been taking some software design courses in the past few semesters, and while I see the benefit in a lot of the formalism, I still feel like it doesn't tell me anything about the program itself. You can't tell how the program is going to operate from the Use Case spec, even though it discusses what the program can do, and you can't tell anything about the user experience from the requirements document, even though it can include QA requirements. ...sequence diagrams are as good a description of how the software works as the call stack, in other words- very limited, highly partial view of the overall system, and a class diagram is great for describing how the system is built, but is utterly useless in helping you figure out what the software needs to be. Where in all this formalism is the bottom line- how the program looks, operates, and what experience it gives? Doesn't it make more sense to design off of that? Isn't it better to figure out how the program should work via a prototype and strive to implement it for real? I know that I'm probably suffering from being taught engineering by theoreticians, but I got to ask, do they do this in the industry? How do people figure out what the program actually is, not what it should conform to? Do people prototype a lot? ...or do they mostly use the formal tools like UML and I just didn't get the hang of using them yet?

    Read the article

  • Effective versus efficient code

    - by Todd Williamson
    TL;DR: Quick and dirty code, or "correct" (insert your definition of this term) code? There is often a tension between "efficient" and "effective" in software development. "Efficient" often means code that is "correct" from the point of view of adhering to standards, using widely-accepted patterns/approaches for structures, regardless of project size, budget, etc. "Effective" is not about being "right", but about getting things done. This often results in code that falls outside the bounds of commonly accepted "correct" standards, usage, etc. Usually the people paying for the development effort have dictated ahead of time what it is that they value more. An organization that lives in a technical space will tend towards the efficient end, others will tend towards the effective. Developers often refuse to compromise their favored approach for the other. In my own experience I have found that people with formal education in software development tend towards the Efficient camp. Those that picked up software development more or less as a tool to get things done tend towards the Effective camp. These camps don't get along very well. When managing a team of developers who are not all in one camp it is challenging. In your own experience, which camp do you land in, and do you find yourself having to justify your approach to others? To management? To other developers?

    Read the article

  • Is there a way to install i386 packages w/ it's i386 dependencies?

    - by foh1981
    I'm using Ubuntu 11.10 64-bit and I wish to install the CAD application DraftSight, which as of now only come in a 32-bit .deb file. I have installed this with some success before, but since 11.10 supports multiarch I would like to install the i386 versions of DraftSights dependencies. Ubuntu Software Center cannot handle the file, nor gdebi-gtk ('wrong architecture'). I can use dpkg with --force-architecture but there's A LOT of dependencies which I need to manually install afterward. Is there a way to automatically install these? Or semi-automatically with a script of some sort? (I'm thinking something along the lines of extracting the dependencies and adding :i386 and then feed that to apt-get or something...) Below is the output of dpkg-deb --info of the package in question. Package: dassault-systemes-draftsight Version: 2011.7.1198 Section: applications Priority: extra Architecture: i386 Pre-Depends: libexpat1 (>=2.0.1-4), libglib2.0-0 (>=2.22.3-0), libpcre3 (>=7.8-3), libselinux1 (>=2.0.85-2), zlib1g (>=1:1.2.3.3.dfsg-13), libc6 (>=2.10.1-0), libx11-6 (>=2:1.2.2-1), libxau6 (>=1:1.0.4-2), libxcomposite1 (>=1:0.4.0-4), libxcursor1 (>=1:1.1.9-1build1), libxdamage1 (>=1:1.1.1-4), libxdmcp6 (>=1:1.0.2-3), libxext6 (>=2:1.0.99.1-0), libxfixes3 (>=1:4.0.3-2build1), libxi6 (>=2:1.2.1-2), libxinerama1 (>=2:1.0.3-2), libxrandr2 (>=2:1.3.0-2), libxrender1 (>=1:0.9.4-2), libatk1.0-0 (>=1.28.0-0), libcairo2 (>=1.8.8-2), libdirectfb-extra (>=1.2.7-2), libfontconfig1 (>=2.6.0-1), libfreetype6 (>=2.3.9-5), libgtk2.0-0 (>=2.18.3-1), libpango1.0-0 (>=1.26.0-1), libpixman-1-0 (>=0.14.0-1), libpng12-0 (>=1.2.37-1), libxcb-render-util0 (>=0.3.6-1), libxcb-render0 (>=1.4-1), libxcb1 (>=1.4-1), debconf (>= 1.1) | debconf-2.0 Depends: libcomerr2 (>=1.41.9-1), libdbus-1-3 (>=1.2.16-0), libexpat1 (>=2.0.1-4), libgcc1 (>=1:4.4.1-4), libgcrypt11 (>=1.4.4-2), libglib2.0-0 (>=2.22.3-0), libgpg-error0 (>=1.6-1), libkeyutils1 (>=1.2-10), libpcre3 (>=7.8-3), libuuid1 (>=2.16-1), zlib1g (>=1:1.2.3.3.dfsg-13), libc6 (>=2.10.1-0), libgl1-mesa-glx (>=7.6.0-1), libglu1-mesa (>=7.6.0-1), libice6 (>=2:1.0.5-1), libsm6 (>=2:1.1.0-2), libx11-6 (>=2:1.2.2-1), libxau6 (>=1:1.0.4-2), libxdamage1 (>=1:1.1.1-4), libxdmcp6 (>=1:1.0.2-3), libxext6 (>=2:1.0.99.1-0), libxfixes3 (>=1:4.0.3-2build1), libxrender1 (>=1:0.9.4-2), libxt6 (>=1:1.0.5-3), libxxf86vm1 (>=1:1.0.2-1), libaudio2 (>=1.9.2-1), libavahi-client3 (>=0.6.25-1), libavahi-common3 (>=0.6.25-1), libcups2 (>=1.4.1-5), libdrm2 (>=2.4.14-1), libfontconfig1 (>=2.6.0-1), libgnutls26 (>=2.8.3-2), libgssapi-krb5-2 (>=1.7dfsg~beta3-1), libk5crypto3 (>=1.7dfsg~beta3-1), libkrb5-3 (>=1.7dfsg~beta3-1), libkrb5support0 (>=1.7dfsg~beta3-1), libstdc++6 (>=4.4.1-4), libtasn1-3 (>=2.2-1), libxcb1 (>=1.4-1), sendmail Installed-Size: 284948 Maintainer: Dassault Systemes <[email protected]> Homepage: www.3ds.com Description: With DraftSight, you can easily create professional CAD drawings. Supported file formats are DWT, DXF and DWG.

    Read the article

  • How should I describe the process of learning someone else's code? (In an invoicing situation.)

    - by MattyG
    I have a contract to upgrade some in-house software for a large company. The company has requested multiple feature additions and a few bug fixes. This is my first freelance style job. First, I needed to become familiar with how the application worked - I learnt it as if I was a user. Next, I had to learn how the software worked. I started with broad concepts, and then narrowed down into necessary detail before working on each bug fix and feature. At least at the start of the project, it took me a lot longer to learn the existing code than it did to write the additional features. How can I describe the process of learning the existing code on the invoice? (This part of the company usually does things in-house, so doesn't have much experience dealing with software contractors like me, and I fear they may not understand the overhead of learning someone else's code). I don't want to just tack the learning time onto the actual feature upgrade, because in some cases this would make a 'simple task' look like it took me way too long. I want break the invoice into relevant steps, and communicate that I'm charging for the large overhead of learning someone else's code before being able to add my own to it. Is there a standard way of describing this sort of activity when billing for a job?

    Read the article

  • How successful is GPL in reaching its goals?

    - by StasM
    There are, broadly, two types of FOSS licenses when it relates to commercial usage of the code - let's say the GPL-type and the BSD-type. The first is, broadly, restrictive about commercial usage (by usage I also mean modification and redistribution, as well as creating derived works, etc.) of the code under the license, and the second is much more permissive. As I understand, the idea behind GPL-type licenses is to encourage people to abandon the proprietary software model and instead convert to the FOSS code, and the license is the instrument to entice them to do so - i.e. "you can use this nice software, but only if you agree to come to our camp and play by our rules". What I want to ask is - was this strategy successful so far? I.e. are there any major achievements in the form of some big project going from closed to open because of GPL or some software being developed in the open only because GPL made it so? How big is the impact of this strategy - compared, say, to the world where everybody would have BSD-type licenses or release all open-source code under public domain? Note that I am not asking if FOSS model is successful - this is beyond question. What I am asking is if the specific way of enticing people to convert from proprietary to FOSS used by GPL-type and not used by BSD-type licenses was successful. I also don't ask about the merits of GPL itself as the license - just about the fact of its effectiveness.

    Read the article

  • How do I remove a malformed line from my sources.list?

    - by eminencejae
    I have uninstalled and reinstalled the Ubuntu Software Center as per information I found in a similar thread, and I got the same response about line 91. I just tried to upload a screenshot, but since I'm new it won't allow me to, and I can't figure out how to cut and paste anything, so I have to hand-type what the error screen says. Both when I attempt to open the Software Center (and nothing happens) and when I enter commands into the terminal to uninstall or reinstall, I get the same error: COULD NOT INITIALIZE THE PACKAGE INFORMATION. An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E: Malformed line 91 in source list /etc/apt/sources.list (dist parse) E: The list of sources could not be read. E: The package lists or status file could not be parsed or opened.' How do I report bugs, and what can be done about this? Everything I have found so far leads me back to the same line error message, and I don't know how to get to line 91 in the sources list to tell you what it says. Sorry, I'm really new to this; what I need is to find out how to get there and fix what it says. I would really like NOT to have to repartition my hard drive and start from scratch, so I'm looking forward to getting this problem solved. I need to be able to install new software.

    Read the article

  • ASP.NET plugin architecture for SaaS?

    - by chopps
    Hey everyone, I've been doing a bit of research and have a few questions about using a plugin architecture for ASP.NET. I have used it before for a single application and it works well, but I was wondering whether it is technically possible to use this architecture for a SaaS offering: one code base, but custom plugins for each customer, using some form of custom folders to organize the plugins? The idea would be to use one main code base instead of having to maintain code for N number of clients. Ideas? Thoughts? Experience either way?
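    It is technically possible, and one common approach is a per-customer plugin folder whose assemblies are discovered through reflection. The sketch below is only one way to do it; the IPlugin interface, the folder layout and the tenant key are all assumptions for illustration rather than a prescription.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        // Contract every customer-specific plugin implements.
        public interface IPlugin
        {
            string Name { get; }
            void Execute();
        }

        public static class PluginLoader
        {
            // e.g. ~/Plugins/acme-corp/*.dll - one folder per tenant, one shared core code base.
            public static IList<IPlugin> LoadForTenant(string pluginRoot, string tenantKey)
            {
                var tenantFolder = Path.Combine(pluginRoot, tenantKey);
                if (!Directory.Exists(tenantFolder))
                    return new List<IPlugin>();

                return Directory.GetFiles(tenantFolder, "*.dll")
                    .Select(Assembly.LoadFrom)
                    .SelectMany(a => a.GetTypes())
                    .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface)
                    .Select(t => (IPlugin)Activator.CreateInstance(t))
                    .ToList();
            }
        }

    In Global.asax or on first request you would resolve the tenant (from the host name, the login, and so on), call PluginLoader.LoadForTenant(Server.MapPath("~/Plugins"), tenantKey), and cache the result so each assembly is only loaded once per application domain.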

    Read the article

  • SaaS Architecture Question from Newbie

    - by user226767
    I have developed a number of departmental client-server applications, and am now ready to begin working on moving one of these applications to a SaaS model. I have done some basic web development, but I'm a newbie when it comes to SaaS architectures. One of the first questions that comes to mind as I try to design the architecture is the question of single vs. multi tenancy. The pros and cons of each vary significantly depending on the type of application and scale required, so I'd like to describe my application and scale needs below, and hope others can comment on how I should get started with the architecture. The client-server application currently consists of a Firebird database and a Windows application. The database contains about 20 tables containing a few thousand records in 4 primary tables, and a few hundred records in various lookup and related tables. Although the number of records is small, the size can get large, as the database can contain large BLOBS. Each customer sets up their own database and has a handful of users within the organization connected to it. When I update the db schema, a new windows application is released, and it checks the db schema and then applies the updates as needed. For the SaaS application, I am designing for 100's (not 1000's or millions) of new customers per year. My first thought was to go with a multi tenancy model to make updates easy (shut down apply the updates to one database, and then start up). On the other hand, a single tenancy model would provide a means to roll updates out to a group of customers at a time, and spread the risk of data corruption - i.e. if something goes wrong with a database, it will impact one customer instead of all customers. With this idea, I was thinking of having a single web front-end which would connect to a single customer database upon login. Thus, when a new customer creates an account, a new database would be created (each customer would have their own db with multiple users as needed for the customer). In this model, a db update would require either a process to go through each db to apply schema changes, or a trigger upon logging in to initiate a schema update similar to the client-server model currently in use. Can anyone point me to information for similar applications which have been ported from client-server to SaaS? Or provide any pointers to consider? Basically I'm looking for architecture examples of taking a departmental application and making it available as a self service website for multiple customers. Thanks for any suggestions, resources, etc.
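    For the "one web front-end, one database per customer" layout described above, the heart of the design is resolving the right connection string at login and keeping it with the authenticated session. A rough sketch follows; the class and member names are simply illustrative assumptions, not a recommended framework.

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        // Maps an authenticated customer account to its own database.
        public class TenantCatalog
        {
            private readonly IDictionary<string, string> _connectionStrings =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

            public void Register(string customerId, string connectionString)
            {
                _connectionStrings[customerId] = connectionString;
            }

            // Called once at login; the result is kept with the user's session state.
            public string ResolveConnectionString(string customerId)
            {
                string cs;
                if (!_connectionStrings.TryGetValue(customerId, out cs))
                    throw new InvalidOperationException("Unknown customer: " + customerId);
                return cs;
            }

            public SqlConnection OpenConnectionFor(string customerId)
            {
                var connection = new SqlConnection(ResolveConnectionString(customerId));
                connection.Open();
                return connection;
            }
        }

    Provisioning a new customer then amounts to creating a database from the current schema scripts and registering its connection string; a schema update becomes a loop over the catalog, which is also where the staged, group-at-a-time rollout mentioned above fits in.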

    Read the article

  • What languages should a microISV use to write commercial software?

    - by Wal
    I've been writing software in Java for many years now, but it was always for internal applications that would be deployed to a server. I'd like to get into writing desktop applications now but I don't know where to start. I've written a few Java/Swing applications but again they were for internal use. My understanding is that Java and other semi-compiled and interpreted languages are too easy to reverse engineer, making them unsuitable for commercial software. I am aware that there are compilers for Java and some other interpreted language, but I've also heard that they are pricey and/or unreliable. Assuming I start a microISV and wish to develop and sell applications to a broad audience, what's my best bet? I would prefer something that can be written close to once, and compiled for different operating systems but I am not opposed to .NET and a Windows-only audience if other languages would compromise the experience (installation ease & user experience) in Windows. My only issue there is that I don't have a large starting budget and paying out the wazoo for the required development tools is not really in the cards.

    Read the article

  • Have any software engineers gotten math degrees later in their careers?

    - by vin
    I have a bachelors in computer science and worked the last 12 years as a software engineer. I'm bored with doing general development work, so I want to specialize. I'm thinking about getting a master's degree in math so I can build math models and write algorithms to implement them. I'm unsure what type of work I'd do (financial, gaming, graphics, science, research, etc) but I'm open minded. I would need to refresh my undergrad math skills (which are old and faded), but I loved algebra and calculus. I've been working with couple statisticians so I've been finding myself more interested in statistics. Since I'm a parent supporting a household, I would have to continue working while studying. Have any software engineers taken this route? (Specifically, going from BS in comp sci to MS in math.) If so, what advice do you have for coursework, financing, and getting a job that combines programming with advanced math? How abundant are these kinds of jobs? I'm not sure where one starts. Also, how do you hop from a BS to an MS in a different subject?

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >