Search Results

Search found 1793 results on 72 pages for 'effort'.

Page 31 of 72

  • Bridging the gap between developers and testers with VS 2010

    - by Etienne Tremblay
    Hey everyone, I know it’s been an eternity since I blogged, but I have so much to do that I unfortunately need to prioritize. Vincent Grondin and I did a 7-hour presentation on the new developer and tester tools available in the VS 2010 suite. It was a blast. We did it in front of an audience of around 120 and it was taped. We did it as a play and really didn’t look at the crowd at all; we were training each other on the technology. It is now available for anyone who would like to watch it at this location: http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx What we covered in the full-day event was:

    Migration to TFS 2010 (10h00)
    1- Migration of VSS to TFS (20 min.)
    2- Automating the Build (Something you can't do with VSS) (20 min.)
    3- User story (Real application context for this presentation) (20 min.)
    10h00 Pause

    Manual Tests by Dev (11h30)
    4- Adding a tester to the team (Intro to MTM) (20 min.)
    5- Define tests (what is a white bug) (20 min.)
    6- Fix the bug and show IntelliTrace and play back the test (20 min.)
    12h15 Lunch

    Manual testing for maintenance (13h30)
    7- Implement new feature (web service) and identify bug with MTM, branch for a production fix, and also add a new build script (20 min.)
    8- Fix bug in production branch, play back tests, merge the change in main branch (20 min.)

    Manual testing with the lab manager (14h30)
    9- Intro to Lab Manager and environment (20 min.)
    10- Change build script to deploy to lab and test with web service in lab environment (20 min.)
    15h15 Pause

    Automate UI test with Coded UI (15h30)
    11- Reducing the effort of testing the UI (20 min.)
    12- Repeating testing to make sure the application is working properly (20 min.)
    13- Automate Coded UI with the Lab environment (20 min.)
    16h30 Conclusions

    As you can see, lots of stuff! Enjoy the show and let us know how you like it. Cheers, ET Technorati Tags: VS 2010, Testing Tools, ALM, Training

  • Finding the Right Solution to Source and Manage Your Contractors

    - by mark.rosenberg(at)oracle.com
    Many of our PeopleSoft Enterprise applications customers operate in service-based industries, and all of our customers have at least some internal service units, such as IT, marketing, and facilities. Employing the services of contractors, often referred to as "contingent labor," to deliver either or both internal and external services is common practice. As we've transitioned from an industrial age to a knowledge age, talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services. Talent comes from many sources, and the use of contingent workers (contractors, consultants, temporary and part-time staff) has increased significantly in the past decade and is expected to reach 40 percent in the next decade. Managing the total pool of talent in a seamless, integrated fashion not only saves organizations money and increases efficiency, but creates a better place for workers of all kinds to work. Although the term "contingent labor" is frequently used to describe both contractors and employees who have flexible schedules and relationships with an organization, the remainder of this discussion focuses on contractors. The term "contingent labor" is used interchangeably with "contractor."

    Recognizing the importance of contingent labor, our PeopleSoft customers often ask our team, "What Oracle vendor management system (VMS) applications should I evaluate for managing contractors?" In response, I thought it would be useful to describe and compare the three most common Oracle-based options available to our customers. They are:

    1. The enterprise licensed software model, in which you implement and utilize the PeopleSoft Services Procurement (sPro) application and potentially other PeopleSoft applications;
    2. The software-as-a-service model, in which you gain access to a derivative of PeopleSoft sPro from an Oracle Business Process Outsourcing partner; and
    3. The managed service provider (MSP) model, in which staffing industry professionals utilize either your enterprise licensed software or the software-as-a-service application to administer your contingent labor program.

    At this point, you may be asking yourself, "Why three options?" The answer is that since there is no "one size fits all" in terms of talent, there is also no "one size fits all" for effectively sourcing and managing contingent workers. Various factors influence how an organization thinks about and relates to its contractors, and each of the three Oracle-based options addresses an organization's needs and preferences differently. For the purposes of this discussion, I will describe the options with respect to (A) pricing and software provisioning models; (B) control and flexibility; (C) level of engagement with contractors; and (D) approach to sourcing, employment law, and financial settlement.

    Option 1: Enterprise Licensed Software

    In this model, you purchase from Oracle the license and support for the applications you need. Typically, you license PeopleSoft sPro as your VMS tool for sourcing, monitoring, and paying your contract labor. In conjunction with sPro, you can also utilize PeopleSoft Human Capital Management (HCM) applications (if you do not already) to configure more advanced business processes for recruiting, training, and tracking your contractors.

    Many customers choose this enterprise licensed software model because of the functionality and natural integration of the PeopleSoft applications and because the cost for the PeopleSoft software is explicit. There is no fee per transaction to source each contractor under this model. Our customers that employ contractors to augment their permanent staff on billable client engagements often find this model appealing because there are no fees to affect their profit margins. With this model, you decide whether to have your own IT organization run the software or have the software hosted and managed by either Oracle or another application services provider. Your organization, perhaps with the assistance of consultants, configures, deploys, and operates the software for managing your contingent workforce. This model offers you the highest level of control and flexibility, since your organization can configure the contractor process flow exactly to your business and security requirements and can extend the functionality with PeopleTools. This option has proven very valuable and applicable to our customers engaged in government contracting because their contingent labor management practices are subject to complex standards and regulations. Customers find a great deal of value in the application functionality and configurability the enterprise licensed software offers for managing contingent labor. Some examples of that functionality are:

    - The ability to create a tiered network of preferred suppliers, including competencies, pricing agreements, and elaborate candidate management capabilities.
    - Configurable alerts and online collaboration for bid, resource requisition, timesheet, and deliverable entry, routing, and approval for both resource- and deliverable-based services.
    - The ability to manage contractors with the same PeopleSoft HCM and Projects applications that are used to manage the permanent workforce.

    Because it allows you to utilize much of the same PeopleSoft HCM and Projects application functionality for contractors that you use for permanent employees, the enterprise licensed software model supports the deepest level of engagement with the contingent workforce. For example, you can: fill job openings with contingent labor; guide contingent workers through essential safety and compliance training with PeopleSoft Enterprise Learning Management; and source contingent workers directly to project-based assignments in PeopleSoft Resource Management and PeopleSoft Program Management. This option enables contingent workers to collaborate closely with your permanent staff on complex, knowledge-based efforts - R&D projects, billable client contracts, architecture and engineering projects spanning multiple years, and so on. With the enterprise licensed software model, your organization maintains responsibility for the sourcing, onboarding (including adherence to employment laws), and financial settlement processes. This means your organization maintains on staff or hires the expertise in these domains to utilize the software and interact with suppliers and contractors.

    Option 2: Software as a Service (SaaS)

    The effort involved in setting up and operating VMS software to handle a contingent workforce leads many organizations to seek a system that can be activated and configured within a few days and for which they can pay based on usage. Oracle's Business Process Outsourcing partner, Provade, Inc., provides exactly this option to our customers.

    Provade offers its vendor management software as a service over the Internet and usually charges your organization a fee that is a percentage of your total contingent labor spending processed through the Provade software. (Percentage of spend is the predominant fee model, although not the only one.) In addition to lower implementation costs, the effort of configuring and maintaining the software falls largely upon Provade, not your organization. This can be very appealing to IT organizations that are thinly stretched supporting other important information technology initiatives. Built upon PeopleSoft sPro, the Provade solution is tailored for simple and quick deployment and administration. Provade has added capabilities to clone users rapidly and has simplified business documents, like work orders and change orders, to facilitate enterprise-wide, self-service adoption with little to no training. Provade also leverages Oracle Business Intelligence Enterprise Edition (OBIEE) to provide integrated spend analytics and dashboards. Although pure customization is more limited than with the enterprise licensed software model, Provade offers a very effective option for organizations that are regularly on-boarding and off-boarding high volumes of contingent staff hired to perform discrete support tasks (for example, order fulfillment during the holiday season, hourly clerical work, desktop technology repairs, and so on) or project tasks. The software is very configurable and at the same time very intuitive to even the most computer-phobic users. The level of contingent worker engagement your organization can achieve with the Provade option is generally the same as with the enterprise licensed software model, since Provade can automatically establish contingent labor resources in your PeopleSoft applications. Provade has pre-built integrations to Oracle's PeopleSoft and the Oracle E-Business Suite procurement, projects, payables, and HCM applications, so that you can evaluate, train, assign, and track contingent workers like your permanent employees. Similar to the enterprise licensed software model, your organization is responsible for the contingent worker sourcing, administration, and financial settlement processes. This means your organization needs to maintain the staff expertise in these domains.

    Option 3: Managed Services Provider (MSP)

    Whether you are using the enterprise licensed model or the SaaS model, you may want to engage the services of sourcing, employment, payroll, and financial settlement professionals to administer your contingent workforce program. Firms that offer this expertise are often referred to as "MSPs," and they are typically staffing companies that also offer permanent and temporary hiring services. (In fact, many of the major MSPs are Oracle applications customers themselves, and they utilize the PeopleSoft Solution for the Staffing Industry to run their own business operations.) Usually, MSPs place their staff on-site at your facilities, and they can utilize either your enterprise licensed PeopleSoft sPro application or the Provade VMS SaaS software to administer the network of suppliers providing contingent workers. When you utilize an MSP, there is a separate fee for the MSP's service that is typically funded by the participating suppliers of the contingent labor. Also in this model, the suppliers of the contingent labor (not the MSP) usually pay the contingent labor force.

    With an MSP, you are intentionally turning over business process control for the advantages associated with having someone else manage the processes. The software option you choose will to a certain extent affect your process flexibility; however, the MSPs are often able to adapt their processes to the unique demands of your business. When you engage an MSP, you will want to give some thought to the level of engagement and "partnering" you need with your contingent workforce. Because the MSP acts as an intermediary, it can be very valuable in handling high-volume, routine contracting for which there is a relatively low need for "partnering" with the contingent workforce. However, if your organization (or part of your organization) engages contingent workers for high-profile client projects that require diplomacy, intensive amounts of interaction, and personal trust, introducing an MSP into the process may prove less effective than handling the process with your own staff. In fact, in many organizations, it is common to enlist an MSP to handle contractors working on internal projects and to have permanent employees handle the contractor relationships that affect the portion of the services portfolio focused on customer-facing, billable projects. One of the key advantages of enlisting an MSP is that you do not have to maintain the expertise required for orchestrating the sourcing, hiring, and paying of contingent workers. These are the domain of the MSPs. If your own staff members are not prepared to manage the essential "overhead" processes associated with contingent labor, working with an MSP can make solid business sense. Proper administration of a contingent workforce can make the difference between project success and failure, operating profit and loss, and legal compliance and fines.

    Concluding Thoughts

    There is little doubt that thoughtfully and purposefully constructing a service delivery strategy that leverages the strengths of contingent workers can lead to better projects, deliverables, and business results. What requires a bit more thinking is determining the platform (or platforms) that will enable each part of your organization to best deliver on its mission.

  • Important additions to the Silverlight MaskedTextBox

    RadMaskedTextBox is one of the major controls in the Telerik Silverlight suite. It enables you to filter user input and makes working with data much easier for the end user. That is why in the past quarter (Q1) we put in a great effort, gathered all the scenarios and user reports we had so far, and made this control as stable as possible. Now, post Q1, we have added some small but important new features to the control. Here they are: an option to get the changed value when the focus of the control is lost. Before this change, each keystroke caused the ValueChanged and ValueChanging events to be raised. Now you have the option to get these events on lost focus. This also gives you the option to enable regular-expression validation of the new value in the ValueChanging event (see the sketch below). So if you want a complex ...
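
    As an illustration of the lost-focus validation scenario described above, here is a minimal C# sketch. Only the control's ValueChanging/ValueChanged event names come from the post; the event-args type and its NewValue/Cancel members below are stand-ins, not the documented Telerik signature.

        using System.Text.RegularExpressions;

        // Stand-in event args for the sketch; the real Telerik type differs.
        public class ValueChangingEventArgs : System.ComponentModel.CancelEventArgs
        {
            public string NewValue { get; set; }
        }

        public class MaskedInputValidator
        {
            private static readonly Regex ZipPattern = new Regex(@"^\d{5}$");

            // With the new option enabled, this would fire on lost focus rather
            // than on every keystroke, so one regex check covers the whole edit.
            public void OnValueChanging(object sender, ValueChangingEventArgs e)
            {
                if (!ZipPattern.IsMatch(e.NewValue ?? string.Empty))
                {
                    e.Cancel = true; // reject the pending value, keep the old one
                }
            }
        }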

  • What to use for simple cross-platform games instead of Flash?

    - by jmh_gr
    In short, for simple games: Is Flash still a good option for browser-based PC clients? It still has 90%+ penetration. What is a good alternative for mobile devices? Is HTML5 + JavaScript the choice for mobile? Or does one have to learn a new native language for each target platform (Android, Apple, Windows Phone)? If you desire further background: there are more blogs about the official demise of mobile Flash than I can count, along with endless useless and vitriolic comments. I'm actually trying to do something practical: build simple games that can be served across multiple platforms. Several months ago I plopped down $1100 for CS5.5 Web and am wading into Flash. Bummer. My question to people who actually develop simple games and apps: what platform should I use instead? Is Flash still a sensible platform for web-served PC users? For example, let's say I build a simple arcade game that I would like to serve as an app to mobile users and as a browser-based game to PC users. Should I still invest the time and effort to learn and develop in Flash for the PC users, while building a parallel code set in some other language for mobile users? My games are simple enough that it would be annoying but not inconceivable to maintain parallel code sets.

  • Announcing: Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Scott McNeil
    New Self-Service Driven Provisioning of Pluggable Databases

    Today Oracle announced new capabilities that support managing the full lifecycle of pluggable database as a service in Oracle Enterprise Manager 12c Release 3 (12.1.0.3). This latest release builds on the existing capabilities to provide advanced automation for deploying database as a service using the Oracle Database 12c Multitenant option. It takes this one step further by offering pluggable database as a service through the Oracle Enterprise Manager 12c self-service portal, providing customers with fast provisioning of database cloud services with minimal time and effort. This is a significant addition to Oracle Enterprise Manager 12c’s existing portfolio of cloud services, which includes infrastructure as a service, database as a service, testing as a service, and Java platform as a service. The solution provides a self-service mechanism to provision pluggable databases, allowing users to request and access database(s) on demand. The self-service operations are also enabled through REST APIs, allowing customers to integrate with third-party automation systems or their custom enterprise portals (a hedged sketch follows below).

    Benefits:
    - Self-service provisioning allows rapid access to pluggable database as a service for hosting or certifying applications on Oracle Database 12c
    - Self-service driven migration to pluggable database as a service in order to migrate a pre-Oracle Database 12c database to a pluggable database as a service model and test the consolidation strategy
    - Single service catalog for all approved pluggable database as a service configurations, which helps customers achieve standardization while catering to all applications and users in the enterprise
    - Resource guarantee via Database Resource Manager (and IORM on Oracle Exadata) that enables deployment of mixed workloads in a shared environment
    - Quota, role-based access, and policy-based management that enforces governance and reduces administrative overhead
    - Chargeback or showback, which improves metering and accountability for services consumed by each pluggable database
    - Comprehensive REST APIs that support integration with ticketing or change management systems, and/or with other self-service portals
    - Minimal administrative and maintenance overhead through self-managing automation that allows for intelligent placement of pluggable databases
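
    As a rough illustration of the kind of portal integration those REST APIs enable, here is a hedged C# sketch that submits a self-service database request from a custom portal. The host, resource path, payload fields, and authentication are placeholders, not the documented Enterprise Manager API.

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        // Illustrative only: the endpoint and JSON shape are hypothetical.
        public static class PdbSelfServiceClient
        {
            public static async Task RequestPluggableDatabaseAsync()
            {
                using (var client = new HttpClient())
                {
                    client.BaseAddress = new Uri("https://em.example.com/");
                    var payload = "{ \"zone\": \"dev\", \"serviceTemplate\": \"small_pdb\" }";
                    var response = await client.PostAsync(
                        "em/cloud/dbaas/requests", // placeholder path
                        new StringContent(payload, Encoding.UTF8, "application/json"));
                    Console.WriteLine("Request status: " + (int)response.StatusCode);
                }
            }
        }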

  • 2012 Oracle Fusion Middleware Innovation Awards for Oracle Exalogic

    - by Sanjeev Sharma
    Companies from around the world were honored for their innovative solutions using Oracle Fusion Middleware. This year’s 27 award winners, representing 11 countries and a wide span of industries, wowed the judges with a range of projects across eight product categories. Four awards were given out to customers who demonstrated innovative application of Oracle Exalogic for their mission-critical applications. Below is an overview of the four businesses that won the Oracle Fusion Middleware Innovation Award for Oracle Exalogic this year.

    Company: Netshoes
    About: Leading online retailer of sporting goods in Latin America.
    Challenges: Rapid business growth resulted in frequent outages and poor response time of the online storefront; a conventional ad-hoc approach to horizontal scaling resulted in high CAPEX and OPEX; poor performance and unavailability of the online storefront resulted in revenue loss from purchase abandonment.
    Solution: Consolidated ATG Commerce and Oracle WebLogic running on Oracle Exalogic.
    Business Impact: Reduced abandonment rates, resulting in a two-digit increase in online conversion rates translating directly into revenue uplift.

    Company: Claro
    About: Leading communications services provider in Latin America.
    Challenges: Support business growth over the next 3-5 years while maximizing re-use of existing middleware and application investments with minimal effort and risk.
    Solution: Consolidated Oracle Fusion Middleware components (Oracle WebLogic, Oracle SOA Suite, Oracle Tuxedo) and Java applications onto Oracle Exalogic and Oracle Exadata.
    Business Impact: Improved partner SLAs 7x while improving throughput 5x and response time 35x for Java applications.

    Company: UL
    About: Leading safety testing and certification organization in the world.
    Challenges: Transition from being a non-profit to a profit-oriented enterprise and grow from $1B to $5B in annual revenues in the next 5 years; undertake a massive business transformation by aligning change strategy with execution.
    Solution: Consolidated Oracle Applications (E-Business Suite, Siebel, BI, Hyperion) and Oracle Fusion Middleware (AIA, SOA Suite) on Oracle Exalogic and Oracle Exadata.
    Business Impact: Reduced financial and operating risk in re-architecting IT services to support new business capabilities supporting 87,000 manufacturers.

    Company: Ingersoll Rand
    About: Leading manufacturer of industrial, climate, residential and security solutions.
    Challenges: Business continuity risks due to complexity in enforcing consistent operational and financial controls; reactive business decisions reduced ability to offer differentiation and compete.
    Solution: Consolidated Oracle E-Business Suite on Oracle Exalogic and Oracle Exadata.
    Business Impact: Service differentiation with faster order provisioning and a shorter lead-to-cash cycle, translating into higher customer satisfaction and quicker cash conversion.

    Check out the winners of the Oracle Fusion Middleware Innovation Awards in other categories here.

  • Security vulnerability and NDAs [closed]

    - by Chris
    I want to propose a situation and gain insight from the community's thoughts. A customer, call them Customer X, has a contract with a vendor, Vendor Y, to provide an application and services. Customer X discovers a serious authentication vulnerability in Vendor Y's software. Vendor Y and Customer X have a discussion. Vendor Y acknowledges/confirms the flaw. Vendor Y confirms they will put in the effort to fix it. Customer X requests that Vendor Y inform all customers impacted by this. The vendor agrees. Fast forward 2 months, and the flaw has not been fixed. Patches were applied to mitigate it, but the flaw still exists. However, no customers were informed of the issue. At this point Customer X contacts Vendor Y to determine the status and understand why customers were not informed. The vendor nicely reminds the customer they are under an NDA and are still working on the issue. A few questions/discussion pieces come out of this. By discussing a software flaw with a vendor, does this imply you have agreed to any type of NDA disclosure? Additionally, what rights does Customer X have to inform other customers of this vulnerability if the vendor does not appear willing to comply? I (the OP) am under the impression that when this situation occurs, you are supposed to notify the vendor of the issue, provide them with ample time to respond, and if there is no response you are able to do what you wish with the information. I am thinking back to the MIT/subway incident where they contacted the transit authorities, the transit authorities didn't respond in a timely fashion, so the students disclosed the information publicly on their own. A few things to note about this: I am not the customer in the above situation; also, let's assume for purposes of keeping the discussion in line that Customer X has no intention of disclosing the information; they are merely concerned and interested in making sure other customers are aware until it is fixed so they do not experience a major security breach. (More information can be supplied if needed to add context to the question.)

  • Raspberry Pi Mod Offers One-Button Audiobook Playback

    - by Jason Fitzpatrick
    How do you design an audiobook player for an elderly book lover who doesn’t want to wrestle with new technology? Simple, with a single-button interface, is a great place to start. This clever and thoughtful build comes to us courtesy of tinkerer Michael Clemens. His wife’s grandmother, in her 90s, is visually impaired but still loves to take in books via audiobooks. In an effort to make modern MP3 audiobooks accessible to her, Michael built a dedicated audiobook reader based on a Raspberry Pi and programmed it to use a single button. The system boots, loads the audiobook it finds on the attached USB drive, and loads up its track position from memory. Press the button to resume play or, for a refresher, hold the button for four seconds to start the track over (sketched below). While you may not be in the market for a one-button audiobook player for an elderly relative, the same simple design could be easily adapted, via new scripts, to another function. Hit up the link below to read more about the build. The One Button Audiobook Player [via Hack A Day]
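
    The build itself runs scripts on the Raspberry Pi; purely as an illustration of the one-button protocol described above, here is a sketch in C#. The button-reading delegate and the player methods are hypothetical stand-ins, not code from the project.

        using System;
        using System.Diagnostics;
        using System.Threading;

        public class OneButtonPlayer
        {
            // Called when the button goes down; isButtonDown is a stand-in
            // for polling the GPIO pin.
            public void OnButtonDown(Func<bool> isButtonDown)
            {
                var held = Stopwatch.StartNew();
                while (isButtonDown() && held.Elapsed < TimeSpan.FromSeconds(4))
                {
                    Thread.Sleep(50); // poll until release or the 4-second mark
                }

                if (held.Elapsed >= TimeSpan.FromSeconds(4))
                    RestartTrack();   // long hold: start the track over
                else
                    ResumePlayback(); // short press: resume from saved position
            }

            private void RestartTrack() { /* seek to start and play */ }
            private void ResumePlayback() { /* play from persisted position */ }
        }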

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    As a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: “Name some benefits of object-oriented development.” Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: “it helps ease code reuse.” Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately can cause code to be reusable or not, regardless of the language or methodology. But it did get me thinking… we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It’s not that I don’t like reuse – it’s just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don’t get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don’t have the proper framework in place for effective reuse that includes standardized versioning, building, releasing, and documenting the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let’s look at an extension method. There are many times where I want to kick off a thread to handle a task, then when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.
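
    (Before walking through those reasons, a quick usage sketch of the extension method; the worker body below is a stand-in.)

        using System;
        using System.Threading;

        public static class JoinOrAbortDemo
        {
            public static void Main()
            {
                // Give the worker five seconds to finish, then abort it.
                var worker = new Thread(() => Thread.Sleep(10000)); // stand-in work
                worker.Start();

                bool finished = worker.JoinOrAbort(TimeSpan.FromSeconds(5));
                Console.WriteLine(finished ? "Joined cleanly" : "Timed out and aborted");
            }
        }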
    Some of them are standard OO principles, and some are kind-of home-grown litmus tests:

    - Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) – This class is inherently closed to change.
    - Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    - All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That’s not to say they must use every method, but they shouldn’t be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as “ready-to-go”, already unit tested (important!), and available through a standard release cycle (very important!).

    Okay, all seems good there. Now let's look at an entity and DAO. I don’t know about you all, but there have been times I’ve been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it’s near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        // ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                // ...
            }
        }

    Now to be fair, I’m not saying there doesn’t exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let’s examine those same tests:

    - Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of the account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself, which also is largely dependent on the business definition of an account, which can be very inherently unstable.
    - Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition may change, you’d need to modify this class.
    - Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate as needed.

    It’s no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc.).

    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn’t “smell” right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple, “Reuse is hard...” And that’s when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn’t.

    Think about the times you’ve worked in a company where in the design session people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, “yes, this is the best design, I can extend it easily!” only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult and many times ends up being a futile exercise and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc.).
    - Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex items in a certain way. Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop’s method of encryption).

    And what are less reusable?

    - Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, so tend to be an unstable dependency. May be reused across applications but also very volatile.
    - Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what’s the big lesson? Reuse is hard. In fact it’s damn hard. And much of the time I’m not convinced we should focus too hard on it. If you’re designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don’t waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

  • How can we make agile enjoyable for developers that like to personally, independently own large chunks from start to finish

    - by Kris
    We’re roughly midway through our transition from waterfall to agile using Scrum; we’ve changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn’t suit everyone. There are a handful of developers that are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level. The basic issue is this: some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction and shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this to be a positive change; someone that loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being open to change. Certainly we’ve seen a few people that don’t like change, but in the cases I’m concerned about, the individuals are good performers, genuinely open to change, they make an effort, they see how the rest of the team is changing and they want to fit in. It’s not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don’t find joy in work like they used to. I’m sure we can’t be the only place that has bumped up against this. How have others approached it? If you’re a developer that is motivated by personally owning a big chunk of work from end to end, and you’ve adjusted to a different way of working, what did it for you?

  • Data transfer between "main" site and secured virtual subsite

    - by Emma Burrows
    I am currently working on a C# ASP.NET 3.5 website I wrote some years ago, which consists of a "main" public site and a sub-site which is our customer management application, using forms-based authentication. The sub-site is set up as a virtual folder in IIS, and though it's a subfolder of "main", it functions as a separate web app which handles CRUD access to our customer database and is only accessible by our staff. The main site currently includes a form for new leads to fill in, which generates an email to our sales staff so they can contact them and convince them to become customers. If that process is successful, the staff manually enter the information from the email into the database. Not surprisingly, I now have a new requirement to feed the data from the new lead form directly into the database, so staff can just check a box, for instance, to turn the lead into a customer. My question therefore is how to go about doing this? Possible options I've thought of:

    1. Move the new lead form into the customer database subsite (with authentication turned off).
    2. Add database handling code to the main site. (No, not seriously considering this duplication of effort! :)
    3. Design some mechanism (via REST?) so a webpage outside the customer database subsite can feed data into the customer database (see the sketch after this list).

    I'd welcome some suggestions on how to organise the code for this situation, preferably with extensibility in mind, and particularly if there are any options I haven't thought of. Thanks in advance.
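
    For the third option, here is a minimal sketch of an endpoint the subsite could expose for the main site to post lead data to. It assumes forms authentication is relaxed for this one path; the handler name, form fields, and LeadRepository helper are illustrative, not part of the original site.

        using System.Web;

        // Illustrative ASP.NET 3.5 handler; the main site posts the lead
        // form fields here instead of only emailing sales.
        public class NewLeadHandler : IHttpHandler
        {
            public bool IsReusable { get { return false; } }

            public void ProcessRequest(HttpContext context)
            {
                string name  = context.Request.Form["name"];
                string email = context.Request.Form["email"];

                // Validate input and authenticate the caller in a real deployment.
                LeadRepository.SaveLead(name, email);

                context.Response.StatusCode = 201; // created
            }
        }

        // Stub for illustration; the real implementation would write to
        // the customer database.
        public static class LeadRepository
        {
            public static void SaveLead(string name, string email) { }
        }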

  • Jumpstart Fusion Middleware projects with Oracle User Productivity Kit

    - by Dain C. Hansen
    Missed our webinar on how Oracle UPK can reduce your project timeline by more than 10%? View the playback and discover how to successfully build, deploy, and manage custom applications built with Fusion Middleware using Oracle User Productivity Kit! Oracle UPK helps you develop standards and processes and design the right solution for users of Oracle SOA Suite, WebCenter, Web 2.0, and Business Process Management tools. By using Oracle UPK, organizations can reduce implementation costs, increase user adoption, and shorten time to deployment of custom applications built with Fusion Middleware. View this webcast and learn how Oracle UPK:

    - Reduces standardization effort costs by 75%
    - Drives standardization and adoption of ITIL processes
    - Brings products to market faster with rapid custom application development
    - Increases user adoption and productivity rate

    For more information on Oracle UPK, visit the resource center.

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask you people in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F#, or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because from my understanding:

    One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even actually possible to express the specs in runnable form in a separate way that adds value?

    The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis. Granted, type-safe functional code is a good tool for coding extremely close to what your static analyzer understands. However, a simple mistake like using x instead of y (both being coordinates) in your code cannot be covered (a sketch of exactly this kind of bug follows below). Such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort.

    Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering this code must both be changed. This overhead of course is about constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't compared to the benefits, but given how statically typed functional programming covers a lot of the ground unit tests are intended for, it feels like it's a constant overhead one can simply reduce without penalty.

    From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: when you use such a programming style, to what extent do you use unit tests and why (what quality is it you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer and hence needing no unit test coverage?
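
    To make the x-versus-y point concrete, here is a minimal sketch (in C#, since the bug is language-agnostic; all names are illustrative). The compiler cannot distinguish two values of the same type, but a unit test pins down the intended behavior:

        public struct Point
        {
            public readonly double X;
            public readonly double Y;
            public Point(double x, double y) { X = x; Y = y; }
        }

        public static class Geometry
        {
            // Bug: returns (X, X); still type-correct, so static analysis passes.
            public static Point Transpose(Point p)
            {
                return new Point(p.X, p.X); // intended: new Point(p.Y, p.X)
            }
        }

        // A unit test catches what the type checker cannot, e.g.:
        //   var t = Geometry.Transpose(new Point(1, 2));
        //   Assert.AreEqual(2.0, t.X); // fails with the bug above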

  • Experience the new Bootloader of CE7 VirtualPC BSP - Display Resolution Override

    - by Kate Moss' Open Space
    CE 7 (aka Windows Embedded Compact) provides many new features; a new Virtual PC BSP is one of them, as a replacement for the Device Emulator in CE 6. The bootloader of the VPC BSP utilizes a newly introduced framework in CE 7, the BLDR (not the BIOSLOADER!). It provides many rich and advanced features; I will introduce them in more detail in my future posts. Today, I am going to introduce a basic usage: setting the display resolution. One of the benefits of using the BLDR is that it provides an interactive user interface, with no DOS environment required, so the user can change settings on the console. It is especially useful on VPC: if you are not using Win7, editing a file in a VHD could take some effort! In the Boot menu, you can select [5] Display Settings. There are a couple of sub-menus that allow you to change resolution, bpp, etc. As it is very straightforward, I won't go through each option except Option [3] "Change Viewable Display Region". The resolutions it provides depend on the BIOS (VPC is a PC-compatible device), and the minimum resolution it provides is 640x480. But what if the user needs a smaller resolution, or any non-standard resolution, for whatever reason? That is where "Change Viewable Display Region" comes in. You can use it to create a reduced display region, e.g. 240x320 on a 640x480 screen. You can also alter platform\virtualpc\src\boot\bldr\config.c to add a non-standard resolution (e.g. 480x272) to the displayMode array. Another solution, in case you don't want to rebuild and replace the bootloader, is to alter SaveVGAArgs in platform\common\src\x86\common\io\ioctl.c to overwrite the cxDisplayScreen and cyDisplayScreen settings to whatever resolution you want.

  • Why can't we capture the design of software more effectively?

    - by Ira Baxter
    As engineers, we all "design" artifacts (buildings, programs, circuits, molecules...). That's an activity (design-the-verb) that produces some kind of result (design-the-noun). I think we all agree that design-the-noun is a different entity than the artifact itself. A key activity in the software business (indeed, in any business where the resulting product artifact needs to be enhanced) is to understand the "design(-the-noun)". Yet we seem, as a community, to be pretty much complete failures at recording it, as evidenced by the amount of effort people put into rediscovering facts about their code base. Ask somebody to show you the design of their code and see what you get. I think of a design for software as having:

    - An explicit specification for what the software is supposed to do and how well it does it
    - An explicit version of the code (this part is easy, everybody has it)
    - An explanation for how each part of the code serves to achieve the specification
    - A rationale as to why the code is the way it is (e.g., why a particular choice rather than another)

    What is NOT a design is a particular perspective on the code. For example (not to pick on them specifically), UML diagrams are not designs. Rather, they are properties you can derive from the code, or arguably, properties you wish you could derive from the code. But as a general rule, you can't derive the code from UML. Why is it that after 50+ years of building software, we don't have regular ways to express this? My personal opinion is that we don't have good ways to express it. Even if we do, most of the community seems so focused on getting "code" that design-the-noun gets lost anyway. (IMHO, until design becomes the purpose of engineering, with the artifact extracted from the design, we're not going to get around this.) What have you seen as means for recording designs (in the sense I have described)? Explicit references to papers would be good. Why do you think specific and general means have not been successful? How can we change this?

  • eSTEP TechCast - November 2013

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 7th of November, and would be happy if you could join. Please see the details below.

    Date and time: Thursday, 07 November 2013, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST)
    Title: The Operational Management benefits of Engineered Systems
    Abstract: Oracle Engineered Systems require significantly less administration effort than traditional platforms. This presentation will explain why this is the case, how much can be saved, and discuss the best practices recommended to maximise Engineered Systems operational efficiency.
    Target audience: Tech Presales
    Speaker: Julian Lane
    Call info: Call-in toll-free number: 08006948154 (United Kingdom); Call-in toll-free number: +44-2081181001 (United Kingdom); Conference Code: 803 594 3; Security Passcode: 9876
    Webex info (Oracle Web Conference): Meeting Number: 599 156 244; Meeting Password: tech2011
    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from previously delivered eSTEP TechCasts. Use your email address and PIN eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback.

    Thanks and best regards, Partner HW Enablement EMEA

  • IASA ITARC – Denver May 6th

    - by Jeff Certain
    The Denver chapter of the International Association of Software Architects (IASA) is holding an IT Architect Regional Conference (ITARC) in Denver on May 6th. The speaker list for this conference is amazing. Paul Rayner, Dave McComb, Randy Kahle, Peter Provost, Randy Stafford, George Fairbanks – all great speakers, and from Colorado. Brandon Satrom (who also happens to be the president of the IASA Austin chapter) will also be speaking, as will some other heavy hitters (for example, Ted Farrell, Chief Architect and Senior VP of Oracle). This is an amazing line-up, and the conference is quite reasonably priced ($150 for IASA members until April 10th, including a catered lunch). I also have the privilege of being a presenter at this conference. If you’ve ever heard any of the previously named speakers, you know that they set the bar quite high. Sounds like I’m going to have to step up my game. What I get to talk about is really cool stuff. The company I work for – Colorado CustomWare – brought me on board nearly two years ago. To say there was some technical debt is somewhat… understated. Equally understated would be that management is committed to doing the right thing. Over the past two years, we’ve done significant architectural refactoring – including an effort that took the entire team offline for most of a month. We’ve reduced the application size by 50% without losing functionality. As you can imagine, this has reduced the complexity of the application, making development faster and less prone to bugs. We’ve made many other changes – moving to an agile process, training developers, moving towards a more OO architecture. The changes we’ve made reveal, in some ways, just how far afield we were… and there are still more changes to be made. Amazingly enough, our leadership team is eager for me to share these experiences with other architects. I’m really looking forward to being able to do so.

  • Translation and Localization Resources for UX Designers

    - by ultan o'broin
    Here is a handy list of translation and localization-related resources for user experience professionals. Following these will help you design an easily translatable user experience. Most of the references here are for web pages or software. Fundamentally, remember your designs will be consumed globally, and never divorce the design process from the development or deployment effort that goes into bringing your designs to life in code. Ask yourself today: do you know how the text you are using in your designs is delivered to the customer, even in English? Key areas that UX designers always seem to fall foul of, in my space anyway, are:

    - Terminology that is impossible to translate (jargon, multiple modifiers, gerunds) or is used inconsistently
    - Poorly written, verbose text (really, just write well in English, no special considerations)
    - String construction (concatenation of parts assembled dynamically; see the sketch after this list)
    - Composite widget positioning (my favourite)
    - Hard-coded fonts, small font sizes, or character formatting or casing that doesn't work globally
    - Format that is not separate from content
    - Restricted real estate not allowing for text expansion in translation
    - Forcing formatting with breaks, and hard-coding alphabetical sorting
    - Graphics that do not work in bi-directional languages (because they indicate directionality and can't flip) or contain embedded text. The problems of culturally offensive icons are well known by now in the enterprise applications space, though there are some dangers, such as the use of flags to indicate language, for example.

    Resources:
    - Internationalization Techniques: Authoring HTML & CSS
    - Global By Design
    - Insert Title Here: Variables in Interface Language
    - Prose: Internationalisation

    Doc and help considerations I can deal with later.
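
    As a small sketch of the string-construction pitfall flagged above (a .NET-style example; the message and resource handling are illustrative):

        using System.Globalization;

        public static class Messages
        {
            // Bad: concatenation hard-codes English word order, so the
            // string cannot be translated as one unit.
            public static string FilesFoundBad(int count, string folder)
            {
                return "Found " + count + " files in " + folder + ".";
            }

            // Better: one translatable pattern; translators can reorder the
            // placeholders to fit their grammar. In a real app the pattern
            // would come from a per-locale resource bundle, not be inlined.
            public static string FilesFoundGood(int count, string folder)
            {
                return string.Format(CultureInfo.CurrentCulture,
                    "Found {0} files in {1}.", count, folder);
            }
        }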

  • The biggest ADF conference "down under"

    - by Chris Muir
    While Oracle Open World is the place to be for ADF presentations, for Aussies living in Perth, San Francisco is a tad far away (believe me from experience, the 23-hour flight from PER-SYD-SFO is tedious). That's why I'm very excited to see that the Australian Oracle User Group at this year's Perth conference is running its largest set of ADF presentations to date: 5! Okay, it doesn't compare to the 60 ADF sessions at OOW, but it's a small conference of around 300 people that runs for 2 days with 54 sessions total, not 40,000 people over 5 days with 1900+ sessions, so I think that's a good effort for a conference that's at the end of the earth! What's even better about this year's conference is that AUSOUG is moving away from just consultants and Oracle staff presenting; it will also include customers presenting on ADF. This again proves Perth is a little ADF hotspot, which brings a tear to an ADF product manager's eye, let me tell you ;-) The ADF sessions will include:

    - Kevin Payne - JWH Group - ADF Mobile Application Development
    - Matthew Carrigy - Department of Finance Western Australia - The times, they are a-changin’ - An Oracle Forms to JDeveloper ADF Case Study
    - Penny Cookson & Chris Noonan - Sage Computing Services - Impress your bosses with JDeveloper ADF dashboards on their iPads
    ...oh and...
    - Chris Muir - Oracle Corporation - Speed-Dating Oracle JDeveloper 12c and Oracle ADF New Features
    - Chris Muir - Oracle Corporation - Develop Mobile Apps for Smart Devices: Converging Web and Native Applications

    You can check out the conference schedule here. I hope you'll support these ADF presenters by attending the AUSOUG Perth conference; I look forward to seeing you there.

    Read the article

  • data handling with javascript

    - by Vincent Warmerdam
    Python has a very neat package called pandas which allows for quick data transformation: tables, aggregation, that sort of thing. A lot of this functionality can also be found in Python's itertools module, and the plyr package in R is very similar. Usually one would use this functionality to produce a table which is later visualized with a plot. I am personally very fond of d3, and I would like to allow the user to first indicate what type of data aggregation he wants on the dataset before it is visualized. The visualisation in question is a heatmap where the user gets to select the size of the bins beforehand (I want d3 to project this through Leaflet), so that I can visually select the ideal bin size. The way I work now is that I take the dataset, aggregate it with Python, and then manually load it into d3. This process takes a lot of human effort, and I was wondering if the aggregation could be done in the browser's JavaScript instead. I couldn't find a JavaScript package specifically built for data, which suggested (to me) that this is a bad idea and that one should not use JavaScript for data handling. Is there a good module/package for JavaScript to handle data aggregation? Is it a good/bad idea to do the data aggregation in JavaScript (performance wise)?
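    For what it's worth, plain JavaScript handles this kind of group-and-sum comfortably without any library. The TypeScript sketch below bins points for a heatmap; the Point shape and sample rows are made up for illustration, and re-binning with a different size is cheap enough to do interactively in the browser:

    ```typescript
    interface Point { lat: number; lon: number; value: number; }

    // Hypothetical sample rows; in practice this is the parsed dataset.
    const samplePoints: Point[] = [
      { lat: 52.36, lon: 4.90, value: 3 },
      { lat: 52.37, lon: 4.89, value: 1 },
      { lat: 51.92, lon: 4.48, value: 2 },
    ];

    // Sum values into square bins of a user-chosen size (in degrees here).
    // The Map key "row,col" identifies a bin; only plain JS semantics are used.
    function binForHeatmap(points: Point[], binSize: number): Map<string, number> {
      const bins = new Map<string, number>();
      for (const p of points) {
        const key = `${Math.floor(p.lat / binSize)},${Math.floor(p.lon / binSize)}`;
        bins.set(key, (bins.get(key) ?? 0) + p.value);
      }
      return bins;
    }

    // The user tweaks the bin size in the UI and re-bins instantly, then the
    // Map entries are fed to d3 for rendering over Leaflet.
    console.log(binForHeatmap(samplePoints, 1.0));   // coarse bins
    console.log(binForHeatmap(samplePoints, 0.25));  // finer bins
    ```

    Libraries do exist for heavier lifting - Crossfilter was built precisely for fast in-browser aggregation, and d3 itself ships grouping helpers (d3.nest in older versions) - so for datasets that fit in memory, performance is rarely the blocker.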

    Read the article

  • Newbie seeking advice on programming in general

    - by user974685
    I need some of you to remember back to a time when you might have been bad at programming... I've been at my new job (as a software developer) for a couple of months now and have passed the probation period. I have very little programming experience (C++ only) and am currently working with ASP.NET MVC and Silverlight. There's a website the company has been working on, and I am joining the effort to make it better, iron out bugs, etc. The problem is learning about a system/website which has already been built, via Visual Studio. I ALWAYS feel HUGELY overwhelmed, never knowing which part of a given line I should look up, and generally having lots of trouble getting the big picture. Visual Studio itself is something I'm finding difficult to get to grips with, let alone the ASP.NET framework. I get the impression that because my coworkers have more experience than me, they are getting all the good jobs, and I am left with crap to do - stuff which is not even vaguely programming. That means they are learning/creating more, and I am learning/creating next to nothing. I'm getting demoralised, and too scared to say anything. I'm not stupid; I've read and practiced plenty of the fundamental programming concepts... I'm just bloody scared of this damn framework. I look at it and just feel paralyzed. The result is that I keep asking the veteran guy questions, and he is getting irritated, and would rather give me easy/mindless/non-programming jobs to avoid wasting time helping me out. Then when I don't understand something, I hesitate over whether I should ask him yet, trying to decide if it would be a waste of his time. I'm the kind of person who picks things up slowly, but with a lot of attention to detail. The former, I think, is making me look incompetent. If anyone gets where I'm coming from, please say something helpful... I'm scared of losing my job in a few months or something...

    Read the article

  • Change from Bullet/OgreAnim to Havok?

    - by Brett Powell
    I have been working on a game in Ogre for the last 6 months or so. It started as a learning project, and after a few rewrites it actually turned into a real game project. Physics scared me, and using Bullet with its lack of documentation was a nightmare, but I was able to at least get some basics added and learned a lot. So as of now I am using Ogre with its default animation system (fairly basic) and Bullet physics. I had always wanted to use Havok when I started out, but due to the lack of integration information on the Ogre forums and the lack of tutorials on the net, I decided against it. Now that I am at the point where Bullet is just too much of a headache to proceed with (staring at forum threads praying someone answers) and the Ogre animation system is so basic, I am considering switching to Havok for both physics and animation. The physics system looks extremely polished and easy to use, and the animation system looks incredible with its retargeting/blending/etc. The documentation is incredibly detailed as well (I guess when you come from Bullet, any documentation looks amazing). So my question is: as I am still somewhat of a 'newbie' to game development, should I stick with what I am using now, or should I make the switch to Havok? On the physics side it looks like I could get my project back to where it is now with minimal effort and then expand much faster; the animation side looks extremely daunting as far as integrating it with Ogre goes.

    Read the article

  • How can we make agile enjoyable for developers that like to personally, independently own large chunks from start to finish

    - by Kris
    We’re roughly midway through our transition from waterfall to agile using Scrum; we’ve changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn’t suit everyone. There are a handful of developers who are having a difficult time adjusting, and I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level. The basic issue is this: some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design and its potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. With the transition to cross-functional teams that value interaction, shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this a positive change, but someone who loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being resistant to change. Certainly we’ve seen a few people who don’t like change, but in the cases I’m concerned about, the individuals are good performers, genuinely open to change, and they make an effort; they see how the rest of the team is changing and they want to fit in. It’s not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don’t find joy in their work like they used to. I’m sure we can’t be the only place that has bumped up against this. How have others approached it? If you’re a developer who is motivated by personally owning a big chunk of work from end to end, and you’ve adjusted to a different way of working, what did it for you?

    Read the article

  • How do graphics programmers deal with rendering vertices that don't change the image?

    - by canisrufus
    So, the title is a little awkward. I'll give some background, and then ask my question. Background: I work as a web GIS application developer, but in my spare time I've been playing with map rendering and improving data interchange formats. I work only in 2D space. One interesting issue I've encountered is that when you're rendering a polygon at a small scale (zoomed way out), many of the vertices are redundant. An extreme case would be a polygon with 500,000 vertices that only takes up a single pixel. If you're sending this data to the browser, it would make sense to omit ~499,999 of those vertices. One way we achieve that is by rendering an image on a server and sending it as a PNG: voila, it's a point. Sometimes, though, we want the data sent to the browser where it can be rendered with SVG (or canvas, or WebGL) so that it can be interactive. The problem: it turns out that with modern geographic data sets it's very easy to overload SVG's rendering abilities. In an effort to cope with those limitations, I'm trying to figure out how to reduce a data set, visually losslessly, for a given scale and map extent (and, if necessary, for a known map pixel width and height). I got a great reduction in data size just using the Douglas-Peucker algorithm, and I believe I was able to keep the polygons true to within one pixel. Unfortunately, Douglas-Peucker doesn't preserve topology, so it changed how borders between polygons got rendered. I couldn't readily find other algorithms to try out and adapt to the purpose, but I don't have much CS/algorithm background and might not recognize them if I saw them.
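    For readers hitting the same wall, here is a minimal, self-contained Douglas-Peucker sketch in TypeScript. Coordinates and epsilon are in whatever units your projection uses (an epsilon of one pixel's worth of map units gives the "true to within one pixel" behaviour described above); note that it simplifies each ring independently, which is exactly why shared borders drift apart:

    ```typescript
    interface Pt { x: number; y: number; }

    // Perpendicular distance from point p to the infinite line through a and b.
    function perpDist(p: Pt, a: Pt, b: Pt): number {
      const dx = b.x - a.x;
      const dy = b.y - a.y;
      const len = Math.hypot(dx, dy);
      if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y); // a and b coincide
      return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
    }

    // Classic Douglas-Peucker: keep the point farthest from the chord if it
    // deviates by more than epsilon, then recurse on the two halves.
    function douglasPeucker(pts: Pt[], epsilon: number): Pt[] {
      if (pts.length < 3) return pts.slice();
      let maxDist = 0;
      let maxIdx = 0;
      for (let i = 1; i < pts.length - 1; i++) {
        const d = perpDist(pts[i], pts[0], pts[pts.length - 1]);
        if (d > maxDist) { maxDist = d; maxIdx = i; }
      }
      // Every intermediate point is within epsilon of the chord: drop them all.
      if (maxDist <= epsilon) return [pts[0], pts[pts.length - 1]];
      const left = douglasPeucker(pts.slice(0, maxIdx + 1), epsilon);
      const right = douglasPeucker(pts.slice(maxIdx), epsilon);
      return left.slice(0, -1).concat(right); // avoid duplicating the split point
    }

    // A jagged line whose small wiggles vanish at epsilon = 1.
    const line: Pt[] = [
      { x: 0, y: 0 }, { x: 1, y: 0.2 }, { x: 2, y: -0.1 },
      { x: 3, y: 5 }, { x: 4, y: 0.3 }, { x: 5, y: 0 },
    ];
    console.log(douglasPeucker(line, 1).length); // 4, down from 6 vertices
    ```

    Topology-preserving simplification avoids the border drift by decomposing polygons into shared boundary arcs, simplifying each arc once, and reassembling both neighbours from the same simplified arc - the approach taken by tools such as TopoJSON.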

    Read the article

  • The value of an updated specification

    - by Mr. Jefferson
    I'm at the tail end of a large project (around 5 months of my time, 60,000 lines of code) on which I was the only developer. The specification documents for the project were designed well early on, but as always happens during development, some things have changed. For example:
    - Bugs were discovered and fixed in ways that don't correspond well with the spec
    - Additional needs were discovered and dealt with
    - We found areas where the spec hadn't thought far enough ahead and had to change some implementation
    - Etc.
    Through all this, the spec was not kept updated, because we were lean on resources (I know that's a questionable excuse). I'm hoping to get the spec updated soon when we have time, but it's a matter of convincing management to spend the effort. I believe it's a good idea, but I need some good reasoning; I can't speak from very much experience because I'm only a year and a half out of school. So here are my questions:
    - What value is there in keeping an updated spec?
    - How often should the spec be updated? With every change made that might affect it, just at the end of development, or something in between?
    EDIT to clarify: this project was internal, so no external client was involved. It's our product.

    Read the article
