
  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started out designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while, we decided that this was insufficient, and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder.

    Humor me for a moment, and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which has dependencies on one or more of a dozen internal libraries (and many on 3rd-party libraries as well). Some of these libraries are interdependent, and have depending applications that (while they have nothing to do with each other) have to be built with the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile), and the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling using a bunch of ShellCommands, as there is no support for VSS).

    Our original nightly builder simply took down the source for everything and built things in a certain order. There was no way to build only a single application, pick a revision, or group things. It would launch virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable, and it wasn't terribly extensible. I wanted to overcome all of these limitations in buildbot. The way I did this originally was to create entries for each of the applications we wanted to build (all 150ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual machine chicanery), and if I wanted I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire.

    There are four weaknesses of this approach, however. The first is our source tree's complex web of dependencies. In order to simplify config maintenance, all builders are generated from a large dictionary. The dependencies are retrieved and built in a not-terribly-robust fashion (namely, keying off of certain things in my build-target dictionary). The second is that each build has between 15 and 21 build steps, which is hard to browse and look at in the web interface, and since there are around 150 columns, takes forever to load (think from 30 seconds to multiple minutes). Thirdly, we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machines (though I never saw what that got us, either, considering that it took three times as long as the distributed build; I think he is just paranoically phobic of ever breaking the build).
Now, moving to new development, we are starting to use g++ and Subversion (not porting the old repository, mind you - just for the new stuff). Also, we are starting to do more unit testing ("more" might give the wrong picture... it's more like any), and integration testing (using Python). I'm having a hard time figuring out how to fit these into my existing configuration. So, where have I gone wrong philosophically here? How can I best proceed forward (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases?
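
    For readers unfamiliar with the dictionary-driven approach described in the question, here is a minimal sketch of what generating buildbot builders from a build-target dictionary can look like. It uses the buildbot 0.8-era API; the target names, build script, and slave names are illustrative only, not the poster's actual configuration:

```python
# Hedged sketch of dictionary-driven builder generation in a buildbot
# master.cfg (0.8-era API). All names below are invented for illustration.
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand
from buildbot.config import BuilderConfig

c = BuildmasterConfig = {}

# Hypothetical build-target dictionary: each application lists the
# internal libraries it depends on and the toolchain that builds it.
TARGETS = {
    "app_billing": {"deps": ["libcore", "libdb"], "toolchain": "msvc"},
    "app_reports": {"deps": ["libcore"], "toolchain": "ibm"},
}

def make_builder(name, spec):
    factory = BuildFactory()
    # Build each declared dependency first, in order.
    for dep in spec["deps"]:
        factory.addStep(ShellCommand(
            name="build-" + dep,
            command=["build.bat", dep, spec["toolchain"]]))
    # Then build the application itself.
    factory.addStep(ShellCommand(
        name="build-" + name,
        command=["build.bat", name, spec["toolchain"]]))
    return BuilderConfig(name=name, slavenames=["slave1"], factory=factory)

c['builders'] = [make_builder(n, s) for n, s in sorted(TARGETS.items())]
```

    The appeal of this shape is that adding the 151st application is one dictionary entry; the weakness the question describes is that the dependency handling is only as robust as the keys in that dictionary.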


  • Getting Started with StreamInsight 2.1

    - by Roman Schindlauer
    If you're just beginning to get familiar with StreamInsight, you may be looking for a way to get started. What are the basics? How can I get my first StreamInsight application running so I can see how it works? Where is the 'front door' that will get me going? If that describes you, then this blog entry might be just what you need. If you're already a StreamInsight wiz, keep reading anyway - you may find some helpful links here that you weren't aware of. But here's what we'd like from you experienced readers in particular: if you know of other good resources that we missed, please feel free to add them in the comments below. We appreciate you sharing your expertise.

    The Book

    The basic documentation for StreamInsight is located in the MSDN Library (Microsoft StreamInsight 2.1). You'll notice that previous versions of StreamInsight are still there (1.2 and 2.0), but if you're just getting started you can stick to the 2.1 section. The documentation has been organized to function as reference material, which is fine after you're familiar with the technology. But if you're trying to learn the basics, you might want to take a different path instead of just starting at the top. The following is one map you can use.

    What Is StreamInsight?

    Here is a sequence of topics that should give you a good overview of what StreamInsight is and how it works:
    • Overview - answers the question, "what is it?"
    • StreamInsight Server Architecture - gives you a quick look at a high-level architectural drawing
    • StreamInsight Concepts - lays out an overview of the basic components
    • Deploying StreamInsight Entities to a StreamInsight Server - describes the mechanics of how these components work together

    Getting an Example Running

    Once you have this background, go ahead and install StreamInsight and get a basic example up and running:
    • Installation - download and install the software
    • StreamInsight Examples - walk through a set of 3 simple StreamInsight applications that work together to demonstrate what you learned in the topics above; you can copy and paste the code into Visual Studio, compile, and run

    That's it - you now have a real, functioning StreamInsight system! Now that you have a handle on the basics, you might want to start digging deeper.
    Digging Deeper

    Here's a suggested path through the documentation to help you understand the next layer of StreamInsight technologies:
    • Using Event Sources and Event Sinks - sources supply data and sinks consume it; this topic gives you an overview of how they work
    • Publishing and Connecting to the StreamInsight Server - practical details on how to set up a StreamInsight server
    • A Hitchhiker's Guide to StreamInsight 2.1 Queries - queries are the heart of how StreamInsight performs data analytics, and this whitepaper will help you really understand how they work
    • Using StreamInsight LINQ - root through this section for technical details on specific query components
    • Using the StreamInsight Event Flow Debugger - in addition to troubleshooting, the debugger is a great way to learn more about what goes on inside a StreamInsight application

    And Even Deeper

    Finally, to get a handle on some of the more complex things you can do with StreamInsight, dig into these:
    • Input and Output Adapters - adapters can be useful for handling more complex sources and sinks
    • Building Resilient StreamInsight Applications - a resilient application is able to recover from system failures
    • Operations - this section will help you monitor and troubleshoot a running StreamInsight system

    The StreamInsight Community

    As you're designing and developing your StreamInsight solutions, you probably will find it helpful to see working examples or to learn tips and tricks from others. Or maybe you need a place to post a vexing question. Here are some community resources that we have found useful. If you know of others, please add them in the comments below.

    Code samples and tools
    • Official StreamInsight code samples
    • Introduction to LinqPad Driver for StreamInsight 2.1 - LinqPad is a very useful tool for developing queries

    The following case studies are based on earlier versions of StreamInsight, but they are still useful examples:
    • Microsoft Media Analytics - real-time monitoring and analytics
    • Edgenet - responding to information from multiple sources
    • ICONICS - managing energy usage

    Blogs
    • Microsoft StreamInsight
    • Ruminations of J.net
    • Richard Seroter's Architecture Musings
    • Pluralsight

    Forums
    • MSDN StreamInsight Forum
    • Stack Overflow

    Training
    • Microsoft StreamInsight Fundamentals ("Introducing StreamInsight" is free) from Pluralsight

    Twitter
    • @streaminsight

    You're a StreamInsight Expert

    That should get you going. Please add any other resources you have found useful in the comments below.

    Regards,
    The StreamInsight Team


  • App Stores – In All Things, It's Quality Over Quantity

    - by D'Arcy Lussier
    Everybody has an opinion about Windows 8. People love it, people hate it, people are meh about it, people are apparently buying it from Microsoft stores in NYC as if it was water before a natural disaster... if there's one thing that Microsoft product launches do well, it's the ability to bring out strong emotional responses. Over at eweek.com, Don Reisinger wrote about 5 good and bad things about Windows 8. Yes, another opinion piece on Windows 8. I figured since this one had good and bad it might be worthwhile to read. I then came across #10 on his list, and figured "What the hell... might as well post a bit of a rant on Windows 8 myself!" Here's #10:

    10. Bad: Too few apps
    Unfortunately, Microsoft wasn't able to get too many developers to start producing applications for its Windows 8 Store. Microsoft hasn't yet released official numbers, but some have said that the marketplace has less than 8,000 programs. Considering Apple's App Store has 100 times that, it's about time Microsoft starts leaning on developers to get more programs into its store.

    Believe me, Microsoft *has* been leaning on developers to get apps into the store. I've been asked at least 5 or 6 times by 5 or 6 different friends at Microsoft whether I was going to write a Windows 8 app. I think Microsoft felt they had to try to address the number of apps available in their marketplace, since some people (like Don) would draw comparisons to the number of apps in the Apple marketplace. I feel for Microsoft in this, since the number of apps in a marketplace is an empty stat.

    Quality Over Quantity

    I have an iPad that my family (wife, 10yo daughter, 3yo daughter) use. We all have our own apps installed on it. In addition, my wife has an iPhone 4S that she also installs apps on. As someone who often gets asked by his kids whether they can buy/download an app, I can tell you the vast majority of the vast catalogue of iOS marketplace apps are crap! Do you realize how many "free" games are out there, only to really be not-free because you have to purchase in-game content to make the game actually playable? And how about searching - with such a vast array of apps and such high numbers of craptastic ones, trying to find something is incredibly difficult and can be frustrating. I would rather see Microsoft launch with 8,000 high-quality apps in their store than 800,000 that were mostly junk.

    Too Few Apps?!

    And seriously, 8,000 is not a small number. How many iOS apps have I actually bought between the iPad and iPhone? I'll be generous and say 30... heck, let's round it up to 40. It's not like I have 10,000 apps installed on my iPad, nor will that ever happen! So if people have, at the *launch* of a new platform ecosystem, EIGHT THOUSAND apps to choose from, I don't see that as a fail at all! It should be noted that most of the most common apps (Netflix, Skype, etc.) are available for Windows 8 at launch - I guess I'll have to wait a few weeks for My Pony Ranch and all its clones to start showing up; pity.

    Let's Check Back in a Year

    So look, let's check back in a year's time and see what the app store looks like. My hope is that Microsoft doesn't start to push quantity over quality. Even knowing the optics that the number of apps in the store carries, and the pressure to catch the Apple and Android marketplaces, I hope Microsoft avoids the scenario where a good percentage of apps in the Windows Store are utter rubbish and finding the gems is cumbersome.
But if that happens, we can thank guys like Don, who raised the false issue of app count at launch, for it.


  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, and construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.

    Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site- and location-specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.

    Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfields, service centers, inventory sites, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets, project sites, areas, wells, basins, pipelines, critical infrastructure, offshore platforms, compressor stations, gas stations, etc.

    Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated with multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting and analysis, organizing daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, like for example routes, pipelines and more.

    The User Defined Attribute Framework provides the needed infrastructure to add single-row attribute groups, like well base attributes (well IDs, well type, well structure, key characterizing measures, and more) and well geometry, and multi-row attribute groups, like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partners' sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships.

    Site Hub can store any type of unstructured data associated with a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipment at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time.

    Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can simply be associated with sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub.

    Site locations (addresses or geographical coordinates) are also managed, with out-of-the-box address geocoding capabilities coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site-location-specific information, like for example cadastral, ownership, jurisdictional, geological, or seismic information, and any site-centric area-specific information, like for example economic, political, risk, weather, logistic, or traffic information.

    Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context


  • C#: A "Dumbed-Down" C++?

    - by James Michael Hare
    I was spending a lovely day this last weekend watching my sons play outside, in one of the better weekends we've had here in Saint Louis for quite some time, and whilst watching them and making sure no limbs were broken or eyes poked out with sticks and other various potential injuries, I was perusing (in the correct sense of the word) this month's MSDN magazine to get a sense of the latest VS2010 features in both the IDE and in languages. When I got to the back pages, I saw a wonderful article by David S. Platt entitled "In Praise of Dumbing Down" (msdn.microsoft.com/en-us/magazine/ee336129.aspx). The title captivated me, and I read it and found myself agreeing with it completely, especially as it related to my first post on divorcing C++ as my favorite language.

    Unfortunately, as Mr. Platt mentions, the term dumbing-down has negative connotations, but it is really and truly a good thing. You are, in essence, taking something that is extremely complex and reducing it to something that is much easier to use and far less error prone. Adding safeties to power tools and anti-kick mechanisms to chainsaws are in some sense "dumbing them down" for the common user -- but that also makes them safer and more accessible. This was exactly my point with C++ and C#. I did not mean to imply that C++ was not a useful or good language, but that in a very high percentage of cases, it is too complex and error prone for the job at hand.

    Choosing the correct programming language for a job is a lot like choosing any other tool for a task. For example: if I want to dig a French drain in my lawn, I can attempt to use a huge tractor-like backhoe, and the job would be done far quicker than if I were to dig it by hand. I can't deny that the backhoe has the raw power and speed to perform. But you also cannot deny that my chances of injury, or of severing utility lines or other resources, climb steeply in inverse proportion to the amount of training I may have on that machinery.

    Is C++ a powerful tool? Oh yes, and it's great for those tasks where speed and performance are paramount. But for most of us, it's the wrong tool. And keep in mind, I say this even though I have 17 years of experience in using it and feel myself highly adept at utilizing its features, both in the standard libraries, the STL, and in supplemental libraries such as Boost. Those libraries, although they greatly help with adding powerful features quickly, do very little to curb the relative dangers of the language.

    So, you may say, the fault is in the developer: if the developer had higher skills, or if we only hired C++ experts, this would not be an issue. Now, I will concede there is some truth to this. Obviously, the more highly skilled the C++ developers you hire, the better the chance they will produce highly performant and error-free code. However, what good is that to the average shop, which cannot afford a full stable of C++ experts?

    That's my point with C#: it's like a kinder, gentler C++. It gives you nearly the same speed, and in many ways even more power than C++, and it gives you a much softer cushion for novices to fall against if they code less than optimally. A bug is a bug, of course, in any language, but C# does a good job of hiding and taking on the task of handling almost all of the resource issues that make C++ so tricky. For my money, C# is much more maintainable, more feature-rich, second only slightly in performance, faster to market, and -- last but not least -- safer and easier to use.
That's why, where I work, I much prefer to see the developers moving to C#.  The quantity of bugs is much lower, and we don't need to hire "experts" to achieve the same results since the language itself handles those resource pitfalls so prevalent in poorly written C++ code.  C++ will still have its place in the world, and I'm sure I'll still use it now and again where it is truly the correct tool for the job, but for nearly every other project C# is a wonderfully "dumbed-down" version of C++ -- in the very best sense -- and to me, that's the smart choice.


  • What Counts For A DBA: ESP

    - by Louis Davidson
    Now I don’t want to get religious here, and I’m not going to, but what I’m going to describe in this ‘What Counts for a DBA’ installment sometimes feels like magic. Often I will spend hours thinking about the solution to a design issue or coding problem, working diligently to try to come up with a solution, and then finally just give up with the feeling that I’m not even qualified to be a data entry clerk, much less a data architect. At this point I often take a walk (or sometimes a nap), and then it hits me. I realize that I have the answer just sitting in my brain, ready to implement. This phenomenon is not limited to walks either; it can happen almost any time after I stop obsessing about a problem. I call this phenomenon ESP (or Extra-Sensory Programming). Another term for this could be ‘sleeping on it’, and while the idiom tends to mean letting time pass to actively think about a problem, sleeping on a problem also lets you relax and let your brain do the work.

    I first noticed this back in my college days when I would play video games for hours on end. We would get stuck deep in some dungeon, unable to find a way out, playing for days on end until we were beaten down tired. Once we gave up and walked away, the solution would usually be there waiting for one of us when we came back to play the next day. Sometimes it would be in the form of a dream, and sometimes it would just be that the problem was now easy to solve when we started to play again. While it worked great for video games, it never occurred when I studied English Literature for hours on end, or even when I worked for the same sort of frustrating hours attempting to solve a homework problem in Calculus. I believe that the difference was that I was passionate about the video game, and certainly far less so about homework where people used the word “thou” instead of “you” or x to represent a number.

    This phenomenon occurs somewhat more often in my current work as a professional data programmer, because I am very passionate about SQL and love those aspects of my career choice. Every day that I get to draw a new data model to solve a customer issue, or write a complex SELECT statement to ferret out the answer to a complex data question, is a great day. I hope it is the same for any reader of this blog. But, unfortunately, while the day on the whole is great, a heck of a lot of noise is generated in work life. There are the typical project deadlines, along with the requisite project manager sitting on your shoulders shouting slogans to try to make you go faster. Add in office politics, and the occasional family issues that permeate the mind, and you lose the ability to think deeply about any problem, not to mention occasionally forgetting your own name. These office realities, coupled with a difficult SQL problem staring at you from your widescreen monitor, will slowly suck the life force out of your body, making it seem impossible to solve the problem.

    This is when the walk starts; or a nap. Maybe you hide from the madness under your desk like George Costanza hides from Steinbrenner on Seinfeld. Forget about the problem. Free your mind from the insanity of the problem and your surroundings. Then let your training and education deep in your brain take over and see if it will passively do the rest for you.
If you don’t end up with a solution, the worst-case scenario is that you got a bit of exercise or rest, and you won’t have heard the phrase “better is the enemy of good enough” even once… which certainly will do your brain some good. Once you stop whipping your brain for information, inspiration may just strike, and instead of a humdrum solution you may find one you hadn’t even considered, almost magically. So, my beloved manager, next time you have an urgent deadline and you come across me taking a nap, creep away quietly, because I’m working, doing some extra-sensory programming.


  • Joy! | Important Information About Your iPad 3G

    - by Jeff Julian
    Looks like I was one of the lucky 114,000 whose email AT&T lost to “hackers”. Why is “hackers” in double quotes? I can just imagine some executive at AT&T in their “Oh No, We Messed Up” meeting asking what happened, and someone replying, well, we had a breach and “hackers” broke in (using the quote-in-the-air gesture) and stole our iPad 3G customers’ emails. Oh well, I am sure my email has been sold and sold again by many different vendors, so why not AT&T now. At least Dorothy Attwood could have given us her email to give to someone else instead of blinking it through a newsletter system.

    June 13, 2010

    Dear Valued AT&T Customer,

    Recently there was an issue that affected some of our customers with AT&T 3G service for iPad, resulting in the release of their customer email addresses. I am writing to let you know that no other information was exposed and the matter has been resolved. We apologize for the incident and any inconvenience it may have caused. Rest assured, you can continue to use your AT&T 3G service on your iPad with confidence.

    Here’s some additional detail:

    On June 7 we learned that unauthorized computer “hackers” maliciously exploited a function designed to make your iPad log-in process faster by pre-populating an AT&T authentication page with the email address you used to register your iPad for 3G service. The self-described hackers wrote software code to randomly generate numbers that mimicked serial numbers of the AT&T SIM card for iPad – called the integrated circuit card identification (ICC-ID) – and repeatedly queried an AT&T web address. When a number generated by the hackers matched an actual ICC-ID, the authentication page log-in screen was returned to the hackers with the email address associated with the ICC-ID already populated on the log-in screen.

    The hackers deliberately went to great efforts with a random program to extract possible ICC-IDs and capture customer email addresses. They then put together a list of these emails and distributed it for their own publicity.

    As soon as we became aware of this situation, we took swift action to prevent any further unauthorized exposure of customer email addresses. Within hours, AT&T disabled the mechanism that automatically populated the email address. Now, the authentication page log-in screen requires the user to enter both their email address and their password.

    I want to assure you that the email address and ICC-ID were the only information that was accessible. Your password, account information, the contents of your email, and any other personal information were never at risk. The hackers never had access to AT&T communications or data networks, or your iPad. AT&T 3G service for other mobile devices was not affected.

    While the attack was limited to email address and ICC-ID data, we encourage you to be alert to scams that could attempt to use this information to obtain other data or send you unwanted email. You can learn more about phishing by visiting the AT&T website.

    AT&T takes your privacy seriously and does not tolerate unauthorized access to its customers’ information or company websites. We will cooperate with law enforcement in any investigation of unauthorized system access and to prosecute violators to the fullest extent of the law.

    AT&T acted quickly to protect your information – and we promise to keep working around the clock to keep your information safe. Thank you very much for your understanding, and for being an AT&T customer.
Sincerely, Dorothy Attwood Senior Vice President, Public Policy and Chief Privacy Officer for AT&T Technorati Tags: AT&T,iPad 3G,Email


  • When to use Aspect Oriented Architecture (AOA/AOD)

    When is it appropriate to use aspect oriented architecture? I think the only honest answer to this question is that it depends on the context in which the question is being asked. There really are no hard and fast rules regarding the selection of architectural models for a project, because each model provides its own benefits and drawbacks. Every system is built with unique requirements and constraints, and this context will dictate when to use one type of architecture over another or in conjunction with others.

    To me, aspect oriented architecture models should be a sub-phase in the architectural modeling and design process, especially when creating enterprise-level models. Personally, I like to use this approach to create a base architectural model that is defined by non-functional requirements and system quality attributes. This general model can then be used as a starting point for additional models because it targets all of the key business quality attributes required by the system.

    Aspect oriented architecture is a method for modeling the non-functional requirements and quality attributes of a system, known as aspects. These models do not deal directly with specific functionality, but they do categorize the functionality of the system. This approach allows a system to be created with a strong emphasis on separating system concerns into individual components. These cross-cutting components enable a system to be built with compartmentalization in regards to non-functional requirements or quality attributes. This allows for a reduction in code, because each component maintains an aspect of the system that can be called by other aspects, and it allows for a much cleaner and smaller code base during the implementation and support of a system. Additionally, because developers can build on existing aspect-oriented components, projects will be completed faster and will be more reliable, since existing components can be shared across a system; thus, the time needed to create and test the functionality is reduced.

    Example of an effective use of Aspect Oriented Architecture

    In my experience, aspect oriented architecture can be very effective with large or more complex systems. Typically, these types of systems have a large number of concerns, so the act of defining them is very beneficial for reducing the system's complexity, because components can be developed to address each concern while exposing functionality to the other system components. A further benefit of using the aspect oriented approach as the starting point for a system is that it promotes communication between IT and the business: because the aspect oriented models are focused on quality attributes, not much technical understanding is needed to understand the model.

    An example of this can be seen in developing a new intranet website. Common intranet concerns include:
    • Error Handling
    • Security
    • Logging
    • Notifications
    • Database connectivity

    (A minimal sketch of one of these concerns factored out as an aspect appears after the references below.)

    Example of a not as effective use of Aspect Oriented Architecture

    Again in my experience, aspect oriented architecture is not as effective with small or less complex systems. There is no need to model concerns for a system that has a limited number of them, because the added overhead would not be justified by the actual benefits of creating the aspect oriented architecture model. Furthermore, these types of projects typically have a reduced time schedule and a limited budget.
The creation of the aspect oriented models would increase the overhead of the project and thus increase the time needed to implement the system. An example of this is a small application that polls a network share for new files and then FTPs them to a new location. The two primary concerns for this project are to monitor a network drive and to FTP files to a new location. There is no need to create an aspect model for this system, because there will never be a need to share functionality between these concerns. To add to my point, this system is so small that it could be created with just a few classes, so the added layer of componentizing the concerns would be complete overkill for this situation.

References:
Brichau, Johan; D'Hondt, Theo. (2006) Aspect-Oriented Software Development (AOSD) - An Introduction. Retrieved from: http://www.info.ucl.ac.be/~jbrichau/courses/introductionToAOSD.pdf
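
    Here is the minimal sketch promised above: the logging concern from the intranet list, factored out once so that business code can share it. Python decorators stand in for a full AOP framework here, and all function names are hypothetical illustrations:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("intranet")

def logged(func):
    """Cross-cutting logging (and error-reporting) aspect: written once,
    then shared by any business function that opts in."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("entering %s", func.__name__)
        try:
            result = func(*args, **kwargs)
            log.info("leaving %s", func.__name__)
            return result
        except Exception:
            # The error-handling concern also lives here, not in business code.
            log.exception("error in %s", func.__name__)
            raise
    return wrapper

# Hypothetical business functions: they stay free of logging code.
@logged
def post_announcement(title):
    return "posted: " + title

@logged
def lookup_employee(name):
    return {"name": name, "dept": "IT"}

if __name__ == "__main__":
    post_announcement("Server maintenance on Friday")
    lookup_employee("Jane")
```

    Written this way, logging and error handling live in one component, and every business function picks them up with a one-line annotation, which is exactly the compartmentalization the aspect oriented approach is after.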


  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
    Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition the JDK bug management from Sun's legacy system to an Oracle-internal JIRA instance, which will afterward be made visible and usable externally. I've been busily working on this project for the last few months, and the team has made good progress on many aspects of the effort.

    JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones.

    Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA; in JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 7999999.

    We're working with the bugs.sun.com team to try to maintain continuity of the ability both to read JDK bug information and to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change from the legacy database to JIRA. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project.

    JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple-release information to be easily grouped and presented together.

    The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA only has a one-level classification, component. We've implemented a custom second-level classification, subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include:
    • core-libs
    • client-libs
    • deploy
    • install
    • security-libs
    • other-libs
    • tools
    • hotspot

    For the libs areas, the primary name of the subcomponent will be the package of the API in question.
In the core-libs component, there will be subcomponents like:
    • java.lang
    • java.lang.class_loading
    • java.math
    • java.util
    • java.util:i18n

    In the tools component, subcomponents will primarily correspond to command names in $JDK/bin like jar, javac, and javap.

    The first several bulk imports of the JDK bugs into JIRA have gone well, and we're continuing to refine the import to have greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline of when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.
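
    As a rough illustration of the numbering scheme described above, here is a hedged sketch (the helper functions are hypothetical, not part of the actual import tooling):

```python
# Hypothetical sketch of the JDK bug numbering rule described above:
# legacy Sun bugs keep their numbers (1000000-7999999), while bugs
# created after the migration are numbered starting at 8000000.
LEGACY_MIN, LEGACY_MAX = 1000000, 7999999
NEW_BUG_START = 8000000

def jira_key(bug_number):
    """Map a bug number to its JIRA issue key in the JDK project."""
    return "JDK-%d" % bug_number

def is_legacy(bug_number):
    """True for bugs imported from the Sun legacy system."""
    return LEGACY_MIN <= bug_number <= LEGACY_MAX

print(jira_key(4040458))   # JDK-4040458 (imported Sun bug)
print(is_legacy(4040458))  # True
print(is_legacy(8000123))  # False: created after the migration
```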


  • ArchBeat Link-o-Rama Top 10 for December 9-15, 2012

    - by Bob Rhubart
    You click, we listen. The following list reflects the Top 10 most popular items posted on the OTN ArchBeat Facebook page for the week of December 9-15, 2012.

    • DevOps Basics II: What is Listening on Open Ports and Files – WebLogic Essentials | Dr. Frank Munz
    "Can you easily find out which WebLogic servers are listening to which port numbers and addresses?" asks Dr. Frank Munz. The good doctor has an answer—and a tech tip.

    • Using OBIEE against Transactional Schemas Part 4: Complex Dimensions | Stewart Bryson
    "Another important entity for reporting in the Customer Tracking application is the Contact entity," says Stewart Bryson. "At first glance, it might seem that we should simply build another dimension called Dim – Contact, and use analyses to combine our Customer and Contact dimensions along with our Activity fact table to analyze Customer and Contact behavior."

    • SOA 11g Technology Adapters – ECID Propagation | Greg Mally
    "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges.

    • Podcast: DevOps and Continuous Integration
    In Part 1 of a 3-part program, panelists Tim Hall (Senior Director of product management for Oracle Enterprise Repository and Oracle's Application Integration Architecture), Robert Wunderlich (Principal Product Manager for Oracle's Application Integration Architecture Foundation Pack) and Peter Belknap (Director of product management for Oracle SOA Integration) discuss why DevOps matters and how it changes development methodologies and organizational structure.

    • Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis shares his thoughts -- and a sample application -- dealing with an "interesting ADF behavior" encountered over the weekend.

    • Cloud Deployment Models | B. R. Clouse
    Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes."

    • Service governance morphs into cloud API management | David Linthicum
    "When building and using clouds, the ability to manage APIs or services is the single most important item you can provide to ensure the success of the project," says David Linthicum. "But most organizations driving a cloud project for the first time have no experience handling a service-based architecture and don't see the need for API management until after deployment. By then, it's too late."

    • Oracle Fusion Middleware Security: Password Policy in OAM 11g R2 | Rob Otto
    Rob Otto continues the Oracle Fusion Middleware A-Team "Oracle Access Manager Academy" series with a detailed look at OAM's ability to support "a subset of password management processes without the need to use Oracle Identity Manager and LDAP Sync."

    • Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar
    Could you call that a surprise ending?
Oracle WebCenter & ADF Architecture Team (A-Team) member learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be succesful with ADF." Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post. Thought for the Day "There is no code so big, twisted, or complex that maintenance can't make it worse." — Gerald Weinberg Source: SoftwareQuotes.com


  • Does your analytic solution tell you what questions to ask?

    - by Manan Goel
    Analytic solutions exist to answer business questions. Conventional wisdom holds that if you can answer business questions quickly and accurately, you can make better business decisions, and therefore achieve better business results and outperform the competition. Most business questions are well understood (read: structured), so they are relatively easy to ask and answer. Questions like what were the revenues, cost of goods sold, and margins, and which regions and products outperformed or underperformed, are relatively well understood, and as a result most analytics solutions are well equipped to answer them.

    Things get really interesting when you are looking for answers but you don't know what questions to ask in the first place. That's like an explorer looking to make new discoveries by exploration. An example of this scenario is the Centers for Disease Control and Prevention (CDC) in the United States trying to find the vaccine for the latest strain of the swine flu virus. The researchers at the CDC may try hundreds of options before finally discovering the vaccine. The exploration process is inherently messy and complex: it is fraught with false starts, one question or a hunch leads to another, and the final result may look entirely different from what was envisioned in the beginning. Speed and flexibility are key; speed so the hundreds of possible options can be explored quickly, and flexibility because almost everything about the problem, the solutions, and the process is unknown.

    Come to think of it, most organizations operate in an increasingly unknown or uncertain environment. Business leaders have to make decisions based on a largely unknown view of the future. And since the value proposition of analytic solutions is to help business leaders take better business decisions, for best results, consider adding information exploration and discovery capabilities to your analytic solution. Such exploratory analysis capabilities will help business leaders perform even better by empowering them to refine their hunches, ask better questions, and take better decisions. That's your analytic system not only answering the questions but also suggesting what questions to ask in the first place.

    Today, most leading analytic software vendors offer exploratory analysis products as part of their analytic solution offerings. So, what characteristics should be top of mind while evaluating the various solutions? The answer is quite simply the same characteristics that are essential for exploration and analysis: speed and flexibility. Speed is required because the system has to be agile enough to handle hundreds of different scenarios with large volumes of data across large user populations; exploration happens at the speed of thought, so make sure that your system is capable of operating at the speed of thought. Flexibility is required because the exploration process from start to finish is full of unknowns: unknown questions, answers, and hunches. So, make sure that the system is capable of managing and exploring all relevant data, structured or unstructured - databases, enterprise applications, tweets, social media updates, documents, texts, emails, etc. - and provides a flexible, Google-like user interface to quickly explore all relevant data.

    Getting Started

    You can help business leaders become "Decision Masters" by augmenting your analytic solution with information discovery capabilities. For best results, make sure that the solution you choose is enterprise class and allows advanced, yet intuitive, exploration and analysis of complex and varied data, including structured, semi-structured and unstructured data. You can learn more about Oracle's exploratory analysis solutions by clicking here.


  • What distinguishes a computer scientist/software engineer from regular people who learn programming languages and APIs?

    - by Amumu
    In university, we learn and reinvent the wheel a lot in order to truly learn the programming concepts. For example, we may learn assembly language to understand what happens inside the box and how the system operates when we execute our code. This helps us understand higher-level concepts more deeply. For example, memory management like in C is just an abstraction over manually managed memory contents and addresses.

    The problem is, when we go to work, productivity is usually what is required. I could program my own containers, or string class, or date/time (using POSIX with C system calls) to do the job, but it would take much longer than using the existing STL or Boost libraries, which abstract all of those things away and are very easy to use. This leads to an issue: a regular person doesn't need to get through all the low-level, under-the-hood stuff. Someone who learns only one programming language and its APIs may eventually compete with the mainstream graduates from computer science or software engineering and call themselves a programmer.

    At first, I didn't think it was valid to call them programmers. I used to think a real programmer needs to understand the computer deeply (though not at the electronic level). But then I changed my mind. After all, they get the job done and satisfy all the test criteria (logic, performance, security...), and in a business environment, who cares whether you're an expert who understands how the computer works? You may even fall behind the "amateurs" if you spend too much time learning how things work inside. It is totally valid for those people to call themselves programmers.

    This confuses me. So, after all, should programming be considered a universal skill? Do the programming language and concepts matter, or do the problems we solve matter? For example, consider C/C++ vs Java and other high-level languages. One of the main reasons people choose C/C++ is performance, as well as access to low-level facilities. Another main reason, in my opinion, is that coding in C/C++ seems complex, so people feel good about it (not trolling anyone; just my observation, and my experience as well. Try to google "C hacker syndrome"). Java, on the other hand, was made to simplify programming tasks and help developers concentrate on solving their problems. Based on Java's rationale, if programming languages keep evolving, one day everyone may be able to map their logic directly to natural language. Everyone could program. On that day, maybe the real programmers will be mathematicians, who can express the most complex logic (including business logic and academic logic) without worrying about installing and configuring compilers and IDEs.

    What's our job as computer scientists/software engineers? To solve computer-specific problems, or to solve problems in general? For example, take a look at this exam: http://cm.baylor.edu/ICPCWiki/attach/Problem%20Resources/2010WorldFinalProblemSet.pdf . It requires only basic knowledge of the programming language, but focuses on problem solving with the language.

    In sum, what distinguishes a computer scientist/software engineer from regular people who learn a programming language and APIs? A mathematician can be considered a programmer if he is good enough to use a programming language to implement his formulas. Could we programmers do the reverse? Probably not, for most of us, since we specialize in computers, not math. An electronic engineer who learns how to use C to program his devices can be considered a programmer.
If programming languages keep being simplified, might the software engineers who implement business logic and create software one day be obsolete? (Not computer scientists, though, since many of the CS topics are scientific, and the science won't change, but the technology will.)


  • World Record Siebel PSPP Benchmark on SPARC T4 Servers

    - by Brian
    Oracle's SPARC T4 servers set a new world record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers.

    The SPARC T4 servers running the Siebel PSPP 8.1.1.4 workload, which includes Siebel Call Center and Order Management System, demonstrate the impressive throughput performance of the SPARC T4 processor by achieving 29,000 users. This is the first Siebel PSPP 8.1.1.4 benchmark supporting 29,000 concurrent users, at a rate of 239,748 Business Transactions/hour. The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers.

    Performance Landscape

    Systems:
    • 1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) - Web
    • 3 x SPARC T4-2 (2 x SPARC T4, 2.85 GHz) - App/Gateway
    • 1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) - DB

    Throughput: 239,748 Business Transactions/hour with 29,000 users
    Response times (sec): 0.165 (Call Center), 0.925 (Order Management)
    Transactions (Call Center + Order Management): 197,128 + 42,620
    Users (Call Center + Order Management): 20,300 + 8,700

    Configuration Summary

    Web Server Configuration:
    • 1 x SPARC T4-1 server: 1 x SPARC T4 processor, 2.85 GHz; 128 GB memory
    • Oracle Solaris 10 8/11
    • iPlanet Web Server 7

    Application Server Configuration:
    • 3 x SPARC T4-2 servers, each with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 3 x 300 GB SAS internal disks
    • Oracle Solaris 10 8/11
    • Siebel CRM 8.1.1.5 SIA

    Database Server Configuration:
    • 1 x SPARC T4-1 server: 1 x SPARC T4 processor, 2.85 GHz; 128 GB memory
    • Oracle Solaris 11 11/11
    • Oracle Database 11g Release 2 (11.2.0.2)

    Storage Configuration:
    • 1 x Sun Storage F5100 Flash Array (80 x 24 GB flash modules)

    Benchmark Description

    The Siebel 8.1 PSPP benchmark includes Call Center and Order Management:

    Siebel Financial Services Call Center - Provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling. High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. Three complex business transactions are executed simultaneously for a specific number of concurrent users. The ratios of these 3 scenarios were 30%, 40%, and 30% respectively, together totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next, the think time averaged approximately 10, 13, and 35 seconds respectively.

    Siebel Order Management - Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications, allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process. High-level description of the use cases tested: Order & Order Items Creation and Order Updates. Two complex Order Management transactions were executed simultaneously for a specific number of concurrent users, concurrently with the three Call Center scenarios above. The ratio of these 2 scenarios was 50% each, together totaling 30% of all transactions simulated in this benchmark.
Between each user operation and the next, the think time averaged approximately 20 and 67 seconds respectively.

    Key Points and Best Practices

    No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects.

    See Also
    • Siebel White Papers
    • SPARC T4-1 Server - oracle.com, OTN
    • SPARC T4-2 Server - oracle.com, OTN
    • Siebel CRM - oracle.com, OTN
    • Oracle Solaris - oracle.com, OTN
    • Oracle Database 11g Release 2 Enterprise Edition - oracle.com, OTN

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.


  • More Stuff less Fluff

    - by brendonpage
    Originally posted on: http://geekswithblogs.net/brendonpage/archive/2013/11/08/more-stuff-less-fluff.aspx

    YAGNI - "You Aren't Going To Need It". This is an acronym commonly used in software development to remind developers to only write what they need. It exists because software developers have gotten into the habit of writing everything they need to solve a problem, and then everything they think they're going to possibly need in the future. Since we can't predict the future, this results in a large portion of the code that we write never being used. That extra code causes unnecessary complexity, which makes it harder to understand and harder to modify when we inevitably have to write something that we didn't think of.

    I've known about YAGNI for some time now but I never really got it. The words made sense and the idea was clear, but the concept never sank in. I was one of those devs who'd happily write a ton of code in anticipation of future needs. In my mind this was an essential part of writing high quality code. I didn't realise that in doing so I was actually writing low quality code.

    If you are anything like me you are probably thinking "Lies and propaganda! High quality code needs to be future proof." I agree! But what makes code future proof? If we could see into the future the answer would be simple: code that allows for or meets all future requirements. Since we can't see the future, the best we can do is write code that can easily adapt to future requirements, which means writing flexible code. Flexible code is:
    • Fast to understand.
    • Fast to add to.
    • Fast to modify.

    To be flexible, code has to be simple, which means only making it as complex as it needs to be to meet those 3 criteria. That is high quality code. YAGNI! The art is in deciding where to place the seams (abstractions) that will give you flexibility without making decisions about future functionality. Robert C. Martin explains it very nicely: he says a good architecture allows you to defer decisions, because if you can defer a decision then you have the flexibility to change it.

    I've recently had a YAGNI experience which brought this all into perspective. I was working on a new project which had multiple clients that connect to a server hosted in the cloud. I was tasked with adding a feature to the desktop client that would allow users to capture items that would then be saved to the cloud. My immediate thought was "Hey, we have multiple clients, so I should build a web service for these items; that way we can access them from other clients", so I went to work, and this is what I created.

    I stood back and gazed upon what I'd created with a warm fuzzy feeling. It was beautiful! Then the time came for the team to use the design I'd created for another feature with a new entity. Let's just say that they didn't get the same warm fuzzy feeling that I did when they looked at the design. After much discussion they eventually got it through to me that I'd bloated the design based on an assumption of future functionality. After much more discussion we cut the design down to the following.

    This design gives us future flexibility with no extra work; it is as complex as it needs to be. It has been a couple of months since this incident and we still haven't needed to access either of the entities from other clients. Using the simpler design allowed us to do more stuff with less stuff!

    Read the article

  • The Rise of Project Intelligence and Why It Matters

    - by Melissa Centurio Lopes
    By Amy DeWolf
    Are you doing any of these in your organization? How are you leveraging historical data to forecast projects? There's a lot going on in government today. The economic pressures agencies feel from the uncertainty of budget cuts and sequestration affect every part of an organization, including the Project Management Office (PMO). The PMO is responsible for monitoring and administering government IT projects. As time goes on, priorities shift, technology advances, and new regulations are imposed, all of which make planning and executing projects more difficult. For example, think about your own projects. How many boxes do you need to check and hoops do you need to jump through to ensure you comply with new regulations? While new regulations and technology advancements can be a good thing, they add an additional layer of complexity to already complex projects. To overcome some of these pressures, particularly new regulations, many in the PMO world are adopting a new approach: Project Intelligence (PI). According to a new Oracle Primavera white paper, The Rise of Project Intelligence: When Project Management is Just Not Enough, "PI uses Business Intelligence methods to leverage historical project data to make more informed decisions and greatly enhance project execution." Currently, project managers plan and forecast the possible phases in an execution cycle. However, most project managers don't have the proper tools to do this as effectively as they would like. As the white paper noted, "The underlying deficiencies in most forecasting approaches are that 1) the PM fails in most instances to leverage historical data and 2) the PM doesn't employ current Business Intelligence tools." PI seeks to overturn this by combining modeling tools used in Business Intelligence for projects with the understanding of Emotional Intelligence for managing people. Simply put, Project Intelligence is built on four main pillars:
    Actively use historical data to forecast project cycles
    Understand the intricacies of complex projects
    Enhance social and emotional intelligence in projects
    Actively use Business Intelligence tools
    Read our complimentary white paper and discover the importance of emotional intelligence and best practices for improving projects, specifically in terms of communication.

    Read the article

  • How much is a subscriber worth?

    - by Tom Lewin
    This year at Red Gate, we’ve started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it’s theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we’ve made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn’t that easy. The money and product don’t change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don’t know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis. What is lifetime value? Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write up that we followed to conduct our analysis. Basically, it all boils down to this equation: LTV = ARPU x ALC To make it a bit less of an alphabet-soup and a bit more understandable, we can write it out in full: The lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer! Average spend is easy to work out; it’s revenue divided by customers. The problem comes when we realise that we don’t know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months’ worth of data? The answer lies in the fact that: Average Lifetime of a Customer = 1 / Churn Rate The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn’t have enough data to make an estimate of one month’s cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above, we just need to multiply the final result by three (as we worked out how many three month periods customers stay for, and we want our answer to be in months). Now these estimates are likely to be fairly unreliable; when there’s not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it’s super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we’re looking for!
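    To make the arithmetic concrete, here is a minimal sketch of the LTV calculation in Python. It is illustrative only: the function name is invented, and the numbers are the post's example figures, not real Red Gate data.

    def lifetime_value(monthly_revenue, customers, cancellations, window_months=3):
        """LTV = ARPU x average customer lifetime, where lifetime = 1 / churn."""
        arpu = monthly_revenue / customers             # average revenue per customer per month
        churn = cancellations / customers              # fraction of customers cancelling in the window
        lifetime_months = (1 / churn) * window_months  # lifetimes are counted in windows, so convert to months
        return arpu * lifetime_months

    # $50/month average spend with a 10% three-month churn rate:
    print(lifetime_value(5000, 100, 10))   # 50 * 30 months = 1500.0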

    Read the article

  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry. My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a known good state. Scripts like this fail for hundreds of reasons -- the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to roll back fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level. Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks -- much like exception handlers -- around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience. Scenario 1: You want to get a system into a known good state; it's already in a known good state; you quickly realise there is nothing to do. Scenario 2: You want to get the system into a known good state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is and why. Scenario 3: You want to get the system into a known good state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it again. Scenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof. I mentioned two technologies in the title -- MSI and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to roll back to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well an MSI will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states.
The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on operational guidance as on your configuration remit. What I wanted to say is that I have always been for atomic, transaction-based configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
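    To show the shape of the "make it so" pattern outside any particular toolchain, here is a minimal sketch in Python -- not MSDeploy, MSI, or PowerShell, and ensure_directory is an invented name for the example. Each configuration activity is wrapped in a check of the current state, and a -WhatIf-style dry run reports instead of acting:

    import os

    def ensure_directory(path, what_if=False):
        """IF the directory doesn't exist THEN create it; otherwise there is nothing to do."""
        if os.path.isdir(path):
            return path + ": already in the desired state"
        if what_if:
            return path + ": would be created"   # dry run: report the change without making it
        os.makedirs(path)
        return path + ": created"

    # Re-running is safe: the second call finds nothing to change -- there's your proof.
    print(ensure_directory("build/shares"))
    print(ensure_directory("build/shares", what_if=True))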

    Read the article

  • Minimum team development sizes

    - by MarkPearl
    Disclaimer - these are observations that I have had; I am not sure if this follows the philosophy of scrum, agile or whatever, but most of these insights were gained while implementing a scrum scenario.
    Two is a partnership, three starts a team
    For a while I thought that a team was anything more than one and that scrum could be an effective methodology with even two people. I have recently adjusted my thinking to a scrum team being a minimum of three, so what happened to two and what do you call it? For me, a group of two people working together is a partnership - there is value in having a partnership, but some of the dynamics and value that you get from having a team are lost with a partnership.
    Avoidance of a one-on-one confrontation
    The first dynamic I see missing in a partnership is the team motivation to do better and how this is delivered to individuals that are not performing. Take two highly motivated individuals, put them together, and you will typically see them continue to perform. Now take a situation where you have two individuals, one performing and one not, and the behaviour is totally different compared to a team of three or more individuals. With two people, if one feels the other is not performing it becomes a one-on-one confrontation. Most people avoid confrontations, and so nothing changes. Compare this to a situation where you have three people in a team, two performing and one not: the dynamic is totally different. It is no longer a personal one-on-one confrontation but a team concern, and people seem more willing to encourage the individual not performing and express their dissatisfaction as a team if they do not improve.
    Avoiding the effects of Tuckman's Group Development Theory
    If you are not familiar with Tuckman's group development theory, give it a read (http://en.wikipedia.org/wiki/Tuckman's_stages_of_group_development). In a nutshell, with Tuckman's theory teams go through the stages of Forming, Storming, Norming & Performing. You want your team to reach and remain in the Performing stage for as long as possible - this is where you get the most value. When you have a partnership of two and you change the individuals in the partnership, you basically do a hard reset on the partnership and go back to the beginning of Tuckman's model each time. This has a major effect on the performance of a team and what they can deliver. What I have seen is that you reduce the effects of Tuckman's theory the more individuals you have in the team (until you hit the maximum team size, at which point other problems kick in). While you will still experience Tuckman's theory with a team of three, the impact will be greatly reduced compared to two, where it is guaranteed every time a change occurs.
    It's not just in the numbers, it's in the people
    One final comment - while the actual numbers of a team do play a role, the individuals in the team are even more important - ideally you want to keep individuals working together for an extended period. That doesn't mean that you never change the individuals in a team, or that once someone joins a team they are stuck there - there is value in an individual moving from team to team and getting cross-pollination, but the period between an individual's moves should be measured in months or years, not days or weeks.
    Why?
    So why is it important to know this? Why is it important to know how a team works and what motivates them?
I have been asking myself this question for a while, and where I am at right now is this… the aim is to achieve the stage where the whole (the team) is greater than the sum of the parts (the team members). This is why we form teams, and why understanding how they work is a challenge and also extremely stimulating.

    Read the article

  • Dependency injection: How to sell it

    - by Mel
    Let it be known that I am a big fan of dependency injection (DI) and automated testing. I could talk all day about it.
    Background
    Recently, our team got this big project that is to be built from scratch. It is a strategic application with complex business requirements. Of course, I wanted it to be nice and clean, which for me meant: maintainable and testable. So I wanted to use DI.
    Resistance
    The problem was that in our team, DI is taboo. It has been brought up a few times, but the gods do not approve. But that did not discourage me.
    My Move
    This may sound weird, but third-party libraries are usually not approved by our architect team (think: "thou shalt not speak of Unity, Ninject, NHibernate, Moq or NUnit, lest I cut your finger"). So instead of using an established DI container, I wrote an extremely simple container. It basically wired up all your dependencies on startup, injected any dependencies (constructor/property), and disposed of any disposable objects at the end of the web request. It was extremely lightweight and just did what we needed. And then I asked them to review it.
    The Response
    Well, to make it short, I was met with heavy resistance. The main argument was, "We don't need to add this layer of complexity to an already complex project". Also, "It's not like we will be plugging in different implementations of components". And "We want to keep it simple; if possible, just stuff everything into one assembly. DI is an unneeded complexity with no benefit".
    Finally, My Question
    How would you handle my situation? I am not good at presenting my ideas, and I would like to know how people would present their arguments. Of course, I am assuming that like me, you prefer to use DI. If you don't agree, please do say why, so I can see the other side of the coin. It would be really interesting to see the point of view of someone who disagrees.
    Update
    Thank you for everyone's answers. It really puts things into perspective. It's nice to have another set of eyes to give you feedback, and fifteen is really awesome! These are really great answers and helped me see the issue from different sides, but I can only choose one answer, so I will just pick the top-voted one. Thanks everyone for taking the time to answer. I have decided that it is probably not the best time to implement DI, and we are not ready for it. Instead, I will concentrate my efforts on making the design testable and attempt to present automated unit testing. I am aware that writing tests is additional overhead, and if it is ever decided that the additional overhead is not worth it, personally I would still see it as a win, since the design is still testable. And if testing or DI is ever chosen in future, the design can easily handle it.
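    For readers who haven't seen one, the kind of tiny container described above can be sketched in a handful of lines. This is a minimal illustration in Python rather than the poster's .NET code, and the Container/register/resolve names are invented: a toy constructor-injection container that recursively builds annotated dependencies.

    import inspect

    class Container:
        """Toy DI container: register bindings, then resolve constructor dependencies recursively."""
        def __init__(self):
            self._bindings = {}

        def register(self, abstract, concrete):
            self._bindings[abstract] = concrete

        def resolve(self, abstract):
            concrete = self._bindings.get(abstract, abstract)
            kwargs = {}
            # Inspect the constructor and build each annotated parameter via the container.
            for name, param in inspect.signature(concrete.__init__).parameters.items():
                if name == "self" or param.annotation is inspect.Parameter.empty:
                    continue
                kwargs[name] = self.resolve(param.annotation)
            return concrete(**kwargs)

    class Database: pass

    class OrderService:
        def __init__(self, db: Database):
            self.db = db

    service = Container().resolve(OrderService)   # Database is constructed and injected
    print(type(service.db).__name__)              # -> Database

    A real container would also manage lifetimes and disposal, as the poster's did, but the core idea fits on a slide -- which can itself be an argument when selling DI.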

    Read the article

  • Intern Screening - Software 'Quiz'

    - by Jeremy1026
    I am in charge of selecting a new software development intern for a company that I work with. I wanted to throw a little 'quiz' at the applicants before moving forward with interviews, so as to weed out the group a little bit and find some people who can demonstrate some skill. I put together the following quiz to send to applicants. It focuses only on PHP, but that is because that is what about 95% of the work will be done in. I'm hoping to get some feedback on A. if it's a good idea to send this to applicants and B. if it can be improved upon.

    # 1. FizzBuzz
    # Write a small application that does the following:
    # Counts from 1 to 100
    # For multiples of 3 output "Fizz"
    # For multiples of 5 output "Buzz"
    # For multiples of 3 and 5 output "FizzBuzz"
    # For numbers that are not multiples of 3 nor 5 output the number.
    <?php ?>

    # 2. Arrays
    # Create a multi-dimensional array that contains
    # keys for 'id', 'lot', 'car_model', 'color', 'price'.
    # Insert three sets of data into the array.
    <?php ?>

    # 3. Comparisons
    # Without executing the code, tell if the expressions
    # below will return true or false.
    <?php
    if ((strpos("a","abcdefg")) == TRUE) echo "True"; else echo "False"; //True or False?
    if ((012 / 4) == 3) echo "True"; else echo "False"; //True or False?
    if (strcasecmp("abc","ABC") == 0) echo "True"; else echo "False"; //True or False?
    ?>

    # 4. Bug Checking
    # The code below is flawed. Fix it so that the code
    # runs properly without producing any Errors, Warnings
    # or Notices, and returns the proper value.
    <?php
    //Determine how many parts are needed to create a 3D pyramid.
    function find_3d_pyramid($rows) {
        //Loop through each row.
        for ($i = 0; $i < $rows; $i++) {
            $lastRow++;
            //Append the latest row to the running total.
            $total = $total + (pow($lastRow,3));
        }
        //Return the total.
        return $total;
    }
    $i = 3;
    echo "A pyramid consisting of $i rows will have a total of ".find_3d_pyramid($i)." pieces.";
    ?>

    # 5. Quick Examples
    # Create a small example to complete the task
    # for each of the following problems.
    # Create an md5 hash of "Hello World";
    # Replace all occurrences of "_" with "-" in the string "Welcome_to_the_universe."
    # Get the current date and time, in the following format: YYYY/MM/DD HH:MM:SS AM/PM
    # Find the sum, average, and median of the following set of numbers: 1, 3, 5, 6, 7, 9, 10.
    # Randomly roll a six-sided die 5 times. Store the 5 rolls in an array.
    <?php ?>

    Read the article

  • Oracle Applications Cloud Release 8 Customization: Your User Interface, Your Text

    - by ultan o'broin
    Introducing the User Interface Text Editor
    In Oracle Applications Cloud Release 8, there's an addition to the customization tool set, called the User Interface Text Editor (UITE). When signed in with an application administrator role, users launch this new editing feature from the Navigator's Tools > Customization > User Interface Text menu option. See how the editor is in there with the other customization tools?
    Figure: User Interface Text Editor is launched from the Navigator Customization menu
    Applications customers need a way to make changes to the text that appears in the UI, without having to initiate an IT project. Business users can now easily change labels on fields, for example. Using a composer and an activated sandbox, these users can take advantage of the Oracle Metadata Services (MDS), add a key to a text resource bundle, and then type in their preferred label and its description (as a best practice for further work, I'd recommend always completing that description).
    Figure: Changing a simplified UI field label using Oracle Composer
    In Release 8, the UITE enables business users to easily change UI text on a much wider basis. As with composers, the UITE requires an activated sandbox where users can make their changes safely, before committing them for others to see. The UITE is used for editing UI text that comes from Oracle ADF resource bundles or from the Message Dictionary (or FND_MESSAGE_% tables, if you're old enough to remember such things). Functionally, the Message Dictionary is used for the text that appears in business rule-type error, warning or information messages, or as a text source when ADF resource bundles cannot be used. In the UITE, these Message Dictionary texts are referred to as Multi-part Validation Messages. If the text comes from ADF resource bundles, then it's categorized as User Interface Text in the UITE. This category refers to the text that appears in embedded help in the UI or in simple error, warning, confirmation, or information messages. The embedded help types used in the application are explained in an Oracle Fusion Applications User Experience (UX) design pattern set. The message types have a UX design pattern set too.
    Using the UITE
    The UITE enables users to search and replace text in UI strings using case-sensitive options, as well as by type. Users select singular and plural options for text changes, should they apply.
    Figure: Searching and replacing text in the UITE
    The UITE also provides users with a way to preview and manage changes on an exclusion basis, before committing to the final result. There might, for example, be situations where a phrase or word needs to remain different from how it's generally used in the application, depending on the context.
    Figure: Previewing replacement text changes. Changes can be excluded where required.
    Multi-Part Messages
    The Message Dictionary table architecture has been inherited from Oracle E-Business Suite days. However, there are important differences in the Oracle Applications Cloud version, notably the additional message text components, as explained in the UX Design Patterns. Message Dictionary text has a broad range of uses as indicated, and it can also be reserved for internal application use, for use by PL/SQL and C programs, and so on. Message Dictionary text may even be concatenated together at run time, where required. The UITE handles the flexibility of such text architecture by enabling users to drill down on each message and see how it's constructed in total. That way, users can ensure that any text changes being made are consistent throughout the different message parts.
    Figure: Multi-part (Message Dictionary) message components in the UITE
    Message Dictionary messages may also use supportability-related numbers, the ones that appear appended to the message text in the application's UI. However, should you have the requirement to remove these numbers from users' view, the UITE is not the tool for the job. Instead, see my blog about using the Manage Messages UI.

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 Edition of the Oracle Information InDepth Newsletter ORACLE WEBCENTER EDITION
    Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences.
    A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions.
    Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives—plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention.
    The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user's behavior, and explicit targeting that takes a user's profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer.
    Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence—and share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation.
    The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices that are in use today.
Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives while also managing to minimize the time and effort required to manage mobile sites. Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.

    Read the article

  • WPF: Draw a grid on a Canvas?

    - by stefan.at.wpf
    Hello community, I'm currently trying to add gridlines to a canvas. I need an exact free space between them, where I'd like to place something for hit detection per cell, maybe simply a transparent border or some such thing. While I thought this would be an easy thing, I'm facing problems like antialiasing, and lines in WPF aren't very calculation-friendly / exact to draw - e.g. if I draw a line at x=20 with a thickness of 10, the line's width goes from x=15 to x=25 (maybe not exactly, but something like that), so it takes the given position as its middle point - if it drew from 20 to 30 it would be easier in my case. Besides that making things more complex, how does WPF handle e.g. a thickness of 5? Draw thickness 3 left of the given point and the remaining 2 right of it? Or maybe just the opposite way? Well, I just wanted to show you which problems I have, though this may all seem simple to do. Just wondering if anyone has ever done this before. Currently I find that a Border without content and just 2 sides set to a thickness greater than 0 seems to work best as a line in my tests; it seems clear where they are drawn, and they don't seem to cause any antialiasing problems. Just wondering if there's a more intuitive / better way of doing this? I don't want to lay a Canvas over a Grid; I think this may make some things more complex in the end (by the way: how would I place a Canvas on top of a Grid?). Thanks for any hint!
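    For what it's worth, the centring behaviour described above is easy to reason about numerically. Here is a tiny sketch of the arithmetic -- plain Python rather than WPF code, with invented function names -- showing where a centred stroke lands and where to place a line so that its left edge starts at a given x:

    def stroke_extent(center, thickness):
        # A stroke is centred on its geometry: a vertical line at x spans [x - t/2, x + t/2].
        return (center - thickness / 2, center + thickness / 2)

    def center_for_left_edge(left, thickness):
        # To make the stroke cover [left, left + t], draw the line at left + t/2.
        return left + thickness / 2

    print(stroke_extent(20, 10))          # (15.0, 25.0), the example from the question
    print(center_for_left_edge(20, 10))   # 25.0 -- drawing at x=25 makes the stroke span [20, 30]

    An odd thickness such as 5 splits symmetrically into 2.5 on each side, which is one source of the half-pixel antialiasing blur; offsetting by half the thickness, or enabling properties like SnapsToDevicePixels or UseLayoutRounding, are the commonly suggested remedies.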

    Read the article

  • Javascript object dependencies

    - by Anurag
    In complex client-side projects, the number of Javascript files can get very large. However, for performance reasons it's good to concatenate these files and compress the resulting file for sending over the wire. I am having problems concatenating these, as in some cases the dependencies are included after they are needed. For instance, there are 2 files:
    /modules/Module.js   <requires Core.js>
    /modules/core/Core.js
    The directories are recursively traversed, and Module.js gets included before Core.js, which causes errors. This is just a simple example where dependencies could span across directories, and there could be other complex cases. There are no circular dependencies though. The Javascript structure I follow is similar to Java packages, where each file defines a single Object (I'm using MooTools, but that's irrelevant). The structure of each javascript file and the dependencies is always consistent:
    Module.js
    var Module = new Class({
        Implements: Core,
        ...
    });
    Core.js
    var Core = new Class({
        ...
    });
    What practices do you usually follow to handle dependencies in projects where the number of Javascript files is huge, and there are inter-file dependencies?
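    One way to attack the ordering problem, sketched here in Python as a build-step illustration: parse each file's "<requires ...>" headers (the convention from the question) and emit the files in topologically sorted order before concatenating. The function names are invented, and there is no cycle handling because the question rules circular dependencies out.

    import os, re

    def concatenation_order(files):
        """Return files ordered so that anything a file <requires> comes before it."""
        requires = {}
        for path in files:
            with open(path, encoding="utf-8") as f:
                head = f.read(1024)                        # headers sit at the top of each file
            requires[path] = re.findall(r"<requires\s+([\w./]+)\s*>", head)
        by_name = {os.path.basename(p): p for p in files}  # lets 'Core.js' match across directories
        order, seen = [], set()
        def visit(path):
            if path in seen:
                return
            seen.add(path)
            for dep in requires[path]:
                if dep in by_name:                         # depth-first: dependencies come first
                    visit(by_name[dep])
            order.append(path)
        for path in sorted(files):
            visit(path)
        return order

    # e.g. concatenation_order(["modules/Module.js", "modules/core/Core.js"])
    # -> ["modules/core/Core.js", "modules/Module.js"]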

    Read the article

  • Android SeekBar Minimum and Continuous Float Value Attributes

    - by fordays
    I'm looking for a way to implement a minimum value in my SeekBar and also have the option to increment in decimal steps. For example, currently my SeekBar's minimum is set to 0, but I need it to start at the value 0.2. Also, I would like the user to be able to select a number from 0.2 to 10.0 at a .1 precision, so they can choose numbers like 5.6 or 7.1. Here are the style attributes for my SeekBar:
    <style name="SeekBar">
        <item name="android:paddingLeft">30dp</item>
        <item name="android:paddingRight">30dp</item>
        <item name="android:layout_width">fill_parent</item>
        <item name="android:layout_height">wrap_content</item>
        <item name="android:layout_marginTop">0dp</item>
        <item name="android:layout_marginBottom">0dp</item>
        <item name="android:max">10</item>
        <item name="android:progress">1</item>
    </style>
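    A stock SeekBar only reports integer progress starting at zero, so the usual workaround is to scale: set android:max to (max - min) / step and map the integer progress back to a float in your OnSeekBarChangeListener. A sketch of the arithmetic, in Python for brevity (the constants are the question's 0.2-10.0 range at 0.1 steps):

    MIN_VALUE, MAX_VALUE, STEP = 0.2, 10.0, 0.1
    bar_max = round((MAX_VALUE - MIN_VALUE) / STEP)   # 98 -- the value to use for android:max

    def progress_to_value(progress):
        # round(..., 1) keeps floating-point noise out of the displayed value
        return round(MIN_VALUE + progress * STEP, 1)

    print(bar_max)                 # 98
    print(progress_to_value(0))    # 0.2, the minimum
    print(progress_to_value(54))   # 5.6
    print(progress_to_value(98))   # 10.0, the maximum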

    Read the article

< Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >