Search Results

Search found 23265 results on 931 pages for 'justin case'.


  • How to take a partial screen capture using Ruby?

    - by Jason
    Hi, I need to run a Ruby client that wakes up every 10 minutes, takes a screenshot of a user's screen, crops part of the screenshot out and uses OCR to check for a matching word... it's basically a program to make sure remote employees are actually working by checking that they have a specific application open and that the case numbers shown change. Not sure where to even start when it comes to taking a screenshot and cropping it; has anyone done any kind of screen capture work using Ruby? The app will run on OS X using Ruby 1.9. Thanks!
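
    Not an answer, just a minimal sketch of one possible approach, assuming the app really runs on OS X (so the built-in screencapture tool is available) and that a tesseract binary is installed for the OCR step; the region coordinates, paths and search word are placeholders:

      require 'tmpdir'

      # Capture a rectangular region of the screen (x, y, width, height) to a PNG
      # using OS X's screencapture tool, then OCR it with tesseract and look for a word.
      def region_contains_word?(x, y, w, h, word)
        png  = File.join(Dir.tmpdir, "region.png")
        base = File.join(Dir.tmpdir, "region")          # tesseract writes region.txt
        system("screencapture", "-x", "-R#{x},#{y},#{w},#{h}", png) or return false
        system("tesseract", png, base) or return false
        text = File.read("#{base}.txt") rescue ""
        text.include?(word)
      end

      loop do
        puts region_contains_word?(100, 100, 400, 200, "CASE")
        sleep 600                                        # wake up every 10 minutes
      end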

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview

    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

      - Different record types in one list that use different parsing rules
      - Hierarchical lists, for example customers with nested orders
      - Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers
      - Complex headers such as multiple header lines or parseable information in header
      - Skipping of lines
      - Conditional or choice fields

    Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) file technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy as well as order values to maintain the file order in the resulting table.

    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation of the native file based on memory size, the XML data is never fully materialized. The internal XML is then converted to relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD file

    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

      <?xml version="1.0"?>
      <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                  xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                  targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                  elementFormDefault="qualified" attributeFormDefault="unqualified"
                  nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
        <xsd:element name="Root">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="Header">
                <xsd:complexType>
                  <xsd:sequence>
                    <xsd:element name="Branch"   type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                    <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                  </xsd:sequence>
                </xsd:complexType>
              </xsd:element>
              <xsd:element name="Customer" maxOccurs="unbounded">
                <xsd:complexType>
                  <xsd:sequence>
                    <xsd:element name="Name"   type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                    <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                    <xsd:element name="City"   type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                  </xsd:sequence>
                </xsd:complexType>
              </xsd:element>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator chars. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and many more. nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to “approximate” it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches.

    The nXSD schema in this example describes a file with a header row containing data and 3 string fields per row delimited by commas, for example:

      Redwood City Downtown Branch, 06/01/2011
      Ebeneezer Scrooge, Sandy Lane, Atherton
      Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:

      Table ROOT (1 row):
        ROOTPK         Primary key for the root element
        SNPSFILENAME   Name of the file
        SNPSFILEPATH   Path of the file
        SNPSLOADDATE   Date of load

      Table HEADER (1 row):
        ROOTFK         Foreign key to the ROOT record
        ROWORDER       Order of the row in the native document
        BRANCH         Data
        BRANCHORDER    Order of Branch within the row
        LISTDATE       Data
        LISTDATEORDER  Order of ListDate within the row

      Table ADDRESS (2 rows):
        ROOTFK         Foreign key to the ROOT record
        ROWORDER       Order of the row in the native document
        NAME           Data
        NAMEORDER      Order of Name within the row
        STREET         Data
        STREETORDER    Order of Street within the row
        CITY           Data
        CITYORDER      Order of City within the row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

    How to Create a Complex File Data Server in ODI

    After creating the nXSD file and a test data file, and storing them on a local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, as well as the external database, can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but is strongly recommended for production use; ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse-engineering the model will fail. All properties of the Complex File JDBC driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models Based on a Complex File Schema

    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) for each XSD element of complex type will be created. Use of complex files as sources is straightforward; when using them as targets, it has to be made sure that all dependent tables have matching PK-FK pairs. The same applies to the XML driver as well.

    Debugging and Error Handling

    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the data server.
    If either the nXSD has an error or the data is non-compliant with the schema, an error will be displayed. Sample error message:

      Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.

    Complex File FAQ

    Is the size of the native file limited by available memory?
    No, since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Should I always use the complex file driver instead of the file driver in ODI now?
    No, use the file technology for all simple file parsing tasks, for example any fixed-length or delimited files that just have one row format and can be mapped into a simple table. Because of its narrow assumptions the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled through the file driver.

    Are we generating XML out of flat files before we write it into a database?
    We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed in Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Is the nXSD file interchangeable with SOA Suite?
    Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Can I start the Native Format Builder from ODI Studio?
    No, the Native Format Builder has to be started from a JDeveloper instance with BPEL installed. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors.

    When is the database data written back to the native file?
    Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading, so that no unnecessary write-backs occur.

    Is the nXSD metadata part of the ODI Master or Work Repository?
    No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying them or by using a network file system.

    Where can I find sample nXSD files?
    The Application Server Adapter Users Guide contains nXSD samples for various different use cases.
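
    To make the relational exposure concrete, a sketch of a query against the tables generated for the example above (ROOT, HEADER and ADDRESS with the columns listed earlier); the exact SQL dialect depends on the external staging database configured for the driver:

      SELECT h.BRANCH, h.LISTDATE, a.NAME, a.STREET, a.CITY
      FROM ROOT r
      JOIN HEADER  h ON h.ROOTFK = r.ROOTPK
      JOIN ADDRESS a ON a.ROOTFK = r.ROOTPK
      ORDER BY a.ROWORDER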

    Read the article

  • Accessing ruby counter cache

    - by Julian
    Hi all, I'm playing around with a fork of acts_as_taggable_on_steroids as a learning exercise. The version I'm looking at does some stuff I don't understand to calculate Tag counts. So I thought I'd do a version using PORC (Plain Old Rails Counters):

      class Tagging < ActiveRecord::Base #:nodoc:
        belongs_to :tag, :counter_cache => "tagging_counter_cache"
        ...

    I thought tagging_counter_cache was transparently accessed when I access tag.taggings.count but apparently not? Do I really have to access tag.tagging_counter_cache explicitly?

      >> tag.taggings.count
      SQL (0.7ms)  SELECT count(*) AS count_all FROM `taggings` WHERE (`taggings`.tag_id = 16)

    Same for size. It's cool if that's the case but just wanted to check.
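
    For what it's worth, a hedged sketch of the distinction as I understand it (behaviour varies by Rails version): count always issues a SELECT COUNT(*), size can use a counter cache, and the cached value is always readable as a plain attribute.

      tag.taggings.count          # always runs SELECT COUNT(*) against taggings
      tag.tagging_counter_cache   # reads the cached integer column directly

      # size may use the cache, but with a non-default column name some Rails
      # versions only detect the conventional taggings_count column, so either
      # rename the column or read it explicitly as above.
      tag.taggings.size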

    Read the article

  • TFS and SVN code Merge

    - by Mohanavel
    We are a small team of 8. Three are in another country; they use TFS as their source control system, the TFS server is located there, and they have only 4 licences. So we (5 developers) are using SVN as our local source control, and every 3 days we get the latest from TFS and merge the changes between SVN and TFS in both directions. This is really an overwhelming, hour-consuming task. They don't want to change TFS (not even add-ins). So what can I use to merge the code between the two source control systems? Can I use SvnBridge to merge the code without changing or modifying the TFS server? Please guide me on this. (Worst case they might go for add-ins; they will certainly not use any source control other than a Microsoft product.) I'm sitting in front of the monitor hitting F5 on this page; please save my hours.

    Read the article

  • Can you optimize this code? T-SQL

    - by Yoda
    Essentially I have three fields and I am creating a new one which is the three combined, making a mailable address; the problem is that some fields contain null values, and adding a string to a null just produces a null in SQL. So this is my code, can anyone make it any cleaner? It's still looking pretty butch!

      UPDATE [mydb].[dbo].[Account]
      SET [Billing Street] =
          CASE
              WHEN [(Billing Address 1)] is null and [(Billing Address 2)] is null THEN [(Billing Address 3)]
              WHEN [(Billing Address 1)] is null and [(Billing Address 3)] is null THEN [(Billing Address 2)]
              WHEN [(Billing Address 2)] is null and [(Billing Address 3)] is null THEN [(Billing Address 1)]
              WHEN [(Billing Address 1)] is null THEN [(Billing Address 2)] + ' ' + [(Billing Address 3)]
              WHEN [(Billing Address 2)] is null THEN [(Billing Address 1)] + ' ' + [(Billing Address 3)]
              WHEN [(Billing Address 3)] is null THEN [(Billing Address 1)] + ' ' + [(Billing Address 2)]
              ELSE [(Billing Address 1)] + ' ' + [(Billing Address 2)] + ' ' + [(Billing Address 3)]
          END
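
    One possible tidier version, a sketch that relies on string + NULL yielding NULL in T-SQL, so each missing part (plus its trailing space) collapses to an empty string via ISNULL, and the trailing space is trimmed at the end:

      UPDATE [mydb].[dbo].[Account]
      SET [Billing Street] = RTRIM(
              ISNULL([(Billing Address 1)] + ' ', '')
            + ISNULL([(Billing Address 2)] + ' ', '')
            + ISNULL([(Billing Address 3)] + ' ', ''))
      -- Note: when all three parts are NULL this yields '' rather than NULL,
      -- unlike the original CASE expression.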

    Read the article

  • JQuery preventDefault() but still add the fragment path to the URL without navigating to the fragment

    - by jdln
    My question is similar to this one but none of the answers solve my problem: Use JQuery preventDefault(), but still add the path to the URL. When a user clicks a fragment link, I need to remove the default behaviour of jumping to the fragment but still add the fragment to the URL. This code (taken from the link) will fire the animation, and then add the fragment to the URL. However the fragment is then navigated to, which in my case breaks my site.

      $("#login_link").click(function (e) {
          e.preventDefault();
          $("#login").animate({ 'margin-top': 0 }, 600, 'linear', function () {
              window.location.hash = $(this).attr('href');
          });
      });
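
    One possible workaround, sketched under the assumption that history.pushState is available (HTML5-capable browsers): update the address bar without triggering the jump, and fall back to location.hash otherwise. Note the hash is captured before the animate callback runs, because inside that callback this refers to #login rather than the clicked link:

      $("#login_link").click(function (e) {
          e.preventDefault();
          var hash = $(this).attr('href');                // e.g. "#login"
          $("#login").animate({ 'margin-top': 0 }, 600, 'linear', function () {
              if (window.history && window.history.pushState) {
                  // update the URL without scrolling to the fragment
                  window.history.pushState(null, '', hash);
              } else {
                  window.location.hash = hash;            // older browsers still jump
              }
          });
      });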

    Read the article

  • 3 dimensional bin packing algorithms

    - by BuschnicK
    I'm faced with a 3-dimensional bin packing problem and am currently conducting some preliminary research as to which algorithms/heuristics are currently yielding the best results. Since the problem is NP-hard I do not expect to find the optimal solution in every case, but I was wondering:
    1) What are the best exact solvers? Branch and Bound? What problem instance sizes can I expect to solve with reasonable computing resources?
    2) What are the best heuristic solvers?
    3) What off-the-shelf solutions exist to conduct some experiments with?

    Read the article

  • spring.net application scope repository object on loadbalanced application

    - by Bert Vandamme
    Hi, we have an application running on a load-balanced environment, let's say webservers A and B. The load balancing is on the HTTP level, so the load balancer directs each user request to one of the two webservers. The scope of the repositories in the application is managed by the Spring.NET container, and the application relies on data that can be cached by the repository (for performance reasons). In this case we can never be sure that the cached data in the repositories on both webservers is the same. Is there a mechanism in Spring.NET that can manage this kind of problem? Or is there another common approach for this kind of thing? Any ideas? Thx, Bert

    Read the article

  • android: using listpreference and retrieving key string...

    - by Allan
    I have a settings menu that pops up and in it is a ListPreference-type menu. It is associated with a settings.xml file that contains 'string-array' resources. It all works well but I don't know how to retrieve the user's preference. As an example, let's say the user picks a color (red, green, or blue). The list that I've made within my 'string-array' contains the text red, green, and blue. Within my code, I would like to do something if the user has chosen red, something else if they choose blue, etc., etc. Would I use a 'case' statement or an 'if' statement? And most importantly, how would I retrieve the user's preference - the key? (am I checking for a boolean?) Help would be appreciated... thank you...
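
    A hedged sketch of reading the stored value: a ListPreference saves the chosen entryValue as a String under its android:key (the key name "color_pref" below is an assumption, use whatever is in your settings.xml). An if/else chain on the string works fine:

      // import android.content.SharedPreferences;
      // import android.preference.PreferenceManager;
      SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);
      String color = prefs.getString("color_pref", "red");   // second argument is the default

      if ("red".equals(color)) {
          // user chose red
      } else if ("green".equals(color)) {
          // user chose green
      } else if ("blue".equals(color)) {
          // user chose blue
      }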

    Read the article

  • Null-coalescing operator and operator && in C#

    - by abatishchev
    Is it possible to use the ?? operator and the && operator together in any way in the following case:

      bool? Any
      {
          get
          {
              var any = this.ViewState["any"] as bool?;
              return any.HasValue ? any.Value && this.SomeBool : any;
          }
      }

    This means: if any is null, then this.Any.HasValue returns false; if any has a value, then the property returns that value combined with another boolean property, i.e. Any && SomeBool
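
    For what it's worth, a small sketch of how ?? and && can be combined, and why it is not quite the same as the ternary above (the null case collapses once ?? is applied):

      bool? any = this.ViewState["any"] as bool?;

      // ?? first, then &&: compiles, but a null `any` now yields false, not null.
      bool collapsed = (any ?? false) && this.SomeBool;

      // The lifted & operator keeps three-valued logic (null & true == null),
      // but null & false == false, so it is also not identical to the ternary.
      bool? lifted = any & this.SomeBool;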

    Read the article

  • Visual Studio HTML Cursor-within-HTML-Element Syntax-highlighting color

    - by David Murdoch
    I can't, for the life of me, figure out how to change the grey highlight color in the screenshot below to something that will make the text a little more legible. Does anyone know what the "Display Item" name is that I need to change? To get to the "theme editor" select Tools > Options > Environment > Fonts and Colors. I can't find what to edit. I've also looked through Tools > Options > Text Editor > HTML > Formatting to no avail. In case you are wondering, the theme is a slightly modified Coding Instinct Theme.

    Read the article

  • My Windows Service crashes with "the key does not exist in the appSettings configuration section"

    - by Greg
    There is the same question listed under http://stackoverflow.com/questions/427007/the-key-userid-does-not-exist-in-the-appsettings-configuration-section, but unfortunately none of the answers worked in my case. Everything was working fine; I checked everything in, and when I opened the solution again it started crashing on the above. I cannot find any hint of what I am doing wrong. Any ideas?

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <appSettings>
          <add key="URI" value="http://123.123.123.123:8080/smsxml/collector" />
          <add key="Provider" value="123" />
          <add key="LongCode" value="+123" />
          <add key="PassWord" value="123" />
        </appSettings>
      </configuration>
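
    One thing worth checking: for a Windows Service the runtime reads <ServiceExe>.exe.config from the service's own directory, so an app.config that only lives in a class-library project, or that was not copied on deployment, produces exactly this error even though the XML looks fine. A defensive read, sketched with the key name from the config above:

      // using System.Configuration;  (add a reference to System.Configuration.dll)
      string uri = ConfigurationManager.AppSettings["URI"];
      if (string.IsNullOrEmpty(uri))
          throw new ConfigurationErrorsException(
              "appSettings key 'URI' is missing - check that the <ServiceName>.exe.config " +
              "file sits next to the service executable and contains the key.");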

    Read the article

  • MongoDB, Carrierwave, GridFS and prevention of files' duplication

    - by Arkan
    I am dealing with Mongoid, Carrierwave and GridFS to store my uploads. For example, I have a model Article containing a file upload (a picture):

      class Article
        include Mongoid::Document
        field :title,   :type => String
        field :content, :type => String
        mount_uploader :asset, AssetUploader
      end

    But I would like to store the file only once, for the case where I upload the same file many times for different articles. I saw GridFS has an MD5 checksum. What would be the best way to prevent duplication of identical files? Thanks
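
    One possible approach, sketched under the assumption that the mounted uploader exposes the raw bytes via read: store an MD5 of the upload on the document and look for an existing document with the same checksum before saving another copy (the asset_md5 field and the helper are hypothetical):

      require 'digest/md5'

      class Article
        include Mongoid::Document
        field :title,     :type => String
        field :content,   :type => String
        field :asset_md5, :type => String      # hypothetical checksum field
        mount_uploader :asset, AssetUploader

        before_save :store_asset_checksum

        # Find an article already carrying an identical file, if any.
        def self.with_same_asset(io)
          where(:asset_md5 => Digest::MD5.hexdigest(io.read)).first
        end

        private

        def store_asset_checksum
          self.asset_md5 = Digest::MD5.hexdigest(asset.read) if asset.present?
        end
      end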

    Read the article

  • Tutorials for .NET database app using SQLite

    - by ChrisC
    I have some MS Access experience and had a class on console C++ apps; now I am trying to develop my first program. It's a little C# db app. I have the db tables and columns planned and keyed into VS, but that's where I'm stuck. I need C#/VS tutorials that will guide me on configuring relationships, datatyping, etc., on the db so I can get it ready for testing of the schema. The only tutorials I've been able to find either talk about general db basics (i.e., not helping me with VS/C#) or about C# communication with an existing SQL db. Thank you. (In case it matters, I'm using the open source System.Data.SQLite (sqlite.phxsoftware.com) for the db. I chose it over SQL Server CE after seeing a comparison between the two. Also I wanted a server-less version of SQL because this little app will be on other people's computers and I want to do as little support as possible.)
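
    Not a tutorial, but a minimal sketch of creating related tables through System.Data.SQLite (table and column names are made up; note that SQLite only enforces foreign keys when the pragma is switched on for the connection):

      // using System.Data.SQLite;  (the provider from sqlite.phxsoftware.com)
      using (var conn = new SQLiteConnection("Data Source=app.db"))
      {
          conn.Open();
          using (var cmd = conn.CreateCommand())
          {
              cmd.CommandText = "PRAGMA foreign_keys = ON;";
              cmd.ExecuteNonQuery();

              cmd.CommandText =
                  "CREATE TABLE IF NOT EXISTS Category (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL);" +
                  "CREATE TABLE IF NOT EXISTS Item (" +
                  "  Id INTEGER PRIMARY KEY," +
                  "  CategoryId INTEGER NOT NULL REFERENCES Category(Id)," +
                  "  Name TEXT NOT NULL);";
              cmd.ExecuteNonQuery();
          }
      }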

    Read the article

  • java-COM interop: Implement COM interface in Java

    - by mdma
    How can I implement a vtable COM interface in Java? In the old days, I'd use the Microsoft JVM, which had built-in Java-COM interop. What's the equivalent for a modern JRE? Answers to a similar SO question proposed JACOB. I've looked at JACOB, but that is based on IDispatch and is aimed at controlling Automation servers. The COM interfaces I need are custom vtable interfaces (extending IUnknown), e.g. IPersistStream, IOleWindow, IContextMenu etc. For my use case, I could implement all the COM specifics in JNI and have the JNI layer call corresponding interfaces in Java, but I'm hoping for a less painful solution. It's for an open source project, so open source alternatives are preferred.

    Read the article

  • A control that contains multiple duplicate properties causing deadlock issues on IIS

    - by heads5150
    I am trying to work out if the above case is true for our site. I've been told by my hosting provider that this fix (http://support.microsoft.com/kb/974165) has to be applied to our server due to performance issues. It basically describes an issue where UI code like:

      <asp:gridview id="GridView1" runat="server" ... PageSize="100"
          PagerSettings-Mode="Numeric" PagerStyle-BorderStyle="None" PagerStyle-BorderColor="Navy"
          PagerStyle-HorizontalAlign="Right" PagerSettings-PageButtonCount="2" PagerSettings-Position="Bottom">
          <PagerStyle HorizontalAlign="Left" BorderColor="Navy" BorderStyle="None"></PagerStyle>
          ...
          <PagerSettings PageButtonCount="2"></PagerSettings>
          ...
      </asp:gridview>

    causes the following warning on the server: "ISAPI 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll' reported itself as unhealthy for the following reason: 'Deadlock detected'." Does anybody know of a way that I can detect this issue in the build process or the debugger? Any help would be much appreciated.
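
    The pattern the KB article flags is the same setting declared both as an inline Property-SubProperty attribute and again as a child element. A sketch of the cleaned-up markup, with each pager setting declared exactly once (attribute values taken from the snippet above):

      <asp:GridView ID="GridView1" runat="server" PageSize="100">
          <PagerSettings Mode="Numeric" PageButtonCount="2" Position="Bottom" />
          <PagerStyle HorizontalAlign="Left" BorderColor="Navy" BorderStyle="None" />
      </asp:GridView>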

    Read the article

  • The Incremental Architect´s Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx

    Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted.

    Aspects of Functionality

    There are two sides to Functionality requirements. It´s about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I´m not using the term “function” here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when over time the number of operations increases. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the side of the scale of Operations. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on.

    Aspects of Quality

    As important as Functionality is, it´s not the driver for software development. No software has ever been written to just implement some operation in code. We don´t need computers just to do something. All computers can do with software we can do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don´t want to wait for the results very long. Or we want less errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. That´s the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I´d say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
    Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That´s why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It´s a supporting quality, so to speak. It does not deliver value by itself. With a password manager software this might be different. There security might be a Primary Quality. Please get me right: I don´t want to denigrate any Quality. There´s a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements check with the customer which are primary and which are secondary. That will help to make good economical decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

    Aspects of Security of Investment

    Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality it´s bound to suffer under pressure. To look closer at what SoI means will help to become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct tape programming and banging out features by the dozen, though. Because customers don´t just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focussing too much on Production Efficiency it will fire back. Customers want PE not just today, but over the whole course of a software´s lifecycle. That means, it´s not just about coding speed, but equally about code quality. If code quality leads to rework the PE is on an unsatisfactory level. Also if code production leads to waste it´s unsatisfactory. Because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that´s increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but still sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct high quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed the sooner it´s going to get stuck.
    Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It´s code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the costs of adding feature X to an evolvable code base is independent of when it is requested - or at least the costs should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the design ability reality in teams I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That´s why I think special care must be taken to not neglect it. Postponing it to some large refactorings should not be an option. Rather Evolvability needs to be a core concern for every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That´s why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus.

    In closing

    As you see, requirements are of quite different kinds. To not take that into account will make it harder to understand the customer, and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not to mean anything should be different in a code base. But it´s important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure to balance all requirement aspects? That needs practices and transparency. Practices means doing things a certain way and not another, even though that might be possible. We´re dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries rules and practices have been established how to use knives. You don´t put them in peoples´ legs just because you´re feeling like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically.
    In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, intellisense, refactoring support… That´s all nice and well. I don´t want to miss any of that. But I think it´s not enough. We´re missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough of such tools already exist, like all those UML diagram types or Flow Charts. But then, isn´t it strange, hardly any team is using them to design software? Or is that just due to a lack of education? I don´t think so. It´s a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That´s like with flying an airplane. Pilots don´t just jump in and take off for their destination. Yes, there are times when they are “flying by the seats of their pants”, when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See “The Checklist Manifesto” for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]

    [1] But that´s what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it´s going to stop. Probably never - until “maintainability” hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon. Because to all this there is a foreseeable end. Software development is like continuously and forever running…

    [2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance of there being already functionality helping its implementation increases. That should lead to less costs of feature X if it´s requested later than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.

    [3] Please don´t get me wrong: I don´t want to bog down the “art” of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

    Read the article

  • MSBuild Override Project Reference to resolve to Precompiled Assembly

    - by Ryu
    Situation

    I have about 400 csproj files using project references. About 3 of those a separate team wants to fork and incorporate into a standalone app. I branched the 3 projects of interest, and because the separate team uses a different SVN repo I used svn externals to pull these projects into the folder of the standalone app. Obviously, since this team uses a different folder structure, the project references no longer resolve.

    Attempted Solution

    I figured setting the MSBuild properties ReferencePath and AdditionalLibPaths to point to a directory with all the precompiled dependencies would give the project references a fallback point and let them resolve correctly. However, that doesn't appear to be the case.

    Question

    Does anybody know a way to have a failed ProjectReference lookup resolve to the precompiled DLL? Perhaps point me to an automated tool to convert project references to DLL references? Or is there a better way to solve this problem? Thanks
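
    One pattern that can work, sketched here with made-up project and path names: switch between a ProjectReference and a plain file Reference inside the .csproj, depending on whether the precompiled assembly is present on disk (MSBuild evaluates Choose/When when the project is loaded):

      <Choose>
        <When Condition="Exists('..\..\lib\Common.Core.dll')">
          <ItemGroup>
            <Reference Include="Common.Core">
              <HintPath>..\..\lib\Common.Core.dll</HintPath>
            </Reference>
          </ItemGroup>
        </When>
        <Otherwise>
          <ItemGroup>
            <ProjectReference Include="..\Common.Core\Common.Core.csproj" />
          </ItemGroup>
        </Otherwise>
      </Choose>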

    Read the article

  • Joda time : How to convert String to LocalDate?

    - by bsreekanth
    Hello, how do I specify the format string to convert just the date from a string? In my case, only the date part is relevant. Constructing it as a DateTime fails:

      String dateString = "2009-04-17";
      DateTimeFormatter formatter = DateTimeFormat.forPattern("yyyy-MM-dd");
      DateTime dateTime = formatter.parseDateTime(dateString);

    with the error:

      java.lang.IllegalArgumentException: Invalid format: "2011-04-17" is too short

    Probably because I should use LocalDate instead. But I do not see any formatter for LocalDate. What is the best way to convert String dateString = "2009-04-17"; into a LocalDate (or something else if that is not the right representation)? Thanks...
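
    A sketch of what I believe works in Joda-Time (method availability depends on the Joda-Time version): DateTimeFormatter has a parseLocalDate method for date-only parsing, and since the string is already ISO-8601 the LocalDate constructor accepts it directly:

      import org.joda.time.LocalDate;
      import org.joda.time.format.DateTimeFormat;
      import org.joda.time.format.DateTimeFormatter;

      String dateString = "2009-04-17";

      DateTimeFormatter formatter = DateTimeFormat.forPattern("yyyy-MM-dd");
      LocalDate date = formatter.parseLocalDate(dateString);   // date-only parse

      // Alternatively, ISO-8601 strings parse without an explicit formatter:
      LocalDate iso = new LocalDate(dateString);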

    Read the article

  • What useful GDB scripts have you used/written?

    - by nik
    People use gdb on and off for debugging; of course there are lots of other debugging tools across the various OSes, with and without GUIs, and maybe other fancy IDE features. I would like to know what useful gdb scripts you have written and liked. I do not mean a dump of commands in a something.gdb file that you source to pull out a bunch of data, but if that made your day, go ahead and talk about it. Let's think conditional processing, control loops, and functions written for more elegant and refined debugging, and maybe even for whitebox testing. Things get interesting when you start debugging remote systems (say, over a serial/ethernet interface), and what if the target is a multi-processor (and multithreaded) system? Let me put a simple case as an example... say, a script that traverses serially over the entries of a large hash table implemented on an embedded platform to locate a bad entry. That helped me debug a broken hash table once.
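
    In that spirit, a small sketch of the kind of user-defined GDB command I mean; the struct fields and the sentinel value are made up for illustration. It walks a fixed-size table and prints entries whose magic field looks wrong:

      # put this in ~/.gdbinit or a file you `source`
      define scan_table
        # $arg0 = pointer to the entry array, $arg1 = number of entries
        set $i = 0
        while $i < $arg1
          set $e = &$arg0[$i]
          if $e->in_use && $e->magic != 0xdeadbeef
            printf "suspect entry %d: key=%d magic=0x%x\n", $i, $e->key, $e->magic
          end
          set $i = $i + 1
        end
      end
      document scan_table
        scan_table TABLE COUNT -- print hash-table entries with a bad magic value
      end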

    Read the article

  • How to organize a large number of objects

    - by shane
    We have a large number of documents and metadata (XML files) associated with these documents. What is the best way to organize them? Currently we put them into a series of nested folders: /repository/category/date(when they were loaded into our db)/document_number.pdf and .xml. We use the path as a unique identifier for the document in our system. This is more versatile than putting them all in a single flat folder; it is also independent from our database/application, so we can reload them in case of failure. Yet it introduces some limitations: for example, we can't move the files once they've been placed in this structure, and it takes work to put them in this way. What is the best practice? How do websites such as Scribd deal with this problem?

    Read the article

  • Event Capturing vs Event Bubbling

    - by Rajat
    I just wanted to get the general consensus on which is the better mode of event delegation in JS between bubbling and capturing. Now I understand that depending on a particular use-case one might want to use the capturing phase over bubbling or vice versa, but I want to understand what delegation mode is preferred for most general cases and why (to me it seems to be bubbling mode). Or to put it in other words, is there a reason behind the W3C addEventListener implementation favoring the bubbling mode? [Capturing is initiated only when you specify the 3rd parameter and it's true. However, you can omit that 3rd param and bubbling mode kicks in.] I looked up jQuery's bind function to get an answer to my question and it seems it doesn't even support events in the capture phase (it seems to me because IE does not support capturing mode). So it looks like bubbling mode is the default choice, but why?
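
    For reference, a small sketch of the raw API difference (the element id is made up); the optional third argument selects the phase, and delegation normally rides on bubbling because the event reaches the ancestor after the target:

      var list = document.getElementById('list');

      // Bubbling-phase delegation (third argument false, or omitted):
      list.addEventListener('click', function (e) {
          if (e.target && e.target.tagName === 'LI') {
              // handle the clicked item
          }
      }, false);

      // Capturing-phase listener (third argument true): runs on the way down,
      // before the target's own handlers.
      list.addEventListener('click', function (e) {
          // inspect or veto the event before it reaches the target
      }, true);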

    Read the article

  • Is it possible to make the AntiForgeryToken value in ASP.NET MVC change after each verification?

    - by jmcd
    We've just had some Penetration Testing carried out on an application we've built using ASP.NET MVC, and one of the recommendations that came back was that the value of the AntiForgeryToken in the Form could be resubmitted multiple times and did not expire after a single use. According to the OWASP recommendations around the Synchronizer Token Pattern: "In general, developers need only generate this token once for the current session." Which is how I think the ASP.NET MVC AntiForgeryToken works. In case we have to fight the battle, is it possible to cause the AntiForgeryToken to regenerate a new value after each validation?

    Read the article

  • Proper indentation in array initialization, PDT/Zend Studio

    - by Sergei Stolyarov
    I'm using the following style of array initialization in the code:

      $a = array(
          'one' => 123,
          'two' => 456
      );

    But PDT/Zend Studio doesn't work properly in this case; after pressing the [Return] key it places the cursor under the $a (in my example) and ignores indentation. If the array keys are numbers (or at least don't start with quotation marks) everything works fine. This is how it works currently (| is the position where the editor places the caret after pressing [Return]):

      $a = array(
          'one' => 123,[RETURN]
      | );

    This is the expected result:

      $a = array(
          'one' => 123,[RETURN]
          | );

    So is it possible to force the editor to follow my indentation rules?

    Read the article

  • Tips for resolving Hibernate/JPA EntityNotFoundException

    - by Damo
    I'm running into a problem with Hibernate where when trying to delete a group of Entities I encounter the following error: javax.persistence.EntityNotFoundException: deleted entity passed to persist: [com.locuslive.odyssey.entity.FreightInvoiceLine#<null>] These are not normally so difficult to track down as they are usually caused by an entity being deleted but not being removed from a Collection that it is a member of. In this case I have removed the entity from every list that I can think of (it's a complex datamodel). I've put JBoss logging into Trace and I can see the collections that are being cascaded. However I can't seem to find the Collection containing the entity that I'm deleting. Does anyone have any tips for resolving this particular Exception? Thanks.
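
    For completeness, the shape of the fix once the owning collection is found, sketched with guessed accessor names: detach the line from every cascading collection before calling remove, so the flush does not try to re-persist the deleted entity.

      FreightInvoiceLine line = em.find(FreightInvoiceLine.class, lineId);
      FreightInvoice invoice = line.getFreightInvoice();   // accessor names are guesses

      invoice.getLines().remove(line);   // keep the in-memory object graph consistent
      em.remove(line);                   // cascade-persist no longer sees the deleted entity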

    Read the article
