Search Results

Search found 1900 results on 76 pages for 'skills'.

  • “It’s only test code…”

    - by Chris George
    “Let me hack this in, it’s only test code”, “Don’t worry about getting it reviewed, it’s only test code”, “It doesn’t have to be elegant or efficient, it’s only test code”… do these phrases sound familiar? Chances are that if you’re working with test automation you will have heard these phrases at one point or another; you have probably even used them yourself! What is certain is that code written under this “it’s only test code” mantra will come back and bite you in the arse! I’ve recently encountered a case where a test was giving a false positive, and therefore hiding a real product bug, because that test code was very badly written. It was very difficult to understand what the test was actually trying to achieve, let alone how it was doing it, and this complexity masked a simple logic error. These issues are real and they do happen. Let’s take a step back and look at what we are trying to do. We are writing test code that tests product code, and we do this to create a suite of tests that will help protect our software against regressions. This test code makes sure that the product behaves as it should by employing some sort of expected result verification. The simple cases are generally not a problem. However, automation allows us to explore more complex scenarios in many more permutations. As this complexity increases, so does the complexity of the test code, and it is at this point that code which has not been architected properly will cause problems.

    Keep your friends close… So, how do we make sure we are doing it right? The development teams I have worked on have always had Test Engineers working very closely with their Software Engineers. This is something that I have always tried to take full advantage of. They are coding experts! So run your ideas past them, ask for advice on how to structure your code, and get their help designing your data structures. This may require a shift in your team’s viewpoint, as contrary to this section title and folklore, Software Engineers are not actually the mortal enemy of Test Engineers. As time progresses, and test automation becomes more and more ingrained in what we do, the two roles are converging more than ever. Over the 16 years I have spent as a Test Engineer, I have seen the grey area between the two roles grow significantly larger. This serves to strengthen the relationship and common bond between the two roles, which helps to make test code activities so much easier!

    Pair for the win Possibly the best thing you could do to write good test code is to pair program on the task. This serves a few purposes: you get the benefit of the Software Engineer’s knowledge and experience; the Software Engineer gains knowledge of the testing process (sharing the love is a wonderful thing!); and two pairs of eyes, and two brains, are always better than one. Between the two of you, I guarantee you will derive more useful test cases than if it was just one of you.

    Code reviews Another policy which certainly pays dividends is the practice of code reviews. Having one of your peers review your code before you commit it serves two purposes. Firstly, it forces you to explain your code; just the act of doing this will often pick up errors. Secondly, it gets yet another pair of eyes on your code! I cannot stress enough how important code reviews are. The benefits they offer apply as much to test code as to product code. In short, Software and Test Engineers should all be doing them! It can be extended even further by getting test code reviewed by both a Software Engineer and a Test Engineer, and likewise product code. This serves to keep both functions in the loop with changes going on within your code base.

    Learn from your devs I briefly touched on this earlier, but I’d like to go into more detail here. Pairing with your Software Engineers when writing your test code is an amazing opportunity to improve your coding skills. As I sit here writing this article waiting to be called into court for jury service, it reminds me that it takes a lot of patience to be a Test Engineer, almost as much as it takes to be a juror! However tempting it is to go rushing in and start writing your automated tests, resist that urge. Discuss what you want to achieve, then talk through the approach you’re going to take. Then code it up together. I find it really enlightening to ask questions like ‘is there a better way to do this?’ or ‘is this how you would code it?’ The latter question, especially, is where I learn the most. I’ve found that most Software Engineers will be reluctant to show you the ‘right way’ to code something when writing tests because they perceive the ‘right way’ to be too complicated for the Test Engineer (e.g. not mentioning LINQ and instead doing something verbose). So by asking how THEY would code it, you unleash their true dev-ness and advanced code usually ensues! I would like to point out, however, that you don’t have to accept their method as the final answer. On numerous occasions I have opted for the simpler, more verbose solution because I found the code written by the Software Engineer too advanced and therefore unreadable when I return to it in a month’s time! Always keep the target audience in mind when writing clever code, and in my case that is mostly Test Engineers.
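    To make the “expected result verification” point concrete, here is a minimal JUnit-style sketch of a readable test: one scenario, a hard-coded expected value, and a name that says what behaviour is being protected. The PriceCalculator class and its discount rule are purely illustrative (they are not from any product mentioned above); the shape of the test is the point.

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class PriceCalculatorTest {

            // Tiny stand-in for the product code under test (illustrative only).
            static class PriceCalculator {
                double totalFor(int quantity, double unitPrice) {
                    double total = quantity * unitPrice;
                    return quantity >= 10 ? total * 0.9 : total; // 10% bulk discount
                }
            }

            @Test
            public void tenPercentDiscountIsAppliedToOrdersOfTenOrMoreItems() {
                PriceCalculator calculator = new PriceCalculator();

                double total = calculator.totalFor(10, 2.00);

                // A hard-coded expected value makes a false positive much harder to hide
                // than re-deriving the expectation with the same logic as the product.
                assertEquals(18.00, total, 0.001);
            }
        }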

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term “On-Premise” to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents.

    Tools Overview Firstly let’s look at the design-time tools within JDeveloper for customizing and extending the artifacts of a Business Process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and BPM Studio.

    The SOA Composite Editor As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full read-only XML source view, it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic. If you are customizing an existing Fusion Applications BPEL process then be aware that it does support MDS-based customization layers, just like Page Composer, where different customizations are used based on the run-time context, such as a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously if you wish to fire different activities and tasks based on the user context then you should include switches to fork the flows in your custom BPEL process.

    Figure 1 – A BPEL process in Composite Editor

    The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box.

    1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the ‘Check for Updates’ help menu option.
    2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also.
    3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
    4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option ‘SOA Archive into SOA Project’.
    5. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
    6. Make the customizations and save the application project.
    7. Finally redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager.

    The Business Process Management (BPM) Suite In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for as much programming skill. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organizational changes. Obviously there are some limitations on what they can do, however the BPM Suite functionality increases with each release, and for the majority of cases the tools remain as applicable as their developer-oriented siblings. At the current time business processes must be explicitly coded to support just one of these use cases, either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.

    Figure 2 – A BPM Process in JDeveloper BPM Suite

    BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those of BPEL. The steps to deploy a custom BPM process are also essentially much the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such the SOA Composite Editor actually has support for both artifacts and even allows use of them together, such as calling a BPM process as a partner link from a BPEL process. For more details see the references below.

    Business Analyst Tooling In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high costs of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is found under the Business Process icon on the homepage of the Application Composer.

    Figure 3 – Business Process Composer showing a CRM process flow

    The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required: simply drag, drop, and configure. The items in the business catalog are seeded either by Oracle (as a BPM Template) or by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot, you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer, which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forwards, especially for use in SaaS deployments where full design-time JDeveloper access is not available.

    Further Reading
    - For BPEL – Fusion Applications Extensibility Guide – Section 12
    - For BPM – Fusion Applications Extensibility Guide – Section 7
    - The product-specific documentation and implementation guides for Fusion Applications
    - Fusion Middleware Developer's Guide for SOA Suite
    - Modeling and Implementation Guide for Oracle Business Process Management
    - User's Guide for Oracle Business Process Composer
    - Oracle University courses on BPM Suite and SOA Development

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I’d like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was “BI From A-Z”, and its main objective was to introduce people to the BI world not only from a technical point of view but also with a strong emphasis on a methodological, “engineered” approach. The desire to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me over the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines “just” by putting people together and giving them some rules to follow. (Of course this is an oversimplification, but I think you get what I mean.) The key point of engineering is that, after having defined the project blueprint, you can give a huge number of people the rules to follow, the correct tools to implement those rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? That is a very big question, so my scope here is limited to BI.

    So that’s the main point of my workshop: an entry-level approach to BI (the level was 200) that allows attendees to learn the basics, to understand which tools they should use for which purpose and, above all, to pick up a set of rules and tools that make a BI solution scalable in terms of the people working on it, while still maintaining very good quality. All of this was done not by focusing only on practice but by explaining the theory behind it, to show how it can help *a lot* in building a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if the technology changes. This is a very demanding challenge nowadays with the coming of Denali, its column-oriented storage and the shiny new DAX language.

    As you may understand, I was looking forward to getting the feedback, since you may have noticed that there’s a lot of “architectural” material in IT but really nothing on “engineering”. So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more “engineering” is something only I feel, or something broader. I’m very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (out of nearly 200)! Here are the detailed evaluations (each breakdown shows the answer value followed by the number of responses):

        How would you rate the usefulness of the information presented in your day-to-day environment? 4.75
            3: 1    4: 12    5: 42
        How would you rate the Speaker's presentation skills? 4.80
            3: 1    4: 9    5: 45
        How would you rate the Speaker's knowledge of the subject? 4.95
            4: 3    5: 52
        How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75
            3: 2    4: 10    5: 43
        How would you rate the amount of time allocated to cover the topic/session? 4.44
            3: 7    4: 17    5: 31
        How would you rate the quality of the presentation materials? 4.62
            4: 21    5: 34

    The comments were all very positive. Many of them asked for more time on the subject (or to shorten the very last topics). I’ll treasure these comments and will review the content accordingly. We’ll organize a two-day class on this topic, where more examples will be shown and some topics will be explained more deeply. I’d just like to answer a comment that asks how much of what I showed is “universally applicable”. I can tell you that all of our BI projects follow these rules; they’ve been applied to different markets (Insurance, Fashion, GDO) with different people and different teams, and they have allowed us to be “adaptive” towards the customer. The better the rules are defined, and the more tools there are that support their implementation, the easier it is to add new people to the project and to add or change solution features. Think of a car. How come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements. Last but not least: thanks a lot to everyone for coming!

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part, I shared with you how a broad-based transformation makes cloud more than a technology initiative. In this section I will describe how it requires people (organizational) and process changes as well, and how these changes are as critical as the choice of the right tools and technology.

    People: Most IT organizations have a fairly complex organizational structure. There are different groups managing different pieces of the puzzle, and yet they don't always work together. Provisioning a new application therefore may require a request to float endlessly through system administrator, DBA and middleware admin worlds – resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation, which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages ago will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.

    Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.

    Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation, thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of cloud management software extremely critical. For cloud management software to be effective in an enterprise environment, it must meet the following qualifications:

    Broad and Deep Solution: It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone, and on its own it can only provide limited business benefits. It is actually the higher-level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.

    Run and Manage Your Mission-Critical Applications Efficiently: It should not only be able to run your mission-critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications, and not the other way around.

    Automated, Integrated Set of Cloud Management Capabilities: At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.

    At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry, covering public, private and hybrid clouds across infrastructure, platform and applications. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And in large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies. I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they map into our vision of the Enterprise Cloud. Stay tuned.

    Read the article

  • Create an Alias Directory inside a Virtual Host

    - by Praveen Kumar
    First, let me say that I asked this question on Stack Overflow and thought I could get more replies here. I checked here, here, here, here, here, and here before asking this question. I guess my search skills are weak. I am using WampServer version 2.2e. What I need is a virtual path inside a virtual host. These are the hosts that I have.

    Primary Virtual Host (Localhost):

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "C:/Wamp/www"
        </VirtualHost>

    My Apps Virtual Host:

        <VirtualHost *:80>
            ServerName apps.ptrl
            DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
            ErrorLog "logs/apps-ptrl-error.log"
            CustomLog "logs/apps-ptrl-access.log" common
            <Directory "C:/Wamp/vhosts/ptrl/apps">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            DirectoryIndex index.html index.htm index.php
        </VirtualHost>

    My Blog Virtual Host:

        <VirtualHost *:80>
            ServerName blog.praveen-kumar.ptrl
            DocumentRoot "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
            ErrorLog "logs/praveen-kumar-ptrl-error.log"
            CustomLog "logs/praveen-kumar-ptrl-access.log" common
            <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            DirectoryIndex index.html index.htm index.php
        </VirtualHost>

    My requirement now is that http://apps.ptrl/blog/ and http://blog.praveen-kumar.ptrl/ should point to the same directory. One thing I thought of is moving the blog folder inside the apps folder, but it is connected with Git and there is other stuff in there, so it is not possible to move the folder. So I thought of adding an alias to the VirtualHost in this way:

        <VirtualHost *:80>
            ServerName apps.ptrl
            DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
            ErrorLog "logs/apps-ptrl-error.log"
            CustomLog "logs/apps-ptrl-access.log" common
            <Directory "C:/Wamp/vhosts/ptrl/apps">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            DirectoryIndex index.html index.htm index.php
            # The alias to the blog!
            Alias /blog "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
            <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
        </VirtualHost>

    But when I tried to access http://apps.ptrl/blog, I got an Error 403 Forbidden page. Am I doing the right thing? If you need to look at the access log and error log, they are here:

        # Access Log
        127.0.0.1 - - [14/Oct/2012:09:53:11 +0530] "GET /blog HTTP/1.1" 403 206
        127.0.0.1 - - [14/Oct/2012:09:53:11 +0530] "GET /favicon.ico HTTP/1.1" 404 209
        127.0.0.1 - - [14/Oct/2012:09:53:53 +0530] "GET / HTTP/1.1" 200 6935
        127.0.0.1 - - [14/Oct/2012:09:53:53 +0530] "GET /app/blog/thumb.png HTTP/1.1" 404 216

        # Error Log
        [Sun Oct 14 09:53:11 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/Wamp/vhosts/ptrl/praveen-kumar/blog
        [Sun Oct 14 09:53:11 2012] [error] [client 127.0.0.1] File does not exist: C:/Wamp/vhosts/ptrl/apps/favicon.ico
        [Sun Oct 14 09:53:53 2012] [error] [client 127.0.0.1] File does not exist: C:/Wamp/vhosts/ptrl/apps/app/blog, referer: http://apps.ptrl/

    Waiting eagerly for some help. I am ready to provide more info if needed.

    Update #1: Changed VirtualHost:

        <VirtualHost *:80>
            ServerName apps.ptrl
            DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
            ErrorLog "logs/apps-ptrl-error.log"
            CustomLog "logs/apps-ptrl-access.log" common
            # The alias to the blog!
            Alias /blog "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
            <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            <Directory "C:/Wamp/vhosts/ptrl/apps">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            DirectoryIndex index.html index.htm index.php
        </VirtualHost>

    The issue now: I am able to access the site and the physical links are working, i.e. I can open http://apps.ptrl/blog/index.php but not http://apps.ptrl/blog/view-1.ptf, which gets translated to http://apps.ptrl/blog/index.php?page=view&id=1. Any solutions?
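    One likely explanation for the remaining .ptf problem, sketched below as a guess rather than a confirmed fix: the pretty URLs are presumably produced by mod_rewrite rules in the blog's own .htaccess, and those rules were written for a site living at "/" on blog.praveen-kumar.ptrl. When the same directory is reached through the alias, the public prefix is /blog/, so a rule anchored with RewriteBase / no longer matches. The rule pattern and file names below are assumptions about how the blog works, not taken from the question; AllowOverride All is already set, so an .htaccess in the blog directory would be honoured.

        # Hypothetical .htaccess inside C:/Wamp/vhosts/ptrl/praveen-kumar/blog
        <IfModule mod_rewrite.c>
            RewriteEngine On
            # When the directory is reached through the alias, the public path is /blog/, not /
            RewriteBase /blog/
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^view-([0-9]+)\.ptf$ index.php?page=view&id=$1 [L,QSA]
        </IfModule>

    Because RewriteBase can only name one prefix, serving the same directory from both hostnames may instead need a relative substitution (no leading slash) or a second rule set; which of those applies depends on how the blog's existing rewrite rules are actually written.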

    Read the article

  • Paying great programmers more than average programmers

    - by Kelly French
    It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog. There is a whole blog devoted to the idea of the "10x productive programmer".

        In years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000).

    Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article:

        The differences are not minor--they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude.

    The study that Brooks cites is: H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11.

    The way programmers are paid by employers these days makes it almost impossible to pay the great programmers a large multiple of the entry-level salary. When the starting salary for a just-graduated entry-level programmer, we'll call him Asok (from Dilbert), is $40K, even if the top programmer, we'll call him Linus, makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus does much more than 3 times what Asok does, so why wouldn't we expect him to get paid more as well? Here is a quote from Stroustrup:

        "The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless because the rules/culture will constrain the new developer from doing anything significantly new and better."

    This leads to two questions. I'm excluding self-employed programmers and contractors. If you disagree that's fine, but please include your rationale. It might be that the self-employed or contract programmers are where you find the top-10 earners, but please provide an explanation/story/rationale along with any anecdotes.

    [EDIT] I thought up some other areas in which talent/ability affects pay:
    - Financial traders (commodities, stock, derivatives, etc.)
    - designers (fashion, interior decorators, architects, etc.)
    - professionals (doctor, lawyer, accountant, etc.)
    - sales

    Questions:
    - Why aren't the top 1% of programmers paid like A-list movie stars?
    - What would the industry be like if we did pay the "Smart and gets things done" programmers 6, 8, or 10 times what an intern makes?

    [Footnote: I posted this question after submitting it to the Stackoverflow podcast. It was included in episode 77 and I've written more about it as a Codewright's Tale post 'Of Rockstars and Bricklayers']

    Epilogue: It's probably unfair to exclude contractors and the self-employed. One aspect of the highest earners in other fields is that they are free agents. The competition for their skills is what drives up their earning power. This means they cannot be interchangeable or otherwise treated as a plug-and-play resource. I liked the example in one answer of a major league baseball team trying to field two first-basemen. Also, something that Joel mentioned in the Stackoverflow podcast (#77): there are natural dynamics that shrink any extreme performance/pay ranges between the highs and lows. One is the peer pressure of organizations to pay within a given range; another is the likelihood that the high performer will realize their undercompensation and seek greener pastures.

    Read the article

  • Stored proc running 30% slower through Java versus running directly on database

    - by James B
    Hi all, I'm using Java 1.6, jTDS 1.2.2 (I also just tried 1.2.4 to no avail) and SQL Server 2005 to create a CallableStatement to run a stored procedure (with no parameters). I am seeing the Java wrapper running the same stored procedure 30% slower than using SQL Server Management Studio. I've run the MS SQL profiler and there is little difference in I/O between the two processes, so I don't think it's related to query plan caching. The stored proc takes no arguments and returns no data. It uses a server-side cursor to calculate the values that are needed to populate a table. I can't see how calling a stored proc from Java should add a 30% overhead; surely it's just a pipe to the database that the SQL is sent down, and then the database executes it... Could the database be giving the Java app a different query plan? I've posted to both the MSDN forums and the SourceForge jTDS forums (topic: "stored proc slower in JTDS than direct in DB"). I was wondering if anyone has any suggestions as to why this might be happening? Thanks in advance, -James (N.B. Fear not, I will collate any answers I get in other forums together here once I find the solution.)

    Java code snippet:

        sLogger.info("Preparing call...");
        stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
        sLogger.info("Call prepared. Executing procedure...");
        stmt.executeQuery();
        sLogger.info("Procedure complete.");

    I have run SQL profiler and found the following:

        Java app : CPU: 466,514   Reads: 142,478,387   Writes: 284,078   Duration: 983,796
        SSMS     : CPU: 466,973   Reads: 142,440,401   Writes: 280,244   Duration: 769,851

    (Both with DBCC DROPCLEANBUFFERS run prior to profiling, and both produce the correct number of rows.) So my conclusion is that they both execute the same reads and writes; it's just that the way they are doing it is different. What do you guys think? It turns out that the query plans are significantly different for the different clients (the Java client is updating an index during an insert that isn't in the faster SQL client; also, the way it is executing joins is different (nested loops vs. gather streams, nested loops vs. index scans, argh!)). Quite why this is, I don't know yet (I'll re-post when I do get to the bottom of it).

    Epilogue: I couldn't get this to work properly. I tried homogenising the connection properties (arithabort, ansi_nulls etc.) between the Java and Management Studio clients. It ended up that the two different clients had very similar query/execution plans (but still with different actual plan_ids). I posted a summary of what I found to the MSDN SQL Server forums, as I found differing performance not just between a JDBC client and Management Studio, but also between Microsoft's own command line client, SQLCMD. I also checked some more radical things like network traffic, and tried wrapping the stored proc inside another stored proc, just for grins. I have a feeling the problem lies somewhere in the way the cursor was being executed, and that it was somehow giving rise to the Java process being suspended, but why a different client should give rise to this different locking/waiting behaviour when nothing else is running and the same execution plan is in operation is a little beyond my skills (I'm no DBA!). As a result, I have decided that 4 days is enough of anyone's time to waste on something like this, so I will grudgingly code around it (if I'm honest, the stored procedure needed re-coding to be more incremental instead of re-calculating all data each week anyway), and chalk this one down to experience.
    I'll leave the question open. Big thanks to everyone who put their hat in the ring, it was all useful, and if anyone comes up with anything further, I'd love to hear some more options... and if anyone finds this post as a result of seeing this behaviour in their own environment, then hopefully there are some pointers here that you can try yourself, and hopefully see further than we did. I'm ready for my weekend now! -James
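    The epilogue above already mentions homogenising the connection SET options; for anyone landing here with the simpler version of this problem, the classic cause of "fast in SSMS, slow from a driver" on SQL Server is that SSMS runs with SET ARITHABORT ON by default while most drivers leave it OFF, so the server caches and reuses a separate plan per client. A minimal sketch of aligning the JDBC session before preparing the call is below (plain try/finally since Java 1.6 has no try-with-resources); the class and method names are mine, only the procedure name comes from the question.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class SsmsLikeSession {
            // Align the session's SET options with Management Studio before preparing the call,
            // so both clients are eligible for the same cached plan.
            public static CallableStatement prepareProc(Connection con) throws SQLException {
                Statement s = con.createStatement();
                try {
                    s.execute("SET ARITHABORT ON");   // SSMS default; drivers usually default to OFF
                } finally {
                    s.close();
                }
                return con.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
            }
        }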

    Read the article

  • Problems with multiple setIntervals running simultaneously

    - by Roel V.
    Hello, my first post here. I want to make a horizontal menu with submenus sliding down on mouseover. I know I could use jQuery, but this is to practice my JavaScript skills. I use the following code:

        var up = new Array();
        var down = new Array();
        var submenustart;

        function titleover(headmenu, inter) {
            submenu = headmenu.lastChild;
            up[inter] = window.clearInterval(up[inter]);
            down[inter] = window.setInterval("slidedown(submenu)", 1);
        }

        function slidedown(submenu) {
            if (submenu.offsetTop < submenustart) {
                submenu.style.top = submenu.offsetTop + 1 + "px";
            }
        }

        function titleout(headmenu, inter) {
            submenu = headmenu.lastChild;
            down[inter] = window.clearInterval(down[inter]);
            up[inter] = window.setInterval("slideup(submenu)", 1);
        }

        function slideup(submenu) {
            if (submenu.offsetTop > submenustart - submenu.clientHeight + 1) {
                submenu.style.top = submenu.offsetTop - 1 + "px";
            }
        }

    The variable submenustart gets assigned a value in another function which is not relevant to my question. The HTML looks like this:

        <table class="hoofding" id="hoofding">
          <tr>
            <td onmouseover="titleover(this, 0)" onmouseout="titleout(this, 0)"><a href="#" class="hoofdinglink" id="hoofd1">AAAA</a>
              <table class="menu">
                <tr><td><a href="...">1111</a></td></tr>
                <tr><td><a href="...">2222</a></td></tr>
                <tr><td><a href="...">3333</a></td></tr>
              </table></td>
            <td onmouseover="titleover(this, 1)" onmouseout="titleout(this, 1)"><a href="#" class="hoofdinglink">BBBB</a>
              <table class="menu">
                <tr><td><a href="...">1111</a></td></tr>
                <tr><td><a href="...">2222</a></td></tr>
                <tr><td><a href="...">3333</a></td></tr>
                <tr><td><a href="...">4444</a></td></tr>
                <tr><td><a href="...">5555</a></td></tr>
              </table></td>
            ...
          </tr>
        </table>

    What happens is the following: if I move over and out of (for example) menu A, it works fine. If I then move over menu B, the interval applied to A is now applied to B. There are now two interval functions applied to B: the one originally for A, and a new one triggered by the mouseover on B. If I then go to A, all the intervals are applied to A. I have been searching for hours and I am completely stuck. Thanks in advance.
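    The "interval jumping between menus" behaviour is consistent with submenu being an implicit global: the string passed to setInterval is evaluated in global scope, so every running interval reads whatever submenu was assigned last. One possible fix, sketched for the titleover half only (titleout would mirror it), is to keep the element in a local variable and pass a function closure instead of a string; this is a sketch against the code above, not a tested drop-in.

        // Capture each menu's own submenu element in a closure instead of sharing a global.
        function titleover(headmenu, inter) {
            var submenu = headmenu.lastChild;            // local, not global
            window.clearInterval(up[inter]);
            down[inter] = window.setInterval(function () {
                slidedown(submenu);                      // each interval keeps its own element
            }, 10);
        }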

    Read the article

  • HttpWebRequest: How to find a postal code at Canada Post through a WebRequest with x-www-form-enclos

    - by Will Marcouiller
    I'm currently writing some tests so that I may improve my skills with Internet interaction through Windows Forms. One of those tests is to find a postal code, which should be returned by the Canada Post website. My default URL setting is set to: http://www.canadapost.ca/cpotools/apps/fpc/personal/findByCity?execution=e4s1 The required form fields are: streetNumber, streetName, city, province. The contentType is "application/x-www-form-enclosed". EDIT: Please consider the value "application/x-www-form-encoded" instead of the point 3 value as the contentType. (Thanks EricLaw-MSFT!)

    The result I get is not the result expected. I get the HTML source code of the page where I could manually enter the information to find the postal code, but not the HTML source code with the found postal code. Any idea of what I'm doing wrong? Shall I consider going the XML way? Is it possible, first of all, to search on Canada Post anonymously? Here's a code sample for a better description:

        public static string FindPostalCode(ICanadadianAddress address) {
            var postData = string.Concat(string.Format("&streetNumber={0}", address.StreetNumber)
                , string.Format("&streetName={0}", address.StreetName)
                , string.Format("&city={0}", address.City)
                , string.Format("&province={0}", address.Province));

            var encoding = new ASCIIEncoding();
            byte[] postDataBytes = encoding.GetBytes(postData);

            request = (HttpWebRequest)WebRequest.Create(DefaultUrlSettings);
            request.ImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Anonymous;
            request.Container = new CookieContainer();
            request.Timeout = 10000;
            request.ContentType = contentType;
            request.ContentLength = postDataBytes.LongLength;
            request.Method = @"post";

            var senderStream = new StreamWriter(request.GetRequestStream());
            senderStream.Write(postDataBytes, 0, postDataBytes.Length);
            senderStream.Close();

            string htmlResponse = new StreamReader(request.GetResponse().GetResponseStream()).ReadToEnd();

            return processedResult(htmlResponse); // Processing the HTML source code parsing, etc.
        }

    I seem to be stuck at a bottleneck, from my point of view. I can find no way to the desired result.

    EDIT: There seem to be two ContentType-related values on this site. Let me explain. There's one in the "meta" variables, which stipulates the following:

        meta http-equiv="Content-Type" content="application/xhtml+xml, text/xml, text/html; charset=utf-8"

    And another one later down the code that reads:

        form id="fpcByAdvancedSearch:fpcSearch" name="fpcByAdvancedSearch:fpcSearch" method="post" action="/cpotools/apps/fpc/personal/findByCity?execution=e1s1" enctype="application/x-www-form-urlencoded"

    My question is the following: with which one do I have to stick? Let me guess: the first Content-Type describes the page itself, and the second is only for the request made when the form data is posted? EDIT: As per request, the closest I have come to the solution is listed under this question: WebRequest: How to find a postal code using a WebRequest against this ContentType="application/xhtml+xml, text/xml, text/html; charset=utf-8"? Thanks for any help! :-)
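    On the two Content-Type values: the meta tag describes the HTML page the server sends back, while the form's enctype ("application/x-www-form-urlencoded") is what the POST body must use, and the POST should go to the form's action URL. A rough sketch of the usual HttpWebRequest way to do that is below; the field names come from the question, but whether Canada Post accepts an anonymous POST to that URL is an assumption I cannot confirm, and the class/method names are mine.

        using System.IO;
        using System.Net;
        using System.Text;
        using System.Web;   // HttpUtility.UrlEncode (add a reference to System.Web)

        static class PostalCodeLookup
        {
            public static string PostForm(string url, string streetNumber, string streetName,
                                          string city, string province)
            {
                // No leading '&' before the first field; values are URL-encoded.
                string postData =
                    "streetNumber=" + HttpUtility.UrlEncode(streetNumber) +
                    "&streetName=" + HttpUtility.UrlEncode(streetName) +
                    "&city=" + HttpUtility.UrlEncode(city) +
                    "&province=" + HttpUtility.UrlEncode(province);

                byte[] bytes = Encoding.UTF8.GetBytes(postData);

                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "POST";                                   // not @"post"
                request.ContentType = "application/x-www-form-urlencoded"; // the form's enctype
                request.ContentLength = bytes.Length;

                using (Stream body = request.GetRequestStream())
                    body.Write(bytes, 0, bytes.Length);                    // raw bytes, no StreamWriter

                using (var reader = new StreamReader(request.GetResponse().GetResponseStream()))
                    return reader.ReadToEnd();
            }
        }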

    Read the article

  • WPF: Updating visibility of controls not updating the screen

    - by Brad McBride
    I will preface this by stating that I am new to WPF programming and may be making multiple errors. Any insight that can be provided to help me improve my skills is greatly appreciated. I am working with a WPF application and am looping through a list of objects that contain properties describing a document that should be built on the fly and automatically printed. I am attempting to display a small grid in the interface that shows the document being built before it is printed. This serves two purposes: one, it allows the user to see work being done by the application; two, it renders the items on the screen so that I have something to actually print, since WPF appears unable to load an image for printing dynamically without displaying it on the screen. In my code, I am setting the various elements in the grid and setting the visibility to visible. However, the UI is not updating, and the printed document doesn't look as intended since the image never shows up on the screen. Here is the XAML that I have set up:

        <Grid x:Name="LayoutRoot" Background="Black">
            <Grid Name="previewGrid" Grid.Row="1" Grid.Column="1" Background="White" Visibility="Hidden">
                <Canvas Name="pageCanvas" HorizontalAlignment="Center" VerticalAlignment="Center">
                    <Grid Name="pageGrid" Width="163" Height="211">
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="81.5"></ColumnDefinition>
                            <ColumnDefinition Width="81.5"></ColumnDefinition>
                        </Grid.ColumnDefinitions>
                        <TextBlock Grid.Column="0" Name="copyright" TextAlignment="Center" HorizontalAlignment="Center" VerticalAlignment="Bottom"></TextBlock>
                        <Image Name="pageImage" Grid.Column="1" HorizontalAlignment="Center" VerticalAlignment="Center"></Image>
                    </Grid>
                </Canvas>
                ..... canvas for pages 2-4 not shown, but the structure is the same as for pageGrid .....
            </Grid>
        </Grid>
        </Window>

    Here is the code-behind that is supposed to set the elements:

        previewGrid.Visibility = Windows.Visibility.Visible
        pageURI = New Uri(pageCollection(i).iamgeURL, UriKind.Absolute)
        pageGrid.Visibility = Windows.Visibility.Visible
        bmp.BeginInit()
        bmp.StreamSource = getCachedURLStream(cardURI)
        bmp.EndInit()
        pageImage.Source = bmp
        copyright.Text = copyrightText
        cardPreviewGrid.UpdateLayout()
        ' More code that prints the visual element pageGrid
        previewGrid.Visibility = Windows.Visibility.Hidden

    The code-behind loops a number of times depending on how many documents the user prints. Basically it builds a visual element for a page, prints an XPS version of it, then builds the next page and prints it, and so on. Once all pages have been processed, the job is actually sent to the printer. The only purpose of this application is to let the user print these documents, so there is no other task the user can do in the application while the documents print. I thought that putting this task in a background thread would help to update the UI, but since I am trying to manipulate items directly on the UI thread, it would appear that this option won't work for me. What am I doing wrong here, and how can I improve the code so that I can get the behavior I am trying to achieve?
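    Two things commonly produce exactly this symptom, sketched below on the assumption that the code lives in the window's code-behind and uses the same member names as the question (the Sub names are mine). First, BitmapImage decodes lazily, so setting BitmapCacheOption.OnLoad forces the pixels to be available before printing. Second, because the whole loop runs on the UI thread, the dispatcher never gets a chance to render between pages; invoking an empty delegate at DispatcherPriority.Render flushes the pending render queue (the WPF equivalent of the old DoEvents trick). Needs Imports System.Windows.Media.Imaging and Imports System.Windows.Threading.

        ' Intentionally empty: invoking it at Render priority just flushes pending rendering.
        Private Sub NoOp()
        End Sub

        ' Sketch of the per-page work; getCachedURLStream, pageImage and pageGrid are
        ' assumed to be the members shown in the question.
        Private Sub LoadAndRenderPage(ByVal cardURI As Uri)
            Dim bmp As New BitmapImage()
            bmp.BeginInit()
            bmp.CacheOption = BitmapCacheOption.OnLoad   ' decode the image now, not lazily at render time
            bmp.StreamSource = getCachedURLStream(cardURI)
            bmp.EndInit()
            pageImage.Source = bmp

            ' Let WPF process the pending layout and render work before printing this page.
            pageGrid.UpdateLayout()
            pageGrid.Dispatcher.Invoke(DispatcherPriority.Render, New Action(AddressOf NoOp))
        End Sub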

    Read the article

  • How to use JQuery to set the value of 2 html form select elements depending on the value of another

    - by Chris Stevenson
    My JavaScript and jQuery skills are poor at best, and this is ** I have the following three elements in a form:

        <select name="event_time_start_hours">
            <option value="blank" disabled="disabled">Hours</option>
            <option value="blank" disabled="disabled">&nbsp;</option>
            <option value="01">1</option>
            <option value="02">2</option>
            <option value="03">3</option>
            <option value="04">4</option>
            <option value="05">5</option>
            <option value="06">6</option>
            <option value="07">7</option>
            <option value="08">8</option>
            <option value="09">9</option>
            <option value="10">10</option>
            <option value="11">11</option>
            <option value="12">12</option>
            <option value="midnight">Midnight</option>
            <option value="midday">Midday</option>
        </select>

        <select name="event_time_start_minutes">
            <option value="blank" disabled="disabled">Minutes</option>
            <option value="blank" disabled="disabled">&nbsp;</option>
            <option value="00">00</option>
            <option value="15">15</option>
            <option value="30">30</option>
            <option value="45">45</option>
        </select>

        <select name="event_time_start_ampm">
            <option value="blank" disabled="disabled">AM / PM</option>
            <option value="blank" disabled="disabled">&nbsp;</option>
            <option value="am">AM</option>
            <option value="pm">PM</option>
        </select>

    Quite simply, when either 'midnight' or 'midday' is selected in "event_time_start_hours", I want the values of "event_time_start_minutes" and "event_time_start_ampm" to change to "00" and "am" respectively. My VERY poor piece of JavaScript says this so far:

        $(document).ready(function() {
            $('#event_time_start_hours').change(function() {
                if($('#event_time_start_hours').val('midnight')) {
                    $('#event_time_start_minutes').val('00');
                }
            });
        });

    ... and while I'm not terribly surprised it doesn't work, I'm at a loss as to what to do next. I want to do this purely for visual reasons for the user, as when the form submits I ignore the "minutes" and "am/pm". I'm trying to decide whether it would be best to change the selected values, change the selected values and then disable the elements, or hide them altogether. However, without any success in getting anything to happen at all, I haven't been able to try the different approaches to see what feels right. I've ruled out the obvious things like a duplicate element ID or simply not linking to jQuery. Thank you.
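    Two small things stand out in the snippet above, and a hedged sketch of how the handler might look with them addressed follows: the select elements only have name attributes, so the '#event_time_start_hours' id selector matches nothing, and .val('midnight') sets a value rather than testing it (testing needs .val() compared with a string).

        $(document).ready(function () {
            // Select by name attribute, since the elements have no ids.
            $('select[name="event_time_start_hours"]').change(function () {
                var hours = $(this).val();
                if (hours === 'midnight' || hours === 'midday') {
                    $('select[name="event_time_start_minutes"]').val('00');
                    $('select[name="event_time_start_ampm"]').val('am');
                }
            });
        });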

    Read the article

  • How do you read from a file into an array of struct?

    - by Thomas.Winsnes
    I'm currently working on an assignment and this has had me stuck for hours. Can someone please help me point out why this isn't working for me?

        struct book {
            char title[25];
            char author[50];
            char subject[20];
            int callNumber;
            char publisher[250];
            char publishDate[11];
            char location[20];
            char status[11];
            char type[12];
            int circulationPeriod;
            int costOfBook;
        };

        void PrintBookList(struct book **bookList) {
            int i;
            for(i = 0; i < sizeof(bookList); i++) {
                struct book newBook = *bookList[i];
                printf("%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d\n", newBook.title, newBook.author,
                       newBook.subject, newBook.callNumber, newBook.publisher, newBook.publishDate,
                       newBook.location, newBook.status, newBook.type, newBook.circulationPeriod,
                       newBook.costOfBook);
            }
        }

        void GetBookList(struct book** bookList) {
            FILE* file = fopen("book.txt", "r");
            struct book newBook[1024];
            int i = 0;
            while(fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d",
                         &newBook[i].title, &newBook[i].author, &newBook[i].subject,
                         &newBook[i].callNumber, &newBook[i].publisher, &newBook[i].publishDate,
                         &newBook[i].location, &newBook[i].status, &newBook[i].type,
                         &newBook[i].circulationPeriod, &newBook[i].costOfBook) != EOF) {
                bookList[i] = &newBook[i];
                i++;
            }
            /*while(fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d",
                         &bookList[i].title, &bookList[i].author, &bookList[i].subject,
                         &bookList[i].callNumber, &bookList[i].publisher, &bookList[i].publishDate,
                         &bookList[i].location, &bookList[i].status, &bookList[i].type,
                         &bookList[i].circulationPeriod, &bookList[i].costOfBook) != EOF) {
                i++;
            }*/
            PrintBookList(bookList);
            fclose(file);
        }

        int main() {
            struct book *bookList[1024];
            GetBookList(bookList);
        }

    I get no errors or warnings on compile. It should print the content of the file, just like it is in the file, like this:

        OperatingSystems Internals and Design principles;William.S;IT;741012759;Upper Saddle River;2009;QA7676063;Available;circulation;3;11200
        Communication skills handbook;Summers.J;Accounting;771239216;Milton;2010;BF637C451;Available;circulation;3;7900
        Business marketing management:B2B;Hutt.D;Management;741912319;Mason;2010;HF5415131;Available;circulation;3;1053
        Patient education rehabilitation;Dreeben.O;Education;745121511;Sudbury;2010;CF5671A98;Available;reference;0;6895
        Tomorrow's technology and you;Beekman.G;Science;764102174;Upper Saddle River;2009;QA76B41;Out;reserved;1;7825
        Property & security: selected essay;Cathy.S;Law;750131231;Rozelle;2010;D4A3C56;Available;reference;0;20075
        Introducing communication theory;Richard.W;IT;714789013;McGraw-Hill;2010;Q360W47;Available;circulation;3;12150
        Maths for computing and information technology;Giannasi.F;Mathematics;729890537;Longman;Scientific;1995;QA769M35G;Available;reference;0;13500
        Labor economics;George.J;Economics;715784761;McGraw-Hill;2010;HD4901B67;Available;circulation;3;7585
        Human physiology:from cells to systems;Sherwood.L;Physiology;707558936;Cengage Learning;2010;QP345S32;Out;circulation;3;11135
        bobs;thomas;IT;701000000;UC;1006;QA7548;Available;Circulation;7;5050

    but when I run it, it outputs this:

        OperatingSystems;;;0;;;;;;0;0
        Internals;;;0;;;;;;0;0
        and;;;0;;;;;;0;0
        Design;;;0;;;;;;0;0
        principles;William.S;IT;741012759;Upper;41012759;Upper;;0;;;;;;0;0
        Saddle;;;0;;;;;;0;0
        River;2009;QA7676063;Available;circulation;3;11200;lable;circulation;3;11200;;0;;;;;;0;0
        Communication;;;0;;;;;;0;0

    Thanks in advance, you're a life saver
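    The output pattern above (one whitespace-separated word per printed record) is what %s produces: it stops at whitespace and never consumes the ';' separators. One possible approach, sketched below against the struct book definition from the question, is to read a whole line with fgets and parse it with %[^;] scansets; note that the struct's title field (25 chars) is shorter than some of the sample titles, so the buffer sizes and widths may need enlarging for that data.

        #include <stdio.h>

        /* Returns the number of records read into books, or -1 if the file cannot be opened. */
        int read_books(const char *path, struct book *books, int max_books)
        {
            FILE *file = fopen(path, "r");
            char line[1024];
            int count = 0;

            if (file == NULL)
                return -1;

            while (count < max_books && fgets(line, sizeof line, file) != NULL) {
                struct book *b = &books[count];
                /* %[^;] reads up to the next ';' so fields may contain spaces;
                   the widths match the struct's buffer sizes minus the terminator. */
                int fields = sscanf(line,
                    "%24[^;];%49[^;];%19[^;];%d;%249[^;];%10[^;];%19[^;];%10[^;];%11[^;];%d;%d",
                    b->title, b->author, b->subject, &b->callNumber, b->publisher,
                    b->publishDate, b->location, b->status, b->type,
                    &b->circulationPeriod, &b->costOfBook);
                if (fields == 11)
                    count++;
            }
            fclose(file);
            return count;
        }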

    Read the article

  • Anyone willing to help out a javascript n00b? :-)

    - by Splynx
    Since I am asking for a lot, and know it, the following is a wall of text for those who might show some interest and want to know a little before offering their help to me. First, a little about my level of programming skill, and a little about what I am asking for.

    Where I'm at: I am not totally new to JavaScript, and have dabbled a little with PHP earlier; well, I have dabbled a lot with PHP in fact, but never got good at it because I program alone. And I have until now never used forums to get help etc., other than searching to see if anyone else had my problem before and what the solution was. So I am not an intuitive or talented programmer; I'm more of a very meticulous programmer, and you would be surprised how far you can get with if else... (OK, that's a joke, hehe). My solutions are usually (I am guessing here) not the best ones, and slow I take it, and the code is usually too long, and I have to look up most of the stuff I use (really, a lot of it is not done in "freehand"). I have a LOT of experience with HTML and CSS, and have always written well-formed markup, as well as being really into cross-browser work, and I always require that my work validates when it's done. I also worry about optimizing a lot, and work with sprites for images, minimize the number of HTTP requests, use H1, H2 etc. where it is logically correct, as well as use the correct elements and not just div, span or p everything... So because I am a workhorse and very meticulous I can actually pull off some quite "advanced" features, but it's always the basics that bite me in the end. Not fully understanding the syntax and so on usually gives me problems. I have recently discovered jQuery, which is a lot of fun... but I want to use it for the DOM node manipulation/handling only. As I mentioned, I worry about optimizing, and jQuery used for everything seems... well, not optimal; it strikes me that doing it yourself when possible is faster than accessing another script that may take a whole lot of other considerations into account when handling your variables and objects (and I am just guessing here since, as explained, I know nothing). So that's where I'm at... As mentioned, I just started with JavaScript for "real", so I do not have much to show, but at the end of my WOT you can see two unfinished scripts I have made, so you can see where I'm at roughly. Just check out the URL without the /feedback.html for the second example (I am only allowed to post 1 link since I am also an SO n00b) (and for those rushing over to a validation service, remember I wrote "when it's done"...).

    What I ask for: I am figuring this... I have a piece of code I am working on at the moment, and this little project has taught me a whole lot already, and I have "grown" a lot as a JavaScript programmer. If I add a whole lot of comments to the script, and explain what it is intended to do, will you then show me:

    - where I am writing incorrect code or making mistakes
    - where/how my code could be more optimal
    - where I am just simply being a muppet

    The code I want to use as the background for the tuition is the one here: http://projects.1000monkeys.dk/feedback.html Use Firebug and have a quick look-see...

    Read the article

  • C++ std::vector problems

    - by Faur Ioan-Aurel
    For two days I have been trying to explain to myself some of the things happening in my C++ code, and I can't find a good explanation. I should say that I'm more of a Java programmer. A long time ago I used the C language quite a bit, but I guess Java erased those skills and now I'm hitting a wall trying to port a few classes from Java to C++. So let's say we have these two classes:

        class ForwardNetwork {
        protected:
            ForwardLayer* inputLayer;
            ForwardLayer* outputLayer;
            vector<ForwardLayer*> layers;
        public:
            void ForwardNetwork::getLayers(std::vector<ForwardLayer*>& result) {
                for (int i = 0; i < layers.size(); i++) {
                    ForwardLayer* lay = dynamic_cast<ForwardLayer*>(this->layers.at(i));
                    if (lay != NULL)
                        result.push_back(lay);
                    else
                        cout << "Layer at#" << i << " is null" << endl;
                }
            }

            void ForwardNetwork::addLayer(ForwardLayer* layer) {
                if (layer != NULL)
                    cout << "Before push layer is not null" << endl;
                // setup the forward and back pointer
                if (this->outputLayer != NULL) {
                    layer->setPrevious(this->outputLayer);
                    this->outputLayer->setNext(layer);
                }
                // update the inputLayer and outputLayer variables
                if (this->layers.size() == 0)
                    this->inputLayer = this->outputLayer = layer;
                else
                    this->outputLayer = layer;
                // push layer in vector
                this->layers.push_back(layer);
                for (int i = 0; i < layers.size(); i++)
                    if (layers[i] != NULL)
                        cout << "Check::Layer[" << i << "] is not null!" << endl;
            }
        };

    The second class:

        class Backpropagation : public Train {
        public:
            Backpropagation::Backpropagation(FeedForwardNetwork* network) {
                this->network = network;
                vector<FeedforwardLayer*> vec;
                network->getLayers(vec);
            }
        };

    Now, if I add some layers into the network from main() via the addLayer(..) method, it's all good: my vector is just as it should be. But after I call the Backpropagation() constructor with a network object, when I enter getLayers(), some of my objects in the vector have their address set to NULL. They are chosen seemingly at random: for example, if I run the app once with three layers in the vector, the first object in the vector is null; the second time the first two objects are null; the third time just the first object is null, and so on.

    I can't explain why this is happening. All the objects that should be in the vector also live inside the network, and those are not NULL. This happens everywhere after I am done with addLayer(), not just in getLayers(). I can't get a good grasp of this problem. My first thought was that something might be modifying my vector, but I can't find any such thing. Also, why, if the pointer in the vector is NULL, is the pointer that lives inside ForwardNetwork as a linked list (inputLayer and outputLayer) not NULL?

    I must ask for your help. Please, if you have any advice, don't hesitate!

    PS: as a compiler I use g++, part of GCC 4.6.1, under Ubuntu 11.10.
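    Not a diagnosis, but one way to rule out lifetime problems when porting container code from Java is to let the vector own the layers (for example via std::shared_ptr) rather than holding raw pointers to objects created elsewhere; if the symptom disappears, the original bug is almost certainly a dangling pointer rather than vector corruption. A stripped-down sketch (class names here are placeholders, not the poster's real interfaces):

        // Sketch only: the container owns the layers, so nothing can dangle.
        #include <iostream>
        #include <memory>
        #include <vector>

        struct Layer {
            explicit Layer(int n) : neurons(n) {}
            int neurons;
        };

        class Network {
            std::vector<std::shared_ptr<Layer>> layers;
        public:
            void addLayer(std::shared_ptr<Layer> layer) {
                layers.push_back(std::move(layer));      // the vector keeps the layer alive
            }
            void getLayers(std::vector<std::shared_ptr<Layer>>& out) const {
                for (const auto& l : layers)             // no dynamic_cast needed: the static
                    out.push_back(l);                    // element type is already Layer
            }
        };

        int main() {
            Network net;
            net.addLayer(std::make_shared<Layer>(2));
            net.addLayer(std::make_shared<Layer>(3));

            std::vector<std::shared_ptr<Layer>> copy;
            net.getLayers(copy);
            std::cout << copy.size() << " layers, none of them dangling\n";
        }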

    Read the article

  • How do I maximize code coverage?

    - by naivedeveloper
    Hey all, the following is a snippet of code taken from the unix ptx utility. I'm attempting to maximize code coverage on this utility, but I am unable to reach the indicated portion of code. Admittedly, I'm not as strong in my C skills as I used to be. The portion of code is indicated with comments, but it is towards the bottom of the block.

        if (used_length == allocated_length)
          {
            allocated_length += (1 << SWALLOW_REALLOC_LOG);
            block->start = (char *) xrealloc (block->start, allocated_length);
          }

    Any help interpreting the indicated portion in order to cover that block would be greatly appreciated.

        /* Reallocation step when swallowing non regular files.  The value is not
           the actual reallocation step, but its base two logarithm.  */
        #define SWALLOW_REALLOC_LOG 12

        static void
        swallow_file_in_memory (const char *file_name, BLOCK *block)
        {
          int file_handle;              /* file descriptor number */
          struct stat stat_block;       /* stat block for file */
          size_t allocated_length;      /* allocated length of memory buffer */
          size_t used_length;           /* used length in memory buffer */
          int read_length;              /* number of character gotten on last read */

          /* As special cases, a file name which is NULL or "-" indicates standard
             input, which is already opened.  In all other cases, open the file
             from its name.  */

          bool using_stdin = !file_name || !*file_name || strcmp (file_name, "-") == 0;
          if (using_stdin)
            file_handle = STDIN_FILENO;
          else if ((file_handle = open (file_name, O_RDONLY)) < 0)
            error (EXIT_FAILURE, errno, "%s", file_name);

          /* If the file is a plain, regular file, allocate the memory buffer all
             at once and swallow the file in one blow.  In other cases, read the
             file repeatedly in smaller chunks until we have it all, reallocating
             memory once in a while, as we go.  */

          if (fstat (file_handle, &stat_block) < 0)
            error (EXIT_FAILURE, errno, "%s", file_name);

          if (S_ISREG (stat_block.st_mode))
            {
              size_t in_memory_size;

              block->start = (char *) xmalloc ((size_t) stat_block.st_size);

              if ((in_memory_size = read (file_handle, block->start,
                                          (size_t) stat_block.st_size))
                  != stat_block.st_size)
                {
                  error (EXIT_FAILURE, errno, "%s", file_name);
                }
              block->end = block->start + in_memory_size;
            }
          else
            {
              block->start = (char *) xmalloc ((size_t) 1 << SWALLOW_REALLOC_LOG);
              used_length = 0;
              allocated_length = (1 << SWALLOW_REALLOC_LOG);

              while (read_length = read (file_handle, block->start + used_length,
                                         allocated_length - used_length),
                     read_length > 0)
                {
                  used_length += read_length;

                  /* Cannot cover from this point... */
                  if (used_length == allocated_length)
                    {
                      allocated_length += (1 << SWALLOW_REALLOC_LOG);
                      block->start = (char *) xrealloc (block->start, allocated_length);
                    }
                  /* ...to this point. */
                }

              if (read_length < 0)
                error (EXIT_FAILURE, errno, "%s", file_name);

              block->end = block->start + used_length;
            }

          /* Close the file, but only if it was not the standard input.  */

          if (! using_stdin && close (file_handle) != 0)
            error (EXIT_FAILURE, errno, "%s", file_name);
        }
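    If it helps, the guarded block only runs when the else branch is taken (the input is not a regular file, so S_ISREG fails) and the input is large enough for used_length to reach the initial 4096-byte buffer (1 << SWALLOW_REALLOC_LOG) exactly, which it will, since each read only asks for the space that is left. So a coverage run needs roughly 4 KiB or more arriving on standard input or another non-seekable stream. A sketch, assuming the instrumented binary is built as ./ptx in the current directory:

        # Pipe more than 4096 bytes through stdin: fstat sees a FIFO rather than a
        # regular file, the chunked read loop runs, the buffer fills to exactly
        # 4096 bytes, and the xrealloc branch is finally executed.
        yes | head -c 8192 | ./ptx > /dev/null

    (Per the special case at the top of the function, naming the file "-" on the command line selects standard input in the same way.)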

    Read the article

  • infoWindow position and click/unclick controls - controlling KML groundoverlays

    - by wendysmith
    I've made a lot of progress on this project (with earlier help from forum member Eric Badger, thanks!!) but I now need help with fine-tuning the infoWindow. Presently, you check one of the historic map choices, the map appears as a ground overlay, and if you click it you get info about the map, which appears in a div (the yellow area at the bottom). I want the info to appear in a more traditional window on the map, just to the center-right of the overlay map, and it should have a close option (an X in the top corner?). Also, if you uncheck one of the boxes the overlay map disappears, but the info window should close as well. As you can see, my JavaScript skills are very limited. I would very much appreciate your help with this.

    Here's the test webpage:

    Here's the script:

        function showphilpottsmap(philpottsmapcheck) {
            if (philpottsmapcheck.checked == true) {
                philpottsmap.setMap(map);
            } else {
                philpottsmap.setMap(null);
            }
        }

        function showbrownemap(brownemapcheck) {
            if (brownemapcheck.checked == true) {
                brownemap.setMap(map);
            } else {
                brownemap.setMap(null);
            }
        }

        function showchewettmap(chewettmapcheck) {
            if (chewettmapcheck.checked == true) {
                chewettmap.setMap(map);
            } else {
                chewettmap.setMap(null);
            }
        }

        function showjamescanemap(jamescanemapcheck) {
            if (jamescanemapcheck.checked == true) {
                jamescanemap.setMap(map);
            } else {
                jamescanemap.setMap(null);
            }
        }

        var infoWindow = new google.maps.InfoWindow();

        function openIW(FTevent) {
            infoWindow.setContent(FTevent.infoWindowHtml);
            infoWindow.setPosition(FTevent.latLng);
            infoWindow.setOptions({
                content: FTevent.infoWindowHtml,
                position: FTevent.latLng,
                pixelOffset: FTevent.pixelOffset
            });
            infoWindow.open(map);
        }

        var philpottsmap;
        var brownemap;
        var chewettmap;
        var jamescanemap;

        function initialize() {
            var mylatlng = new google.maps.LatLng(43.65241745, -79.393923);
            var myOptions = {
                zoom: 11,
                center: mylatlng,
                streetViewControl: false,
                mapTypeId: google.maps.MapTypeId.ROADMAP,
            };
            map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
            // End map parameters

            brownemap = new google.maps.KmlLayer('http://wendysmithtoronto.com/mapping/1851map_jdbrowne.kml', {preserveViewport: true, suppressInfoWindows: true});
            google.maps.event.addListener(brownemap, 'click', function(kmlEvent) {
                document.getElementById('sidebarinfo').innerHTML = kmlEvent.featureData.description;
            });

            chewettmap = new google.maps.KmlLayer('http://wendysmithtoronto.com/mapping/1802mapwilliamchewett.kml', {preserveViewport: true, suppressInfoWindows: true});
            google.maps.event.addListener(chewettmap, 'click', function(kmlEvent) {
                document.getElementById('sidebarinfo').innerHTML = kmlEvent.featureData.description;
            });

            philpottsmap = new google.maps.KmlLayer('http://wendysmithtoronto.com/mapping/1818map_phillpotts.kml', {preserveViewport: true, suppressInfoWindows: true});
            google.maps.event.addListener(philpottsmap, 'click', function(kmlEvent) {
                document.getElementById('sidebarinfo').innerHTML = kmlEvent.featureData.description;
            });

            jamescanemap = new google.maps.KmlLayer('http://wendysmithtoronto.com/mapping/1842jamescanemapd.kml', {preserveViewport: true, suppressInfoWindows: true});
            google.maps.event.addListener(jamescanemap, 'click', function(kmlEvent) {
                document.getElementById('sidebarinfo').innerHTML = kmlEvent.featureData.description;
            });
        }

    Thanks very much! Wendy
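    One possible shape for this, using only the Maps API objects the script above already uses: open a single shared InfoWindow from inside the KML click listener (kmlEvent.latLng is roughly where the overlay was clicked, and the standard InfoWindow already has its own close button), and close it from the checkbox handler when the layer is switched off. A sketch for one of the four maps; the other three would follow the same pattern:

        var infoWindow = new google.maps.InfoWindow();

        // In initialize(): show the KML description in the info window instead of the sidebar div.
        google.maps.event.addListener(philpottsmap, 'click', function (kmlEvent) {
            infoWindow.setContent(kmlEvent.featureData.description);
            infoWindow.setPosition(kmlEvent.latLng);   // near the spot that was clicked
            infoWindow.open(map);
        });

        // Checkbox handler: hide the overlay and its info window together.
        function showphilpottsmap(philpottsmapcheck) {
            if (philpottsmapcheck.checked) {
                philpottsmap.setMap(map);
            } else {
                philpottsmap.setMap(null);
                infoWindow.close();
            }
        }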

    Read the article

  • Help with abstract class in Java with private variable of type List<E>

    - by Nazgulled
    Hi,

    It's been two years since I last coded something in Java, so my coding skills are a bit rusty. I need to save data (a user profile) in different data structures, ArrayList and LinkedList, which both come from List. I want to avoid code duplication where I can, and I also want to follow good Java practices. For that, I'm trying to create an abstract class where the private variables will be of type List<E>, and then create two subclasses depending on the type of variable. The thing is, I don't know if I'm doing this correctly; you can take a look at my code:

    Class: DBList

        import java.util.List;

        public abstract class DBList {
            private List<UserProfile> listName;
            private List<UserProfile> listSSN;

            public List<UserProfile> getListName() {
                return this.listName;
            }

            public List<UserProfile> getListSSN() {
                return this.listSSN;
            }

            public void setListName(List<UserProfile> listName) {
                this.listName = listName;
            }

            public void setListSSN(List<UserProfile> listSSN) {
                this.listSSN = listSSN;
            }
        }

    Class: DBListArray

        import java.util.ArrayList;

        public class DBListArray extends DBList {
            public DBListArray() {
                super.setListName(new ArrayList<UserProfile>());
                super.setListSSN(new ArrayList<UserProfile>());
            }

            public DBListArray(ArrayList<UserProfile> listName, ArrayList<UserProfile> listSSN) {
                super.setListName(listName);
                super.setListSSN(listSSN);
            }

            public DBListArray(DBListArray dbListArray) {
                super.setListName(dbListArray.getListName());
                super.setListSSN(dbListArray.getListSSN());
            }
        }

    Class: DBListLinked

        import java.util.LinkedList;

        public class DBListLinked extends DBList {
            public DBListLinked() {
                super.setListName(new LinkedList<UserProfile>());
                super.setListSSN(new LinkedList<UserProfile>());
            }

            public DBListLinked(LinkedList<UserProfile> listName, LinkedList<UserProfile> listSSN) {
                super.setListName(listName);
                super.setListSSN(listSSN);
            }

            public DBListLinked(DBListLinked dbListLinked) {
                super.setListName(dbListLinked.getListName());
                super.setListSSN(dbListLinked.getListSSN());
            }
        }

    1) Does any of this make sense? What am I doing wrong? Do you have any recommendations?

    2) It would make more sense to me to have the constructors in DBList and call them (with super()) in the subclasses, but I can't do that because I can't initialize a variable with new List<E>().

    3) I was taught to do deep copies whenever possible, and for that I always override the clone() method of my classes and code it accordingly. But those classes never had any lists, sets or maps in them; they only had strings, ints and floats. How do I do deep copies in this situation?
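    On question 2, one commonly used shape is to give the abstract class a protected constructor that takes the already-constructed lists, so the subclasses only decide which List implementation to pass; the deep-copy question then comes down to copying the elements one by one. A sketch, assuming UserProfile has (or is given) a copy constructor; the stub below is a stand-in for the real class:

        import java.util.ArrayList;
        import java.util.LinkedList;
        import java.util.List;

        class UserProfile {                                  // minimal stand-in for the real class
            final String name;
            UserProfile(String name) { this.name = name; }
            UserProfile(UserProfile other) { this.name = other.name; }   // copy constructor
        }

        abstract class DBList {
            private final List<UserProfile> listName;
            private final List<UserProfile> listSSN;

            protected DBList(List<UserProfile> listName, List<UserProfile> listSSN) {
                this.listName = listName;                    // subclasses choose the implementation
                this.listSSN = listSSN;
            }

            public List<UserProfile> getListName() { return listName; }
            public List<UserProfile> getListSSN()  { return listSSN; }

            /** Deep copy: duplicate every element rather than sharing the references. */
            protected void copyFrom(DBList other) {
                for (UserProfile p : other.getListName()) listName.add(new UserProfile(p));
                for (UserProfile p : other.getListSSN())  listSSN.add(new UserProfile(p));
            }
        }

        class DBListArray extends DBList {
            public DBListArray() { super(new ArrayList<UserProfile>(), new ArrayList<UserProfile>()); }
            public DBListArray(DBListArray src) { this(); copyFrom(src); }
        }

        class DBListLinked extends DBList {
            public DBListLinked() { super(new LinkedList<UserProfile>(), new LinkedList<UserProfile>()); }
            public DBListLinked(DBListLinked src) { this(); copyFrom(src); }
        }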

    Read the article

  • Survey Data Model - How to avoid EAV and excessive denormalization?

    - by AlexDPC
    Hi everyone,

    My database skills are mediocre at best and I have to design a data model for survey data. I have spent some thought on this, and right now I feel that I am stuck between some kind of EAV model and a design involving hundreds of tables, each with hundreds of columns (and thousands of records). There must be a better way to do this, and I hope the wise folks on this forum can help me. I have already searched various forums, but I couldn't really find a solution. If it has already been given elsewhere, please excuse me and provide me with a link so I can read up on it.

    Some assumptions about the data I have to deal with:

    Each survey consists of 1 to n questionnaires.
    Each questionnaire consists of 100-2,000 questions (please ignore that 2,000 questions really sounds like a lot to answer...).
    Questions can be of various types: multiple-choice, free text, a number (like age, income, percentages, ...).
    Each survey involves 10-200 countries. (These are not the respondents; the respondents are actually people in the countries.)
    Depending on the type of questionnaire, each questionnaire is answered by 100-20,000 respondents per country.
    A country can adapt the questionnaires for a survey, i.e. add, remove or edit questions.
    The data for one country is gathered in a separate database in that country. There is no possibility of online integration from the start; the data for all countries has to be integrated later. This means, for example, that if a country has deleted a question, that data must somehow be derived from what they sent in order to achieve a uniform design across all countries.
    I will have to write the integration and cleaning software, which will need to work with every country's data.
    In the end the data needs to be exported to flat files, one rectangular grid per country and questionnaire.

    I have already discussed this topic with people from various backgrounds and have not come to a good solution yet. I have mainly heard two kinds of opinions.

    The domain experts, who are used to working with flat files (spreadsheet-style) for data processing and analysis, vote for a denormalized structure with loads of tables and columns as I described above (1 table per country and questionnaire). This sounds terrible to me, because I learned that wide tables are to be avoided, it will be annoying to determine which columns are actually in a table when working with it, the database will become cluttered with hundreds of tables (or I would even need to set up multiple databases, each with a similar yet slightly different design), etc.

    O-O programmers vote for a strongly "normalized" design, which would effectively lead to a central table containing all the answers from all respondents to all questions. This table would either need to contain a column of type sql_variant, or multiple answer columns with different types to store answers of different types (multiple choice, free text, ...). The former would essentially be an EAV model; I tend to follow Joe Celko here, who strongly discourages its use (he calls it OTLT or "One True Lookup Table"). The latter would imply that each row would contain null cells for the not-applicable types by design.

    Another alternative I could think of would be to create one table per answer type, i.e. one for multiple-choice questions, one for free text questions, etc. That's not so generic, it would lead to a lot of union joins, I think, and I would have to add a table if a new answer type is invented.

    Sorry for boring you with all this text, and thank you for your input!
Cheers, Alex PS: I asked the same question here: http://www.eggheadcafe.com/community/aspnet/13/10242616/survey-data-model--how-to-avoid-eav-and-excessive-denormalization.aspx
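    A middle ground that often comes up between sql_variant and one-table-per-questionnaire is a single answer row per respondent and question, with one typed column per answer kind and a CHECK that exactly one of them is filled. A rough SQL sketch (table and column names are illustrative only, not a recommendation for the poster's exact schema):

        CREATE TABLE Question (
            QuestionId      INT PRIMARY KEY,
            QuestionnaireId INT NOT NULL,
            AnswerType      CHAR(1) NOT NULL    -- 'C' choice, 'T' free text, 'N' numeric
        );

        CREATE TABLE Answer (
            RespondentId INT NOT NULL,
            QuestionId   INT NOT NULL REFERENCES Question(QuestionId),
            ChoiceValue  INT NULL,
            TextValue    NVARCHAR(4000) NULL,
            NumericValue DECIMAL(18,4) NULL,
            PRIMARY KEY (RespondentId, QuestionId),
            CHECK (
                (ChoiceValue IS NOT NULL AND TextValue IS NULL     AND NumericValue IS NULL) OR
                (ChoiceValue IS NULL     AND TextValue IS NOT NULL AND NumericValue IS NULL) OR
                (ChoiceValue IS NULL     AND TextValue IS NULL     AND NumericValue IS NOT NULL)
            )
        );

    Under this sketch, per-country questionnaire adaptations become rows in Question rather than schema changes, and the flat-file export is a pivot of Answer per country and questionnaire.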

    Read the article

  • Please recommend me intermediate-to-advanced Python books to buy.

    - by anonnoir
    I'm in the final year, final semester of my law degree, and will be graduating very soon. (April, to be specific.) But before I begin practice, I plan to take two months off, purely for serious programming study. So I'm currently looking for some Python-related books, gauged intermediate to advanced, which are interesting (because of the subject matter itself) and possibly useful to my future line of work. I've identified two possible purchases at the moment:

    Natural Language Processing with Python. The law deals mostly with words, and I've quite a number of ideas as to where I might go with NLP: data extraction, summaries, client management systems linked with document templates, etc.

    Programming Collective Intelligence. This book fascinates me, because I've always liked the idea of machine learning (and I'm currently studying it on the side too, for fun). I'd like to build and play around with Web 2.0 applications, and who knows if I can apply some of the things I learn to my legal work. (E.g. playground experiments to determine how and under what circumstances judges might be biased, by forcing algorithms to pore through judgments and calculate similarities, etc.)

    Please feel free to criticize my current choices, but do at least offer or recommend other books that I should read in their place. My budget can deal with 4 books, max. These books will be used heavily throughout the two months; I will be reading them back to back, absorbing the explanations given, and hacking away at their code. Also, the books themselves should satisfy two main criteria:

    Application. The book must teach how to solve problems. I like reading theory, but I want to build things and solve problems first. Even playful applications are fine, because games and experiments always have real-world applications sooner or later.

    Readability. I like reading technical books, no matter how difficult they are. I enjoy the effort and the feeling that you're learning something. But the book shouldn't contain code or explanations that are too cryptic or erratic. Even if it's difficult, the book's content should be accessible with focused reading.

    Note: I realize that I am somewhat of a beginner to the whole programming thing, so please don't put me down. But from experience, I think it's better to aim up and leave my comfort zone when learning new things, rather than to just remain stagnant the way I am. (At least the difficulty gives me focus: i.e. if a programmer can be that good, perhaps if I sustain my own efforts I too can be as good as him someday.) If anything, I'm also a very determined person, so two months of day-to-night intensive programming study with nothing else on my mind should, I think, give me a bit of a fighting chance to push my programming skills to a much higher level.

    Read the article

  • How to cope with developing against a poor 3rd party API/application?

    - by wsanville
    I'm a web developer, and my organization has recently started to use a proprietary ASP.NET CMS for our web sites. I was excited to get started using the CMS, thinking it would bring a lot of value to our end users and be fun to work with, since my skills are a good match for the types of projects we're using it for. That was about a year ago. Since then, we've run into all kinds of issues, from blatant bugs in the product, to nasty edge cases in the APIs, to extremely poor documentation for developers. On about a weekly basis we are forced to pursue workarounds and rewrite some of the out-of-the-box functionality, and we even find some of the basic features unusable. In many cases, since this is a closed-source application (and obfuscated, of course), there's nothing we can do as developers to solve these issues.

    So my question is: how does one attempt to develop a good application in such a scenario? The application mostly works when using the exact out-of-the-box behavior, or one of the company's starter sites. However, my attempts to use the underlying APIs to implement slightly different, yet reasonable, behavior have proved to be extremely time consuming (not to mention just as buggy), given the lack of good information about the APIs. I've given this a lot of thought, and my conflicting viewpoints are the following:

    Strongly advise against any customization of the CMS, as development time will rise exponentially, or the work may even have an extremely high chance of failing. While this is accurate, I do not want to give the impression that I am not willing to code my own solutions to problems and take the initiative to implement something difficult or complex. I don't want to be perceived as someone who is unmotivated, lazy, or not knowledgeable enough to do anything complex, because this is simply not the case. I love coding my own solutions and trying new/difficult things; I just dislike the vendor app we're using.

    Continue on the path I'm on now, which is hacking my way past all the issues I encounter and trying my best to deliver an application that meets the needs and specs exactly. My goals are to make it as seamless and easy to use as possible for the end user, even when integrating the CMS with our other applications internally. The problem I'm finding with this approach is that it is very time consuming. I open support cases with the vendor on a regular basis to solve issues and to gain knowledge of their APIs, but this is extremely time consuming, and in some cases it leads to dead ends. I post on the vendor's forums on a regular basis but have become frustrated, as most of my posts get 0 replies.

    So, what would you, a reasonable developer, do in this case? How can I make the best of the situation? And just for fun, here are some of the code smells and anti-patterns I've dealt with in the product (aside from their own code blatantly failing):

    Use of StringBuilder to concatenate a giant string that is hard coded and does not change. They use it to concatenate their Javascript and write it out into the body tags of their pages.
    Methods that accept object or Microsoft.VisualBasic.Collection as the parameters. In the case of the VB Collection, the data is not a list of any kind; it's used instead of making a class.
    Methods that return a Hashtable of VB Collections.
    Method names of the form MethodName_v45, MethodName_v20, etc...
    Multiple classes with the same name in different namespaces with different functionality/behavior.
    Intellisense that reads "Note: this parameter is non functional".
    Complete lack of coding standards; the API is filled with magic numbers and magic strings.
    Properties with a getter of type object that accept totally different things, like enums or strings, and throw exceptions at runtime when you pass in something not supported.
    And much, much more...

    Read the article

  • Trying to reinvent the wheel of StackOverflow to have a good learning experience. Need some suggestions

    - by Legend
    I want to learn, and I am not able to do it unless I have a real "mission" to complete. SO is my favorite site and I can't imagine a better experience than actually recreating it, but not in ASP: I'd like to use PHP + MySQL + jQuery. So far I have been a self-taught programmer, but I would like to master one paradigm that forces you to adhere to standards. For instance, jQuery recently forced me to follow some "rules": the plugins were supposed to be written in a particular way, and that's it. When I started off, everything seemed like Greek and Latin, but when I finished a very small plugin I felt really happy, because it forced me to program in a certain way. I am looking for something like this, only in a larger project.

    I've heard a lot about MVC and all, but I am confused about the various frameworks out there. Zend seems awesome, but it looks heavy at the same time and also requires you to have a lot more control over the web server, whereas CakePHP is a good and fast framework that needs only a little control. Do I use one of these or just write my own MVC? I have the following goals:

    The site should be fast. (I know this depends on my coding skills, but I will learn on my way; the framework itself should not slow me down.)
    Setting up the site should not require the command line. This requirement is OK during development, but some frameworks, like Symfony, require you to initialize certain things through the command line.
    It should support pluggable modules. For instance, if I want to be able to use the FCK editor, I should be able to organize things in a good way.
    It should be possible to extend. For instance, SO is mainly a Q&A site, but I should be able to logically extend it into an Idea Management System (optional, but I'm curious). This goes more into code re-usability, I guess.

    I am comfortable with MySQL, so I should be done with the database design etc. with some serious effort. As for PHP, I can write code on my own but haven't really used any frameworks that much. jQuery, I started off with recently and love it. I would be glad if someone could guide me through these initial steps. Precisely, when designing something like SO, I have the following questions:

    Do I use a framework? If yes, should it be MVC? If MVC, which one is a good and scalable one? (I'd love something like jQuery that will not die anytime soon.)
    How do I balance the functionality? The same logic can sometimes be made server-centric or client-centric (more Ajax?). Is it a good idea to make a heavy JavaScript site considering the recent advances in client-side JS processing?

    Just in case anyone is wondering, I am not interested in commercializing any of this. I need a reason to learn something :)

    Read the article

  • jQuery only works the first time

    - by Tripping
    I am trying to teach myself general web development skills. I am trying to create an image upload with preview functionality using the HTML5 File API. So far I have created a file input that shows a preview of the image when one is selected. The HTML markup is below:

        <div>
            <!-- Photos -->
            <fieldset>
                <legend>PropertyPhotos</legend>
                <div class="upload-box" id="upload-box-1">
                    <div class="preview-box">
                        <img alt="Field for image cutting" id="preview_1" src="@Url.Content("~/Content/empty.png")" />
                    </div>
                    <div>
                        @Html.FileFor(model => model.File1)
                        @Html.ValidationMessageFor(model => model.File1)
                    </div>
                </div>
                <div class="upload-box" id="upload-box-2">
                    <div class="preview-box">
                        <img alt="Field for image cutting" id="preview_2" src="@Url.Content("~/Content/empty.png")" />
                    </div>
                    <div>
                        @Html.FileFor(model => model.File2)
                        @Html.ValidationMessageFor(model => model.File2)
                    </div>
                </div>
                <div class="upload-box" id="upload-box-3">
                    <div class="preview-box">
                        <img alt="Field for image cutting" id="preview_3" src="@Url.Content("~/Content/empty.png")" />
                    </div>
                    <div>
                        @Html.FileFor(model => model.File3)
                        @Html.ValidationMessageFor(model => model.File3)
                    </div>
                </div>
            </fieldset>
        </div>

    The jQuery to show the preview and then display the next upload box is as follows:

        <script type="text/javascript">
            $(document).ready(function () {
                // show first box
                $("#upload-box-1").fadeIn();

                // Get current & next step index
                var stepNum = $('div.upload-box').attr('id').replace(/[^\d]/g, '');
                var nextNum = parseInt(stepNum) + 1;

                // Get the preview image tag
                var preview = $('#preview_' + stepNum);

                // Load preview on file tag change and display second upload-box
                $('#File' + stepNum).change(function (evt) {
                    var f = evt.target.files[0];
                    var reader = new FileReader();
                    if (!f.type.match('image.*')) {
                        alert("The selected file does not appear to be an image.");
                        return;
                    }
                    reader.onload = function (e) {
                        preview.attr('src', e.target.result);
                    };
                    reader.readAsDataURL(f);
                    // Show next upload-box
                    $("#upload-box-" + nextNum).fadeIn();
                });
            });
        </script>

    However, this code only works the first time: on selecting a file it shows a preview and then shows the next upload box, but when I browse for the second file it doesn't show any preview. From what I have read, I need to close the jQuery function so that it can be initialised again, but I am not sure how to do that. I would be grateful for any help.
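    A probable culprit: $('div.upload-box').attr('id') only ever returns the id of the first matched element, so stepNum is always 1 and the change handler is only ever bound to #File1. One possible rework (a sketch against the same markup, not a drop-in): bind a single handler to every file input and work out the box number inside the handler.

        $(document).ready(function () {
            $('#upload-box-1').fadeIn();

            // One handler for all the inputs; each call figures out which box it belongs to.
            $('.upload-box input[type=file]').change(function (evt) {
                var box  = $(this).closest('.upload-box');
                var num  = parseInt(box.attr('id').replace(/[^\d]/g, ''), 10);
                var file = evt.target.files[0];

                if (!file || !file.type.match('image.*')) {
                    alert('The selected file does not appear to be an image.');
                    return;
                }

                var reader = new FileReader();
                reader.onload = function (e) {
                    $('#preview_' + num).attr('src', e.target.result);   // show the preview
                };
                reader.readAsDataURL(file);

                $('#upload-box-' + (num + 1)).fadeIn();                  // reveal the next box
            });
        });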

    Read the article

  • JavaScript: Problems with multiple setIntervals running simultaneously

    - by user340879
    Hello, my first post here. I want to make a horizontal menu with submenus sliding down on mouseover. I know I could use jQuery, but this is to practice my JavaScript skills. I use the following code:

        var up = new Array()
        var down = new Array()
        var submenustart

        function titleover(headmenu, inter)
        {
            submenu = headmenu.lastChild
            up[inter] = window.clearInterval(up[inter])
            down[inter] = window.setInterval("slidedown(submenu)", 1)
        }

        function slidedown(submenu)
        {
            if (submenu.offsetTop < submenustart)
            {
                submenu.style.top = submenu.offsetTop + 1 + "px"
            }
        }

        function titleout(headmenu, inter)
        {
            submenu = headmenu.lastChild
            down[inter] = window.clearInterval(down[inter])
            up[inter] = window.setInterval("slideup(submenu)", 1)
        }

        function slideup(submenu)
        {
            if (submenu.offsetTop > submenustart - submenu.clientHeight + 1)
            {
                submenu.style.top = submenu.offsetTop - 1 + "px"
            }
        }

    The variable submenustart gets assigned a value in another function which is not relevant to my question. The HTML looks like this:

        <table class="hoofding" id="hoofding">
          <tr>
            <td onmouseover="titleover(this, 0)" onmouseout="titleout(this, 0)"><a href="#" class="hoofdinglink" id="hoofd1">AAAA</a>
              <table class="menu">
                <tr><td><a href="...">1111</a></td></tr>
                <tr><td><a href="...">2222</a></td></tr>
                <tr><td><a href="...">3333</a></td></tr>
              </table></td>
            <td onmouseover="titleover(this, 1)" onmouseout="titleout(this, 1)"><a href="#" class="hoofdinglink">BBBB</a>
              <table class="menu">
                <tr><td><a href="...">1111</a></td></tr>
                <tr><td><a href="...">2222</a></td></tr>
                <tr><td><a href="...">3333</a></td></tr>
                <tr><td><a href="...">4444</a></td></tr>
                <tr><td><a href="...">5555</a></td></tr>
              </table></td>
            ...
          </tr>
        </table>

    What happens is the following: if I go over and out of (for example) menu A, it works fine. If I now go over menu B, the interval applied to A is now applied to B, so there are two interval functions applied to B: the one originally for A and a new one triggered by the mouseover on B. If I then go back to A, all the intervals are applied to A. I have been searching for hours and I am completely stuck. Thanks in advance.
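    The symptom described is what string-based setInterval tends to produce: the expression "slidedown(submenu)" is re-evaluated on every tick against the global submenu variable, and because submenu is assigned without var, both menus share it, so whichever menu was hovered last wins. A sketch of one possible fix, passing a closure instead of a string (the 10 ms delay here is an arbitrary choice, not taken from the original):

        var up = [];
        var down = [];
        var submenustart;    // still assigned elsewhere, as in the original code

        function titleover(headmenu, inter) {
            var submenu = headmenu.lastChild;          // local, not a shared global
            window.clearInterval(up[inter]);
            down[inter] = window.setInterval(function () {
                slidedown(submenu);                    // the closure keeps its own submenu
            }, 10);
        }

        function titleout(headmenu, inter) {
            var submenu = headmenu.lastChild;
            window.clearInterval(down[inter]);
            up[inter] = window.setInterval(function () {
                slideup(submenu);
            }, 10);
        }

        // slidedown() and slideup() stay exactly as in the question.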

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been in meeting and working with some pretty daunting talents. I’d like to say that Dom Reed’s talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally, presenting intellectual programs, Andrew, master of the ukulele, with his pioneering local history work, and Tony with marathon running and his past as a university lecturer. However, Dom, who is Red Gate’s head of creative design and who did the preliminary design work for Simple-Talk, has taken the art photography to an extreme that was impossible before Photoshop. He’s not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble.   Have a look at some of the Flickr pages. Uncle Spike The B-Men – Woolverine The 2011 BoyZ iN Sink reunion tour turned out to be their last Error 404 – Flibble not found Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material by several courses in photography. Although his work was for a while ignored by the more conventional world of ‘art’ photography they became famous through the internet. His photos have received well over a million views on Flickr. It was definitely time to turn this work into a book, because the whole sequence of images has its maximum effect when seen in sequence. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project and I can quote from the report by the excellent Cabume website , the source of Tech news from the ‘Cambridge cluster’ Put really simply Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however is that Mr Flibble is supported in his endeavour by Reed’s top notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: In addition to his full-time role at Cambridge software house,Red Gate Software as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the . And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response it is. At the time of writing, just a few days after it went live, ‘I Drink Lead Paint: An absurd photography book by Mr Flibble’ had raised £1,545 of the £10,000 target it needs to raise by the Friday 30 November deadline from 37 backers. 
Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don’t think for a second that he doesn’t have a firm grasp on the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section ‘risks and challenges’ on his Kickstarter page his statement begins: “An angry horde of telepathic iguanas discover the world’s last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love.” At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble’s Kickstarter adventure and felt that his responses were worth publishing in full: Firstly, how did you manage it – holding down a full time job and also conceiving and executing these ideas on a daily basis? I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensibly rubbish and usually revolved around food, just occasionally they’d give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo of that would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I’m proud that I never missed a day, but it was definitely hard when you were late from work, tired or doing something socially on the same day. I don’t watch TV, which I guess really helps, because I’d frequently be spending 2-5 hours taking and processing the photos every day. Are there any overlaps between software development and creative thinking? Software is an inherently creative business and the speed that it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty I suppose, and I guess I’m continually exposed to a lot of possible sources of inspiration. How specifically will this Kickstarter project allow you to test the commercial appeal of your work and do you plan to get the book into shops? It’s taken a while to be confident saying it, but I know that people like the work that I do. I’ve had well over a million views of my pictures, many humbling comments and I know I’ve garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there’s worth to my work beyond just making people smile. In an online world where there’s an abundance of freely available content, can you hope to receive anything from what you do, or would people just move onto the next piece of content if you happen to ask for some support? 
A book has been the single-most requested thing that people have asked me to produce and it’s something that I feel would showcase my work well. It’s just hard to convince people in the publishing industry just now to take any kind of risk – they’ve been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear picture that publishers might hear, or it gives me the confidence enough to invest in myself a bit more – hard to do when you’re riddled with self-doubt! I’d love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they’re Christmas shopping and recognizing that it’d be just the perfect gift for their difficult to buy for friend or relative. That said, working in the software industry means I’m clearly aware of how I could use technology to distribute my work, but I can’t deny that there’s something very appealing to having a physical thing to hold in your hands. If the project is successful is there a chance that it could become a full-time job? At the moment that seems like a distant dream, as should this be successful, there are many more steps I’d need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably. So there is is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light in the format it deserves.  

    Read the article

< Previous Page | 70 71 72 73 74 75 76  | Next Page >