Search Results

Search found 5473 results on 219 pages for 'coding guidelines'.


  • Listing C/C++ functions (Code analysis in Unix)

    - by Jond
    Whether we're maintaining unfamiliar code or checking out the implementation details of an Apache module, it can help if we can quickly traverse the code and build up an overview of what we're looking at. Grep serves most of my daily needs, but there are some cases where it just won't do. Here's a common example of how it can help. To find the definition of a PHP function I'm interested in, I can type this at the command line: grep -r "function myfunc" . This could be adapted very quickly to C or C++ if we know the return type, but things become more complicated if, say, I want to list every method that my class provides: grep "function " ./src/mine.class.php Since there's no single keyword that denotes a function or method in C++, and because its syntax is generally more complex, I think I'd need some kind of static code analysis tool, smart use of the C preprocessor, or blind faith that the coder followed strict coding guidelines (amount of whitespace, position of curly braces, etc.) to get these sorts of results. What would you recommend? P.S. Be nice, this is my first post ;-) :p
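    For C and C++ this is the niche that indexing tools such as Exuberant Ctags fill: they parse the source rather than pattern-matching it, so they work regardless of the coder's layout style. A minimal sketch, assuming Exuberant Ctags is installed (the paths are illustrative):

        # Build a tags index for the whole tree (usable from vim/emacs)
        ctags -R .

        # Or print a plain-text cross-reference of function definitions to stdout
        ctags -x --c-kinds=f src/*.c
        ctags -x --c++-kinds=f src/*.cpp

    cscope (cscope -R -b to build its database) covers similar ground and additionally answers "who calls this function?" style queries.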


  • Secrets of delivering products as large as .NET?

    - by Joan Venge
    In the software companies I have seen, it's really hard to work on very large products where everything depends on everything else. For instance, Microsoft works on C#, F#, .NET, WPF and Visual Studio, and all of these are interconnected. I don't know how many people are involved, but if it's in the hundreds, how do they keep in sync with everything, so that they design and implement features without conflicting with the dependencies and future plans of other products? If MS is able to do this, they must have a very good system. Any guidelines or secrets for delivering very large software products, at MS or elsewhere?


  • Optimal search queries

    - by Macros
    Following on from my last question (http://stackoverflow.com/questions/2788082/sql-server-query-performance), and discovering that my method of allowing optional parameters in a search query is sub-optimal, does anyone have guidelines on how to approach this? For example, say I have an application table, a customer table and a contact details table, and I want to create an SP which allows searching on some, none or all of surname, home phone, mobile and app ID. I may use something like the following: select * from application a inner join customer c on a.customerid = c.id left join contact hp on (c.id = hp.customerid and hp.contacttype = 'homephone') left join contact mob on (c.id = mob.customerid and mob.contacttype = 'mobile') where (a.ID = @ID or @ID is null) and (c.Surname = @Surname or @Surname is null) and (HP.phonenumber = @Homephone or @Homephone is null) and (MOB.phonenumber = @Mobile or @Mobile is null) The schema used above isn't real, and I wouldn't use select * in a real-world scenario; it is the construction of the where clause I am interested in. Is there a better approach, either dynamic SQL or an alternative which can achieve the same result, without the need for many nested conditionals? Some SPs may have 10-15 criteria used in this way.
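    One widely used answer to this "optional search criteria" pattern is to build only the predicates that were actually supplied, using parameterised dynamic SQL via sp_executesql (which keeps plans cached and avoids injection). A sketch against the made-up schema above:

        declare @sql nvarchar(max) = N'select a.ID, c.Surname
        from application a
        inner join customer c on c.id = a.customerid
        left join contact hp on hp.customerid = c.id and hp.contacttype = ''homephone''
        left join contact mob on mob.customerid = c.id and mob.contacttype = ''mobile''
        where 1 = 1';

        if @ID is not null        set @sql += N' and a.ID = @ID';
        if @Surname is not null   set @sql += N' and c.Surname = @Surname';
        if @Homephone is not null set @sql += N' and hp.phonenumber = @Homephone';
        if @Mobile is not null    set @sql += N' and mob.phonenumber = @Mobile';

        exec sp_executesql @sql,
            N'@ID int, @Surname nvarchar(100), @Homephone varchar(30), @Mobile varchar(30)',
            @ID, @Surname, @Homephone, @Mobile;

    Each distinct combination of supplied parameters then gets its own appropriately optimised plan. (On SQL Server 2008 SP1 CU5 and later, keeping the original static query and adding OPTION (RECOMPILE) is a simpler alternative.)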


  • Is it possible to block a certain character or group of characters from being entered into a text box or text area?

    - by Param-Ganak
    Hello friends! I have a text input field, like a text box or text area, and I want to prevent the user from entering certain characters or groups of characters: for example # * @ and the digits 0-9. Whenever the user presses one of those keys, the character should not appear in the input field at all; it should be blocked outright. Is this possible in jQuery? Please give me some guidelines to achieve it. Thank you
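    A common approach is to intercept the keypress event and suppress the blocked characters before they reach the field, with a paste handler to scrub anything pasted in. A sketch in jQuery (the #myInput selector and the blocked set are illustrative):

        var blocked = /[#*@0-9]/;

        $('#myInput').keypress(function (e) {
            var ch = String.fromCharCode(e.which);
            if (blocked.test(ch)) {
                e.preventDefault(); // the character never appears in the field
            }
        });

        // keypress doesn't fire for pasted text, so clean that up separately
        $('#myInput').bind('paste', function () {
            var input = $(this);
            setTimeout(function () {
                input.val(input.val().replace(/[#*@0-9]/g, ''));
            }, 0);
        });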


  • Checking the return code using Python (Mac)

    - by cyberbemon
    I have written a script that checks if an SVN repo is up and running; the result is based on the return value. import subprocess url = "validurl" def check_svn_status(): subprocess.call(['svn info '+url],shell=True) def get_status(): subprocess.call('echo $?',shell=True) def main(): check_svn_status() get_status() if __name__ == '__main__': main() The problem I'm facing is that if I change the url to something that doesn't exist, I still get the return value as 0, but if I were to run this outside the script, i.e. go to the terminal, type svn info wrongurl and then do an echo $?, I get a return value of 1. But I can't re-create this in Python. Any guidelines?
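    The snag is that subprocess.call('echo $?', shell=True) starts a brand-new shell, and $? in that shell reflects the last command that shell ran, which is nothing, so it always prints 0. subprocess.call itself returns the exit status of the command it executed, so get_status() can be dropped entirely and the return value used directly. A sketch (the URL is a placeholder):

        import subprocess

        url = "http://svn.example.com/repo"  # placeholder

        def check_svn_status():
            # call() blocks until 'svn info' finishes and returns its exit status
            return subprocess.call('svn info ' + url, shell=True)

        def main():
            returncode = check_svn_status()
            if returncode != 0:
                print("svn info failed with exit status %d" % returncode)
            else:
                print("repository is up")

        if __name__ == '__main__':
            main()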


  • What does "Only catch exceptions you can handle" really mean?

    - by KyleLib
    I'm tasked with writing an Exception Handling Strategy and Guidelines document for a .NET/C# project I'm working on, and I'm having a tough go at it. There's plenty of information available on how and when to throw, catch and wrap exceptions, but I'm looking for descriptions of what sorts of things should go on inside the catch block, short of wrapping and rethrowing the exception. try { DoSomethingNotNice(); } catch (ExceptionICanHandle ex) { //Looking for examples of what people are doing in catch blocks //other than rethrowing, or wrapping the exception and throwing. } Thanks in advance
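    In practice, "handling" usually means one of a few things: substituting a fallback value, compensating or cleaning up, retrying a transient operation, or logging a genuinely non-fatal failure. A hedged C# sketch (the logger, the fallback and the exception choices are illustrative, not from any particular framework):

        try
        {
            DoSomethingNotNice();
        }
        catch (FileNotFoundException ex)
        {
            // Handleable: a sensible fallback exists, so use it
            log.Warn("Settings file missing; falling back to defaults", ex);
            settings = Settings.Defaults;
        }
        catch (TimeoutException ex)
        {
            // Handleable: transient fault, compensate by retrying once
            log.Info("Timed out, retrying", ex);
            DoSomethingNotNice();
        }
        catch (Exception ex)
        {
            // Not handleable here: add context and let it propagate
            log.Error("Unexpected failure in DoSomethingNotNice", ex);
            throw;
        }

    The guideline then writes itself: if the catch body cannot restore the program to a known-good state (fallback, compensation, retry), it shouldn't catch; log and rethrow instead.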


  • JMX Based Monitoring - Part Two - JVM Monitoring

    - by Anthony Shorten
    This is the second article in the series focussing on the JMX-based monitoring capabilities possible with the Oracle Utilities Application Framework. In all versions of the Oracle Utilities Application Framework, it is possible to use the basic JMX-based monitoring available with the Java Virtual Machine to provide basic statistics about the JVM. In Java 5 and above, the JVM automatically allows local monitoring of the JVM statistics from an appropriate console. When I say local, I mean the monitoring tool must be executed from the same machine (and in some cases by the same user that is running the JVM) to connect to the JVM directly. If you are using jconsole, for example, then you must have access to a GUI (X-Windows or Windows) to display the jconsole output. This is the easiest way of monitoring without doing too much configuration, but it is not always practical. Java offers a remote monitoring capability to allow you to connect to a remotely executing JVM from a console (like jconsole). To use this facility, additional JVM options must be added to the command line that started the JVM. Details of the additional options for the version of Java you are running are located at the JMX information site. Typically, to remotely connect to a running JVM, that JVM must be configured with the following categories of options: JMX Port - The JVM must allow connections on a listening port specified on the command line. Connection security - The connection to the JVM can be secured. This is recommended, as JMX is not just a monitoring protocol, it is a management protocol. It is possible to change values in a running JVM using JMX and there are NO "Are you sure?" safeguards. For an Oracle Utilities Application Framework based application, there are a few guidelines when configuring and using this JMX-based remote monitoring of the JVMs: Online JVM - The JVM used to run the online system is embedded within the J2EE Web Application Server. To enable JMX monitoring on this JVM you can either change the startup script that starts the Web Application Server or check whether your J2EE Web Application Server natively supports JVM statistics collection. Child JVMs (COBOL only) - The Child JVMs should not be monitored using this method as they are recycled regularly by the configuration, and therefore statistics collected are of little value. Batch Threadpools - Batch already has a JMX interface (which will be covered in another article). Additional monitoring can be enabled, but the base supported monitoring is sufficient for most needs. If you are an Oracle Utilities Application Framework site, then you can specify the additional options for JMX Java monitoring on the OPTS parameters supported for each component of the architecture. Just ensure the port numbers used are unique for each JVM running on any machine.
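    For reference, the standard system properties for enabling remote JMX on a JVM look like the following; the port and file paths are examples only, and the exact set varies by Java version, so check the JMX documentation before using them:

        -Dcom.sun.management.jmxremote
        -Dcom.sun.management.jmxremote.port=9999
        -Dcom.sun.management.jmxremote.authenticate=true
        -Dcom.sun.management.jmxremote.ssl=true
        -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote.password
        -Dcom.sun.management.jmxremote.access.file=/path/to/jmxremote.access

    In an Oracle Utilities Application Framework installation, these would be appended to the relevant OPTS parameter for the component whose JVM you want to monitor.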


  • Another Marketing Conference, part two – the afternoon

    - by Roger Hart
    In my previous post, I covered the morning sessions at AMC2012. Here's the rest of the write-up. I've skipped Charles Nixon's session, which was a blend of funky futurism and professional development advice, but you can see his slides here. I've also skipped the Google presentation, as it was a little thin on insight.
    6 – Brand ambassadors: Getting universal buy-in across the organisation, Vanessa Northam
    Slides are here. This was the strongest reinforcement of the idea that brand and campaign values need to be delivered throughout the organization if they're going to work. Vanessa runs internal communications at e-on, and shared her experience of using internal comms to align an organization and thereby get the most out of a campaign. She views the purpose of internal comms as: "…to help leaders, to communicate the purpose and future of an organization, and support change." This (and culture) primes front line staff, which creates customer experience and spreads brand. You ensure a whole organization knows what's going on with both internal and external comms. If everybody is aligned and informed, if everybody can clearly articulate your brand and campaign goals, then you can turn everybody into an advocate. Alignment is a powerful tool for delivering a consistent experience and message. The pathological counter-example is the one in which a marketing message goes out, which creates inbound customer contacts that front line contact staff haven't been briefed to handle. The NatWest campaign was again mentioned in this context. The good example was e-on's cheaper tariff campaign. Building a groundswell of internal excitement, and even running an internal launch, meant everyone could contribute to a good customer experience. They found that meter readers were excited – not a group they'd considered as obvious in providing customer experience. But they were a group that has a lot of face-to-face contact with customers, and often were asked questions they may not have been briefed to answer. Being able to communicate a simple new message made it easier for them, and also let them become a sales and marketing asset to the organization.
    7 – Goodbye Internet, Hello Outernet: the rise and rise of augmented reality, Matt Mills
    I wasn't going to write this up, because it was essentially a sales demo for Aurasma. But the technology does merit some discussion. Basically, it replaces QR codes with visual recognition, and provides a simple-looking back end for attaching content. It's quite sexy. But here's my beef with it: QR codes had a clear visual language – when you saw one you knew what it was and what to do with it. They were clunky, but they had the "getting started" problem solved out of the box once you knew what you were looking at. However, they fail because QR code reading isn't native to the platform. You needed an app, which meant you needed to know to download one. Consequently, you couldn't use QR codes with any ubiquity, or depend on them. This means marketers, content providers, etc., never pushed them, and they remained an awkward oddity, a minority sport. Aurasma half solves problem two, and re-introduces problem one, making it potentially half as useful as a QR code. It's free, and you can apparently build it into your own apps. Add to that the likelihood of it becoming native to the platform if it takes off, and it may have legs. I guess we'll see.
    8 – We all need to code, Helen Mayor
    Great title – good point. If there was anybody in the room who didn't at least know basic HTML, and if Helen's presentation inspired them to learn, that's fantastic. However, this was a half-hour sales pitch for a basic coding training course. Beyond advocating coding skills it contained no useful content. Marketers may also like to consider some of these resources if they're looking to learn code:
    Code Academy – free interactive tutorials
    Treehouse – learn web design, web dev, or app dev
    WebPlatform.org – tutorials and documentation for web tech
    11 – Understanding our inner creativity, Margaret Boden
    This session was the most theoretical and probably least actionable of the day. It also held my attention utterly. Margaret spoke fluently, fascinatingly, without slides, on the subject of types of creativity and how they work. It was splendid. Yes, it raised a wry smile whenever she spoke of "the content of advertisements" and gave an example from 1970s TV ads, but even without the attempt to meet the conference's theme this would have been thoroughly engaging. There are, Margaret suggested, three types of creativity:
    Combinatorial creativity – the most common form, consisting of synthesising ideas from existing and familiar concepts and tropes.
    Exploratory creativity – less common, this involves exploring the limits and quirks of a particular constraint or style.
    Transformational creativity – uncommon, arising from finding a way to do something that the existing rules would hold to be impossible. In essence, this involves breaking one of the constraints that exploratory creativity is composed from.
    Combinatorial creativity, she suggested, is particularly important for attaching favourable ideas to existing things. As such it is probably worth developing for marketing. Exploratory creativity may then come into play in something like developing and optimising an idea or campaign that now has momentum. Transformational creativity exists at the edges of this exploration. She suggested that products may often be transformational, but that marketing seemed unlikely to be in her experience. This made me wonder about Listerine. Crucially, transformational creativity is characterised by there being some element of continuity with the strictures of previous thinking. Once it has happened, there may be a move from a revolutionary instance into an explored style. Again, from a marketing perspective, this seems to chime well with the thinking in Youngme Moon's book Different. Talking about the birth of Modernism in visual art, Margaret pointed out that transformational creativity has historically risked a backlash, demanding what is essentially an education of the market. This is best accomplished by referring back to the continuities with the past in order to make the new familiar.
    Thoughts
    The afternoon is harder to sum up than the morning. It felt less concrete, and was troubled by a short run of poor presentations in the middle. Mainly, I found myself wrestling with the internal comms issue. It's one of those things that seems astonishingly obvious in hindsight, but any campaign – particularly any large one – is doomed if the people involved can't believe in it. We've run things here that haven't gone so well, of course we have; who hasn't? I'm not going to air any laundry, but people not being informed (much less aligned) feels like a common factor. It's tough, though. Managing and anticipating information needs across an organization of any size can't be easy. Even the simple things, like ensuring sales and support departments know what's in a product release and what messages go with it, are easy to botch. The thing I like about framing this as a brand and campaign advocacy problem is that it makes it likely to get addressed. Better is always sexier than less-worse. Any technical communicator who's ever felt crowded out by a content strategist or marketing copywriter knows this – increasing revenue gets a seat at the table far more readily than reducing support costs, even if the financial impact is identical. So that's it from AMC. The big thought-provokers were social buying behaviour and eliciting behaviour change, and the value of internal communications in ensuring successful campaigns and continuity of customer experience. I'll be chewing over that for a while, and I'd definitely return next year.


  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you’ve done a number of SQL code reviews, you’ll know the signs in the code that suggest all might not be well. These ‘Code Smells’ are coding styles that don’t directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase in the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". Bad Smells in Code was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book ‘Refactoring: Improving the Design of Existing Code’ (ISBN 978-0201485677). Although there are generic code smells, SQL has its own particular coding habits that will alert the programmer to the need to refactor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in code smells in C#. I’ve always been tempted by the idea of automating a preliminary code review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic ‘Lint’ did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding code smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I'd like to make a start by discovering if there is a general opinion amongst database developers as to what the most important SQL smells are. One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I’ll use dynamic SQL occasionally. You can only use them as an aid for your own judgment, and it is fine to ‘sign them off’ as being appropriate in particular circumstances. Also, whole classes of ‘code smells’ may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a ‘code smell’ if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a code smell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist. Plamen Ratchev’s wonderful article Ten Common SQL Programming Mistakes lists some of these ‘code smells’ along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn’t entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported. If anything requires some sort of general agreement, the definition of code smells is one. I’m therefore going to make this blog ‘dynamic’, in that if anyone tweets a suggestion with a #SQLCodeSmells tag (or sends me a tweet) I’ll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I’ll do my best to oblige. In other words, I’ll try to keep this blog up to date. The name against each 'smell' is the name of the person who tweeted me, commented about, or has written about the 'smell'. It does not imply that they were the first ever to think of the smell!
    Use of deprecated syntax such as *= (Dave Howard)
    Denormalisation that requires the shredding of the contents of columns. (Merrill Aldrich)
    Contrived interfaces
    Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
    Datatype mismatches in predicates that rely on implicit conversion. (Plamen Ratchev)
    Using correlated subqueries instead of a join (Dave Levy / Plamen Ratchev)
    The use of hints in queries, especially NOLOCK (Dave Howard / Mike Reigler)
    Few or no comments.
    Use of functions in a WHERE clause. (Anil Das)
    Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
    Excessive ‘overloading’ of routines.
    The use of Exec xp_cmdShell (Merrill Aldrich)
    Excessive use of brackets. (Dave Levy)
    Lack of the use of a semicolon to terminate statements
    Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev) (see the example after this list)
    Duplicated code, or strikingly similar code.
    Misuse of SELECT * (Plamen Ratchev)
    Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills)
    Overuse of CLR routines when not necessary (Sam Stange)
    Same column name in different tables with different datatypes. (Ian Stirk)
    Use of ‘broken’ functions such as ‘ISNUMERIC’ without additional checks.
    Excessive use of the WHILE loop (Merrill Aldrich)
    INSERT ... EXEC (Merrill Aldrich)
    The use of stored procedures where a view is sufficient (Merrill Aldrich)
    Not using two-part object names (Merrill Aldrich)
    Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
    Full outer joins even when they are not needed. (Plamen Ratchev)
    Huge stored procedures (hundreds/thousands of lines).
    Stored procedures that can produce different columns, or order of columns in their results, depending on the inputs.
    Code that is never used.
    Complex and nested conditionals
    WHILE (not done) loops without an error exit.
    Variable name same as the datatype
    Vague identifiers.
    Storing complex data or lists in a character map, bitmap or XML field
    User procedures with sp_ prefix (Aaron Bertrand)
    Views that reference views that reference views that reference views (Aaron Bertrand)
    Inappropriate use of sql_variant (Neil Hambly)
    Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
    Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield - Atlantis UK)
    Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield - Atlantis UK)
    Code that allows SQL injection (Mladen Prajdic)
    Tables without clustered indexes (Matt Whitfield - Atlantis UK)
    Use of "SELECT DISTINCT" to mask a join problem (Nick Harrison)
    Multiple stored procedures with nearly identical implementation. (Nick Harrison)
    Excessive column aliasing may point to a problem, or it could be a mapping implementation. (Nick Harrison)
    Joining "too many" tables in a query. (Nick Harrison)
    Stored procedures returning more than one record set. (Nick Harrison)
    A NOT LIKE condition (Nick Harrison)
    Excessive "OR" conditions. (Nick Harrison)
    sp_OACreate or anything related to it (Bill Fellows)
    Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka)
    Aliases that go a, b, c, d, e... (Dave Levy / Diane McNurlan)
    Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
    ORDER BY 3,2 (Dave Levy)
    Multi-statement table functions which are then filtered: 'SELECT * FROM Udf() WHERE Udf.Col = Something' (Dave Ballantyne)
    Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)
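    To make one of these smells concrete, here is the non-SARGable-predicate case and a sketch of its fix (table and column names are invented for illustration):

        -- Smell: wrapping the indexed column in a function forces a scan
        SELECT OrderID FROM Orders WHERE YEAR(OrderDate) = 2010;

        -- Better: express the same filter as a range the index can seek on
        SELECT OrderID FROM Orders
        WHERE OrderDate >= '20100101' AND OrderDate < '20110101';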


  • How to upload a file from client to server using OFBiz?

    - by SIVAKUMAR.J
    I'm new to OFBiz, so try to keep your answer as simple as possible. If you can give examples, that would be kind. My problem is: I created a project inside the ofbiz/hot-deploy folder, namely productionmgntSystem. Inside the folder ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem I created a file app_details_1.ftl. The following is the code of this file: <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Insert title here</title> <script TYPE="TEXT/JAVASCRIPT" language=""JAVASCRIPT"> function uploadFile() { //alert("Before calling upload.jsp"); window.location='<@ofbizUrl>testing_service1</@ofbizUrl>' } </script> </head> <!-- <form action="<@ofbizUrl>testing_service1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> --> <form action="<@ofbizUrl>logout1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> <center style="height: 299px; "> <table border="0" style="height: 177px; width: 788px"> <tr style="height: 115px; "> <td style="width: 103px; "> <td style="width: 413px; "><h1>APPLICATION DETAILS</h1> <td style="width: 55px; "> </tr> <tr> <td style="width: 125px; ">Application name : </td> <td> <input name="app_name_txt" id="txt_1" value=" " /> </td> </tr> <tr> <td style="width: 125px; ">Excell sheet &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;: </td> <td> <input type="file" name="filename"/> </td> </tr> <tr> <td> <!-- <input type="button" name="logout1_cmd" value="Logout" onclick="logout1()"/> --> <input type="submit" name="logout_cmd" value="logout"/> </td> <td> <!-- <input type="submit" name="upload_cmd" value="Submit" /> --> <input type="button" name="upload1_cmd" value="Upload" onclick="uploadFile()"/> </td> </tr> </table> </center> </form> </html> The following is present in the file ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem\WEB-INF\controller.xml ...... ....... ........ <request-map uri="testing_service1"> <security https="true" auth="true"/> <event type="java" path="org.ofbiz.productionmgntSystem.web_app_req.WebServices1" invoke="testingService"/> <response name="ok" type="view" value="ok_view"/> <response name="exception" type="view" value="exception_view"/> </request-map> .......... ............ .......... <view-map name="ok_view" type="ftl" page="ok_view.ftl"/> <view-map name="exception_view" type="ftl" page="exception_view.ftl"/> ................ ............. .............
    The following is the code in the file ofbiz\hot-deploy\productionmgntSystem\src\org\ofbiz\productionmgntSystem\web_app_req\WebServices1.java package org.ofbiz.productionmgntSystem.web_app_req; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.DataInputStream; import java.io.FileOutputStream; import java.io.IOException; public class WebServices1 { public static String testingService(HttpServletRequest request, HttpServletResponse response) { //int i=0; String result="ok"; System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- Start"); String contentType=request.getContentType(); System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- contentType : "+contentType); String str=new String(); // response.setContentType("text/html"); //PrintWriter writer; if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) { System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) after if (contentType != null)"); try { // writer=response.getWriter(); System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try Start"); DataInputStream in = new DataInputStream(request.getInputStream()); int formDataLength = request.getContentLength(); byte dataBytes[] = new byte[formDataLength]; int byteRead = 0; int totalBytesRead = 0; //this loop converting the uploaded file into byte code while (totalBytesRead < formDataLength) { byteRead = in.read(dataBytes, totalBytesRead,formDataLength); totalBytesRead += byteRead; } String file = new String(dataBytes); //for saving the file name String saveFile = file.substring(file.indexOf("filename=\"") + 10); saveFile = saveFile.substring(0, saveFile.indexOf("\n")); saveFile = saveFile.substring(saveFile.lastIndexOf("\\")+ 1,saveFile.indexOf("\"")); int lastIndex = contentType.lastIndexOf("="); String boundary = contentType.substring(lastIndex + 1,contentType.length()); int pos; //extracting the index of file pos = file.indexOf("filename=\""); pos = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n", pos) + 1; int boundaryLocation = file.indexOf(boundary, pos) - 4; int startPos = ((file.substring(0, pos)).getBytes()).length; int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length; //creating a new file with the same name and writing the content in new file FileOutputStream fileOut = new FileOutputStream("/"+saveFile); fileOut.write(dataBytes, startPos, (endPos - startPos)); fileOut.flush(); fileOut.close(); System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try End"); } catch(IOException ioe) { System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch IOException"); //ioe.printStackTrace(); return("exception"); } catch(Exception ex) { System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch Exception");
    return("exception"); } } else { System.out.println("\n\n\t********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) else part"); result="exception"; } System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- End"); return(result); } } I want to upload a file to the server. The file is obtained from the user via the <input type="file"> tag in the "app_details_1.ftl" file, and it should be saved on the server by the method "testingService(HttpServletRequest request, HttpServletResponse response)" in the class "WebServices1". But the file is not uploaded. Please give me a good solution for uploading a file to the server.


  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but a SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. To see an introduction on SQL Azure, check out the post by Pinal here. In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently, or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. An Azure worker role can perform background processes, and for processes such as synchronization and backup it becomes our ideal tool. First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:
    Scaling out access over multiple databases enables us to handle workload efficiently
    As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data
    Let us also see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:
    Backing up a SQL Azure database onto local infrastructure
    Rather than investing in local infrastructure for increased workloads, such workloads could be handled by the cloud
    The ability to extend data to different datacenters located across the world, to enable efficient data access from remote locations
    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. If you newly add a worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required in the code to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf) Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned and the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is a one-time step. The code for this goes in the Setup() function, which is called once when the worker role starts. The second step is continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them. This is done on a continuous basis by calling the Sync() function in the while loop. The code logic to synchronize changes between the SQL Azure databases should be put in the Sync() function. Discussing the coding part step by step is out of the scope of this article. Therefore, let me suggest a resource, which is given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here). Further, you will reference some libraries before you start coding. Details regarding this are available in the article that I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here. Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available in CTP; it allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization (without writing a single line of code); however, in some cases, the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure Data Sync CTP2; if you wish to have such functionality now, then you have the option of developing custom code using the Sync Framework. Now, this code can be easily extended to synchronize on some schedule. Let us say we want the databases to get synchronized every day at 10:00 PM. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don't you think that by writing such code, we are imitating the functionality provided by the SQL Server Agent for SQL Server? Think about it. We are scheduling our administrative task by writing custom code – in other words, we have developed a "lightweight SQL Server Agent for SQL Azure"! Since the SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! Now if you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, and we will need to store the state of the job ourselves; the choices you have are SQL Azure and Azure tables. Note that this solution requires custom code and is not UI-driven; however, for now, it can act as a temporary solution until the SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that a SQL Server Agent provides, but it does open up an interesting avenue to schedule some tasks, such as backup and synchronization of SQL Azure databases, by writing custom code in the Azure worker role. Now, let us see one more possibility: running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or on blobs. If you upload locally, then consider the data transfer cost. If you upload to blobs residing in the same datacenter, then no transfer cost applies, but the cost of blob storage applies. So, before choosing an option, you need to evaluate your preferences, keeping the cost associated with each option in mind. In this article, I have shown that an Azure worker role solution could be developed to synchronize SQL Azure databases. Moreover, a lightweight SQL Server Agent for SQL Azure can be developed. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles.
    So at the end of the day, you need to ask: am I willing to build custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or you can mail me at Paras[at]student-partners[dot]com Reference: Pinal Dave (http://blog.SQLAuthority.com)
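    Since both code snippets are linked as PDFs, here is a rough skeleton of the shape the article describes: an Azure worker role whose Run() loop performs the one-time provisioning and then synchronizes on a schedule. This is a hedged sketch only; the Sync Framework provisioning and orchestration bodies are placeholders, not the author's actual code:

        using System;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class SyncWorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                Setup(); // one-time: provision scopes, tracking tables and triggers

                while (true)
                {
                    Sync(); // propagate changes between the SQL Azure databases
                    Thread.Sleep(TimeSpan.FromHours(24)); // or compute the wait until 10:00 PM
                }
            }

            private void Setup() { /* Sync Framework provisioning goes here */ }

            private void Sync() { /* Sync Framework orchestration goes here */ }
        }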


  • How to Organize a Programming Language Club

    - by Ben Griswold
    I previously noted that we started a language club at work.  You know, I searched around but I couldn't find a copy of the How to Organize a Programming Language Club Handbook. Maybe it's sold out?  Yes, Stack Overflow has quite a bit of information on how to learn and teach new languages, and there's also a good number of online tutorials which provide language introductions, but I was interested in group learning.  After two months of meetings, I present to you the Unofficial How to Organize a Programming Language Club Handbook.
    1. Gauge interest. Start by surveying prospects. “Excuse me, smart-developer-whom-I-work-with-and-I-think-might-be-interested-in-learning-a-new-coding-language-with-me. Are you interested in learning a new language with me?” If you’re lucky, you work with a bunch of really smart folks who aren’t shy about teaching/learning in a group setting and you’ll have a collective interest in no time.  Simply suggesting the idea is the only effort required.  If you don’t work in this type of environment, maybe you should consider a new place of employment.
    2. Make it official. Send out a “Welcome to the Club” email: There’s been talk of folks itching to learn new languages – Python, Scala, F# and Haskell to name a few.  Rather than taking on new languages alone, let’s learn in the open.  That’s right.  Let’s start a languages club.  We’ll have everything a real club needs – secret handshake, goofy motto and a high-and-mighty sense that we’re better than everybody else. T-shirts?  Hell YES!  Anyway, I’ve thrown this idea around the office and no one has laughed at me yet so please consider this your very official invitation to be in THE club. [Insert your ideas about how the club might be run, solicit feedback and suggestions, ask what other folks would like to get out of the club, comment about club hazing practices and talk up the T-shirts even more. Finally, call out the languages you are interested in learning and ask the group for their list.]
    3. Send out invitations to the first meeting.  Don’t skimp!  Hallmark greeting cards for everyone.  Personalized.  Hearts over the i’s and everything.  Oh, and be sure to include the list of suggested languages with vote counts.  Here’s the list of languages we are interested in:
    Python 5
    Ruby 4
    Objective-C 3
    F# 2
    Haskell 2
    Scala 2
    Ada 1
    Boo 1
    C# 1
    Clojure 1
    Erlang 1
    Go 1
    Pi 1
    Prolog 1
    Qt 1
    4. At the first meeting, there must be cake.  Lots of cake. And you should tackle some very important questions: Which language should we start with?  You can immediately go with the top vote-getter, or you could do as we did and designate each person to provide a high-level review of each of the proposed languages over the next two weeks.  After all presentations are completed, vote on the language. Our high-level review consisted of answers to a series of questions. Decide how often and where the group will meet.  We, for example, meet for a brown-bag lunch every Wednesday.  Decide how you’re going to learn.  We determined that the best way to learn is to just dive in and write code.  After choosing our first language (Python), we talked about building an application, or performing coding katas, but we ultimately chose to complete a series of Project Euler problems.  We kept it simple – each member works out the same two problems each week in preparation for a code review the following Wednesday.
    5. Code, Review, Learn.  Prior to the weekly meeting, everyone uploads their solutions to our internal wiki.
Each Project Euler problem has a dedicated page.  In the meeting, we use a really fancy HD projector to show off each member’s solution.  It is very important to use an HD projector.  Again, don’t skimp!  Each code author speaks to their solution, everyone else comments, applauds, points fingers and laughs, etc.  As much as I’ve learned from solving the problems on my own, I’ve learned at least twice as much at the group code review.  6.  Rinse. Lather. Repeat.  We’ve hosted the language club for 7 weeks now.  The first meeting just set the stage.  The next two meetings provided a review of the languages followed by a first language selection.  The remaining meetings focused on Python and Project Euler problems.  Today we took a vote as to whether or not we’re ready to switch to another language and/or another problem set.  Pretty much everyone wants to stay the course for a few more weeks at least.  Until then, we’ll continue to code the next two solutions, review and learn. Again, we’ve been having a good time with the programming language club.  I’m glad it got off the ground.  What do you think?  Would you be interested in a language club?  Any suggestions on what we might do better?


  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on HP-UX PA-RISC

    - by John Abraham
    As a follow up to our original announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following HP-UX platforms: Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher) : HP-UX PA-RISC (64-bit) (11.31) Release 12 (12.0.4 and higher, 12.1.1 and higher): HP-UX PA-RISC (64-bit) (11.31) This announcement for Oracle E-Business Suite 11i and R12 includes: Real Application Clusters (RAC) Oracle Database Vault Transparent Data Encryption (Column Encryption) TDE Tablespace Encryption Advanced Security Option (ASO)/Advanced Networking Option (ANO) Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances Transportable Database and Transportable Tablespaces Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12 References MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0) MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0) MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2 MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2 MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12 MOS Document 761570.1 - Database Preparation Guidelines for an Oracle E-Business Suite Release 12.1.1 Upgrade MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12 MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12 MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12 MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2 MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2 MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2 MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2 MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g Please also review the platform-specific Oracle Database Installation Guides for operating system and other 
prerequisites.


  • Programming doesn’t have to be Magic

    - by Wes McClure
    In the show LOST, the Swan Station had a button that “had to be pushed” every 100 minutes to avoid disaster.  Several characters in the show took it upon themselves to have faith and religiously push the button, resetting the clock and averting the unknown “disaster”.  There are striking similarities in this story to the code we write every day.  Here are some common ones that I encounter:
    “I don’t know what it does but the application doesn’t work without it”
    “I added that code because I saw it in other similar places; I didn’t understand it, but thought it was necessary” (for consistency, or to make things “work”)
    “An error message recommended it”
    “I copied that code” (and didn’t look at what it was doing)
    “It was suggested in a forum online and it fixed my problem so I left it”
    In all of these cases we haven’t done our due diligence to understand what the code we are writing is actually doing.  In the rush to get things done, it seems like we’re willing to push any button (add any line of code) just to get our desired result and move on.  All of the above explanations are common things we encounter, and are valid ways to work through a problem we have, but when we find a solution to a task we are working on (whether a bug or a feature), we should take a moment to reflect on what we don’t understand.  Remove what isn’t necessary; comprehend and simplify what is.  Why is it detrimental to commit code we don’t understand?
    Perpetuates unnecessary code – If you copy code that isn’t necessary, someone else is more likely to do so, especially peers.
    Perpetuates tech debt – Adding unnecessary code leads to extra code that must be understood, maintained and eventually cleaned up in longer-lived projects. Tech debt begets tech debt as other developers copy or use this code as guidelines in similar situations.
    Increases maintenance – How do we know the code is simplified if we don’t understand it?
    Perpetuates a lack of ownership – Makes it seem OK to commit anything so long as it “gets the job done”.
    Perpetuates the notion that programming is magic – If we don’t take the time to understand every line of code we add, then we are contributing to the notion that it is simply enough to make the code work, regardless of how.
    TLDR: Don’t commit code that you don’t understand. Take the time to understand it, simplify it, and then commit it!


  • Silverlight Cream for June 28, 2011 -- #1112

    - by Dave Campbell
    In this Issue: WindowsPhoneGeek, John Papa, Mike Taulty, Erno de Weerd, Stephen Price, Chris Rouw, Peter Kuhn, Damian Schenkelman, Michael Washington, and Manas Patnaik. Above the Fold: Silverlight: "Binding to View Model properties in Data Templates. The RootBinding Markup Extension" Damian Schenkelman WP7: "Storing Files in SQL Server using WCF RIA Services and Silverlight – Part 3" Chris Rouw LightSwitch: "Saving Files To File System With LightSwitch (Uploading Files)" Michael Washington Shoutouts: Steve Wortham announced a change to his XilverlightXAP.com site... they're now accepting XAML illustrations: Introducing XAML Illustrations, Increased Payouts to Contributors, and More. Amid all the discussions that I've tried to avoid, Michael Washington is Betting The House On LightSwitch From SilverlightCream.com: Dynamically updating a data bound LongListSelector in Windows Phone WindowsPhoneGeek's latest is on using the LongListSelector from the Toolkit and dynamically updating it with data... detailed guidelines and plenty of pictures and code as always. Silverlight TV 77: Exploring 3D with Aaron Oneal John Papa has Silverlight TV number 77 up and is chatting with Aaron Oneal, program manager of the Silverlight 3D efforts... too cool. Silverlight WebBrowser Control for Offline Apps (Part 2) Mike Taulty wrote this post in Silverlight 5 Beta, but says it should be fine in 4... a continuation of his HTML Content display using the WebBrowser control while offline Windows Phone 7: Databinding and the Pivot Control Erno de Weerd discusses the Pivot control in WP7 based on his attempts to use it in an app. Required Attribute on an Entity Stephen Price has a new post at XAML Source... first is this one on setting the required attribute and the troubles you can get into if it's not set correctly Storing Files in SQL Server using WCF RIA Services and Silverlight – Part 3 Chris Rouw has Part 3 of his series on Storing files in SQL Server using FILESTREAM Storage in SQL Server 2008 and Silverlight... this time he's viewing files stored in the FILESTREAM from the LOB app. Getting ready for the Windows Phone 7 Exam 70-599 (Part 4) Peter Kuhn has Part 4 of his series on getting ready for the WP7 exam up at SilverlightShow... the date is coming up soon... are you ready? Binding to View Model properties in Data Templates. The RootBinding Markup Extension Damian Schenkelman has a Silverlight 5 Beta post up... digging into the XAML Markup Extensions and popping out a RootBindingExtension that helps bind to a property in a view model from a DataTemplate. Saving Files To File System With LightSwitch (Uploading Files) Michael Washington has a cool tutorial up on his new LightSwitch Help Website... File Upload to a server file system using LightSwitch, plus a project to download... good stuff! Microsoft Media Platform (MMPPF): Player Framework for Silverlight Manas Patnaik's latest post is about the Media Player Project... some of the history of media with Silverlight and how to go about using the Media Player Project bits Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group


  • Static background noise while using a new headset on Ubuntu 13.04

    - by ThundLayr
    Today I bought a new gaming headset (Gx-Gaming Lychas), and when I tried to record some gameplay commentary I noticed that there is always a static background noise. I just recorded an example so you guys can listen to it (no download needed): http://www47.zippyshare.com/v/65167832/file.html I'm using Kubuntu 13.04 and kernel version 3.8.0-19; my laptop is an Acer TravelMate 5760Z. I tried tons of configurations in alsamixer and none of them made a difference. I really need to get this working, so any kind of help will be very appreciated.
    cat /proc/asound/cards:
    0 [PCH ]: HDA-Intel - HDA Intel PCH HDA Intel PCH at 0xc6400000 irq 44
    cat /proc/asound/card0/codec#0:
    Codec: Conexant CX20588 Address: 0 AFG Function Id: 0x1 (unsol 1) Vendor Id: 0x14f1506c Subsystem Id: 0x10250574 Revision Id: 0x100003 No Modem Function Group found Default PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Default Amp-In caps: N/A Default Amp-Out caps: N/A
    State of AFG node 0x01: Power states: D0 D1 D2 D3 D3cold CLKSTOP EPSS Power: setting=D0, actual=D0 GPIO: io=4, o=0, i=0, unsolicited=1, wake=0 IO[0]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[1]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[2]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[3]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0
    Node 0x10 [Audio Output] wcaps 0xc1d: Stereo Amp-Out R/L Control: name="Headphone Playback Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Control: name="Headphone Playback Switch", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Device: name="CX20588 Analog", type="Audio", device=0 Amp-Out caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-Out vals: [0x4a 0x4a] Converter: stream=8, channel=0 PCM: rates [0x560]: 44100 48000 96000 192000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0
    Node 0x11 [Audio Output] wcaps 0xc1d: Stereo Amp-Out R/L Control: name="Speaker Playback Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Control: name="Speaker Playback Switch", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-Out vals: [0x80 0x80] Converter: stream=8, channel=0 PCM: rates [0x560]: 44100 48000 96000 192000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0
    Node 0x12 [Audio Output] wcaps 0x611: Stereo Digital Converter: stream=0, channel=0 Digital: Digital category: 0x0 IEC Coding Type: 0x0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x5]: PCM AC3 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0
    Node 0x13 [Beep Generator Widget] wcaps 0x70000c: Mono Amp-Out Control: name="Beep Playback Volume", index=0, device=0 ControlAmp: chs=1, dir=Out, idx=0, ofs=0 Control: name="Beep Playback Switch", index=0, device=0 ControlAmp: chs=1, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x07, nsteps=0x07, stepsize=0x0f, mute=0 Amp-Out vals: [0x00]
    Node 0x14 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Control: name="Capture Volume", index=0, device=0 ControlAmp: chs=3, dir=In, idx=0, ofs=0 Control: name="Capture Switch", index=0, device=0 ControlAmp: chs=3, dir=In, idx=0, ofs=0 Device: name="CX20588 Analog", type="Audio", device=0 Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x50 0x50] [0x80 0x80] [0x80 0x80] [0x80 0x80] Converter: stream=4, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24
    Node 0x15 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] Converter: stream=0, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24
    Node 0x16 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] Converter: stream=0, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24
    Node 0x17 [Audio Selector] wcaps 0x30050d: Stereo Amp-Out Control: name="Mic Boost Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x00, nsteps=0x04, stepsize=0x27, mute=0 Amp-Out vals: [0x04 0x04] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x1a 0x1b* 0x1d 0x1e
    Node 0x18 [Audio Selector] wcaps 0x30050d: Stereo Amp-Out Amp-Out caps: ofs=0x00, nsteps=0x04, stepsize=0x27, mute=0 Amp-Out vals: [0x00 0x00] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x1a* 0x1b 0x1d 0x1e
    Node 0x19 [Pin Complex] wcaps 0x400581: Stereo Control: name="Headphone Jack", index=0, device=0 Pincap 0x0000001c: OUT HP Detect Pin Default 0x04214040: [Jack] HP Out at Ext Right Conn = 1/8, Color = Green DefAssociation = 0x4, Sequence = 0x0 Pin-ctls: 0xc0: OUT HP Unsolicited: tag=01, enabled=1 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11
    Node 0x1a [Pin Complex] wcaps 0x400481: Stereo Control: name="Internal Mic Phantom Jack", index=0, device=0 Pincap 0x00001324: IN Detect Vref caps: HIZ 50 80 Pin Default 0x90a70130: [Fixed] Mic at Int N/A Conn = Analog, Color = Unknown DefAssociation = 0x3, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x24: IN VREF_80 Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0
    Node 0x1b [Pin Complex] wcaps 0x400581: Stereo Control: name="Mic Jack", index=0, device=0 Pincap 0x00011334: IN OUT EAPD Detect Vref caps: HIZ 50 80 EAPD 0x0: Pin Default 0x04a19020: [Jack] Mic at Ext Right Conn = 1/8, Color = Pink DefAssociation = 0x2, Sequence = 0x0 Pin-ctls: 0x24: IN VREF_80 Unsolicited: tag=02, enabled=1 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11
    Node 0x1c [Pin Complex] wcaps 0x400581: Stereo Pincap 0x00000014: OUT Detect Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11
    Node 0x1d [Pin Complex] wcaps 0x400581: Stereo Pincap 0x00010034: IN OUT EAPD Detect EAPD 0x0: Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11
    Node 0x1e [Pin Complex] wcaps 0x400481: Stereo Pincap 0x00000024: IN Detect Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0
    Node 0x1f [Pin Complex] wcaps 0x400501: Stereo Control: name="Speaker Phantom Jack", index=0, device=0 Pincap 0x00000010: OUT Pin Default 0x92170110: [Fixed] Speaker at Int Front Conn = Analog, Color = Unknown DefAssociation = 0x1, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10 0x11*
    Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 1 0x12
    Node 0x21 [Audio Output] wcaps 0x611: Stereo Digital Converter: stream=0, channel=0 Digital: Digital category: 0x0 IEC Coding Type: 0x0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x5]: PCM AC3 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0
    Node 0x22 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 1 0x21
    Node 0x23 [Pin Complex] wcaps 0x40040b: Stereo Amp-In Amp-In caps: ofs=0x00, nsteps=0x04, stepsize=0x2f, mute=0 Amp-In vals: [0x00 0x00] Pincap 0x00000020: IN Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0
    Node 0x24 [Audio Mixer] wcaps 0x20050b: Stereo Amp-In Amp-In caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-In vals: [0x00 0x00] [0x00 0x00] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10 0x11
    Node 0x25 [Vendor Defined Widget] wcaps 0xf00000: Mono

    Read the article

  • Oracle Optimized Solutions at Oracle OpenWorld 2012

    - by ferhatSF
Have you registered for Oracle OpenWorld 2012 in San Francisco, September 30 to October 4? Visit the Oracle OpenWorld 2012 site today for registration and more information. Come join us to hear how Oracle Optimized Solutions can help you save money, reduce integration risks, and improve user productivity. Oracle Optimized Solutions are designed, pre-tested, tuned, and fully documented architectures for optimal performance and availability. They provide written guidelines to help size, configure, purchase, and deploy enterprise solutions that address common IT problems. Built with flexibility in mind, Oracle Optimized Solutions can be deployed as complete solutions or easily tailored to meet your specific needs; they are proven to save money, reduce integration risks, and improve user productivity.

Here is a preview of the planned Oracle OpenWorld sessions(*) on Oracle Optimized Solutions.

Monday, October 1, 2012
12:15 PM | CON7916 | Accelerate Oracle E-Business Suite Deployment with SPARC SuperCluster | Moscone West - 2001
03:15 PM | GEN9691 | General Session: Accelerate Your Business with the Oracle Hardware Advantage | Moscone North - Hall D
04:45 PM | CON4821 | Building a Flexible Enterprise Cloud Infrastructure on Oracle SPARC Systems | Moscone West - 2001

Tuesday, October 2, 2012
10:15 AM | CON4561 | Backup-and-Recovery Best Practices with Oracle Engineered Systems Products | Moscone South - 252
11:45 AM | CON3851 | Optimizing JD Edwards EnterpriseOne on SPARC T4 Servers for Best Performance | Moscone West - 2000
01:15 PM | GEN11472 | General Session: Breakthrough Efficiency in Private Cloud Infrastructure | Moscone West - 3014
01:15 PM | CON4600 | Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization | Moscone South - 252
05:00 PM | CON9465 | Next-Generation Directory: Oracle Unified Directory | Moscone West - 3008
05:00 PM | CON4088 | Accelerate Your SAP Landscape with the Oracle SPARC SuperCluster | Moscone West - 2001
05:00 PM | CON7743 | High-Performance Security for Oracle Applications Using SPARC T4 Systems | Moscone West - 2000
05:00 PM | CON3857 | Archive Strategies for 100 Percent Data Availability | Moscone South - 270

Wednesday, October 3, 2012
10:15 AM | CON6528 | Configure Oracle Hybrid Columnar Compression to Optimize Query Database Performance up to 10x | Moscone South - 252
11:45 AM | CON2590 | Breakthrough in Private Cloud Management on SPARC T-Series Servers | Moscone South - 270
01:15 PM | CON4289 | Oracle Optimized Solution for Siebel CRM at ACCOR | Moscone West - 2000
05:00 PM | CON7570 | Improve PeopleSoft HCM Performance and Reliability with SPARC SuperCluster | Moscone South - 252

* Schedule subject to change

In addition, there will be Oracle Optimized Solutions Hands-On-Labs sessions at the Marriott Marquis - Salon 14/15. Please enroll ahead of time as space is limited:

Monday, October 1, 01:45 PM | HOL9868 | Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c
Monday, October 1, 03:15 PM | HOL9907 | Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility
Wednesday, October 3, 05:00 PM | HOL9870 | x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
Thursday, October 4, 11:15 AM | HOL9869 | 0 to Database Backup and Recovery in 60 Minutes

Oracle Optimized Solutions executives and experts will also be on hand for discussions and follow-ups.
And don’t forget to catch live demonstrations of our complete Oracle Optimized Solutions while at Oracle OpenWorld 2012 in San Francisco. We recommend the use of the Schedule Builder tool to plan your visit to the conference and for pre-enrollment in sessions of your interest. We hope to see you there!

    Read the article

  • A Good Developer is So Hard to Find

    - by James Michael Hare
Let me start out by saying I want to damn the writers of the Toughest Developer Puzzle Ever – 2. It is eating every last shred of my free time! But as I've been churning through each puzzle and marvelling at the brain teasers and trivia within, I began to think about interviewing developers and why it seems to be so hard to find good ones. The problem is, no matter how hard we try to find the perfect way to separate the chaff from the wheat, inevitably someone will get hired who falls far short of expectations, or someone who could have been an excellent team member will get passed over for missing a piece of trivia or a tricky brain teaser.

In shops that are primarily software-producing businesses or other heavily IT-oriented businesses (Microsoft, Amazon, etc.) there often exists a much tighter bond between HR and the hiring development staff, because development is their life-blood. Unfortunately, many of us work in places where IT is viewed as a cost or just a means to an end. In these shops, too often, HR and development staff may work against each other due to differences in opinion as to what a good developer is or what one is worth. It seems that if you ask two different people what makes a good developer, you will often get three different opinions.

With the exception of those shops that are purely development-centric (you guys have it much easier!), most other shops have management with very little knowledge of the development process. Their view can often be that development is simply a skill: once acquired, a developer can produce widgets as well as the next, like workers on an assembly-line floor. On the other side, you have many developers who feel that software development is an art unto itself, and that the ability to create the purest design, know the most obscure keywords, or write the shortest possible obfuscated piece of code is what makes a good coder. So is it a skill? An art? Or something entirely in between?

Saying that software is merely a skill, and that one just needs to learn the syntax and tools, would be akin to saying anyone who knows English and can use Word can write a 300-page book that is accurate, meaningful, and stays true to the point. This just isn't so. It takes more than mere skill to take words and form a sentence, join those sentences into paragraphs, and those paragraphs into a document. I've interviewed candidates who could answer obscure syntax and keyword questions and, once they were hired, could not code effectively at all. So development must be more than a skill.

But on the other end, we have art. Is development an art? Is our end result to produce art? I can marvel at a piece of code -- see it as concise and beautiful -- and yet that code must perform some stated function with accuracy, efficiency, and maintainability. None of these three things have anything to do with art, per se. Art is beauty for its own sake and is a wonderful thing. But if you apply that same thought to development, it just doesn't hold. I've had developers tell me that all that matters is the end result, and that how you code it is entirely part of the art -- and I couldn't disagree more. Yes, the end result, the accuracy, is the prime criterion to be met. But if code is not maintainable and efficient, it is just as useless as a beautiful car that breaks down once a week or gets 2 miles to the gallon. Yes, it may work in that it moves you from point A to point B and is pretty as hell, but if it can't be maintained or is not efficient, it's not a good solution. So development must be something less than pure art.

In the end, I feel that development is a matter of craftsmanship. We use our tools and we use our skills and set about to construct something that satisfies a purpose and yet is also elegant and efficient. There is skill involved, and there is an art, but really it boils down to being able to craft code. Crafting code is far more than writing code. Anyone can write code if they know the syntax, but very few people can actually craft code that serves a purpose and craft it well. So this is what I want to find: code craftsmen! But how?

I used to ask coding-trivia questions a long time ago, and many people still fall back on this. The thought is that if you ask the candidate some piece of coding trivia and they know the answer, it must follow that they can craft good code. For example:

What C++ keyword can be applied to a class/struct field to allow it to be changed even from a const instance of that class/struct? (answer: mutable)

So what do we prove if a candidate can answer this? Only that they know what mutable means. One would hope that this would imply that they know how to use it and, more importantly, when and if it should ever be used -- but it rarely does! The problem with trivia questions is that you will either:

- Approve a really good developer who knows what some obscure keyword is (good)
- Reject a really good developer who never needed to use that keyword or is too inexperienced to know how to use it (bad)
- Approve a really bad developer who googled "C++ Interview Questions" and studied like hell but can't craft (very bad)

Many HR departments love these kinds of tests because they are short and easy to defend if a legal issue arises over hiring decisions. After all, it's easy to say a person wasn't hired because they scored 30 out of 100 on some trivia test. But unfortunately, you've eliminated a large part of your potential developer pool and possibly hired a few duds. There are times I've hired candidates who knew every trivia question I could throw at them and couldn't craft. And there are times I've interviewed candidates who failed all my trivia but whom I took a chance on -- and they were my best finds ever.

So if not trivia, then what? Brain teasers? The thought is that these kinds of questions measure the thinking power of a candidate. The problem is, once again, you will either:

- Approve a good candidate who has never heard the problem and can solve it (good)
- Reject a good candidate who just happens not to see the "catch" because they're nervous, or because the catch is really obscure (bad)
- Approve a candidate who has studied enough interview brain teasers (once again, you can google 'em) to recognize the "catch" or who knows the answer already (bad)

Once again, you're eliminating good candidates and possibly accepting bad candidates. In these cases, testing someone with brain teasers only tests their ability to answer brain teasers, not their ability to craft code.

So how do we measure someone's ability to craft code? Here's a novel idea: have them code! Give them a computer and a compiler, or a whiteboard and a pen, or paper and pencil, and have them construct a piece of code. It just makes sense that if we're going to hire someone to code, we should actually watch them code.
When they're done, we can judge them on several criteria:

- Correctness - does the candidate's solution accurately solve the problem proposed?
- Accuracy - is the candidate's solution reasonably syntactically correct?
- Efficiency - did the candidate write or use the more efficient data structures or algorithms for the job?
- Maintainability - was the candidate's code free of obfuscation and clever tricks that diminish readability?
- Persona - are they eager and willing or aloof and egotistical? Will they work well within your team?

It may sound simple, or it may sound crazy, but when I'm looking to hire a developer, I want to see them actually develop well-crafted code.
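To make this concrete, here is a hedged sketch of the kind of short exercise such an interview might use; the task and all names are purely illustrative, not a prescribed test:

    using System;
    using System.Collections.Generic;

    public static class InterviewExercise
    {
        // Hypothetical task: return the first non-repeated character
        // in a string, or null if every character repeats.
        public static char? FirstNonRepeated(string text)
        {
            if (text == null) throw new ArgumentNullException(nameof(text));

            // One pass to count occurrences (efficiency).
            var counts = new Dictionary<char, int>();
            foreach (char c in text)
                counts[c] = counts.TryGetValue(c, out int n) ? n + 1 : 1;

            // A second pass preserves the original order (correctness).
            foreach (char c in text)
                if (counts[c] == 1)
                    return c;

            return null; // Plain and readable: no clever tricks (maintainability).
        }

        public static void Main()
        {
            Console.WriteLine(FirstNonRepeated("swiss")); // prints: w
        }
    }

Watching a candidate produce, and explain, something like this tells you far more about their ability to craft code than knowing what mutable means ever could.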

    Read the article

  • Oracle Tutor: Create Accessible Content for the Disabled Community

    - by emily.chorba(at)oracle.com
For many reasons--legal, business, and ethical--Oracle recognizes the need for its applications, and our customers' and partners' products built with our tools, to be usable by the disabled community. The following features of Tutor Author and Publisher software facilitate the creation of accessible HTML content for the disabled community.

Tables
The following formatting guidelines will ensure that Tutor documents containing tables will be accessible once they are converted to HTML.
• Determine whether a table is a "data table" or whether you are using a table simply for formatting. If it's a data table, you must use a heading for each column, and you should format this heading row as "table heading" style and select Table > Heading Rows Repeat.
• For non-data tables, it is not necessary to include a heading row.

Graphics
To create accessible graphics, add a caption to the graphic. In Microsoft Office 2000 and greater, right-click on the graphic and select Format Picture > Web (tab) > Alternative Text, or select the graphic and then Format > Picture > Web (tab) > Alternative Text. Enter the appropriate information in the dialog box. When a document containing a graphic with alternative text is converted to HTML by Tutor, the HTML document will contain the appropriate accessibility information.

Javascript elements
The tabbed format and other javascript elements in the HTML version of the Tutor documents may not be accessible to all users. A link to an accessible/printable version of the document is available in the upper right corner of all Tutor documents.

Repetitive data
If repetitive data such as the distribution section and the ownership section are causing accessibility issues with your Tutor documents, you can insert a bookmark in the appropriate location of the document. When the document is converted to HTML, the bookmark will be converted to an A NAME reference (also known as an internal link). With this reference, you can create a link in Header.txt that can be prepended to each Tutor document, allowing the user to bypass repetitive sections.

Tutor and Oracle Applications
Regarding accessibility, please check Oracle's website on accessibility http://www.oracle.com/accessibility/ to find out what version of E-Business Suite is certified to work with screen readers. Oracle Tutor 11.5.6A and greater works with screen readers such as JAWS. There is no certification between Oracle Tutor and Oracle Applications because there are no related dependencies; it doesn't matter which version of the Oracle Applications you are running. Therefore, it is possible to use Oracle Tutor with earlier versions of Oracle Applications.

Oracle Business Process Converter and Oracle Applications
Oracle Business Process Converter (OBPC) converts Visio, XPDL, and Tutor models to Oracle Business Process Architect and Oracle Business Process Management. The OBPC is one of a collection of plugins to Oracle JDeveloper. Please see the VPAT, as the same considerations apply.

Learn More
For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

Emily Chorba
Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Extreme Performance and Scale Delivered by SOA on Oracle Exalogic

    - by J Swaroop
Demands to incorporate internet-scale applications, data, and social media traffic with existing IT infrastructure require extreme availability, reliability, and scalability. In this session on industrial-strength SOA, learn how Oracle Exalogic and Oracle Exadata engineered systems address these requirements. Topics covered: (1) how SOA and BPM benefit from "hardware and software engineered for each other"; (2) how Oracle Exadata provides the data tier with unparalleled scalability and performance for SOA and BPM running on Oracle Exalogic; (3) customer case studies; (4) best practices and topology guidelines; (5) information on tools that help operate, manage, provision, and deploy, to help reduce overall TCO. Extreme engineering at its best! Session details: 10/2/12 (Tuesday) 11:45 AM - Moscone South - 308

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about. After analyzing the differences in time between a Dictionary with locking and the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster for read-heavy multi-threaded operations. Then I made the classic blunder: because the original Dictionary with locking was faster for write-heavy uses, I assumed it was the best choice for those types of tasks. In short, I fell into the premature-optimization anti-pattern.

Basically, the premature-optimization anti-pattern is when a developer codes very early for a perceived (whether rightly or wrongly) performance gain, sacrificing good design and maintainability in the process. At best, the performance gains are usually negligible; at worst, they can either negatively impact performance or degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity.

Keep in mind the distinction above. I'm not talking about valid performance decisions. There are decisions one should make when designing and writing an application that are valid performance decisions. Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performant algorithms (linear search vs. binary search). But these, in my mind, are macro optimizations. The error is not in deciding to use a better data structure or algorithm; the anti-pattern, as stated above, is attempting to over-optimize early on in a way that sacrifices maintainability.

In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking. This would have been a mistake: I would be trading maintainability (ConcurrentDictionary requires no locking, which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified, and you don't risk a developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.

I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer. I began college in the 90s, when C and C++ were king and hardware speed and memory were still relatively precious commodities, not to be squandered. In those days, using a long instead of a short could waste precious resources, and as such, we were taught to minimize space and favor performance. This is why many such early code-bases were very hard to maintain. I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead. Now back then, that may have been a valid concern, but with today's modern hardware, even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway), the results of removing method calls to speed up performance are negligible for the great majority of applications. Obviously, there are applications where speed is absolutely king (for example drivers, computer games, operating systems) and such sacrifices may be made there.

But I would strongly advise against such optimization because of its cost. Many folks performing an optimization think it's always a win-win: they're simply adding speed to the application, so what could possibly be wrong with that? What they don't realize is the cost of their choice. For every piece of straightforward code that you obfuscate with performance enhancements, you risk introducing bugs and adding to the long-term technical debt of the application. It will become so fragile over time that maintenance becomes a nightmare. I've seen such applications in places I have worked. There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for the application, trying to squeeze out every ounce of performance. Unfortunately, application stability often suffers as a result, and it is very difficult for anyone other than the original designer to maintain.

I've even seen this recently, when I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation). To me this was almost a joke. Twice as slow sounds bad, but it's almost never as bad as you think -- especially if you're gaining safety. The only time twice is really that much slower is when once was too slow to begin with. Think about it: 2 minutes is a slow response time because 1 minute is slow. But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial! The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations). To my mind, the added safety makes the extra time worth it.

Always favor safety and maintainability when you can. I know it can be a hard habit to break, especially if you started your career early on or in a language such as C, where developers are very performance conscious. But in reality, these types of micro-optimizations only end up hurting you in the long run.

Remember the two laws of optimization. I'm not sure where I first heard these, but they are so true:

- For beginners: Do not optimize.
- For experts: Do not optimize yet.

If you're a beginner, resist the urge to optimize at all costs. And if you are an expert, delay that decision. As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient. Chances are it will be network, database, or disk hits that slow you down, not your code. As they say, 98% of your code's bottleneck is in 2% of your code, so premature optimization may add maintenance and safety debt that won't have any measurable impact. Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, go back and optimize further.
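To ground the trade-off that started this post, here is a minimal, hypothetical C# sketch of the two approaches; the class and method names are illustrative:

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    public class HitCounters
    {
        // Hand-rolled: a plain Dictionary guarded by a lock. Every current
        // and future access site must remember to take the lock correctly.
        private readonly object gate = new object();
        private readonly Dictionary<string, int> locked = new Dictionary<string, int>();

        public void IncrementWithLock(string key)
        {
            lock (gate)
            {
                locked.TryGetValue(key, out int n);
                locked[key] = n + 1;
            }
        }

        // Built-in: thread safety is internal, there is no lock to misuse,
        // and it stays safe to enumerate while other threads modify it.
        private readonly ConcurrentDictionary<string, int> concurrent =
            new ConcurrentDictionary<string, int>();

        public void IncrementConcurrent(string key)
        {
            concurrent.AddOrUpdate(key, 1, (k, n) => n + 1);
        }
    }

Even if the locked version wins a write-heavy micro-benchmark, the second version is the one that favors safety and maintainability -- which is the whole point above.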

    Read the article

  • Game timings and formats

    - by topright
There are more or less standardized TV show and movie formats with recommended timings:

1. By the early 1960s, television companies commonly presented half-hour long "comedy" series, or one hour long "dramas." Half-hour series were mostly restricted to situation comedy or family comedy, and were usually aired with either a live or artificial laugh track. One hour dramas included genre series such as police and detective series, westerns, science fiction, and, later, serialized prime time soap operas. Programs today still overwhelmingly conform to these half-hour and one hour guidelines. Source

2. In the United States, most medical dramas are one hour long. Source

3. Traditionally serials were broadcast as fifteen minute installments each weekday in daytime slots. In 1956 As the World Turns debuted as the first half-hour soap opera. All soap operas broadcast half-hour episodes by the end of the 1960s. With increased popularity in the 1970s most soap operas expanded to an hour (Another World even expanded to ninety minutes for a short time). More than half of the serials had expanded to one hour episodes by 1980. As of 2010, six of the seven US serials air one hour episodes each weekday. Source

Interesting. Are there any standards of timing in game development? There are 5-20 minute casual games, of course. There is even a "5-minutes-game" site. And a 1-hour-gamer site. Are there 1-week, 1-year, 1-eternity game formats? Chess and Go are deep games that you can study all your life, yet a single game is played in hours or, for pro games, over several days. Addictive long-term online role-playing games (without a win condition) are played for months and possibly years. Replayability is an important factor to consider. It's good when a game design document contains a line like "this game is designed to be solved in X hours." But how can that be measured before there is any prototype or demo? When you know your game format, you know your audience (and vice versa), so it is a practical question. Is there psychological research on the dynamics of gaming interest and involvement? And is there a correlation between game format and game genre?

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. This week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover:

- SQLU part 1 - What and why of database testing
- SQLU part 2 - What and why of database refactoring
- SQLU part 3 - Tools of the trade

This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.

Why refactor a database
To know why to refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior before refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs, and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point for having a database abstraction layer. And no, ORMs don't fall into that category.

But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them, and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus.

Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows, or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor.

What to refactor in a database
Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

Adding or removing database schema objects
This covers adding, removing, or changing table columns in any way, adding constraints, keys, etc. All of these can be counted as internal changes not visible to the data consumer, but each carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint exposes duplicated data that shouldn't exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black box tests, we can make sure something like that does not happen (hopefully at all).

Changing physical structures
Physical structures include heaps, indexes, and partitions. We can pretty much add or remove those without changing the data returned by the database, but the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders-of-magnitude performance improvement. Won't that make users happy?

But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

Fixing bad code
We all have some bad code in our systems. We usually refer to such code as a code smell, because it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc. Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Tomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it's informative. Refactoring this means replacing the * with column names (see the sketch at the end of this post) and most likely changing the application using the database.

Breaking apart huge stored procedures
Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks away the will to live for the next 10 minutes. Such procedures are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database; that's the application's part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior and then starting to refactor it. First we split the logic inside into smaller parts, like new stored procedures and UDFs, which then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code; the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use, plus an example of testing and refactoring.
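Picking up the SELECT * smell from above, here is a hedged T-SQL sketch of the refactoring; the table, view, and column names are hypothetical:

    -- A view hiding behind SELECT *: it breaks (or serves stale metadata)
    -- when columns on the underlying table are added, dropped, or reordered.
    CREATE VIEW dbo.ActiveCustomers
    AS
    SELECT * FROM dbo.Customer WHERE IsActive = 1;
    GO

    -- Refactored: an explicit column list keeps the view's input/output
    -- contract stable, so black box tests can pin its behavior down.
    ALTER VIEW dbo.ActiveCustomers
    AS
    SELECT CustomerId, Surname, Email
    FROM dbo.Customer
    WHERE IsActive = 1;
    GO

The same change usually ripples into the application: any client that relied on the * column set now has to name what it actually uses.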

    Read the article

  • The Dispose Pattern (and FxCop warnings)

    - by Scott Dorman
[This is actually a response to Bill's blog post, but since it isn't possible to leave this as a comment on his blog, it's a post here.]

There are many different ways to implement the Dispose pattern correctly. Some are (in my opinion) better than others. In Bill's blog post he presents a particular pattern, which is an excerpt from his book (Effective C#). The issue centers around the fact that a reader took the code sample presented in the book and ran FxCop (Code Analysis) on it, which generated a warning: "Ensure that base.Dispose() is always called." The "lesson learned" that Bill presents is that "tools are there to help us, not control us." While I completely agree with the belief that tools are there to help us, I think it's important to understand why FxCop is raising this particular warning.

The code presented in Bill's book looks like:

    // Have its own disposed flag.
    private bool disposed = false;

    protected override void Dispose(bool isDisposing)
    {
        // Don't dispose more than once.
        if (disposed)
            return;

        if (isDisposing)
        {
            // TODO: free managed resources here.
        }

        // TODO: free unmanaged resources here.

        // Let the base class free its resources.
        // Base class is responsible for calling
        // GC.SuppressFinalize( )
        base.Dispose(isDisposing);

        // Set derived class disposed flag:
        disposed = true;
    }

This code does follow all of the guidelines for implementing the Dispose pattern. In this case, it's presumably part of a larger example showing how to implement the pattern as part of a base class. The reason FxCop is warning you about this code is the first if statement in the Dispose method, which will cause the method to exit if disposed is true. The problem here is the possibility that, if the disposed flag is true, the call to base.Dispose() will never be executed. As Bill points out, it is possible for some other code elsewhere in the class to set this flag. He states that this is an "unlikely occurrence." While that is probably true, it is a potentially dangerous assumption to make, and one that can be easily corrected. By changing the code slightly you can remove this assumption and correct the FxCop violation:

    private bool disposed = false;

    protected override void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                // Dispose managed resources.
            }

            // Dispose unmanaged resources.

            disposed = true;
        }

        base.Dispose(disposing);
    }

Using this implementation allows the call to base.Dispose() to always occur, which ensures that the disposal chain is always properly followed.

Technorati Tags: .NET, C#, Dispose Pattern
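To see the corrected ordering in context, here is a minimal, hypothetical base/derived pair built on that second version; the class names are illustrative, not from Bill's book:

    using System;

    // Hypothetical base class owning the start of the disposal chain.
    public class ResourceHolder : IDisposable
    {
        private bool disposed = false;

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (!disposed)
            {
                if (disposing)
                {
                    // Dispose managed resources owned by the base class.
                }
                disposed = true;
            }
        }
    }

    // Hypothetical derived class using the corrected pattern.
    public class DerivedHolder : ResourceHolder
    {
        private bool disposed = false;

        protected override void Dispose(bool disposing)
        {
            if (!disposed)
            {
                if (disposing)
                {
                    // Dispose managed resources owned by the derived class.
                }
                disposed = true;
            }

            // Outside the disposed check: the base class is never skipped,
            // which is exactly what the FxCop warning asks for.
            base.Dispose(disposing);
        }
    }

Even if the derived class's flag is somehow set early by another code path, base.Dispose(disposing) still runs, so the chain cannot be silently broken.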

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes against both software development models, a trend appears: the Waterfall Model closely resembles the Tortoise, in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying "Slow and steady wins the race." This is the perfect mantra for the Waterfall Model, which is seen as cumbersome and slow moving.

Waterfall Model Phases

Requirement Analysis & Definition
This phase focuses on defining requirements for the project to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output of this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project.

System Design
This phase focuses primarily on the actual architectural design of a system, and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase. However, major environmental decisions, like hardware and platform choices, are typically made in this phase. The basic goal of this phase is to design the application at the system level, where classes, interfaces, and interactions are defined. Scalability, distribution, and reliability should also be considered in every decision. The desired output of this phase is a functional design document that states all of the architectural decisions made for the project, along with diagrams such as sequence and class diagrams.

Software Design
This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.

Implementation / Coding
This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and integrated as the system is put together into a whole.

Software Integration & Verification
This phase focuses primarily on testing each of the units of code developed as well as testing the system as a whole. There are two basic types of testing in this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit, ensuring that it performs its desired task. Integration tests exercise the system as a whole, focusing on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected and actual results.

System Verification
This phase focuses primarily on testing the system as a whole against the list of project requirements and the desired operating environment.

Operation & Maintenance
This phase focuses primarily on handing the completed project over to the customer so that they can verify that all of their original requirements have been met. This phase also validates the correctness of the requirements and whether any changes need to be made. In addition, any problems not resolved in the previous phase are handled here.

The Waterfall Model's linear and sequential methodology offers a project certain advantages and disadvantages.

Advantages of the Waterfall Model
- Simple to implement and execute, for individual projects or company-wide
- Limited demand on resources
- Large emphasis on documentation

Disadvantages of the Waterfall Model
- Completed phases cannot be revisited, regardless of whether issues arise within a project
- Accurate requirements are rarely gathered by the end of the requirements phase, due to the lack of clarity about the client's desires
- Small changes or errors that arise in applications may cause additional problems
- The client cannot change any requirements once the requirements phase has been completed, leaving them no option to adjust as their desires change
- Excess documentation
- Phases are cumbersome and slow moving

Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model.

Conversely, the Hare shares traits with the Prototyping software development model, in that ideas are rapidly converted to basic working examples, and subsequent changes are made to quickly align the project with customers' desires as those desires are formulated and as the software strays from the customer's vision. The basic concept of prototyping is to eliminate the need for fully defined project requirements up front. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become clearer. This process lets customers feel their way around the application to ensure that they are getting exactly what they want. This model also works well for determining the feasibility of certain approaches to an application: prototypes allow for quickly developing examples that implement specific functionality based on certain techniques.

Advantages of Prototyping
- Active participation from users and customers
- Allows customers to change their mind in specifying requirements
- Customers get a better understanding of the system as it is developed
- Earlier bug/error detection
- Promotes communication with customers
- The prototype could be used as the final production version
- Reduced time needed to develop applications compared to the Waterfall method

Disadvantages of Prototyping
- Promotes constant redefinition of project requirements, which can cause major system rewrites
- Potential for increased complexity as the scope of the system expands
- Customers could mistake the prototype for the finished working version
- Implementation compromises can increase complexity when applying updates and/or fixes

When companies are trying to decide between the Waterfall and Prototyping models, they need to evaluate the benefits and disadvantages of both. Typically, smaller companies or projects with major time constraints head for more of a Prototyping approach, because it can reduce the time needed to complete the project: the focus is on building the product rather than on defining requirements and scope before any work starts. On the other hand, companies with well-defined requirements and the time to generate proper documentation should steer toward the Waterfall Model, because they are in a position to obtain clarified requirements and to design an optimal solution before coding begins.

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >