Search Results

Search found 3587 results on 144 pages for 'programmer jokes'.

  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you’ve done a number of SQL code reviews, you’ll know the signs in the code that suggest all might not be well. These ‘code smells’ are coding styles that don’t directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase on the “OnceAndOnlyOnce” page of www.C2.com, where Kent also said that code “wants to be simple”. “Bad Smells in Code” was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book ‘Refactoring: Improving the Design of Existing Code’ (ISBN 978-0201485677). Although there are generic code smells, SQL has its own particular coding habits that will alert the programmer to the need to refactor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in code smells in C#. I’ve always been tempted by the idea of automating a preliminary code review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic ‘Lint’ did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan ‘Peli’ de Halleux is used for finding code smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I’d like to make a start by discovering whether there is a general opinion amongst database developers about what the most important SQL smells are.
    One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I’ll use dynamic SQL occasionally. You can only use code smells as an aid to your own judgment, and it is fine to ‘sign them off’ as being appropriate in particular circumstances. Also, whole classes of ‘code smells’ may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a ‘code smell’ if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a code smell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist. Plamen Ratchev’s wonderful article Ten Common SQL Programming Mistakes lists some of these ‘code smells’ along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn’t entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported.
    If anything requires some sort of general agreement, the definition of code smells is one. I’m therefore going to make this blog ‘dynamic’, in that, if anyone tweets a suggestion with a #SQLCodeSmells tag (or sends me a tweet), I’ll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I’ll do my best to oblige. In other words, I’ll try to keep this blog up to date. The name against each ‘smell’ is the name of the person who tweeted me, commented about, or has written about the ‘smell’; it does not imply that they were the first ever to think of it!
    - Use of deprecated syntax such as *= (Dave Howard)
    - Denormalisation that requires the shredding of the contents of columns (Merrill Aldrich)
    - Contrived interfaces
    - Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
    - Datatype mismatches in predicates that rely on implicit conversion (Plamen Ratchev)
    - Using correlated subqueries instead of a join (Dave Levy / Plamen Ratchev)
    - The use of hints in queries, especially NOLOCK (Dave Howard / Mike Reigler)
    - Few or no comments
    - Use of functions in a WHERE clause (Anil Das)
    - Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
    - Excessive ‘overloading’ of routines
    - The use of Exec xp_cmdShell (Merrill Aldrich)
    - Excessive use of brackets (Dave Levy)
    - Lack of the use of a semicolon to terminate statements
    - Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev)
    - Duplicated code, or strikingly similar code
    - Misuse of SELECT * (Plamen Ratchev)
    - Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills)
    - Overuse of CLR routines when not necessary (Sam Stange)
    - Same column name in different tables with different datatypes (Ian Stirk)
    - Use of ‘broken’ functions such as ISNUMERIC without additional checks
    - Excessive use of the WHILE loop (Merrill Aldrich)
    - INSERT ... EXEC (Merrill Aldrich)
    - The use of stored procedures where a view is sufficient (Merrill Aldrich)
    - Not using two-part object names (Merrill Aldrich)
    - Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
    - Full outer joins even when they are not needed (Plamen Ratchev)
    - Huge stored procedures (hundreds/thousands of lines)
    - Stored procedures that can produce different columns, or order of columns, in their results, depending on the inputs
    - Code that is never used
    - Complex and nested conditionals
    - WHILE (not done) loops without an error exit
    - Variable name the same as the datatype
    - Vague identifiers
    - Storing complex data or lists in a character map, bitmap or XML field
    - User procedures with the sp_ prefix (Aaron Bertrand)
    - Views that reference views that reference views that reference views (Aaron Bertrand)
    - Inappropriate use of sql_variant (Neil Hambly)
    - Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
    - Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield, Atlantis UK)
    - Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield, Atlantis UK)
    - Code that allows SQL injection (Mladen Prajdic; see the sketch after this list)
    - Tables without clustered indexes (Matt Whitfield, Atlantis UK)
    - Use of SELECT DISTINCT to mask a join problem (Nick Harrison)
    - Multiple stored procedures with nearly identical implementation (Nick Harrison)
    - Excessive column aliasing, which may point to a problem, or could be a mapping implementation (Nick Harrison)
    - Joining “too many” tables in a query (Nick Harrison)
    - Stored procedures returning more than one record set (Nick Harrison)
    - A NOT LIKE condition (Nick Harrison)
    - Excessive OR conditions (Nick Harrison)
    - sp_OACreate or anything related to it (Bill Fellows)
    - Prefixing names with tbl_, vw_, fn_, and usp_ (‘tibbling’) (Jeremiah Peschka)
    - Aliases that go a, b, c, d, e... (Dave Levy / Diane McNurlan)
    - Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
    - ORDER BY 3,2 (Dave Levy)
    - Multi-statement table functions which are then filtered: ‘SELECT * FROM Udf() WHERE Udf.Col = Something’ (Dave Ballantyne)
    - Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)
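    To make one of these smells concrete, take “code that allows SQL injection”. The usual fix is to keep user input out of the query text entirely and pass it as a parameter. Here is a minimal C# sketch of the difference (the open connection, the dbo.Customers table and its columns are my own hypothetical examples, purely for illustration):

        using System;
        using System.Data.SqlClient;

        class CustomerLookup
        {
            // Smelly: concatenating user input into the SQL text. An input such as
            // "x' OR '1'='1" changes the meaning of the query.
            static string SmellyQuery(string name)
            {
                return "SELECT CustomerID FROM dbo.Customers WHERE Name = '" + name + "'";
            }

            // Better: the query text is constant; the value travels as a parameter,
            // so it can never be executed as SQL.
            static int? FindCustomer(SqlConnection conn, string name)
            {
                using (var cmd = new SqlCommand(
                    "SELECT CustomerID FROM dbo.Customers WHERE Name = @name;", conn))
                {
                    cmd.Parameters.AddWithValue("@name", name);
                    object result = cmd.ExecuteScalar();
                    return result == null ? (int?)null : Convert.ToInt32(result);
                }
            }
        }

    The same principle applies inside T-SQL itself: if dynamic SQL is unavoidable, sp_executesql with parameters is the equivalent of the parameterised command above.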

    Read the article

  • API Message Localization

    - by Jesse Taber
    In my post, “Keep Localizable Strings Close To Your Users”, I talked about the internationalization and localization difficulties that can arise when you sprinkle static localizable strings throughout the different logical layers of an application. The main point of that post is that you should have your localizable strings reside as close to the user-facing modules of your application as possible. For example, if you’re developing an ASP.NET web forms application, all of the localizable strings should be kept in .resx files that are associated with the .aspx views of the application. In this post I want to talk about how this same concept can be applied when designing and developing APIs.
    An API Facilitates Machine-to-Machine Interaction
    You can typically think about a web, desktop, or mobile application as a collection of “views” or “screens” through which users interact with the underlying logic and data. The application can be designed based on the assumption that there will be a human being on the other end of the screen working the controls. You are designing a machine-to-person interaction, and the application should be built in a way that facilitates the user’s clear understanding of what is going on. Dates should be formatted in a way that the user will be familiar with, messages should be presented in the user’s preferred language, etc. When building an API, however, there are no screens and you can’t make assumptions about who or what is on the other end of each call. An API is, by definition, a machine-to-machine interaction. A machine-to-machine interaction should be built in a way that facilitates a clear and unambiguous understanding of what is going on. Dates and numbers should be formatted in predictable and standard ways (e.g. ISO 8601 dates) and messages should be presented in machine-parseable formats. For example, consider an API for a time tracking system that exposes a resource for creating a new time entry. The JSON for creating a new time entry for a user might look like:

        {
          "userId": 4532,
          "startDateUtc": "2012-10-22T14:01:54.98432Z",
          "endDateUtc": "2012-10-22T11:34:45.29321Z"
        }

    Note how the parameters for start and end date are both expressed as ISO 8601 compliant dates in UTC. Using a date format like this in our API leaves little room for ambiguity. It’s also important to note that using ISO 8601 dates is a much, much saner thing than the \/Date(<milliseconds since epoch>)\/ nonsense that is sometimes used in JSON serialization. Probably the most important thing to note about the JSON snippet above is the fact that the end date comes before the start date! The API should recognize that and disallow the time entry from being created, returning an error to the caller. You might be inclined to send a response that looks something like this:

        {
          "errors": [ { "message": "The end date must come after the start date" } ]
        }

    While this may seem like an appropriate thing to do, there are a few problems with this approach:
    - What if there is a user somewhere on the other end of the API call that doesn’t speak English?
    - What if the message provided here won’t fit properly within the UI of the application that made the API call?
    - What if the verbiage of the message isn’t consistent with the rest of the application that made the API call?
    - What if there is no user directly on the other end of the API call (e.g. this is a batch job uploading time entries once per night, unattended)?
    The API knows nothing about the context from which the call was made.
    There are steps you could take to give the API some context (e.g. allow the caller to send along a language code indicating the language that the end user speaks), but that will only get you so far. As the designer of the API you could make some assumptions about how the API will be called, but if we start making assumptions we could very easily make the wrong assumptions. In this situation it’s best to make no assumptions and simply design the API in such a way that the caller has the responsibility to convey error messages in a manner that is appropriate for the context in which the error was raised. You could work around some of these problems by allowing callers to add metadata to each request describing the context from which the call is being made (e.g. accepting a ‘locale’ parameter denoting the desired language), but that would add needless clutter and complexity. It’s better to keep the API simple and push those context-specific concerns down to the caller whenever possible. For our very simple time entry example, this can be done by simply changing our error message response to look like this:

        {
          "errors": [ { "code": 100 } ]
        }

    By changing our error response from exposing a string to a numeric code that is easily parseable by another application, we’ve placed all of the responsibility for conveying the actual meaning of the error message on the caller. It’s best to have the caller be responsible for conveying this meaning because the caller understands the context much better than the API does. Now the caller can see error code 100, know that it means that the end date submitted falls before the start date, and take appropriate action. All of the problems listed above are now non-issues, because the caller can simply translate the error code of ‘100’ into the proper action and message for the current context. The numeric code representation of the error is a much better way to facilitate the machine-to-machine interaction that the API is meant to facilitate.
    An API Does Have Human Users
    While APIs should be built for machine-to-machine interaction, people still need to wire these interactions together. As a programmer building a client application that will consume the time entry API, I would find it frustrating to have to go dig through the API documentation every time I encounter a new error code (assuming the documentation exists and is accurate). The numeric error code approach hurts the discoverability of the API and makes it painful to integrate with. We can help ease this pain by merging our two approaches:

        {
          "errors": [ { "code": 100, "message": "The end date must come after the start date" } ]
        }

    Now we have an easily parseable numeric error code for the machine-to-machine interaction that the API is meant to facilitate, and a human-readable message for programmers working with the API. The human-readable message here is not intended to be viewed by end-users of the API, and as such is not really a “localizable string” in my opinion. We could opt to expose a locale parameter for all API methods and store translations for all error messages, but that’s a lot of extra effort and overhead that doesn’t add a lot of real value to the API. I might be a bit of an “ugly American”, but I think it’s probably fine to have the API return English messages when the target for those messages is a programmer.
When resources are limited (which they always are), I’d argue that you’re better off hard-coding these messages in English and putting more effort into building more useful features, improving security, tweaking performance, etc.
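    To make the caller’s side of this concrete, here is a minimal C# sketch of how a consuming application might translate the numeric code into a message appropriate for its own context. The error code value, the class names, and the resource keys are my own illustrative assumptions, not part of any real API:

        using System.Collections.Generic;
        using System.Globalization;
        using System.Resources;

        public class ApiError
        {
            public int Code { get; set; }
            public string Message { get; set; } // the developer-facing English text
        }

        public static class ApiErrorTranslator
        {
            // Hypothetical mapping from API error codes to keys in the caller's
            // own localized .resx resources; the caller owns and maintains this.
            private static readonly Dictionary<int, string> ResourceKeys =
                new Dictionary<int, string>
                {
                    { 100, "TimeEntry_EndBeforeStart" },
                };

            // Resolve a code to a message in the caller's current culture, falling
            // back to the API's English text for codes we don't know about yet.
            public static string Translate(ApiError error,
                ResourceManager resources, CultureInfo culture)
            {
                string key;
                if (ResourceKeys.TryGetValue(error.Code, out key))
                {
                    string localized = resources.GetString(key, culture);
                    if (localized != null)
                    {
                        return localized;
                    }
                }
                return error.Message;
            }
        }

    The point of the sketch is only that the lookup table and the translations live in the client, right next to its UI, which is exactly where the earlier post argued localizable strings belong.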

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code, with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing their code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once, for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically into the bin, or setting a light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
    The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to a Helvetica font to look more ‘Bauhaus’.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by a young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the belief of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if he charges the machine-guns determinedly enough, all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipe dream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?
    Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project, because we are far too prone to cling to our existing source code.

    Read the article

  • Super constructor must be a first statement in Java constructor [closed]

    - by Val
    I know the answer: "we need rules to prevent shooting into your own foot". OK, I make millions of programming mistakes every day. To prevent them all, we would need one simple rule: prohibit the whole JLS and do not use Java. If we explain everything by "not shooting your foot", that is reasonable; but there is not much reason in such a reason. When I programmed in Delphi, I always wanted the compiler to check whether I was reading uninitialized variables. I had discovered for myself that it is stupid to read an indeterminate variable, because it leads to unpredictable results and is obviously erroneous. Just by looking at the code I could see where the error was, and I wished the compiler could do this job. It is also a reliable signal of a programming error if a function does not return any value. But I never wanted it to enforce calling the super constructor first. Why? You say that constructors just initialize fields. Super fields are inherited; extra fields are introduced. From the goal point of view, it does not matter in which order you initialize the variables. I have studied parallel architectures and can say that all the fields could even be assigned in parallel... What? Do you want to use the uninitialized fields? Stupid people always want to take away our freedoms and break the JLS rules that God gives to us! Please, policeman, take that person away! Where did I say that? I am talking only about initializing/assigning the fields, not about using them. The Java compiler already defends me from the mistake of reading uninitialized locals. Some cases sneak through, and this example shows how the super-first rule does not save us from read-accessing incompletely initialized objects during construction:

        public class BadSuper {
            String field;

            public String toString() {
                return "field = " + field;
            }

            public BadSuper(String val) {
                field = val;
                // Yea, super-first does not protect from accessing unconstructed
                // subclass fields. The subclass constructor would have to be
                // called before super()!
                System.err.println(this);
            }
        }

        public class BadPost extends BadSuper {
            Object o;

            public BadPost(Object o) {
                super("str");
                this.o = o;
            }

            public String toString() {
                // The superclass constructor will boom here, because o is not initialized!
                return super.toString() + ", obj = " + o.toString();
            }

            public static void main(String[] args) {
                new BadSuper("test 1");
                new BadPost(new Object());
            }
        }

    It shows that, actually, the subclass fields would have to be initialized before the superclass constructor runs! Meanwhile, the Java requirement "saves" us from specializing a class by specializing what the super constructor argument is:

        public class MyKryo extends Kryo {
            class MyClassResolver extends DefaultClassResolver {
                public Registration register(Registration registration) {
                    System.out.println(MyKryo.this.getDepth());
                    return super.register(registration);
                }
            }

            MyKryo() {
                // cannot instantiate MyClassResolver before super()
                super(new MyClassResolver(), new MapReferenceResolver());
            }
        }

    Try to make that compile. It is always a pain, especially when you cannot assign the argument later. Initialization order is not important for initialization in general. I could understand that you should not use super methods before the superclass is initialized. But the requirement for super() to be the first statement is different: it only stops you from writing code that does useful things simply. I do not see how it adds safety. Actually, safety is degraded, because we need to use ugly workarounds. Doing post-initialization outside the constructors also degrades safety (otherwise, why do we need constructors?) and defeats final, Java's safety enforcer. To conclude:
    - Reading a field that is not initialized is a bug.
    - Initialization order is not important from the computer-science point of view; doing initialization or computations in a different order is not a bug.
    - Preventing read access to uninitialized data is good, but compilers fail to detect all such bugs.
    - Making super() the first statement does not solve the problem: it "prevents" shooting at the right things, but not at your foot.
    - It requires inventing workarounds where, because of the complexity of analysis, it is easier to shoot yourself in the foot.
    - Doing post-initialization outside the constructors degrades safety (otherwise, why do we need constructors?) and degrades it further by defeating the final access modifier.
    When the Java forum was still alive, Java bigots attacked me for these thoughts. In particular, they disliked the idea that fields can be initialized in parallel, saying that natural development ensures correctness. When I replied that you could use advanced engineering to create a human right away, without "developing" any ape first, and it would still be an ape, they stopped listening to me, because modern technology cannot afford it. OK, take something simpler. How do you produce a Renault? Should you construct an Automobile first? No, you start by producing a Renault and, once it is completed, you see that it is an automobile. So, the requirement to produce fields in the "natural order" is unnatural. In the case of an alarm clock or an armchair, which are still a clock and a chair, you may indeed need to develop the base (the clock or the chair) first and then add the extras. So, I can find examples where super fields must be initialized first and, oppositely, examples where they need to be initialized later. The order does not exist in advance, so the compiler cannot be aware of the proper order; only the programmer knows it. The compiler should not take on that responsibility and enforce the wrong order onto the programmer. Saying that I cannot initialize some fields because I have not initialized the others is like saying "you cannot initialize the thing because it is not initialized". That is the kind of argument we have here. So, to conclude once more: a feature that "protects" me from doing things in a simple and right way, in order to enforce something that does not add noticeably to bug elimination, is a strongly negative thing, and it pisses me off, together with all the arguments in support of it I have seen so far. This is "a conceptual question about software development": should there be a requirement to call super() first or not? I do not know. If you do, or have an idea, you have a place to answer. I think that I have provided enough arguments against this feature; let us hear from the ones who benefit from it. Let it be something more than a simple, abstract "write your own language" or "protection" kind of argument. Why do we need it in the language that I am going to develop?
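    For what it is worth, other languages have made a different ordering choice, which supports the point that the order is a design decision rather than a law of nature. In C#, for instance, derived-class field initializers run before the base-class constructor body. Below is a rough C# port of the BadSuper/BadPost example (my own sketch, not from the question above): when o is assigned in a field initializer, the virtual call made from the base constructor already sees it initialized, and nothing booms.

        using System;

        class BadSuper
        {
            protected string field;

            public override string ToString()
            {
                return "field = " + field;
            }

            public BadSuper(string val)
            {
                field = val;
                // In C#, BadPost's field initializers have already run at this
                // point, so the virtual call below sees an initialized 'o'.
                Console.Error.WriteLine(this);
            }
        }

        class BadPost : BadSuper
        {
            // Field initializer: runs BEFORE BadSuper's constructor body.
            private readonly object o = new object();

            public BadPost() : base("str") { }

            public override string ToString()
            {
                return base.ToString() + ", obj = " + o.ToString();
            }

            static void Main()
            {
                new BadSuper("test 1");
                new BadPost();
            }
        }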

    Read the article

  • SQL Developer at Oracle Open World 2012

    - by thatjeffsmith
    We have a lot going on in San Francisco this fall. One of the most personally exciting bits, for what will be my 4th or 5th Open World, is that this will be my FIRST as a member of Team Oracle. I’ve presented once before, but most years it was just me pressing flesh at the vendor booths. After 3-4 days of standing and talking, you’re ready to just go home and not do anything for a few weeks. This time I’ll have a chance to walk around and talk with our users and get a good idea of what’s working and what’s not. Of course it will be a great opportunity for you to find us and get to know your SQL Developer team!
    (3.4 miles across and back – thanks Ashley for signing me up for the run!)
    This year is going to be a bit crazy. Work-wise I’ll be presenting twice, working a booth, and proctoring several of our Hands-On Labs. The fun parts will be equally crazy though – running across the Bay Bridge (I don’t run), swimming the Bay (I don’t swim), having my wife fly out on Wednesday for the concert, and then our first WhiskyFest on Friday (I do drink whisky, though.) But back to work – let’s talk about EVERYTHING you can expect from the SQL Developer team.
    Booth Hours
    We’ll have 2 ‘demo pods’ in the Exhibition Hall over at Moscone South. Look for the farm of Oracle booths; we’ll be there under the signs that say ‘SQL Developer.’ There will be several people on hand, mostly developers (yes, they still count as people), who can answer your questions or demo the latest features. Come by and say ‘Hi!’, and let us know what you like and what you think we can do better. Seriously.
    - Monday: 10 AM – 6 PM
    - Tuesday: 9:45 AM – 6 PM
    - Wednesday: 9:45 AM – 4 PM
    Presentations
    Stop by for an hour, pull up a chair, sit back, and soak in all the SQL Developer goodness. You’ll only have to suffer my bad jokes for two of the presentations, so please at least try to come to the other ones. We’ll be talking about data modeling, migrations, source control, and new features in versions 3.1 and 3.2 of SQL Developer and SQL Developer Data Modeler.
    - Monday 10:45 – What’s New in SQL Developer
    - Monday 4:45 – Why Move to Oracle Application Express Listener
    - Tuesday 10:15 – Using Subversion in Oracle SQL Developer Data Modeler
    - Tuesday 11:45 – Oracle SQL Developer Tips & Tricks
    - Tuesday 5:00 – Database Design with Oracle SQL Developer Data Modeler
    - Wednesday 11:45 – Migrating Third-Party Databases and Applications to Oracle
    - Wednesday 3:30 – Exadata 11g Enterprise Options and Management Packs for Developers
    Hands-On Labs (HOLs)
    The Hands-On Labs allow you to come into a classroom environment, sit down at a computer, and run through some exercises. We’ll provide the hardware, software, and training materials. It’s self-paced, but we’ll have several helpers walking around to answer questions and chat up any SQL Developer or database topic that comes to mind. If your employer is sending you to Open World for all that great training, the HOLs are a great opportunity to capitalize on that. They are only 60 minutes each, so you don’t have to worry about burning out. And there’s no homework! Of course, if you do want to take the labs home with you, many are already available via the Developer Day Hands-On Database Applications Developer Lab. You will need your own computer for those, but we’ll take care of the rest.
    - Wednesday 10:15 – PL/SQL Development and Unit Testing with Oracle SQL Developer
    - Wednesday 11:45 – Performance Tuning with Oracle SQL Developer
    - Thursday 11:15 – The Soup to Nuts of Data Modeling with Oracle SQL Developer Data Modeler
    Some Parting Advice
    Always wanted to meet your favorite Oracle authors, speakers, and thought-leaders? Don’t be shy; walk right up to them and introduce yourself. Normal social rules still apply, but at the conference everyone is open and up for meeting and talking with attendees. Just understand, if there’s a line, that you might only get a minute or two. It’s a LONG conference though, so you’ll have plenty of time to catch up with everyone. If you’re going to be around on Tuesday evening, head on over to the OTN Lounge from 4:30 to 6:30 and hang out for our Tweet Meet. That’s right, all the Oracle nerds on Twitter will be there in one place. Be sure to put your Twitter handle on your name tag so we know who you are!

    Read the article

  • Going to the Score Cards - Exceptional DBA Awards 2011

    - by Rodney
    This year marks my 4th year as a judge for the Exceptional DBA Awards, founded by Red Gate in 2008 to "recognize the essential but often overlooked contributions of DBAs, the unsung heroes of the IT community." As a professional DBA myself, I have been honored to participate as a judge. It is not an easy job, because there is a voluminous number of nominees from all over the world. Each judge has to read through every word of the nominees' answers, deciding what makes each person special and stand out amongst their peers. What drives them? What single element of their submission will shine above all others? It is my hope that what I am about to divulge to you as a judge will prompt you to think about yourself, or someone you know, and decide that you may be the exceptional DBA who can take home the gold at this year's award ceremony in Seattle. We are more than a few weeks into the nomination process, and there are quite a number of submissions already. I cannot tell you how many, as that would not be fair. I can say it is not 1 million or more. I can also say that it is not 100,000. But that is all I can say about that. However, I can tell you that it is enough this year that we are breaking records on the number of people who have been influenced, inspired or intrigued by the awards in the past. I remember them all like it was yesterday. (Fuzzy thought cloud here.) It was a rainy day in Seattle (all memories for each award ceremony will start thusly) and I was in the hotel going over my notes on what I wanted to say about the winner of the 2008 Red Gate Exceptional DBA Award. The notes were on index cards that I had either bought or stolen from my wife, I do not recall, but I was nervous, which was unlike me. This was, after all, a big night for the winner. Of course we, the judges and the SQL community, had already decided the winner, and now all that remained was to present the award. The room was packed. It was Casino night, sponsored by sqlservercentral.com. Money (fake), drinks (not fake) and camaraderie flowed through the room. Dan McClain won the award that year. He worked for Anheuser-Busch at the time. I promise that did not influence my decision. We presented Dan with the award. He was very proud of this achievement, rightfully so, as was the SQL community for him. I spoke with Dan throughout the conference and realized how huge this award was for him, not just personally but professionally. It was a rainy day in Seattle in 2009 and I was nervous. I was asked to speak to a group of people again as a judge for the Exceptional DBA Awards. This year, Josef Richberg would be the recipient of the award, but he would not be able to attend. We all prayed for him as he fought through an illness and congratulated him for his accomplishments as a DBA for his company. He got better, sallied forth, and continued to give back to the SQL community that he saw as one big family. In 2010, and I am getting ahead of myself, he was asked to be a judge himself for the very award he had received the year before. It was a sunny day in Seattle and I missed it, because it was in July and I was not there. It was a rainy day in Seattle and it is 2010 and Tracy Hamlin enters a submission that blows this judge away. She is managing a 50-terabyte distributed database ("50 terabytes! Are you kidding me!!!", Rodney jokes) and loves her daily job as a DBA, working with developers, mentoring them, and teaching them best practices with kindness and patience.
    She is a people person who just happens to have 10+ years of experience with RDBMSs. She wins the award and goes on to be recognized as famous at PASS. It will be a rainy day in Seattle this year when I sit amongst my old constituent judges and friends, Brad McGehee, http://www.simple-talk.com/books/sql-books/how-to-become-an-exceptional-dba,-2nd-edition/, Steve Jones, whom we all know and love at http://www.sqlservercentral.com, and a young upstart to the SQL community, this cat named Brent Ozar, to announce the 2011 winner. I personally have not heard of Brent, but I am told I interviewed him for a DBA position several years ago and turned him down, http://www.brentozar.com/archive/2011/05/exceptional-dba-contest/ . I hope that did not jeopardize his future in the SQL world. I am a big-hearted oaf and would feel horrible. Hopefully I will meet him at PASS and we can work this all out and I can help him get a DBA job. The rain has stopped and a new year is upon us. The stakes are high... the competition is fierce... the rewards are incredible. The entry form awaits you: http://www.exceptionaldba.com/ I very much look forward to meeting you and presenting the award to you, in front of hundreds of your envious but proud peers, as the new Exceptional DBA for 2011 at the PASS Summit. Here is what you could win: the Exceptional DBA of the Year receives full conference registration for the 2011 PASS Summit in Seattle, where the awards ceremony will take place, four nights' hotel accommodation, and $300 towards travel expenses. They will also be featured on Simple-Talk. Are you ready? Are you nervous?

    Read the article

  • An MCM exam, Rob? Really?

    - by Rob Farley
    I took the SQL 2008 MCM Knowledge exam while in Seattle for the PASS Summit ten days ago. I wasn’t planning to do it, but I got persuaded to try. I was meaning to write this post to explain myself before the result came out, but it seems I didn’t get typing quickly enough. Those of you who know me will know I’m a big fan of certification, to a point. I’ve been involved with Microsoft Learning to help create exams. I’ve kept my certifications current since I first took an exam back in 1998, sitting many in beta, across quite a variety of topics. I’ve probably become quite good at them – I know I’ve definitely passed some that I really should’ve failed. I’ve also written that I don’t think exams are worth studying for. (That’s probably not entirely true, but it depends on your motivation. If you’re doing learning, I would encourage you to focus on what you need to know to do your job better. That will help you pass an exam – but the two skills are very different. I can coach someone on how to pass an exam, but that’s a different kind of teaching when compared to coaching someone about how to do a job. For example, the real world includes a lot of “it depends”, where you develop a feel for what the influencing factors might be. In an exam, it’s better to know the “Don’t use this technology if XYZ is true” concepts.) As for the Microsoft Certified Master certification… I’m not opposed to the idea of having the MCM (or in the future, MCSM) cert. But the barrier to entry feels quite high for me. When it was first introduced, the nearest testing centres to me were in Kuala Lumpur and Manila. Now there’s one in Perth, but that’s still a big effort. I know there are options in the US – such as one about an hour’s drive away from downtown Seattle – but it all just seems too hard. Plus, these exams are more expensive, and all up, I wasn’t sure I wanted to try them, particularly given that I don’t like to study. I used to study for exams. It would drive my wife crazy. I’d have some exam scheduled for some time in the future (like the time I had two booked for two consecutive days at TechEd Australia 2005), and I’d make sure I was ready. Every waking moment would be spent poring over exam material, and it wasn’t healthy. I got shaken out of that, though, when I ended up taking four exams in those two days in 2005 and passed them all. I also worked out that if I had a Second Shot available, then failing wasn’t a bad thing at all. Even without Second Shot, I’m much more okay about failing. But even just trying an MCM exam is a big effort. I wouldn’t want to fail one of them. Plus there’s the illusion to maintain. People have told me for a long time that I should just take the MCM exams – that I’d pass no problem. I’ve never been so sure. It was almost becoming a pride-point. Perhaps I should fail just to demonstrate that I can fail these things. Anyway – boB Taylor (@sqlboBT) persuaded me to try the SQL 2008 MCM Knowledge exam at the PASS Summit. They set up a testing centre in one of the rooms there, so it wasn’t out of my way at all. I had to squeeze it in between other commitments, and I certainly didn’t have time to even see what was on the syllabus, let alone study. In fact, I was so exhausted from the week that I fell asleep at least once (just for a moment, though) during the actual exam. Perhaps the questions need more jokes, I’m not sure.
I knew if I failed, then I might disappoint some people, but that I wouldn’t’ve spent a great deal of effort in trying to pass. On the other hand, if I did pass I’d then be under pressure to investigate the MCM Lab exam, which can be taken remotely (therefore, a much smaller amount of effort to make happen). In some ways, passing could end up just putting a bunch more pressure on me. Oh, and I did.

    Read the article

  • Passing the CAML thru the EY of the NEEDL

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved Passing the CAML thru the EY of the NEEDL Definitions: CAML (Collaborative Application Markup Language) is an XML based markup language used in Microsoft SharePoint technologies  Anonymous: A camel is a horse designed by committee  Dov Trietsch: A CAML is a HORS designed by Microsoft  I was advised against putting any Camel and Sphinx rhymes in here. Look it up in Google!  _____ Now that we have dispensed with the dromedary jokes (BTW, I have many more, but they are not fit to print!), here is an interesting problem and its solution.  We have built a list where the title must be kept unique so I needed to verify the existence (or absence) of a list item with a particular title. Two methods came to mind:  1: Span the list until the title is found (result = found) or until the list ends (result = not found). This is an algorithm of complexity O(N) and for long lists it is a performance sucker. 2: Use a CAML query instead. Here, for short list we’ll encounter some overhead, but because the query results in an SQL query on the content database, it is of complexity O(LogN), which is significantly better and scales perfectly. Obviously I decided to go with the latter and this is where the CAML s--t hit the fan.   A CAML query returns a SPListItemCollection and I simply checked its Count. If it was 0, the item did not already exist and it was safe to add a new item with the given title. Otherwise I cancelled the operation and warned the user. The trouble was that I always got a positive. Most of the time a false positive. The count was greater than 0 regardles of the title I checked (except when the list was empty, which happens only once). This was very disturbing indeed. To solve my immediate problem which was speedy delivery, I reverted to the “Span the list” approach, but the problem bugged me, so I wrote a little console app by which I tested and tweaked and tested, time and again, until I found the solution. Yes, one can pass the proverbial CAML thru the ey of the needle (e’s missing on purpose).  So here are my conclusions:  CAML that does not work:  Note: QT is my quote:  char QT = Convert.ToChar((int)34); string titleQuery = "<Query>><Where><Eq>"; titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>"; titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where></Query>"; titleQuery += "<ViewFields><FieldRef Name=" + QT + "Title" + QT + "/></ViewFields>";  Why? Even though U2U generates it, the <Query> and </Query> tags do not belong in the query that you pass. Start your query with the <Where> clause.  Also the <ViewFiels> clause does not belong. I used this clause to limit the returned collection to a single column, and I still wish to do it. I’ll show how this is done a bit later.   When you use the <Query> </Query> tags in you query, it’s as if you did not specify the query at all. What you get is the all inclusive default query for the list. It returns evey column and every item. It is expensive for both server and network because it does all the extra processing and eats plenty of bandwidth.   Now, here is the CAML that works  string titleQuery = "<Where><Eq>"; titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>"; titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where>";  You’ll also notice that inside the unusable <ViewFields> clause above, we have a <FieldRef> clause. This is what we pass to the SPQuery object. 
Here is how:

        SPQuery query = new SPQuery();
        query.Query = titleQuery;
        query.ViewFields = "<FieldRef Name=" + QT + "Title" + QT + "/>";
        query.RowLimit = 1;
        SPListItemCollection col = masterList.GetItems(query);

Two things to note: we enter the view fields into the SPQuery object, and we also limited the number of rows that the query returns. The latter is not always done, but in an existence test there is no point in returning hundreds of rows. The query will now return one item or none, which is all we need in order to verify the existence (or non-existence) of items. Limiting the number of columns and the number of rows is a great performance enhancer. That's all folks!!
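Pulled together, the whole check looks something like the sketch below. This is a consolidation, not the author's verbatim code: it assumes the SharePoint server object model (SPList, SPQuery, SPListItemCollection) and the same masterList reference, and uses escaped quotes instead of the QT character.

        // Minimal sketch of the existence test described above.
        private static bool TitleExists(SPList masterList, string uniqueID)
        {
            SPQuery query = new SPQuery();
            // Start directly with <Where>; no <Query> wrapper tags.
            query.Query = "<Where><Eq>" +
                          "<FieldRef Name=\"Title\"/>" +
                          "<Value Type=\"Text\">" + uniqueID + "</Value>" +
                          "</Eq></Where>";
            query.ViewFields = "<FieldRef Name=\"Title\"/>"; // one column only
            query.RowLimit = 1;                              // one row is enough
            return masterList.GetItems(query).Count > 0;
        }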

    Read the article

  • User is trying to leave! Set at-least confirm alert on browser(tab) close event!!

    - by kaushalparik27
    This is something that might be annoying or irritating for the end user. Obviously, it's impossible to prevent the end user from closing the/any browser. Just think of it if it became possible!!! That would be a horrible web world where sites would attack you every time and would not let you close your browser until you confirmed your shopping cart and did the payment. LOL :) You would need to open the task manager and might have to kill the running browser exe processes. Anyway, jokes apart, I have one situation where I need to alert/confirm with the user when they try to close the browser or change the URL. Think of this: you are creating a single-page intranet ASP.NET application where your employees can enter/select their TDS/Investment Declarations, and you wish to at least ALERT/CONFIRM them if they are attempting to:
[1] Close the browser
[2] Close the browser tab
[3] Go to some other site by changing the URL
without completing/freezing their declaration. So, finally the requirement is clear: I need to alert/confirm with the user on the above bulleted events. I am going to use the window.onbeforeunload event to make a JavaScript confirm alert box appear:

        <script language="JavaScript" type="text/javascript">
            window.onbeforeunload = confirmExit;
            function confirmExit() {
                return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";
            }
        </script>

See! You are halfway done! So, every time the browser unloads the page, the above confirm alert appears in front of the user. By saying "every time the browser unloads the page", I mean that whenever the page loads or a postback happens, the browser's onbeforeunload event will be executed. So, even a button submit or a link submit which causes the page to postback would cause the browser's onbeforeunload event to fire! So, now the hurdle is: how can we prevent the alert from showing when the page is being posted back via a button/link submit? The answer is jQuery :) The idea is, you just need to set the script reference src to the jQuery library and set the window.onbeforeunload event to null when any input/link causes the page to postback. Below is the complete code:

        <head runat="server">
            <title></title>
            <script src="jquery.min.js" type="text/javascript"></script>
            <script language="JavaScript" type="text/javascript">
                window.onbeforeunload = confirmExit;
                function confirmExit() {
                    return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";
                }
                $(function() {
                    $("a").click(function() {
                        window.onbeforeunload = null;
                    });
                    $("input").click(function() {
                        window.onbeforeunload = null;
                    });
                });
            </script>
        </head>
        <body>
            <form id="form1" runat="server">
            <div></div>
            </form>
        </body>
        </html>

So, by this post I have tried to set a confirm alert if the user tries to close the browser/tab or tries to leave the site by changing the URL. I have attached a working example with this post here. I hope someone might find it helpful.
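One aside that is not from the original post: browsers released after this was written ignore custom message strings and show their own generic prompt instead, and the addEventListener form is now the usual way to hook the event. A minimal sketch of the same idea in that style:

        // Present-day variant (a sketch under the assumption of a modern
        // browser): the custom text is ignored, but preventDefault() and
        // returnValue still trigger the generic confirm dialog.
        function confirmLeave(e) {
            e.preventDefault();   // standards-based way to request the prompt
            e.returnValue = '';   // still required by some engines
        }
        window.addEventListener('beforeunload', confirmLeave);

        // To skip the prompt on a deliberate postback, detach the handler:
        // window.removeEventListener('beforeunload', confirmLeave);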

    Read the article

  • Microsoft TechEd 2010 - Day 2 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 2 @ Bangalore. Today is day 2 @ Microsoft TechEd 2010. We had a lot of technical sessions; as usual there were many tracks going on side by side, and I was attending the Web Simplified track, which comprised the following sessions:
Developing a Scalable Media Application using ASP.NET MVC - This was somewhat advanced stuff. Anyway, I couldn't understand much because this is not my cup of tea and I haven't worked on it before.
ASP.NET MVC Unplugged - This was really great because the session covered the basics of MVC, showing what Model, View and Controller are and how they work, and the speaker went into the details of the same.
Building RESTful Applications with the Open Data Protocol - Some concepts were explained from the basics of how to build RESTful services, going on to some advanced configurations of the same.
Developing Scalable Web Applications with AppFabric Caching - This session showed the integration of AppFabric with .NET web applications. Instead of using InProc sessions, we can use AppFabric as a substitute for caching and out-of-proc session storage without writing code, with only a little bit of configuration, which brings high scalability and performance to our applications. (But unfortunately there were no demos for this session.)
Deep Dive: WCF RIA Services - This session was also an interactive one; the speaker presented from the basics of WCF, took a Book Store application as a sample, and explained all the detailed concepts of linking with RIA Services.
Apart from these sessions, in between there were some small events in the breaks, like discussions about technology and innovations, music, jokes, mimicry, etc. And for doing all these things, the developers were given some cool gifts/goodies like USBs, T-shirts, etc. Today I also got a chance to do the following certification: (70-562) Microsoft Certified Technology Specialist in .NET 3.5 Web Applications. Since I already have an MCTS in .NET 2.0, I wanted to do an MCPD, and for that I was required to update my MCTS to the .NET 3.5 framework. I did so, cleared it, and am now an MCTS in .NET 3.5 Web Apps. On doing this I got a T-shirt, and they gave me something called Learning $, worth 30$. At various stalls, for attending each quiz or game, or for referrals, we got more Learning $, which we could redeem later based on our total Learning $. I got 105$, which I was able to redeem for a Microsoft Learning backpack, 1 free Microsoft certification offer, a laptop light and an e-learning content activation. And after all these sessions and small events, we had something called Demo Extravaganza, like I mentioned yesterday. This was a great fun-filled event with lots of goodies for the attendees. There was a lucky draw which enabled 2 attendees to get netbooks (sponsored by Intel) and 1 attendee to get an Xbox (sponsored by Citrix). After choosing the raffle in the lucky draw, they placed it on a device called Microsoft Surface, a kind of big touch-screen device; on putting the raffle on it, it detected the attendee's code and intelligently said how many sessions that person had attended, and if he had attended more than 5 he got a netbook. This was coded by a guy called Imran.
Apart from that, they showed demos on:
- Research by 2 Tamil Nadu students from Krishna Arts and Science College, who had taken 1200 photographs of their college from different angles and put them up in Bing Maps using Silverlight, linked with Photosynth, which showed a 3D view of their college based on the photos they uploaded.
- Research by Microsoft on panoramic HD views of images. One young guy from Microsoft Research showed a demo of this on Srivilliputhur Andal Temple in Tamil Nadu and its history, with a panoramic view of the temple and the nearby places, narration of the historical information, and embedded videos with high-definition images which we could zoom to a very detailed level.
- A demo of a business app with Silverlight, Business Intelligence (BI) and maps integrated. It showed the sales of a particular product across locations.
- Some cool demos by 2 geeks who used robots to show their development talents: 2 robots fought with each other, and 2 robots danced in sync to the A.R. Rahman song Humma Humma...
- A dream home project by Raman, who is currently using it in his own home; robots control his home. They showed a video on this. Here is the list of activities the robot does for him: when he reads a book, the robot automatically scans it and shows the image of the person he is reading about on the screen (TV or computer) in front of him, shows a Wikipedia page about that person, says the person is not on LinkedIn and asks whether he wants to add him. If he sees IPL match news in the book and smiles, it understands he is interested, opens a related website, and shows the current game and the scorecard. It cooks for him. It cleans the room for him whenever he leaves the house. If an intruder comes inside the house while he is doing something, his computer automatically switches the screen to show video of the person coming inside. When he wakes up, it automatically opens up the system and loads his mails with the news by the side, etc.
- Some demos of Microsoft Pivot. This was in Live Labs but is now available at getpivot.com; it pivots pictorial data based on categories and filters on the searches that we do.
And finally, on filling up some feedback forms, we got T-shirts and Microsoft Visual Studio 2010 Training Kit CDs. What's more at TechEd??? Stay tuned!!! Will update you soon on the other happenings!! PS: I typed a lot of content for more than an hour, but I pressed backspace, the browser went to the previous page, and all my content was lost; I was not able to retrieve it and typed everything again.

    Read the article

  • SQLAuthority Guest Post – Lessons from Life and Work by Srini Chandra (Author of 3 Lives, in search of bliss)

    - by pinaldave
    Work and life are confusing terms when put together. How can one consider work outside of life? Work should be part of life, or are we considering ourselves dead when we are at work? I have often seen developers and DBAs complaining and confused about their job, work and life. Complaining is easy and everyone can do it. I have quite often heard the expression, "I do not have any other option." I requested Srini Chandra (renowned author of the Amazon best seller 3 Lives, in search of bliss (Amazon | Flipkart)) to write a guest post on this subject which developers can read and appreciate. Let us see Srini's thoughts in his own words. Each of us who works in the technology industry carries an especially heavy burden nowadays. For, fate has placed in our hands an awesome power to shape our society and its consciousness. For that reason, we must pay more and more attention to issues of professionalism, social responsibility and ethics. Equally importantly, the responsibility lies in our hands to ensure that we view our work and career as an opportunity to enlighten and lift ourselves up. Story: A Prisoner, 20 Years and a Wheel. Many years ago, I heard this story from a professor when I was a student at Carnegie Mellon. A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At times, when he was tired or puzzled and stopped turning the wheel, he would be flogged with a whip. The man did not know anything about the wheel other than that it was placed outside his jail somewhere. He wondered if the wheel crushed corn or if it ground wheat or something similar. He wondered if turning the wheel was useful to anyone. At the end of his jail term, he rushed out to see what the wheel was doing. To his disappointment, he found that the wheel was not connected to anything. All these years, he had been toiling for nothing. He gave a loud, frustrated shout and dropped dead. How many of us are turning wheels wondering what they are connected to? How many of us have unstated, uncaring attitudes towards our careers? How many of us view work as drudgery, as no more than a way to earn that next paycheck? How many of us have wondered about the spiritually uplifting aspect of work? Can a workforce that views work as merely a chore be ethical? Can it produce truly life-enhancing technology? Can it make positive contributions to the quality of life of a society? I think not. Thanks to Pinal and you, his readers, for giving me this opportunity to share my thoughts in a series of guest posts. I'd like to present a few ways over the next few weeks in which we can tap into the liberating potential of work and make our lives better in the process. Now, please allow me to tell you another version of the story that the good professor shared with us in the classroom that day. Story: A Prisoner, 20 Years, a Wheel and the LIFE. A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At first, his whole body and mind rebelled against his predicament. So, his limbs grew weary and his mind became numb and confused. And then, his self-awareness began to grow. He began to wonder how he came to be in the prison in the first place. He looked around and saw all his fellow prisoners also turning the wheel. His wife, his parents, his friends and his children – they were all in the prison too, and turning their own wheels! He began to wonder how this came about. 
As he wondered more and more, he began to focus less on his physical drudgery and boredom. And he began to clearly see his inner spirit, which guided him in ways that allowed him to see the world with a universal view. His inner spirit guided him towards the source of eternal wisdom and happiness. He began to see the source of happiness in everything around him – his prison-bound relationships, even his jailers, and his wheel. He became a source of light to those around him. His wheel jokes and humor infected them with joy and happiness. Finally, the day came for his release from jail. He walked calmly outside the jail and laughed aloud when he saw that the wheel was not connected to anything. He knelt down, kissed it and thanked it for the wisdom it taught him. Life is the prison. The wheel is your work. Both are sacred. Both have enormous powers to teach us wisdom and bring us happiness. Whether we allow them to do so is a choice we have to make. Over the next few weeks, I hope to share with you a few lessons that I have learnt at the wheel in my two decades of career (prison). Thank you for reading, and do let me know what you think. Reference: Srini Chandra (3 Lives, in search of bliss), Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, T SQL, Technology

    Read the article

  • How to build a great relationship with your colleagues

    - by Maria Sandu
    When you start a new job, you worry about your performance, about being able to do what the manager asks you to do, but you also worry about your relations with your colleagues. How will you get along with them? What if they don't like you? Have you ever felt you're "the new guy" and your colleagues already have their own way of talking to each other, their own jokes? It's a common feeling and can actually become stressful. I am Norbert, a Middleware Presales Intern in Hungary, and I've been working at Oracle for only 1 month. Joining such a big company has been a challenge from many perspectives. One of them was adapting to the environment and getting to know all my colleagues. You know, it's quite difficult to introduce yourself, to try to liaise with people and find some common topics, so I felt very lucky and comfortable when my manager introduced me to all of my colleagues. It was easier to settle in, and we basically had a starting point for our discussions. We started to talk about what my position means, how many years they've been with Oracle, other Oracle-related topics, but also more personal stuff like what they do after work. Having this opportunity to talk with all of them helped me introduce myself in a proper way, and I actually told them many things about myself. Networking wasn't my best skill, but these first days were really helpful from a networking point of view. What else can you do to get along with your colleagues? A second thing I consider really helpful for networking is asking work-related questions. For instance, when you don't know how to do something or don't understand it, asking one of your colleagues will also help you make a connection with them, and you can easily continue the discussion with other, more personal topics. It's a very effective strategy, and in a company like Oracle people are very willing to help you with your tasks and perform at a high level. If you see your colleagues going to lunch, you should join them. It will help you become part of their community, find out what's new in their lives, and, step by step, take part in their conversations and stay up to date with the hot topics they talk about. Another opportunity to become part of your colleagues' community is internal events. Subscribing to the local free-time activities mailing list is very useful for finding out when they're going out for a drink or attending all sorts of events. For instance, this is how I found out about a party within Oracle that most of the employees here attend. It's a wonderful opportunity for chatting and making a stronger connection with some of them. How important is attending these events? Think about how much time you spend at work. 
You'd like to enjoy your work and the environment, so getting along with your colleagues is a nice thing to have. I recently attended a corporate party whose purpose was to facilitate interaction and communication between employees. It was a real success and we had a lot of fun, especially because it was a costume party. All the fancy dresses and funny clothes we wore made the atmosphere really enjoyable. It was easy to liaise with colleagues I had never interacted with before. There was a friendly spirit among us, chatting about personal stuff and about various pleasant things. Working in an international company is not an easy thing because you interact with many people and they have different styles, but all these opportunities for informal interaction are a good way to adapt to the new working environment.

    Read the article

  • Summit reflections

    - by Rob Farley
    So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn't on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn't present at all. Let's not talk about next year. I'm not sure there are many options left. This year, I noticed that a lot more people recognised me and said hello. I guess that's potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I've put in through things like 24HOP... Yeah, ok. It'd be the singing. My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn't. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably. Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS' marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of "we're listening", and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks. PASSTV was a new thing this year. Justin, the host, featured on the couch and talked to a lot of people about a lot of things, including me (he talked to me about a lot of things; I don't think he talked to a lot of people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I'm keen to see how this medium can be developed over time. People who know me will know that I'm a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don't believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be on learning those skills, not on passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did. I wasn't expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn't even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I'll find out my result at some point in the future – the Prometric site just says "Tested" at the moment. As I said, it wasn't something I was expecting to do, but it was good to have something unexpected during the week. Of course it was good to catch up with old friends and make new ones. 
I feel like every time I’m in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won’t try to list them, because there are so many these days that people might feel sad if I don’t mention them. For those that I managed to see, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much. One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn’t normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He’s seen the significance of the SQL Server community, and says he’ll be back. It’ll be good to see him there. Will you be there too?

    Read the article

  • Ubuntu keyboard detection from bash script

    - by Ryan Brubaker
    Excuse my ignorance of Linux OS/hardware issues... I'm just a programmer :) I have an application that calls out to some bash scripts to launch external applications, in this case Firefox. The application runs on a kiosk with touch-screen capability. When launching Firefox, I also launch a virtual keyboard application that allows the user to have keyboard input. However, the kiosk also has both PS/2 and USB slots that would allow a user to plug in a keyboard. If a keyboard were plugged in, it would be nice if I didn't have to launch the virtual keyboard, and could instead give more screen space to the Firefox window. Is there a way for me to detect from the bash script whether a keyboard is plugged in? Would it show up in /dev, and if so, would it show up at a consistent location? Would it make a difference if the user used a PS/2 or USB keyboard? Thanks!
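One possible direction, sketched without the hardware to verify it: on typical Ubuntu systems, USB keyboards appear under /dev/input/by-id/ with names ending in -event-kbd, while PS/2 keyboards are listed in /proc/bus/input/devices. The "onboard" command below is a stand-in for whatever virtual keyboard app the kiosk uses.

        # Hedged sketch: launch the virtual keyboard only if no physical one is found.
        if ls /dev/input/by-id/*-event-kbd >/dev/null 2>&1 \
           || grep -qi 'keyboard' /proc/bus/input/devices; then
            echo "Physical keyboard present; skipping virtual keyboard."
        else
            onboard &    # hypothetical virtual keyboard command
        fi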

    Read the article

  • Best book for learning linux shell scripting?

    - by chakrit
    I normally work on Windows machines, but on some occasions I do switch to development on Linux. And my most recent project will be written entirely on a certain Linux platform (not the standard Apache/MySQL/PHP setup). So I thought it would pay to learn to write some Linux automation scripts now. I can get around the system, start/stop services, and compile/install stuff fine. Those are probably basic drills for a programmer. But if, for example, I wanted to deploy a certain application automatically to a newly minted Linux machine every month, I'd love to know how to do it. So if I want to learn serious Linux shell scripting, what book should I be reading? Thanks
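As a taste of the kind of automation being asked about, here is a minimal sketch of a monthly deployment; the host name, archive and paths are invented placeholders, and it assumes SSH key access to the target machine. Any serious shell scripting book will cover the pieces used here.

        #!/bin/bash
        # Hedged sketch: push a build to a fresh machine and restart the service.
        set -e
        HOST="deploy@newbox.example.com"   # placeholder target
        scp app.tar.gz "$HOST:/tmp/"
        ssh "$HOST" 'tar -xzf /tmp/app.tar.gz -C /opt/myapp && sudo service myapp restart'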

    Read the article

  • DNSCurve vs DNSSEC

    - by Bill Gray
    Can someone informed please give a lengthy reply about the differences and advantages/disadvantages of both approaches? I am not a DNS expert, and not a programmer. I have a decent basic understanding of DNS, and enough knowledge to understand how things like the Kaminsky bug work. From what I understand, DNSCurve has stronger encryption, is far simpler to set up, and is an altogether better solution. DNSSEC is needlessly complicated and uses breakable encryption; however, it provides end-to-end security, something DNSCurve does not. However, many of the articles I have read seem to indicate that end-to-end security is of little use or makes no difference. So which is true? Which is the better solution, or what are the disadvantages/advantages of each? Edit: I would appreciate it if someone could explain what is gained by encrypting the message contents, when the goal is authentication rather than confidentiality. The proof that the keys are 1024-bit RSA keys is here.

    Read the article

  • Socket(TCPIP) Unstable

    - by Lee Kwan Wee
    I have an SCPI server set up on a Win7 PC, with 2 other programs talking to it locally (127.0.0.1) over TCP/IP sockets 5025 and 5029. This worked well and was stable on a fresh PC, but when we moved it onto our production lines and the IT dept added their policies and stuff, it became unstable. The PC is connected to the production floor server, but both of the programs are running locally on the PC. The connection tends to be dropped when there is an idle period, and it takes 5-6 tries at refreshing the connection to get it back. I'm not a programmer myself, so I'm hoping to see if anyone here can help with some answers. Thank you very much!! Regards, KwanWee.

    Read the article

  • Make mysqldump output USE statements or full table names when dumping a single table with where clause

    - by tobyodavies
    Is it possible to get mysqldump to output USE statements for a single (partial) table dump? I've already got some scripts that I'd like to reuse which run mysqldump with some arguments and apply the dump to a remote server. However, since I haven't bothered to parse all the arguments to mysqldump, and there is no USE in the dump, the remote server is saying "no database selected". I'm a programmer more than anything else, so in the worst case I could easily use sed to modify the dump before applying it, but those scripts won't allow me to do this, as I don't have access to the dump between creation and application. EDIT: the ability to output fully qualified table names would also solve my problem.
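For what it's worth, a sketch of the direction to test first, with the caveat that I have not verified that --databases still emits the USE header when combined with --tables on every mysqldump version; the database, table and WHERE clause below are placeholders:

        # --databases makes mysqldump write CREATE DATABASE/USE statements;
        # --tables then narrows the name arguments to one table.
        mysqldump --where="id > 100" --databases mydb --tables mytable > partial.sql

        # Fallback if that combination drops the USE line: prepend it at
        # creation time, so no post-processing of the dump is needed.
        { echo "USE mydb;"; mysqldump --where="id > 100" mydb mytable; } > partial.sql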

    Read the article

  • Can Linux file permissions be fooled?

    - by puk
    I came across this example today and I wondered how reliable Linux file permissions are for hiding information:

        $ mkdir fooledYa
        $ mkdir fooledYa/ohReally
        $ chmod 0300 fooledYa/
        $ cd fooledYa/
        $ ls
        ls: cannot open directory .: Permission denied
        $ cd ohReally
        $ ls -ld .
        drwxrwxr-x 2 user user 4096 2012-05-30 17:42 .

Now I am not a Linux OS expert, so I have no doubt that someone out there will explain to me that this is perfectly logical from the OS's point of view. However, my question still stands: is it possible to fool, not hack, the OS into letting you view files/inode info which you are not supposed to? What if I had issued the command chmod 0000 fooledYa, could an experienced programmer find some roundabout way to read a file such as fooledYa/ohReally/foo.txt?
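For orientation, a short illustration of what the bits gate (an addition to the question, assuming a readable foo.txt exists at that path): on a directory, the execute bit allows traversal by exact name, while the read bit is what allows listing.

        $ chmod 0100 fooledYa              # execute-only: traverse, but no read/write
        $ cat fooledYa/ohReally/foo.txt    # works, if you already know the exact path
        $ ls fooledYa
        ls: cannot open directory fooledYa: Permission denied
        $ chmod 0000 fooledYa              # no permission bits at all
        $ cat fooledYa/ohReally/foo.txt
        cat: fooledYa/ohReally/foo.txt: Permission denied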

    Read the article

  • How to protect my VPS from winlogon RDP spam requests

    - by Valentin Kuzub
    I've got some hackers constantly hitting my RDP and generating thousands of audit failures in the event log. My password is pretty elaborate, so I don't think brute-forcing will get them anywhere. I am using a VPS and I am pretty much a noob in Windows Server security (I am a programmer myself and it's the web server for my site). What is a recommended approach to deal with this? I would prefer to block IPs after some number of failures, for example. Sorry if the question is not appropriate.
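One sketch of the block-after-failures idea (an aside, not from the question): the failed logons appear as event ID 4625 in the Security log, and a script that tallies them can feed offending addresses into the built-in firewall. The rule name and address below are placeholders.

        :: Hedged sketch: block one offending address at the Windows firewall.
        netsh advfirewall firewall add rule name="Block RDP attacker" ^
            dir=in action=block remoteip=203.0.113.5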

    Read the article

  • Attempting to migrate to Aptana from Dreamweaver - how to?

    - by Kerry
    I have been using Dreamweaver for years, and have used Dreamweaver MX, Dreamweaver MX 2004, Dreamweaver 8 and Dreamweaver CS4. They all carry a common theme and were relatively easy to migrate between. I use, however, very few features of Dreamweaver. Specifically:
- Syntax highlighting
- IntelliSense (code completion)
- FTP management/site management
- The search/replace
The idea I get from Aptana is that it's much more geared toward the web development/programmer side of things, has better visualizations of classes, etc., and handles everything else just as well. The problem is that it is not at all clear to me. I managed to start a new project, but it didn't associate it with an FTP connection. So, I created an FTP connection, and it looks like I can have many FTP connections for one project. My question then is: is there a tutorial or guide to migrating to Aptana from a Dreamweaver user's perspective?

    Read the article

  • development server?

    - by ajsie
    For a project there will be me and one more programmer developing a web service, and I wonder what the development environment should be like, because we need central storage (documents, pictures, business materials etc.), file version handling, LAMP (for testing the web service), etc. I have never set up an environment for this before and would like suggestions from experienced people on which tools to use for effective collaboration. What crossed my mind:
Separate applications:
- Google Wave (for communication back and forth, setting up guidelines, other information)
- TeamViewer (desktop sharing)
- Skype (calling)
VPS (Ubuntu server):
- SVN (version tracking)
- FTP (central storage)
- LAMP (testing the web service)
- SSH (managing the VPS)
Is this an appropriate programming environment? And regarding the VPS, is it best practice to use ONE VPS for all the tasks listed above? All suggestions and feedback are welcome!

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >