Search Results

Search found 14407 results on 577 pages for 'business rules'.


  • JCP.Next - Early Adopters of JCP 2.8

    - by Heather VanCura
    JCP.Next is a series of three JSRs (JSR 348, JSR 355 and JSR 358), to be defined through the JCP process itself, with the JCP Executive Committee serving as the Expert Group. The proposed JSRs will modify the JCP's processes (the Process Document and the Java Specification Participation Agreement, or JSPA) and will apply to all new JSRs for all Java platforms.

    The first - JCP.next.1, or more formally JSR 348, Towards a new version of the Java Community Process - was completed and put into effect in October 2011 as JCP 2.8. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements.

    The second - JSR 355, Executive Committee Merge - is also Final. You can read the JCP 2.9 Process Document. As part of the JSR 355 Final Release, the JCP Executive Committee published revisions to the JCP Process Document (version 2.9) and the EC Standing Rules (version 2.2). The changes went into effect following the 2012 EC Elections in November.

    The third - JSR 358, A major revision of the Java Community Process - was submitted in June 2012. This JSR will modify the Java Specification Participation Agreement (JSPA) as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons, the JCP EC (acting as the Expert Group for this JSR) expects to spend a considerable amount of time working on it. The JSPA is defined by the JCP as "a one-year, renewable agreement between the Member and Oracle." The success of the Java community depends upon an open and transparent JCP program. JSR 358, A major revision of the Java Community Process, is now in process and can be followed on java.net.

    The following JSRs and Spec Leads were the early adopters of JCP 2.8, who voluntarily migrated their JSRs from JCP 2.x to JCP 2.8 or above. More candidates for 2012 JCP Star Spec Leads!

    - JSR 236, Concurrency Utilities for Java EE (Anthony Lai/Oracle), migrated April 2012
    - JSR 308, Annotations on Java Types (Michael Ernst, Alex Buckley/Oracle), migrated September 2012
    - JSR 335, Lambda Expressions for the Java Programming Language (Brian Goetz/Oracle), migrated October 2012
    - JSR 337, Java SE 8 Release Contents (Mark Reinhold/Oracle) – EG Formation, migrated September 2012
    - JSR 338, Java Persistence 2.1 (Linda DeMichiel/Oracle), migrated January 2012
    - JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services (Santiago Pericas-Geertsen, Marek Potociar/Oracle), migrated July 2012
    - JSR 340, Java Servlet 3.1 Specification (Shing Wai Chan, Rajiv Mordani/Oracle), migrated August 2012
    - JSR 341, Expression Language 3.0 (Kin-man Chung/Oracle), migrated August 2012
    - JSR 343, Java Message Service 2.0 (Nigel Deakin/Oracle), migrated March 2012
    - JSR 344, JavaServer Faces 2.2 (Ed Burns/Oracle), migrated September 2012
    - JSR 345, Enterprise JavaBeans 3.2 (Marina Vatkina/Oracle), migrated February 2012
    - JSR 346, Contexts and Dependency Injection for Java EE 1.1 (Pete Muir/RedHat), migrated December 2011

    Read the article

  • How To: Automatically Remove www from a Domain in IIS7

    I recently moved the DevMavens.com site from one server to another and needed to ensure that the www.devmavens.com domain correctly redirected to simply devmavens.com. This is important for SEO reasons (you don't want multiple domains to refer to the same content) and it's generally better to use the shorter URL (www is so 20th century) rather than wasting 4 characters for zero gain.

    My friend and IIS guru Scott Forsyth pointed me to his blog post on how to set up IIS URL Rewriting. To get started, you simply install IIS Rewrite from this link using the super awesome Web Platform Installer. You should get something like this when you're done with the install:

    If you already have IIS Manager open, you may need to close it and re-open it before you see the URL Rewrite module. Once you do, you should see it listed for any given Site under the IIS section:

    Double click on the URL Rewrite icon, and then choose the Add Rule(s) action. You can simply create a blank rule, and name it "Redirect from www to domain.com". Essentially we're following the instructions from Scott Forsyth's post, but in reverse, since he's showing how to add 4 useless characters to the URL and I'm interested in removing them.

    After adding the name, we'll set the Match URL section's Using dropdown to Wildcards and specify a pattern of simply * to match anything. In the Conditions section we need to add a new condition with an Input of {HTTP_HOST} such that it should match the pattern www.devmavens.com (replace this with your domain). Ignore the Server Variables section. Set the action to Redirect and the Redirect URL to http://devmavens.com/{R:0} (replace with your domain). The {R:0} will be replaced with whatever the user had entered. So if they were going to http://www.devmavens.com/default.aspx they'll now be going to http://devmavens.com/default.aspx. The complete Inbound Rule should look like this:

    That's it! Test it out and make sure you haven't accidentally used my exact URLs and started sending all of your users to devmavens.com! :) Be sure to read Scott's post for more information on how to use regular expressions for your rules, and how to set them up via web.config rather than IIS Manager.

    Read the article

  • SQL SERVER – Inviting Ideas for SQL in Sixty Seconds – 12/12/12

    - by pinaldave
    Today is 12/12/12 – I am not sure when I will write this kind of date again – maybe never. This opportunity comes once in a lifetime, when the date, month and year all share the same digits. December 12th is one of the most fantastic days in my personal life. Four years ago on this day, I got married to my wife – Nupur Dave. Here are photos of our wedding (Dec 12, 2008). Here is a very interesting photo of myself earlier this year. It is not a photoshopped or modified photo. The only modification I have done here is to add the arrow and speech bubble.

    Every Wednesday I try to post one SQL in Sixty Seconds video. The journey has been fantastic and so far I have posted a total of 35 SQL in Sixty Seconds videos. The goal of the videos is to learn something in 1 minute. In our daily life we are all very busy and hardly have time for anything. No matter how busy we are – we all have one minute of time. Sometimes we wait for a minute in elevators, at the escalator, at a coffee shop, or just waiting for our phone to reboot.

    Today is a fantastic day – 12/12/12. Let me invite all of you to submit your SQL in Sixty Seconds ideas. If I like your idea and create a sixty second video from it – you will win surprise learning material from me. There are two very simple rules for the contest:

    - I should not have already recorded the tip.
    - The tip should be descriptive. Do not just suggest to cover "Performance Tuning" or "How to Create Index" or "More of reporting services". The tip should have around 100 words of description explaining the SQL tip.

    The contest is open forever. The winner will be announced whenever I use the tip to create a video. If I use your tip, I will for sure mention in the blog post that it is inspired by your suggestion. Meanwhile, do not forget to subscribe to the YouTube Channel. Here are my latest three videos from SQL in Sixty Seconds.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • SQL SERVER – Identify Numbers of Non Clustered Index on Tables for Entire Database

    - by pinaldave
    Here is the script which will give you the number of non-clustered indexes on every table in the entire database.

        SELECT COUNT(i.TYPE) NoOfIndex,
               [schema_name] = s.name,
               table_name = o.name
        FROM sys.indexes i
        INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
        INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        WHERE o.TYPE IN ('U')
        AND i.TYPE = 2
        GROUP BY s.name, o.name
        ORDER BY schema_name, table_name

    Here is the small story behind why this script was needed. I recently went to meet my friend in his office and he introduced me to his colleague as someone who is an expert in SQL Server indexing. I politely said I am still learning about indexing and have a long way to go. My friend's colleague right away said he had a suggestion for me related to indexes. According to him, he was looking for a script which would count all the non-clustered indexes on all the tables in the database and he was not able to find that on SQLAuthority.com. I was a bit surprised, as I really do not remember all the details about what I have written so far. I quickly pulled up my phone, tried to look for the script on my custom search engine, and he was correct. I never wrote a script which would count all the non-clustered indexes on tables in the whole database.

    Excessive indexing is not recommended in general. If you have too many indexes it will definitely negatively affect your performance. The above query will quickly give you details of the number of indexes on tables in your entire database. You can quickly glance at it and use the numbers as a reference. Please note that the number of indexes is not by itself an indication of bad indexing. There is a lot of wisdom I can write here but that is not the scope of this blog post. There are many different rules with indexes and many different scenarios. For example – a table which is a heap (no clustered index) is often not recommended for an OLTP workload (here is the blog post to identify them), drop unused indexes with careful observation (here is the script for it), identify missing indexes and after careful testing add them (here is the script for it). Even though I have given a few links here it is just the tip of the iceberg. If you follow only the above four pieces of advice your ship may still sink. Those who want to learn the subject in depth can watch the videos here after logging in.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
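
    If it is more convenient to run the same count from application code than from SSMS, a minimal C# sketch along these lines should work; the connection string and the surrounding program structure are my own assumptions, and only the query itself comes from the script above.

    ```csharp
    using System;
    using System.Data.SqlClient;

    class NonClusteredIndexCounts
    {
        static void Main()
        {
            // Hypothetical connection string - point it at your own server and database.
            const string connectionString = "Server=.;Database=YourDatabase;Integrated Security=true";

            const string query = @"
                SELECT COUNT(i.TYPE) NoOfIndex, [schema_name] = s.name, table_name = o.name
                FROM sys.indexes i
                INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
                INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
                WHERE o.TYPE IN ('U') AND i.TYPE = 2
                GROUP BY s.name, o.name
                ORDER BY schema_name, table_name;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Columns: number of non-clustered indexes, schema name, table name.
                        Console.WriteLine("{0,5}  {1}.{2}",
                            reader.GetInt32(0), reader.GetString(1), reader.GetString(2));
                    }
                }
            }
        }
    }
    ```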

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around. Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE. Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team. I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.

    Read the article

  • Responding to Invites

    - by Daniel Moth
    Following up on my post about Sending Outlook Invites, here is a shorter one on how to respond.

    Whatever your choice (ACCEPT, TENTATIVE, DECLINE), if the sender has not unchecked the "Request Response" option, then send your response. Always send your response. Even if you think the sender made a mistake in keeping it on, send your response. Seriously, not responding is plain rude.

    If you knew about the meeting, and you are happy investing your time in it, and the time and location work for you, and there is an implicit/explicit agenda, then ACCEPT and send it. If one or more of those things don't work for you then you have a few options.

    - Send a DECLINE explaining why.
    - Reply with email to ask for further details or for a change to be made. If you don't receive a response to your email, send a DECLINE when you've waited enough.
    - Send a TENTATIVE if you haven't made up your mind yet. Hint: if they really require you there, they'll respond asking "why tentative" and you can have a discussion about it.

    When you deem it appropriate, instead of the options above, you can also use the counter-propose feature of Outlook, but IMO that feature has a questionable interaction model and UI (on both the sender and recipient side), so many people get confused by it.

    BTW, two of my Outlook rules are relevant to invites. The first one auto-marks as read the ACCEPT responses if there is no comment in the body of the accept (I check later who has accepted and who hasn't via the "Tracking" button of the invite). I don't have a rule for the DECLINE and TENTATIVE responses because typically I follow up with folks that send those. The second rule ensures that all invites go to a specific folder. That is the first folder I see when I triage email. It is also the only folder which I have configured to show a count of all items inside it, rather than the unread count - when I send a response to an invite the item disappears from the folder, and hence it is empty and not nagging me.

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Can you Trust Search?

    - by David Dorf
    An awful lot of referrals to e-commerce sites come from web searches. Retailers rely on search engine optimization (SEO) to correctly position their website so they can be found. Search on "blue jeans" and the results are determined by a semi-secret algorithm -- in my case Levi.com, Banana Republic, and ShopStyle show up. The NY Times recently uncovered a situation where JCPenney, via third parties hired to help with SEO, was caught manipulating search results so they were erroneously higher in page rankings. No doubt this helped drive additional sales during this past Christmas. The article, The Dirty Little Secrets of Search, is well worth reading.

    My friend Ron Kleinman started an interesting discussion at the ARTS LinkedIn forum. He posed the question: the ability of a single company to "punish" any retailer (by significantly impacting their on-line sales volume) who does not play by their rules ... is this a good thing or a bad thing? Clearly JCP was in the wrong and needed to be punished, but should that decision lie with Google alone? Don't get me wrong -- I'm certainly not advocating we create a Department of Search where bureaucrats think of ways to spend money, but Google wields an awful lot of power in this situation, and it makes me feel uncomfortable.

    Now Google is incorporating more social aspects into their search results. For example, when Google knows it's me (i.e. I'm logged in when using Google) search results will be influenced by my Twitter network. In an effort to increase relevance, the blogs and re-tweeted articles from my network will be higher in the search results than they otherwise would be. So in the case of product searches, things discussed in my network will rise to the top. Continuing my blue jean example, if someone in my network had been discussing Macy's, perhaps it would now be higher in the result set.

    soapbox: I already have lots of spammers posting bogus comments to this blog in an effort to create additional links to their sites and thus increase their search ranking. Should I expect a similar situation in Twitter and eventually Facebook? Now retailers need to expand their SEO efforts to incorporate social media as well, but do us all a favor and please don't cheat.

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The game play consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal and I'm wondering if you can give me any pointers on how to do it better.

    I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:

    - The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess)
    - The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces)

    The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid, and draws it by converting logical sizes into pixel sizes (that is, multiplies it by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:

    - Player starts to drag a piece
    - UI asks game logic for the legal movement area of that piece and lets the player drag it within that area
    - Player lets go of a piece
    - UI snaps the piece to the grid (so that it is at a valid logical position)
    - UI tells game logic the new logical position (via mutator methods, which I'd rather avoid)

    I'm not quite happy with that:

    - I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary and snapping it to the grid.
    - I don't like the fact that the UI tells the game logic about the new state. I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI.

    I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:

    - Is such a physics layer common, or is it more typical to have the game logic layer do this?
    - Would the snapping to grid and piece dragging code belong to the UI or the physics layer?
    - Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
    - I've seen event-based collision detection in a game's code base once, that is, the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. What is more common? This approach or asking for the legal movement area first?

    [1] layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misguiding, because each layer can consist of several classes.
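
    One rough way to invert that last step, so the UI asks the logic layer where a piece may go instead of writing the new state into it, is sketched below; the class and method names are illustrative guesses in C#, not taken from the poster's code.

    ```csharp
    using System;

    // Hypothetical sketch: the logic layer owns grid occupancy, collision rules and snapping,
    // so the UI only translates pixels to cells and back, and never mutates game state directly.
    public struct GridPoint
    {
        public readonly int X;
        public readonly int Y;
        public GridPoint(int x, int y) { X = x; Y = y; }
    }

    public class BoardLogic
    {
        private readonly bool[,] occupied;   // true = cell taken by some other piece
        private readonly int width, height;

        public BoardLogic(int width, int height)
        {
            this.width = width;
            this.height = height;
            occupied = new bool[width, height];
        }

        // Called repeatedly while dragging: clamp the requested cell to the nearest legal one.
        public GridPoint SnapToLegalPosition(GridPoint requested, int pieceWidth)
        {
            int x = Math.Max(0, Math.Min(requested.X, width - pieceWidth));
            int y = Math.Max(0, Math.Min(requested.Y, height - 1));
            while (x > 0 && Overlaps(x, y, pieceWidth)) x--;   // back off until the piece fits
            return new GridPoint(x, y);
        }

        // Called when the player releases the piece; the logic layer decides whether the move sticks.
        // (A fuller version would also clear the piece's previous cells.)
        public bool TryPlacePiece(GridPoint target, int pieceWidth)
        {
            if (Overlaps(target.X, target.Y, pieceWidth)) return false;
            for (int i = 0; i < pieceWidth; i++)
                occupied[target.X + i, target.Y] = true;   // record the new occupancy
            return true;
        }

        private bool Overlaps(int x, int y, int pieceWidth)
        {
            for (int i = 0; i < pieceWidth; i++)
                if (occupied[x + i, y]) return true;
            return false;
        }
    }
    ```

    With this shape, the unit tests for snapping and collision live in the logic layer rather than the UI, which is the gap the poster describes.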

    Read the article

  • AI for a mixed Turn Based + Real Time battle system - Something "Gambit like" the right approach?

    - by Jason L.
    This is maybe a question that's been asked 100 times in 1,000 different ways. I apologize for that :) I'm in the process of building the AI for a game I'm working on. The game is a turn-based one, in the vein of Final Fantasy, but it also has a set of things that happen in real time (reactions). I've experimented with FSMs, HFSMs, and Behavior Trees. None of them felt "right" to me and all felt either too limiting or too generic / big.

    The idea I'm toying with now is something like a "rules engine" that could be likened to the Gambit system from Final Fantasy 12. I would have a set of predefined personalities. Each of these personalities would have a set of conditions it would check on each event (turn start, time to react, etc). These conditions would be priority-ordered, and the first one that returns true would be the action I take. These conditions can also point to a "choice" action, which is just an action that will make a choice based on some utility function. Sort of a mix of FSM/HFSM and a Utility Function approach.

    So, a "gambit" with the personality of "Healer" may look something like this:

    - (ON) Ally HP = 0% - Choose "Relife" spell
    - (ON) Ally HP < 50% - Choose Heal spell
    - (ON) Self HP < 65% - Choose Heal spell
    - (ON) Ally Debuff - Choose Debuff Removal spell
    - (ON) Ally Lost Buff - Choose Buff spell

    Likewise, a "gambit" with the personality of "Aggressor" may look like this:

    - (ON) Foe HP < 10% - Choose Attack skill
    - (ON) Foe any - Choose target - Choose Attack skill
    - (ON) Self Lost Buff - Choose Buff spell
    - (ON) Foe HP = 0% - Taunt the player

    What I like about this approach is it makes sense in my head. It also would be extremely easy to build an "AI Editor" with an approach like this. What I'm worried about is: would it be too limiting? Would it maybe get too complicated? Does anyone have any experience with AIs in turn-based games who could maybe provide me some insight into this approach, or suggest a different approach? Many thanks in advance!
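
    For what it's worth, a priority-ordered rule list like the gambits above maps almost one-to-one onto code; the sketch below is a rough C# illustration with invented type names (GambitRule, BattleContext, Personality), not an implementation of the poster's actual game.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical sketch of a Gambit-style rule list: rules are evaluated in priority order
    // on each event (turn start, reaction, ...) and the first matching condition picks the action.
    public class BattleContext
    {
        public double SelfHpPercent;
        public double LowestAllyHpPercent;
        public bool AllyHasDebuff;
        public bool AllyLostBuff;
    }

    public class GambitRule
    {
        public string Name;
        public Func<BattleContext, bool> Condition;   // e.g. "Ally HP < 50%"
        public Action<BattleContext> Action;          // e.g. "Choose Heal spell" (could also be a utility-based chooser)
    }

    public class Personality
    {
        private readonly List<GambitRule> rules;

        public Personality(IEnumerable<GambitRule> orderedRules)
        {
            rules = orderedRules.ToList();
        }

        public void Act(BattleContext context)
        {
            // First rule whose condition holds wins; later rules only run if everything above failed.
            var rule = rules.FirstOrDefault(r => r.Condition(context));
            if (rule != null) rule.Action(context);
        }
    }

    public static class Personalities
    {
        public static Personality Healer()
        {
            return new Personality(new[]
            {
                new GambitRule { Name = "Ally down",      Condition = c => c.LowestAllyHpPercent <= 0, Action = c => Console.WriteLine("Choose \"Relife\" spell") },
                new GambitRule { Name = "Ally hurt",      Condition = c => c.LowestAllyHpPercent < 50, Action = c => Console.WriteLine("Choose Heal spell (ally)") },
                new GambitRule { Name = "Self hurt",      Condition = c => c.SelfHpPercent < 65,       Action = c => Console.WriteLine("Choose Heal spell (self)") },
                new GambitRule { Name = "Ally debuffed",  Condition = c => c.AllyHasDebuff,            Action = c => Console.WriteLine("Choose Debuff Removal spell") },
                new GambitRule { Name = "Ally lost buff", Condition = c => c.AllyLostBuff,             Action = c => Console.WriteLine("Choose Buff spell") },
            });
        }
    }
    ```

    Because each personality is plain data (an ordered list of condition/action pairs), the "AI Editor" mentioned above would only need to serialize and reorder that list.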

    Read the article

  • Hey You! Stop Using the Apply Button and Just Click OK! [Geek Rants]

    - by The Geek
    As a computer geek, I often find myself helping people, and watching them change settings on their PC… and they almost always click the Apply button, and then the OK button. Why?

    Whenever you encounter a dialog box in Windows, there are the standard OK, Cancel, and Apply buttons, but you don't actually have to click the Apply button first. The OK button does the same thing, saves the settings, and then closes the dialog box… saving you an extra click. Don't believe me? Try it out for yourself. Only the worst possible application won't behave that way, and you probably don't want to use that type of application to begin with.

    The only exception to this rule is a multiple-tab dialog box in a badly written application. Sometimes… your settings on one tab won't stick unless you click Apply. Note that in this particular case, you can make changes in any one of the tabs, and they will carry through without having to click Apply, because this dialog window is well written. We're just using the screenshot as an example of a multiple-tab settings interface.

    So now that you know better, you can tell us… do you always use the Apply button first? Have you ever found an instance where it behaves differently?

    Read the article

  • Amazon.com Cutting Off Colorado Affiliates

    - by Joe Mayo
    I received an email from Amazon.com today, essentially cutting off my affiliate status because I'm in Colorado. Colorado recently passed legislation that requires retailers to either collect sales tax for on-line transactions or engage in an onerous process that makes you wish you had collected sales tax.  After I Tweeted this, Mike Jones tweeted a link to the legislation.  Here's an excerpt from Amazon.com's email: "Dear Colorado-based Amazon Associate: We are writing from the Amazon Associates Program to inform you that the Colorado government recently enacted a law to impose sales tax regulations on online retailers. The regulations are burdensome and no other state has similar rules. The new regulations do not require online retailers to collect sales tax. Instead, they are clearly intended to increase the compliance burden to a point where online retailers will be induced to "voluntarily" collect Colorado sales tax -- a course we won't take. We and many others strongly opposed this legislation, known as HB 10-1193, but it was enacted anyway. Regrettably, as a result of the new law, we have decided to stop advertising through Associates based in Colorado. We plan to continue to sell to Colorado residents, however, and will advertise through other channels, including through Associates based in other states. There is a right way for Colorado to pursue its revenue goals, but this new law is a wrong way. As we repeatedly communicated to Colorado legislators, including those who sponsored and supported the new law, we are not opposed to collecting sales tax within a constitutionally-permissible system applied even-handedly. The US Supreme Court has defined what would be constitutional, and if Colorado would repeal the current law or follow the constitutional approach to collection, we would welcome the opportunity to reinstate Colorado-based Associates. You may express your views of Colorado's new law to members of the General Assembly and to Governor Ritter, who signed the bill. Your Associates account has been closed as of March 8, 2010, and we will no longer pay advertising fees for customers you refer to Amazon.com after that date. Please be assured that all qualifying advertising fees earned prior to March 8, 2010, will be processed and paid in accordance with our regular payment schedule. Based on your account closure date of March 8, any final payments will be paid by May 31, 2010. We have enjoyed working with you and other Colorado-based participants in the Amazon Associates Program, and wish you all the best in your future.   Best Regards,   The Amazon Associates Team"

    Read the article

  • Adding Attributes to Generated Classes

    ASP.NET MVC 2 adds support for data annotations, implemented via attributes on your model classes. Depending on your design, you may be using an OR/M tool like Entity Framework or LINQ-to-SQL to generate your entity classes, and you may further be using these entities directly as your Model. This is fairly common, and alleviates the need to do mapping between POCO domain objects and such entities (though there are certainly pros and cons to using such entities directly).

    As an example, the current version of the NerdDinner application (available on CodePlex at nerddinner.codeplex.com) uses Entity Framework for its model. Thus, there is a NerdDinner.edmx file in the project, and a generated NerdDinner.Models.Dinner class. Fortunately, these generated classes are marked as partial, so you can extend their behavior via your own partial class in a separate file. However, if for instance the generated Dinner class has a property Title of type string, you can't then add your own Title of type string for the purpose of adding data annotations to it, like this:

        public partial class Dinner
        {
            [Required]
            public string Title { get; set; }
        }

    This will result in a compilation error, because the generated Dinner class already contains a definition of Title. How then can we add attributes to this generated code? Do we need to go into the T4 template and add a special case that says if we're generating a Dinner class and it has a Title property, add this attribute? Ick.

    MetadataType to the Rescue

    The MetadataType attribute can be used to define a type which contains attributes (metadata) for a given class. It is applied to the class you want to add metadata to (Dinner), and it refers to a totally separate class to which you're free to add whatever methods and properties you like. Using this attribute, our partial Dinner class might look like this:

        [MetadataType(typeof(Dinner_Validation))]
        public partial class Dinner {}

        public class Dinner_Validation
        {
            [Required]
            public string Title { get; set; }
        }

    In this case the Dinner_Validation class is public, but if you were concerned about muddying your API with such classes, it could instead have been created as a private class within Dinner. Having the validation attributes specified in their own class (with no other responsibilities) complies with the Single Responsibility Principle and makes it easy for you to test that the validation rules you expect are in place via these annotations/attributes.

    Thanks to Julie Lerman for her help with this. Right after she showed me how to do this, I realized it was also already being done in the project I was working on.
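
    One caveat worth noting: MVC's default model binder honors the buddy class automatically, but plain calls to Validator do not. If you ever need the same [Required] rules to fire outside MVC, the usual approach is to register the association explicitly; the sketch below uses the standard System.ComponentModel.DataAnnotations APIs, with the Dinner/Dinner_Validation pair stubbed out here rather than taken from the NerdDinner code.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;

    public partial class Dinner
    {
        public string Title { get; set; }   // stand-in for the generated EF property
    }

    [MetadataType(typeof(Dinner_Validation))]
    public partial class Dinner { }

    public class Dinner_Validation
    {
        [Required]
        public string Title { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // Tell the type-descriptor system that Dinner's metadata lives in Dinner_Validation.
            TypeDescriptor.AddProviderTransparent(
                new AssociatedMetadataTypeTypeDescriptionProvider(typeof(Dinner), typeof(Dinner_Validation)),
                typeof(Dinner));

            var dinner = new Dinner();   // Title left null, so [Required] should fail
            var results = new List<ValidationResult>();
            bool isValid = Validator.TryValidateObject(
                dinner, new ValidationContext(dinner, null, null), results, true);

            Console.WriteLine(isValid);                           // False
            foreach (var r in results) Console.WriteLine(r.ErrorMessage);
        }
    }
    ```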

    Read the article

  • Best pathfinding for a 2D world made by CPU Perlin Noise, with random start- and destinationpoints?

    - by Mathias Lykkegaard Lorenzen
    I have a world made by Perlin Noise. It's created on the CPU for consistency between several devices (yes, I know it takes time - I have my techniques that make it fast enough). Now, in my game you play as a fighter-ship-thingy-blob or whatever it's going to be. What matters is that this "thing" that you play as is placed in the middle of the screen, and moves along with the camera. The white stuff in my world is walls. The black stuff is freely movable.

    Now, as the player moves around he will constantly see "monsters" spawning around him in a circle (a circle that's larger than the screen though). These monsters move inwards and try to collide with the player. This is the part that's tricky. I want these monsters to constantly spawn, moving towards the player, but avoid walls entirely. I've added a screenshot below that kind of makes it easier to understand (excuse me for my bad drawing - I was using Paint for this). In the image above, the following rules apply:

    - The red dot in the middle is the player itself.
    - The light-green rectangle is the boundaries of the screen (in other words, what the player sees). These boundaries move with the player.
    - The blue circle is the spawning circle. At the circumference of this circle, monsters will spawn constantly. This spawn circle moves with the player and the boundaries of the screen.
    - Each monster spawned (shown as yellow triangles) wants to collide with the player.
    - The pink lines show the path that I want the monsters to move along (or something similar). What matters is that they reach the player without colliding with the walls.

    The map itself (the one that is Perlin Noise generated on the CPU) is saved in memory as a two-dimensional bit-array. A 1 means a wall, and a 0 means an open walkable space. The current tile size is pretty small. I could easily make it a lot larger for increased performance. I've done some pathfinding algorithms before, such as A*. I don't think that's entirely optimal here though.
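
    Since the map is already a bit array and every monster shares the same destination (the player), one cheap alternative to per-monster A* is a single breadth-first search outward from the player's cell, producing a flow field that every monster can follow. The C# sketch below is only a rough illustration of that idea; the bool[,] wall representation and all names are my own assumptions.

    ```csharp
    using System.Collections.Generic;

    // Hypothetical sketch: one BFS from the player's cell fills a "next step" table for every
    // reachable cell, so each monster just looks up the neighbouring cell that is one step
    // closer to the player instead of running its own path search.
    public static class FlowField
    {
        private static readonly int[] dx = { 1, -1, 0, 0 };
        private static readonly int[] dy = { 0, 0, 1, -1 };

        // walls[x, y] == true means the cell is a wall. Cells the search never reaches keep the
        // default value, so real code should also keep the 'visited' array around to detect them.
        public static (int x, int y)[,] Build(bool[,] walls, int playerX, int playerY)
        {
            int w = walls.GetLength(0), h = walls.GetLength(1);
            var next = new (int x, int y)[w, h];
            var visited = new bool[w, h];

            var queue = new Queue<(int x, int y)>();
            queue.Enqueue((playerX, playerY));
            visited[playerX, playerY] = true;
            next[playerX, playerY] = (playerX, playerY);   // the player's own cell points at itself

            while (queue.Count > 0)
            {
                var (cx, cy) = queue.Dequeue();
                for (int i = 0; i < 4; i++)
                {
                    int nx = cx + dx[i], ny = cy + dy[i];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (visited[nx, ny] || walls[nx, ny]) continue;
                    visited[nx, ny] = true;
                    next[nx, ny] = (cx, cy);   // from (nx, ny), stepping to (cx, cy) moves toward the player
                    queue.Enqueue((nx, ny));
                }
            }
            return next;
        }
    }
    ```

    Rebuilding the field once per frame (or every few frames) is usually cheap enough on a small grid, and it naturally routes monsters around the Perlin-noise walls.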

    Read the article

  • Achieving Zero Downtime Deployment

    - by MattW
    I am trying to achieve zero downtime deployments so I can deploy less during off hours and more during "slower" hours - or anytime, in theory.

    My current setup, somewhat simplified:

    - Web Server A (.NET App)
    - Web Server B (.NET App)
    - Database Server (SQL Server)

    My current deployment process:

    1. "Stop" the sites on both Web Server A and B
    2. Upgrade the database schema for the version of the app being deployed
    3. Update Web Server A
    4. Update Web Server B
    5. Bring everything back online

    Current Problem

    This leads to a small amount of downtime each month - about 30 mins. I do this during off hours, so it isn't a huge problem - but it is something I'd like to get away from. Also - there is no way to really go 'back'. I don't generally make rollback DB scripts - only upgrade scripts.

    Leveraging The Load Balancer

    I'd love to be able to upgrade one Web Server at a time. Take Web Server A out of the load balancer, upgrade it, put it back online, then repeat for Web Server B. The problem is the database. Each version of my software will need to execute against a different version of the database - so I am sort of "stuck".

    Possible Solution

    A current solution I am considering is adopting the following rules:

    - Never delete a database table.
    - Never delete a database column.
    - Never rename a database column.
    - Never reorder a column.
    - Every stored procedure must be versioned. Meaning - 'spFindAllThings' will become 'spFindAllThings_2' when it is edited. Then it becomes 'spFindAllThings_3' when edited again. The same rule applies to views.

    While this seems a bit extreme - I think it solves the problem. Each version of the application will be hitting the DB in a non-breaking way. The code expects certain results from the views/stored procedures - and this keeps that 'contract' valid. The problem is - it just seems sloppy. I know I can clean up old stored procedures after the app has been deployed for a while, but it just feels dirty. Also - it depends on all of the developers following these rules, which will mostly happen, but I imagine someone will make a mistake.

    Finally - My Question

    Is this sloppy or hacky? Is anybody else doing it this way? How are other people solving this problem?
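
    To make the versioned-procedure "contract" concrete, here is roughly what it looks like from the application side: each release pins itself to the procedure versions it was built against, so the old and new web servers can share one database during a rolling upgrade. The repository class and procedure names in this C# sketch are invented for illustration.

    ```csharp
    using System.Data;
    using System.Data.SqlClient;

    // Hypothetical sketch: release 2 of the app hard-codes the "_2" procedures it was tested
    // against; release 3, rolled out to the other web server, would reference spFindAllThings_3
    // while this code keeps working untouched against the same database.
    public class ThingRepository
    {
        private const string FindAllThingsProc = "dbo.spFindAllThings_2";

        private readonly string connectionString;

        public ThingRepository(string connectionString)
        {
            this.connectionString = connectionString;
        }

        public DataTable FindAllThings()
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(FindAllThingsProc, connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                var table = new DataTable();
                new SqlDataAdapter(command).Fill(table);   // Fill opens and closes the connection itself
                return table;
            }
        }
    }
    ```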

    Read the article

  • Should I be put off a junior role that uses an online development test?

    - by Ninefingers
    I've applied for a junior development role, or rather been found by a recruiter looking for a developer. In order to get to the telephone interview stage I've been asked to sit one of those online coding assessments. This wasn't quite what I expected. I consider myself a fairly good developer for my age and experience, but I've no illusions about being Don Knuth or anything. The test was a series of incredibly obtuse questions asking about the results of various obscure evaluations. About 30 minutes in I was thinking to myself I hadn't intended to enter an obfuscated code contest/code golf exercise.

    After my last telephone interview I was asked to build something. I did. That seemed fair. "Go away and work this out" is more my in-office experience of programming than "please evaluate this combination of lambdas, filters, maps, lists, tuples etc". So I'm a little put off, to be honest. I never claimed to know the language inside out or all the little corner cases.

    My questions, then:

    - Should I be put off? Why? Why not?
    - Are these kinds of tests what I should be expecting for junior roles?
    - Should I learn stuff exam style? That seems to be the objective of these tests, for which you are timed and not supposed to use references or books.

    Normally, in the course of development I have a fairly good idea of basic types, rules, flow control and whatever. Occasionally I'll come up on something I need to use a regex for and have to go and remind myself of the exact piece of syntax I need if trying what I think should work doesn't. Or I'll come up against a module I've not used before and go and look it up. For example, if I wanted to write a server using sockets in C right now, I'd probably check the last piece of code I wrote doing that (and/or the various books I have) and work from there. Chances are I probably couldn't do it exactly from scratch and from memory, although I can tell you you'd need a socket(), bind(), listen() and accept() call, and you might also want select() depending on whether you intend to pthread_create or not. So I know what the calls are, but not their specific parameter lists.

    What are your experiences if you are a recruiting manager? Are you after programmers who can quote you the API, or do you not mind if your programmers have a few books on their desk and google function calls every so often?

    Read the article

  • My Package Version Number Appears Greater Yet apt-get Doesn't Select It

    - by nutznboltz
    Backstory: It was determined that, when using lxc container VMs, the Nagios nrpe shutdown script, when run on the host of the containers, would kill the nrpe processes inside the containers. This was remediated by changing the script to use pidfiles instead of searching the process table for the nrpe process.

    Regrettably, start-stop-daemon is a C program that resulted from translating a Perl script, and it shows. There are far too many global variables in start-stop-daemon.c and, although there are some nice blocks of comments, there are far too few comments that explain the intent behind variable names such as "schedule" (the string "schedule" appears in many contexts). The manual page for start-stop-daemon strongly suggests that unless you use the "--retry" option the start-stop-daemon program may return before the process it sent a signal to actually calls exit() and terminates; however, it doesn't actually state this in plain English. The obtuseness of start-stop-daemon is most likely the reason that the "fixed" version of the script includes a dubious comment indicating that sometimes the pid file has not been removed. I can easily see why someone would not realize that the --retry option was left out. This bug also causes failures when the script is given the "restart" option; the nrpe daemon will shut down but not start up again. Did I mention that since applying the update our nrpe servers started crashing over and over? Repairing this is why I am doing this work. I have been working on remediating the fix. You can see my current work in this PPA.

    Actual Question:

    The upstream version number of nagios-nrpe-server in lucid-updates is 2.12-4ubuntu1.10.04.1. My PPA uses the version number 2.12-4ubuntu1.10.04.1.1~ppa1~lucid1. I checked the rules here and used this test program, and I am led to believe that the version number I use in my PPA is greater than the one in lucid-updates, yet when I ran:

        sudo add-apt-repository ppa:nutznboltz/nrpe-unbreak-lp-600941
        sudo apt-get update
        sudo aptitude dist-upgrade

    the replacement package was not installed. I was able to install it using:

        sudo aptitude install nagios-nrpe-server=2.12-4ubuntu1.10.04.1.1~ppa1~lucid1

    Can anyone explain this behavior? Why didn't my version number appear greater to "aptitude dist-upgrade"? Thanks.

        $ cat /etc/apt/preferences
        Package: *
        Pin: release a=lucid-backports
        Pin-Priority: 400

        Package: *
        Pin: release a=lucid-security
        Pin-Priority: 990

        Package: *
        Pin: release a=lucid-updates
        Pin-Priority: 900

        Package: *
        Pin: release a=lucid-proposed
        Pin-Priority: 400

        $ ls /etc/apt/preferences.d/
        $

    This should not make any difference, as a PPA cannot be in any of those pockets. I went ahead and bumped the version number in the PPA to 2.12-4ubuntu1.10.04.2~ppa1~lucid1; I'll see if that makes a difference. I do notice that lintian complains:

        W: nagios-nrpe-server: debian-revision-not-well-formed 2.12-4ubuntu1.10.04.2~ppa1~lucid1

    Read the article

  • Oracle ERP Cloud Solution Defines Revenue Recognition Software Market

    - by Steve Dalton
    Revenue is a fundamental yardstick of a company's performance, and one of the most important metrics for investors in the capital markets. So it's no surprise that the accounting standard boards have devoted significant resources to this topic, with a key goal of ensuring that companies use a consistent method of recognizing revenue. Due to the myriad of revenue-generating transactions, and the divergent ways organizations recognize revenue today, the IFRS and FASB have been working for 12 years on a common set of accounting standards that apply to all industries in virtually all countries. Through their joint efforts, on May 28, 2014 the FASB and IFRS released the IFRS 15 / ASU 2014-9 (Revenue from Contracts with Customers) converged accounting standard.

    This standard applies to revenue in all public companies, but heavily impacts organizations in any industry that might have complex sales contracts with multiple distinct deliverables (obligations). For example, an auto dealer who bundles free service with the sale of a car can only recognize the service revenue once the owner of the car brings it in for work. Similarly, high-tech companies that bundle software licenses, consulting, and support services on a sales contract will recognize bundled service revenue once the services are delivered. Now all companies need to review their revenue for hidden bundling and implicit obligations.

    Numerous time-consuming and judgmental activities must be performed to properly recognize revenue for complex sales contracts. To illustrate, after the contract is identified, organizations must identify and examine the distinct deliverables, determine the estimated selling price (ESP) for each deliverable, then allocate the total contract price to each deliverable based on the ESPs. In terms of accounting, organizations must determine whether the goods or services have been delivered or performed to the customer's satisfaction, then either book revenue in the current period or record a liability for the obligation if revenue will be recognized in a future accounting period.

    Oracle Revenue Management Cloud was architected and developed so organizations can simplify and streamline revenue recognition. Among other capabilities, the solution uses business rules to efficiently identify and examine contracts, intelligently calculate and allocate deliverable prices based on prescribed inputs, and accurately recognize revenue for each deliverable based on customer satisfaction. "Oracle works very closely with our customers, the Big 4 accounting firms, and the accounting standard boards to deliver an adaptive, comprehensive, new generation revenue recognition solution," said Rondy Ng, Senior Vice President, Applications Development. "With the recently announced IFRS 15 / ASU 2014-9, Oracle is ready to support customer adoption of the new standard with our Revenue Management Cloud," said Rondy.

    Oracle Revenue Management Cloud, an integral part of Oracle Financials Cloud, helps organizations comply with accounting standards, provides them with confidence that reported revenue is materially accurate, and simplifies the accounting process for revenue recognition. Stay tuned to this blog for regular updates on Oracle Revenue Management Cloud. We also invite you to review our new oracle.com ERP pages @ oracle.com/erp. We will be updating these pages very soon with more information about Oracle Revenue Management Cloud.

    Read the article

  • Creating metadata value relationships

    - by kyle.hatlestad
    I was recently asked a question about an interesting use case. They wanted content to be submitted into UCM with a particular ID in a custom metadata field, but they wanted that ID to be translated into an employee name in another metadata field upon submission.

    My initial thought was that this could be done with a dependent choice list (DCL) - one option list field driving the choices in another. But this didn't work in this case for a couple of reasons. First, the number of IDs could potentially be very large, so making that into a drop-down list would not be practical. The preference would be for that field to simply be a text field to type in the ID. Secondly, data could be submitted through different methods other than the web-based check-in form. And without an interface to select the DCL choices, the system needed a way to determine and populate the name field.

    So instead I took the approach of having the value of the ID field drive the value of the Name field using the derived field approach in my rule. In looking at it though, it was easy to simply copy the value of the ID field into the Name field... but to have it look up and translate the value proved to be the tricky part. So here is the approach I took...

    1. First I create my two metadata fields as standard text fields in the Configuration Manager applet.
    2. Next I create a table that stores the relationship between the IDs and Names.
    3. I then create a View into that table and set the column to the EmployeeID.
    4. I now create a new Application Field and set it as an option list using the View I created in the previous step. The reason I create it as an Application field is because I don't need to display the field or store a value in it; I simply need to make use of the option list in the next step...
    5. Finally, I create a Rule in which I select the Employee Name field and turn on the 'Is derived field' checkbox. I edit the derived value and add a new condition. Because the option list is an Application field and not an Information field, I can't use the Compute button. Instead, I insert this line directly in the Value field:

        @getFieldViewValue("EmployeeMapping",#active.xEmployeeID, "EmployeeName")

    The "EmployeeMapping" parameter designates that the value should be pulled from the EmployeeMapping Application field that I had created in the previous step. The #active.xEmployeeID field is the ID value that should be pulled from what the user entered. "EmployeeName" is the column name in the table which has the value which corresponds to the ID. The extracted name then becomes the value within our Employee Name field.

    That's it. You can then add additional Rules to make the Name field read-only/hidden on the check-in page and such.

    Read the article

  • Bad Data is Really the Monster

    - by Dain C. Hansen
    "Bad Data is Really the Monster" is an article written by Bikram Sinha, from whom I borrowed the title and the inspiration for this blog. Sinha writes: "Bad or missing data makes application systems fail when they process order-level data. One of the key items in the supply-chain industry is the product (aka SKU). Therefore, it becomes the most important data element to tie up multiple merchandising processes including purchase order allocation, stock movement, shipping notifications, and inventory details… Bad data can cause huge operational failures and cost millions of dollars in terms of time, resources, and money to clean up and validate data across multiple participating systems."

    Yes, bad data really is the monster, so what do we do about it? Close our eyes and hope it stays in the closet? We've tackled this problem for some years now at Oracle, and our latest introduction of Oracle Enterprise Data Quality, along with our integrated Oracle Master Data Management products, provides a complete, best-in-class answer to the bad data monster.

    What's unique about it? Oracle Enterprise Data Quality combines powerful data profiling, cleansing, matching, and monitoring capabilities while offering unparalleled ease of use. What makes it unique is that it has dedicated capabilities to address the distinct challenges of both customer and product data quality – [different monsters have different needs of course!]. And the ability to profile data is just as important to identify and measure poor quality data and identify new rules and requirements. Included are semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. Finally, all of the data quality components are integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM.

    Want to learn more? On Tuesday, Nov 15th, I invite you to listen to our webcast, Reduce ERP consolidation risks with Oracle Master Data Management. I'll be joined by our partner iGate Patni and will be talking about one specific way to deal with the bad data monster, specifically around ERP consolidation. Look forward to seeing you there!

    Read the article

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of Wordpress on SQL Server and IIS, first of all you need to do the following steps:

    - Create a database on SQL Server that will be used by Wordpress
    - Create a login that can access the just-created database and put the user into the ddladmin, db_datareader, and db_datawriter roles
    - Download and unpack the Wordpress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice
    - Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website

    Now that the basic actions have been done, you can start to set up and configure your Wordpress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to:

    - Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins. This should be parallel to your regular plugins directory. If the mu-plugins directory does not exist, you must create it.
    - Put the db.php file from inside the wp-db-abstraction directory to wp-content/db.php

    Now you can create an application pool in IIS like the following one. Create a website, using the above application pool, that points to the folder where you unpacked the Wordpress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis

    Now you're ready to go. Point your browser to the configured website and the Wordpress installation screen will be there for you. When you're requested to enter information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the one used by MySQL, and here you can enter the configuration information needed to connect to SQL Server. After having finished the installation steps, you should be able to access and navigate your Wordpress site.

    A final touch, and it's done: just add the needed rewrite rules (http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite) and that's it!

    Well. Not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed:

    - Backslash Fix: http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage
    - Select Top 0 Fix: make the change to the file ".\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php" suggested by "debettap": http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061

    And now you have a 100% working Wordpress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full Text Search, I've created a very simple Wordpress plugin to set up full-text search and to use it as the website search engine: http://wpfts.codeplex.com/

    Enjoy!

    Read the article

  • Oracle Utilities Application Framework V4.2.0.0.0 Released

    - by ACShorten
    The Oracle Utilities Application Framework V4.2.0.0.0 has been released with Oracle Utilities Customer Care And Billing V2.4. This release includes new functionality and updates to existing functionality and will be progressively released across the Oracle Utilities applications. The release is quite substantial with lots of new and exciting changes. The release notes shipped with the product include a summary of the changes implemented in V4.2.0.0.0. They include the following:

    - Configuration Migration Assistant (CMA) - A new data management capability to allow you to export and import Configuration Data from one environment to another, with support for Approval/Rejection of individual changes.
    - Database Connection Tagging - Additional tags have been added to the database connection to allow database administrators, Oracle Enterprise Manager and other Oracle technology the ability to monitor and use individual database connection information.
    - Native Support for Oracle WebLogic - In the past the Oracle Utilities Application Framework used Oracle WebLogic in embedded mode, and now, to support advanced configuration and the ExaLogic platform, we are adding Native Support for Oracle WebLogic as a configuration option.
    - Native Web Services Support - In the past the Oracle Utilities Application Framework supplied a servlet to handle Web Services calls, and now we offer an alternative to use the native Web Services capability of Oracle WebLogic. This allows for enhanced clustering, a greater level of Web Service standards support, enhanced security options and the ability to use the Web Services management capabilities in Oracle WebLogic to implement higher levels of management, including defining additional security rules to control access to individual Web Services.
    - XML Data Type Support - Oracle Utilities Application Framework now allows implementors to define XML Data types used in Oracle in the definition of custom objects to take advantage of XQuery and other XML features.
    - Fuzzy Operator Support - Oracle Utilities Application Framework supports the use of the fuzzy operator in conjunction with Oracle Text to take advantage of the fuzzy searching capabilities within the database.
    - Global Batch View - A new JMX based API has been implemented to allow JSR120 compliant consoles the ability to view batch execution across all threadpools in the Coherence based Named Cache Cluster.
    - Portal Personalization - It is now possible to store the runtime customizations of query zones, such as preferred sorting, field order and filters, to reuse as personal preferences each time that zone is used.

    These are just the major changes and there are quite a few more that have been delivered (and more to come in the service packs!!). Over the next few weeks we will be publishing new whitepapers and new entries in this blog outlining new facilities that you want to take advantage of.

    Read the article

  • MS Tech Ed 2011 Coming Soon

    - by sonam
    Microsoft Tech Ed 2010 was a great success. In fact, most such conferences provide a great place to meet other technology enthusiasts and, of course, to hear what's in the pipeline for a company's or a field's future products. And yet again, MS Tech Ed India is coming on 23-25 March in Bangalore, India. The place is, of course, well suited for any IT/Computing conference. After all, it's the Silicon Valley of India.

    From last year, I remember a session by Harish about "Building pure client side apps with jQuery and Microsoft Ajax." Here's the video: http://live.viasilverlight.com/TechEdOnDemand/Breakouts/TheWebSimplified1/Session4/AjaxClientSideApps.wmv

    That was when I got to know that jQuery is so easy to use for Ajax or client-side templating. I prefer jQuery over Microsoft Ajax many folds, though. UpdatePanel is dead for sure, in my view. I believe Web Forms will be dead sooner or later, with ASP.NET MVC gaining share many folds. (TODO: learn MVC.) The new standard is surely: jQuery.

    By the way, last year's videos and PPTs are available to browse and download: http://microsoftteched.in/2010/downloads.aspx

    After going through the Tech Ed 2011 session agenda (http://www.microsoft.com/india/teched2011/agenda.aspx), a few of my personal choices to watch would be:

    Day 1:
    a) Identity And Access Control in the Cloud
    b) Windows 7 at Home: Digitizing your Home (sounds cool)
    c) And of course, jQuery and MS Ajax (let's see if MS can do something that's not already happening with their version of Ajax)

    Day 2:
    a) Lap Around Silverlight 5 and HTML 5, as I have heard some hot talk that HTML 5 will kill Silverlight (I don't see it in the near future though)
    b) HTML 5 more than "HTML 5"… Google will be seeing this one.

    Day 3:
    a) Cross-Browser applications in Azure
    b) VS 2010 sessions on automated testing of Azure apps, etc.

    Windows Phone 7 sessions will surely be of more interest now after the MS-Nokia deal. Though, personally, I would want at least some sessions on Microsoft's future in Robotics and AI. Perhaps I am looking at the wrong place. (When is PDC?) And since Bill Gates considers Robotics the next big thing (refer to this one: http://www.cs.virginia.edu/robins/A_Robot_in_Every_Home.pdf), I am sure they won't lose this new hot spot to competitors, like how Google rules Online Search now. Robotics and AI will surely provide a big battlefield for the future. See what IBM is doing with IBM Watson. Or see this: http://www.sciencedaily.com/releases/2011/02/110218083711.htm - this is cool only if you can control your mind. At least, I'll prefer regular driving (I would rather devote my mind to the people and places we see on the road); that's what makes a journey "cool". :P

    Read the article

  • Is ASP.NET MVC completely (and exclusively) based on conventions?

    - by Mike Valeriano
    TL;DR: Is there a "Hello World!" ASP.NET MVC tutorial out there that doesn't rely on conventions and "stock" projects? Is it even possible to take advantage of the technology without reusing the default file structure, and to start from a single "hello_world.asp" file or something (like in PHP)? Or am I completely mistaken and should I be looking somewhere else, maybe this? I'm interested in the MVC framework, not Web Forms.
    Background: I've played a bit with PHP in the past, just for fun, and now I'm back to it since web development has become relevant for me once again. I'm no professional, but I try to gain as much knowledge of, and control over, the technology I'm working with as possible. I'm using Visual Studio 2012 for C# - my "desktop" language of choice - and since I got the Professional Edition from Dreamspark, the web development tools are available, including ASP.NET MVC 4. I won't touch Web Forms, but the MVC framework got my attention because the MVC pattern is something I can really relate to: it provides the control I want but... not quite.
    Learning PHP was easy - right from the start I could just create a "hello_world.php" file and do something like this for immediate results:
    <!-- file: hello_world.php -->
    <?php echo "Hello World!"; ?>
    But I couldn't find a single ASP.NET (MVC) tutorial out there (I'll be sure to buy one of the upcoming MVC 4 books, only a month or so away) that starts like that. They all start with a sample project, building up knowledge from the basics and heavily using conventions as they go along. Which is fine, I suppose, but it's not the best way for me to learn things. Even the "Empty" project template for a new ASP.NET MVC 4 application in VS2012 is not empty at all: several files and folders are created for you - much like a new C# desktop application project, except that with C# I can in fact start from scratch and create the project structure myself. That is not the case with PHP, where:
    - I can choose from a plethora of different MVC frameworks;
    - I can just create my own framework;
    - I can just skip frameworks altogether, and toss random PHP along with my HTML into a single file and make it work.
    I understand the framework needs to establish some rules, but what if I just want to create a single-page website with some C# logic behind it? Do I really need to create a whole bloat of files and folders for the sake of convention? Also, please understand that I haven't gotten far in any of those tutorials, mainly for this reason, but if that's the only way to do it, I'll go for it using one of the books I've mentioned before. This is my first contact with ASP.NET, but from the few comparisons I've read, I believe I should stay the hell away from Web Forms. Thank you. (Please forgive the broken English - it is not my primary language.)

    Read the article

  • SQL – Contest to Get The Date – Win USD 50 Amazon Gift Cards and Cool Gift

    - by Pinal Dave
    If you are a regular reader of this blog, you will have no trouble at all resolving this puzzle. This contest is based on my experience with NuoDB. If you are not familiar with NuoDB, here are a few pointers for you:
    - Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
    - Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
    - Quick Start with Explorer Sections of NuoDB – Query NuoDB Database
    In today's contest you have to answer the following questions:
    Q 1: Precision of NOW() - What is the precision of NuoDB's NOW() function, which returns the current date and time? Hint: Run the following script in the NuoDB Console Explorer section:
    SELECT NOW() AS CurrentTime FROM dual;
    Here is the image. I have masked the area where the time precision is displayed.
    Q 2: Executing Date and Time Script - When I execute the following script:
    SELECT 'today' AS Today, 'tomorrow' AS Tomorrow, 'yesterday' AS Yesterday FROM dual;
    I will get the following result. Now, what will be the answer when we execute the following script, and WHY?
    SELECT CAST('today' AS DATE) AS Today, CAST('tomorrow' AS DATE) AS Tomorrow, CAST('yesterday' AS DATE) AS Yesterday FROM dual;
    HINT: Install NuoDB (it takes 90 seconds).
    Prizes: 2 Amazon Gift Cards; 2 Limited Edition Hoodies (US residents only).
    Rules:
    - Please leave an answer in the comments section below.
    - You must answer both questions together in a single comment.
    - US residents who want to qualify to win NuoDB apparel, please mention your country in the comment.
    - You can resubmit your answer multiple times; the latest entry will be considered valid.
    - The last day to participate in the puzzle is June 24, 2013.
    - All valid answers will be kept hidden till June 24, 2013.
    - The winners will be announced on June 25, 2013.
    - Two winners will each get a USD 25 Amazon Gift Card (total value = 25 x 2 = 50 USD).
    - The winners will be selected using a random algorithm from all the valid answers.
    - Anybody with a valid email address can take part in the contest.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Why no more macro languages?

    - by Muhammad Alkarouri
    In this answer to a previous question of mine about the suitability of scripting languages as shells, DigitalRoss identifies the difference between macro languages and "parsed typed" languages - in terms of how they treat strings - as the main reason that scripting languages are not suitable for shell purposes. Macro languages include nroff and m4, for example. What design decisions (or compromises) are needed to create a macro programming language? And why are most mainstream languages parsed rather than macro-based? This very similar question (and the accepted answer) covers fairly well why parsed, typed languages - take C for example - suffer from the use of macros. I believe my question here covers different ground: macro languages, or those working on a textual level, are not wholly failures. Arguably they include bash, Tcl and other shell languages, and they work well in a specific niche such as shells, as explained in my links above. Even m4 had a fairly long run of success, and some web template languages can be regarded as macro languages. It is quite possible that macros and parsed typing simply do not go well together, and that is why macros "break" common languages. A macro like #define TWO 1+1 (from the answer to the linked question) would, in a true macro language, have been covered by the common rules of that language rather than conflicting with those of a host language, and issues like "macros are not typed" and "code doesn't compile" are not relevant in a language designed to be untyped and interpreted, with little concern for efficiency.
    The question about the design decisions needed to create a macro language pertains to a hobby project I am currently working on: designing a new shell. Taking the previous question into account should clarify the difference between adding macros to a parsed language and my objective. I hope the clarification shows that the linked question doesn't cover this one, which has two parts:
    - If I want to create a macro language (for a shell or a web template engine, for example), what limitations and compromises (and guidelines, if they exist) need to be made? (Probably answerable by a link or reference.)
    - Why have macro languages not succeeded in becoming mainstream except in particular niches? What makes typed languages successful in large-scale programming, while "stringly-typed" languages succeed in shells and one-liner-like environments?

    Read the article
