Search Results

Search found 7722 results on 309 pages for 'pitfalls to avoid'.


  • Where to draw the line between development-led security and administration-led security?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections in a piece of software, even though they could very well be managed at an environmental level (i.e., the operating system would take care of them). Where would you say you draw the line, and what elements do you factor into your decision?

    Concrete Examples

    User Management is the OS's responsibility: Not exactly meant as a security feature, but in a similar case, Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.

    Disabling Web-Form Fields: A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and was a welcome feature at the time it was introduced for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. This has now been introduced into the upcoming HTML5 standard. For browsers that do not honor this attribute, strange hacks* are offered, like generating unique IDs and names for fields to prevent them from being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences).

    In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with this feature disabled. But then I'd think that's what security zones are for (in some browsers), or the sign that you need a more secure (and dedicated) environment / account to use these applications.

    * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.

    Questions

    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?

    Read the article

  • How one decision can turn web services to hell

    - by DigiMortal
    In this posting I will show you how one stupid decision can turn developers' lives into hell. There is a project where a bunch of complex applications exchange data frequently and it is very hard to change anything without additional expense. Well, one analyst thought that strings are the silver bullet of web services. Read what happened.

    Bad, bad mistake

    In the early stages of the integration project there was an analyst who also established the architecture and technical design for the web services. There was one very bad mistake this analyst made: all data must be converted to strings before exchange! Yes, that's correct, this was the requirement. All integers, decimals and dates are coming in and going out as strings. There was also an explanation for this requirement: this way we can avoid data type conversion errors! Well, this guy works somewhere else already and I hope he works in some burger restaurant – far away from computers.

    Consequences

    If you first look at this requirement it may seem like a little annoying piece of crap you can easily survive. But let's see the real consequences one stupid decision can cause: a huge amount of data conversion is done by the receiving applications and SSIS packages; the SSIS packages are not error-proof and they depend heavily on the strings they get from different services; there is more than one format per type in use across the different services; for larger amounts of data all these conversion tasks slow down the integration packages; and practically all developers have been in a hurry with some SSIS import tasks, so some fields that are not used in calculations in the SSAS cube are imported without data conversion (for example, some prices are strings in the format "1.021 $").

    The most painful problem for developers is the data conversion part, because they don't expect such a stupid requirement to be in place and therefore they are not able to estimate the time their tasks on these web services will take. Developers must also be prepared for cases when suddenly some service sends data that is not in an acceptable format, and they must solve the problem ASAP. This puts an unexpected load on developers and they are not very happy with it, because they can't understand why they have to live with this horror if it is possible to fix.

    What to do if you see something like this?

    Well, explain the problem to the customer and demand special tasks in the project schedule to get this mess solved before going on with new development. It is cheaper to solve the problems now than later.
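
    To make the pain concrete, here is a minimal C# sketch (the field values and formats below are hypothetical, not taken from the project described above) of the defensive parsing every receiving application ends up writing when all values travel as strings. A single value such as "1.021 $" can mean different things depending on the culture and the stray currency symbol, which is exactly the class of bug the "no conversion errors" requirement was supposed to prevent.

        using System;
        using System.Globalization;

        class StringlyTypedFeed
        {
            // Hypothetical payload: every field arrives as a string, as required.
            static readonly string[] rawPrices = { "1.021 $", "1,021.00", "1021" };

            static void Main()
            {
                foreach (var raw in rawPrices)
                {
                    // Strip the currency symbol the sender sometimes appends.
                    var cleaned = raw.Replace("$", "").Trim();

                    // Parse with an explicit culture; relying on the server's current
                    // culture is where "1.021" silently becomes 1021 or 1.021.
                    if (decimal.TryParse(cleaned, NumberStyles.Number,
                                         CultureInfo.InvariantCulture, out var price))
                    {
                        Console.WriteLine($"'{raw}' -> {price}");
                    }
                    else
                    {
                        // This branch is the ASAP firefighting the post describes.
                        Console.WriteLine($"'{raw}' could not be parsed as a decimal.");
                    }
                }
            }
        }

    Multiply this by every numeric and date field in every package and the cost of the "strings everywhere" decision becomes obvious.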

    Read the article

  • The Red Gate Guide to SQL Server Team based Development Free e-book

    - by Mladen Prajdic
    After about 6 months of work, the new book I've coauthored with Grant Fritchey (Blog|Twitter), Phil Factor (Blog|Twitter) and Alex Kuznetsov (Blog|Twitter) is out. They're all smart folks I talk to online and this book is packed with good ideas backed by years of experience. The book contains a good deal of information about things you need to think of when doing any kind of multi-person database development. Although it's meant for SQL Server, the principles can be applied to any database platform out there. In the book you will find information on: writing readable code, documenting code, source control and change management, deploying code between environments, unit testing, reusing code, and searching and refactoring your code base. I've written chapter 5 about database testing and chapter 11 about SQL refactoring.

    In the database testing chapter (chapter 5) I cover why you should test your database, why it is a good idea to have a database access interface composed of stored procedures, views and user-defined functions, and what and how to test. I talk about the many testing methods, like black and white box testing, unit and integration testing, and error and stress testing, and why and how you should do all of those. Sometimes you have to convince management to include testing in the development lifecycle, so I give some pointers and tips on how to do that. Testing databases is a bit different from testing object-oriented code in that, to have independent unit tests, you need to roll back your code after each test. The chapter shows you ways to do this and also how to avoid it. At the end I show how to test various database objects and how to test access to them.

    In the SQL refactoring chapter (chapter 11) I cover why to refactor and where to even begin refactoring. I also show you a way to achieve a set-based mindset for solving SQL problems, which is crucial to good set-based SQL programming, and a few commonly seen problems to refactor. These problems include: using functions on columns in the WHERE clause, SELECT * problems, long stored procedures with many input parameters, one subquery per condition in the SELECT statement, the "cursors are good for anything" problem, using too-large data types everywhere, and the anti-pattern of keeping business-logic data in code. You can read more about it and download it here: The Red Gate Guide to SQL Server Team-based Development. Hope you like it, and send me feedback if you wish to.

    Read the article

  • ArchBeat Facebook Friday: Top 10 Posts - August 8-14, 2014

    - by Bob Rhubart-Oracle
    5,307 people pay attention to the OTN ArchBeat Facebook Page. Here are the Top 10 posts from that page for the last seven days, August 8-14, 2014.

    Podcast: ODTUG Kscope 2014: Anatomy of a User Conference - Part 3. There is more to a great user conference than a shared interest in Oracle products. In the final segment of this 3-part OTN ArchBeat Podcast, panelists Danny Bryant, Chet "ORACLENERD" Justice, Cameron Lackpour, Debra Lilley, and Mike Riley discuss the nature and importance of community.

    Oracle SOA Suite 12c: The LDAP Adapter quick and easy | Maarten Smeets. Maarten Smeets' how-to post describes the installation and configuration of an LDAP server and browser (ApacheDS and Apache Directory Studio).

    Process level Exception Handling in BPM12c | Abhishek Mittal. When an exception occurs while running a process flow you have two choices: 1) retry running the flow object that raised the exception, or 2) move the process instance to the next flow object in the main process flow. Abhishek Mittal shows you how to do both.

    Building a Responsive WebCenter Portal Application | JayJay Zheng. Oracle ACE JayJay Zheng's article addresses the essentials of responsive web design, shows you how to design and develop a responsive WebCenter Portal application, and reviews key development considerations.

    Cloud Control authorization with Active Directory | Jeroen Gouma. Jeroen Gouma takes you step-by-step through the user authorization process in Oracle Enterprise Manager Cloud Control 12c.

    Video: CIO's Guide to Oracle Products and Solutions | Jessica Keyes. The CIO's Guide to Oracle Products and Solutions author Jessica Keyes talks about why input from users and developers is essential to CIOs who want to avoid being escorted out of the building by security guards. Read A CIO's Guide to Oracle Cloud Computing, a sample chapter from the book.

    Twitter Tuesday - Top 10 @ArchBeat Tweets - August 5-11, 2014. @OTNArchBeat followers from across the galaxy have spoken! Here are the Top 10 tweets for the past seven days. Topics include: Hyperion, OBIEE, ODI, Oracle MAF, and SOA Suite.

    Recap: Fusion Middleware Summer Camps - Lisbon 2014 | Simon Haslam. Oracle ACE Director Simon Haslam's recap of his experience at the Oracle Fusion Middleware Summer Camp in Lisbon, Portugal will make you wish you had been there.

    WebLogic Data Source Connection Labeling | Steve Felts. The connection labeling feature was added in WLS release 10.3.6 and enhanced in WLS 12.1.3. This post by Steve Felts describes two new connection properties that can be configured on the data source descriptor.

    Why Mobile Apps <3 REST/JSON | Martin Jarvis. Martin Jarvis explores the preference for REST and JSON over SOAP and XML for mobile web services.

    Read the article

  • Oracle Solaris at the OpenStack Summit in Atlanta

    - by Glynn Foster
    I had the fortune of attending my 2nd OpenStack summit in Atlanta a few weeks ago and it turned out to be a really excellent event. Oracle had many folks there this time around across a variety of different engineering teams - Oracle Solaris, Oracle ZFSSA, Oracle Linux, Oracle VM and more. It was really great to see continuing momentum behind the project and we're very happy to be involved. Here's a list of the highlights that I had during the summit:

    The operators track was a really excellent addition, with a chance for users/administrators to voice their opinions based on experience. It was really good to hear how OpenStack is making businesses more agile, but also equally good to hear about some of the continuing frustrations they have (fortunately many of them are new and being addressed). Seeing this discussion morph into a "Win the enterprise" working group is also very pleasing.

    I enjoyed Troy Toman's keynote (Rackspace) about designing a planet-scale cloud OS and the interoperability challenges ahead of us. I've been following some of the discussion around DefCore for a bit, and while I have some concerns, I think it's mostly heading in the right direction. There certainly seems to be a balance to strike to ensure that this affects the OpenStack vendors in such a way as to avoid negatively impacting our end users.

    I also enjoyed Toby Ford's keynote (AT&T) about his desire for an NFV (Network Function Virtualization) architecture. What really resonated was also his desire for OpenStack to start addressing the typical enterprise workload, being less like cattle and more like pets.

    The design summit was, as per usual, pretty intense for me - I'd definitely get more value from these sessions if I knew the code base a little better. Nevertheless, I attended some really great sessions and got a better feeling for the roadmap for Juno.

    Markus Flierl gave a great presentation (see below) at the demo theatre about what we're doing with OpenStack on Oracle Solaris (and more widely at Oracle across different products). Based on the discussions that we had at the Oracle booth, there's a huge amount of interest, and we talked to some great customers during the week about their thoughts and directions in this respect.

    Undoubtedly Atlanta had some really good food. Highlights were the smoked ribs and brisket and the SweetWater brewing company. That said, I also loved the fried chicken, fried green tomatoes and collard greens, and the wonderful hosting of "big momma" at Pitty Pat's Porch. I couldn't quite bring myself to eat biscuits and gravy in the morning though.

    Visiting the World of Coca-Cola just before flying out was a total brainwashing exercise, but very enjoyable. And I very much liked Beverly (contrary to many other opinions on the internet) - but then again, I'd happily drink tonic water every day of the year...

    Looking forward to Paris in November!

    Read the article

  • How to Name Linked Servers

    - by Bill Graziano
    I did another SQL Server migration over the weekend that dealt with linked servers. I've seen all kinds of odd naming schemes and there are a few I like and a few I suggest you avoid.

    Don't name your linked server for its IP address. At some point whatever is on the other end of that IP address will move. You'll probably need to point your linked server to a new IP address but not change the name of the linked server. And then you've completely lost any context around this. Bonus points if a new SQL Server eventually ends up at the old IP address, further adding confusion when you're trying to troubleshoot.

    Don't name your linked server based on its instance name. This one is less obvious. It sounds nice to have a linked server named [VSRV1\SQLTRAN01]. You know what it is and it's easy to use. It's less nice when you've got 200 stored procedures that all reference this linked server but the database they reference has moved to a new instance. Now when you query this you're actually querying a different instance. (Please note: I'm not saying it's a good idea to have 200 stored procedures that all reference a linked server. I'm just saying it's not all that uncommon.)

    Consider naming your linked server something that you can easily search on. See my note above. You can also get around this by always enclosing the name in brackets. That is harder to enforce unless you use some odd characters in it.

    Consider naming your linked server based on the function. For example, I've had some luck having a linked server named [DW] that points to our data warehouse server. That server can change names or physically move and all I need to do is update the linked server to point to the new destination. The descriptive name of the linked server is still accurate. No code needs to change and people still know what it is just by looking at it.

    Consider naming your linked server for the database. I'm still thinking through this one. It may mean you have multiple linked servers that point to the same instance. I've found that database names rarely change. It also makes it easier to move individual databases to new servers.

    Consider pointing your linked servers to DNS entries and not IP addresses. I've done this for reporting databases and had some success. Especially for read-only snapshots that can get created on the main database or on the mirror.

    What issues have you had with linked server names? What has worked for you? Where are the holes in my approach?

    Read the article

  • Best pathfinding for a 2D world made by CPU Perlin Noise, with random start and destination points?

    - by Mathias Lykkegaard Lorenzen
    I have a world made by Perlin Noise. It's created on the CPU for consistency between several devices (yes, I know it takes time - I have my techniques that make it fast enough). Now, in my game you play as a fighter-ship-thingy-blob or whatever it's going to be. What matters is that this "thing" that you play as is placed in the middle of the screen and moves along with the camera. The white stuff in my world are walls. The black stuff is freely walkable space.

    Now, as the player moves around he will constantly see "monsters" spawning around him in a circle (a circle that's larger than the screen, though). These monsters move inwards and try to collide with the player. This is the part that's tricky. I want these monsters to constantly spawn, moving towards the player, but avoiding walls entirely. I've added a screenshot below that kind of makes it easier to understand (excuse me for my bad drawing - I was using Paint for this). In the image above, the following rules apply:

    The red dot in the middle is the player itself.

    The light-green rectangle is the boundary of the screen (in other words, what the player sees). This boundary moves with the player.

    The blue circle is the spawning circle. At the circumference of this circle, monsters will spawn constantly. This spawn circle moves with the player and the boundary of the screen.

    Each monster spawned (shown as yellow triangles) wants to collide with the player.

    The pink lines show the path that I want the monsters to move along (or something similar). What matters is that they reach the player without colliding with the walls.

    The map itself (the one that is Perlin Noise generated on the CPU) is saved in memory as a two-dimensional bit array. A 1 means a wall, and a 0 means an open, walkable space. The current tile size is pretty small; I could easily make it a lot larger for increased performance. I've done some path algorithms before, such as A*. I don't think that's entirely optimal here though.
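
    Not an answer from the original thread, just a minimal C# sketch of one common approach for this setup: because every monster chases the same target, a single breadth-first search flooded outward from the player's tile gives a distance field the whole swarm can share, and each monster simply steps to the neighbouring tile with the lowest distance. The walls array, tile coordinates and names below are assumptions for illustration only.

        using System;
        using System.Collections.Generic;

        static class FlowField
        {
            // Flood-fill distances from the player's tile across walkable tiles.
            // walls[x, y] == true means the tile is a wall (a '1' in the bit array).
            public static int[,] Build(bool[,] walls, int playerX, int playerY)
            {
                int w = walls.GetLength(0), h = walls.GetLength(1);
                var dist = new int[w, h];
                for (int x = 0; x < w; x++)
                    for (int y = 0; y < h; y++)
                        dist[x, y] = int.MaxValue;

                var queue = new Queue<(int x, int y)>();
                dist[playerX, playerY] = 0;
                queue.Enqueue((playerX, playerY));

                int[] dx = { 1, -1, 0, 0 };
                int[] dy = { 0, 0, 1, -1 };

                while (queue.Count > 0)
                {
                    var (cx, cy) = queue.Dequeue();
                    for (int i = 0; i < 4; i++)
                    {
                        int nx = cx + dx[i], ny = cy + dy[i];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (walls[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
                        dist[nx, ny] = dist[cx, cy] + 1;
                        queue.Enqueue((nx, ny));
                    }
                }
                return dist;
            }

            // Each monster moves one tile toward the player by picking the
            // neighbour with the smallest distance value.
            public static (int x, int y) NextStep(int[,] dist, int mx, int my)
            {
                var best = (x: mx, y: my);
                int bestDist = dist[mx, my];
                int[] dx = { 1, -1, 0, 0 };
                int[] dy = { 0, 0, 1, -1 };
                for (int i = 0; i < 4; i++)
                {
                    int nx = mx + dx[i], ny = my + dy[i];
                    if (nx < 0 || ny < 0 || nx >= dist.GetLength(0) || ny >= dist.GetLength(1)) continue;
                    if (dist[nx, ny] < bestDist) { bestDist = dist[nx, ny]; best = (nx, ny); }
                }
                return best;
            }
        }

    Rebuilding the field only when the player crosses a tile boundary (rather than every frame), and limiting the flood to roughly the spawn circle's radius, keeps the cost low even with a small tile size; A* per monster is the usual alternative when the targets differ.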

    Read the article

  • SQL SERVER – A Puzzle Part 3 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
    Before continuing this blog post, please read the two previous parts of the SEQUENCE puzzle here: A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value and A Puzzle Part 2 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value, where we played a simple guessing game about predicting the next value. The answers to the puzzles are shared on those blog posts as comments. Now here is the next puzzle, based on yesterday's puzzle. I recently shared this puzzle at a local user group and it was appreciated by the attendees.

    First execute the script which I have written here. Today's script is a bit different from yesterday's script, as it will require you to do some service-related activities. I suggest you try this in a test environment on your personal computer when no one is working on it. Do not attempt this on a production server, as it will for sure get you in trouble. The purpose is to learn how a sequence behaves during unexpected shutdowns and service restarts. Now guess what will be the next value as requested in the query.

    USE AdventureWorks2012
    GO
    -- Create sequence
    CREATE SEQUENCE dbo.SequenceID AS BIGINT
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 500
    CYCLE
    CACHE 100;
    GO
    -- Following will return 1
    SELECT next value FOR dbo.SequenceID;
    -------------------------------------
    -- simulate server crash by restarting service
    -- do not attempt this on production or any server in use
    ------------------------------------
    -- Following will return ???
    SELECT next value FOR dbo.SequenceID;
    -- Clean up
    DROP SEQUENCE dbo.SequenceID;
    GO

    Once the server is restarted, what will be the next value for SequenceID? We can learn interesting trivia about this new feature of SQL Server using this puzzle. Hint: Pay special attention to the difference between the new number and the earlier number. Can you see the same number in the definition of the CREATE SEQUENCE?

    Bonus Question: How do you avoid the behavior demonstrated in the above query? Does it have any effect on performance? I suggest you try to answer this question without running the code in SQL Server 2012. You can restart SQL Server using the command prompt as well. I will follow up with the answer in the comments below.

    Recently my friend Vinod Kumar wrote an excellent blog post on SQL Server 2012: Using SEQUENCE; you can head over there to learn about sequences in detail.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Am I experienced enough to learn and develop immediately using Ruby on Rails?

    - by acheong87
    General Question

    I understand that discussions revolving around questions of this form run the risk of becoming too specific to help others. So, perhaps a better, general question would be: What kind of experience, if any, translates easily to Ruby on Rails; and if none, then what's the learning curve like, in comparison to other popular languages?

    Background

    I have the opportunity to build a website using whatever technologies I wish to use. It's a fairly simple website, for listing products, taking payments, managing customer data, providing a back-end portal for employees to manage data, possibly hooking in flight information (the products are travel related), possibly integrating a blog and all the social-networking goodies.

    Specific Problem

    I have to let the client know by tonight whether I'm interested in taking up this project, before he talks to other potential developers, but I'm on the fence. I already work a full-time C++ development job, so the money doesn't do it for me. It's the opportunity to (be paid to) learn some new technologies and to have a real, running product in the end. I've heard and read great things about Ruby, and am really intrigued. I zipped through some introductory Ruby tutorials, no sweat. However I found the Rails tutorials a little overwhelming, especially not being able to try it out anywhere. And researching Rails hosts like Heroku and EngineYard makes me think that maybe I don't know what I'm getting myself into. The ship's leaving port! I wish I had more time to learn, better yet play with the language, but I have to decide soon! Should I venture or pass?

    Additional Details

    My experiences are in C/C++/Tcl/Perl/PHP/jQuery, and basic knowledge of Java/C#. I didn't study C.S. formally so I wasn't exposed to design principles, programming paradigms, etc., which is my greatest concern. Will my lack of understanding in this realm make RoR frustrating to learn? Will it be so incompatible with a C++ "way" of thinking that I'll wish I never started? Am I putting my client at risk by attempting this? If it helps, I'm quick to learn new things (self-taught so far) and care a great deal about correctness, using things for their intended purposes, and so on. I've read numerous recommendations of Agile Development with Rails and would love to read it (though perhaps, while developing in parallel, for shortness of time). Worse comes to worst, I'd give up and do the standard LAMP gig, of course, not charging the client for wasted time. But I'm hoping to avoid the project altogether if it's gonna come down to that! Thanks in advance for any tips, insights, votes of confidence, votes of discouragement (for the better), and such.

    Read the article

  • Free eBook with SQL Server performance tips and nuggets

    - by Claire Brooking
    I've often found that the kind of tips that turn out to be helpful are the ones that encourage me to make a small step outside of a routine. No dramatic changes - just a quick suggestion that changes an approach. As a languages student at university, one of the best I spotted came from outside the lecture halls and ended up saving me time (and lots of huffing and puffing) - the use of a rainbow of sticky notes for well-used pages and letter categories in my dictionary. Simple, but armed with a heavy dictionary that could double up as a step stool, those markers were surprisingly handy.

    When the Simple-Talk editors told me about a book they were planning that would give a series of tips for developers on how to improve database performance, we all agreed it needed to contain a good range of pointers for big-hitter performance topics. But we wanted to include some of the smaller, time-saving nuggets too. We hope we've struck a good balance. The 45 Database Performance Tips eBook covers different tips to help you avoid code that saps performance, whether that's the 'gotchas' to be aware of when using Object to Relational Mapping (ORM) tools, or what to be aware of for indexes, database design, and T-SQL.

    The eBook is also available to download with SQL Prompt from Red Gate. We often hear that it's the productivity-boosting side of SQL Prompt that makes it useful for everyday coding. So when a member of the SQL Prompt team mentioned an idea to make the most of tab history, a new feature in SQL Prompt 6 for SQL Server Management Studio, we were intrigued. Now SQL Prompt can save tabs we have been working on in SSMS as a way to maintain an active template for queries we often recycle. When we need to reuse the same code again, we search for our saved tab (and we can also customize its name to speed up the search) to get started.

    We hope you find the eBook helpful, and as always on Simple-Talk, we'd love to hear from you too. If you have a performance tip for SQL Server you'd like to share, email Melanie on the Simple-Talk team ([email protected]) and we'll publish a collection in a follow-up post.

    Read the article

  • Do I need to contact a lawyer to report a GPL violation in software distributed on Apple's App Store?

    - by Rinzwind
    Some company is selling software through Apple's App Store which uses portions of code that I released publicly under the GPL. The company is violating the licensing terms in two ways: (1) not preserving my copyright statement and not releasing their code under the GPL license, and (2) distributing my GPL-licensed code through Apple's App Store. (The Free Software Foundation has made clear that the terms of the GPL and those of the App Store are incompatible.)

    I want to report this to Apple and ask that they take appropriate action. I have tried mailing them to ask for more information about the reporting process, and have received the automated reply quoted below. The last point in the list of things one needs to provide, the "statement by you, made under penalty of perjury," sounds as if they mean some kind of specific legal document. I'm not sure. Does this mean I need to contact a lawyer just to file the report? I'd like to avoid going through that hassle if at all possible. (Besides an answer to this specific question, I'd welcome comments and experience reports from anyone who has already had to deal with a GPL violation on Apple's App Store.)

    Thank you for contacting Apple's Copyright Agent. If you believe that your work has been copied in a way that constitutes infringement on Apple's Web site, please provide the following information:

    an electronic or physical signature of the person authorized to act on behalf of the owner of the copyright interest;

    a description of the copyrighted work that you claim has been infringed;

    a description of where the material that you claim is infringing is located on the site;

    your address, telephone number, and email address;

    a statement by you that you have a good faith belief that the disputed use is not authorized by the copyright owner, its agent, or the law;

    a statement by you, made under penalty of perjury, that the above information in your Notice is accurate and that you are the copyright owner or authorized to act on the copyright owner's behalf.

    For further information, please review Apple's Legal Information & Notices/Claims of Copyright Infringement at: http://www.apple.com/legal/trademark/claimsofcopyright.html

    To expedite the processing of your claim regarding any alleged intellectual property issues related to iTunes (music/music videos, podcasts, TV, Movies), please send a copy of your notice to [email protected]. For claims concerning a software application, please send a copy of your notice to [email protected]. Due to the high volume of e-mails we receive, this may be the only reply you receive from [email protected]. Please be assured, however, that Apple's Copyright Agent and/or the iTunes Legal Team will promptly investigate and take appropriate action concerning your report.

    Read the article

  • const vs. readonly for a singleton

    - by GlenH7
    First off, I understand there are folk who oppose the use of singletons. I think it's an appropriate use in this case as it's constant state information, but I'm open to differing opinions / solutions. (See The singleton pattern and When should the singleton pattern not be used?) Second, for a broader audience: C++/CLI has a similar keyword to readonly with initonly, so this isn't strictly a C# type question. (Literal field versus constant variable in C++/CLI) Sidenote: A discussion of some of the nuances on using const or readonly.

    My Question: I have a singleton that anchors together some different data structures. Part of what I expose through that singleton are some lists and other objects, which represent the necessary keys or columns in order to connect the linked data structures. I doubt that anyone would try to change these objects through a different module, but I want to explicitly protect them from that risk. So I'm currently using a "readonly" modifier on those objects*. I'm using readonly instead of const with the lists as I read that using const will embed those items in the referencing assemblies and will therefore trigger a rebuild of those referencing assemblies if / when the list(s) is/are modified. This seems like a tighter coupling than I would want between the modules, but I wonder if I'm obsessing over a moot point. (This is question #2 below)

    The alternative I see to using "readonly" is to make the variables private and then wrap them with a public get. I'm struggling to see the advantage of this approach as it seems like wrapper code that doesn't provide much additional benefit. (This is question #1 below) It's highly unlikely that we'll change the contents or format of the lists - they're a compilation of things to avoid using magic strings all over the place. Unfortunately, not all the code has been converted over to using this singleton's presentation of those strings. Likewise, I don't know that we'd change the containers / classes for the lists. So while I normally argue for the encapsulation advantages a get wrapper provides, I'm just not feeling it in this case.

    A representative sample of my singleton:

    public sealed class mySingl
    {
        private static volatile mySingl sngl;
        private static object lockObject = new Object();

        public readonly Dictionary<string, string> myDict = new Dictionary<string, string>()
        {
            {"I", "index"},
            {"D", "display"},
        };

        public enum parms { ABC = 10, DEF = 20, FGH = 30 };

        public readonly List<parms> specParms = new List<parms>() { parms.ABC, parms.FGH };

        public static mySingl Instance
        {
            get
            {
                if (sngl == null)
                {
                    lock (lockObject)
                    {
                        if (sngl == null)
                            sngl = new mySingl();
                    }
                }
                return sngl;
            }
        }

        private mySingl()
        {
            doSomething();
        }
    }

    Questions: Am I taking the most reasonable approach in this case? Should I be worrying about const vs. readonly? Is there a better way of providing this information?
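
    Not part of the original question, just a small illustrative sketch of the distinction the answers usually hinge on: readonly only fixes the reference, so callers can still mutate the collection it points to, whereas exposing a read-only wrapper protects the contents as well. The class and member names below are hypothetical.

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public sealed class Keys
        {
            // readonly only fixes the reference: the field cannot be reassigned,
            // but the dictionary behind it is still fully mutable by any caller.
            public readonly Dictionary<string, string> MutableDict =
                new Dictionary<string, string> { { "I", "index" }, { "D", "display" } };

            // Wrapping a private backing dictionary protects the contents as well.
            private readonly Dictionary<string, string> backing =
                new Dictionary<string, string> { { "I", "index" }, { "D", "display" } };

            public readonly ReadOnlyDictionary<string, string> ProtectedDict;

            public Keys()
            {
                ProtectedDict = new ReadOnlyDictionary<string, string>(backing);
            }
        }

        class Demo
        {
            static void Main()
            {
                var k = new Keys();

                k.MutableDict["X"] = "oops";   // compiles and runs: the contents changed
                // k.MutableDict = null;       // does NOT compile: the field is readonly
                Console.WriteLine(k.MutableDict.Count);   // 3

                // k.ProtectedDict["X"] = "oops";   // does NOT compile: no setter exposed
                try
                {
                    // The interface setter exists but rejects writes at runtime.
                    ((IDictionary<string, string>)k.ProtectedDict)["X"] = "oops";
                }
                catch (NotSupportedException)
                {
                    Console.WriteLine("ProtectedDict rejected the write.");
                }
            }
        }

    Whether that extra protection is worth the wrapper is exactly the judgment call the question is about; for the list, List<T>.AsReadOnly() gives the same kind of guard.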

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: Why isn't the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn't the output be sorted as well? Well, here's a little secret: the Merge Join output IS sorted! There's a caveat though - it is only under certain circumstances, and SSIS itself doesn't do a good job of informing you of it.

    Let's take a look at an example. Here we have a data flow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables and then joins them using a Merge Join component. Looking inside the editor of the Merge Join, we are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon). We are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence it is hidden away from us.

    There are a couple of ways to prove that this is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that - I can attach a Sort component to the output. Notice that the Sort component is attempting to sort on the [SalesOrderId] column. This gives us the following warning:

    Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed.

    The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that's very useful in order to avoid unnecessary expensive Sort operations downstream. Hope this is useful to someone out there!

    @Jamiet

    P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!
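
    As a side note not from the original post, the reason the output stays ordered falls out of how a merge join works: two inputs sorted on the join key are walked in lock-step, so matches are necessarily emitted in key order. Here is a tiny, self-contained C# sketch of that idea over two sorted arrays; it is only an illustration of the algorithm with made-up data, not of SSIS internals.

        using System;

        class MergeJoinSketch
        {
            static void Main()
            {
                // Both inputs are sorted on the join key (the int id).
                var headers = new[] { (id: 1, cust: "A"), (id: 2, cust: "B"), (id: 4, cust: "C") };
                var details = new[] { (id: 1, qty: 5), (id: 2, qty: 3), (id: 2, qty: 7), (id: 5, qty: 1) };

                int h = 0, d = 0;
                while (h < headers.Length && d < details.Length)
                {
                    if (headers[h].id < details[d].id) { h++; }
                    else if (headers[h].id > details[d].id) { d++; }
                    else
                    {
                        // Emit every detail row for the current header key.
                        int key = headers[h].id;
                        for (int k = d; k < details.Length && details[k].id == key; k++)
                            Console.WriteLine($"{key}: {headers[h].cust} / qty {details[k].qty}");
                        h++; // output keys can only increase, so the result stays sorted on id
                    }
                }
            }
        }

    That same lock-step walk is why order can only be guaranteed on a join key that is carried into the output.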

    Read the article

  • CodePlex Daily Summary for Tuesday, February 22, 2011

    CodePlex Daily Summary for Tuesday, February 22, 2011Popular ReleasesSearchable Property Updater for Microsoft Dynamics CRM 2011: Searchable Property Updater (1.0.121.59): Initial releaseJHINFORM7: JHINFORM 7 VR. 0.0.2: Versión 0.0.1 En estado de desarrolloSilverlight????[???]: silverlight????[???]2.0: ???????,?????,????????silverlight????。DBSourceTools: DBSourceTools_1.3.0.0: Release 1.3.0.0 Changed editors from FireEdit to ICSharpCode.TextEditor. Complete re-vamp of Intellisense ( further testing needed). Hightlight Field and Table Names in sql scripts. Added field dropdown on all tables and views in DBExplorer. Added data option for viewing data in Tables. Fixed comment / uncomment bug as reported by tareq. Included Synonyms in scripting engine ( nickt_ch ).IronPython: 2.7 Release Candidate 1: We are pleased to announce the first Release Candidate for IronPython 2.7. This release contains over two dozen bugs fixed in preparation for 2.7 Final. See the release notes for 60193 for details and what has already been fixed in the earlier 2.7 prereleases. - IronPython TeamCaliburn Micro: A Micro-Framework for WPF, Silverlight and WP7: Caliburn.Micro 1.0 RC: This is the official Release Candicate for Caliburn.Micro 1.0. The download contains the binaries, samples and VS templates. VS Templates The templates included are designed for situations where the Caliburn.Micro source needs to be embedded within a single project solution. This was targeted at government and other organizations that expressed specific requirements around using an open source project like this. NuGet This release does not have a corresponding NuGet package. The NuGet pack...Caliburn: A Client Framework for WPF and Silverlight: Caliburn 2.0 RC: This is the official Release Candidate for Caliburn 2.0. It contains all binaries, samples and generated code docs.A2Command: 2011-02-21 - Version 1.0: IntroductionThis is the full release version of A2Command 1.0, dated February 21, 2011. These notes supersede any prior version's notes. All prior releases may be found on the project's website at http://a2command.codeplex.com/releases/ where you can read the release notes for older versions as well as download them. This version of A2Command is intended to replace any previous version you may have downloaded in the past. There were several bug fixes made after Release Candidate 2 and all...Chiave File Encryption: Chiave 0.9: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Feedbacks are Welcome!....Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...Azure Storage Samples: Version 1.0 (February 2011): These downloads contain source code. Each is a complete sample that fully exercises Windows Azure Storage across blobs, queues, and tables. The difference between the downloads is implementation approach. 
Storage DotNet CS.zip is a .NET StorageClient library implementation in the C# language. This library come with the Windows Azure SDK. Contains helper classes for accessing blobs, queues, and tables. Storage REST CS.zip is a REST implementation in the C# language. The code to implement R...MiniTwitter: 1.66: MiniTwitter 1.66 ???? ?? ?????????? 2 ??????????????????? User Streams ?????????Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features:WPF desktop explorer client Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above) Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File Explorer supports operations running on multiple remote applications at the same time Explorer detects application disconnect (1-2 second delay) Explorer confirms operation completed status Explorer d...Document.Editor: 2011.6: Whats new for Document.Editor 2011.6: New Left to Right and Left to Right support New Indent more/less support Improved Home tab Improved Tooltips/shortcut keys Minor Bug Fix's, improvements and speed upsCatel - WPF and Silverlight MVVM library: 1.2: Catel history ============= (+) Added (*) Changed (-) Removed (x) Error / bug (fix) For more information about issues or new feature requests, please visit: http://catel.codeplex.com =========== Version 1.2 =========== Release date: ============= 2011/02/17 Added/fixed: ============ (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...??????????: All-In-One Code Framework ??? 2011-02-18: ?????All-In-One Code Framework?2011??????????!!http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????,?????AzureBingMaps??????,??Azure,WCF, Silverlight, Window Phone????????,????????????????????????。 ???: Windows Azure SQL Azure Windows Azure AppFabric Windows Live Messenger Connect Bing Maps ?????: ??????HTML??? ??Windows PC?Mac?Silverlight??? ??Windows Phone?Silverlight??? ?????:http://blog.csdn.net/sjb5201/archive/2011...Image.Viewer: 2011: First version of 2011Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit OverviewSilverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features Added a new option that allows properties on data contract types to be marked as virtual. Bug Fixes Fixed a bug caused by certain project properties not being available on Web Service Software Factory projects. Fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available. For example the Web Item CommandBar does not exist ...Terminals: Version 2 - RC1: The Third build includes the fix for NLA support. A merged in patch dropped the UI support. Its back now. 
All patch's except 1 are left. Cheers, -Rob The Second build is up. It takes most patch's sent in from the community. One such patch was around security & how the application handles Passwords. You may find that all of your passwords are now invalidated. You may need to reenter all of your credentials. This would be a good time to use the Credential Manager for each connecti...New ProjectsAllTalk: This is a chat client for Windows Phone 7.AssertFramework: AssertFramework is an implementation of Visual Studio/MSTest assert classes. The Asset and StringAssert classes have been implemented so far. CollectionAssert will be implemented next.AsyncInRuby: Async Web Development in RubyAuto Numbering for CRM 4.0: Reuse and standardize Auto Numbering for CRM 4.0BitRaise: Raise money bit by bit!BoxGame: BoxGame is a small project to develop a RPG in XNA.CCI Explorer (An alternative of .NET Reflector): CCI Explorer is an alternative to RedGate Reflector. It use the Microsoft Common Compiler Infrastructure to decompil and view source executable code. The application is writing in WPF and use the MVVM pattern.Configuration Manager Client Health Check Tool: There are many pitfalls with maintaining ConfigMgr managed systems so they install the client software and can continuously report to the hierarchy. This project provides a scripted solution that detects many issues and automates their repair.cppERF: Class ERF function. Test on VC++ 2008 express, and cygwin.CUITe (Coded UI Test enhanced) Framework: CUITe (Coded UI Test enhanced) Framework is a thin layer developed on top of Microsoft Visual Studio Team Test's Coded UI Test engine which helps reduce code, increases readability and maintainability, while also providing a bunch of cool features for the automation engineer.DocMetaMan : Bulk document Upload and MetaData (Taxonomy) Setter: DocMetaMap lets user select a root folder and upload the documents to selected document library in SharePoint 2010. The tool presents a nice GUI prompting the user to select the metadata / taxonomy to be associated with the documents before uploading them to SharePoint. DotNetNuke Azure Accelerator: DNN Azure Accelerator is a project based on the Azure Accelerators Project to publish the famous DotNetNuke Community CMS in the Windows Azure Platform.GK PlatyPi Robotics - Team 2927: Graham-Kapowsin HS Robotics Club's code repository.HgReport: This is a Mercurial reporting engine written in .NET 3.5. The program will allow you to write your own report templates and execute them against a local Mercurial repository to produce text reports, including HTML, with statistics and other items from the repository history.Image Steganography: 'Image Steganography' allows you to embed text and files into images, with optional encryption.im-me-messenger: A simple instant messenger application for the IM-ME messenging gadgetISEFun: PowerShell module(s) to simplify work in it. It contains PowerShell scripts, compiled libs and some formating files. Several modules will come in one batch as optional features.Kailua - The forgotten methods in the ADO.NET API.: Provide standard calls for vendor specific functionality through ADO.NET. Additional functionality includes: enumerate databases, tables, views, columns, stored procedures, parameters; get an autogenerated primary key; return top N rows; and more. Also some non-ADO classes.Linkual: Linkual makes it easier for blog authors to publish their articles in multiple languages. 
They will no longer have to set up a separate blog for each language. It is developed in C# and ASP.NET MVC.Lumen - Index discovery and querying: Index discovery and querying framework based on Lucene.netMars Rover Exercise: A squad of robotic rovers are to be landed by NASA on a plateau on Mars. This plateau, which is curiously rectangular, must be navigated by the rovers so that their on board cameras can get a complete view of the surrounding terrain to send back to Earth. Message splitting envelope in Biztalk 2009: Message splitting envelope in Biztalk 2009. The project contains: Source code, Examples. Article describing how to develop it: http://www.biztalkbrasil.com.br/2011/02/envelope-sample-using-flat-file.html.Microsoft Dynamics CRM 2011 Development Framework: Framework for developing Microsoft Dynamics CRM 2011 Applications.Potluck Central: Event Manger is a simple place were you can manage your potlucks.PowerSqueakTasks: For now PowerSqueakTasks primary goal is to integrate MsBuild with Powershell. It provides one simple task, that executes Powershell script in a batch manner - creates PS variables using MSBuild item metadata and then runs specified script over them.PSS Airbus Sound Extender: This application offers users of PSS Airbus the sound extension (like electricity, air-conditioning, apu) for standard PSS Airbus 32x planes. Tested with FS2004 and PSS A319. No sound files are distribute with the package, but explaining manual, how to achieve them, is included.SCCM Client Center Automation Library: SCCM Client Automation library (previously smsclictr.automation.dll) is a .NET Library (C#) to automate and access SCCM (Microsoft System Center Configuration Manager) 2007 Agent functions over WMI.Seng 401 Awesome TSS: Telephone switching system for SENG401 course project. Developed in Visual C#.Silverlight????[???]: flyer???????????,????????。????????silverlight??????????。Simple Notify: SimpleNotify is a lightweight client-server implementation that allows you to notify many users in your network with custom messages in a very simple way. There are a couple of ways how you want to push these messages to your clients. SimpleNotify is developed in C#.Slingshot: SlingshotSmartTTS: A smart text to speech app!SystemSoupRMS: SaaS RMStest project101: test source controlUse BizTalk Logging Events in BizUnit Tests: This project will demonstrate how to use the instrumentation from the Microsoft BizTalk CAT Team logging framework to help you test the internals of your BizTalk solutionWalkme HealthVault Application: Walking application for HealthVault.WikiChanges: WikiChanges is a "Recent Changes" monitor for MediaWiki installations that uses non-intrusive, non-annoying yet useful notifications on the corner with link shortcuts to pages, diff, hist, undo and various other links.Win4 Movie Project: This application is being developed for a class group projectWPF UI Authorization infrastructure (MVVM controlled): This infrastructure provide Attribute base authorization for UI elements within WPF applications

    Read the article

  • Tips on ensuring Model Quality

    - by [email protected]
    Given enough data that represents the domain well, and models that reflect exactly the decision being optimized, models usually provide good predictions that ensure lift. Nevertheless, sometimes the modeling situation is less than ideal. In this blog entry we explore the problems found in a few such situations and how to avoid them.

    1 - The model does not reflect the problem you are trying to solve

    For example, you may be trying to solve the problem "What product should I recommend to this customer?" but your model learns on the problem "Given that a customer has acquired our products, what is the likelihood for each product?". In this case the model you built may be too distant a proxy for the problem you are really trying to solve. What you could do in this case is try to build a model based on the results from recommendations of products to customers. If there is not enough data from actual recommendations, you could use a hybrid approach in which you would use the [bad] proxy model until the recommendation model converges.

    2 - Data is not predictive enough

    If the inputs are not correlated with the output then the models may be unable to provide good predictions. For example, if the input is the phase of the moon and the weather, and the output is what car the customer bought, there may be no correlations found. In this case you should see a low quality model. The solution in this case is to include more relevant inputs.

    3 - Not enough cases seen

    If the data learned from does not include enough cases, at least 200 positive examples for each output, then the quality of recommendations may be low. The obvious solution is to include more data records. If this is not possible, then it may be possible to build a model based on the characteristics of the output choices rather than the choices themselves. For example, instead of using products as output, use the product category, price and brand name, and then combine these models.

    4 - Output leaking into input, giving the false impression of good quality models

    If the input data used in training includes values that have changed, or are available only because the output happened, then you will find some strong correlations between the input and the output, but these strong correlations do not reflect the data that you will have available at decision (prediction) time. For example, if you are building a model to predict whether a web site visitor will succeed in registering, and the input includes the variable DaysSinceRegistration, and you learn when this variable has already been set, you will probably see a big correlation between having a zero (or one) in this variable and the fact that registration was successful. The solution is to remove these variables from the input or make sure they reflect the value as of the time of decision and not after the result is known.

    Read the article

  • Overload Avoidance

    - by mikef
    A little under a year ago, Matt Simmons wrote a rather reflective article about his terrifying brush with stress-induced ill health. SysAdmins and DBAs have always been prime victims of work-related stress, but I wonder if that predilection is perhaps getting worse, despite the best efforts of Matt and his trusty side-kick, HR. The constant pressure from share-holders and CFOs to 'streamline' the workforce is partially to blame, but the more recent culprit is technology itself.

    I can't deny that the rise of technologies like virtualization, PowerCLI, PowerShell, and a host of others has been a tremendous boon. As a result, individual IT professionals are now able to handle more and more tasks and manage increasingly large and complex environments. But, without a doubt, this is a two-edged sword; the reward for competence is invariably more work.

    Unfortunately, SysAdmins play such a pivotal role in modern business that it's easy to see how they can very quickly become swamped in conflicting demands coming from different directions. However, that doesn't justify the ridiculous hours many are asked (or volunteer) to devote to their work. Admirable though their commitment is, it isn't healthy for them, it sets a dangerous expectation, and eventually something will snap. There are times when everyone needs to step up to the plate outside of 'normal' work hours, but that time isn't all the time.

    Naturally, with all that lovely technology, you can automate more and more of those tricky tasks to keep on top of the workload, but you are still only human. Clever though you may be, there is a very real limit to how far technology can take you. I'm not suggesting that you avoid these technologies, or deliberately aim for mediocrity; I'm just saying that you need to be more than just technically skilled (and Wesley Nonapeptide riffs on and around this topic in his excellent 'Telepathic Robot Drones' blog post). You need to be able to manage expectations, not just Exchange. Specifically, that means your own expectations of what you are capable of, because those come before everyone else's. After all, how can you keep your work-life balance under control if you're the one setting the bar way too high?

    Talking to your manager, or discussing issues with your users, is only going to be productive if you have some facts to work with. "Know Thyself" is the first law of managing work overload, and this is obviously a skill which people develop over time; the fact that veteran SysAdmins exist at all is testament to this. I'd just love to know how you get to that point. Personally, I'm using RescueTime to keep myself honest, but I'm open to recommendations for better methods. Do you track your own time, do you have an intuitive sense of what is possible, or do you just rely on someone else to handle that all for you?

    Cheers, Michael

    Read the article

  • Impulsioned jumping

    - by Mutoh
    There's one thing that has been puzzling me, and that is how to implement a 'faux-impulsed' jump in a platformer. If you don't know what I'm talking about, then think of the jumps of Mario, Kirby, and Quote from Cave Story. What do they have in common? Well, the height of your jump is determined by how long you keep the jump button pressed. These characters' 'impulses' are built not before their jump, as in actual physics, but rather while in mid-air - that is, you can very well lift your finger midway to the max height and the rise will stop, even if with deceleration between it and the full stop; which is why you can simply tap for a hop and hold the button for a long jump. Knowing this, I am mesmerized by how they keep their trajectories as arcs.

    My current implementation works as follows: While the jump button is pressed, gravity is turned off and the avatar's Y coordinate is decremented by the constant value of the gravity. For example, if things fall at Z units per tick, it will rise Z units per tick. Once the button is released or the limit is reached, the avatar decelerates by an amount that would make it cover X units until its speed reaches 0; once it does, it accelerates up until its speed matches gravity - sticking to the example, I could say it accelerates from 0 to Z units/tick while still covering X units.

    This implementation, however, makes jumps too diagonal, and unless the avatar's speed is faster than the gravity - which would make it way too fast in my current project (it moves at about 4 pixels per tick and gravity is 10 pixels per tick, at a framerate of 40FPS) - it also makes the jump more vertical than horizontal. Those familiar with platformers will notice that a character's arced jump almost always allows them to jump further even if they aren't as fast as the game's gravity, and when it doesn't, if not played right, it proves itself to be very counter-intuitive. I know this because I can attest that my implementation is very annoying.

    Has anyone ever attempted similar mechanics, and maybe even succeeded? I'd like to know what's behind this kind of platformer jumping. If you haven't ever had any experience with this beforehand and want to give it a go, then please, don't try to correct or enhance my explained implementation, unless I was on the right track - try to make up your solution from scratch. I don't care if you use gravity, physics or whatnot, as long as it shows how these pseudo-impulses work, it does the job. Also, I'd like its presentation to avoid language-specific coding; like, sharing a C++ example, or Delphi... As much as I'm using the XNA framework for my project and wouldn't mind C# stuff, I don't have much patience for reading others' code, and I'm certain game developers of other languages would be interested in what we achieve here, so don't mind sticking to pseudo-code. Thank you beforehand.
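
    For readers who want a concrete starting point, here is a minimal sketch (in C#, but simple enough to read as the pseudo-code the question asks for) of the approach most commonly described for variable-height jumps: give the avatar a normal upward velocity on the initial press, keep gravity on the whole time, and when the button is released early, cut the remaining upward velocity. The constants and names are made up for illustration, not taken from the question.

        // One common variable-height jump model: gravity always applies; releasing
        // the button early trims the upward velocity so the arc stays an arc.
        class Jumper
        {
            const float Gravity = 0.5f;        // pixels per tick^2 (made-up tuning value)
            const float JumpVelocity = -10f;   // negative Y is up
            const float ReleaseFactor = 0.4f;  // how much upward speed survives an early release

            public float Y;                    // vertical position in pixels (0 = ground)
            public float VelocityY;
            bool onGround = true;
            bool jumpHeldLastTick;

            public void Update(bool jumpHeld)
            {
                // Start the jump with a fixed impulse on the initial press.
                if (jumpHeld && !jumpHeldLastTick && onGround)
                {
                    VelocityY = JumpVelocity;
                    onGround = false;
                }

                // Releasing the button while still moving up shortens the jump.
                if (!jumpHeld && jumpHeldLastTick && VelocityY < 0)
                    VelocityY *= ReleaseFactor;

                // Gravity is never switched off, so the trajectory is a parabola
                // whether the jump was a quick tap or a full hold.
                VelocityY += Gravity;
                Y += VelocityY;

                if (Y >= 0) { Y = 0; VelocityY = 0; onGround = true; }   // landed

                jumpHeldLastTick = jumpHeld;
            }
        }

    Because gravity is applied every tick, horizontal speed no longer has to outrun gravity to produce a rounded arc; a variant of the same idea keeps adding a small upward boost for a capped number of ticks while the button is held, which gives an even stronger "hold to jump higher" feel.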

    Read the article

  • How to Use Your Android Phone as a Modem; No Rooting Required

    - by Jason Fitzpatrick
    If your cellular provider's mobile hotspot/tethering plans are too pricey, skip them and tether your phone to your computer without inflating your monthly bill. Read on to see how you can score free mobile internet. We recently received a letter from a How-To Geek reader, requesting help linking their Android phone to their laptop to avoid the highway robbery their cellular provider was insisting upon: Dear How-To Geek, I recently found out that my cellphone company charges $30 a month to use your smartphone as a data modem. That's an outrageous price when I already pay an extra $15 a month charge just because they insist that because I have a smartphone I need a data plan because I'll be using so much more data than other users. They expect me to pay what amounts to a $45 a month premium just to do some occasional surfing and email checking from the comfort of my laptop instead of the much smaller smartphone screen! Surely there is a workaround? I'm running Windows 7 on my laptop and I have an Android phone running Android OS 2.2. Help! Sincerely, No Double Dipping! Well, Double Dipping, this is a sentiment we can strongly relate to, as many of us on staff are in a similar situation. It's absurd that so many companies charge you to use the data connection on the phone you're already paying for. There is no difference in bandwidth usage if you stream Pandora to your phone or to your laptop, for example. Fortunately we have a solution for you. It's not free-as-in-beer, but it only costs $16 which, over the first year of use alone, will save you $344. Let's get started!

    Read the article

  • Adding a Role to a Responsibility for Use with the Oracle E-Business Suite SDK for Java JAAS Implementation

    - by Juan Camilo Ruiz
    This new post in the series on ADF integration with Oracle E-Business Suite was written by Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team. Following a previous post in the series, a reader asked what to do if you have an existing responsibility assigned to lots of users, instead of the UMX role that the Oracle E-Business Suite SDK for Java JAAS Implementation requires. It would be tedious to assign a new role directly to hundreds or thousands of users, so naturally we'd like to avoid that if possible. Most people don't know this, but it's possible to assign a UMX role to a responsibility in Oracle User Management. Once you do that, users with your responsibility will all inherit your UMX role automatically. You can then proceed with using your UMX role with JAAS for ADF. Here is how to assign a UMX role to a responsibility in Oracle E-Business Suite:
    1. In the User Management responsibility, go to the Roles & Role Inheritance page.
    2. Search for the responsibility you want.
    3. In the search results table, click the "View In Hierarchy" icon for your responsibility. Note that the codes for responsibilities start with FND_RESP, while the codes for roles start with UMX.
    4. In the Role Inheritance Hierarchy, click the Add Node icon (green plus +) for your responsibility.
    5. You will now see what appears to be the same page again, but it is a little different (note the text at the top telling you that the role you select will be inherited…). This time, either search or expand nodes until you find your custom UMX role. Use Quick Select to choose that role.
    6. You will be sent back to the first screen, where you should see a confirmation message at the top. On the same page you can verify that the custom UMX role is underneath the responsibility. You may need to expand one or more nodes to see the UMX role under the responsibility, and you might see some other roles that have been inherited as well.
    Now that your users have the UMX role, you can test that the UMX role is being passed through to your ADF application through the Oracle E-Business Suite SDK for Java JAAS feature. Happy coding!
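    At its core, such a test amounts to checking that the authenticated JAAS Subject carries a principal for the role. The sketch below uses only the standard javax.security.auth types; the name-matching logic, the role code "UMX|XX_MY_CUSTOM_ROLE", and the RoleCheck class are illustrative assumptions, since the real principal classes come from the Oracle E-Business Suite SDK for Java and are not shown in the article.
        import java.security.Principal;
        import javax.security.auth.Subject;

        public class RoleCheck {

            // Hypothetical UMX role code, used only for illustration.
            private static final String REQUIRED_ROLE = "UMX|XX_MY_CUSTOM_ROLE";

            // Returns true if any principal on the authenticated Subject has a
            // name matching the required role. A real check would rely on the
            // principal classes supplied by the EBS SDK rather than comparing
            // names as strings.
            public static boolean hasRole(Subject subject) {
                if (subject == null) {
                    return false;
                }
                for (Principal p : subject.getPrincipals()) {
                    if (REQUIRED_ROLE.equals(p.getName())) {
                        return true;
                    }
                }
                return false;
            }
        }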

    Read the article

  • What Am I doing for my 5 months 1 week old baby for his first cold?

    - by Rekha
    From my limited circle of friends with babies, I had heard that babies in their fifth month usually get hit by a cold all of a sudden. This is unavoidable, and I feel it is good for the babies as they start developing their immune system right away. We usually take our son for a small walk so that he is able to explore the world out there. We have been taking him for a stroll almost every day since he was 3 months old. But only now is he getting his first cold, so I suppose these things happen in their own time. Until that day I had somehow built up a routine for his feeds and naps during the daytime. Nights are still sleepless anyway. The night he first got his cold went on as usual, with him getting up every few hours. From the next day on, he stopped feeding at the usual times. At night he goes 8-10 hours between feeds but still wakes up every two hours and cries for about 10 minutes; when we pick him up for a while, he goes back to sleep. Now to the point - what to expect:
    * Babies start feeding more frequently
    * Spit-up increases
    * They dislike foods they liked before (we have just started solid foods as per our pediatrician's advice)
    * Since they cannot breathe properly, they cannot sleep for long stretches
    * They pee and poop less frequently
    What we did to make our baby comfortable during this time:
    * Feed the baby smaller quantities more frequently, whenever he wants to feed (do not follow the schedule)
    * When the quantity is less, the spit-up is also less
    * If the baby does not like a food (vegetable or fruit), avoid it till he gets back to normal
    * Try to elevate his head while sleeping (we used the cradle in a sitting position, the bouncer, the car seat, and finally firm pillows placed on all four sides so that he does not roll, with a pillow from his head to his hip - elevating the head is what is usually recommended, but we always keep an eye on him)
    * No need for any medicines - it will take about a week for him to get back to normal whether or not we give medicines
    * We applied a homemade cold reliever (heat 2 teaspoons of coconut oil with a small piece of camphor till the camphor disappears, then cool the mixture) on his chest, back, neck and feet, and covered his feet with socks at night
    * Bathe him every day, in water as warm as your baby can withstand
    * Whenever possible, give him a steam bath (run the hot shower in the closed bathroom for some time; when the steam is bearable, turn off the shower and take the baby inside for a while)
    * Use the bulb syringe provided by the hospital to clear his nose (a few saline drops make this easier)
    * Cuddle him when he cries
    But please do take him to his doctor if he has a fever or symptoms other than the cold, or if the cold persists for more than 5-7 days. Image Credit: MiriamBJDolls

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: Why isn't the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn't the output be sorted as well? Well, here's a little secret: the Merge Join output IS sorted! There's a caveat though - it is only under certain circumstances, and SSIS itself doesn't do a good job of informing you of it. Let's take a look at an example. Here we have a dataflow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables then joins them using a Merge Join component: Let's take a look inside the editor of the Merge Join: We are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon). We are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence this is hidden away from us. There are a couple of ways to prove that this is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that - I can attach a Sort component to the output. Take a look: Notice that the Sort component is attempting to sort on the [SalesOrderId] column. This gives us the following warning: Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed. The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that's very useful in order to avoid unnecessary, expensive Sort operations downstream. Hope this is useful to someone out there! @Jamiet  P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!

    Read the article

  • Where would my different development rhythm be suitable for the work?

    - by DarenW
    Over the years I have worked on many projects, some successful and a great benefit to the company, and some total failures ending with me getting fired or otherwise leaving. What is the difference? Naturally I prefer the former and wish to avoid the latter, so I'm pondering this issue. The key seems to be that my personal approach differs from the norm. I write code first, letting it be all spaghetti and chaos, using whatever tools "fit my hand" that I'm fluent in. I try to organize it, then give up and start over with a better design. I go through cycles, from thinking-design to coding-testing. This may seem to be the same as any other development process, Agile or whatever, cycling between design and coding, but there does seem to be a subtle difference: the method (ideally) followed by most teams goes design, code; design, code; ... while I'm going code, design; code, design; (if that makes any sense). Music analogy: some types of music have a strong downbeat while others have prominent syncopation. In practice, I just can't think in terms of UML, specifications and so on, but grok things only by attempting to code, debug and refactor ad hoc. I need the grounding provided by coding in order to think constructively, then to offer any opinions, advice or solutions to the team and get real work done. In positions where I can initially hack up cowboy code without constraints on tool or language choices, I easily gain a "feel" for the data, requirements etc. and eventually do good work. In formalized positions where paperwork and pure "design" come first and coding only later (even for small proof-of-concept projects), I am lost at sea and drown. Therefore, I'd like to know how to either 1) change my rhythm to match the more formalized, methodology-oriented team way of doing things, or 2) find positions at organizations where my sense of development rhythm is perfect for the work. It's probably unrealistic for a person to change their fundamental approach to things, so option 2) is preferred. So where can I find such positions? How common is my approach, and where is it seen as viable but different, rather than dismissed as undisciplined cowboy coding?

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team members, functional specs and whatnot. So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision either to:
    1. Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2. Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3. Define a table that defines how the Ubiquitous Language translates to English.
    Here are some of my thoughts based on these options:
    1) I have a strong aversion to mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages anyway.
    2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly.
    3) In this case, the developers communicate with the domain experts in their native language while the developers communicate with each other in English and, most importantly, they write code using the English translation of the UL.
    I'm sure I don't want to go for the first option, and I think option 3 is much better than option 2. What do you think? Am I missing other options?
    UPDATE: Today, about a year later, having dealt with this issue on a daily basis, I have to say that option 3 has worked out pretty well for me. It wasn't as tedious as I initially feared, and translating in real time while talking to the client wasn't a problem either. I also found the following advantages to be true, based on my experience:
    * Translating the UL makes you pay more attention to defining the UL, and even the domain itself, especially when you don't know how to translate a term and you have to start looking through dictionaries etc. This has even caused me to reconsider domain modeling decisions a few times.
    * It deepens your knowledge of the English language.
    * Obviously, your code is much more pleasant to look at instead of being a mind-boggling obscenity.
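    As a concrete illustration of what option 3 can look like in code, here is a minimal Java sketch assuming a made-up Dutch-speaking mortgage domain; the glossary, the class, and every term in it are invented for this example and are not taken from the post itself. The code carries the English translations, while a glossary comment records the mapping back to the domain experts' terms.
        import java.math.BigDecimal;
        import java.math.RoundingMode;

        // Glossary (domain term -> term used in code), kept next to the code:
        //   hypotheek        -> Mortgage
        //   rentevastperiode -> fixedInterestPeriodInYears
        //   aflossing        -> repayment
        public class Mortgage {

            private final BigDecimal principal;
            private final int fixedInterestPeriodInYears; // "rentevastperiode"

            public Mortgage(BigDecimal principal, int fixedInterestPeriodInYears) {
                this.principal = principal;
                this.fixedInterestPeriodInYears = fixedInterestPeriodInYears;
            }

            // "Aflossing" when talking to the domain experts; a deliberately
            // naive linear repayment per year, just to keep the example runnable.
            public BigDecimal yearlyRepayment() {
                return principal.divide(
                        BigDecimal.valueOf(fixedInterestPeriodInYears),
                        2, RoundingMode.HALF_UP);
            }
        }
    For example, new Mortgage(new BigDecimal("200000"), 10).yearlyRepayment() returns 20000.00, while the conversation with the domain experts still happens in terms of "hypotheek" and "aflossing"; the glossary is what keeps the two vocabularies in sync.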

    Read the article

  • Easy Profiling Point Insertion

    - by Geertjan
    One really excellent feature of NetBeans IDE is its Profiler. What's especially cool is that you can analyze code fragments: you can right-click in a Java file and then choose Profiling | Insert Profiling Point. When you do that, you're able to analyze a fragment from one statement to another, e.g., how long a particular piece of code takes to execute: https://netbeans.org/kb/docs/java/profiler-profilingpoints.html However, right-clicking a Java file and then going all the way down a longish list of menu items to find "Profiling", and then "Insert Profiling Point", is a lot less easy than right-clicking in the sidebar (known as the glyphgutter) and setting a profiling point in exactly the same way as a breakpoint: That's much easier and more intuitive, and makes it far more likely that I'll use the Profiler at all. Once profiling points have been set then, as always, another menu item is added for managing the profiling point: To achieve this, I added the following to the "layer.xml" file:
        <folder name="Editors">
            <folder name="AnnotationTypes">
                <file name="profiler.xml" url="profiler.xml"/>
                <folder name="ProfilerActions">
                    <file name="org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/Profile/org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.instance"/>
                        <attr name="position" intvalue="300"/>
                    </file>
                </folder>
            </folder>
        </folder>
    Notice that a "profiler.xml" file is referred to in the above, in the same location as where the "layer.xml" file is found. Here is the content:
        <!DOCTYPE type PUBLIC '-//NetBeans//DTD annotation type 1.1//EN' 'http://www.netbeans.org/dtds/annotation-type-1_1.dtd'>
        <type name='editor-profiler'
              description_key='HINT_PROFILER'
              localizing_bundle='org.netbeans.eppi.Bundle'
              visible='true'
              type='line'
              actions='ProfilerActions'
              severity='ok'
              browseable='false'/>
    The only disadvantage is that this registers the profiling point insertion in the glyphgutter for all file types. But that's true for the debugger too, i.e., there's no MIME-type-specific glyphgutter; it is shared by all MIME types. It is a little confusing that the profiling point insertion can now, in theory, be set for all MIME types, but that's also true for the debugger, even though it doesn't apply to all MIME types. That probably explains why the profiling point insertion can only be done, officially, from the right-click popup menu of Java files, i.e., the developers wanted to avoid confusion and make it available to Java files only. However, since I'm already aware that I can't set the Java debugger in an HTML file, I should also be aware that the Java profiler can't be set that way either. If you find this useful too, you can download and install the NBM from here: http://plugins.netbeans.org/plugin/55002

    Read the article

  • A big flat text file or a HTML site for language documentation?

    - by Bad Sector
    A project of mine is a small embeddable Tcl-like scripting language, LIL. While I'm mostly making it for my own use, I think it is interesting enough for others to use, so I want it to have nice (but not very "wordy") documentation. So far I'm using a single flat readme.txt file. It explains the language's syntax, features, standard functions, how to use the C API, etc. It is also easy to scan and read in almost every environment out there, from basic text-only terminals to full-fledged, high-end graphical desktop environments. However, while I have tried to keep things nicely formatted (as much as this is possible in plain text), I still think that, being a big (and growing) wall of text, it isn't as easy on the eyes as it could be. I also feel that sometimes I'm not writing as much as I want in order to avoid expanding the text too much. So I thought I could use another project of mine, QuHelp, which is basically a help site generator for sites like this one, with a sidebar that provides a tree of topics/subtopics and offline full-text search. With this I can use HTML to format the documentation, and if I use QuHelp for some other project that uses LIL, I can import LIL's documentation as part of the other project's documentation. However, converting the existing documentation to QuHelp/HTML isn't a small task, especially when it comes to functions (I'll need to put more detail into them than what currently exists in the readme.txt file). It also loses the wide availability that it currently has (even if QuHelp's generated code degrades gracefully down to console-only web browsers, plain text is readable from everywhere, including from popular editors such as Vim and Emacs - someone once told me that he likes LIL's documentation because it is readable without leaving his editor). So, my question is simply this: should I keep the documentation as it is now, in the form of a single readme.txt file, or should I convert it to something like the site I mentioned above? There is also the option to do both, but I'm not sure if I'll be able to keep them in sync or if it is worth the effort. After asking around on IRC I've got mixed answers: some liked the wide availability of the single text file, others said that it looks as bad as a man page (personally I don't mind that - I can read man pages just fine - but other people might have issues reading them). What do you think?

    Read the article

< Previous Page | 205 206 207 208 209 210 211 212 213 214 215 216  | Next Page >