Search Results

Search found 3403 results on 137 pages for 'monthly statements'.


  • Old School Wizardry Tip: Batch File Comments

    - by jkauffman
    Johnny, the Endangered Keyboard-Driven Windows User
    Some of my proudest, obscure Windows tricks are losing their relevance. I know I’m not alone. Keyboard shortcuts are going the way of the dodo. I used to induce fearful awe by slapping Ctrl+Shift+Esc in front of the lowly, pedestrian Windows users. No Windows key on the keyboard? No problem: Ctrl+Esc. No menu key on the keyboard? Shift+F10. I am also firmly planted in the habit of closing windows with the Alt+Space menu (Alt+Space, C); and I harbor a brooding, slow-growing list of programs that fail to support this correctly (that means you, Paint.NET). Every time a new version of Windows comes out, support for some of these minor time-saving habits gets pared out. Will I complain publicly? Nope, I know my old ways should be axed to conserve precious design energy. In fact, I disapprove of fierce un-intuitiveness for the sake of alleged productivity. Like vim, for example. If you approach a program after being away for 5 years, having to recall encyclopedic knowledge is a flaw. The RTFM disciples have lost. Anyway, some of the items in my arsenal of goofy time-saving tricks are still relevant today. I wanted to draw attention to one that’s stood the test of time.
    Remember Batch Files?
    Yes, it’s true, batch files are fading faster than the world of print. But they're not dead yet. I still run into some situations where I opt to use batch files. They are still relevant for build processes, or just various development workflow tools. Sure, there’s PowerShell, but there’s that stupid Set-ExecutionPolicy speed bump standing in your way; can you really spare the time to A) hunt down that setting on all machines affected and/or B) make futile efforts to convince your coworkers/boss that the hassle was worth it? When possible, I prefer the batch file wildcard. And whenever I return to batch files, I end up researching some of the unintuitive aspects such as parameters, quote handling, and ERRORLEVEL. But I never have to remember to use “REM” for comment lines, because there’s a cleaner way to do them!
    Double Colon For Eye-Friendly Comments
    Here is a very simple batch file, with pretty much minimal content:
        @ECHO OFF
        SETLOCAL
        REM This is a comment
        ECHO This batch file doesn’t do much
    If you code on a daily basis, this may be more suitable to your eyes:
        @ECHO OFF
        SETLOCAL
        :: This is a comment
        ECHO This batch file doesn’t do much
    Works great! I imagine I find it preferable due to the similarity to comments in other situations: // or ; or #. I often make visual pseudo-line breaks in my code, and this colon-based syntax works wonders:
        @ECHO OFF
        SETLOCAL
        :: Do stuff
        ECHO Doing Stuff
        ::::::::::::::::::::::::::::
        :: Do more stuff
        ECHO This batch file doesn’t do much
    Not only is it more readable, but there’s a slight performance benefit. The batch file engine sees this as an invalid line label and immediately reads the following line. Use that fact to your advantage if this trick leads you into heated nerd debate.
    Two Pitfalls to Avoid
    Be aware that there are a couple of situations where this hack will fail you. It most likely won’t be a problem unless you’re getting really sophisticated with your batch files.
    Pitfall #1: Inline comments
        @ECHO OFF
        SETLOCAL
        IF EXIST C:\SomeFile.txt GOTO END ::This will fail
        :END
    Unfortunately, this fails. You can only have whitespace to the left of your comments. 
    Pitfall #2: Code Blocks
        @ECHO OFF
        SETLOCAL
        IF EXIST C:\SomeFile.txt (
            :: This will fail
            ECHO HELLO
        )
    Code blocks, such as if statements and for loops, cannot contain these comments. This is ultimately due to the fact that entire code blocks are processed as a single line. I originally learned this from Rob van der Woude’s site. He goes into more depth about the behavior of the pitfalls as well, if you are interested in further details. I hope this trick earns you serious geek rep!
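    To pull the pitfalls and workarounds together, here is a minimal sketch (the file name is the post’s own example): the double-colon style is safe at the top level, while falling back to plain REM keeps comments working inside parenthesized blocks.
        @ECHO OFF
        SETLOCAL
        :: Top-level comments can use the double-colon style freely.
        IF EXIST C:\SomeFile.txt (
            REM Inside a parenthesized block, use REM instead.
            ECHO Found the file
        )
        :: Inline comments fail either way, so keep comments on their own lines.
        ENDLOCAL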

    Read the article

  • What is the SharePoint Action Framework and Why do I need it?

    - by SAF
    For those out there that are a little curious as to whether SAF is any use to your organisation, please read this FAQ.
    What is SAF?
    SAF is free to use. SAF is the "SharePoint Action Framework"; it was built by myself and Hugo (plus a few others along the way). SAF is written entirely in C#, with code available from http://saf.codeplex.com. SAF is a way to automate SharePoint configuration changes. An Action is a command/class/task/script written in C# that performs a unit of execution against SharePoint, such as "CreateWeb" or "AddLookupColumn". A SAF Macro is a collection of one or more Actions. A SAF Macro can be run from MSBuild, a Feature, StsAdm or common plain old .NET code. Parameters can be passed to a Macro at run-time from a variety of sources such as "Environment Variable", "*.config", "MSBuild Properties", Feature Properties, command line args, and .NET code. SAF emits lots of trace statements at run-time; these can be viewed using "DebugView". One Action can pass parameters to another Action. Parameters can be set using Expression Syntax such as "DateTime.Now".
    You should consider SAF if you suffer from one of the following symptoms...
    "Our developers write lots of code to deploy changes at release time - it's always rushed"
    "I don't want my developers shelling out to PowerShell or Stsadm from a Feature"
    "We have loads of console applications now; I have lost track of where they are, or what they do"
    "We seem to be writing similar scripts against SharePoint in lots of ways; testing is hard"
    "My scripts often have lots of errors - they are done at the last minute"
    "When something goes wrong, I have no idea what went wrong or how to solve it"
    "Our Features get stuck and bomb out half way through - there's no way to roll them back"
    "We have tons of Features now - I can't keep track"
    "We deploy Features to run one-off tasks"
    "We have a library of reusable scripts, but we can only run it in one way; sometimes we want to run it from MSBuild and a Feature"
    "I want to automate the deployment of changes to our development environment"
    "I would like to run a housekeeping task on a scheduled basis"
    So I like the sound of SAF - what are the problems?
    Realistically, there are a few things that need to be considered: Someone on your team will need to spend a day or 2 understanding SAF and deciding exactly how you want to use it. I would suggest a Tech Lead, SysAdm or SP Architect will need to download it, try out the examples, and look through the unit tests. Ask us questions. Although SAF can be downloaded and ready to go in a few minutes, you will still need to address issues such as "Do you want to execute your Macros in MSBuild or from a Feature?" You will need to decide who is going to do your deployments - is it each developer for themselves, or do you require a dedicated Build Manager? As most environments (Dev, QA, Live etc.) require different settings (e.g. URLs, database names, accounts etc.), you will more than likely want to define these and set a properties file up for each environment. (These can then be injected into SAF at run-time.) There may be no Action to solve your particular problem. If this is the case, suggest it to us - we can try and write it, or you can write it yourself. It's very easy to write a new Action - we have an approach to easily unit test it, document it and author it. For example, I wrote one to deploy a WSP in 2 hours the other day. Alternatively, SAF can also call Stsadm commands and PowerShell scripts.
    Anyway, I do hope this helps! 
If you still need help, or a quick start, we can also offer consultancy around SAF. If you want to know more, give us a call or drop an email to [email protected]
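    To make the Action/Macro idea above concrete, here is a purely hypothetical C# sketch. None of these type or method names come from SAF itself (see the codeplex source for the real base classes); it only illustrates the shape of an Action performing a unit of execution and a Macro running a sequence of them.
        using System.Collections.Generic;
        using System.Diagnostics;

        // Hypothetical illustration only -- not SAF's real API.
        public interface IAction
        {
            void Execute(IDictionary<string, string> parameters);
        }

        public class CreateWeb : IAction
        {
            public void Execute(IDictionary<string, string> parameters)
            {
                // A real Action would call the SharePoint object model here.
                Trace.WriteLine("CreateWeb: " + parameters["Url"]);
            }
        }

        public class Macro
        {
            private readonly List<IAction> actions = new List<IAction>();

            public void Add(IAction action) { actions.Add(action); }

            public void Run(IDictionary<string, string> parameters)
            {
                // Parameters would arrive from MSBuild, *.config, the command line, etc.
                foreach (IAction action in actions)
                {
                    action.Execute(parameters);
                }
            }
        }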

    Read the article

  • Thoughts on my new template language/HTML generator?

    - by Ralph
    I guess I should have prefaced this with: Yes, I know there is no need for a new templating language, but I want to make a new one anyway, because I'm a fool. That aside, how can I improve my language? Let's start with an example:
        using "html5"
        using "extratags"

        html {
            head {
                title "Ordering Notice"
                jsinclude "jquery.js"
            }
            body {
                h1 "Ordering Notice"
                p "Dear @name,"
                p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}."
                p "Here are the items you've ordered:"
                table {
                    tr {
                        th "name"
                        th "price"
                    }
                    for(@item in @item_list) {
                        tr {
                            td @item.name
                            td @item.price
                        }
                    }
                }
                if(@ordered_warranty)
                    p "Your warranty information will be included in the packaging."
                p(class="footer") {
                    "Sincerely,"
                    br @company
                }
            }
        }
    The "using" keyword indicates which tags to use. "html5" might include all the html5 standard tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want them to be. The "extratags" library, for example, might add an extra tag called "jsinclude", which gets replaced with something like <script type="text/javascript" src="@content"></script>. Tags can optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single-quotes to indicate "no variable substitution" like PHP does. Filter functions can be applied to variables like @variable|filter. Arguments can be passed to the filter: @variable|filter:@arg1,arg2="y". Attributes can be passed to tags by including them in (), like p(class="classname"). You will also be able to include partial templates like:
        for(@item in @item_list)
            include("item_partial", item=@item)
    Something like that, I'm thinking. The first argument will be the name of the template file, and subsequent ones will be named arguments where @item gets the variable name "item" inside that template. I also want to have a collection version like RoR has, so you don't even have to write the loop. Thoughts on this and exact syntax would be helpful :) Some questions:
    Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else?
    Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables.
    Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? This would make it more clear that the "tag" isn't behaving like just a normal tag that will get replaced with content, but controls the flow. Also, it would allow name-reuse.
    Do you like the attribute syntax? (round brackets)
    How should I do template inheritance/layouts? In Django, the first line of the file has to include the layout file, and then you delimit blocks of code which get stuffed into that layout. In CakePHP, it's kind of backwards: you specify the layout in the controller.view function, the layout gets a special $content_for_layout variable, and then the entire template gets stuffed into that, and you don't need to delimit any blocks of code. I guess Django's is a little more powerful because you can have multiple code blocks, but it makes your templates more verbose... trying to decide what approach to take.
    Filtered variables inside quotes:
        "xxx {@var|filter} yyy"
        "xxx @{var|filter} yyy"
        "xxx @var|filter yyy"
    i.e., @ inside, @ outside, or no braces at all. 
I think no-braces might cause problems, especially when you try adding arguments, like @var|filter:arg="x", then the quotes would get confused. But perhaps a braceless version could work for when there are no quotes...? Still, which option for braces, first or second? I think the first one might be better because then we're consistent... the @ is always nudged up against the variable. I'll add more questions in a few minutes, once I get some feedback.

    Read the article

  • The Power of Goals

    - by BuckWoody
    Every year we read blogs, articles and magazines, and hear news stories and blurbs on making New Year’s resolutions. Well, I for one don’t do that. I do something else. Each year, on January 1, my wife, daughter and I get up early - like before 6:00 A.M. - and find a breakfast place that’s open. When I used to live in Safety Harbor, Florida, that was the “Paradise Café”, which has some of the best waffles around…but I digress. We find that restaurant and have a great breakfast while everyone else is recuperating from the night before. And we bring along a worn leather book that we’ve been writing in since my daughter wasn’t even old enough to read. It’s our book of Goals. A resolution, as it is purely defined, is a decision to change, stop or start an action. It has a sense of continuance, and that’s the issue. Some people decide things like “I’m going to lose weight” or “I’m going to spend more time with my family or hobby”. But a goal is different. A goal tends to have a defined start and end point. It’s something that can be measured. So each year on January 1 we sit down with the little leather book and we make a few - and only a few - individual and family goals. Sometimes it’s to exercise three times a week at the gym, sometimes it’s to save a certain percentage of income, and sometimes it’s to give away some of our possessions or to help someone we know in a specific way. Each person is responsible for their own goals - coming up with them, and coming up with a plan to meet them. Then we write it down in the little leather book. But it doesn’t end there. Each month, we grab the little leather book and read out the goals from that year to each person with a question or two: How are you doing on your goal? And what are you doing about reaching it? Can I help? Am I helping? At the end of the year, we put a checkmark by the goals we reached, and an X by the ones we didn’t. There’s no judgment and there are no statements; each person is just expected to handle the success or failure in their own way. We also have family goals, and those we work on together. This might seem a little “corny” to some people. “I don’t need to write goals down,” they say, “I keep track in my head of the things I do all the time. That’s silly.” But let me give you a little challenge: find a book, get with your family, and write down the things you want to do by the next January 1. Each month, look at the book. You can make goals for your career, your education, your spiritual side, your family, whatever. But if you make your goals realistic, think them through, and think about how you will achieve them, you will be surprised by the power of written goals.

    Read the article

  • Come see us at JavaU at JavaOne!

    - by tmcginn
    In just a little under a month, JavaOne will be in full swing (no pun intended) and thousands of Java developers will gather to hear the latest Java news, immerse themselves in Java technology and learn some new things. This year, I am fortunate enough to be able to attend, along with my Java curriculum development colleagues Matt Heimer and Mike Williams. We start our week at JavaOne teaching a one-day session at JavaU on Sunday morning. If you have never attended a training session through JavaU, you should check it out. There are some terrific sessions this year, and it might help to justify your trip to JavaOne if you can say it was for training! This year I am teaching a one-day session on Java SE 7 New Features - a great session for anyone interested in the specific details of what is new in Java SE 7. Matt is teaching a one-day session on Developing Portable Java EE Applications with the Enterprise JavaBeans 3.1 API and Java Persistence 2.0 API, and Mike is doing a one-day session on developing rich client applications with Java SE 7 using JavaFX 2. I asked Matt and Mike to tell me what developers can expect from their sessions. Matt: "My session will get you up to speed on everything you need to know to create portable Java EE 6 applications using EJB 3.1 and JPA 2. I am going to cover why everyone can benefit from using EJBs (and why developers should relearn them if they haven't looked at them for years). Students who attend my session will see JPA examples showcasing how to use relational databases in enterprise applications without programming to JDBC and without writing SQL statements. EJB and JPA benefit from being paired together, so I will also show how transaction management is easier in a container. I encourage students to bring a laptop and code as they learn!" Mike: "My session covers how to develop a rich client application using JavaFX 2. Starting with the basic concepts of JavaFX, students will see how a JavaFX application is built from its layout, to its controls, to its data structures. In addition, more advanced controls like charts, smart tables, and transitions will be added to the application. Finally, a quick review of JavaFX concurrency and data binding is included. Blended with the core concepts, the session will include some of the latest JavaFX technology. This includes using Scene Builder to create a JavaFX UI and connecting your XML UI definition to Java code. In addition, packaging of the JavaFX application will be covered, with some examples of the new native packaging features." As I mentioned, my session covers the changes in Java SE 7, including the language changes that were voted into Java SE 7 from Project Coin. I will also look at how you can take advantage of the new I/O library (NIO.2) for writing applications that work with files, directories and file systems, as well as the changes in asynchronous I/O that are part of NIO.2. We will spend some time looking at the changes to the Java Virtual Machine as well, including support for dynamically typed languages (JSR-292), and at the Java concurrency enhancements (JSR-166), including the new Fork/Join framework. And we'll round out the day with a look at changes in Swing, XML and a number of smaller changes in the APIs. And, if these topics aren't grabbing your interest, take a look at the other 10 sessions that range from topics on architecture to how to pass the Oracle Certified Programmer I and II exams. 
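    None of the following comes from the session materials - it is just a quick, self-contained sketch (the file name is made up) of a few of the Project Coin and NIO.2 changes mentioned above:
        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.List;

        public class Java7Taster {
            public static void main(String[] args) {
                Path path = Paths.get("notes.txt");     // NIO.2 file system API
                List<String> lines = new ArrayList<>(); // diamond operator (Project Coin)

                // try-with-resources (Project Coin): the reader is closed automatically
                try (BufferedReader reader = Files.newBufferedReader(path, StandardCharsets.UTF_8)) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        lines.add(line);
                    }
                } catch (IOException e) {
                    System.err.println("Could not read " + path + ": " + e.getMessage());
                }

                System.out.println(lines.size() + " lines read");
            }
        }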
See you soon!

    Read the article

  • XNA RenderTarget2D Sample

    - by Michael B. McLaughlin
    I remember being scared of render targets when I first started with XNA. They seemed like weird magic and I didn’t understand them at all. There’s nothing to be frightened of, though, and they are pretty easy to learn how to use. The first thing you need to know is that when you’re drawing in XNA, you aren’t actually drawing to the screen. Instead you’re drawing to this thing called the “back buffer”. Internally, XNA maintains two sections of graphics memory. Each one is exactly the same size as the other and has all the same properties (such as surface format, whether there’s a depth buffer and/or a stencil buffer, and so on). XNA flips between these two sections of memory every update-draw cycle. So while you are drawing to one, it’s busy drawing the other one on the screen. When the current update-draw cycle ends, it flips, and the section you were just drawing to gets drawn to the screen while the one that was being drawn to the screen before is now the one you’ll be drawing on. This is what’s meant by “double buffering”. If you drew directly to the screen, the player would see all of those draws taking place as they happened and that would look odd and not very good at all. Those two sections of graphics memory are render targets. All a render target is, is a section of graphics memory to which things can be drawn. In addition to the two that XNA maintains automatically, you can also create and set your own using RenderTarget2D and GraphicsDevice.SetRenderTarget. Using render targets lets you do all sorts of neat post-processing effects (like bloom) to make your game look cooler. It also lets you do things like motion blur and create mirrors in 3D games. There are quite a lot of things that render targets let you do. To go along with this post, I wrote up a simple sample for how to create and use a RenderTarget2D. It’s released under the terms of the Microsoft Public License and is available for download on my website here: http://www.bobtacoindustries.com/developers/utils/RenderTarget2DSample.zip . Other than the ‘using’ statements, every line is commented in detail so that it should (hopefully) be easy to follow along with and understand. If you have any questions, leave a comment here or drop me a line on Twitter. One last note. While creating the sample I came across an interesting quirk. If you start by creating a Windows Game, and then make a copy for Windows Phone 7, the drop-down that lets you choose between drawing to a WP7 device and the WP7 emulator stays grayed-out. To resolve this, you need to right click on the Windows Phone 7 version in the Solution Explorer, and choose “Set as StartUp Project”. The bar will then become active, letting you change the target you wish to deploy to. If you want another version to be the one that starts up when you press F5 to start debugging, just go and right-click on that version and choose “Set as StartUp Project” for it once you’ve set the WP7 target (device or emulator) that you want.
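    As a quick taster before you open the sample, a minimal sketch of the pattern described above (the size is a placeholder, and the usual Game plumbing - GraphicsDevice, spriteBatch - is assumed) looks roughly like this:
        // Minimal sketch of the render target pattern; not the full sample.
        RenderTarget2D sceneTarget;

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            sceneTarget = new RenderTarget2D(GraphicsDevice, 800, 480); // placeholder size
        }

        protected override void Draw(GameTime gameTime)
        {
            // 1. Redirect drawing into our own render target.
            GraphicsDevice.SetRenderTarget(sceneTarget);
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            // ... draw the scene here ...
            spriteBatch.End();

            // 2. Restore the back buffer, then draw the target like any texture
            //    (optionally through a post-processing effect).
            GraphicsDevice.SetRenderTarget(null);
            GraphicsDevice.Clear(Color.Black);
            spriteBatch.Begin();
            spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }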

    Read the article

  • What are the disadvantages of self-encapsulation?

    - by Dave Jarvis
    Background
    Tony Hoare's billion dollar mistake was the invention of null. Subsequently, a lot of code has become riddled with null pointer exceptions (segfaults) when software developers try to use (dereference) uninitialized variables. In 1989, Wirfs-Brock and Wilkerson wrote:
    Direct references to variables severely limit the ability of programmers to refine existing classes. The programming conventions described here structure the use of variables to promote reusable designs. We encourage users of all object-oriented languages to follow these conventions. Additionally, we strongly urge designers of object-oriented languages to consider the effects of unrestricted variable references on reusability.
    Problem
    A lot of software, especially in Java, but likely in C# and C++, often uses the following pattern:
        public class SomeClass {
          private String someAttribute;

          public SomeClass() {
            this.someAttribute = "Some Value";
          }

          public void someMethod() {
            if( this.someAttribute.equals( "Some Value" ) ) {
              // do something...
            }
          }

          public void setAttribute( String s ) {
            this.someAttribute = s;
          }

          public String getAttribute() {
            return this.someAttribute;
          }
        }
    Sometimes a band-aid solution is used by checking for null throughout the code base:
        public void someMethod() {
          assert this.someAttribute != null;
          if( this.someAttribute.equals( "Some Value" ) ) {
            // do something...
          }
        }

        public void anotherMethod() {
          assert this.someAttribute != null;
          if( this.someAttribute.equals( "Some Default Value" ) ) {
            // do something...
          }
        }
    The band-aid does not always avoid the null pointer problem: a race condition exists. The race condition is mitigated using:
        public void anotherMethod() {
          String someAttribute = this.someAttribute;
          assert someAttribute != null;
          if( someAttribute.equals( "Some Default Value" ) ) {
            // do something...
          }
        }
    Yet that requires two statements (assignment to local copy and check for null) every time a class-scoped variable is used to ensure it is valid.
    Self-Encapsulation
    Ken Auer's Reusability Through Self-Encapsulation (Pattern Languages of Program Design, Addison Wesley, New York, pp. 505-516, 1994) advocated self-encapsulation combined with lazy initialization. The result, in Java, would resemble:
        public class SomeClass {
          private String someAttribute;

          public SomeClass() {
            setAttribute( "Some Value" );
          }

          public void someMethod() {
            if( getAttribute().equals( "Some Value" ) ) {
              // do something...
            }
          }

          public void setAttribute( String s ) {
            this.someAttribute = s;
          }

          public String getAttribute() {
            String someAttribute = this.someAttribute;
            if( someAttribute == null ) {
              someAttribute = createDefaultValue();
              setAttribute( someAttribute );
            }
            return someAttribute;
          }

          protected String createDefaultValue() {
            return "Some Default Value";
          }
        }
    All duplicate checks for null are superfluous: getAttribute() ensures the value is never null at a single location within the containing class. Efficiency arguments should be fairly moot -- modern compilers and virtual machines can inline the code when possible. As long as variables are never referenced directly, this also allows for proper application of the Open-Closed Principle.
    Question
    What are the disadvantages of self-encapsulation, if any? (Ideally, I would like to see references to studies that contrast the robustness of similarly complex systems that use and don't use self-encapsulation, as this strikes me as a fairly straightforward testable hypothesis.)

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, further challenged any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular at offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current optimizer mean that SQL Server is not yet able to properly exploit all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, even in cases where a parallel plan would improve performance dramatically. It was challenging and thought-provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear, though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it fighting to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • How should I plan the inheritance structure for my game?

    - by Eric Thoma
    I am trying to write a platform shooter in C++ with a really good class structure for robustness. The game itself is secondary; it is the learning process of writing it that is primary. I am implementing an inheritance tree for all of the objects in my game, but I find myself unsure on some decisions. One specific issue that is bugging me is this: I have an Actor that is simply defined as anything in the game world. Under Actor is Character. Both of these classes are abstract. Under Character is the Philosopher, who is the main character that the user commands. Also under Character is NPC, which uses an AI module with stock routines for friendly, enemy and (maybe) neutral alignments. So under NPC I want to have three subclasses: FriendlyNPC, EnemyNPC and NeutralNPC. These classes are not abstract, and will often be subclassed in order to make different types of NPCs, like Engineer, Scientist and the most evil Programmer. Still, if I want to implement a generic NPC named Kevin, it would be nice to be able to put him in without making a new class for him. I could just instantiate a FriendlyNPC and pass some values for the AI machine and for the dialogue; that would be ideal. But what if Kevin is the one benevolent Programmer in the whole world? Now we must make a class for him (but what should it be called?). Now we have a character that should inherit from Programmer (as Kevin has all the same abilities but just uses the friendly AI functions) but also should inherit from FriendlyNPC. Programmer and FriendlyNPC branched away from each other on the inheritance tree, so inheriting from both of them would have conflicts, because some of the same functions have been implemented in different ways on the two of them.
    1) Is there a better way to order these classes to avoid these conflicts? Having three subclasses (Friendly, Enemy and Neutral) for each type of NPC (Engineer, Scientist, and Programmer) would amount to a huge number of classes. I would share specific implementation details, but I am writing the game slowly, piece by piece, and so I haven't implemented past Character yet.
    2) Is there a place where I can learn these programming paradigms? I am already trying to take advantage of some good design patterns, like MVC architecture and Mediator objects. The whole point of this project is to write something in good style. It is difficult to tell what should become a subclass and what should become a state (i.e. a Friendly boolean vs. a Friendly class). Having many states slows down code with if statements and makes classes long and unwieldy. On the other hand, having a class for everything isn't practical.
    3) Are there good rules of thumb or resources to learn more about this?
    4) Finally, where does templating come into this? How should I coordinate templates into my class structure? Honestly, I have never actually taken advantage of templating, but I hear that it increases modularity, which means good code.
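    One common way out of the Kevin problem is to model alignment as a component the NPC owns rather than as a branch of the inheritance tree, so "friendly Programmer" needs no new class. The C++14 sketch below is just one option under that assumption, not the answer; the class names come from the post, everything else is illustrative.
        #include <iostream>
        #include <memory>
        #include <string>

        // Alignment as a swappable strategy object instead of a subclass.
        class Alignment {
        public:
            virtual ~Alignment() = default;
            virtual std::string react(const std::string& other) const = 0;
        };

        class Friendly : public Alignment {
        public:
            std::string react(const std::string& other) const override {
                return "waves at " + other;
            }
        };

        class Enemy : public Alignment {
        public:
            std::string react(const std::string& other) const override {
                return "attacks " + other;
            }
        };

        class NPC /* : public Character */ {
            std::unique_ptr<Alignment> alignment_;  // behaviour injected, not inherited
        public:
            explicit NPC(std::unique_ptr<Alignment> a) : alignment_(std::move(a)) {}
            void encounter(const std::string& other) const {
                std::cout << alignment_->react(other) << "\n";
            }
        };

        class Programmer : public NPC {
        public:
            explicit Programmer(std::unique_ptr<Alignment> a) : NPC(std::move(a)) {}
            // Programmer-specific abilities go here
        };

        int main() {
            // Kevin: the one benevolent Programmer, no KevinClass required.
            Programmer kevin(std::make_unique<Friendly>());
            kevin.encounter("Philosopher");  // prints: waves at Philosopher
        }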

    Read the article

  • Customizing the Test Status on the TFS 2010 SSRS Stories Overview Report

    - by Bob Hardister
    This post shows how to customize the SQL query used by the Team Foundation Server 2010 SQL Server Reporting Services (SSRS) Stories Overview Report. The objective is to show test status for the current version while including user story status of the current and prior versions. Why? Because we don’t copy completed user stories into the next release. We only want one instance of a user story for the product because we believe copies can get out of sync when they are supposed to be the same. In the example below, work items for the current version are on the area path root and prior versions are not on the area path root. However, you can use area path or iteration path criteria in the query as suits your needs. In any case, here’s how you do it:
    1. Download a copy of the report RDL file as a backup
    2. Open the report by clicking the edit down arrow and selecting “Edit in Report Builder”
    3. Right click on the dsOverview Dataset and select Dataset Properties
    4. Update the following SQL per the comments in the code:
    Customization 1 of 3
    …
        -- Get the list of deliverable workitems that have Test Cases linked
        DECLARE @TestCases Table (DeliverableID int, TestCaseID int);
        INSERT @TestCases
            SELECT h.ID, flh.TargetWorkItemID
            FROM @Hierarchy h
                JOIN FactWorkItemLinkHistory flh
                    ON flh.SourceWorkItemID = h.ID
                        AND flh.WorkItemLinkTypeSK = @TestedByLinkTypeSK
                        AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                        AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                JOIN [CurrentWorkItemView] wi ON flh.TargetWorkItemID = wi.[System_ID]
                    AND wi.[System_WorkItemType] = @TestCase
                    AND wi.ProjectNodeGUID = @ProjectGuid
                    -- Customization 1 of 3: only include test status information when test case area path = root. Added the following 2 statements
                    AND wi.AreaPath = '{the root area path of the team project}'
    …
    Customization 2 of 3
    …
        -- Get the Bugs linked to the deliverable workitems directly
        DECLARE @Bugs Table (ID int, ActiveBugs int, ResolvedBugs int, ClosedBugs int, ProposedBugs int)
        INSERT @Bugs
            SELECT h.ID,
                SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
                SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
                SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
                SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
            FROM @Hierarchy h
                JOIN FactWorkItemLinkHistory flh
                    ON flh.SourceWorkItemID = h.ID
                    AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
                JOIN [CurrentWorkItemView] wi
                    ON wi.[System_WorkItemType] = @Bug
                    AND wi.[System_Id] = flh.TargetWorkItemID
                    AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                    AND wi.[ProjectNodeGUID] = @ProjectGuid
                    -- Customization 2 of 3: only include test status information when test case area path = root. Added the following statement
                    AND wi.AreaPath = '{the root area path of the team project}'
            GROUP BY h.ID
    …
    Customization 3 of 3
    …
        -- Add the Bugs linked to the Test Cases which are linked to the deliverable workitems
        -- Walks the links from the user stories to test cases (via the tested by link), and then to
        -- bugs that are linked to the test case. We don't need to join to the test case in the work
        -- item history view.
        --
        --    [WIT:User Story/Requirement] --> [Link:Tested By] --> [Link:any type] --> [WIT:Bug]
        INSERT @Bugs
        SELECT tc.DeliverableID,
            SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
            SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
            SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
            SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
        FROM @TestCases tc
            JOIN FactWorkItemLinkHistory flh
                ON flh.SourceWorkItemID = tc.TestCaseID
                AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
            JOIN [CurrentWorkItemView] wi
                ON wi.[System_Id] = flh.TargetWorkItemID
                AND wi.[System_WorkItemType] = @Bug
                AND wi.[ProjectNodeGUID] = @ProjectGuid
                -- Customization 3 of 3: only include test status information when test case area path = root. Added the following statement
                AND wi.AreaPath = '{the root area path of the team project}'
            GROUP BY tc.DeliverableID
    …
    5. Save the report and you’re all set.
    Note: you may need to re-apply custom parameter changes like pre-selected sprints.

    Read the article

  • Strategies for invoking subclass methods on generic objects

    - by Brad Patton
    I've run into this issue in a number of places and have solved it a bunch of different ways, but I am looking for other solutions or opinions on how to address it. The scenario is when you have a collection of objects all based off of the same superclass but you want to perform certain actions based only on instances of some of the subclasses. One contrived example of this might be an HTML document made up of elements. You could have a superclass named HTMLElement and subclasses of Headings, Paragraphs, Images, Comments, etc. To invoke a common action across all of the objects, you declare a virtual method in the superclass and specific implementations in all of the subclasses. So to render the document you could loop over all of the different objects in the document and call a common Render() method on each instance. The issue arises when, iterating over that same generic collection, I want to perform different actions for instances of a specific subclass (or set of subclasses). For example (and remember this is just an example), when iterating over the collection, elements with external links need to be downloaded (e.g. JS, CSS, images) and some might require additional parsing (JS, CSS). What's the best way to handle those special cases? Some of the strategies I've used or seen used include: Virtual methods in the base class. So in the base class you have a virtual LoadExternalContent() method that does nothing and then override it in the specific subclasses that need to implement it. The benefit being that in the calling code there is no object testing; you send the same message to each object and let most of them ignore it. Two downsides that I can think of. First, it can make the base class very cluttered with methods that have nothing to do with most of the hierarchy. Second, it assumes all of the work can be done in the called method and doesn't handle the case where there might be additional context-specific actions in the calling code (i.e. you want to do something in the UI and not the model). Have methods on the class to uniquely identify the objects. This could include methods like ClassName(), which returns a string with the class name, or other return values like enums or booleans (IsImage()). The benefit is that the calling code can use if or switch statements to filter objects to perform class-specific actions. The downside is that for every new class you need to implement these methods, and the result can look cluttered. Also, performance could be worse than with some of the other options. Use language features to identify objects. This includes reflection and language operators to identify the objects. For example, in C# there is the is operator, which returns true if the instance matches the specified class. The benefit is no additional code to implement in your object hierarchy. The only downsides seem to be the inability to use something like a switch statement and the fact that your calling code is a little more cluttered. Are there other strategies I am missing? Thoughts on best approaches?
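    To make the third strategy concrete, here is a small, self-contained C# sketch against the HTML example. The class names come from the post; the method bodies and values are purely illustrative.
        using System;
        using System.Collections.Generic;

        abstract class HTMLElement
        {
            public abstract void Render();       // common action: ordinary polymorphism
        }

        class Paragraph : HTMLElement
        {
            public override void Render() { Console.WriteLine("<p>...</p>"); }
        }

        class Image : HTMLElement
        {
            public string Src = "logo.png";      // placeholder value
            public override void Render() { Console.WriteLine("<img src=\"" + Src + "\"/>"); }
            public void Download() { Console.WriteLine("downloading " + Src); }
        }

        class Program
        {
            static void Main()
            {
                var document = new List<HTMLElement> { new Paragraph(), new Image() };

                foreach (var element in document)
                {
                    element.Render();            // every element responds

                    var image = element as Image; // strategy 3: let the language identify it
                    if (image != null)
                    {
                        image.Download();        // subclass-specific action
                    }
                }
            }
        }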

    Read the article

  • Fast Data - Big Data's Achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they presented Oracle's Fast Data software solution stack and discussed a number of customers deploying Oracle's Big Data / Fast Data solutions, and in particular Oracle's NoSQL Database.  Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.   Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity.   The software in the Fast Data solution stack involves 3 key pieces and their integration:  Oracle Event Processing, Oracle Coherence, Oracle NoSQL Database.   All three of these technologies address a high throughput, low latency data management requirement.   Oracle Event Processing enables continuous queries that filter the Big Data fire hose, chains intelligent events to real-time service invocations, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time to allow SQL statements to look for triggers on events, like a breach of a weighted moving average on a real-time data stream.    Oracle Coherence is a distributed, grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency speed access.  It also has some special capabilities to deploy remote behavioral execution for "near data" processing.   The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent low latency reads.  For example, when large sensor networks are generating data that need to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics.  Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis.  Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service. The question then becomes how these things work together to deliver an end-to-end Fast Data solution.  The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together.   You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme speed cache overflow and retrieval.   Here is a great reference to how these two technologies are integrated and work together.  Coherence & Oracle NoSQL Database.   On the stream processing side, it is similar to the Coherence case.  
As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real-time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.

    Read the article

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released.  This is the first of two beta releases intended to introduce users to the new features in the release.  This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work.  As is the case with all non-GA releases, it should not be used in any production environment.  It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point - if you can't find this version on some mirror, please try again later or choose another download site.) The 6.6 version of MySQL Connector/Net brings the following new features:
      * Stored routine debugging
      * Entity Framework 4.3 Code First support
      * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)
      * Full Visual Studio 2012 support: everything from Server Explorer to IntelliSense & the Stored Routine debugger
    Stored Procedure Debugging
    -------------------------------------------
    We are very excited to introduce stored procedure debugging into our Visual Studio integration.  It works in a very intuitive manner by simply clicking 'Debug Routine' from Server Explorer. You can debug stored routines, functions & triggers. Some of the new features in this release include:
      * Besides normal breakpoints, you can define conditional & pass count breakpoints.
      * The debugger editor now shows colorizing.
      * You can now change the values of locals in a function scope (previously this caused a deadlock due to functions executing within their own transaction).
      * You can now also debug triggers for 'replace' SQL statements.
      * In general, anything related to locals, watches, breakpoints, stepping & the call stack should work in a similar way to C#'s Visual Studio debugger.
    Some limitations remain, due to the current debugger architecture:
      * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).
      * Only one debug session may be active on a given server.
    The Debugger is feature complete at this point. We look forward to your feedback.
    Documentation
    -------------------------------------
    The documentation is still being developed and will be readily available soon (before Beta 2).  You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html. You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support! 

    Read the article

  • When will EBS 12.2 be released?

    - by Steven Chan (Oracle Development)
    The most frequently asked question at OpenWorld this year was, "When will EBS 12.2 be released?" Sadly, Oracle's communication policies prohibit us from speculating about release dates for unreleased software. We are not permitted to give estimates, rough timelines, guesses, or anything else that remotely resembles specific guidance on release dates. You can monitor My Oracle Support and this blog for updates on EBS 12.2.  I'll post them here as soon as they're available.  I'm embedding an old favourite from 2007 in its entirety here, since it applies equally to new releases as well as certifications.
    "Loose Lips Sink Ships" (March 20, 2007)
    If I were to sort emails in my inbox into groups, the biggest -- by far -- would be the one for emails that start with, "When will _____ be certified with the E-Business Suite?"  I answer these dutifully but know that my replies can sometimes be maddening, for two reasons:  technical uncertainty, and Oracle's rules for such communications.
    On the Spiral Model of Certifications
    Technology stack certifications tend to be highly iterative in nature.  As a result, statements about certification dates tend to be accurate only when made in hindsight.  Laypeople are horrified to hear this, but it's the ugly truth.  Uncertainty is simply inherent to the process.  I've become inured to it over the years, but it might come as a surprise to you that it can take many cycles to get fully-released software to work together.  Take this scenario: We test a particular combination of Component A and B. If we encounter a problem, say, with Component A, we log a bug. We receive a new version of Component A. The process iterates again. The reality is this: until a certification is completed and released, there's no accurate way of telling how many iterations are yet to come.  This is true regardless of the number of iterations that have already been completed.
    Our Lips Are Sealed
    Generally, people understand that things are subject to change, so the second reason I can't say anything specific is actually much more important than the first.  "Loose lips might sink ships" was coined in World War II in an effort to remind people that careless talk can have serious consequences.  Curiously, this applies to Oracle's communications about upcoming features, configurations, and releases, too.  As a publicly traded company, we have very strict policies that prohibit us from linking specific releases to specific dates.  If you've ever listened to an earnings call with analysts, you'll often hear them asking, "Can you add a little more color to that statement?"  For certifications, color is usually the only thing that I have.  Sometimes I can provide a bit more information about the technical nature of the certification in question, such as expected footprints or version levels.  I can occasionally share technical issues that we've found, too, to convey the degree of risk or complexity involved in the certification.  Aside from that, there's little additional information about specific dates, date ranges, or even speculation about dates that I can provide... that is, without having one of those uncomfortable conversations with Oracle Legal.  
So, as much as it pains me to do so, when it comes to dates, I'm always forced to conclude with a generic reply that blandly states one of the following:
    We're working on that certification right now
    That certification is in the pipeline but hasn't been started yet
    We don't have plans for that certification
    Don't Shoot the Messenger
    Thankfully, I've developed a thick skin over the years -- which is a good thing, considering the colorful and energetic responses I've received over the years after answering these questions.  However, on behalf of my Oracle colleagues who are faced with these questions every day in the field, I urge you to remember that they're required to follow these same corporate rules about date disclosures.  It never hurts to ask, but don't be too disappointed if we can't provide you with a detailed answer.  The Go-Go's had it right, after all.
    Related Articles
    Webcast Replay Available: Technical Preview of EBS 12.2 Online Patching

    Read the article

  • Property overwrite behaviour

    - by jeremyj
    I thought it worth sharing about property overwrite behaviour, because I found it confusing at first, in the hope of preventing some learning pain for the uninitiated with MSBuild :-)
    The confusion for me came because of the redundancy of using a Condition statement in a _project_ level property to test that a property has not been previously set. What I mean is that the following two statements are always identical in behaviour, regardless of whether the property has been supplied on the command line -
        <PropertyGroup>
          <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
        </PropertyGroup>
    has the same behaviour regardless of command line override as -
        <PropertyGroup>
          <PropA>PropA set at project level</PropA>
        </PropertyGroup>
    i.e. the two above property declarations have the same result whether the property is overridden on the command line or not. To prove this, experiment with the following .proj file -
        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" >
          <PropertyGroup>
            <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
          </PropertyGroup>
          <Target Name="Target1">
            <Message Text="PropA: $(PropA)"/>
          </Target>
          <Target Name="Target2">
            <PropertyGroup>
              <PropA>PropA set in Target2</PropA>
            </PropertyGroup>
            <Message Text="PropA: $(PropA)"/>
          </Target>
          <Target Name="Target3">
            <PropertyGroup>
              <PropA Condition=" '$(PropA)' == '' ">PropA set in Target3</PropA>
            </PropertyGroup>
            <Message Text="PropA: $(PropA)"/>
          </Target>
          <Target Name="Target4">
            <PropertyGroup>
              <PropA Condition=" '$(PropA)' != '' ">PropA set in Target4</PropA>
            </PropertyGroup>
            <Message Text="PropA: $(PropA)"/>
          </Target>
        </Project>
    Try invoking it using both of the following invocations and observe its output -
        1) >msbuild blog.proj /t:Target1;Target2;Target3;Target4
        2) >msbuild blog.proj /t:Target1;Target2;Target3;Target4 "/p:PropA=PropA set on command line"
    Then try those two invocations with the following three variations of specifying PropA at the project level -
        1) <PropertyGroup>
             <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
           </PropertyGroup>
        2) <PropertyGroup>
             <PropA>PropA set at project level</PropA>
           </PropertyGroup>
        3) <PropertyGroup>
             <PropA Condition=" '$(PropA)' != '' ">PropA set at project level</PropA>
           </PropertyGroup>

    Read the article

  • Broken Views

    - by Ajarn Mark Caldwell
    “SELECT *” isn’t just hazardous to performance, it can actually return blatantly wrong information. There are a number of blog posts and articles out there that actively discourage the use of the SELECT * FROM … syntax.  The two most common explanations that I have seen are:
    Performance:  The SELECT * syntax will return every column in the table, but frequently you really only need a few of the columns, and so by using SELECT * you are retrieving large volumes of data that you don’t need, but the system has to process, marshal across tiers, and so on.  It would be much more efficient to only select the specific columns that you need.
    Future-proof:  If you are taking other shortcuts in your code, along with using SELECT *, you are setting yourself up for trouble down the road when enhancements are made to the system.  For example, if you use SELECT * to return results from a table into a DataTable in .NET, and then reference columns positionally (e.g. myDataRow[5]) you could end up with bad data if someone happens to add a column into position 3, skewing all the remaining columns’ ordinal positions.  Or if you use INSERT…SELECT * then you will likely run into errors when a new column is added to the source table in any position.
    And if you use SELECT * in the definition of a view, you will run into a variation of the future-proof problem mentioned above.  One of the guys on my team, Mike Byther, ran across this in a project we were doing, but fortunately he caught it while we were still in development.  I asked him to put together a test to prove that this was related to the use of SELECT * and not some other anomaly.  I’ll walk you through the test script so you can see for yourself what happens.
    We are going to create a table and two views that are based on that table; one of them uses SELECT * and the other explicitly lists the column names.  The script to create these objects is listed below.
        IF OBJECT_ID('testtab') IS NOT NULL DROP TABLE testtab
        go
        IF OBJECT_ID('testtab_vw') IS NOT NULL DROP VIEW testtab_vw
        go
        IF OBJECT_ID('testtab_vw_named') IS NOT NULL DROP VIEW testtab_vw_named
        go
        CREATE TABLE testtab (col1 NVARCHAR(5) null, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col2)
        VALUES ('A','B'), ('A','B')
        GO
        CREATE VIEW testtab_vw AS SELECT * FROM testtab
        GO
        CREATE VIEW testtab_vw_named AS SELECT col1, col2 FROM testtab
        go
    Now, to prove that the two views currently return equivalent results, select from them.
        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named
    OK, so far, so good.  Now, what happens if someone makes a change to the definition of the underlying table, and that change results in a new column being inserted between the two existing columns?  (Side note, I normally prefer to append new columns to the end of the table definition, but some people like to keep their columns alphabetized, and for clarity for later people reviewing the schema, it may make sense to group certain columns together.  Whatever the reason, it sometimes happens, and you need to protect yourself and your code from the repercussions.) 
        DROP TABLE testtab
        go
        CREATE TABLE testtab (col1 NVARCHAR(5) null, col3 NVARCHAR(5) NULL, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col3, col2)
        VALUES ('A','C','B'), ('A','C','B')
        go
        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named
    I would have expected that the view using SELECT * in its definition would essentially pass through the column name and still retrieve the correct data, but that is not what happens.  When you run our two select statements again, you see that the view that is based on SELECT * actually retrieves the data based on the ordinal position of the columns at the time that the view was created.  Sure, one work-around is to recreate the view, but you can’t really count on other developers to know the dependencies you have built in, and they won’t necessarily recreate the view when they refactor the table.
    I am sure that there are reasons and justifications for why views behave this way, but I find it particularly disturbing that you can have code asking for col2, but actually be receiving data from col3.  By the way, for the record, this entire scenario and accompanying test script apply to SQL Server 2008 R2 with Service Pack 1.
    So, let the developer beware…know what assumptions are in effect around your code, and keep on discouraging people from using SELECT * syntax in anything but the simplest of ad-hoc queries.
    And of course, let’s clean up after ourselves.  To eliminate the database objects created during this test, run the following commands.
        DROP TABLE testtab
        DROP VIEW testtab_vw
        DROP VIEW testtab_vw_named

    Read the article

  • SQL Server: Writing CASE expressions properly when NULLs are involved

    - by Mladen Prajdic
    We’ve all written a CASE expression (yes, it’s an expression and not a statement) or two every now and then. But did you know there are actually two formats you can write a CASE expression in? This actually bit me when I was trying to add some new functionality to an old stored procedure: in some rare cases the stored procedure just didn’t work correctly. After a quick look it turned out to be a CASE expression problem when dealing with NULLs.

    In the first format we make simple “equals to” comparisons against a value:

        SELECT CASE <value>
            WHEN <equals this value> THEN <return this>
            WHEN <equals this value> THEN <return this>
            -- ... more WHENs here
            ELSE <return this>
        END

    The second format is much more flexible, since it allows for complex conditions. USE THIS ONE!

        SELECT CASE
            WHEN <value> <compared to> <value> THEN <return this>
            WHEN <value> <compared to> <value> THEN <return this>
            -- ... more WHENs here
            ELSE <return this>
        END

    Now that we know both formats, and you know which one to use (the second one, if that hasn’t been clear enough), here’s an example of how the first format WILL make your evaluation logic WRONG. Run the following code for different values of @i; just comment out any two of the three “SELECT @i =” statements.

        DECLARE @i INT
        SELECT @i = -1   -- first result
        SELECT @i = 55   -- second result
        SELECT @i = NULL -- third result

        SELECT @i AS OriginalValue,
            -- first CASE format. DON'T USE THIS!
            CASE @i
                WHEN -1 THEN '-1'
                WHEN NULL THEN 'We have a NULL!'
                ELSE 'We landed in ELSE'
            END AS DontUseThisCaseFormatValue,
            -- second CASE format. USE THIS!
            CASE
                WHEN @i = -1 THEN '-1'
                WHEN @i IS NULL THEN 'We have a NULL!'
                ELSE 'We landed in ELSE'
            END AS UseThisCaseFormatValue

    When the value of @i is –1, everything works as expected: both formats go into the –1 WHEN branch. When the value of @i is 55, everything again works as expected: both formats go into the ELSE branch. When the value of @i is NULL, the problem becomes evident. The first format doesn’t go into the WHEN NULL branch because it makes an equality comparison between two NULLs, and since NULL is an unknown value, NULL = NULL evaluates to UNKNOWN rather than true. That is why the first format falls into the ELSE branch, while the second format correctly handles the proper IS NULL comparison.

    Please use the second, more explicit format. Your future self will be very grateful to you when he doesn’t have to discover these kinds of bugs.
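    To distill the pitfall down to a single statement, here is a minimal sketch of the comparison that silently fails:

        DECLARE @x INT
        SET @x = NULL
        -- The simple CASE compiles to @x = NULL, which evaluates to UNKNOWN,
        -- so the WHEN branch never matches.
        SELECT CASE @x WHEN NULL THEN 'matched' ELSE 'fell through to ELSE' END
        -- Returns: 'fell through to ELSE'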

    Read the article

  • Is changing my job now a wise decision? [closed]

    - by FlaminPhoenix
    First, a little background about myself. I am a JavaScript programmer with 3.8 years of experience. I joined my current company a year and 3 months ago, and I was recruited as a JavaScript programmer. I was under the impression that I'd be a programmer on a programming team, but this was not the case. No one else on my team except me and my manager knows anything about programming. The other two teammates copy and paste stuff from websites into Excel sheets. I was told I was being recruited for a new project, and it was true. The only problem was that the server-side language they were using was PHP. They were using a popular library with PHP, and I had never worked with PHP before. Nevertheless, I learned it well enough to get things working, and received high praise from my boss's boss on whichever project I worked on. Words like "wow" and "This looks great, the client's gonna be impressed with this" were sprinkled every now and then on reviewing my work. They even managed to sell my work to a couple of clients, and as I understand it, both of my projects are going to fetch them a pretty buck.

    The problem: I was asked to move onto a project my manager was handling. I asked for training on the project, which never came, and sure enough I couldn't complete my first task on the new project without shortcomings. I told my manager there were things I didn't know how to get done in the new project due to lack of training. His project had zero documentation. I was told he would "take care" of everything relating to those shortcomings. In the meantime, I was asked to switch to another project. My manager made the necessary changes and later told me that the build had "broken" on the production server and that I needed to "test" my changes before saying things were done. I never deployed it on the production server; he did. I never saw, or had the opportunity to see, the final build before it went to production. He called me into a separate meeting and started pointing fingers at me, but I took full responsibility even though I didn't have to. He later got on a call with his boss, in my presence, and gave him the impression that it was all my fault. I have not confronted him about this so far.

    I have worked late and done overtime without being asked a lot, but last week I had just gotten home from work when I got calls asking me to solve an issue which, until then, they had kept quiet about even though they were informed of it. I asked my manager why I hadn't been tasked with this while I was in the office. He started telling me which statements to put where, as if to mock me, saying this "is hardly an overtime issue", and that pissed me off. Also, during the previous meeting, he was constantly talking up his own work while trying to demean mine.

    In the meantime, I have attended an interview with another MNC, and the interviewers there were fully respectful of my decision to leave my current company. It's a software company, so I can expect my colleagues to know a lot more than me. I'm told I can expect their offer anytime this week.

    My questions: Is my anger towards my manager justified? When leaving, do I tell him that it's because of his actions that I'm leaving? Do I erupt in anger and tell him that he shouldn't have put the blame on me, since he was the one doing the deployment? This is going to be my second resignation from this company. The first time I wanted to resign, I was asked to stay back, and my manager promised a lot of changes, a couple of which were made.
How do I keep myself from getting into such situations with my employers in the future?

    Read the article

  • Wireless AAA for a small, bandwidth-limited hotel.

    - by Anthony Hiscox
    We (the tech I work with and myself) live in a remote northern town where Internet access is somewhat of a luxury and bandwidth is quite limited. Here, overage charges ranging from a few hundred to a few thousand dollars a month are not uncommon. I myself incur regular monthly charges just through my normal Internet usage at home (I am allowed 10G for $60 CAD!). As part of my work, I have found myself involved with several hotels that are feeling this. I know that I can come up with something to solve this problem, but I am relatively new to system administration and I don't want my dreams to overcome reality. So, I pass these ideas on to you, those with much more experience than I, in hopes you will share some of your thoughts and concerns.

    The requirements:

    - The system must be cost effective. Yes, the charges are high here, but the trust in technology is the lowest I've ever seen.
    - Must be capable of helping clients reduce their usage (squid).
    - Allow a limited (throughput and total usage) amount of free Internet, as this is often franchise policy.
    - Allow a user to track their bandwidth usage.
    - Allow (optional) higher speed and/or usage for an additional charge. This fee can be collected at the front desk on checkout and should not require the use of PayPal or a credit card.
    - Unfortunately, some franchises have ridiculous policies that require the use of a third-party remote service to authenticate guests to your network. This means WPA is out, and it also means that I do not authenticate before Internet usage; that will be their job. However, I do require the ABILITY to perform authentication for Internet access if a hotel does not have this policy. I will still have to track bandwidth (under a guest account by default) and provide the same limiting; however, the guest will often require completely 'unlimited' access, in terms of existence, not throughput.
    - Provide firewalling capabilities for hotels that have nothing: office and guest network segregation (some of these guys are running their office on the guest network, with no encryption and a simple TOS to get on!).
    - Prevent guests from connecting to other guests, but provide a means to allow this on request. E.g., each guest connects to a page and allows the other guest; this writes an iptables rule (with python-netfilter) and allows two rooms to play a game, for instance.

    My thoughts on how to implement this: one decent box (we'll call it a router now) with a lot of RAM and 3 NICs:

    - Internet
    - Office
    - Guests (APs + in-room Ethernet)

    Router firewall rules: guests can talk to the router only, through which they are routed to wherever they need to go, including Internet services. The Office NIC can be used to bridge the office to the Internet if an existing solution is not in place; otherwise, it simply serves a network-accessible web interface (webmin + python-webmin?).

    Router software: OpenVZ provides virtualization for a few services I don't really trust: Squid, FreeRADIUS, and Apache. The only service directly accessible to guests is Apache. Apache has mod_wsgi and Django, because I can write quickly using Django and my needs are low. It also potentially has the FreeRADIUS mod, but there seem to be some caveats with that. Firewall rules are handled on the router with iptables. Webmin (or maybe a custom Django app) provides abstracted control over any features that the staff may need to access. Python, if you haven't guessed, is the language I feel most comfortable in, and I use it for almost everything.

    And finally: has this been done? Is it an overly massive project not worth taking on for one guy? And/or are there tools I'm missing that could make my life easier? For the record, I am fairly good with Python, but not very familiar with many other languages (I can struggle through PHP; it's a cosmetic issue there). I am also an avid Linux user, comfortable with config files and the command line. Thank you for your time; I look forward to reading your responses.

    Edit: My apologies if this is not a Q&A in the sense that some were expecting; I'm just looking for ideas and to make sure I'm not trying to do something that's been done. I'm looking at pfSense now as a possible start for what I need.

    Read the article

  • My PC suddenly doesn't detect the primary drive (SSD)

    - by smoth190
    My computer has been working fine for months, and it worked today, but tonight I went to start it up and found that my OCZ Vertex 2 isn't being detected. When I turn on the computer, the loading screen gets stuck at "Detecting IDE drives...". After a while it keeps going and lists the drives it finds. The first one in the list should be my Vertex 2, but it just says "None". The computer then gets stuck on "Loading operating system...", which is understandable because the drive with the OS is "gone".

    My first thought was drive failure, but every time a drive has crashed on me, it was still detected; it just didn't work. This drive is an SSD, it's pretty new, and I had no problems beforehand. I find it hard to believe it failed. I'm sure it's possible, but I hope this isn't the case. There has been nothing strange going on with my PC at all; it ran perfectly until now. I was just about to do my monthly chkdsk and defrag today.

    I popped in my Windows 7 Home Premium disc and booted from it. When I launched the repair tool, it didn't list any operating systems (because the drive is 100% missing...). When I've had disks crash before, it still listed the OS; you just couldn't do anything with it. I tried to restore from an image, but I don't have any of those either. I opened the command console and listed the drives with wmic logicaldisk get name. Only C: and D: came up. C: was my 1TB storage drive (luckily, all my stuff is there; only the OS is on the SSD!) and D: was the optical drive. So I still had an MIA drive... The SSD didn't come with any driver disks, so I can't install drivers. If there's a way to do this from a CD I can burn with my other PC, please let me know.

    What the heck do I do? Although only the OS is on my SSD, a new SSD is expensive. I'll probably also have to buy a new copy of Windows (an upgrade would be nice, though...) because I've found it eats my registration key when my PC crashes (and my thousands of dollars of Adobe programs; I'll be on the phone with tech support for a week to get those keys back). And I'll lose my registry, all my settings, and all sorts of other stuff that I'll spend weeks restoring. My computer is a pain in the butt to take out and open up, so if I can't fix it, I'll try fiddling with the plug or putting the drive into a new computer, but not right now. Any help is greatly appreciated! The day they make crash-proof drives will be the day I live without worry.

    Read the article

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
    I have a two-part question about sending query results as attachments using sp_send_dbmail.

    Problem 1: Only basic .txt files will open. Any other format, like .pdf or .jpg, comes through corrupted.

    Problem 2: When attempting to send multiple attachments, I receive one file with all of the file names glued together.

    I'm running SQL Server 2005, and I have a table storing uploaded documents:

        CREATE TABLE [dbo].[EmailAttachment](
            [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL,
            [MassEmailID] [int] NULL, -- foreign key
            [FileData] [varbinary](max) NOT NULL,
            [FileName] [varchar](100) NOT NULL,
            [MimeType] [varchar](100) NOT NULL
        )

    I also have a MassEmail table with standard email stuff. Here is the SQL send-mail script. For brevity, I've excluded the DECLARE statements.

        WHILE ((SELECT COUNT(MassEmailID) FROM MassEmail WHERE status = 20) > 0)
        BEGIN
            SELECT @MassEmailID = MIN(MassEmailID) FROM MassEmail WHERE status = 20
            SELECT @Subject = [Subject] FROM MassEmail WHERE MassEmailID = @MassEmailID
            SELECT @Body = Body FROM MassEmail WHERE MassEmailID = @MassEmailID

            SET @query = 'set nocount on; select cast(FileData as varchar(max)) from Mydatabase.dbo.EmailAttachment where MassEmailID = ' + CAST(@MassEmailID AS varchar(100))

            SELECT @filename = ''
            SELECT @filename = COALESCE(@filename + ',', '') + FileName
            FROM EmailAttachment
            WHERE MassEmailID = @MassEmailID

            EXEC msdb.dbo.sp_send_dbmail
                @profile_name = 'MASS_EMAIL',
                @recipients = '[email protected]',
                @subject = @Subject,
                @body = @Body,
                @body_format = 'HTML',
                @query = @query,
                @query_attachment_filename = @filename,
                @attach_query_result_as_file = 1,
                @query_result_separator = '; ',
                @query_no_truncate = 1,
                @query_result_header = 0;

            UPDATE MassEmail
            SET status = 30, SendDate = GETDATE()
            WHERE MassEmailID = @MassEmailID
        END

    I am able to successfully read files from the database, so I know the binary data is not corrupted. .txt files only read correctly when I cast FileData to varchar, but clearly the original headers are lost. It's also worth noting that the attachment file sizes differ from the original files; that is most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored mime type, or some way to include the file headers in the binary data. I'm also not confident in the values of the last few parameters, and I know the COALESCE is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!
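    One avenue worth exploring, offered as a sketch rather than a tested solution: sp_send_dbmail also accepts a @file_attachments parameter, a semicolon-delimited list of paths to files on disk that the SQL Server service account can read. If each FileData value is first written out to a real file (with bcp, for instance), the attachments keep their original bytes and names, which sidesteps both the varchar conversion and the glued-together filename problem. The paths below are hypothetical:

        -- Hypothetical: each varbinary row has already been exported to disk.
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'MASS_EMAIL',
            @recipients = '[email protected]',
            @subject = @Subject,
            @body = @Body,
            @body_format = 'HTML',
            @file_attachments = 'C:\Temp\invoice.pdf;C:\Temp\logo.jpg'; -- semicolon-delimited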

    Read the article

  • UIViewController presentModalViewController: animated: doing nothing?

    - by ryyst
    Hi, I recently started a project using Apple's Utility Application example project. In the example project, there's an info button that shows an instance of FlipsideView. If you know the Weather.app, you know what the button acts like.

    I then changed MainWindow.xib to contain a scroll view in the middle of the window and a page-control view at the bottom of the window (again, like the Weather.app). The scroll view gets filled with instances of MainView. When I then clicked the info button, the FlipsideView would show, but only in the area that was previously filled by the MainView instance; this means that the page-control view at the bottom of the page was still visible when the FlipsideView instance got loaded.

    So I thought I would simply add a UIViewController for the top-most window, the one declared inside the AppDelegate created along with the project. I created a subclass of UIViewController, put an instance of it inside MainWindow.xib, and connected its view outlet to the UIWindow declared as window inside the app delegate. I also changed the button's action so that it now sends a message to the MainWindowController instance. The message does get sent (I checked with NSLog() statements), but the FlipsideView doesn't get shown. Here's the relevant (?) code:

        FlipsideViewController *controller = [[FlipsideViewController alloc] initWithNibName:@"FlipsideView" bundle:nil];
        controller.delegate = self;
        controller.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
        [self presentModalViewController:controller animated:YES];
        [controller release];

    Why's this not working? I've uploaded the entire project here for you to be able to see the whole thing. Thanks for help! -- Ry

    Read the article

  • managing images in an iphone/ipad universal app

    - by taber
    Hi, I'm just curious what methods people are using to dynamically use larger or smaller images in their universal iPhone/iPad apps. I created a large test image and tried scaling it down (using cocos2d) by 0.46875. After viewing that in the iPhone 4.0 simulator, I found the results were pretty crappy: rough pixel edges, etc. Plus, loading huge image files for iPhone users when they don't need them is pretty lame. So I guess what I will probably have to do is save out two versions of every sprite, large (for the iPad) and small (for iPhone/iPod Touch), then detect the user's device and spit out the proper sprite, like so:

        NSString *deviceType = [UIDevice currentDevice].model;
        CCSprite *test;
        if ([deviceType isEqualToString:@"iPad"]) {
            test = [CCSprite spriteWithFile:@"testBigHuge.png"];
        } else {
            test = [CCSprite spriteWithFile:@"testRegularMcTiny.png"];
        }
        [self addChild:test];

    How are you guys doing this? I'd rather avoid sprinkling all of my code with if statements like this. I also want to avoid using .xib files, since it's an OpenGL-based app. Thanks!

    Read the article

  • ZF2 Zend\Db Insert/Update Using Mysql Expression (Zend\Db\Sql\Expression?)

    - by Aleross
    Is there any way to include MySQL expressions like NOW() in the current build of ZF2 (2.0.0beta4) through Zend\Db and/or TableGateway insert()/update() statements? Here is a related post on the mailing list, though it hasn't been answered: http://zend-framework-community.634137.n4.nabble.com/Zend-Db-Expr-and-Transactions-in-ZF2-Db-td4472944.html

    It appears that ZF1 used something like:

        $data = array(
            'update_time' => new \Zend\Db\Expr("NOW()")
        );

    And after looking through Zend\Db\Sql\Expression I tried:

        $data = array(
            'update_time' => new \Zend\Db\Sql\Expression("NOW()"),
        );

    But I get the following error:

        Catchable fatal error: Object of class Zend\Db\Sql\Expression could not be converted to string in /var/www/vendor/ZF2/library/Zend/Db/Adapter/Driver/Pdo/Statement.php on line 256

    As recommended here: http://zend-framework-community.634137.n4.nabble.com/ZF2-beta3-Zend-Db-Sql-Insert-new-Expression-now-td4536197.html I'm currently just setting the date with PHP code, but I'd rather use the MySQL server as the single source of truth for dates and times.

        $data = array(
            'update_time' => date('Y-m-d H:i:s'),
        );

    Thanks, I appreciate any input!

    Read the article

  • android.intent.action.SCREEN_ON doesn't work as a receiver intent filter

    - by Jim Blackler
    I'm trying to get a BroadcastReceiver invoked when the screen is turned on. In my AndroidManifest.xml I have specified:

        <receiver android:name="IntentReceiver">
            <intent-filter>
                <action android:name="android.intent.action.SCREEN_ON"></action>
            </intent-filter>
        </receiver>

    However, it seems the receiver is never invoked (breakpoints don't fire, log statements are ignored). I've swapped out SCREEN_ON for BOOT_COMPLETED as a test, and that does get invoked. This is in a 1.6 (SDK level 4) project.

    A Google Code Search revealed the project below; I downloaded it, synced it, and converted it to work with the latest tools, but it too is not able to intercept that event. http://www.google.com/codesearch/p?hl=en#_8L9bayv7qE/trunk/phxandroid-intent-query/AndroidManifest.xml&q=android.intent.action.SCREEN_ON

    Is this perhaps no longer supported? Previously I have been able to intercept this event successfully with a call to Context.registerReceiver(), like so:

        registerReceiver(new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                // ...
            }
        }, new IntentFilter(Intent.ACTION_SCREEN_ON));

    However, that was done by a long-lived Service. Following sage advice from CommonsWare, I have elected to try to remove the long-lived Service and use different techniques. But I still need to detect the screen off and on events.

    Read the article
