Search Results

Search found 1351 results on 55 pages for 'pain'.

Page 40 of 55

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today's standards but which performed the useful function of polling every system, including all the servers in my charge. For each service you needed to monitor, it ran a configurable script that was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 ("hurtin' real bad") to 1 ("Things is great"). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file. Very soon, we had a large TFT screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee and dived in.

It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers were performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully, they said; particularly flesh-tints.) If you are instantly alerted when things start to go wrong, there is a good chance you can fix the problem before the users of the system alert you to it.

There is a world of difference between this sort of tool, one that gives whoever is 'on watch' in the server room the first warning of a potential problem on one of any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA brain. I like to get the early warning and the right information to help diagnose a problem: no auto-fix, just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn't do anything for me, especially before I know where I should be flying.

Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor. It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information. Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out. Although the idea of an application that automates the bulk of a DBA's skills is attractive to many, I can't see it happening soon. SQL Server's complexity increases faster than the panaceas can be created.

In the meantime, I believe that the best way of helping DBAs is to make the monitoring process as simple and effective as possible, and to provide the right sort of detail and 'evidence' to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.
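As an aside, the polling contract described above (a script per service that just returns an integer pain level) is simple enough to sketch. The check below is purely illustrative: the use of a TCP probe, the timeout and the thresholds are assumptions for the sake of the example, not details of the tool described here.

// Illustrative only: a health check that reports a "pain level" from 1 (fine) to 10 (hurting),
// in the spirit of the polling scripts described above.
using System;
using System.Diagnostics;
using System.Net.Sockets;

class PainCheck
{
    static int Main(string[] args)
    {
        string host = args.Length > 0 ? args[0] : "localhost";
        int port = args.Length > 1 ? int.Parse(args[1]) : 1433;

        var watch = Stopwatch.StartNew();
        try
        {
            using (var client = new TcpClient())
            {
                // Treat a five-second connect timeout as the worst case.
                if (!client.ConnectAsync(host, port).Wait(5000))
                    return 10;
            }
        }
        catch (Exception)
        {
            return 10;   // connection refused, DNS failure, etc.
        }
        watch.Stop();

        // Map response time to a pain level; the monitoring tool only cares about the integer.
        if (watch.ElapsedMilliseconds < 100) return 1;
        if (watch.ElapsedMilliseconds < 500) return 3;
        if (watch.ElapsedMilliseconds < 2000) return 6;
        return 8;
    }
}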

    Read the article

  • Simple Merging Of PDF Documents with iTextSharp 5.4.5.0

    - by Mladen Prajdic
    As we were working on our first SQL Saturday in Slovenia, we came to a point when we had to print out the so-called SpeedPASSes for attendees. This SpeedPASS file is a PDF and contains their raffle, lunch and admission tickets. The problem is we have to download one PDF per attendee and print that out. And printing more than 10 docs at once is a pain. So I decided to make a little console app that would merge multiple PDF files into a single file that would be much easier to print. I used an open source PDF manipulation library called iTextSharp, version 5.4.5.0.

This is the console program I used. It's brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you: it can be used to merge any PDF files. The first parameter is the name of the target PDF file that will be created. The second parameter is the directory containing PDF files to be merged into a single file.

using iTextSharp.text;
using iTextSharp.text.pdf;
using System;
using System.IO;

namespace MergeSpeedPASS
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
            {
                Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp 5.4.5.0.");
                Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
                Console.WriteLine("");
                Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
                Console.WriteLine("         targetFileName = name of the new merged PDF file. Must include .pdf extension.");
                Console.WriteLine("         sourceDir      = path to the dir containing downloaded attendee SpeedPASS PDFs");
                Console.WriteLine("");
                Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
            }
            else if (args.Length == 2)
                CreateMergedPDF(args[0], args[1]);

            Console.WriteLine("");
            Console.WriteLine("Press any key to exit...");
            Console.Read();
        }

        static void CreateMergedPDF(string targetPDF, string sourceDir)
        {
            using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
            {
                Document pdfDoc = new Document(PageSize.A4);
                PdfCopy pdf = new PdfCopy(pdfDoc, stream);
                pdfDoc.Open();

                var files = Directory.GetFiles(sourceDir);
                Console.WriteLine("Merging files count: " + files.Length);

                int i = 1;
                foreach (string file in files)
                {
                    Console.WriteLine(i + ". Adding: " + file);
                    pdf.AddDocument(new PdfReader(file));
                    i++;
                }

                if (pdfDoc != null)
                    pdfDoc.Close();

                Console.WriteLine("SpeedPASS PDF merge complete.");
            }
        }
    }
}

Hope it helps you and have fun.

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked to set up retail operations for an electronic goods, daily consumables or luxury brand retailer. It is very likely you will be faced with the following questions:

1. What are the essential business capabilities that you must have in place?
2. What are the essential business activities under-pinning each of the business capabilities identified in Step 1?
3. What is the set of steps you need to perform to execute each of the business activities identified in Step 2?

Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short run and in the long run. In the short term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long term, as you grow in operations organically or through M&A, partnerships and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.

"As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here)

Now, one would think a lot of retailers have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely for that reason, a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following:

- Increased time and cost, due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch and incurring otherwise avoidable mistakes and errors by ignoring the experience of others
- Sub-optimal process efficiency, due to a narrow, isolated view of processes that ignores process inter-dependencies, i.e. optimizing the parts but not the whole, resulting in a lack of transparency and inter-departmental finger-pointing

Embracing ARTS standards as a blueprint for establishing, managing or streamlining your retail operations can benefit you in the following ways:

- Improved time-to-market from parity with industry best-practice processes such as ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems
- Lower operating costs by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of taking a narrow, silo-ed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization

While parity with an industry standard such as the ARTS business process model does not by itself create a differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympic Games (1900 to 2008) at OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS).

Frankly though, the rigmarole of standing up your own Windows Azure SQL Database (OK, SQL Azure database) is both costly (SQL Azure isn't free) and time consuming (the provided instructions aren't exactly an idiot's guide, and getting SSIS to work properly with Excel isn't a barrel of laughs either). To ease the pain for all you BI folks out there that simply want to party on the data, I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below; I'll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community and which is supported by voluntary donations.

To view the data, the credentials you need are:

  Server    mhknbn2kdz.database.windows.net
  Database  AdventureWorks2012
  User      sqlfamily
  Password  sqlf@m1ly

Type those into SSMS and away you go; the data is provided in four tables: [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] & [olympics].[Medalist].

I figured this would be a good candidate for a Power View report, so I fired up Excel 2013 and built such a report to slice'n'dice through the data. Here are some screenshots that should give you a flavour of what is available:

- A view of all the available data
- Where do all the gymnastics medals go?
- Which countries do the top ten all-time medal winners come from?

You get the idea. There is masses of information here and if you have Excel 2013 handy, Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself you can have the one that I took these screenshots from; it is available on my SkyDrive at OlympicsAnalysis.xlsx, so just hit the link and download to play to your heart's content. Party on, people!

As I said above, the data is hosted on a SQL Azure database that I use for hosting "AdventureWorks on Azure", which I first announced in March 2013 at AdventureWorks2012 now available for all on SQL Azure. I'll repeat the pertinent parts of that blog post here:

I am pleased to announce that as of today … [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use but SQL Azure is of course not free, so before I give you the credentials please lend me your ears (well, eyes) for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year.

If the community contributes more than we need then there are a number of additional things that could be done:

- Host additional databases (Northwind anyone??)
- Host in more datacentres (this first one is in Western Europe)
- Make a charitable donation

That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution.

I'd like to emphasize that last point. If my hosting this Olympics data is useful to you, please support this initiative by donating. Thanks in advance.

@Jamiet
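If you would rather pull the data programmatically than through SSMS, a minimal sketch along the following lines should work. Everything beyond the server, database, credentials and table names quoted above (the connection-string options, the choice to sample [olympics].[Medalist]) is an assumption for illustration:

// Minimal sketch: connect to the community-hosted database with the published credentials
// and sample a few rows from one of the olympics tables. Options such as Encrypt=True are
// assumptions, not taken from the original post.
using System;
using System.Data.SqlClient;

class OlympicsSample
{
    static void Main()
    {
        const string connectionString =
            "Server=tcp:mhknbn2kdz.database.windows.net;Database=AdventureWorks2012;" +
            "User ID=sqlfamily;Password=sqlf@m1ly;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT TOP 10 * FROM [olympics].[Medalist];", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Column names aren't documented in the post, so dump each row generically.
                    for (int i = 0; i < reader.FieldCount; i++)
                        Console.Write(reader.GetName(i) + "=" + reader[i] + "  ");
                    Console.WriteLine();
                }
            }
        }
    }
}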

    Read the article

  • Looking for a real-world example illustrating that composition can be superior to inheritance

    - by Job
    I watched a bunch of lectures on Clojure and functional programming by Rich Hickey as well as some of the SICP lectures, and I am sold on many concepts of functional programming. I incorporated some of them into my C# code at a previous job, and luckily it was easy to write C# code in a more functional style.

At my new job we use Python and multiple inheritance is all the rage. My co-workers are very smart but they have to produce code fast given the nature of the company. I am learning both the tools and the codebase, but the architecture itself slows me down as well. I have not written the existing class hierarchy (nor would I be able to remember everything about it), and so, when I started adding a fairly small feature, I realized that I had to read a lot of code in the process. On the surface the code is neatly organized and split into small functions/methods and not copy-paste-repetitive, but the flip side of being not repetitive is that there is some magic functionality hidden somewhere in the hierarchy chain that magically glues things together and does work on my behalf, but it is very hard to find and follow. I had to fire up a profiler and run it through several examples and plot the execution graph, as well as step through a debugger a few times, search the code for some substring and just read pages at a time.

I am pretty sure that once I am done, my resulting code will be short and neatly organized, and yet not very readable. What I write feels declarative, as if I were writing an XML file that drives some other magic engine, except that there is no clear documentation on what the XML should look like and what the engine does, apart from the existing examples that I can read as well as the source code for the 'engine'. There has got to be a better way. IMO using composition over inheritance can help quite a bit. That way the computation will be linear rather than jumping all over the hierarchy tree. Whenever the functionality does not quite fit into an inheritance model, it will need to be mangled to fit in, or the entire inheritance hierarchy will need to be refactored/rebalanced, sort of like an unbalanced binary tree needs reshuffling from time to time in order to improve the average seek time.

As I mentioned before, my co-workers are very smart; they just have been doing things a certain way and probably have an ability to hold a lot of unrelated crap in their head at once. I want to convince them to give composition, and a functional as opposed to OOP approach, a try. To do that, I need to find some very good material. I do not think that a SICP lecture or one by Rich Hickey will do - I am afraid it will be flagged down as too academic. Then, simple examples of Dog and Frog and AddressBook classes do not really convince one way or the other - they show how inheritance can be converted to composition but not why it is truly and objectively better.

What I am looking for is some real-world example of code that was written with a lot of inheritance, then hit a wall and was re-written in a different style that uses composition. Perhaps there is a blog or a chapter. I am looking for something that can summarize and illustrate the sort of pain that I am going through. I already have been throwing the phrase "composition over inheritance" around, but it was not received as enthusiastically as I had hoped. I do not want to be perceived as a new guy who likes to complain and bash existing code while looking for a perfect approach and not contributing fast enough. At the same time, my gut is convinced that inheritance is often an instrument of evil and I want to show a better way in the near future. Have you stumbled upon any great resources that can help me?
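For what it's worth, the contrast being asked about usually boils down to something like the sketch below (in C#, since that is what the poster used previously). It is a made-up, minimal illustration rather than the real-world case study being requested: the point is only that the composed version keeps the collaborator explicit and the control flow linear.

// Made-up illustration of the trade-off discussed above, not a real-world case study.

// Inheritance version: the rendering behaviour lives somewhere up the hierarchy,
// so following Export() means walking the class tree.
abstract class ReportBase
{
    protected abstract string Render(string data);   // overridden several levels down in practice

    public string Export(string data)
    {
        return Render(data);
    }
}

class CsvReport : ReportBase
{
    protected override string Render(string data)
    {
        return data.Replace('\t', ',');
    }
}

// Composition version: the same behaviour is an explicit collaborator handed in via the
// constructor, so you can see, swap and test exactly what the report depends on.
interface IRenderer
{
    string Render(string data);
}

class CsvRenderer : IRenderer
{
    public string Render(string data)
    {
        return data.Replace('\t', ',');
    }
}

class Report
{
    private readonly IRenderer _renderer;

    public Report(IRenderer renderer)
    {
        _renderer = renderer;
    }

    public string Export(string data)
    {
        return _renderer.Render(data);
    }
}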

    Read the article

  • Dependency Injection Introduction

    - by MarkPearl
    I recently was going over a great book called "Dependency Injection in .NET" by Mark Seemann. So far I have really enjoyed the book and would recommend anyone looking to get into DI to give it a read. Today I thought I would blog about the first example Mark gives in his book to illustrate some of the benefits that DI provides. The ones he lists are:

- Late binding
- Extensibility
- Parallel development
- Maintainability
- Testability

To illustrate some of these benefits he gives a HelloWorld example using DI that illustrates some of the basic principles. It goes something like this…

class Program
{
    static void Main(string[] args)
    {
        var writer = new ConsoleMessageWriter();
        var salutation = new Salutation(writer);
        salutation.Exclaim();
        Console.ReadLine();
    }
}

public interface IMessageWriter
{
    void Write(string message);
}

public class ConsoleMessageWriter : IMessageWriter
{
    public void Write(string message)
    {
        Console.WriteLine(message);
    }
}

public class Salutation
{
    private readonly IMessageWriter _writer;

    public Salutation(IMessageWriter writer)
    {
        _writer = writer;
    }

    public void Exclaim()
    {
        _writer.Write("Hello World");
    }
}

If you had asked me a few years ago whether I thought this was a good approach to solving the HelloWorld problem, I would have responded "No". How could the above be better than the following?

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello World");
        Console.ReadLine();
    }
}

Today, my mindset has changed because of the pain of past programs. So often we look at a small snippet of code and make judgements, when we need to keep in mind that we will most probably be implementing these patterns in projects with hundreds of thousands of lines of code, and in projects where we have tests that we don't want to break - and that's where the first solution outshines the second.

Let's see if the first example achieves some of the outcomes that were listed as benefits of DI. Could I test the first solution easily? Yes… We could write something like the following using NUnit and RhinoMocks:

[TestFixture]
public class SalutationTests
{
    [Test]
    public void ExclaimWillWriteCorrectMessageToMessageWriter()
    {
        var writerMock = MockRepository.GenerateMock<IMessageWriter>();
        var sut = new Salutation(writerMock);
        sut.Exclaim();
        writerMock.AssertWasCalled(x => x.Write("Hello World"));
    }
}

This would test the existing code fine. Let's say we then wanted to extend the original solution so that we had a secure message writer. We could write a class like the following…

public class SecureMessageWriter : IMessageWriter
{
    private readonly IMessageWriter _writer;
    private readonly string _secretPassword;

    public SecureMessageWriter(IMessageWriter writer, string secretPassword)
    {
        _writer = writer;
        _secretPassword = secretPassword;
    }

    public void Write(string message)
    {
        if (_secretPassword == "Mark")
        {
            _writer.Write(message);
        }
        else
        {
            _writer.Write("Unauthenticated");
        }
    }
}

And then extend our implementation of the program as follows…

class Program
{
    static void Main(string[] args)
    {
        var writer = new SecureMessageWriter(new ConsoleMessageWriter(), "Mark");
        var salutation = new Salutation(writer);
        salutation.Exclaim();
        Console.ReadLine();
    }
}

Our application has now been successfully extended, and yet we did very little code change. In addition, our existing tests did not break and we would just need to add tests for the extended functionality. Would this approach allow parallel development? Well, I am in two camps on parallel development, but with some planning ahead of time it would allow for it, as you would simply need to decide on the interface signature and could then have teams develop different sections programming to that interface.

So, this was really just a quick intro to some of the basic concepts of DI that Mark introduces very successfully in his book. I am hoping to blog about this further as I continue through the book, to list some of the more complex implementations of containers.

    Read the article

  • Where should instantiated classes be stored?

    - by Eric C.
    I'm having a bit of a design dilemma here. I'm writing a library that consists of a bunch of template classes that are designed to be used as a base for creating content. For example:

public class Template
{
    public string Name {get; set;}
    public string Description {get; set;}
    public string Attribute1 {get; set;}
    public string Attribute2 {get; set;}

    public Template()
    {
        //constructor
    }

    public void DoSomething()
    {
        //does something
    }
    ...
}

The problem is, not only is the library providing the templates, it will also supply quite a few predefined templates which are instances of these template classes. The question is, where do I put these instances of the templates? The three solutions I've come up with so far are:

1) Provide serialized instances of the templates as files. On the one hand, this solution would keep the instances separated from the library itself, which is nice, but it would also potentially add complexity for the user. Even if we provided methods for loading/deserializing the files, they'd still have to deal with a bunch of files, and some kind of config file so the app knows where to look for those files. Plus, creating the template files would probably require a separate app, so if the user wanted to stick with the files method of storing templates, we'd have to provide some kind of app for creating the template files. Also, this requires external dependencies for testing the templates in the user's code.

2) Add readonly instances to the template class. Example:

public class Template
{
    public string Name {get; set;}
    public string Description {get; set;}
    public string Attribute1 {get; set;}
    public string Attribute2 {get; set;}

    public Template PredefinedTemplate
    {
        get
        {
            Template templateInstance = new Template();
            templateInstance.Name = "Some Name";
            templateInstance.Description = "A description";
            ...
            return templateInstance;
        }
    }

    public Template()
    {
        //constructor
    }

    public void DoSomething()
    {
        //does something
    }
    ...
}

This method would be convenient for users, as they would be able to access the predefined templates in code directly, and would be able to unit test code that used them. The drawback here is that the predefined templates pollute the Template type namespace with a bunch of extra stuff. I suppose I could put the predefined templates in a different namespace to get around this drawback. The only other problem with this approach is that I'd have to basically duplicate all the namespaces in the library in the predefined namespace (e.g. Templates.SubTemplates and Predefined.Templates.SubTemplates) which would be a pain, and would also make refactoring more difficult.

3) Make the templates abstract classes and make the predefined templates inherit from those classes. For example:

public abstract class Template
{
    public string Name {get; set;}
    public string Description {get; set;}
    public string Attribute1 {get; set;}
    public string Attribute2 {get; set;}

    public Template()
    {
        //constructor
    }

    public void DoSomething()
    {
        //does something
    }
    ...
}

and

public class PredefinedTemplate : Template
{
    public PredefinedTemplate()
    {
        this.Name = "Some Name";
        this.Description = "A description";
        this.Attribute1 = "Some Value";
        ...
    }
}

This solution is pretty similar to #2, but it ends up creating a lot of classes that don't really do anything (none of our predefined templates are currently overriding behavior), and don't have any methods, so I'm not sure how good a practice this is. Has anyone else had any experience with something like this? Is there a best practice of some kind, or a different/better approach that I haven't thought of? I'm kind of banging my head against a wall trying to figure out the best way to go. Thanks!

    Read the article

  • Using Recursive SQL and XML trick to PIVOT(OK, concat) a "Document Folder Structure Relationship" table, works like MySQL GROUP_CONCAT

    - by Kevin Shyr
    I'm in the process of building out a Data Warehouse and encountered this issue along the way. In the environment, there is a table that stores all the folders with the individual level. For example, if a document is created here: {App Path}\Level 1\Level 2\Level 3\{document}, then the DocumentFolder table would look like this:

  ID  ID_Parent  FolderName
  1   NULL       Level 1
  2   1          Level 2
  3   2          Level 3

To my understanding, the table was built so that:

- Each proposal can have multiple documents stored at various locations
- Different users working on the proposal will have different access levels to the folder; if one user is assigned access to a folder level, she/he can see all the sub folders and their content.

Now we understand from an application point of view why this table was built this way. But you can quickly see the pain this causes the report writer when trying to show a document link on the report. I wasn't surprised to find the report query had 5 self outer joins, which is at the mercy of nobody creating a document that is buried 6 levels deep, and not to mention the degradation in performance.

With the help of 2 posts (at the end of this post), I was able to come up with this solution:

1. Use recursive SQL to build out the folder path
2. Use the SQL XML trick to concat the strings

Code (a reminder: I built this code in a stored procedure. If you copy the syntax into a simple query window and execute, you'll get an incorrect syntax error):

-- Get all folders and group them by the original DocumentFolderID in PTSDocument table
;WITH DocFoldersByDocFolderID (PTSDocumentFolderID_Original, PTSDocumentFolderID_Parent, sDocumentFolder, nLevel)
AS
(
    -- first member
    SELECT 'PTSDocumentFolderID_Original' = d1.PTSDocumentFolderID
         , PTSDocumentFolderID_Parent
         , 'sDocumentFolder' = sName
         , 'nLevel' = CONVERT(INT, 1000000)
    FROM (SELECT DISTINCT PTSDocumentFolderID
          FROM dbo.PTSDocument_DY WITH(READPAST)
         ) AS d1
         INNER JOIN dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
               ON d1.PTSDocumentFolderID = df1.PTSDocumentFolderID
    UNION ALL
    -- recursive
    SELECT ddf1.PTSDocumentFolderID_Original
         , df1.PTSDocumentFolderID_Parent
         , 'sDocumentFolder' = df1.sName
         , 'nLevel' = ddf1.nLevel - 1
    FROM dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
         INNER JOIN DocFoldersByDocFolderID AS ddf1
               ON df1.PTSDocumentFolderID = ddf1.PTSDocumentFolderID_Parent
)
-- Flatten out folder path
, DocFolderSingleByDocFolderID (PTSDocumentFolderID_Original, sDocumentFolder)
AS
(
    SELECT dfbdf.PTSDocumentFolderID_Original
         , 'sDocumentFolder' = STUFF((SELECT '\' + sDocumentFolder
                                      FROM DocFoldersByDocFolderID
                                      WHERE (PTSDocumentFolderID_Original = dfbdf.PTSDocumentFolderID_Original)
                                      ORDER BY PTSDocumentFolderID_Original, nLevel
                                      FOR XML PATH ('')), 1, 1, '')
    FROM DocFoldersByDocFolderID AS dfbdf
    GROUP BY dfbdf.PTSDocumentFolderID_Original
)

And voila, I use the second CTE to join back to my original query (which is now a CTE for Source, as we can now use MERGE to do INSERT and UPDATE at the same time). Each part of this solution would not solve the problem by itself, because:

- If I don't use recursion, I cannot build out the path properly.
- If I use the XML trick only, then I don't have the originating folder ID info that I need to link to the document.
- If I don't use the XML trick, then I don't have one row per document to show in the report. I could conceivably do this in the report function, but I'd rather not deal with the beginning or ending backslash and how to attach the document name.
- PIVOT doesn't do strings, and UNPIVOT runs into the same problem as the above.

I'm excited that each version of SQL Server provides us new tools to solve old problems and/or enables us to solve problems in a more elegant way.

The 2 posts that helped me along:

- Recursive Queries Using Common Table Expressions
- How to use GROUP BY to concatenate strings in SQL Server?

    Read the article

  • Property overwrite behaviour

    - by jeremyj
    I thought it worth sharing about property overwrite behaviour because I found it confusing at first, in the hope of preventing some learning pain for the uninitiated with MSBuild :-)

The confusion for me came because of the redundancy of using a Condition statement in a project-level property to test that a property has not been previously set. What I mean is that the following two statements are always identical in behaviour, regardless of whether the property has been supplied on the command line:

  <PropertyGroup>
    <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
  </PropertyGroup>

has the same behaviour regardless of command line override as:

  <PropertyGroup>
    <PropA>PropA set at project level</PropA>
  </PropertyGroup>

i.e. the two property declarations above have the same result whether the property is overridden on the command line or not.

To prove this, experiment with the following .proj file:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" >
  <PropertyGroup>
    <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
  </PropertyGroup>
  <Target Name="Target1">
    <Message Text="PropA: $(PropA)"/>
  </Target>
  <Target Name="Target2">
    <PropertyGroup>
      <PropA>PropA set in Target2</PropA>
    </PropertyGroup>
    <Message Text="PropA: $(PropA)"/>
  </Target>
  <Target Name="Target3">
    <PropertyGroup>
      <PropA Condition=" '$(PropA)' == '' ">PropA set in Target3</PropA>
    </PropertyGroup>
    <Message Text="PropA: $(PropA)"/>
  </Target>
  <Target Name="Target4">
    <PropertyGroup>
      <PropA Condition=" '$(PropA)' != '' ">PropA set in Target4</PropA>
    </PropertyGroup>
    <Message Text="PropA: $(PropA)"/>
  </Target>
</Project>

Try invoking it using both of the following invocations and observe the output:

1) >msbuild blog.proj /t:Target1;Target2;Target3;Target4
2) >msbuild blog.proj /t:Target1;Target2;Target3;Target4 "/p:PropA=PropA set on command line"

Then try those two invocations with the following three variations of specifying PropA at the project level:

1)
  <PropertyGroup>
    <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
  </PropertyGroup>

2)
  <PropertyGroup>
    <PropA>PropA set at project level</PropA>
  </PropertyGroup>

3)
  <PropertyGroup>
    <PropA Condition=" '$(PropA)' != '' ">PropA set at project level</PropA>
  </PropertyGroup>

    Read the article

  • Objective-C built-in template system?

    - by Aurélien Vallée
    I am developing an iPhone application and I use HTML to display formatted text. I often display the same webpage, but with different content. I would like to use a template HTML file, and then fill it with my different values. I wonder if Objective-C has a template system similar to ERB in Ruby. That would allow me to do things like this.

Template:

<HTML>
  <HEAD>
  </HEAD>
  <BODY>
    <H1>{{{title}}}</H1>
    <P>{{{content}}}</P>
  </BODY>
</HTML>

Objective-C (or what it might look like in an ideal world):

Template* template = [[Template alloc] initWithFile:@"my_template.tpl"];
[template fillMarker:@"title" withContent:@"My Title"];
[template fillMarker:@"content" withContent:@"My text here"];
[template process];
NSString* result = [template result];
[template release];

And the result string would contain:

<HTML>
  <HEAD>
  </HEAD>
  <BODY>
    <H1>My Title</H1>
    <P>My text here</P>
  </BODY>
</HTML>

The above example could be achieved with some text replacement, but that would be a pain to maintain. I would also need something like loops inside templates. For instance, if I have multiple items to display, I would like to generate multiple divs. Thanks for reading :)

    Read the article

  • Can't install plugins in Eclipse 3.5 (Galileo) on Ubuntu

    - by dfrankow
    Eclipse Version: 3.5.2, Build id: M20100211-1343. Platform: Ubuntu 10.04.

When I go to install any new Eclipse plugin (e.g., PyDev, Subclipse), it gets to the 60% mark and stops. I wait for a long while and nothing happens. When I hit "Cancel", a security warning comes up: "Warning: You are installing software that contains unsigned content..." I click "OK", but it's too late because I've cancelled the installation already. It won't pop up until I hit "Cancel". I've tried moving all the windows around; that security warning is not there. In previous incarnations of Eclipse, I've been able to click "OK" on that warning (no cancelling) and all is well. I've also tried two JREs (Ubuntu's default OpenJDK and Sun's JDK 1.6.0_20).

Is there some way to get that warning to come up, or even just have it always accept unsigned content? I've downloaded the zip of PyDev and unzipped it into ~/.eclipse/org.eclipse.platform_3.5.0_155965261/ and Eclipse doesn't have PyDev when restarted, so manually installing plugins is also a pain. This is not my day.

Yes, I see the other questions of this type and they don't seem to answer my problem.

    Read the article

  • Is parsing JSON faster than parsing XML?

    - by geme_hendrix
    I'm creating a sophisticated JavaScript library for working with my company's server side framework. The server side framework encodes its data to a simple XML format. There's no fancy namespacing or anything like that. Ideally I'd like to parse all of the data in the browser as JSON. However, if I do this I need to rewrite some of the server side code to also spit out JSON. This is a pain because we have public APIs that I can't easily change. What I'm really concerned about here is performance in the browser of parsing JSON versus XML. Is there really a big difference to be concerned about? Or should I exclusively go for JSON? Does anyone have any experience or benchmarks in the performance difference between the two? I realize that most modern web developers would probably opt for JSON and I can see why. However, I really am just interested in performance. If there's a proven massive difference then I'm prepared to spend the extra effort in generating JSON server side for the client.

    Read the article

  • ASP.NET MVC2 Access-Control: How to do authorization dynamically?

    - by Shaharyar
    We're currently rewriting our organization's ASP.NET MVC application, which has been written twice already (once in MVC1, once in MVC2). Thank god it wasn't production-ready or too mature back then. This time, anyhow, it's going to be the real deal, because we'll be implementing more and more features as time passes, and the test runs with MVC1 and MVC2 showed that we're ready to upscale.

Until now we were using controller and action authorization with AuthorizeAttributes. But that won't do it any longer, because our views are supposed to show different results based on the logged-in user.

Use case: let's say you're the mayor of a city and you log in to a federally managed application, and you can only access and edit the citizens in your city. You are allowed to access those citizens via an entry in a specialized MajorHasRightsForCity table containing a MajorId and a CityId. What I thought of is something like this:

public ViewResult Edit(int cityId)
{
    if (Access.UserCanEditCity(currentUser, cityId))
    {
        var currentCity = Db.Cities.Single(c => c.id == cityId);
        return View(currentCity);
    }
    else
    {
        TempData["ErrorMessage"] = "You are not awesome enough to edit that shizzle!";
        return View();
    }
}

The static class Access would do all kinds of checks and return either true or false from its methods. This implies that I would need to change and edit all of my controllers every time I change something. (Which would be a pain, because all unit tests would need to be adjusted every time something changes...) Is doing something like that even allowed?
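One hedged sketch of how that check could be pulled out of the action body is an action filter that reads the bound parameter and short-circuits the request. The attribute name and the assumption that the Access helper can work from the current principal are illustrative only, not a prescribed MVC pattern:

// Sketch only: a per-entity authorization filter for ASP.NET MVC 2. The "cityId" parameter
// name mirrors the action above; the Access.UserCanEditCity overload taking an IPrincipal
// is an assumption about your own helper class, not a framework API.
using System.Security.Principal;
using System.Web.Mvc;

public class CityAccessAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        object cityId;
        IPrincipal user = filterContext.HttpContext.User;

        if (filterContext.ActionParameters.TryGetValue("cityId", out cityId) &&
            !Access.UserCanEditCity(user, (int)cityId))
        {
            // Stop the action from running and return a 401 instead of rendering the view.
            filterContext.Result = new HttpUnauthorizedResult();
        }

        base.OnActionExecuting(filterContext);
    }
}

// Usage (hypothetical):
// [CityAccess]
// public ViewResult Edit(int cityId) { ... }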

    Read the article

  • Development Environment in a VM against an isolated development/test network

    - by bart
    I currently work in an organization that forces all software development to be done inside a VM. This is for a variety of risk/governance/security/compliance reasons. The standard setup is something like:

- VMware image given to devs with tools installed
- VM is customized to suit project/stream needs
- VM sits in a network & domain that is isolated from the live/production network
- SCM connectivity is only possible through the dev/test network
- Email and office tools need to be on the live network, so this means having two separate desktops going at once
- Heavyweight dev tools in use on the VMs, so they are very resource hungry

Some problems that people complain about are:

- The development environment runs slower than normal (the host OS is Windows XP, so memory is limited)
- Switching between the dev machine and the email/office machine is a pain; simple things like cut and paste are made harder. This is less efficient from a usability perspective. The mouse in particular doesn't seem to work properly using VMware Player or RDP.
- Need a separate login to the dev/test network/domain

Has anyone seen or worked in other (hopefully better) setups to this that have similar constraints (as mentioned at the top)? In particular, are there viable options that would remove the need for running stuff in a VM altogether?

    Read the article

  • Migrating complex SVN branch hierarchy to Mercurial

    - by Christian Hang
    Our team has been using SVN for managing an application of decent size, and over time a rather complex hierarchy of branches and tags has built up. It follows the basic standard layout for SVN repositories, but is more nested:

|-trunk
|-branches
| |-releases
| | |-releaseA
| | `-releaseB
| `-features
|   |-featureX
|   `-featureY
`-tags
  |-releaseA
  | |-beta
  | `-RTP
  `-releaseB
    |-beta
    `-RTP

(The feature branches are obviously temporary branches, but we have to take them into consideration as it won't be feasible to close all of them at once in the near future.)

For several reasons, but primarily because merges have been becoming an increasing pain, we are considering switching to Mercurial. The main problem we are currently facing is migrating the existing code base without losing our history. I've tried several migration tools (e.g., yasvn2hg, hg convert and svn2hg), with yasvn2hg being the most promising, but none of them seem to be able to deal with nested hierarchies; they all assume that branches and tags are organized in one flat directory respectively. The choice between named branches or clones as the conversion target of old SVN branches is not a limiting factor in this case, as either solution would be appreciated. We are currently experimenting with both options and how they would fit into our current processes, but haven't decided on one yet. I'd obviously be interested in recommendations or experiences with similar setups concerning that issue as well.

So, what is the best way to convert a nested SVN branch hierarchy like this to Mercurial? Converting one branch at a time into a separate repository would be quite annoying, and I am not sure if that would be the right approach in the first place, depending on how the tools handle historic merges and need to be aware of all other branches.

    Read the article

  • Web-based clients vs thick/rich clients?

    - by rudolfv
    My company is a software solutions provider to a major telecommunications company. The environment is currently IBM WebSphere-based, with front-end IBM Portal servers talking to a cluster of back-end WebSphere Application Servers providing EJB services. Some of the portlets use our own home-grown MVC pattern and some are written in JSF.

Recently we did a proof-of-concept rich/thick-client application that communicates directly with the EJBs on the back-end servers. It was written in NetBeans Platform and uses the WebSphere application client library to establish communication with the EJBs. The really painful bit was getting the client to use secure JAAS/SSL communications. But, after that was resolved, we've found that the rich client has a number of advantages over the web-based portal client applications we've become accustomed to:

- Enormous performance advantage (CORBA vs. HTTP, cut out the Portal Server middle man)
- Development is simplified and faster due to use of NetBeans' visual designer and Swing's generally robust architecture
- The debug cycle is shortened by not having to deploy your client application to a test server
- No mishmash of technologies as with web-based development (Struts, JSF, jQuery, HTML, JSTL etc., etc.)

After enduring the pain of web-based development (even JSF) for a while now, I've come to the following conclusion: rich clients aren't right for every situation, but when you're developing an in-house intranet-based solution, you'd be crazy not to consider NetBeans Platform or Eclipse RCP. Any comments/experiences with rich clients vs. web clients?

    Read the article

  • how to do asynchronous http requests with epoll and python 3.1

    - by flow
    there is an interesting page http://scotdoyle.com/python-epoll-howto.html about how to do asynchronous / non-blocking / AIO http serving in python 3.

there is the tornado web server which does include a non-blocking http client. i have managed to port parts of the server to python 3.1, but the implementation of the client requires pyCurl and seems to have problems (with one participant stating how 'Libcurl is such a pain in the neck', and looking at the incredibly ugly pyCurl page i doubt pyCurl will arrive in py3+ any time soon).

now that epoll is available in the standard library, it should be possible to do asynchronous http requests out of the box with python. i really do not want to use asyncore or whatnot; epoll has a reputation for being the ideal tool for the task, and it is part of the python distribution, so using anything but epoll for non-blocking http is highly counterintuitive (prove me wrong if you feel like it). oh, and i feel threading is horrible. no threading. i use stackless.

people further interested in the topic of asynchronous http should not miss out on this talk by peter portante at PyCon2010; also of interest is the keynote, where speaker antonio rodriguez at one point emphasizes the importance of having up-to-date web technology libraries right in the standard library.

    Read the article

  • Drupal image management

    - by vian
    Please suggest how I should approach these requirements. What ready-to-use solutions (modules) are best suited to achieve something like this?

What I need is an image library that has searchable, tagged images that are already resized when we publish them. If the author searches the library and the image he needs isn't there, he can upload one and have it added to the index. The important thing is that images in the library can be sorted into three categories: news images, top story images and feature images, so that, over time, we don't end up with hundreds of images crammed into one folder, thus making browsing a pain (and to prevent someone from doing something like searching for a keyword so they can find an image for the news, picking an image, and then seeing it's 1600x1200).

Also, I need something which will assemble thumbnail galleries easily. I don't want to have to go to the image library, get a URL, go back, paste it in, etc. I should be able to pick, say, 8 images and say "create gallery". How this objective is achieved is flexible, but I am looking for a shortcut to get around assembling screenshot galleries by hand.

    Read the article

  • Tab bar controller inside a navigation controller, or sharing a navigation root view

    - by Daniel Dickison
    I'm trying to implement a UI structured like in the Tweetie app, which behaves like so: the top-level view controller seems to be a navigation controller, whose root view is an "Accounts" table view. If you click on any account, it goes to the second level, which has a tab bar across the bottom. Each tab item shows a different list and lets you drill down further (the subsequent levels don't show the tab bar). So, it seems like the implementation hierarchy is:

- UINavigationController
  - Accounts: UITableViewController
  - UITabBarController
    - Tweets: UITableViewController
      - Detail view of a tweet/user/etc
    - Replies: UITableViewController
    - ...

This seems to work[^1], but appears to be unsupported according to the SDK documentation for -pushViewController:animated: (emphasis added):

  viewController: The view controller that is pushed onto the stack. It cannot be an instance of tab bar controller.

I would like to avoid private APIs and the like, but I'm not sure why this usage is explicitly prohibited even when it seems to work fine. Anyone know the reason?

I've thought about putting the tab bar controller as the main controller, with each of the tabs containing separate navigation controllers. The problem with this is that each nav controller needs to share a single root view controller (namely the "Accounts" table in Tweetie), and this doesn't seem to work: pushing the table controller to a second nav controller seems to remove it from the first. Not to mention all the book-keeping when selecting a different account would probably be a pain.

How should I implement this the Right Way?

[^1]: The tab bar controller needs to be subclassed so that the tab bar controller's navigation item at that level stays in sync with the selected tab's navigation item, and the individual tabs' table controllers need to push their respective detail views to self.tabBarController.navigationController instead of self.navigationController.

    Read the article

  • The best way to package iPhone/iPad static libraries?

    - by Derek Clarkson
    Hi everyone, I have a couple of static iPhone/iPad libraries I am working on. The problem I am looking for advice on is the best way to package the libraries. My objective is to make it easy to use the libraries in other projects and include the correct one in a build with minimal problems. To make it more interesting, I currently build 4 versions of each library as follows:

- armv6/armv7 release (devices)
- i386 release (simulator)
- armv6/armv7 debug (devices)
- i386 debug (simulator)

The difference between the release and debug versions is that the debug versions contain a lot of NSLog(...) code which enables people to see what's going on internally as an aid to debugging. Currently when I build the whole project I arrange the libraries into two directories like this:

release
  lib-device.a
  lib-simulator.a
debug
  lib-device.a
  lib-simulator.a

This works OK, except that when included in projects, both paths are added to the library search path and switching a target from one to the other is a pain. Or alternatively I end up with two targets. The alternative I am thinking of is to change the directories like this:

release
  device
    lib.a
  simulator
    lib.a
debug
  device
    lib.a
  simulator
    lib.a

In playing with Xcode it appears that all Xcode uses the library references of a project for is to get the name of the library file, which it then looks up in the library paths. Thus by parameterising the library path with the current build type and target device, I can effectively auto switch. What do you guys think? Is there a better way to do this?

ciao
Derek
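One way to realise that parameterised path is a single search-path entry built from Xcode's own build settings, for example in an .xcconfig file. The lines below are only a sketch: they assume the debug/release + device/simulator layout above has been re-rooted under a libs/ folder, $(CONFIGURATION) and $(PLATFORM_NAME) are standard Xcode settings (Debug/Release and iphoneos/iphonesimulator respectively), and the -lMyLib library name is hypothetical.

// Sketch of an .xcconfig fragment; the folder layout and library name are assumptions.
LIBRARY_SEARCH_PATHS = $(SRCROOT)/libs/$(CONFIGURATION)/$(PLATFORM_NAME)
OTHER_LDFLAGS = -lMyLib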

    Read the article

  • Use XMPP instead of HTTP

    - by pavel
    Hey guys. My friend and I are working on an iPhone application. This application uses the XMPP protocol to provide chat functionality. Right now we are designing the architecture for the application. My friend is working on the iPhone side, and I am the Ruby on Rails guy.

My friend suggested that we wrap every call that is usually served via HTTP into XMPP. So user registration, user search, profile editing, photo uploading - everything goes via XMPP. No HTTP at all. My friend wants to use XMPP because he says that it's much easier to implement XMPP on the client side than HTTP. As for me, this is bullshit, but we've got a product owner who has been working with my friend for a long time and he trusts him.

So what I'm trying to do is to convince my friend and the product owner that using XMPP for things HTTP handles fine is totally not the best idea. I feel that if we implement everything on XMPP, we will have a pain in the ass till the end of our lives. But how do I prove it?

P.S. I'm not against chat over XMPP; I am against user search, photo uploading, rankings, nearby search and various other RESTful requests. Please leave a response. Any help appreciated.

A little update: Yesterday we had a long discussion. And it turns out it's quite hard to receive responses from both XMPP and HTTP in Objective-C, because every single object and its data should be stored in a Core Data model, while this model can't be safely modified from various places. Say, if you use HTTP transport, you always want to use only HTTP transport to update data in your model. And if you use XMPP, you should always use XMPP. So you can't use both. That's what my iPhone buddy told me. It sounds weird to me; can anyone explain that?

    Read the article

  • ASP.NET MVC - Parent-Child Table Relation - how to create Children in MVC (example request)

    - by adudley
    Hi All. In a standard setup of a Parent-Child relation, let's say Project and Task, where a Project is made up of lots of Tasks, in a standard RDB we have:

  Project (ID, Name, Deadline)
  Task (ID, FK_To_Project, Name, Description, isCompleted)

This is all very straightforward. We have an MVC View that lists Projects, so we get a nice list of all the project names next to each deadline. Now we want to CREATE a new PROJECT. The Edit view opens, we type a name, say 'Make a cup of Tea', with tomorrow as the deadline! Still in this view/web page, I would like a list of all the child Tasks, in a standard list, with Edit, Delete, and a Create/Add Task button too, just below the 'parent table' details. The simplest way to describe this is the parent table's Create/Edit view, with the child's List view below it.

1) The ideal solution will also allow my child table (Tasks) to have children also (for more complex scenarios), and so on, and on, and on.
2) If I navigate away from my created Project, I don't want all sorts of random stuff lying around; they went away, it's gone!
3) I'd expect all the same functionality when editing an existing project.

I'm struggling with the 'Add New Child': I had a modal dialog (jQuery) and all was well, but now when editing an existing child/task, I need to populate the child edit form, which is a pain and will need loads of JavaScript I think :( How can this be achieved in MVC? Does anybody have any examples?
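As a starting point (not a full answer), the view-model shape this usually takes is a parent that carries its child collection, with each child able to carry children of its own; the list can then be rendered below the parent form with an editor template per task. All names below are illustrative assumptions, not code from the question:

// Illustrative view-model shapes only; names and the use of editor templates are assumptions.
using System;
using System.Collections.Generic;

public class ProjectEditViewModel
{
    public int ID { get; set; }
    public string Name { get; set; }
    public DateTime Deadline { get; set; }

    // Child rows rendered below the parent form, e.g. via Html.EditorFor(m => m.Tasks),
    // which MVC expands once per element using a Task editor template.
    public List<TaskViewModel> Tasks { get; set; }
}

public class TaskViewModel
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public bool IsCompleted { get; set; }

    // Requirement 1: children can themselves have children, so the type is self-referencing.
    public List<TaskViewModel> Children { get; set; }
}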

    Read the article

  • What do I do about recurring billing?

    - by phidah
    This might be a subjective question, but I'll give it a go. There are already a number of questions on SO that revolve around subscription billing management. I am currently working on a SaaS solution that will require a fully automated billing system. What I am not looking for when asking this question is advice on implementing towards a specific payment gateway or stuff like that. Instead I'd like advice on what kind of approach to take.

The functionality that I need is a system that can handle upgrades, downgrades, recurring billing, cancellations, etc. Initially for one product only, but it might over time be a requirement that the system can handle multiple products (by products I mean fundamentally different products, not different variations of the same product). As I see it there are a number of possible approaches when you need a solution like this:

- Code a billing server yourself that supports this and is decoupled from each product, so that it can handle multiple independent products.
- Use a hosted solution like Recurly, Chargify, Spreedly or CheddarGetter.

The advantage of using a hosted solution is obviously that you don't need PCI certification, the concern is outsourced and it is a lot faster to get up and running. These advantages come at a cost, however: the most important support function for your product - i.e. the billing - is not in your control. Additionally you have less control and flexibility.

What would you do? If we look beyond the PCI requirements, I would definitely prefer to have a system coded in-house that could do this kind of job. On the other hand, I've heard from numerous sources that coding a system like this is a pain. Any advice is highly appreciated. Also, if you advise coding it yourself, any experiences on how to do it, or any open-source projects (no matter the language - what I'm after is not the code but the structure) that I can benefit from, would really mean a lot. Thanks in advance for your inputs! :-)
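To help frame the build-it-yourself option, the heart of such a system is usually a small per-subscription state machine plus a scheduled job that rolls billing periods. The sketch below is only a starting point; every name, the set of states and the monthly-only period are assumptions for illustration.

// Purely illustrative domain sketch for an in-house recurring-billing engine.
using System;

public enum SubscriptionStatus { Active, Cancelled, PastDue }

public class Subscription
{
    public string PlanCode { get; set; }            // e.g. "basic", "pro" (hypothetical)
    public decimal PricePerPeriod { get; set; }
    public SubscriptionStatus Status { get; private set; }
    public DateTime CurrentPeriodEnd { get; private set; }

    public Subscription(string planCode, decimal pricePerPeriod, DateTime periodEnd)
    {
        PlanCode = planCode;
        PricePerPeriod = pricePerPeriod;
        Status = SubscriptionStatus.Active;
        CurrentPeriodEnd = periodEnd;
    }

    // Called by a scheduled job on/after CurrentPeriodEnd: charge the gateway, then roll the period.
    public void Renew(Func<decimal, bool> chargeGateway)
    {
        if (Status != SubscriptionStatus.Active) return;
        Status = chargeGateway(PricePerPeriod) ? SubscriptionStatus.Active : SubscriptionStatus.PastDue;
        if (Status == SubscriptionStatus.Active)
            CurrentPeriodEnd = CurrentPeriodEnd.AddMonths(1);
    }

    // Upgrades/downgrades: the simplest policy is to switch plans at the next renewal;
    // proration is where most of the real-world complexity lives.
    public void ChangePlan(string newPlanCode, decimal newPrice)
    {
        PlanCode = newPlanCode;
        PricePerPeriod = newPrice;
    }

    public void Cancel()
    {
        Status = SubscriptionStatus.Cancelled;
    }
}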

    Read the article

  • How to approach ninject container/kernel in inheritance situation

    - by Bas
    I have the following situation:

class RuleEngine {}
abstract class RuleImplementation {}
class RootRule : RuleImplementation {}
class Rule1 : RuleImplementation {}
class Rule2 : RuleImplementation {}

The RuleEngine is injected by Ninject and has a kernel at its disposal. The role of the RuleEngine is to fire off the root rule, which in turn will load all the other rules, also using Ninject, but using a different module and creating a new kernel. Now my question is: some of the rules require dependencies which I want to inject using Ninject. What would be the best way to create the kernel for these rules and still do proper unit testing with it? (The kernel shouldn't become a real pain in my tests.) I've been thinking of the following possibilities:

- The kernel that I use in the RuleEngine class could be tossed around to RuleImplementation and thus be available for every rule. But tossing around kernels isn't really something I wish to do.
- When creating the rules, I could give the kernel (which creates the rules) as a constructor argument for each rule.
- I could create a method inside RuleImplementation which creates a kernel and makes it possible for the rules to retrieve the kernel using a get() in the abstract class.

What's the convention for passing around/creating kernels? Just create new kernels, or reuse them?
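For what it's worth, a common alternative to passing kernels around is to describe the rules and their dependencies in a Ninject module and let the kernel build the whole set via multi-injection. The sketch below assumes the class declarations above are in scope; IRuleDependency, the concrete dependency and the binding choices are hypothetical illustrations:

// Sketch only: let Ninject construct the rules (and their dependencies) instead of handing
// the kernel to RuleImplementation. IRuleDependency and SomeRuleDependency are made up here.
using System.Collections.Generic;
using Ninject;
using Ninject.Modules;

public interface IRuleDependency { }
public class SomeRuleDependency : IRuleDependency { }

public class RuleModule : NinjectModule
{
    public override void Load()
    {
        Bind<IRuleDependency>().To<SomeRuleDependency>();  // whatever the rules actually need
        Bind<RuleImplementation>().To<Rule1>();
        Bind<RuleImplementation>().To<Rule2>();
    }
}

// A variant of the root rule that receives every bound RuleImplementation via multi-injection,
// so no rule ever needs a kernel reference and the collection is trivial to fake in tests.
public class CompositeRootRule : RuleImplementation
{
    private readonly IEnumerable<RuleImplementation> _rules;

    public CompositeRootRule(IEnumerable<RuleImplementation> rules)
    {
        _rules = rules;   // fire these off from here, in whatever order the engine requires
    }
}

// Composition root, e.g. wherever the RuleEngine is created:
// var kernel = new StandardKernel(new RuleModule());
// var rootRule = kernel.Get<CompositeRootRule>();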

    Read the article

  • Why does ruby-debug say 'Saved frames may be incomplete'

    - by Chris McCauley
    From time-to-time I get this when a breakpoint is triggered. It looks like stack frames aren't getting saved so I can't step back through the call stack - a real pain. See below for an example:

--> #0 BatchProcess.add_failure_record(row_id#Fixnum, test#Struct::Test, message#String,...)
       at line server/processes/batch.rb:309
Warning: saved frames may be incomplete; compare with caller(0).
(rdb:1) pp caller
["./server/processes/batch.rb:309:in `run_tests'",
 "./server/processes/common/generic_process.rb:219:in `each'",
 "./server/processes/common/generic_process.rb:219:in `run_tests'",
 "./server/processes/common/generic_process.rb:271:in `run_plan'",
 "./server/processes/common/corrections.rb:19:in `each_with_index'",
 "./server/processes/common/generic_process.rb:266:in `each'",
 "./server/processes/common/generic_process.rb:266:in `each_with_index'",
 "./server/processes/common/generic_process.rb:266:in `run_plan'",
 "./server/processes/batch.rb:202:in `run_engine'",
 "/usr/lib/ruby/1.8/benchmark.rb:293:in `measure'",
 "./server/processes/batch.rb:201:in `run_engine'",
 "./server/processes/common/generic_process.rb:88:in `run_dataset'",
 "./server/processes/batch.rb:210:in `run_dataset'",
 "/usr/lib/ruby/1.8/benchmark.rb:293:in `measure'",
 "./server/processes/batch.rb:209:in `run_dataset'",
 "./server/processes/common/generic_process.rb:159:in `run'",
 "./server/processes/common/generic_process.rb:158:in `each'",
 "./server/processes/common/generic_process.rb:158:in `run'",
 "./server/processes/batch.rb:350:in `run'",
 "/usr/lib/ruby/1.8/benchmark.rb:293:in `measure'",
 "./server/processes/batch.rb:349:in `run'",
 "server/processes/test_runs/run_tests.rb:55:in `run_one_process'",
 "server/processes/test_runs/run_tests.rb:81"]

Any ideas on how to stop this happening?

    Read the article
