Search Results

Search found 1275 results on 51 pages for 'derived'.


  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and some lines drawn between them. In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input and all produce the same output, but which operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this (it is intuitive to assume that more components mean more work), however it is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration, and that the number of components is actually one of the less important ones; having said that, I have never proven that assertion, and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of the number of rows and the number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB data file which is a list of 4,731,904 (~4.7 million) customer records with various attributes against them, and it contains two columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words, I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS's Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each feed a Row Count component, again because this is the least resource-intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad-core (8 logical processors) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive, running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split: a single Conditional Split component, CSPL Split by Month of Birth and Income Category, that uses expressions on [YearlyIncome] and [BirthDate] to send each row to one of 24 outputs.
    2. Derived Column and Conditional Split: a Derived Column component, DER Income Category, adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000", "GreaterThan50000"}, using [YearlyIncome] to determine which value each row should get. A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split.
    3. Three Conditional Splits: a Conditional Split component, CSPL Split by Income Category, produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs goes to a further Conditional Split (CSPL Split by Month of Birth 1 and 2) that splits its input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components.

    Each of these combinations will provide an input to one of the 24 Row Count components, just the same as before. (Screenshots of the expression logic and of the dataflow containing three Conditional Split components accompany the original post.) As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: the dataflow containing just one Conditional Split (i.e. #1) will be quicker; there is no significant difference between any of them; or one of the two dataflows containing multiple transformation components will be quicker. Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each, along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the averages that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can also be predicted with much greater confidence.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of month of birth and income category, 24 in total. These expressions get evaluated in the order that they appear, hence, if we assume that month of birth and income category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is 12.5, i.e. 1 (the minimum) plus 24 (the maximum), divided by 2. Now take a look at the screenshots for the second dataflow. We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before; only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row, which would account for the slightly longer average execution time of this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 and CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations for each is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. But 14.5 is still more than 12.5 and 13.5, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for income category and one for month of birth, whereas the expressions in the Conditional Splits of the third dataflow have only one predicate each, so they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1. In the first two dataflows, YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant number of extra CPU cycles; that's where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce the workload in ways that aren't immediately obvious, as I think I have proven here. Of course, there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results; let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package, and remember: execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow – Designing for performance (webinar). Any comments? Let me know! @Jamiet
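    To spell out the expected-evaluations arithmetic used above: if a Conditional Split evaluates its n cases in order and each row is equally likely to match any case, the expected number of evaluations per row is

        E[\text{evaluations}] = \frac{1}{n}\sum_{k=1}^{n} k = \frac{n+1}{2}

    so n = 24 gives (24+1)/2 = 12.5 (dataflow #1); adding the Derived Column gives 1 + 12.5 = 13.5 (dataflow #2); and, counting both month splits the way the post does, dataflow #3 comes to 1.5 + 6.5 + 6.5 = 14.5.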

    Read the article

  • Extreme Optimization – Curves (Function Mapping) Part 1

    - by JoshReuben
    Overview

    - A curve is a functional map relationship between two factors (i.e. a function; however, the word "function" is a reserved word).
    - You can use the EO API to create common types of functions, find zeroes and calculate derivatives. Currently supported: constants, lines, quadratic curves, polynomials and Chebyshev approximations.
    - A function basis is a set of functions that can be combined to form a particular class of functions.

    The Curve class

    - The abstract base class from which all other curve classes are derived. It provides the following methods:
      - ValueAt(Double): evaluates the curve at a specific point.
      - SlopeAt(Double): evaluates the derivative.
      - Integral(Double, Double): evaluates the definite integral over a specified interval.
      - TangentAt(Double): returns a Line curve that is the tangent to the curve at a specific point.
      - FindRoots(): attempts to find all the roots or zeroes of the curve.
    - A particular type of curve is defined by a Parameters property, of type ParameterCollection.

    The GeneralCurve class

    - Defines a curve whose value and, optionally, derivative and integrals are calculated using arbitrary methods. A general curve has no parameters.
    - Constructor params: RealFunction delegates, one for the function and optionally another two for the derivative and integral.
    - If no derivative or integral function is supplied, they are calculated via the NumericalDifferentiation and AdaptiveIntegrator classes in the Extreme.Mathematics.Calculus namespace.

        // The function is 1/(1+x^2)
        private double f(double x)
        {
            return 1 / (1 + x*x);
        }

        // Its derivative is -2x/(1+x^2)^2
        private double df(double x)
        {
            double y = 1 + x*x;
            return -2*x / (y*y);
        }

        // The integral of f is Arctan(x), which is available from the Math class.
        var c1 = new GeneralCurve(new RealFunction(f), new RealFunction(df), new RealFunction(System.Math.Atan));

        // Find the tangent to this curve at x=1 (the Line class is derived from Curve)
        Line l1 = c1.TangentAt(1);
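    For reference, here is a short usage sketch built on the Curve methods listed above (the method names come from the overview; exact signatures in the shipped Extreme Optimization assemblies may differ):

        // Evaluate the curve, its slope and a definite integral around x = 1
        double y    = c1.ValueAt(1.0);       // 1/(1+1)          = 0.5
        double dy   = c1.SlopeAt(1.0);       // -2*1/(1+1)^2     = -0.5
        double area = c1.Integral(0.0, 1.0); // Atan(1) - Atan(0) = pi/4 ~ 0.7854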

    Read the article

  • White box testing with Google Test

    - by Daemin
    I've been trying out GoogleTest for my C++ hobby project, and I need to test the internals of a component (hence white box testing). At my previous work we just made the test classes friends of the class being tested. But with Google Test that doesn't work, as each test is given its own unique class, derived from the fixture class if specified, and friend-ness doesn't transfer to derived classes. Initially I created a test proxy class that is friends with the tested class. It contains a pointer to an instance of the tested class and provides methods for the required, but hidden, members. This worked for a simple class, but now I'm up to testing a tree class with an internal private node class, which I need to access and mess with. I'm just wondering if anyone using the GoogleTest library has done any white box testing and if they have any hints or helpful constructs that would make this easier. OK, I've found the FRIEND_TEST macro defined in the documentation, as well as some hints on how to test private code in the advanced guide. But apart from having a huge number of friend declarations (i.e. one FRIEND_TEST for each test), is there an easier idiom to use, or should I abandon GoogleTest and move to a different test framework?

    Read the article

  • What is a Data Warehouse?

    Typically, Data Warehouses are considered to be non-volatile in comparison to traditional databases, due to the fact that data within the warehouse does not change very often. In addition, Data Warehouses typically represent data through the use of Multidimensional Conceptual Views that allow data to be extracted based on the view and the current position within the view.

    Common Data Warehouse traits:

    - Relatively non-volatile data
    - Supports data extraction and analysis
    - Optimized for data retrieval and analysis
    - Multidimensional views of data
    - Flexible reporting
    - Multi-user support
    - Generic dimensionality
    - Transparent
    - Accessible
    - Unlimited dimensions of data
    - Unlimited aggregation levels of data

    Normally, Data Warehouses are much larger than their traditional database counterparts, due to the fact that they store the base data along with data derived via the Multidimensional Conceptual Views. As companies store larger and larger amounts of data, they will need a way to effectively and accurately extract analytical information that can be used to aid in formulating current and future business decisions. This can currently be done through data mining within a Data Warehouse. Data Warehouses provide access to data derived through complex analysis, knowledge discovery and decision making. Secondly, they support the demand for high performance when analyzing an organization's existing and current data. Data Warehouses provide support for an organization's data and acquired business knowledge. Within a Data Warehouse, multiple types of operations/sub-systems are supported.

    Common Data Warehouse sub-systems:

    - Online Analytical Processing (OLAP)
    - Decision-Support Systems (DSS)
    - Online Transaction Processing (OLTP)

    Read the article

  • What are the legal considerations when forking a BSD-licensed project?

    - by Thomas Owens
    I'm interested in forking a project released under a two-clause BSD license: Copyright (c) 2010 {copyright holder} All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the disclaimer at the end. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. (2) Neither the name of {copyright holder} nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. DISCLAIMER THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. I've never forked a project before, but this project is very similar to something that I need/want. However, I'm not sure how far I'll get, so my plan is to pull the latest from their repository and start working. Maybe, eventually, I'll get it to where I want it, and be able to release it. Is this the right approach? How, exactly, does this impact forking of the project? How do I track who owns what components or sections (what's copyright me, what's copyright the original creators, once I start stomping over their code base)? Can I fork this project? What must I do prior to releasing, and when/if I decide to release the software derived from this BSD-licensed work?

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I am developing high-volume processing systems: mathematical models that calculate various parameters based on millions of records, calculated derived fields over millions of records, processing huge files of transactions, etc. I am well aware of unit-testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes etc.), or some SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a designated schema (TEST) and then automatically run them overnight and check the results. But this is only for T-SQL, and continuous integration is hard. The bigger problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have some approaches derived over the years, but maybe I am just not reading enough articles. So, banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct as you develop it? How do you achieve continuous integration in such an environment (personally I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but this is quite similar)? I hope this is not too open-ended a question. Luke

    Read the article

  • HTML5 Canvas Converting between cartesian and isometric coordinates

    - by Amir
    I'm having issues wrapping my head around the Cartesian-to-isometric coordinate conversion in HTML5 canvas. As I understand it, the process is twofold:

    (1) Scale down the y-axis by 0.5, i.e. ctx.scale(1,0.5); or ctx.setTransform(1,0,0,0.5,0,0); This supposedly produces the following matrix: [x; y] x [1, 0; 0, 0.5]

    (2) Rotate the context by 45 degrees, i.e. ctx.rotate(Math.PI/4); This should produce the following matrix: [x; y] x [cos(45), -sin(45); sin(45), cos(45)]

    This (somehow) results in the final matrix of ctx.setTransform(2,-1,1,0.5,0,0); which I cannot seem to understand... How is this matrix derived? I cannot seem to produce this matrix by multiplying the scaling and rotation matrices produced earlier... Also, if I write out the equation for the final transformation matrix, I get: newX = 2x + y, newY = -x + y/2. But this doesn't seem to be correct. For example, the following code draws an isometric tile at Cartesian coordinates (500, 100): ctx.setTransform(2,-1,1,0.5,0,0); ctx.fillRect(500, 100, width*2, height); When I check the result on the screen, the actual coordinates are (285, 215), which do not satisfy the equations I produced earlier... So what is going on here? I would be very grateful if you could: (1) help me understand how the final isometric transformation matrix is derived; (2) help me produce the correct equation for finding the on-screen coordinates of an isometric projection. Many thanks and kind regards
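    For what it's worth, actually multiplying the two matrices described above (the canvas composes scale-then-rotate as S·R when applied to each drawn point) gives:

        S \cdot R =
        \begin{pmatrix} 1 & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix}
        \begin{pmatrix} \cos 45^\circ & -\sin 45^\circ \\ \sin 45^\circ & \cos 45^\circ \end{pmatrix}
        =
        \begin{pmatrix} 0.7071 & -0.7071 \\ 0.3536 & 0.3536 \end{pmatrix}

    In setTransform(a, b, c, d, e, f) order that is roughly (0.7071, 0.3536, -0.7071, 0.3536, 0, 0). So setTransform(2, -1, 1, 0.5, 0, 0), which maps (x, y) to (2x + y, -x + y/2), is not the product of this scale and rotation at all; it appears to come from a different isometric convention (for example a 2:1 tile projection), which would explain why the on-screen coordinates don't satisfy the scale-and-rotate equations.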

    Read the article

  • Open Source but not Free Software (or vice versa)

    - by TRiG
    The definition of "Free Software" from the Free Software Foundation:

    "Free software" is a matter of liberty, not price. To understand the concept, you should think of "free" as in "free speech," not as in "free beer." Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program's users have the four essential freedoms:

    - The freedom to run the program, for any purpose (freedom 0).
    - The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this.
    - The freedom to redistribute copies so you can help your neighbor (freedom 2).
    - The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

    A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so.

    The definition of "Open Source Software" from the Open Source Initiative:

    Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria:

    - Free Redistribution: The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.
    - Source Code: The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.
    - Derived Works: The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software.
    - Integrity of The Author's Source Code: The license may restrict source code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software.
    - No Discrimination Against Persons or Groups: The license must not discriminate against any person or group of persons.
    - No Discrimination Against Fields of Endeavor: The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
    - Distribution of License: The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties.
    - License Must Not Be Specific to a Product: The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution.
    - License Must Not Restrict Other Software: The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software.
    - License Must Be Technology-Neutral: No provision of the license may be predicated on any individual technology or style of interface.

    These definitions, although they derive from very different ideologies, are broadly compatible, and most Free Software is also Open Source Software and vice versa. I believe, however, that it is possible for this not to be the case: it is possible for software to be Open Source without being Free, or to be Free without being Open Source.

    Questions: Is my belief correct? Is it possible for software to fall into one camp and not the other? Does any such software actually exist? Please give examples.

    Clarification: I've already accepted an answer now, but I seem to have confused a lot of people, so perhaps a clarification is in order. I was not asking about the difference between copyleft (or "viral", though I don't like that term) and non-copyleft ("permissive") licenses. Nor was I asking about your personal idiosyncratic definitions of "Free" and "Open". I was asking about "Free Software as defined by the FSF" and "Open Source Software as defined by the OSI". Are the two always the same? Is it possible to be one without being the other? And the answer, it seems, is that it's impossible to be Free without being Open, but possible to be Open without being Free. Thank you everyone who actually answered the question.

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    The Joes 2 Pros SQL Server learning series is indeed fun. The series is written for beginners and for those who want to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining the simplest concepts, such as how to open SQL Server Management Studio. Honestly, the books start that basic, but as the series progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be in the exam, the series focuses on learning the important concepts thoroughly; the books in no way take shortcuts in explaining any concept and, at times, will go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015
    The XML data type was first introduced with SQL Server 2005. This data type continues with SQL Server 2008, where expanded XML features are available, most notable being the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008:
    - query() – Used to extract XML fragments from an XML data type.
    - value() – Used to extract a single value from an XML document.
    - exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no.
    - modify() – Updates XML data in an XML data type.
    - node() – Shreds XML data into multiple rows (not covered in this blog post).
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014
    Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, then the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16:
    - Statement Termination – The statement within the procedure fails but the code keeps on running to the next statement. Transactions are not affected.
    - Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value.
    - Batch Termination – The entire client call is terminated.
    - XACT_ABORT – (ON = the entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.)
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013
    Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause. Cautionary note: because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used only as a last resort, and then only by experienced developers and database administrators.
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Hierarchical Query – SQL in Sixty Seconds #012
    A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a temp table while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly:
    - Anchor query (runs once and the results 'seed' the recursive query)
    - Recursive query (runs multiple times and is the criteria for the remaining results)
    - UNION ALL statement to bind the anchor and recursive queries together
    - INNER JOIN statement to bind the recursive query to the results of the CTE
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Server Security – SQL in Sixty Seconds #011
    Let's get some basic definitions down first. Take the workplace example where "Tom" needs "Read" access to the "Financial Folder". What are the securable, principal and permission in that last sentence?
    - A Securable is a resource that someone might want to access (like the Financial Folder).
    - A Principal is anything that might want to gain access to the securable (like Tom).
    - A Permission is the level of access a principal has to a securable (like Read).
    [Detailed Blog Post] | [Quiz with Answer]

    Please leave a comment explaining which one was your favorite video, as that will help me understand what works and what needs improvement. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Breaking through the class sealing

    - by Jason Crease
    Do you understand 'sealing' in C#? Somewhat? Anyway, here's the lowdown. I've written this article from a C# perspective, but I've occasionally referenced .NET where appropriate.

    What is sealing a class?
    By sealing a class in C#, you ensure that no class can be derived from that class. You do this by simply adding the word 'sealed' to a class definition: public sealed class Dog {} Now, writing something like "public sealed class Hamster : Dog {}" will get you a compile error like this: 'Hamster': cannot derive from sealed type 'Dog'. If you look in an IL disassembler, you'll see a definition like this: .class public auto ansi sealed beforefieldinit Dog extends [mscorlib]System.Object Note the addition of the word 'sealed'.

    What about sealing methods?
    You can also seal overriding methods. By adding the word 'sealed', you ensure that the method cannot be overridden in a derived class. Consider the following code: public class Dog : Mammal { public sealed override void Go() { } } public class Mammal { public virtual void Go() { } } In this code, the method 'Go' in Dog is sealed. It cannot be overridden in a subclass. Writing this would cause a compile error: public class Dachshund : Dog { public override void Go() { } } However, we can 'new' a method with the same name. This is essentially a new method, distinct from the 'Go' in the base class: public class Terrier : Dog { public new void Go() { } }

    Sealing properties?
    You can also seal properties. You add 'sealed' to the property definition, like so: public sealed override string Name { get { return m_Name; } set { m_Name = value; } } In C#, you can only seal a property, not the underlying setters/getters. This is because C# offers no override syntax for setters or getters. However, in the underlying IL you seal the setter and getter methods individually; a property is just metadata.

    Why bother sealing?
    There are a few traditional reasons to seal:
    - Invariance. Other people may want to derive from your class, even though your implementation may make successful derivation near-impossible. There may be twisted, hacky logic that could never be second-guessed by another developer. By sealing your class, you're protecting them from wasting their time. The CLR team has sealed most of the framework classes, and I assume they did this for this reason.
    - Security. By deriving from your type, an attacker may gain access to functionality that enables him to hack your system. I consider this a very weak security precaution.
    - Speed. If a class is sealed, then .NET doesn't need to consult the virtual-function-call table to find the actual type, since it knows that no derived type can exist. Therefore, it could emit a 'call' instead of a 'callvirt', or at least optimise the machine code, thus producing a performance benefit. But I've done trials and have been unable to demonstrate this. If you have an example, please share!
    All in all, I'm not convinced that sealing is interesting or important. Anyway, moving on...

    What is automatically sealed?
    - Value types and structs. If they were not always sealed, all sorts of things would go wrong. For instance, structs are laid out inline within a class. But what if you assigned a substruct to a struct field of that class? There may be too many fields to fit.
    - Static classes. Static classes exist in C# but not in .NET. The C# compiler compiles a static class into an 'abstract sealed' class. So static classes are already sealed in C#.
    - Enumerations. The CLR does not track the types of enumerations; it treats them as simple value types. Hence, polymorphism would not work.

    What cannot be sealed?
    - Interfaces. Interfaces exist to be implemented, so sealing to prevent implementation is dumb. But what if you could prevent interfaces from being extended (i.e. ban declarations like "public interface IMyInterface : ISealedInterface")? There is no good reason to seal an interface like this. Sealing finalizes behaviour, but interfaces have no intrinsic behaviour to finalize.
    - Abstract classes. In IL you can create an abstract sealed class. But C# syntax for this already exists: declaring the class as 'static', and it forces you to declare it as such.
    - Non-override methods. If a method isn't declared as override it cannot be overridden, so sealing would make no difference. Note this is stated from a C# perspective; the words are opposite in IL. In IL, you have four choices in total: no declaration (which actually seals the method), 'virtual' (called 'override' in C#), 'sealed virtual' ('sealed override' in C#) and 'newslot virtual' ('new virtual' or 'virtual' in C#, depending on whether the method already exists in a base class).
    - Methods that implement interface methods. Methods that implement an interface method must be virtual, so cannot be sealed.
    - Fields. A field cannot be overridden, only hidden (using the 'new' keyword in C#), so sealing would make no sense.
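    On the 'Speed' point above, here is a minimal timing harness of my own (a sketch with invented types, not taken from the article). It contrasts a call through a variable statically typed as a sealed class, which the JIT is free to turn into a direct call, with the same call through the unsealed base type; in my experience the difference is rarely measurable, which matches the author's trials:

        using System;
        using System.Diagnostics;

        public class Animal { public virtual int Legs() { return 4; } }
        public sealed class Cat : Animal { public override int Legs() { return 4; } }

        public static class SealTiming
        {
            const int Iterations = 200000000;

            static void ThroughBase(Animal a)
            {
                var sw = Stopwatch.StartNew();
                long sum = 0;
                for (int i = 0; i < Iterations; i++)
                    sum += a.Legs();   // static type is unsealed: always a callvirt
                sw.Stop();
                Console.WriteLine("base-typed:   {0} ms (sum {1})", sw.ElapsedMilliseconds, sum);
            }

            static void ThroughSealed(Cat c)
            {
                var sw = Stopwatch.StartNew();
                long sum = 0;
                for (int i = 0; i < Iterations; i++)
                    sum += c.Legs();   // static type is sealed: the JIT may devirtualize
                sw.Stop();
                Console.WriteLine("sealed-typed: {0} ms (sum {1})", sw.ElapsedMilliseconds, sum);
            }

            public static void Main()
            {
                var cat = new Cat();
                ThroughBase(cat);
                ThroughSealed(cat);
            }
        }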

    Read the article

  • Unable to set TestContext property

    - by Brandon
    I have a Visual Studio 2008 unit test and I'm getting the following runtime error: "Unable to set TestContext property for the class JMPS.PlannerSuite.DataServices.MyUnitTest. Error: System.ArgumentException: Object of type 'Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestAdapterContext' cannot be converted to type 'Microsoft.VisualStudio.TestTools.UnitTesting.TestContext'". I have read that VS 2008 does not properly update the references to the UnitTestFramework when converting 2005 projects. My unit test was created in 2008, but it inherits from a base class built in VS 2005. Is this where my problem is coming from? Does my base class have to be rebuilt in 2008? I would rather not do this, as it will affect other projects. In other derived unit tests built in 2005, all we needed to do was comment out the TestContext property in the derived unit test. I have tried this in the VS 2008 unit test with no luck. I have also tried to "new" the TestContext property, which gives me a different runtime error. Any ideas?
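    For reference, the MSTest pattern the 2008 runner expects is a public read/write property of the 2008 TestContext type on the test class itself. One thing to try before rebuilding the base class is sketched below (UnitTestBase is a hypothetical stand-in for the VS2005-era base class; the more reliable fix is still to make both projects reference the same Microsoft.VisualStudio.QualityTools.UnitTestFramework version):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class MyUnitTest : UnitTestBase
        {
            private TestContext testContextInstance;

            // 'new' hides any stale TestContext property inherited from the
            // 2005-built base class, so the 2008 runner binds to this one.
            public new TestContext TestContext
            {
                get { return testContextInstance; }
                set { testContextInstance = value; }
            }
        }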

    Read the article

  • Problems with Castle DynamicProxy2 on .Net 3.5 SP1 on Win2003 Server

    - by Andrea Balducci
    I have an MVC + NHibernate ASP.NET application. On my dev machine (Windows 7 Enterprise) all works fine; when deployed on Windows Server 2003 (tried two different VMs and one physical machine) I get the following error. Can anyone help? I cannot explain this issue (I tried the same build on both, so I think it's a machine-configuration issue): Derived method 'set_ID' in type 'CustomerProxy75950979a2a048e889584c21696f7f1b' from assembly 'DynamicProxyGenAssembly2, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' cannot reduce access [TypeLoadException: Derived method 'set_ID' in type 'CustomerProxy75950979a2a048e889584c21696f7f1b' from assembly 'DynamicProxyGenAssembly2, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' cannot reduce access.] System.Reflection.Emit.TypeBuilder._TermCreateClass(Int32 handle, Module module) +0 System.Reflection.Emit.TypeBuilder.CreateTypeNoLock() +915 System.Reflection.Emit.TypeBuilder.CreateType() +108 Castle.DynamicProxy.Generators.Emitters.AbstractTypeEmitter.BuildType() +48 Castle.DynamicProxy.Generators.ClassProxyGenerator.GenerateCode(Type[] interfaces, ProxyGenerationOptions options) +3821 Castle.DynamicProxy.DefaultProxyBuilder.CreateClassProxy(Type classToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options) +84 Castle.DynamicProxy.ProxyGenerator.CreateClassProxy(Type classToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options, Object[] constructorArguments, IInterceptor[] interceptors) +92 Castle.DynamicProxy.ProxyGenerator.CreateClassProxy(Type classToProxy, Type[] additionalInterfacesToProxy, IInterceptor[] interceptors) +21 NHibernate.ByteCode.Castle.ProxyFactory.GetProxy(Object id, ISessionImplementor session) +283
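    One frequent cause of Castle's "cannot reduce access" TypeLoadException is that the proxied type or its virtual members are not public, so the generated DynamicProxyGenAssembly2 assembly cannot legally override them; whether that is what actually differs between the Windows 7 and Windows 2003 machines here is a guess on my part. If Customer (or its set_ID) is internal, exposing internals to the proxy assembly is the usual workaround:

        // In AssemblyInfo.cs of the assembly containing the Customer entity.
        // DynamicProxyGenAssembly2 is the name Castle uses for its emitted assembly.
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]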

    Read the article

  • Winform problem with autoscrolling of the ScrollableControl

    - by BobLim
    Hi guys, I have a problem with autoscrolling of the .NET ScrollableControl. I am using TabPage, which inherits from ScrollableControl in the class hierarchy. Every TabPage object hosts exactly one UserControl-derived control, which draws the landscape; there is no other control on the tab page. In my application, the user drags a file from Windows Explorer and drops it onto the TabPage. As more files are dragged and dropped, the UserControl-derived control expands to accommodate the drawing of the files and auto-scrolling becomes enabled. The problem I have is that when I mouse-click on the UserControl, the vertical and horizontal scrollbars scroll back to the (0,0) position. I want the vertical and horizontal scrollbars to remain at their original scrolled position whatever happens. I believe that when I mouse-click on the UserControl, it receives focus, and that triggers the auto-scrolling back to (0,0). Please help. Thanks in advance!
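    The jump to (0,0) is consistent with the focus-driven auto-scroll the poster suspects: ScrollableControl scrolls a newly focused child into view via its virtual ScrollToControl method (.NET 2.0 and later). One sketch of a workaround, assuming a TabPage subclass is acceptable, is to pin the current scroll position:

        using System.Drawing;
        using System.Windows.Forms;

        // Hypothetical subclass; use it in place of the plain TabPage.
        public class FixedScrollTabPage : TabPage
        {
            protected override Point ScrollToControl(Control activeControl)
            {
                // Returning the current position suppresses the automatic
                // scroll-into-view that fires when a child control gains focus.
                return this.AutoScrollPosition;
            }
        }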

    Read the article

  • [LINQ]InsertOnSubmit NullReferenceException

    - by Kurresmack
    Hello, I have a rather annoying issue with LINQ to SQL. I have created a class that is derived from a class in the DataContext. The problem is that as soon as I call "InsertOnSubmit(this);" with this derived class I get a NullReferenceException. I've seen some people with the same issue; however, they had used a custom constructor and solved the problem by chaining to ": this()", as in this thread: http://social.msdn.microsoft.com/Forums/en-US/linqprojectgeneral/thread/0cf1fccb-6398-4f16-920b-adef9dc4ac9f The difference is that I use a default constructor, which causes the base constructor to be called, so there should not be any problem! Could someone please help me with this? It's starting to get annoying! Thanks :)
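    A possible cause (an assumption on my part, since the question doesn't show the mapping): LINQ to SQL only recognises types declared in its mapping, so a hand-written subclass of a mapped entity must be registered with a discriminator before InsertOnSubmit can resolve its metadata. A sketch of the attribute-based form, with invented names:

        using System.Data.Linq.Mapping;

        [Table(Name = "Customers")]
        [InheritanceMapping(Code = "BASE", Type = typeof(Customer), IsDefault = true)]
        [InheritanceMapping(Code = "EXT",  Type = typeof(ExtendedCustomer))]
        public partial class Customer
        {
            [Column(IsDiscriminator = true)]
            public string CustomerType;   // "EXT" rows materialize as ExtendedCustomer
        }

        public class ExtendedCustomer : Customer { /* extra behaviour here */ }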

    Read the article

  • WP: AesManaged encryption vs. mcrypt_encrypt

    - by invalidusername
    I'm trying to synchronize my encryption and decryption methods between C# and PHP, but something seems to be going wrong. In the Windows Phone 7 SDK you can use AesManaged to encrypt your data. I use the following method:

        public static string EncryptA(string dataToEncrypt, string password, string salt)
        {
            AesManaged aes = null;
            MemoryStream memoryStream = null;
            CryptoStream cryptoStream = null;
            try
            {
                // Generate a Key based on a Password, Salt and HMACSHA1 pseudo-random number generator
                Rfc2898DeriveBytes rfc2898 = new Rfc2898DeriveBytes(password, Encoding.UTF8.GetBytes(salt));

                // Create AES algorithm with 256-bit key and 128-bit block size
                aes = new AesManaged();
                aes.Key = rfc2898.GetBytes(aes.KeySize / 8);
                aes.IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; // rfc2898.GetBytes(aes.BlockSize / 8);

                // To check my results against those of PHP
                var blaat1 = Convert.ToBase64String(aes.Key);
                var blaat2 = Convert.ToBase64String(aes.IV);

                // Create Memory and Crypto Streams
                memoryStream = new MemoryStream();
                cryptoStream = new CryptoStream(memoryStream, aes.CreateEncryptor(), CryptoStreamMode.Write);

                // Encrypt Data
                byte[] data = Encoding.Unicode.GetBytes(dataToEncrypt);
                cryptoStream.Write(data, 0, data.Length);
                cryptoStream.FlushFinalBlock();

                // Return Base64 String
                string result = Convert.ToBase64String(memoryStream.ToArray());
                return result;
            }
            finally
            {
                if (cryptoStream != null) cryptoStream.Close();
                if (memoryStream != null) memoryStream.Close();
                if (aes != null) aes.Clear();
            }
        }

    I solved the problem of generating the key: the key and IV are the same as those on the PHP end. But then the final step in the encryption goes wrong. Here is my PHP code:

        <?php
        function pbkdf2($p, $s, $c, $dk_len, $algo = 'sha1') {
            // experimentally determine h_len for the algorithm in question
            static $lengths;
            if (!isset($lengths[$algo])) {
                $lengths[$algo] = strlen(hash($algo, null, true));
            }
            $h_len = $lengths[$algo];
            if ($dk_len > (pow(2, 32) - 1) * $h_len) {
                return false; // derived key is too long
            } else {
                $l = ceil($dk_len / $h_len); // number of derived key blocks to compute
                $t = null;
                for ($i = 1; $i <= $l; $i++) {
                    $f = $u = hash_hmac($algo, $s . pack('N', $i), $p, true); // first iterate
                    for ($j = 1; $j < $c; $j++) {
                        $f ^= ($u = hash_hmac($algo, $u, $p, true)); // xor each iterate
                    }
                    $t .= $f; // concatenate blocks of the derived key
                }
                return substr($t, 0, $dk_len); // return the derived key of correct length
            }
        }

        $password = 'test';
        $salt = 'saltsalt';
        $text = "texttoencrypt";

        #$iv_size = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC);
        #echo $iv_size . '<br/>';
        #$iv = mcrypt_create_iv($iv_size, MCRYPT_RAND);
        #print_r (mcrypt_list_algorithms());

        $iv = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00";
        $key = pbkdf2($password, $salt, 1000, 32);

        echo 'key: ' . base64_encode($key) . '<br/>';
        echo 'iv: ' . base64_encode($iv) . '<br/>';
        echo '<br/><br/>';

        function addpadding($string, $blocksize = 32) {
            $len = strlen($string);
            $pad = $blocksize - ($len % $blocksize);
            $string .= str_repeat(chr($pad), $pad);
            return $string;
        }

        echo 'text: ' . $text . '<br/>';
        echo 'text: ' . addpadding($text) . '<br/>';
        // -- works till here

        $crypttext = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $key, $text, MCRYPT_MODE_CBC, $iv);
        echo '1.' . $crypttext . '<br/>';
        $crypttext = base64_encode($crypttext);
        echo '2.' . $crypttext . '<br/>';

        $crypttext = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $key, addpadding($text), MCRYPT_MODE_CBC, $iv);
        echo '1.' . $crypttext . '<br/>';
        $crypttext = base64_encode($crypttext);
        echo '2.' . $crypttext . '<br/>';
        ?>

    So, to point out: the key and IV look the same on both the .NET and PHP sides, but something seems to be going wrong in the final call to mcrypt_encrypt(). The end result, the encrypted string, differs from the .NET one. Can anybody tell me what I'm doing wrong? As far as I can see everything should be correct. Thank you! EDIT: Additional information on the AesManaged object in .NET: KeySize = 256, Mode = CBC, Padding = PKCS7.
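    Two mismatches stand out to me here (my reading of the code, not a confirmed answer). First, PHP's MCRYPT_RIJNDAEL_256 selects Rijndael with a 256-bit block size, whereas .NET's AesManaged is always AES with a fixed 128-bit block, so the two ciphers can never produce the same output regardless of the key; MCRYPT_RIJNDAEL_128 is the variant that equals AES. Second, the C# side encodes the plaintext as UTF-16 (Encoding.Unicode) while the PHP side feeds single-byte characters. A sketch of the C#-side alignment (with MCRYPT_RIJNDAEL_128 and PKCS7-style padding assumed on the PHP side):

        // Interop alignment for PHP mcrypt (MCRYPT_RIJNDAEL_128 == AES):
        aes.BlockSize = 128;                     // AesManaged's only valid block size anyway
        aes.Mode      = CipherMode.CBC;
        aes.Padding   = PaddingMode.PKCS7;       // pad identically on the PHP side
        byte[] data   = Encoding.UTF8.GetBytes(dataToEncrypt);   // not Encoding.Unicode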

    Read the article

  • Should Java IOException have been an unchecked RuntimeException?

    - by Derek Mahar
    Do you agree that the designers of Java class java.io.IOException should have made it an unchecked run-time exception derived from java.lang.RuntimeException instead of a checked exception derived only from java.lang.Exception? I think that class IOException should have been an unchecked exception because there is little that an application can do to resolve problems like file system errors. However, in When You Can't Throw An Exception, Elliotte Rusty Harold claims that most I/O errors are transient and so you can retry an I/O operation several times before giving up: For instance, an IOComparator might not take an I/O error lying down, but — because many I/O problems are transient — you can retry a few times, as shown in Listing 7: Is this generally the case? Can a Java application correct I/O errors or wait for the system to recover? If so, then it is reasonable for IOException to be checked, but if it is not the case, then IOException should be unchecked so that business logic can delegate handling this exception to a separate system error handler.

    Read the article

  • Static lib that links another static lib and qmake? Odd linking error

    - by Dan O
    I have two qt .pro files, both using the lib TEMPLATE and staticlib CONFIG. The first library (let's call it 'core') is a dependency of the second lib (I'll call it 'foo'). In fact, there's a class in foo that extends a class in core; I will call this class Bar. When I instantiate the class (which is defined and implemented in foo, but extends the class Bar from core) in another project (not a lib), I get the following linking error: /usr/bin/ld: Undefined symbols: Bar::Bar() Basically, the linker cannot find the base class from the core lib that is derived from in the foo lib, but ONLY when I instantiate the derived class in a third project that is using both libs. Is this behaviour expected? Regards, Dan O Update: I fixed it by directly invoking Bar's constructor in the third project before using the derived class... does anyone know why I need to do this?

    Read the article

  • Using interface classes and non-virtual interface idiom in C++

    - by andreas buykx
    Hi all, in C++ an interface can be expressed as a class with all of its methods pure virtual: class IFoo { public: virtual void method() = 0; }; Now I want to implement this interface with a hierarchy of classes: class FooBase : public IFoo // implement interface IFoo { public: void method(); // calls methodImpl; private: virtual void methodImpl(); }; For the class hierarchy I would like to use the non-virtual interface (NVI) idiom, to deny derived classes the possibility of overriding the common behaviour implemented in FooBase::method(); but it seems that all derived classes still have the opportunity to override FooBase::method(), because it is declared virtual in the interface class. Is my observation correct? And if so, are there other options to use both interface classes and the NVI idiom?

    Read the article

  • Subclassing QGraphicsItemGroup

    - by onurozcelik
    Hi everyone, I have a system with classes derived from QGraphicsWidget. I manage the derived-class objects in layouts on a QGraphicsScene. Now I need a compound item that contains two or more QGraphicsWidgets, and I also need to put that item inside my layout. So I chose QGraphicsItemGroup and wrote a class like this: class CompositeItem : public QGraphicsItemGroup, public QGraphicsLayoutItem { ... }; I only reimplemented the sizeHint function. When I add a CompositeItem instance to the layout it is not shown. What may cause this? Where did I go wrong?

    Read the article

  • Inheritance: when implementing an interface that defines a base-class-typed property, why can't the implementing class use a derived type?

    - by Deepak
    Let's create some interfaces:

        public interface ITimeEventHandler { string Open(); }
        public interface IJobTimeEventHandler : ITimeEventHandler { string DeleteJob(); }
        public interface IActivityTimeEventHandler : ITimeEventHandler { string DeleteActivity(); }
        public interface ITimeEvent { ITimeEventHandler Handler { get; } }

    Another interface:

        public interface IJobTimeEvent : ITimeEvent { int JobID { get; } }

    Now create a class (this is the part that does not compile):

        public class JobTimeEvent : IJobTimeEvent
        {
            public int JobID = 0;
            public IJobTimeEventHandler Handler = null;
        }

    My question is: when implementing an interface that declares a member of a base type, why can't the implementing class expose that member as a derived type? For example, in class JobTimeEvent, ITimeEvent requires a member of type ITimeEventHandler, so why is IJobTimeEventHandler, which derives from ITimeEventHandler, not allowed?
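    C# has no return-type covariance for interface implementations: the member's type must match the interface declaration exactly. A common workaround, sketched with the names above, is to implement the interface member explicitly and forward it to a strongly typed property:

        public class JobTimeEvent : IJobTimeEvent
        {
            public int JobID { get; set; }

            // Strongly typed property for consumers of JobTimeEvent
            public IJobTimeEventHandler Handler { get; set; }

            // Explicit implementation satisfies ITimeEvent with the base type
            ITimeEventHandler ITimeEvent.Handler
            {
                get { return Handler; }
            }
        }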

    Read the article

  • How to use reflection to get a default constructor?

    - by Qwertie
    I am writing a library that generates derived classes of abstract classes dynamically at runtime. The constructor of the derived class needs the ConstructorInfo of the base class constructor so that it can invoke it. However, for some reason Type.GetConstructor() returns null. For example: abstract class Test { public abstract void F(); } public static void Main(string[] args) { ConstructorInfo constructor = typeof(Test).GetConstructor( BindingFlags.NonPublic | BindingFlags.Public, null, System.Type.EmptyTypes, null); // returns null! } Note that GetConstructor returns null even if I explicitly declare a constructor in Test, and even if Test is not abstract.
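    The likely culprit is the BindingFlags argument: without BindingFlags.Instance, GetConstructor matches no instance constructors at all and returns null, even though every abstract class has a compiler-generated default constructor. A minimal sketch of the fix:

        ConstructorInfo constructor = typeof(Test).GetConstructor(
            BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public,
            null, Type.EmptyTypes, null);
        // BindingFlags.NonPublic is needed too: the compiler-generated default
        // constructor of an abstract class is protected, not public.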

    Read the article

  • Help with these warnings. [inheritance].

    - by sil3nt
    Hello there. I have a set of code which mimics a basic library cataloguing system. There is a base class named Items, in which the general id, title and year variables are defined, and three derived classes (DVD, Book and CD). Base: [Items]. Derived: [DVD, Book, CD]. The program runs, however I get the following warnings and I'm not sure how to fix them: "C:\Program Files\gcc\bin/g++" -Os -mconsole -g -Wall -Wshadow -fno-common mainA4.cpp -o mainA4.exe In file included from mainA4.cpp:5: a4.h: In constructor `DVD::DVD(int, std::string, int, std::string)': a4.h:28: warning: `DVD::director' will be initialized after a4.h:32: warning: base `Items' a4.h:32: warning: when initialized here a4.h: In constructor `Book::Book(int, std::string, int, std::string, int)': a4.h:48: warning: `Book::numPages' will be initialized after a4.h:52: warning: base `Items' a4.h:52: warning: when initialized here a4.h: In constructor `CD::CD(int, std::string, int, std::string, int)': a4.h:66: warning: `CD::numSongs' will be initialized after a4.h:70: warning: base `Items' a4.h:70: warning: when initialized here Exit code: 0

    Read the article

  • Alternative to c++ static virtual methods

    - by Jaime Pardos
    In C++ it is not possible to declare a static virtual function, nor to cast a non-static member function to a C-style function pointer. Now, I have a plain ol' C SDK that uses function pointers heavily. I have to fill a structure with several function pointers. I was planning to use an abstract class with a bunch of static pure virtual methods, and redefine them in derived classes and fill the structure with them. It wasn't until then that I realized that static virtuals are not allowed in C++. Is there any good alternative? The best I can think of is defining some pure virtual methods GetFuncA(), GetFuncB(), ... and some static members FuncA()/FuncB() in each derived class, which would be returned by the GetFuncX() methods. Then a function in the abstract class would call those functions to get the pointers and fill the structure.

    Read the article

  • Interface and base class mix, the right way to implement this

    - by Lerxst
    I have some user controls for which I want to specify properties and methods. They inherit from a base class, because they all have properties such as "Foo" and "Bar", and the reason I used a base class is so that I don't have to manually implement all of these properties in each derived class. However, I want to have a method that exists only in the derived classes, not in the base class, as the base class doesn't know how to "do" the method, so I am thinking of using an interface for this. If I put it in the base class, I have to define some body that returns a value (which would be invalid), and always make sure that the overriding method does not call the base method. Is the right way to go about this to use both the base class and an interface to expose the method? It seems very round-about, but every way I think about doing it seems wrong... Let me know if the question is not clear. It's probably a dumb question, but I want to do this right.
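    For what it's worth, one standard way out of this dilemma is to declare the method abstract in the base class: the base then has no body to accidentally call, and every derived control is forced to supply its own implementation, with no parallel interface needed. A sketch with invented names:

        using System.Windows.Forms;

        public abstract class MapControlBase : UserControl
        {
            // Common properties implemented once, as in the question
            public string Foo { get; set; }
            public string Bar { get; set; }

            // No body here; the base class admits it doesn't know how to do this.
            public abstract string Render();
        }

        public class ChartControl : MapControlBase
        {
            public override string Render()
            {
                return "chart for " + Foo;
            }
        }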

    Read the article
