Search Results

Search found 2479 results on 100 pages for 'or operator'.

Page 37 of 100

  • std::bind overload resolution

    - by bpw1621
    The following code works fine:

        #include <functional>
        using namespace std;
        using namespace std::placeholders;

        class A
        {
            int operator()( int i, int j ) { return i - j; }
        };

        A a;
        auto aBind = bind( &A::operator(), ref(a), _2, _1 );

    This does not:

        #include <functional>
        using namespace std;
        using namespace std::placeholders;

        class A
        {
            int operator()( int i, int j ) { return i - j; }
            int operator()( int i ) { return -i; }
        };

        A a;
        auto aBind = bind( &A::operator(), ref(a), _2, _1 );

    I have tried playing around with the syntax to explicitly resolve which function I want in the code that does not work, without luck so far. How do I write the bind line so that it chooses the call that takes the two integer arguments?
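
    One standard way to disambiguate an overloaded member function is to cast the pointer-to-member to the exact signature you want; a minimal sketch:

        // Select the (int, int) overload explicitly by naming its full type:
        auto aBind = bind(
            static_cast<int (A::*)(int, int)>( &A::operator() ),
            ref(a), _2, _1 );

    Note that operator() must also be accessible (i.e. public) for the bind expression to compile outside the class.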

    Read the article

  • conversions in C++

    - by lego69
    I have this snippet of code. The header:

        class A {
        private:
            int player;
        public:
            A(int initPlayer = 0);
            A(const A&);
            A& operator=(const A&);
            ~A();
            void foo() const;
            friend A& operator=(A& i, const A& member);
        };

    The operator=:

        A& operator=(A& i, const A& member){
            i(member.player);
            return i;
        }

    and I have these rows in my code:

        i = *pa1;
        A *pa1 = new A(a2);

    (i was originally an int.) I get the error "'operator=' must be a non-static member function". How can I fix it? Thanks in advance.
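
    For reference, copy assignment cannot be a free or friend function in C++; it has to be a member. A minimal sketch of the member-function form:

        // operator= as a member: the left-hand operand is *this.
        A& A::operator=(const A& other)
        {
            player = other.player;  // copy the state
            return *this;           // return *this so assignments can chain
        }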

    Read the article

  • C++0x Smart Pointer Comparisons: Inconsistent, what's the rationale?

    - by GManNickG
    In C++0x (n3126), smart pointers can be compared, both relationally and for equality. However, the way this is done seems inconsistent to me. For example, shared_ptr defines operator< to be equivalent to:

        template <typename T, typename U>
        bool operator<(const shared_ptr<T>& a, const shared_ptr<U>& b)
        {
            return std::less<void*>()(a.get(), b.get());
        }

    Using std::less provides total ordering with respect to pointer values, unlike a vanilla relational pointer comparison, which is unspecified. However, unique_ptr defines the same operator as:

        template <typename T1, typename D1, typename T2, typename D2>
        bool operator<(const unique_ptr<T1, D1>& a, const unique_ptr<T2, D2>& b)
        {
            return a.get() < b.get();
        }

    It also defines the other relational operators in a similar fashion. Why the change in method and "completeness"? That is, why does shared_ptr use std::less while unique_ptr uses the built-in operator<? And why doesn't shared_ptr also provide the other relational operators, like unique_ptr?

    I can understand the rationale behind either choice:

    - With respect to method: it represents a pointer, so just use the built-in pointer operators; versus: it needs to be usable within an associative container, so provide total ordering (like a vanilla pointer would get with the default std::less predicate template argument).
    - With respect to completeness: it represents a pointer, so provide all the same comparisons as a pointer; versus: it is a class type and only needs to be less-than comparable to be used in an associative container, so only provide that requirement.

    But I don't see why the choice changes depending on the smart pointer type. What am I missing?

    Bonus/related: std::shared_ptr seems to have followed from boost::shared_ptr, and the latter omits the other relational operators "by design" (and so std::shared_ptr does too). Why is this?
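
    For context on why std::less matters here: applying the built-in < to two pointers that don't point into the same array is unspecified, whereas std::less is required by the standard to yield a strict total order. A minimal illustration:

        #include <functional>

        // bool b = p < q;  // unspecified result for unrelated pointers
        bool before(void* p, void* q)
        {
            // Guaranteed total order; safe for std::set / std::map keys.
            return std::less<void*>()(p, q);
        }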

    Read the article

  • polymorphism in C++

    - by user550413
    I am trying to implement these two functions (their intended behavior is explained in the comments below):

        Number& DoubleClass::operator+( Number& x );
        Number& IntClass::operator+( Number& x );

    I am not sure how to do it:

        class IntClass;
        class DoubleClass;

        class Number {
            // return a Number object that's the result of x+this, when x is
            // either IntClass or DoubleClass
            virtual Number& operator+(Number& x) = 0;
        };

        class IntClass : public Number {
        private:
            int my_number;
        public:
            // return a Number object that's the result of x+this.
            // The actual class of the returned object depends on x.
            // If x is IntClass, then the result is IntClass.
            // If x is DoubleClass, then the result is DoubleClass.
            Number& operator+(Number& x);
        };

        class DoubleClass : public Number {
        private:
            double my_number;
        public:
            // return a DoubleClass object that's the result of x+this.
            // This should work if x is either IntClass or DoubleClass
            Number& operator+( Number& x);
        };
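
    The conventional solution to this kind of type-dependent result is double dispatch. A sketch under the question's constraints, with hypothetical addInt/addDouble helper names (note that returning a reference to a freshly allocated object, as the given signatures force, leaks memory unless the caller takes ownership):

        class IntClass;
        class DoubleClass;

        class Number {
        public:
            virtual ~Number() {}
            virtual Number& operator+(Number& x) = 0;
            virtual Number& addInt(int lhs) = 0;       // dispatch on the other operand
            virtual Number& addDouble(double lhs) = 0;
        };

        class IntClass : public Number {
            int my_number;
        public:
            explicit IntClass(int v) : my_number(v) {}
            // Second dispatch happens inside x, which knows its own type:
            Number& operator+(Number& x) { return x.addInt(my_number); }
            Number& addInt(int lhs)      { return *new IntClass(lhs + my_number); }
            Number& addDouble(double lhs); // needs DoubleClass complete; defined below
        };

        class DoubleClass : public Number {
            double my_number;
        public:
            explicit DoubleClass(double v) : my_number(v) {}
            Number& operator+(Number& x)    { return x.addDouble(my_number); }
            Number& addInt(int lhs)         { return *new DoubleClass(lhs + my_number); }
            Number& addDouble(double lhs)   { return *new DoubleClass(lhs + my_number); }
        };

        inline Number& IntClass::addDouble(double lhs)
        {
            return *new DoubleClass(lhs + my_number);
        }

    With this, IntClass + IntClass yields an IntClass, and any combination involving a DoubleClass yields a DoubleClass, as the comments require.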

    Read the article

  • Input Iterator for a shared_ptr

    - by Baz
    I have an iterator which contains the following functions:

        ...
        T &operator*() { return *_i; }
        std::shared_ptr<T> operator->() { return _i; }
        private:
        std::shared_ptr<T> _i;
        ...

    How do I get a shared pointer to the internally stored _i?

        std::shared_ptr<Type> item = ???

    Should I do:

        MyInterfaceIterator<Type> i;
        std::shared_ptr<Type> item = i.operator->();

    Or should I rewrite operator*()?
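
    One conventional answer (a sketch, with a hypothetical accessor name): keep operator->() returning a raw pointer, as iterator users expect, and expose shared ownership through a named member instead:

        T* operator->() const { return _i.get(); }        // usual iterator contract
        std::shared_ptr<T> shared() const { return _i; }  // explicit shared access

        // usage:
        //   MyInterfaceIterator<Type> i;
        //   std::shared_ptr<Type> item = i.shared();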

    Read the article

  • Custom InputIterator for Boost graph (BGL)

    - by Shadow
    Hi,

    I have a graph with custom properties on the vertices and edges. I now want to create a copy of this graph, but I don't want the vertices to be as complex as in the original. By this I mean that it would suffice for the vertices to have the same indices (vertex_index_t) as they do in the original graph.

    Instead of doing the copying by hand, I wanted to use the copy-functionality of boost::adjacency_list (see http://www.boost.org/doc/libs/1_37_0/libs/graph/doc/adjacency_list.html):

        template <class EdgeIterator>
        adjacency_list(EdgeIterator first, EdgeIterator last,
                       vertices_size_type n,
                       edges_size_type m = 0,
                       const GraphProperty& p = GraphProperty())

    The description there says:

        The EdgeIterator must be a model of InputIterator. The value type of the
        EdgeIterator must be a std::pair, where the type in the pair is an integer
        type. The integers will correspond to vertices, and they must all fall in
        the range of [0, n).

    Unfortunately I have to admit that I don't quite get how to define an EdgeIterator that is a model of InputIterator. Here's what I've succeeded with so far:

        template< class EdgeIterator, class Edge >
        class MyEdgeIterator // : public input_iterator< std::pair<int, int> >
        {
        public:
            MyEdgeIterator() {};
            MyEdgeIterator(EdgeIterator& rhs) : actual_edge_it_(rhs) {};
            MyEdgeIterator(const MyEdgeIterator& to_copy) {};

            bool operator==(const MyEdgeIterator& to_compare)
            {
                return actual_edge_it_ == to_compare.actual_edge_it_;
            }
            bool operator!=(const MyEdgeIterator& to_compare)
            {
                return !(*this == to_compare);
            }
            Edge operator*() const { return *actual_edge_it_; }
            const MyEdgeIterator* operator->() const;
            MyEdgeIterator& operator++() { ++actual_edge_it_; return *this; }
            MyEdgeIterator operator++(int)
            {
                MyEdgeIterator<EdgeIterator, Edge> tmp = *this;
                ++*this;
                return tmp;
            }

        private:
            EdgeIterator& actual_edge_it_;
        };

    However, this doesn't work as it is supposed to, and I've run out of clues. So, how do I define the appropriate InputIterator?
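
    A sketch of the same adaptor with the two likely fixes: the wrapped iterator is stored by value (a reference member leaves the default-constructed and copy-constructed objects unbound and makes the adaptor non-assignable, which InputIterator requires), and the iterator typedefs that generic code looks up are provided. It assumes *it_ is convertible to std::pair<int, int>, as the BGL constructor expects:

        #include <iterator>
        #include <utility>

        template <class EdgeIterator>
        class MyEdgeIterator
        {
        public:
            // Traits required of a model of InputIterator:
            typedef std::input_iterator_tag iterator_category;
            typedef std::pair<int, int>     value_type;
            typedef std::ptrdiff_t          difference_type;
            typedef const value_type*       pointer;
            typedef value_type              reference;

            MyEdgeIterator() {}
            explicit MyEdgeIterator(EdgeIterator it) : it_(it) {}

            value_type operator*() const { return *it_; } // assumes pair<int,int>
            MyEdgeIterator& operator++() { ++it_; return *this; }
            MyEdgeIterator operator++(int)
            {
                MyEdgeIterator tmp(*this);
                ++it_;
                return tmp;
            }
            bool operator==(const MyEdgeIterator& rhs) const { return it_ == rhs.it_; }
            bool operator!=(const MyEdgeIterator& rhs) const { return !(*this == rhs); }

        private:
            EdgeIterator it_;  // by value, not by reference
        };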

    Read the article

  • Field to display Previous 30 Day Total

    - by whytheq
    I've got this table:

        CREATE TABLE #Data1 (
            [Market]   VARCHAR(100) NOT NULL,
            [Operator] VARCHAR(100) NOT NULL,
            [Date]     DATETIME NOT NULL,
            [Measure]  VARCHAR(100) NOT NULL,
            [Amount]   NUMERIC(36,10) NOT NULL,
            --new calculated field
            [DailyAvg_30days] NUMERIC(38,6) NULL DEFAULT 0
        )

    I've populated all the fields apart from DailyAvg_30days. This field needs to show the total for the preceding 30 days, inclusive. For example:

    1. If Date for a particular record is 2nd Dec, then it will be the total for the period 3rd Nov - 2nd Dec inclusive.
    2. If Date for a particular record is 1st Dec, then it will be the total for the period 2nd Nov - 1st Dec inclusive.

    My attempt to find these totals before updating the table is as follows:

        SELECT a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount],
               [DailyAvg_30days] = SUM(b.[Amount])
        FROM #Data1 a
        INNER JOIN #Data1 b
            ON  a.[Market]   = b.[Market]
            AND a.[Operator] = b.[Operator]
            AND a.[Measure]  = b.[Measure]
            AND a.[Date] >= b.[Date]-30
            AND a.[Date] <= b.[Date]
        GROUP BY a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount]
        ORDER BY 1,2,4,3

    Is this a valid approach, or do I need to approach this from a different angle?
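
    One thing worth noting: as written, the date conditions make b range over the 30 days after a.[Date] (b.[Date] between a.[Date] and a.[Date]+30), not before it. A sketch of the join rewritten so each row picks up the preceding 30 days inclusive, assuming the DATETIME values fall at midnight:

        SELECT a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount],
               [DailyAvg_30days] = SUM(b.[Amount])
        FROM #Data1 a
        INNER JOIN #Data1 b
            ON  a.[Market]   = b.[Market]
            AND a.[Operator] = b.[Operator]
            AND a.[Measure]  = b.[Measure]
            -- b falls within the 30-day window ending at (and including) a.[Date]
            AND b.[Date] >  DATEADD(DAY, -30, a.[Date])
            AND b.[Date] <= a.[Date]
        GROUP BY a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount]
        ORDER BY 1,2,4,3;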

    Read the article

  • CodePlex Daily Summary for Wednesday, October 24, 2012

    Popular Releases

    - fastJSON: v2.0.9: added support for root-level DataSet and DataTable deserialization (you have to do ToObject<DataSet>(...)); added DataSet tests.
    - Info Gempa BMKG: Info Gempa BMKG v 1.0.0.3: First release of the application for reading earthquake information from the BMKG data feeder.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.72: Fix for Issue #18819 - bad optimization of the return/assign operator.
    - DNN Module Creator: 01.01.00: Updated templates for DNN7 (i.e. DAL2, Web Service API). Numerous bug fixes and enhancements.
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.390: Version 2.5.0.390 (Release Candidate). This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog legend: [B] breaking change; [O] marked member as obsolete. WAF: fix recent-file-list remove issue. WAF: minor code improvements. BookLibrary: fix Blend design time support o...
    - ltxml.js - LINQ to XML for JavaScript: 1.0 - Beta 1: First release!
    - ZXMAK2: Version 2.6.6.0: fix refresh of the debugger after opening an RZX file; add NoFlic video filter.
    - EPiServer CMS ElencySolutions.MultipleProperty: ElencySolutions.MultipleProperty v1.6.3: The ElencySolutions.MultipleProperty property controls have been developed by Lee Crowe, a technical developer at Fortune Cookie (London). Installation notes: the property copy page can be locked down by adding the following location element; the path will differ depending on whether you use the embedded or non-embedded resource version. When installing the NuGet package these will be added automatically, for example: Embedded: <location path="util/ElencySolutionsMultipleP...
    - Fiskalizacija za developere: FiskalizacijaDev 1.1: This is the first upgrade of the project since its initial release - we have added several features, some because we noticed they would be useful, some based on your suggestions - thanks to everyone who got involved :) The items resolved in v1.1: 1. Allow the XML document sent to CIS to be saved to a file (http://fiskalizacija.codeplex.com/workitem/612) 2. Support for COM DLL (VB6) (http://fiskalizacija.codeplex.com/workitem/613) 3. Support for DOS (unu...
    - MCEBuddy 2.x: MCEBuddy 2.3.4: Changelog for 2.3.4 (32bit and 64bit): 1. Fixed a bug introduced in 2.3.3 that would cause HD recordings and recordings with multiple audio channels to fail. 2. Updated the <encoder-unsupported> option to compare against all audio tracks for videos with multiple audio tracks. 3. Fixed a bug with SRT and EDL files where, when the input and output directory are the same, the files were not preserved.
    - Liberty: v3.4.0.0 Release 20th October 2012: Change log: Added: Halo 4 support (invincibility, ammo editing); Reach: a warning dialog now shows up when you first attempt to swap a weapon. Fixed: a few minor bugs.
    - FoxOS: Stable Fox: Stable Fox version 0.0.0.1. Requires .NET Framework 3.5 or higher.
    - Doctor Reg: Doctor Reg V1.0: Doctor Reg V1.0 PT-PT.
    - kv: kv 1.0: if it were any more stable it would be a barn.
    - LINQ for C++: cpplinq-20121020: LINQ for C++ is an attempt to bring LINQ-like list manipulation to C++11. This release includes just the source code. What's new in this release: join range operator (inner joins two ranges using a key selector); reverse range operator; distinct range operator; union_with range operator; intersect_with range operator; except range operator; concat range operator; sequence_equal range aggregator; to_lookup range aggregator. This is a sample of how to use cpplinq: #include "cpplinq.h...
    - helferlein_Form: 02.03.05: Requirements: .Net 4.0; DotNetNuke 05.06.07 or higher (maybe it works with lower versions, but I developed it on this one and tested it on DotNetNuke 06.02.00 as well); helferlein_BabelFish version 01.01.03 - please upgrade this first! Issues fixed: fixed issue where all users from all portals were listed as Host users in the sender options (E-Mail Options - Sender - All Users Listed); registered postback button for the Excel export on the form-submission edit control. Changed behaviour: due to some mis...
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.68.1: ClosedXML now resolves formulas! Yes, it finally happened. If you call cell.Value and it has a formula, the library will try to evaluate the formula and give you the result. For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...
    - Orchard Project: Orchard 1.6 RC: RELEASE NOTES: This is the release candidate version of Orchard 1.6. You should use this version to prepare your current developments for the upcoming final release, and report problems. Please read our release notes for Orchard 1.6 RC: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you wil...
    - Rawr: Rawr 5.0.1: This is the downloadable WPF version of Rawr! For the web-based version, see http://elitistjerks.com/rawr.php You can find the version notes at http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP): we now have an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    - Yahoo! UI Library: YUI Compressor for .Net: Version 2.1.1.0 - Sartha (BugFix): reverted the embedding of the 2x assemblies.

    New Projects

    - ASP.NET DatePicker (Persian/Gregorian): This is just another DatePicker for ASP.NET that supports both the Persian (Jalali/Shamsi/Solar) and Gregorian calendars.
    - AspectMap: AspectMap is an Aspect Oriented framework built on top of the StructureMap IoC framework.
    - Building a Pinterest like Image Crawler: More about it here: http://www.alexpeta.ro/article/building-a-pinterest-like-image-crawler
    - CRM 2011 client scripting TypeScript definition file: xrm crm 2011 typescript javascript definition file.
    - DarkSky Commerce: DarkSky Commerce is an Orchard module that provides a generic and extensible set of core commerce features and services.
    - DarkSky Learning: An Orchard module providing e-learning authoring tools and engines.
    - Dusk Consulting: Dusk Consulting provides automation scripts to help IT professionals and developers alike.
    - Edi: Edi is an IDE-based editor that is currently focused on text-based editing (other editors may follow). This project is based on AvalonDock 2.0 and AvalonEdit.
    - Entity Framework Extensions: This project includes extensions for the Entity Framework, including a unit of work pattern with event support, a repository pattern with event support, and so on.
    - EPiBoost (EPiServer 7 MVC Toolkit): Coming soon.
    - Extended Guid: Easily create version 5 GUIDs. Converts an arbitrary string to a GUID given a specific work area, or namespace.
    - FileToText: For creating lists of files.
    - FilmBook, Film and Photo Archival Management: FilmBook is a simple image gallery and organization application. Its goal is to help identify the location of negatives in an archival page.
    - FridgeBoard: This project is being developed by computer-science students. We take no responsibility for its poor quality.
    - GIV_P1: tests.
    - IQTool: Provides a set of tools to generate reports capturing the state of a server. This is written in PowerShell to allow easy modification/update of the scripts.
    - Leaving Application System: have a good time.
    - Leaving Management System: optimize the workflow of staff work-attendance management.
    - ltxml.js - LINQ to XML for JavaScript: ltxml.js is an XML processing library for JavaScript that is inspired by and largely similar to the LINQ to XML library for .NET.
    - Microsoft .NET SDK For Hadoop: A collection of APIs, tools and samples for using .NET with Apache Hadoop.
    - MyLib: This is my personal library of various tidbits that I have been using regularly. Includes a generic repository with EF and MongoDB implementations.
    - Parlez MVC: A multi-lingual implementation for the ASP.NET MVC 4 framework to make it easier to build websites that are viewable in different languages.
    - Perfect Sport Nutrition: Project developed by the software development company SoftwareHC E.I.R.L.
    - Scale Model Database: This project is a work of databases.
    - Sliding Block Puzzle: This is a simple (not very good) game for Windows Phone 7.1 (Mango). The point of it is to show off how to do different things in WP7.1.
    - SmallBasic Extension Manager: SXM, or the SmallBasic Extension Manager, is a program intended for use with the Small Basic programming language from Microsoft.
    - SQL Azure Federation Backup to Azure Blob Storage using Azure Worker: An Azure Worker project that will back up any number of Azure SQL databases to any selected Azure Blob containers. Config in XML. Automatic 7z compression.
    - TeF: A framework for running tests with plenty of features and ease of use.
    - Testge: test.
    - TwittOracle: This project is a prototype project intended to be submitted in a competition. The details will be updated later as things are finalized.
    - uMembership - Alternative Umbraco Member Api: uMembership is an alternative and faster way of dealing with Members in Umbraco when you have 1000's or even 100
    - Variedades Silverlight 5: Variedades Silverlight 5.
    - Website đặt tour du lịch: ASP.NET course project: building a tour-booking website based on the MVC model.
    - WPF About Box: WPF About Box is a simple and free about box for WPF using the MVVM pattern. Several properties can be set; some properties are read from the assembly automatically.
    - WTUnion: WTUnion.
    - XML Cleaner: Console app for batch-cleaning XML files containing illegal characters. It can be useful for cleaning XML files before importing them into SOLR or something similar.
    - XmlToObject: XmlToObject is a .NET library for object serialization with use of XPath expressions applied to class fields and properties via custom attributes.
    - XNAGalcon: This project will create a Galcon game for XNA version

    Read the article

  • Does ModSecurity 2.7.1 work with ASP.NET MVC 3?

    - by autonomatt
    I'm trying to get ModSecurity 2.7.1 to work with an ASP.NET MVC 3 website. The installation ran without errors and, looking at the event log, ModSecurity is starting up successfully. I am using the modsecurity.conf-recommended file to set the basic rules.

    The problem I'm having is that whenever I POST some form data, it doesn't get through to the controller action (or model binder). I have SecRuleEngine set to DetectionOnly. I have SecRequestBodyAccess set to On. With these settings, the body of the POST never reaches the controller action. If I set SecRequestBodyAccess to Off it works, so it's definitely something to do with how ModSecurity forwards the body data.

    The ModSecurity debug log shows the following (looks to me as if everything passed through):

        Second phase starting (dcfg 94b750).
        Input filter: Reading request body.
        Adding request argument (BODY): name "[0].IsSelected", value "on"
        Adding request argument (BODY): name "[0].Quantity", value "1"
        Adding request argument (BODY): name "[0].VariantSku", value "047861"
        Adding request argument (BODY): name "[1].Quantity", value "0"
        Adding request argument (BODY): name "[1].VariantSku", value "047862"
        Input filter: Completed receiving request body (length 115).
        Starting phase REQUEST_BODY.
        Recipe: Invoking rule 94c620; [file "*********************"] [line "54"] [id "200001"].
        Rule 94c620: SecRule "REQBODY_ERROR" "!@eq 0" "phase:2,auditlog,id:200001,t:none,log,deny,status:400,msg:'Failed to parse request body.',logdata:%{reqbody_error_msg},severity:2"
        Transformation completed in 0 usec.
        Executing operator "!eq" with param "0" against REQBODY_ERROR.
        Operator completed in 0 usec.
        Rule returned 0.
        Recipe: Invoking rule 5549c38; [file "*********************"] [line "75"] [id "200002"].
        Rule 5549c38: SecRule "MULTIPART_STRICT_ERROR" "!@eq 0" "phase:2,auditlog,id:200002,t:none,log,deny,status:44,msg:'Multipart request body failed strict validation: PE %{REQBODY_PROCESSOR_ERROR}, BQ %{MULTIPART_BOUNDARY_QUOTED}, BW %{MULTIPART_BOUNDARY_WHITESPACE}, DB %{MULTIPART_DATA_BEFORE}, DA %{MULTIPART_DATA_AFTER}, HF %{MULTIPART_HEADER_FOLDING}, LF %{MULTIPART_LF_LINE}, SM %{MULTIPART_MISSING_SEMICOLON}, IQ %{MULTIPART_INVALID_QUOTING}, IP %{MULTIPART_INVALID_PART}, IH %{MULTIPART_INVALID_HEADER_FOLDING}, FL %{MULTIPART_FILE_LIMIT_EXCEEDED}'"
        Transformation completed in 0 usec.
        Executing operator "!eq" with param "0" against MULTIPART_STRICT_ERROR.
        Operator completed in 0 usec.
        Rule returned 0.
        Recipe: Invoking rule 554bd70; [file "********************"] [line "80"] [id "200003"].
        Rule 554bd70: SecRule "MULTIPART_UNMATCHED_BOUNDARY" "!@eq 0" "phase:2,auditlog,id:200003,t:none,log,deny,status:44,msg:'Multipart parser detected a possible unmatched boundary.'"
        Transformation completed in 0 usec.
        Executing operator "!eq" with param "0" against MULTIPART_UNMATCHED_BOUNDARY.
        Operator completed in 0 usec.
        Rule returned 0.
        Recipe: Invoking rule 554cbe0; [file "*********************************"] [line "94"] [id "200004"].
        Rule 554cbe0: SecRule "TX:/^MSC_/" "!@streq 0" "phase:2,log,auditlog,id:200004,t:none,deny,msg:'ModSecurity internal error flagged: %{MATCHED_VAR_NAME}'"
        Rule returned 0.
        Hook insert_filter: Adding input forwarding filter (r 5541fc0).
        Hook insert_filter: Adding output filter (r 5541fc0).
        Initialising logging.
        Starting phase LOGGING.
        Recording persistent data took 0 microseconds.
        Audit log: Ignoring a non-relevant request.

    I can't see anything unusual in Fiddler. I'm using a ViewModel in the parameters of my action. No data is bound if SecRequestBodyAccess is set to On. I'm even logging all the Request.Form keys and values via log4net, but I'm not getting any values there either.

    I'm starting to wonder if ModSecurity actually works with ASP.NET MVC, or if there is some conflict between the ModSecurity HTTP module and the model binder kicking in. Does anyone have any suggestions, or can anyone confirm they have ModSecurity working with an ASP.NET MVC website?
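
    For reference, the two directives under discussion, with the values described above (these are standard ModSecurity directives; the comments are mine):

        # Evaluate rules but never block - nothing should be denied in this mode
        SecRuleEngine DetectionOnly

        # Buffer and inspect request bodies - the setting that triggers the problem
        SecRequestBodyAccess On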

    Read the article

  • ssh without password does not work for some users

    - by joshxdr
    I have a new RHEL4 Linux box that I am using to copy data to old Solaris 2.6 and RHEL3 Linux boxes with scp. I have found that, with the same setup, it works for some users but not for others.

    For user jane, this works fine:

        jane@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/osborjo/.ssh/identity
        debug1: Offering public key: /mnt/home/osborjo/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: read PEM private key done: type RSA
        debug1: Authentication succeeded (publickey).

    For user jack, it does not:

        jack@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/oper1/.ssh/identity
        debug1: Offering public key: /mnt/home/oper1/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password,keyboard-interactive

    I have looked at the permissions for all the keys and files; they look the same. Since I am using home directories mounted by NFS, the keys for both the remote host and the local host are in the same directory. This is how things look for jane:

        jane@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jane operator  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jane operator 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jane operator  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jane operator 1205 Jan 27 16:46 known_hosts

    For user jack:

        jack@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jack engineer  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jack engineer 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jack engineer  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jack engineer 1205 Jan 27 16:46 known_hosts

    As a last-ditch effort, I copied the authorized_keys, id_rsa, and id_rsa.pub from jane to jack, and changed the username in authorized_keys and id_rsa.pub with vi. It still did not work. It seems there is something different between the two users, but I cannot figure out what it is.
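
    A typical first check in this situation (a sketch): with StrictModes enabled, sshd silently rejects public keys when the user's home directory, ~/.ssh, or authorized_keys on the remote host is group- or world-writable, and NFS-mounted homes with different group setups often differ in exactly this way even when the .ssh listings look identical.

        # On the remote host, for the failing user:
        ls -ld "$HOME" "$HOME/.ssh"      # neither may be group/world-writable
        chmod go-w "$HOME"
        chmod 700 "$HOME/.ssh"
        chmod 600 "$HOME/.ssh/authorized_keys"

        # Watch the server's log while attempting to log in (RHEL):
        tail -f /var/log/secure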

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz)

    Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function.

        SELECT GETDATE(), SomeDatetimeColumn
        FROM dbo.SomeTable;

    There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are datetime fields – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on.

    But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”)

    I’m going to consider the three main types of user-defined functions in SQL Server:

    - Scalar
    - Inline Table-Valued
    - Multi-statement Table-Valued

    I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions.

    Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is.

    Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database.
        CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int)
        RETURNS TABLE
        AS
        RETURN (
            SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
        );
        GO

        CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int)
        RETURNS @results TABLE (
            EmployeeLogin nvarchar(512),
            OrderDate datetime,
            SalesOrderID int
        )
        AS
        BEGIN
            INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)
            SELECT e.LoginID, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
            ;
            RETURN
        END;
        GO

    You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter.

    Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing...

    To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch!

    But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed.

    A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure.
    It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, but yet this is part of the pain associated with this procedural function situation.

    The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog.

    Now how about scalar functions... Suppose we wanted a scalar function to return the count of these.

        CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int)
        RETURNS int
        AS
        BEGIN
            RETURN (
                SELECT COUNT(*)
                FROM Sales.SalesOrderHeader AS o
                LEFT JOIN HumanResources.Employee AS e
                    ON e.EmployeeID = o.SalesPersonID
                WHERE o.SalesPersonID = @salespersonid
                AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
                AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
            );
        END;
        GO

    Notice the evil words? They’re required. Try to remove them, you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions.

    Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan:

        SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID;

    We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on.

    If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years.

    And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one roughly twice as expensive as the outer query. And because it’s being called in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here.

    Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query.
    It could’ve run something like this:

        SELECT e.LoginID, y.year,
            (SELECT COUNT(*)
             FROM Sales.SalesOrderHeader AS o
             LEFT JOIN HumanResources.Employee AS e
                 ON e.EmployeeID = o.SalesPersonID
             WHERE o.SalesPersonID = p.SalesPersonID
             AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')
             AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')
            ) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID;

    Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly.

    To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok.

        CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int)
        RETURNS table
        AS
        RETURN (
            SELECT COUNT(*) as NumSales
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
        );
        GO

    But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator.

        SELECT e.LoginID, y.year, n.NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID
        OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n;

    And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions).

    So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it.

    @rob_farley

    Read the article

  • Ok it has been pointed out to me

    - by Ratman21
    That it seems my blog is more of a "poor me" or "pity me" or "I deserve a job" blog. Hmmm, I won't say I have not whined here, as I have used this blog to vent my frustration over the whole out-of-work thing (lack of money, self-worth, family issues and the never-ending bills coming my way), but it was also me trying to reach out to others in the same boat, as well as advertising: hey, I am out here, employers!

    It was also said that I don't have anything listed here about me, like a cover letter or resume. Well, there is, but it was so many months and posts ago. Also, what I had posted is not current. So here is my most current cover letter and resume.

    Scott L Newman
    45219 Dutton Way
    Callahan, Fl. 32011

    To Whom It May Concern:

    I am really interested in the IT vacancy that you have listed for your company. Maybe I don't have all the qualifications you want (hold on, don't hit delete yet) yet! But maybe I do, as I have over 20+ years experience in "IT" RIGHT NOW. Read the rest of my cover letter and my resume. You will see what my "IT" skills are, and it will show that I can do this work! I can bring to your company, along with my can-do attitude, a broad range of skills, including:

    - Certified CompTIA A+, Security+ and Network+ Technician
    - 2.5 years (NOC) network experience on a large Cisco-based WAN – UK to Austria
    - 20 years experience in MIS/DP – yes, I can do IBM mainframes and Tandem NonStops too
    - 18 years experience as technical help desk support – panicking users, no problem
    - 18 years experience with PC/server-based systems, intranet and internet systems
    - 10+ years experience with Microsoft Office, Windows XP and data network fundamentals (YES, I do Windows)
    - Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    - Very experienced in working with customers on problems – again, panicking users, no problem
    - Working experience with remote access (VPN/SecurID) – I didn't just study them, I worked on/with them
    - Skilled in gathering information for and creating documentation for operation procedures (I don't just wait for them to give it to me, I go out and get it. Waiting for info on working applications is, well, dumb)
    - Multiple software languages (hey, I have done some programming)

    And much more experience in "IT" (mortgage, stocks and financial information systems experience, and I have worked "IT" in a hospital). I can multitask, and have the ability to adapt to change and learn quickly. (I was once put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes, but I did it.)

    I would welcome the opportunity to further discuss this position with you. If you have questions or would like to schedule an interview, please contact me by phone at 904-879-4880, on my cell at 352-356-0945, by e-mail at [email protected], or leave a message on my web site (http://beingscottnewman.webs.com/). I have enclosed/attached my resume for your review and I look forward to hearing from you.

    Thank you for taking a moment to consider my cover letter and resume. I appreciate how busy you are.

    Sincerely,
    Scott L. Newman

    Scott L. Newman
    45219 Dutton Way, Callahan, FL 32011 | H (904) 879-4880 | C (352) 356-0945
    [email protected] | Web: http://beingscottnewman.webs.com/

    OBJECTIVE
    To obtain a network operations or helpdesk position.

    PROFILE
    Information technology professional with 20+ years of experience. Volunteer website creator and back-up sound technician at True Faith Christian Fellowship. CompTIA A+, Network+ and Security+ certified.

    TECHNICAL AND PROFESSIONAL SKILLS
    - Technical Support
    - Frame Relay
    - Microsoft Office Suite
    - Inventory Management
    - ISDN
    - Windows NT/98/XP
    - Client/Vendor Relations
    - CICS
    - Cisco Routers/Switches
    - Networking/Administration
    - RPG
    - Helpdesk
    - Website Design/Dev./Management
    - Assembler
    - Visio
    - Programming
    - COBOL IV

    EDUCATION
    - New Horizons Computer Learning Center, Jacksonville, Florida – CompTIA A+, Security+ and Network+ certified. Currently working on CCNA certification.
    - Mott Community College, Flint, Michigan – Associate's degree, Data Processing and General Education.
    - Currently studying Japanese.

    PROFESSIONAL EXPERIENCE

    True Faith Christian Fellowship Church – Callahan, FL, October 2009 – Present
    Web site tech
    - Web site creator/tech, back-up song leader and back-up sound technician. Note: the church web site is http://ambassadorsforjesuschrist.webs.com/

    U.S. Census (temp employee), Feb. 23 to March 8, 2010
    - Enumerator for Nassau County

    Thomas Creek Baptist Church – Callahan, FL, June 2008 – September 2009
    Church sound and video technician
    - Sound and video technician

    Fidelity National Information Services – Jacksonville, FL – February 01, 2005 to October 28, 2008
    Client Server Dev/Analyst I
    - Monitored multiple debit card sites, check authorization customers and the card auth system (AuthNet) for problems with the sites, connections, servers (on our LAN) and/or applications
    - Night (NOC) network operator for a large Wide Area Network (WAN)
    - Monitored multiple check authorization customers for problems with circuits, routers and applications
    - Resolved circuit and/or router issues, or assisted the circuit carrier in resolving them
    - Resolved application problems or assisted application support in resolution
    - Liaison between customer and application support
    - Maintained and updated the NetOps operation procedures guide
    - Kept the listing of equipment on the raised floor updated
    - Involved in the training of all night check and card server operators
    - FNIS acquired Certegy in 2005; was one of 3 kept on

    Certegy – St. Pete, FL – August 31, 2003 to February 1, 2005
    Senior NetOps Operator (FNIS acquired Certegy in 2005; all of the above jobs/skills were the same as listed under FNIS)
    - Converting documentation to Adobe format
    - Sole trainer of day/night shift System Management Center (SMC) operators
    - Equifax spun off the card/check dept. as Certegy; Certegy terminated its contract with EDS. One of six in the whole IT dept. that was kept on

    EDS (Certegy account) – St. Pete, FL – July 1, 1999 to August 31, 2003
    Senior NetOps Operator
    - Equifax outsourced the NetOps dept. to EDS in 1999
    - Same job skills as listed above for FNIS

    Equifax – St. Pete & Tampa, FL – January 1, 1991 to July 1, 1999
    NetOps/Tandem Operator
    - All of the above for FNIS, except for circuit and router issues
    - Operated, monitored and troubleshot the Tandem mainframe and servers on the LAN
    - Supported the operation of the print, tape and microfiche rooms
    - Equifax acquired TelaCredit in 1991

    TelaCredit – Tampa, FL – June 28, 1989 to January 1, 1991
    Tandem Operator
    - Operated and monitored Tandem NonStop systems for card and check auths
    - Operated multiple high-speed laser printers and microfiche printers
    - Mounted, filed and maintained 18 reel-to-reel mainframe tape drives, cartridge tape drives and the tape library

    Read the article

  • Feedback on iterating over type-safe enums

    - by Sumant
    In response to the earlier SO question "Enumerate over an enum in C++", I came up with the following reusable solution that uses the type-safe enum idiom. I'm just curious to see the community feedback on my solution.

    This solution makes use of a static array, which is populated with type-safe enum objects before first use. Iteration over the enums is then simply reduced to iteration over the array. I'm aware of the fact that this solution won't work if the enumerators are not strictly increasing.

        template<typename def, typename inner = typename def::type>
        class safe_enum : public def
        {
            typedef typename def::type type;
            inner val;

            static safe_enum array[def::end - def::begin];
            static bool init;

            static void initialize()
            {
                if(!init) // use double-checked locking in case of multi-threading.
                {
                    unsigned int size = def::end - def::begin;
                    for(unsigned int i = 0, j = def::begin; i < size; ++i, ++j)
                        array[i] = static_cast<typename def::type>(j);
                    init = true;
                }
            }

        public:
            safe_enum(type v = def::begin) : val(v) {}
            inner underlying() const { return val; }

            static safe_enum * begin() { initialize(); return array; }
            static safe_enum * end()   { initialize(); return array + (def::end - def::begin); }

            bool operator == (const safe_enum & s) const { return this->val == s.val; }
            bool operator != (const safe_enum & s) const { return this->val != s.val; }
            bool operator <  (const safe_enum & s) const { return this->val <  s.val; }
            bool operator <= (const safe_enum & s) const { return this->val <= s.val; }
            bool operator >  (const safe_enum & s) const { return this->val >  s.val; }
            bool operator >= (const safe_enum & s) const { return this->val >= s.val; }
        };

        template <typename def, typename inner>
        safe_enum<def, inner> safe_enum<def, inner>::array[def::end - def::begin];

        template <typename def, typename inner>
        bool safe_enum<def, inner>::init = false;

        struct color_def
        {
            enum type { begin, red = begin, green, blue, end };
        };
        typedef safe_enum<color_def> color;

        template <class Enum>
        void f(Enum e)
        {
            std::cout << static_cast<unsigned>(e.underlying()) << std::endl;
        }

        int main()
        {
            std::for_each(color::begin(), color::end(), &f<color>);
            color c = color::red;
        }
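
    To make that last caveat concrete, here is what goes wrong with non-contiguous enumerators (a sketch with a hypothetical sparse_def):

        // Hypothetical: a gap in the enumerators breaks the array-fill loop.
        struct sparse_def
        {
            enum type { begin, red = begin, green = 5, blue, end };
        };
        // safe_enum<sparse_def>::begin()/end() would hand out end - begin = 7
        // consecutive values (0..6), but only 0, 5 and 6 name real enumerators,
        // so iteration would visit "colors" 1..4 that don't exist.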

    Read the article

  • Using LINQ Distinct: With an Example on ASP.NET MVC SelectListItem

    - by Joe Mayo
    One of the things that might be surprising about the LINQ Distinct standard query operator is that it doesn’t automatically work properly on custom classes. There are reasons for this, which I’ll explain shortly. The example I’ll use in this post focuses on pulling a unique list of names to load into a drop-down list. I’ll explain the sample application, show you a typical first shot at Distinct, explain why it won’t work as you expect, and then demonstrate a solution to make Distinct work with any custom class. The technologies I’m using are LINQ to Twitter, LINQ to Objects, Telerik Extensions for ASP.NET MVC, ASP.NET MVC 2, and Visual Studio 2010.

    The function of the example program is to show a list of people that I follow. In Twitter API vernacular, these people are called “Friends”, though I’ve never met most of them in real life. This is part of the ubiquitous language of social networking, and Twitter in particular, so you’ll see my objects named accordingly.

    Where Distinct comes into play is because I want to have a drop-down list with the names of the friends appearing in the list. Some friends are quite verbose, which means I can’t just extract names from each tweet and populate the drop-down; otherwise, I would end up with many duplicate names. Therefore, Distinct is the appropriate operator to eliminate the extra entries from my friends who tend to be enthusiastic tweeters.

    The sample doesn’t do anything with the drop-down list, and I leave its practical purpose up to imagination; perhaps a filter for the list if I only want to see a certain person’s tweets, or maybe a quick list that I plan to combine with a TextBox and Button to reply to a friend.

    When the program runs, you’ll need to authenticate with Twitter, because I’m using OAuth (DotNetOpenAuth) for authentication, and then you’ll see the drop-down list of names above the grid with the most recent tweets from friends. Here’s what the application looks like when it runs:

    As you can see, there is a drop-down list above the grid. The drop-down list is where most of the focus of this article will be. There is some description of the code before we talk about the Distinct operator, but we’ll get there soon. This is an ASP.NET MVC 2 application, written with VS 2010.
    Here’s the View that produces this screen:

        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<TwitterFriendsViewModel>" %>
        <%@ Import Namespace="DistinctSelectList.Models" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Home Page
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <fieldset>
                <legend>Twitter Friends</legend>
                <div>
                    <%= Html.DropDownListFor(
                            twendVM => twendVM.FriendNames,
                            Model.FriendNames,
                            "<All Friends>") %>
                </div>
                <div>
                    <% Html.Telerik().Grid<TweetViewModel>(Model.Tweets)
                           .Name("TwitterFriendsGrid")
                           .Columns(cols =>
                            {
                                cols.Template(col =>
                                    { %>
                                        <img src="<%= col.ImageUrl %>"
                                             alt="<%= col.ScreenName %>" />
                                <% });
                                cols.Bound(col => col.ScreenName);
                                cols.Bound(col => col.Tweet);
                            })
                           .Render(); %>
                </div>
            </fieldset>
        </asp:Content>

    As shown above, the Grid is from Telerik’s Extensions for ASP.NET MVC. The first column is a template that renders the user’s avatar from a URL provided by the Twitter query. Both the Grid and DropDownListFor display properties that are collections from a TwitterFriendsViewModel class, shown below:

        using System.Collections.Generic;
        using System.Web.Mvc;

        namespace DistinctSelectList.Models
        {
            /// <summary>
            /// For finding friend info on screen
            /// </summary>
            public class TwitterFriendsViewModel
            {
                /// <summary>
                /// Display names of friends in drop-down list
                /// </summary>
                public List<SelectListItem> FriendNames { get; set; }

                /// <summary>
                /// Display tweets in grid
                /// </summary>
                public List<TweetViewModel> Tweets { get; set; }
            }
        }

    I created the TwitterFriendsViewModel. The two Lists are what the View consumes to populate the DropDownListFor and Grid. Notice that FriendNames is a List of SelectListItem, which is an MVC class. Another custom class I created is the TweetViewModel (the type of the Tweets List), shown below:

        namespace DistinctSelectList.Models
        {
            /// <summary>
            /// Info on friend tweets
            /// </summary>
            public class TweetViewModel
            {
                /// <summary>
                /// User's avatar
                /// </summary>
                public string ImageUrl { get; set; }

                /// <summary>
                /// User's Twitter name
                /// </summary>
                public string ScreenName { get; set; }

                /// <summary>
                /// Text containing user's tweet
                /// </summary>
                public string Tweet { get; set; }
            }
        }

    The initial Twitter query returns much more information than we need for our purposes, and this is a special class for displaying info in the View. Now you know about the View and how it’s constructed. Let’s look at the controller next.
    The controller for this demo performs authentication, data retrieval, data manipulation, and view selection. I’ll skip the description of the authentication because it’s a normal part of using OAuth with LINQ to Twitter. Instead, we’ll drill down and focus on the Distinct operator. However, I’ll show you the entire controller, below, so that you can see how it all fits together:

        using System.Linq;
        using System.Web.Mvc;
        using DistinctSelectList.Models;
        using LinqToTwitter;

        namespace DistinctSelectList.Controllers
        {
            [HandleError]
            public class HomeController : Controller
            {
                private MvcOAuthAuthorization auth;
                private TwitterContext twitterCtx;

                /// <summary>
                /// Display a list of friends' current tweets
                /// </summary>
                public ActionResult Index()
                {
                    auth = new MvcOAuthAuthorization(
                        InMemoryTokenManager.Instance,
                        InMemoryTokenManager.AccessToken);

                    string accessToken = auth.CompleteAuthorize();

                    if (accessToken != null)
                    {
                        InMemoryTokenManager.AccessToken = accessToken;
                    }

                    if (auth.CachedCredentialsAvailable)
                    {
                        auth.SignOn();
                    }
                    else
                    {
                        return auth.BeginAuthorize();
                    }

                    twitterCtx = new TwitterContext(auth);

                    var friendTweets =
                        (from tweet in twitterCtx.Status
                         where tweet.Type == StatusType.Friends
                         select new TweetViewModel
                         {
                             ImageUrl = tweet.User.ProfileImageUrl,
                             ScreenName = tweet.User.Identifier.ScreenName,
                             Tweet = tweet.Text
                         })
                        .ToList();

                    var friendNames =
                        (from tweet in friendTweets
                         select new SelectListItem
                         {
                             Text = tweet.ScreenName,
                             Value = tweet.ScreenName
                         })
                        .Distinct()
                        .ToList();

                    var twendsVM = new TwitterFriendsViewModel
                    {
                        Tweets = friendTweets,
                        FriendNames = friendNames
                    };

                    return View(twendsVM);
                }

                public ActionResult About()
                {
                    return View();
                }
            }
        }

    The important parts of the listing above are the LINQ to Twitter queries for friendTweets and friendNames. Both of these results are used in the subsequent population of the twendsVM instance that is passed to the view. Let’s dissect these two statements for clarification and focus on what is happening with Distinct.

    The query for friendTweets gets a list of the 20 most recent tweets (as specified by the Twitter API for friend queries) and performs a projection into the custom TweetViewModel class, repeated below for your convenience:

        var friendTweets =
            (from tweet in twitterCtx.Status
             where tweet.Type == StatusType.Friends
             select new TweetViewModel
             {
                 ImageUrl = tweet.User.ProfileImageUrl,
                 ScreenName = tweet.User.Identifier.ScreenName,
                 Tweet = tweet.Text
             })
            .ToList();

    The LINQ to Twitter query above simplifies what we need to work with in the View and reduces the amount of information we have to look at in subsequent queries. Given the friendTweets above, the next query performs another projection into an MVC SelectListItem, which is required for binding to the DropDownList. This brings us to the focus of this blog post, writing a correct query that uses the Distinct operator. The query below uses LINQ to Objects, querying the friendTweets collection to get friendNames:

        var friendNames =
            (from tweet in friendTweets
             select new SelectListItem
             {
                 Text = tweet.ScreenName,
                 Value = tweet.ScreenName
             })
            .Distinct()
            .ToList();

    The above implementation of Distinct seems normal, but it is deceptively incorrect. After running the query above, by executing the application, you’ll notice that the drop-down list contains many duplicates. This will send you back to the code scratching your head, but there’s a reason why this happens.

    To understand the problem, we must examine how Distinct works in LINQ to Objects. Distinct has two overloads: one without parameters, as shown above, and another that takes a parameter of type IEqualityComparer<T>. In the case above, no parameters, Distinct will call EqualityComparer<T>.Default behind the scenes to make comparisons as it iterates through the list.
    You don’t have problems with the built-in types, such as string, int, DateTime, etc., because they all implement IEquatable<T>. However, many .NET Framework classes, such as SelectListItem, don’t implement IEquatable<T>. So, what happens is that EqualityComparer<T>.Default results in a call to Object.Equals, which performs reference equality on reference type objects. You don’t have this problem with value types, because the default implementation of Object.Equals is bitwise equality. However, most of your projections that use Distinct are on classes, just like the SelectListItem used in this demo application.

    So, the reason why Distinct didn’t produce the results we wanted was because we used a type that doesn’t define its own equality, and Distinct used the default reference equality. This resulted in all objects being included in the results, because they are all separate instances in memory with unique references.

    As you might have guessed, the solution to the problem is to use the second overload of Distinct that accepts an IEqualityComparer<T> instance. If you were projecting into your own custom type, you could make that type implement IEqualityComparer<T>, but SelectListItem belongs to the .NET Framework Class Library. Therefore, the solution is to create a custom type to implement IEqualityComparer<T>, as in the SelectListItemComparer class, shown below:

        using System.Collections.Generic;
        using System.Web.Mvc;

        namespace DistinctSelectList.Models
        {
            public class SelectListItemComparer : EqualityComparer<SelectListItem>
            {
                public override bool Equals(SelectListItem x, SelectListItem y)
                {
                    return x.Value.Equals(y.Value);
                }

                public override int GetHashCode(SelectListItem obj)
                {
                    return obj.Value.GetHashCode();
                }
            }
        }

    The SelectListItemComparer class above doesn’t implement IEqualityComparer<SelectListItem> directly, but rather derives from EqualityComparer<SelectListItem>. Microsoft recommends this approach for consistency with the behavior of generic collection classes. However, if your custom type already derives from a base class, go ahead and implement IEqualityComparer<T>, which will still work. EqualityComparer<T> is an abstract class that implements IEqualityComparer<T> with abstract Equals and GetHashCode methods.

    For the purposes of this application, the SelectListItem.Value property is sufficient to determine whether two items are equal. Since SelectListItem.Value is of type string, the code delegates equality to the string class. The code also delegates the GetHashCode operation to the string class. You might have other criteria in your own object, and would need to define what it means for your object to be equal.

    Now that we have an IEqualityComparer<SelectListItem>, let’s fix the problem. The code below modifies the query where we want distinct values:

        var friendNames =
            (from tweet in friendTweets
             select new SelectListItem
             {
                 Text = tweet.ScreenName,
                 Value = tweet.ScreenName
             })
            .Distinct(new SelectListItemComparer())
            .ToList();

    Notice how the code above passes a new instance of SelectListItemComparer as the parameter to the Distinct operator. Now, when you run the application, the drop-down list will behave as you expect, showing only a unique set of names. In addition to Distinct, other LINQ Standard Query Operators have overloads that accept an IEqualityComparer<T>. You can use the same techniques as shown here, with SelectListItemComparer, with those other operators as well.
Now you know how to get Distinct to work properly, and you also have a way to fix problems with other operators that require equality comparisons. @JoeMayo

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature for dimensional objects to address this, called Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record.

In this article, a simple example will show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need, then take some time to examine those objects. After that, we prepare the source data and deploy the target tables and the dimension/cube loading maps. Finally, we run the loading maps and check the data in the target dimension/cube tables. OK, let's start…

1. Import MDL file and examine sample project

First, download the zip file from here, which includes an MDL file and three source data files. Then open OWB Design Center and import orphan_management.mdl by using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in the BI_DEMO project:

- Mapping LOAD_CHANNELS_OM: the mapping for dimension loading.
- Mapping LOAD_SALES_OM: the mapping for cube loading.
- Dimension CHANNELS_OM: the dimension that contains channels data.
- Cube SALES_OM: the cube that contains sales data.
- Table CHANNELS_OM: the star implementation table of dimension CHANNELS_OM.
- Table SALES_OM: the star implementation table of cube SALES_OM.
- Table SRC_CHANNELS: the source table of channels data that will be loaded into dimension CHANNELS_OM.
- Tables SRC_ORDERS and SRC_ORDER_ITEMS: the source tables of sales data that will be loaded into cube SALES_OM.
- Sequence CLASS_OM_DIM_SEQ: the sequence used for loading dimension CHANNELS_OM.

Dimension CHANNELS_OM

This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots.

The orphan management policy options that you can set for loading are:

- Reject Orphan: the record is not inserted.
- Default Parent: you can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates it. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates that record.
- No Maintenance: this is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.

While removing data from a dimension, you can select one of the following orphan management policies:

- Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records.
- No Maintenance: this is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.

(More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1)

Cube SALES_OM

This cube references dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST.
The orphan management policy settings are shown in the following screenshot. The orphan management policy options that you can set for loading are:

- No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows.
- Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row.
- Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record.

(More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG)

Mapping LOAD_CHANNELS_OM

This mapping loads source data from table SRC_CHANNELS into dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used to generate a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the "invalid" class name, so that we can see what happens when channel records with an "invalid" parent are loaded into the dimension.

Some properties of the dimension operator in this mapping are important to orphan management (see the screenshot below):

- Create Default Level Records: if YES, default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is "Default Parent" or "Default Dimension Record". It is set to NO by default, so the user may need to set it to YES manually.
- LOAD policy for INVALID keys / LOAD policy for NULL keys: these two properties have the same meaning as in the dimension editor. The values are set to match the dimension when the user drops the dimension into the mapping; the user does not need to modify them.
- Record Error Rows: if YES, error rows will be inserted into the error table when loading the dimension.
- REMOVE Orphan Policy: this property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled.

Mapping LOAD_SALES_OM

This mapping loads source data from tables SRC_ORDERS and SRC_ORDER_ITEMS into cube SALES_OM. The mapping seems a little complicated, but the operators in the red rectangle are used to filter out and generate the records with "invalid" or "null" dimension keys.

Some properties of the cube operator in a mapping are important to orphan management (see the screenshot below):

- Enable Source Aggregation: should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an "unstable rowset" execution error in the database, since the dimension references are used as update match attributes for updating the fact table.
- LOAD policy for INVALID keys / LOAD policy for NULL keys: these two properties have the same meaning as in the cube editor. The values are set to match the cube editor when the user drops the cube into the mapping; the user does not need to modify them.
- Record Error Rows: if YES, error rows will be inserted into the error table when loading the cube.

2. Deploy objects and mappings

We can now deploy the objects. First, make sure the location SALES_WH_LOCAL has been correctly configured.
Then open Control Center Manager by using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL and click the SALES_WH node on the project tree to see the objects. Deploy them in the following order:

1. Sequence CLASS_OM_DIM_SEQ
2. Tables CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS
3. Dimension CHANNELS_OM
4. Cube SALES_OM
5. Mappings LOAD_CHANNELS_OM, LOAD_SALES_OM

Note that we deployed the source tables as well. Normally, we import source tables from the database instead of deploying them to the target schema. However, in this example, we designed the source tables in OWB and deployed them to the database for the purpose of this demonstration.

3. Prepare and examine source data

Before running the mappings, we need to populate and examine the source data. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as the target user, then check the data in these three tables.

Table SRC_CHANNELS:

SQL> select rownum, id, class, name from src_channels;

Records 1-5 are correct; they should be loaded into the dimension without error. Records 6, 7 and 8 have null parents; they should be loaded into the dimension with a default parent value and inserted into the error table at the same time. Records 9, 10 and 11 have "invalid" parents; they should be rejected by the dimension and inserted into the error table.

Tables SRC_ORDERS and SRC_ORDER_ITEMS:

SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost
     from src_orders a, src_order_items b
     where a.id = b.order_id;

Record 178 has a null dimension reference; it should be loaded into the cube with a default dimension reference and inserted into the error table at the same time. Record 179 has an "invalid" dimension reference; it should be rejected by the cube and inserted into the error table. The other records should be aggregated and loaded into the cube correctly.

4. Run the mappings and examine the target data

In the Control Center Manager, expand BI_DEMO->SALES_WH_LOCAL->SALES_WH->Mappings, right-click the LOAD_CHANNELS_OM node, and click Start. Run mapping LOAD_SALES_OM the same way. When they have successfully finished, we can check the data in the target tables.

Table CHANNELS_OM:

SQL> select rownum, total_id, total_name, total_source_id, class_id, class_name,
            class_source_id, channel_id, channel_name, channel_source_id
     from channels_om
     order by abs(dimension_key);

Records 1, 2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally had null parents; their parent name (CLASS_NAME) is set to DEF_CLASS_NAME. The records whose CHANNEL_NAME is Special_4, Special_5 or Special_6 are not loaded into this table because of the invalid parent.

Error table CHANNELS_OM_ERR:

SQL> select rownum, class_source_id, channel_id, channel_name, channel_source_id,
            err$$$_error_reason
     from channels_om_err
     order by channel_name;

We can see that all the records with a null or invalid parent are inserted into this error table. The error reason is "Default parent used for record" for the first three records, and "No parent found for record" for the last three.

Table SALES_OM:

SQL> select a.*, b.channel_name
     from sales_om a, channels_om b
     where a.channels = b.channel_id;

We can see that the order record with a null channel_name has been loaded into the target table with a default channel_name. The one with an "invalid" channel_name is not loaded.
Error table SALES_OM_ERR:

SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name,
            a.err$$$_error_reason
     from sales_om_err a, channels_om b
     where a.channels = b.channel_id(+);

We can see that the order records with a null or invalid channel_name are inserted into the error table. If the dimension reference column is null, the error reason is "Default dimension record used for fact"; if it is invalid, the error reason is "Dimension record not found for fact".

Summary

This article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • How should I design a correct OO design in case of a Business-logic wide operation

    - by Mithir
EDIT: Maybe I should ask the question in a different way. In light of ammoQ's comment, I realize that I've done something like what was suggested, which is kind of a fix, and it is fine by me. But I still want to learn for the future, so that if I develop new code for operations similar to this, I can design it correctly from the start.

So, suppose I have an operation with the following characteristics:

- The relevant input is composed of data connected to several different business objects.
- All the input data is validated and cross-checked.
- Attempts are made to insert the data into the DB.
- All of this is a single operation from the business side's perspective, meaning all of the cross-checking and validations are just side effects.

I can't think of any other way to structure it but some sort of Operator/Coordinator kind of object which activates the entire procedure, but then I fall into functional-decomposition kind of code. So is there a better way of doing this? (A sketch of the shape I mean appears below.)

Original question: In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code would only work in the scenario for which it was first designed; now there are more scenarios (like requests from an API or from other devices), so I had to redesign.

I found myself moving all the DB code into objects which act as business-to-DB objects, and putting all the business logic in an Operator kind of class, which I've implemented like this: First, I created an object which holds all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which takes the InformationObject as a parameter and acts on it. The OperatorObject activates different objects, validates or checks for existence or any scenario in which the business logic would be compromised, and then performs the operation according to the information in the InformationObject.

So my question is: is this kind of implementation correct? PS: this Operator only works on a single business-wise operation.
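For illustration, here is a minimal C# sketch of the InformationObject/OperatorObject shape described above. This is my own hedged rendering; every name in it (CreateOrderInfo, CreateOrderOperator, the repository interfaces) is hypothetical, since the question contains no code.

using System;
using System.Collections.Generic;

// Hypothetical repositories standing in for the business-to-DB objects.
public interface ICustomerRepository { bool Exists(int customerId); }
public interface IOrderRepository { void Insert(CreateOrderInfo info); }

// The "InformationObject": everything the operation needs, in one place.
public class CreateOrderInfo
{
    public int CustomerId { get; set; }
    public List<int> ProductIds { get; set; } = new List<int>();
    public DateTime RequestedDelivery { get; set; }
}

// The "OperatorObject": coordinates validation and the DB activity.
public class CreateOrderOperator
{
    private readonly ICustomerRepository customers;
    private readonly IOrderRepository orders;

    public CreateOrderOperator(ICustomerRepository customers, IOrderRepository orders)
    {
        this.customers = customers;
        this.orders = orders;
    }

    public void Execute(CreateOrderInfo info)
    {
        // Cross-checks and validations: side effects of the single
        // business operation, as described in the question.
        if (!customers.Exists(info.CustomerId))
            throw new InvalidOperationException("Unknown customer.");
        if (info.ProductIds.Count == 0)
            throw new InvalidOperationException("Nothing to order.");

        orders.Insert(info); // the actual DB insert
    }
}

Whether this counts as functional decomposition or as a legitimate application-service/command pattern is exactly the judgment call the question is asking about.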

    Read the article

  • Did you forget me?

    - by Ratman21
I know it has been a long time since I last blogged. Still at it, looking for work in the "IT" field. Had another phone interview (only found out during the interview that it was for a one-year contract job, but I would still take it) for a Help Desk job. Didn't get it; they thought I was not an application support person and more of a hardware support person. Gee... I started out in "IT" as a programmer. Then a programmer/computer operator, then a Tandem/LAN operator and finally a Network operator. I had to deal with so many different operating systems, software and applications. And they thought I was too hardware.

Well, I am working a temp day job with the U.S. Census. It gets me out of the house and out in the country. I find getting paid to check for living quarters not a bad job, except for the many houses I find that are up for sale, and it looks like it was not the owners' (former owners, it seems) idea, with the kids' toys still in the yards. Not good for someone with an overactive imagination, or for my truck. So far I have backed into a ditch (and had to be pulled out), into a power pole (no damage to the pole and very little to the truck) and a mailbox (no damage to the truck, but the mailbox was leaning a little) in the last two weeks.

Oh, and I have started reading/using "The Love Dare" book from the movie "Fireproof". I restarted (yes, I have had to go back to day one from day five) the dare this Sunday. Dare/day one is "Love Is Patient", and the first dare is (reading from the book): "The first part of this dare is fairly simple. Although love is communicated in a number of ways, our words often reflect the condition of our heart. For the next day, resolve to demonstrate patience and to say nothing negative to your spouse at all. If the temptation arises, choose not to say anything. It's better to hold your tongue than to say something you'll regret." This was almost too easy, as I can hold back from saying anything bad to anyone, but this can also be a problem in life (you hold back for so long and... Boom). Check back for dare/day two, "Love Is Kind".

    Read the article

  • How does the ? make a quantifier lazy in regex

    - by Uriel Katz
I've been looking into regex lately and figured out that the ? operator makes the *, +, or ? quantifier lazy. My question is: how does it do that? Is it that *?, for example, is a special operator, or does the ? have an effect on the *? In other words, does regex recognize *? as one operator in itself, or does regex recognize *? as the two separate operators * and ??

If *? is recognized as two separate operators, how does the ? affect the * to make it lazy? If ? means that the * is optional, shouldn't this mean that the * doesn't have to exist at all? If so, then with a pattern like .*?, wouldn't regex just match separate letters and the whole string instead of the shorter string? Please explain; I'm desperate to understand.
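For reference, a short C# demo of the behavior being asked about (this is my own illustration, not part of the question). .NET's engine parses *? as a single lazy quantifier token, so the ? here never means "optional":

using System;
using System.Text.RegularExpressions;

class LazyQuantifierDemo
{
    static void Main()
    {
        string input = "<a><b><c>";

        // Greedy: .* consumes as much as possible, reaching the last '>'.
        Console.WriteLine(Regex.Match(input, "<.*>").Value);  // <a><b><c>

        // Lazy: .*? consumes as little as possible, stopping at the first '>'.
        Console.WriteLine(Regex.Match(input, "<.*?>").Value); // <a>
    }
}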

    Read the article

  • Any language where every class instance is a class too?

    - by Dokkat
Taking inspiration from Javascript prototypes, I had the idea of a language where every instance can be used as a class. Before I potentially reinvent the wheel, I would like to ask if there is a language already using this concept:

//To declare a Class, extend the base class (in this case, Type)
Type(Weapon, {price: 0});

//Same syntax to inherit; simply extend the parent:
Weapon(Sword, {price: 3});
Weapon(Axe, {price: 4});
Sword(Katana, {price: 7});
Sword(Dagger, {price: 3});

//And the same to create an instance:
Katana(myKatana, {nickname: "Leon"});

myKatana.price;    // 7
myKatana.nickname; // Leon

// An operator to return children of a class;
Sword_;  // [Katana, Dagger]

// An operator to return array of descendants;
Sword__; // [Katana, Dagger, myKatana]

// An operator to return array of parents;
Sword^;  // Weapon

// Arrays can be used as elements
Sword__.price += 1; // increases price of Sword's descendants by 1
mySword.price;      // 8

// And to access a specific element (using its name instead of index)
var name = "mySword";
Katana_[name];          // [mySword]
Katana_[name].nickname; // Leon

Has this kind of approach been already studied/implemented?
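The mechanism being described, where any object can serve as the parent of another, can at least be sketched in a mainstream language with an explicit prototype chain. Below is a hedged C# approximation of my own; Proto, Get/Set, Children and Descendants are invented names for illustration, not a claim about any existing language:

using System;
using System.Collections.Generic;

// A sketch of "every instance is a class": each Proto both holds
// values and can act as the parent of further Proto objects.
class Proto
{
    private readonly Dictionary<string, object> slots = new Dictionary<string, object>();
    private readonly List<Proto> children = new List<Proto>();
    public Proto Parent { get; private set; }
    public string Name { get; private set; }

    public Proto(string name, Proto parent = null)
    {
        Name = name;
        Parent = parent;
        parent?.children.Add(this);
    }

    // Lookup walks the parent chain, like a prototype chain.
    public object Get(string key)
    {
        if (slots.TryGetValue(key, out var value)) return value;
        return Parent?.Get(key);
    }

    public void Set(string key, object value) => slots[key] = value;

    // Rough equivalents of the Sword_ / Sword__ operators above.
    public IEnumerable<Proto> Children => children;
    public IEnumerable<Proto> Descendants
    {
        get
        {
            foreach (var c in children)
            {
                yield return c;
                foreach (var d in c.Descendants) yield return d;
            }
        }
    }
}

class Demo
{
    static void Main()
    {
        var weapon = new Proto("Weapon");         weapon.Set("price", 0);
        var sword  = new Proto("Sword", weapon);  sword.Set("price", 3);
        var katana = new Proto("Katana", sword);  katana.Set("price", 7);
        var mine   = new Proto("myKatana", katana); mine.Set("nickname", "Leon");

        Console.WriteLine(mine.Get("price"));    // 7, inherited from Katana
        Console.WriteLine(mine.Get("nickname")); // Leon
    }
}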

    Read the article

  • Ways to organize interface and implementation in C++

    - by Felix Dombek
I've seen that there are several different paradigms in C++ concerning what goes into the header file and what goes into the cpp file. AFAIK, most people, especially those from a C background, do this:

foo.h:

class foo
{
private:
    int mem;
    int bar();
public:
    foo();
    foo(const foo&);
    foo& operator=(foo);
    ~foo();
};

foo.cpp:

#include "foo.h"

int foo::bar() { return mem; }
foo::foo() { mem = 42; }
foo::foo(const foo& f) { mem = f.mem; }
foo& foo::operator=(foo f) { mem = f.mem; return *this; }
foo::~foo() {}

int main(int argc, char *argv[])
{
    foo f;
}

However, my lecturers usually teach C++ to beginners like this:

foo.h:

class foo
{
private:
    int mem;
    int bar() { return mem; }
public:
    foo() { mem = 42; }
    foo(const foo& f) { mem = f.mem; }
    foo& operator=(foo f) { mem = f.mem; return *this; }
    ~foo() {}
};

foo.cpp:

#include "foo.h"

int main(int argc, char* argv[])
{
    foo f;
}
// other global helper functions, DLL exports, and whatnot

Originally coming from Java, I have also always stuck to this second way, for several reasons: I only have to change something in one place if the interface or method names change; I like the different indentation of things in classes when I look at their implementation; and I find names more readable as foo compared to foo::foo. I want to collect pros and cons for either way. Maybe there are even still other ways? One disadvantage of my way is, of course, the need for occasional forward declarations.

    Read the article

  • Working with Reporting Services Filters–Part 5: OR Logic

    - by smisner
When you combine multiple filters, Reporting Services uses AND logic. Once upon a time, there was actually a drop-down list for selecting AND or OR between filters, which was very confusing to people because it was often grayed out. Now that selection is gone, but no matter; it wouldn't help us solve the problem that I want to describe today.

As with many problems, Reporting Services gives us more than one way to apply OR logic in a filter. If I want a filter to include this value OR that value for the same field, one approach is to set up the filter to use the IN operator, as I explained in Part 1 of this series. But what if I want to base the filter on two different fields? I need a different solution.

Using the AdventureWorksDW2008R2 database, I have a report that lists product sales. Let's say that I want to filter this report to show only products that are Bikes (a category) OR products for which sales were greater than $1,000 in a year. If I set up the filter like this:

Expression      Data Type   Operator   Value
[Category]      Text        =          Bikes
[SalesAmount]               >          1000

then AND logic is used, which means that both conditions must be true. That's not the result I want. Instead, I need to set up the filter like this:

Expression:  =Fields!EnglishProductCategoryName.Value = "Bikes" OR Fields!SalesAmount.Value > 1000
Data Type:   Boolean
Operator:    =
Value:       =True

The OR logic needs to be part of the expression so that it can return a Boolean value that we test against the Value. Notice that I have used =True rather than True for the value. The filtered report appears below. Any non-bike product appears only if its total sales exceed $1,000, whereas Bikes appear regardless of sales. (You can't see it in this screenshot, but Mountain-400-W Silver, 38 has sales of $923 in 2007 but gets included because it is in the Bikes category.)

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >