Search Results

Search found 3027 results on 122 pages for 'explicit cast'.

  • SQL query problem

    - by Brisonela
    Hi, I'm new to StackOverflow and new to SQL Server, and I'd like help with a troublesome query. This is my database structure (it's half Spanish; I hope that doesn't matter). My problem is that I don't know how to make a query that states which team is local and which is the visitor (using table TMatch, and knowing that the stadium belongs to only one team). This is as far as I can get:

        Select P.NroMatch,
               (select * from fnTeam(P.TeamA)) as TeamA,
               (select * from fnTeam(P.TeamB)) as TeamB,
               (select * from fnEstadium(P.CodEstadium)) as Estadium,
               cast(P.GolesTeamA as varchar) + '-' + cast(P.GolesTeamB as varchar) as Score,
               P.Fecha
        from TMatch P

    using these functions:

        If object_id('fnTeam', 'fn') is not null drop function fnTeam
        go
        create function fnTeam(@CodTeam varchar(5)) returns table
        return (Select Name from TTeam where CodTeam = @CodTeam)
        go
        select * from fnTeam('Eq001')
        go
        If object_id('fnEstadium', 'fn') is not null drop function fnEstadium
        go
        create function fnEstadium(@CodEstadium varchar(5)) returns table
        return (Select Name from TEstadium where CodEstadium = @CodEstadium)
        go

    I hope I've explained myself well, and I thank you for your help in advance.
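    A hedged sketch of one way to express "local vs. visitor", assuming (the schema image is not shown) that TEstadium carries a CodTeam column naming the stadium's owner; the owner is the local team and the other participant is the visitor:

        -- Sketch only: TEstadium.CodTeam is an assumption from the question's description.
        SELECT M.NroMatch,
               TL.Name AS LocalTeam,       -- owns the stadium the match is played in
               TV.Name AS VisitorTeam,
               E.Name  AS Estadium,
               CAST(M.GolesTeamA AS varchar(3)) + '-' + CAST(M.GolesTeamB AS varchar(3)) AS Score,
               M.Fecha
        FROM TMatch M
        JOIN TEstadium E ON E.CodEstadium = M.CodEstadium
        JOIN TTeam TL    ON TL.CodTeam = E.CodTeam
        JOIN TTeam TV    ON TV.CodTeam = CASE WHEN M.TeamA = E.CodTeam
                                              THEN M.TeamB ELSE M.TeamA END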

  • Getting records from a table based on a filter field and BETWEEN, but also having the OR logic for multiple rows

    - by Pentium10
    I have this table, where I store multiple ids and an age range (def1, def2):

        CREATE TABLE "template_requirements" (
            "_id" INTEGER NOT NULL,
            "templateid" INTEGER,
            "def1" VARCHAR(255),
            "def2" VARCHAR(255),
            PRIMARY KEY("_id"))

    with values such as:

        templateid | def1 | def2
        100        | 7    | 25
        200        | 40   | 90
        300        | 7    | 25
        300        | 40   | 60

    As you see, for templateid 300 we have OR logic: age between 7 and 25, or age between 40 and 60. I want to get all the template ids that are not for a certain age, like 25. What's the problem? If I run a query like this one:

        SELECT group_concat(templateid) FROM template_requirements
        WHERE '25' NOT BETWEEN CAST(def1 AS INTEGER) AND CAST(def2 AS INTEGER)

    it returns 200, 300, which is wrong: 300 matched on the 40-to-60 row, but it shouldn't be included in the result because there is a condition with the same templateid, 7 to 25, that fails the NOT BETWEEN test. What would the correct query be in SQLite? I would like to keep the group_concat.
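    One way to express "template ids where no row covers the age" is to exclude every templateid that has at least one matching range; a minimal SQLite sketch that keeps group_concat:

        SELECT group_concat(DISTINCT templateid)
        FROM template_requirements
        WHERE templateid NOT IN (
            SELECT templateid
            FROM template_requirements
            WHERE 25 BETWEEN CAST(def1 AS INTEGER) AND CAST(def2 AS INTEGER)
        );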

  • Should I convert data arrays to objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not so much disturbing my workflow. At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error" => "not found", "data" => $dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) if necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors. My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips?
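    Worth noting when weighing the overhead: (object) and (array) casts are shallow, and for plain data they round-trip losslessly (nested arrays stay arrays). A minimal sketch:

        <?php
        // (object) on an associative array yields a stdClass with matching
        // properties; (array) converts it back to the same array.
        $arr = array("error" => "not found", "data" => null);
        $obj = (object) $arr;
        var_dump($obj->error);      // string(9) "not found"
        $back = (array) $obj;
        var_dump($back === $arr);   // bool(true)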

  • I need a view in SQL that returns the latest invoice date for each company

    - by dave haughton
    Hi, I have a company table that is dbo.companies and has companyId as a column. I also have an invoice table that is dbo.invoices, with an invoiceCompanyId column (which is the same as the companyId on the other table), and it also has a column called invoiceDate. What I am trying to achieve is a view of each companyId with the corresponding latest invoice date, for all the companies I have. I have done the following, but I don't know how to filter for the latest invoice; it returns all invoices from all companies, and I need the latest invoice for each company.

        SELECT TOP (100) PERCENT
               'A' + SUBSTRING('000000', 1, 6 - LEN(CAST(dbo.companies.companyId AS varchar(10))))
                   + CAST(dbo.companies.companyId AS varchar(10)) AS Client_ID,
               dbo.invoices.invoiceDate AS S_Inv_Date
        FROM dbo.invoices
        INNER JOIN dbo.companies ON dbo.invoices.invoiceCompanyId = dbo.companies.companyId
        ORDER BY Client_ID

    Can you help please? Ta.
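    A grouped aggregate is the usual shape for "latest per company". A minimal sketch of the view (view name assumed; note that ORDER BY generally doesn't belong inside a view):

        CREATE VIEW dbo.LatestInvoiceDates AS
        SELECT 'A' + RIGHT('000000' + CAST(c.companyId AS varchar(10)), 6) AS Client_ID,
               MAX(i.invoiceDate) AS S_Inv_Date
        FROM dbo.companies c
        INNER JOIN dbo.invoices i ON i.invoiceCompanyId = c.companyId
        GROUP BY c.companyId;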

  • Finding the specific type held in an ArrayList<Object> (i.e. Object = String, etc.)

    - by Christopher Griffith
    Say I have an ArrayList that I have cast to an ArrayList of Objects. I know that all the objects that were in the ArrayList I cast were of the same type, but not what that type was. Now, if the ArrayList is not empty, I can take one of the objects in it and use the instanceof operator to learn what the actual type is. But what about the case where the ArrayList is empty? How do I determine what type Object actually is then? Is it possible?
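    A minimal Java sketch of the runtime-type probe described above; in the empty case there is nothing to inspect, and type erasure means the list itself carries no record of its element type at runtime:

        import java.util.ArrayList;
        import java.util.List;

        public class ElementTypeProbe {
            static String elementTypeOf(List<Object> items) {
                if (items.isEmpty()) {
                    return "unknown";  // erasure: the runtime keeps no element type
                }
                return items.get(0).getClass().getName();
            }

            public static void main(String[] args) {
                List<Object> xs = new ArrayList<>();
                xs.add("hello");
                System.out.println(elementTypeOf(xs));                  // java.lang.String
                System.out.println(elementTypeOf(new ArrayList<>()));   // unknown
            }
        }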

  • SQL Select Upcoming Birthdays

    - by Crob
    I'm trying to write a stored procedure to select employees who have upcoming birthdays.

        SELECT * FROM Employees
        WHERE Birthday > @Today AND Birthday < @Today + @NumDays

    This will not work because the birth year is part of Birthday, so if my birthday were '09-18-1983', it would not fall between '09-18-2008' and '09-25-2008'. Is there a way to ignore the year portion of date fields and just compare the month/day? This will be run every Monday morning to alert managers of upcoming birthdays, so it may span a new year. Here is the working solution that I ended up creating; thanks, Kogus.

        SELECT * FROM Employees
        WHERE Cast(DATEDIFF(dd, birthdt, getDate()) / 365.25 as int)
            - Cast(DATEDIFF(dd, birthdt, futureDate) / 365.25 as int) <> 0
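    The trick is easier to read once you see that each CAST computes an age in whole years: the two ages differ exactly when a birthday anniversary falls inside the window. A parameterized sketch of the same idea (variable names assumed):

        DECLARE @Today datetime = GETDATE();
        DECLARE @NumDays int = 7;
        SELECT *
        FROM Employees
        WHERE CAST(DATEDIFF(dd, birthdt, @Today) / 365.25 AS int)
           <> CAST(DATEDIFF(dd, birthdt, DATEADD(dd, @NumDays, @Today)) / 365.25 AS int);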

  • Conversion of Single to two UInt16 values in .net

    - by Gio
    In the good old days of C I could cast a float to an int (assuming a 32-bit system), do some bit manipulation (bitwise and, right shift, etc.), and get the upper and lower 16-bit hex representations of the floating point number, which I could then store in two short values. I'm not seeing an easy way of doing this in C#. System.Convert.ToUInt16 just does a float-to-int conversion (even after I shift right), which leaves a value of 0 if the float is less than 0, which is not the desired effect.

        // useless: leaves me with a value of 0
        UInt16 s1 = (UInt16)((System.Convert.ToUInt32(d2) & 0xffff0000) >> 16); // capture the high word
        UInt16 s2 = (UInt16)(System.Convert.ToUInt32(d2) & 0xffff);             // capture the low word

    A basic cast (UInt32) doesn't work either.
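    A minimal C# sketch that reinterprets the float's IEEE-754 bit pattern (rather than converting its numeric value) and then splits it into two UInt16 halves:

        using System;

        class SplitSingle
        {
            static void Main()
            {
                float f = -123.456f;
                // Raw bit pattern of the float, not its rounded value.
                uint bits = BitConverter.ToUInt32(BitConverter.GetBytes(f), 0);
                ushort high = (ushort)(bits >> 16);     // upper 16 bits
                ushort low  = (ushort)(bits & 0xFFFF);  // lower 16 bits
                Console.WriteLine("{0:X4} {1:X4}", high, low);
            }
        }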

  • Where is the 'indeterminate type'?

    - by Daniel
    I'm defining the following type extension:

        type System.Reflection.MemberInfo with
            member x.GetAttribute<'T when 'T :> Attribute>(required, inherit') =
                match required, Attribute.GetCustomAttribute(x, typeof<'T>, inherit') with
                | true, null -> invalidOp (sprintf "Missing required attribute: %s" typeof<'T>.FullName)
                | _, attr -> attr :> 'T

    The last match expression (attr :> 'T) gives the error: "The static coercion from Attribute to 'T involves an indeterminate type based on information prior to this program point. Static coercions are not allowed on some types. Further type annotations are needed." I've tried annotating the function return type, but got the same result. I would hate to change this to a dynamic cast. Is there a way to make the static cast work?

  • [Python] Best strategy for dealing with incomplete lines of data from a file.

    - by adoran
    I use the following block of code to read lines out of a file f into a nested list:

        for data in f:
            clean_data = data.rstrip()
            data = clean_data.split('\t')
            t += [data[0]]
            strmat += [data[1:]]

    Sometimes, however, the data is incomplete and a row may look like this:

        ['955.159', '62.8168', '', '', '', '', '', '', '', '', '', '', '', '', '', '29', '30', '0', '0']

    This puts a spanner in the works, because I would like Python to implicitly cast my list as floats, but the empty fields '' cause it to be cast as an array of strings (dtype: s12). I could add a second 'if' statement and convert all empty fields into NULL (since 0 is wrong in this instance), but I am unsure whether this is best. Is this the best strategy for dealing with incomplete data? Should I edit the stream or do it post hoc?
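    A minimal sketch of the parse-time approach, mapping empty fields to NaN so each row stays numeric (the helper name is hypothetical):

        # NaN marks a missing field without faking a 0.
        def parse_line(line):
            fields = line.rstrip("\n").split("\t")
            return [float(x) if x else float("nan") for x in fields]

        row = parse_line("955.159\t62.8168\t\t29\t30\t0\t0\n")
        print(row)  # [955.159, 62.8168, nan, 29.0, 30.0, 0.0, 0.0]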

  • Number 0 is not saved as a prefix to a CHAR data type column in SQL Server

    - by gmhk
    I am trying to insert a value like '019393' into a table with a CHAR(10) column, but it is inserting only '19393' into the database. I am implementing this feature in a stored procedure, doing some manipulation like incrementing that number by 15 and saving it back with '0' as the prefix. I am using a SQL Server database. Note: I tried CASTing that value as VARCHAR before saving to the database, but even that did not solve it. Code:

        SELECT @fromBSB = fromBSB, @toBSB = toBSB, @type = Type
        FROM [dbo].[tbl_REF_SpecialBSBRanges]
        WHERE CAST(@inputFromBSB AS INT) BETWEEN fromBSB AND toBSB

        SET @RETURNVALUE = @fromBSB

        IF (@fromBSB = @inputFromBSB)
        BEGIN
            PRINT 'Starting Number is Equal';
            DELETE FROM tbl_REF_SpecialBSBRanges
            WHERE Type = @type AND fromBSB = @fromBSB AND toBSB = @toBSB

            INSERT INTO [tbl_REF_SpecialBSBRanges] ([Type], [fromBSB], [toBSB])
            VALUES (@type, CAST('0' + @fromBSB + 1 AS CHAR), @toBSB)

            INSERT INTO [tbl_REF_SpecialBSBRanges] ([Type], [fromBSB], [toBSB])
            VALUES (@inputBSBName, @inputFromBSB, @inputToBSB)
        END
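    Since an integer can't carry leading zeros, the zero has to be re-applied as characters after the arithmetic. A minimal sketch, with the six-digit width assumed from the '019393' example:

        DECLARE @fromBSB varchar(10) = '019393';
        DECLARE @next int = CAST(@fromBSB AS int) + 15;          -- arithmetic drops the zero
        SELECT RIGHT('000000' + CAST(@next AS varchar(6)), 6);   -- '019408'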

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn't entirely a pleasant experience to publish an article only to have it described on Twitter as 'Horrible', and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows.

    What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn't always work out well. Explicit indexes aren't allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren't any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I'll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.

    A simplified example

    We'll start out by illustrating a simple example that shows some of these characteristics. We'll create two tables filled with random numbers and then see how many matches we get between the two tables. We'll forget indexes altogether for this example, and use heaps. We'll try the same join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing, which isn't actually happening at the millisecond level (I used DATETIME). However, you'll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without the OPTION (RECOMPILE) hint, it is doing well.

    What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin'. Well, up to 120,000 rows, at least. It is performing better than a temporary table, and in a good linear fashion.

    What about mixed table joins, where you are joining a temporary table to a table variable? You'd probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables, so how could it do any better? Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows since we know the bigger picture now.)

    Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We're dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly, to insert the results into a Table Variable? We change nothing else, and just measure the execution time of the statement as before. Suddenly, the table variable isn't looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven't used OPTION (RECOMPILE) then you're toast. Otherwise, there isn't much in it between the table variable and the temporary table: the table variable is faster up to 8,000 rows, and then there is not much in it up to 100,000 rows. Past the 8,000 row mark, we've lost the advantage of the table variable's speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can't use constraints, you're going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join

    These tables of integers provide a rather unreal example, so let's try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book 'Dracula' by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap, and a Table Variable with a primary key. We then take all the distinct words used in the book 'Dracula' (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in 'Dracula', i.e. all the words in Dracula that aren't matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey:

        If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 ms.
        If one table contains a Primary Key and the other is a heap, and both are Table Variables, it took 46 ms.
        If both Table Variables use a unique constraint, the query takes 36 ms.
        If neither table contains a Primary Key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!!
        If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 ms.
        If one table contains a Primary Key, and both are Temporary Tables, it took 56 ms.
        If both tables are temporary tables and both have primary keys, it took 46 ms.

    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. (The text execution plans are shown in the original article.)

    So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis: if you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well, slow queries, unless you give the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin's Skew

    In describing the table size, I used the term 'relatively small'. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in it, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems for the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin's example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions

    Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required; they require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as 'heaps', unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that 'just runs' without considering namby-pamby stuff such as indexes, then stick to Temporary tables. If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
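    For readers who want to reproduce the central observation, a minimal T-SQL sketch of the pattern the tests describe, two table-variable heaps joined with and without the hint (row counts arbitrary):

        DECLARE @a TABLE (n int);
        DECLARE @b TABLE (n int);

        INSERT INTO @a
        SELECT TOP (30000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM sys.all_objects s1 CROSS JOIN sys.all_objects s2;

        INSERT INTO @b SELECT n FROM @a WHERE n % 3 = 0;

        -- Without the hint, the optimizer guesses at the row counts;
        -- with it, the plan is compiled against the actual sizes.
        SELECT COUNT(*)
        FROM @a a JOIN @b b ON a.n = b.n
        OPTION (RECOMPILE);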

  • array and array_view from amp.h

    - by Daniel Moth
    This is a very long post, but it also covers what are probably the classes (well, array_view at least) that you will use the most with C++ AMP, so I hope you enjoy it!

    Overview

    The concurrency::array and concurrency::array_view template classes represent multi-dimensional data of type T, of N dimensions, specified at compile time (and you can later access the number of dimensions via the rank property). If N is not specified, it is assumed to be 1 (i.e. the single-dimensional case). They are rectangular (not jagged). The difference between them is that array is a container of data, whereas array_view is a wrapper over a container of data. So in that respect, array behaves like an STL container, whereas the closest thing an array_view behaves like is an STL iterator (albeit with random access and allowing you to view more than one element at a time!).

    The data in the array (whether provided at creation time or added later) resides on an accelerator (which is specified at creation time either explicitly by the developer, or set to the default accelerator at creation time by the runtime) and is laid out contiguously in memory. The data provided to the array_view is not stored by/in the array_view, because the array_view is simply a view over the real source (which can reside on the CPU or another accelerator). The underlying data is copied on demand to wherever the array_view is accessed. Elements which differ by one in the least significant dimension of the array_view are adjacent in memory. array objects must be captured by reference into the lambda you pass to the parallel_for_each call, whereas array_view objects must be captured by value.

    Creating array and array_view objects and relevant properties

    You can create array_view objects from other array_view objects of the same rank and element type (a shallow copy, also possible via the assignment operator) so they point to the same underlying data, and you can also create array_view objects over array objects of the same rank and element type, e.g.

        array_view<int,3> a(b); // b can be another array or array_view of ints with rank=3

    Note: unlike the constructors above, which can be called anywhere, the ones in the rest of this section can only be called from CPU code. You can create array objects from other array objects of the same rank and element type (copy and move constructors) and from other array_view objects, e.g.

        array<float,2> a(b); // b can be another array or array_view of floats with rank=2

    To create an array from scratch, you need to at least specify an extent object, e.g. array<int,3> a(myExtent);. Note that instead of an explicit extent object, there are convenience overloads when N<=3, so you can pass 1, 2, or 3 integers (depending on the array's rank) and have the extent created for you under the covers. At any point, you can access the array's extent through the extent property. The exact same thing applies to array_view (extent as constructor parameters, incl. convenience overloads, and the property).

    While passing only an extent object to create an array is enough (it means that the array will be written to later), it is not enough for the array_view case, which must always wrap some other container (on which it relies for storage space and actual content). So in addition to the extent object (that describes the shape you'd like to be viewing/accessing that data through), to create an array_view from another container (e.g. std::vector) you must pass in the container itself (which must expose a .data() and a .size() method, e.g. like std::array does), e.g.

        array_view<int,2> aaa(myExtent, myContainerOfInts);

    Similarly, you can create an array_view from a raw pointer to data plus an extent object. Back to the array case: to optionally initialize the array with data, you can pass an iterator pointing to the start (and optionally one pointing to the end) of the source container, e.g.

        array<double,1> a(5, myVector.begin(), myVector.end());

    We saw that arrays are bound to an accelerator at creation time, so in case you don't want the C++ AMP runtime to assign the array to the default accelerator, all array constructors have overloads that let you pass an accelerator_view object, which you can later access via the accelerator_view property. Note that at the point of initializing an array with data, a synchronous copy of the data takes place to the accelerator, and then to copy any data back we'll see that an explicit copy call is required. This does not happen with the array_view, where copying is on demand...

    refresh and synchronize on array_view

    Note that in the previous section on constructors, unlike the array case, there was no overload that accepted an accelerator_view for array_view. That is because the array_view is simply a wrapper, so the allocation of the data has already taken place before you created the array_view. When you capture an array_view variable in your call to parallel_for_each, the copy of data between the non-CPU accelerator and the CPU takes place on demand (i.e. it is implicit, versus the explicit copy that has to happen with the array). There are some subtleties to the on-demand copying, which we cover next.

    The assumption when using an array_view is that you will continue to access the data through the array_view, and not through the original underlying source, e.g. the pointer to the data that you passed to the array_view's constructor. So if you modify the data through the array_view on the GPU, the original pointer on the CPU will not "know" that, unless one of two things happens: you access the data through the array_view on the CPU side, i.e. using indexing (covered below), or you explicitly call the array_view's synchronize method on the CPU (this also gets called in the array_view's destructor for you). Conversely, if you make a change to the underlying data through the original source (e.g. the pointer), the array_view will not "know" about those changes unless you call its refresh method. Finally, note that if you create an array_view of const T, then the data is copied to the accelerator on demand, but it does not get copied back, e.g.

        array_view<const double, 5> myArrView(…); // myArrView will not get copied back from GPU

    There is also a similar mechanism to achieve the reverse, i.e. not to copy the data of an array_view to the GPU.

    copy_to, data, and global copy/copy_async functions

    Both array and array_view expose two copy_to overloads that allow copying them to another array, or to another array_view, and these operations can also be achieved with assignment (via the = operator overloads). Also, both array and array_view expose a data method, to get a raw pointer to the underlying data of the array or array_view, e.g. float* f = myArr.data();. Note that for array_view, this only works when the rank is equal to 1, due to the data only being contiguous in one dimension, as covered in the overview section. Finally, there are a bunch of global concurrency::copy functions returning void (and corresponding concurrency::copy_async functions returning a future) that allow copying between arrays, array_views, iterators, etc. Just browse intellisense or amp.h directly for the full set. Note that for array, all copying described throughout this post is deep copying, as per other STL container expectations. You can never have two arrays point to the same data.

    indexing into array and array_view plus projection

    Reading or writing data elements of an array is only legal when the code executes on the same accelerator the array was bound to. In the array_view case, you can read/write on any accelerator, not just the one where the original data resides, and the data gets copied for you on demand. In both cases, the way you read and write individual elements is via indexing, as described next.

    To access (or set the value of) an element, you can index into it by passing an index object to the subscript operator. Furthermore, if the rank is 3 or less, you can use the function call ( ) operator to pass integer values instead of having to use an index object, e.g.

        array<float,2> arr(someExtent, someIterator); // or array_view<float,2> arr(someExtent, someContainer);
        index<2> idx(5,4);
        float f1 = arr[idx];
        float f2 = arr(5,4); // f2 == f1
        // and the reverse for assigning, e.g.
        arr(idx[0], 7) = 6.9;

    Note that for both array and array_view, regardless of rank, you can also pass a single integer to the subscript operator, which results in a projection of the data: you get back an array_view of rank N-1 (or, if the rank was 1, you get back just the element at that location).

    Not Covered

    In this already very long post, I am not going to cover three very cool methods (and related overloads) that both array and array_view expose: view_as, section, reinterpret_as. We'll revisit those at some point in the future, probably on the team blog. Comments about this post by Daniel Moth are welcome at the original blog.
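    A minimal end-to-end sketch of the array_view pattern described above (it needs a DirectX 11 capable accelerator, or WARP, to actually run):

        #include <amp.h>
        #include <vector>
        #include <iostream>
        using namespace concurrency;

        int main()
        {
            std::vector<int> v(10, 1);
            array_view<int, 1> av(10, v);   // wraps v; data is copied on demand

            parallel_for_each(av.extent, [=](index<1> idx) restrict(amp)
            {
                av[idx] *= 2;               // runs on the default accelerator
            });

            av.synchronize();               // flush results back into v
            std::cout << v[0] << std::endl; // prints 2
        }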

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:

        Streams with 'open' inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
        Streams with 'partially bound' inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …))
        Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC)

    The stream may be embedded (where Server.Create is used) or remote (where Server.Connect is used). When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:

        IStreamable<>, which logically represents a temporal stream.
        IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.

    The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany

    StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL's "CROSS APPLY" operator):

        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn't supported, its type in StreamInsight 1.0 – 2.0 wasn't carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

    Using Unit as the type for the parameter accurately reflects StreamInsight's capabilities:

        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

    For queries that succeed, that is, queries that do not reference the stream selector parameter, there is no difference between the code written for the two overloads:

        from x in xs from y in ys select f(x, y)

    Top-K

    The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

        (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k)

    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. The following compiles but fails at runtime:

        (from win in xs.SnapshotWindow() from x in win orderby x.A select win).Take(k)

    The Take operator in 2.1 is applied to the window rather than the stream.

    Before:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    After:

        from win in xs.SnapshotWindow()
        from b in
            (from x in win
             orderby x.A
             select x.B).Take(k)
        select b

    Multicast

    We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways.

    Implicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = from y1 in ys
                 from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                 select y1 + y2;

    Explicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = ys.Multicast(ys1 =>
            from y1 in ys1
            from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
            select y1 + y2);

    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies

    Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.

    Before:

        xs.SnapshotWindow(
            WindowInputPolicy.ClipToWindow,
            SnapshotWindowInputPolicy.Clip)

    After:

        xs.SnapshotWindow()

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.PointAlignToWindowEnd)

    After:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1))

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.ClipToWindowEnd)

    After: not supported.

    LeftAntiJoin

    Representation of LASJ as a correlated sub-query in the LINQ surface is problematic, as the StreamInsight engine does not support correlated sub-queries (see the discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported 'IsEmpty()' operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead.

    Before:

        from x in xs
        where
            (from y in ys
             where x.A > y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

    Before:

        from x in xs
        where
            (from y in ys
             where x.A == y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion

    The ApplyWithUnion methods have been deprecated, since their signatures are redundant given the standard SelectMany overloads.

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count())

    After:

        xs.GroupBy(x => x.A).SelectMany(
            gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload })

    After:

        from x in xs
        group x by x.A into gs
        from win in gs.SnapshotWindow()
        select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax

    The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

        from win in xs.SnapshotWindow() from y in MyUdo(win) select y

    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

        from win in xs.SnapshotWindow() select MyUdo(win)

    The "many-or-one" confusion is exemplified by the following example, which compiles but fails at runtime:

        from win in xs.SnapshotWindow() select MyUdo(win) + win.Count()

    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

    Original syntax:

        from win in xs.SnapshotWindow() select win.UdoProxy(1)

    New alternate syntax:

        from win in xs.SnapshotWindow() from y in win.UserDefinedOperator(() => new Udo(1)) select y

    -or-

        from win in xs.SnapshotWindow() from y in win.UdoMacro(1) select y

    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing "proxy method" approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method.

    UDSO syntax

    UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

    Before:

        xs.Scan(new Udso())

    After:

        xs.Scan(() => new Udso())

    Name changes

    ShiftEventTime => AlterEventStartTime: the alter event lifetime overload taking a new start time value has been renamed.
    CountByStartTimeWindow => CountWindow

  • Is it possible to specify a generic constraint for a type parameter to be convertible FROM another type?

    - by fostandy
    Suppose I write a library with the following:

        public class Bar { /* ... */ }

        public class SomeWeirdClass<T> where T : ???
        {
            public T BarMaker(Bar b)
            {
                // ... play with b
                T t = (T) b;
                return t;
            }
        }

    Later, I expect users to use my library by defining their own types which are convertible to Bar and using the SomeWeirdClass 'factory':

        public class Foo
        {
            public static explicit operator Foo(Bar f) { return new Foo(); }
        }

        public class Demo
        {
            public static void demo()
            {
                Bar b = new Bar();
                SomeWeirdClass<Foo> weird = new SomeWeirdClass<Foo>();
                Foo f = weird.BarMaker(b);
            }
        }

    This will compile if I set where T : Foo, but the problem is that I don't know about Foo at the library's compile time, and I actually want something more like "where T : some class that can be instantiated, given a Bar". Is this possible? From my limited knowledge it does not seem to be, but the ingenuity of the .NET framework and its users always surprises me... This may or may not be related to the idea of static interface methods; at least, I can see the value in being able to specify the presence of factory methods to create objects (similar to the way that you can already write where T : new()).

    edit: Solution - thanks to Nick and bzIm - for other readers I'll provide a completed solution as I understand it:

    edit2: This solution requires Foo to expose a public default constructor. For an even stupider better solution that does not require this, see the very bottom of this post.

        public class Bar {}

        public class SomeWeirdClass<T> where T : IConvertibleFromBar<T>, new()
        {
            public T BarMaker(Bar b)
            {
                T t = new T();
                t.Convert(b);
                return t;
            }
        }

        public interface IConvertibleFromBar<T>
        {
            T Convert(Bar b);
        }

        public class Foo : IConvertibleFromBar<Foo>
        {
            public static explicit operator Foo(Bar f) { return null; }
            public Foo Convert(Bar b) { return (Foo) b; }
        }

        public class Demo
        {
            public static void demo()
            {
                Bar b = new Bar();
                SomeWeirdClass<Foo> weird = new SomeWeirdClass<Foo>();
                Foo f = weird.BarMaker(b);
            }
        }

    edit2: Solution 2: Create a type convertor factory to use:

        #region library defined code
        public class Bar {}

        public class SomeWeirdClass<T, TFactory> where TFactory : IConvertorFactory<Bar, T>, new()
        {
            private static TFactory convertor = new TFactory();
            public T BarMaker(Bar b) { return convertor.Convert(b); }
        }

        public interface IConvertorFactory<TFrom, TTo>
        {
            TTo Convert(TFrom from);
        }
        #endregion

        #region user defined code
        public class BarToFooConvertor : IConvertorFactory<Bar, Foo>
        {
            public Foo Convert(Bar from) { return (Foo) from; }
        }

        public class Foo
        {
            public Foo(int a) {}
            public static explicit operator Foo(Bar f) { return null; }
            public Foo Convert(Bar b) { return (Foo) b; }
        }
        #endregion

        public class Demo
        {
            public static void demo()
            {
                Bar b = new Bar();
                SomeWeirdClass<Foo, BarToFooConvertor> weird = new SomeWeirdClass<Foo, BarToFooConvertor>();
                Foo f = weird.BarMaker(b);
            }
        }

  • Is this a reasonable way to handle getters/setters in a PHP class?

    - by Mark Biek
    I'm going to try something with the format of this question, and I'm very open to suggestions about a better way to handle it. I didn't want to just dump a bunch of code in the question, so I've posted the code for the class on refactormycode: base-class-for-easy-class-property-handling. My thought was that people can either post code snippets here or make changes on refactormycode and post links back to their refactorings. I'll make upvotes and accept an answer (assuming there's a clear "winner") based on that. At any rate, on to the class itself. I see a lot of debate about getter/setter class methods: is it better to just access simple property variables directly, or should every class have explicit get/set methods defined, blah blah blah. I like the idea of having explicit methods in case you have to add more logic later; then you don't have to modify any code that uses the class. However, I hate having a million functions that look like this:

        public function getFirstName()
        {
            return $this->firstName;
        }
        public function setFirstName($firstName)
        {
            $this->firstName = $firstName;
        }

    Now I'm sure I'm not the first person to do this (I'm hoping there's a better way of doing it that someone can suggest to me). Basically, the PropertyHandler class has a __call magic method. Any methods that come through __call that start with "get" or "set" are then routed to functions that set or retrieve values in an associative array. The key into the array is the name of the calling method after "get" or "set". So, if the method coming into __call is "getFirstName", the array key is "FirstName". I liked using __call because it will automatically take care of the case where the subclass already has a "getFirstName" method defined. My impression (and I may be wrong) is that the __get and __set magic methods don't do that. So here's an example of how it would work:

        class PropTest extends PropertyHandler
        {
            public function __construct()
            {
                parent::__construct();
            }
        }

        $props = new PropTest();
        $props->setFirstName("Mark");
        echo $props->getFirstName();

    Notice that PropTest doesn't actually have "setFirstName" or "getFirstName" methods, and neither does PropertyHandler. All that's happening is the manipulation of array values. The other case would be where your subclass is already extending something else. Since you can't have true multiple inheritance in PHP, you can make your subclass hold a PropertyHandler instance as a private variable. You have to add one more function, but then things behave in exactly the same way:

        class PropTest2
        {
            private $props;

            public function __construct()
            {
                $this->props = new PropertyHandler();
            }

            public function __call($method, $arguments)
            {
                return $this->props->__call($method, $arguments);
            }
        }

        $props2 = new PropTest2();
        $props2->setFirstName('Mark');
        echo $props2->getFirstName();

    Notice how the subclass has a __call method that just passes everything along to the PropertyHandler's __call method. Another good argument against handling getters and setters this way is that it makes it really hard to document; in fact, it's basically impossible to use any sort of documentation-generation tool, since the explicit methods to be documented don't exist. I've pretty much abandoned this approach for now. It was an interesting learning exercise, but I think it sacrifices too much clarity.
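    For reference, a minimal sketch of what such a PropertyHandler could look like; the names and details are illustrative, not the poster's actual refactormycode listing:

        <?php
        class PropertyHandler
        {
            private $props = array();

            public function __construct() {}

            public function __call($method, $arguments)
            {
                $action = substr($method, 0, 3);
                $key    = substr($method, 3);   // e.g. "FirstName"
                if ($action === 'get') {
                    return isset($this->props[$key]) ? $this->props[$key] : null;
                }
                if ($action === 'set') {
                    $this->props[$key] = $arguments[0];
                    return null;
                }
                throw new BadMethodCallException("Unknown method: $method");
            }
        }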

  • Parsing Extended Events xml_deadlock_report

    - by Michael Zilberstein
    Jonathan Kehayias and Paul Randal posted great articles more than a year ago on how to monitor historical deadlocks using the Extended Events system_health session. Both tried to fix, on the fly, the bug in the xml output that caused failures in xml validation. Today I've found out that their version isn't bulletproof either. So here is the fixed one:

        SELECT CAST(xest.target_data AS XML) xml_data, *
        INTO #ring_buffer_data
        FROM sys.dm_xe_session_targets xest
        INNER...(read more)

  • Java EE talks at JAX Conf

    - by arungupta
    JAX Conf is starting in San Jose today and there are several talks on Java EE there. Java EE Wednesday and Thursday:

        Java Persistence API 2.0 with EclipseLink
        RESTful Services with Java EE
        Case Study: Functional programming in Scala with CDI
        GlassFish 3.1: Deploying your Java EE 6 Applications
        The future of Java Enterprise Testing
        Forge new ground in Rapid Enterprise Development
        The Java EE 7 Platform: Developing for the Cloud (Keynote)
        Exploring Java EE 6 for the Enterprise Developer
        JBoss Day
        JSF Summit
        CDI Tutorial

    And many more ... Check out the complete schedule and see ya there!

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc. are code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. A lot has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice. My evaluation of the built-in services was made by comparing them against my own WCF services, created to do what I needed. The project I'm working on right now involved several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I started up Visual Studio and built a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this dll to the application team and they would simply call the methods exposed by it and voila! it would do some task or operation in SharePoint. SOLUTION My integration layer implemented an interface that defined some of the basic integration tasks I was to put together. My data layer was about the same; it implemented an interface with some of the tasks I was going to develop. This gave me the opportunity to develop different data layers, and ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful, because I wrote one data layer completely implementing the SharePoint built-in web services and another implementing my own WCF service. I should mention there is another layer underneath the data layer: in referencing SharePoint or WCF services in my Visual Studio project, I created a class for every web service. So, for example, for Lists.asmx I created a class called "DocumentRetrieval"; this class would do the grunt work of connecting to the correct URL, performing the basic operation of contacting the service, and so on. For Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would interact with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This, again, is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
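    As a rough illustration of the layering described above, the two interfaces might look something like this. The interface names are hypothetical (the post does not show them); the member signatures reuse types that appear in the code later in the post:

    // Hypothetical sketch of the integration and data layer contracts
    public interface IProjectIntegration
    {
        ServiceResult CreateProject(string name, string templateName, string projectId);
    }

    public interface ISharePointDataLayer
    {
        bool DocLibExists(string listName, ref ExceptionContract exContract);
        XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract);
    }

    With the data layer behind an interface, the web-service-backed implementation and the WCF-backed implementation described below can be swapped without touching the integration layer.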
    Let's take a look at the "Create Project" task or operation. The integration point is described as: "the dll is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project and this is the class that contacts it:

    class DocumentRetrieval
    {
        private ListsSoapClient _service;
        private bool _impersonation;

        public DocumentRetrieval(bool impersonation, string endpt)
        {
            _service = new ListsSoapClient();
            this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
            _impersonation = impersonation;
            if (_impersonation)
            {
                _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                    System.Security.Principal.TokenImpersonationLevel.Impersonation;
            }
        }

        private void SetEndPoint(string p)
        {
            _service.Endpoint.Address = new EndpointAddress(p);
        }

        /// <summary>
        /// Creates a document library with a specific name and template ID
        /// </summary>
        /// <param name="listName">New list name</param>
        /// <param name="templateID">Template ID</param>
        /// <returns></returns>
        public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
        {
            XmlDocument sample = new XmlDocument();
            XmlElement viewCol = sample.CreateElement("Empty");
            try
            {
                _service.Open();
                viewCol = _service.AddList(listName, "", templateID);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            finally
            {
                _service.Close();
            }
            return viewCol;
        }
    }

    There was a lot more in this class (that I am not including) because I was reusing the grunt work for other operations against Lists.asmx, for example updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project) I wanted to expose an IsProjectExisting method, so the integration or data layer could recognize whether a library already exists. Well, there is no service call or method available to do that check.
    So this is what I wrote:

    public bool DocLibExists(string listName, ref ExceptionContract exContract)
    {
        try
        {
            var allLists = _service.GetListCollection();
            return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
        }
        catch (Exception ex)
        {
            exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
        }
        return false;
    }

    This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise to see if the document library already existed. This took a little getting used to: instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through it and check whether the "Title" attribute existed with a value equal to the list name; if so it returns true, otherwise false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating that the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it: Document Library, Image Library, Custom List, Project Tasks, etc. all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project", or document library, should have. They specified they had three types of projects; each project would have unique views, about 10 views per project, and each project specified unique configurations (auditing, versioning, content types, etc.). So what started as a simple task of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too: using the web service calls, you could configure views, versioning, even content types, etc. The only catch is that you have to work quite extensively with CAML. I am not fond of CAML. I can work with it, I just don't like doing it. It is quite touchy, and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML syntax error, or even to a CAML error at all. I was not sure if it was a security, performance or code-based issue. It was also difficult at times because of the way SharePoint handles metadata: there are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation, and there is now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
    So my plan was to create a document library, configure it accordingly and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc. It is quite interesting because it does not generate an STP file; it actually generates an xml definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have two web parts on it. Well, it turns out that the list template definition generated by the Template Builder did not include the web parts. It backed up the settings, the views and the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem is that the service call accepts an int, but it only has access to the built-in library int definitions; any new ones added or created will not be available. So this made it impossible for me to approach the problem this way. I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on that template, all of which can be done by end-user administrators. These templates are quite unique because they are saved as an STP file and not as an xml definition. I went this route too and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None. After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now, I won't bother posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template:

    public ServiceResult CreateProject(string name, string templateName, string projectId)
    {
        string siteurl = SPContext.Current.Site.Url;
        Guid webguid = SPContext.Current.Web.ID;
        using (SPSite site = new SPSite(siteurl))
        {
            using (SPWeb rootweb = site.RootWeb)
            {
                SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
            } //SPWeb
        } //SPSite
        return _globalResult;
    }

    private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
    {
        var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
        if (temp != null)
        {
            try
            {
                Guid listGuid = targetsite.Lists.Add(name, "", temp);
                SPList newList = targetsite.Lists[listGuid];
                _globalResult = new ServiceResult(true, "Success", "Success");
            }
            catch (Exception ex)
            {
                _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
            }
        }
    }

    private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
    {
        using (SPSite sitecollection = new SPSite(siteurl))
        {
            using (SPWeb web = sitecollection.AllWebs[webguid])
            {
                action(web);
            }
        }
    }

    This is some of the code I implemented for the service; there was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action-based method to process the web. This allowed me to properly dispose of the SPWeb and SPSite objects and not rewrite that code over and over again. So I implemented a WCF service to create projects for me. This allowed me to do a lot more than just create a document library with a template; it gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "We reference all our projects by ID in our application; we want SharePoint to do the same." This is something I have been doing for a little while now, but I do hope that SharePoint 2010 has more of an answer to this and addresses it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view. I took the opportunity to configure them with the appropriate settings. There were two settings that needed to be set on these web parts; one of them was a Project ID configured in the web part properties.
    The following code enhances and replaces the "Act_CreateProject" method above:

    private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
    {
        var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
        if (temp != null)
        {
            SPLimitedWebPartManager wpmgr = null;
            try
            {
                Guid listGuid = targetsite.Lists.Add(name, "", temp);
                SPList newList = targetsite.Lists[listGuid];
                SPFolder rootFolder = newList.RootFolder;
                rootFolder.Properties.Add(KEY, projectId);
                rootFolder.Update();
                if (rootFolder.ParentWeb != targetsite)
                    rootFolder.ParentWeb.Dispose();
                if (!templateName.Contains("Natural"))
                {
                    SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                    SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                    wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                    ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                    alldocfile.Update();
                }
                if (newList.ParentWeb != targetsite)
                    newList.ParentWeb.Dispose();
                _globalResult = new ServiceResult(true, "Success", "Success");
            }
            catch (Exception ex)
            {
                _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
            }
            finally
            {
                if (wpmgr != null)
                {
                    wpmgr.Web.Dispose();
                    wpmgr.Dispose();
                }
            }
        }
    }

    private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
    {
        var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
        if (wp != null)
        {
            (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
            mgr.SaveChanges(wp);
        }
    }

    This shows how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, the SPList object does not have a property bag that I can add a key/value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by ID and will not care about names. My "DocLibExists" method now needs to change, because the web service is not set up to look at property bags; I had to write another method on the service to do the equivalent, but with IDs instead of names. The second thing you will notice about the code is the use of the web part manager. I have seen several examples online and have read a lot about memory leaks; the above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose of it like I did.
    CONCLUSION This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that web services will do the trick if you can suffer through CAML and if you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake. Hopefully it makes enough sense to someone.

    Read the article

  • SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer

    - by pinaldave
    Incredibly, SQL Server has so much information to share with us. Every single day, I am amazed by this SQL Server technology. Sometimes I find interesting information just by querying a few of the DMVs, and when I present this info in front of my clients during performance tuning consultancy, they are surprised by my findings. Today, I am going to share one of the hidden gems among the DMVs with you, one which I frequently use to understand what's going on under the hood of SQL Server. SQL Server keeps a record of most of the operations of the Query Optimizer. We can learn many interesting details about the optimizer which can be utilized to improve the performance of the server. SELECT * FROM sys.dm_exec_query_optimizer_info WHERE counter IN ('optimizations', 'elapsed time','final cost', 'insert stmt','delete stmt','update stmt', 'merge stmt','contains subquery','tables', 'hints','order hint','join hint', 'view reference','remote query','maximum DOP', 'maximum recursion level','indexed views loaded', 'indexed views matched','indexed views used', 'indexed views updated','dynamic cursor request', 'fast forward cursor request') All occurrence values are cumulative and are set to 0 at system restart. All values for value fields are set to NULL at system restart. I have removed a few of the internal counters from the script above, and kept only documented details. Let us check the result of the above query. As you can see, there is so much vital information revealed by the above query. I can easily say how many times the Optimizer was triggered and what the average time taken by it to optimize my queries was. Additionally, I can determine how many times update, insert or delete statements were optimized. Using this dynamic management view, I was able to quickly figure out that my client was overusing Query Hints. If you have been reading my blog, I am sure you are aware of my series related to SQL Server Views, SQL SERVER – The Limitations of the Views – Eleven and more…. With this, I can take a quick look and figure out how many times Views were used in various solutions within the query. Moreover, you can easily work out what fraction of all optimizations involved a particular feature. For example, the following query tells me, out of total optimizations, what fraction of the time a View was "referenced". As this count also includes system Views and DMVs, the number is a bit higher on my machine. SELECT (SELECT CAST (occurrence AS FLOAT) FROM sys.dm_exec_query_optimizer_info WHERE counter = 'view reference') / (SELECT CAST (occurrence AS FLOAT) FROM sys.dm_exec_query_optimizer_info WHERE counter = 'optimizations') AS ViewReferencedFraction Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
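    As another quick illustration of the same DMV, the 'elapsed time' counter reports the average optimization time in its value column, so the total time spent in optimization since restart can be estimated by multiplying the two columns. A small sketch (the estimated_total column is my own derived expression, not part of the DMV):

    SELECT counter,
           occurrence,
           value AS avg_value,
           occurrence * value AS estimated_total
    FROM sys.dm_exec_query_optimizer_info
    WHERE counter IN ('optimizations', 'elapsed time');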

    Read the article

  • SQL SERVER – A Puzzle – Fun with NULL – Fix Error 8117

    - by pinaldave
    During my 8 years of career, I have been involved in many interviews. Quite often, I act as the interviewer. If I am the interviewer, I ask many questions – from easy ones to difficult ones. When I am the interviewee, I frequently get an opportunity to ask the interviewer some questions back. Regardless of my capacity in attending the interview, I always make it a point to ask the interviewer at least one question. What is NULL? It's always fun to ask this question during interviews, because in every interview, I get a different answer. NULL is often confused with false, absence of value or infinite value. Honestly, NULL is a very interesting subject as it bases its behavior on server settings. There are a few properties of NULL that are universal, but the knowledge about these properties is not universally known. Let us run this simple puzzle. Run the following T-SQL script: SELECT SUM(data) FROM (SELECT NULL AS data) t It will return the following error: Msg 8117, Level 16, State 1, Line 1 Operand data type NULL is invalid for sum operator. Now the error makes it very clear that NULL is invalid for the SUM operator. Frequently enough, I have shown this simple query to many folks I came across. I asked them if they could modify the subquery and return the result as NULL. Here is what I expected: Even though this is a very simple looking query, so far I've got the correct answer from only 10% of the people I have asked. It was common for me to receive this kind of answer – convert the NULL to some data type. However, doing so usually returns the value as 0 or whatever integer they passed. SELECT SUM(data) FROM (SELECT ISNULL(NULL,0) AS data) t I usually see many people modifying the outer query to get the desired NULL result, but that is not allowed in this simple puzzle. This small puzzle made me wonder how many people have a clear understanding of NULL. Well, here is the answer to my simple puzzle. Just CAST NULL AS INT and it will return the final result as NULL: SELECT SUM(data) FROM (SELECT CAST(NULL AS INT) AS data) t Now that you know the answer, don't you think it was very simple indeed? This blog post is especially dedicated to my friend Madhivanan, who has written an excellent blog post about NULL. I am confident that after reading Madhivanan's blog post, you will have no confusion regarding NULL in the future. Read: NULL, NULL, NULL and nothing but NULL. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
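    As a quick follow-up illustration (my own extension of the puzzle, not part of the original post): once the NULL is properly typed, SUM treats it the way aggregates always treat NULL, that is, it is ignored whenever other values are present:

    SELECT SUM(data)
    FROM (SELECT CAST(NULL AS INT) AS data
          UNION ALL
          SELECT 5) t;
    -- Returns 5, along with the standard "Null value is eliminated by an aggregate" warning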

    Read the article

  • Crazy Linq: performing System.ComponentModel.DataAnnotations validation in a single statement

    - by Daniel Cazzulino
    public static IEnumerable<ValidationResult> Validate(object component)
    {
        return from descriptor in TypeDescriptor.GetProperties(component).Cast<PropertyDescriptor>()
               from validation in descriptor.Attributes.OfType<System.ComponentModel.DataAnnotations.ValidationAttribute>()
               where !validation.IsValid(descriptor.GetValue(component))
               select new ValidationResult(
                   validation.ErrorMessage ?? string.Format(CultureInfo.CurrentUICulture, "{0} validation failed.", validation.GetType().Name),
                   new[] { descriptor.Name });
    }
    ...Read full article
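    A hypothetical usage sketch: the Customer class, its attributes, and the ComponentValidation helper class hosting the method above are mine, not from the article:

    using System;
    using System.ComponentModel.DataAnnotations;

    public class Customer
    {
        [Required(ErrorMessage = "Name is required.")]
        public string Name { get; set; }

        [StringLength(5)]
        public string Code { get; set; }
    }

    // Assuming Validate lives on a static helper class named ComponentValidation:
    var customer = new Customer { Code = "TOOLONG" }; // Name missing, Code too long
    foreach (var result in ComponentValidation.Validate(customer))
        Console.WriteLine(result.ErrorMessage); // one line per failed property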

    Read the article

  • Oracle 'In Touch' PartnerCast - July 1, 2014

    - by Cinzia Mascanzoni
    27 May 2014: 'In Touch' webcast for Oracle EMEA partners. Oracle 'In Touch' PartnerCast (July 1, 2014): Be prepared for a year of growth. Register Now! Dear partner, We would like to invite you to join David Callaghan, Senior Vice President Oracle EMEA Alliances and Channels, and his studio guests for the next broadcast of the Oracle 'In Touch' PartnerCast on Tuesday 1st July 2014 from 10:30am UK / 11:30am CET. In this cast, David's studio guests and his regional reporters will be looking at your priorities as EMEA partners and how best to grow with Oracle. We also look forward to the broadcast covering the following topics: Highlights of FY14; Strategic themes for FY15; HCM, CRM and ERP; Oracle on Oracle. Exclusive for 'In Touch': David Callaghan questions Rich Geraffo, Senior Vice President, Global Alliances & Channels, on how the FY15 partner global kick-off relates to EMEA. Plus, David provides your chance to hear from some of the newly appointed Worldwide A&C leadership team as he discusses with Bruce Chumley, VP Oracle Channel Distribution Sales, and Troy Richardson, VP Oracle Strategic Alliances, their core focus and strategy for growth and what they intend to bring to the table in their new roles. Register Now! With lots of studio guests joining David, why not get in touch on Twitter using the hashtag #OracleInTouch or by emailing [email protected] to get your questions featured in the cast! To find out more information and to watch previous episodes on-demand, please visit our webpage here. Best regards, Oracle EMEA Alliances & Channels. Oracle 'In Touch' PartnerCast: be prepared for a year of growth. July 01, 2014, 10:30am UK / 11:30am CET. Duration: 45 mins. Host: David Callaghan, Senior VP Oracle EMEA Alliances & Channels. Studio guests: Alistair Hopkins, VP Sales & Strategy, Technology Solutions, Oracle EMEA Alliances & Channels; more to be announced shortly. Featured contributors: Rich Geraffo, Senior Vice President, Oracle Worldwide Alliances & Channels; Bruce Chumley, Vice President Channel Distribution Sales, Oracle WW Alliances & Channels; Steve Biondi, VP Channel Distribution Sales, Oracle WW Alliances & Channels. Regional reporters: Silvia Kaske, VP Oracle A&C WCE North; Will O'Brien, VP Oracle A&C UK/IE; Eric Fontaine, VP Oracle A&C WCE South; Janusz Naklicki, VP Oracle A&C ECEMEA.

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >