Search Results

Search found 3700 results on 148 pages for 'strongly typed dataset'.

Page 24/148 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • c# - How do you get a variable's name as it was physically typed in its declaration?

    - by Petras
    The class below contains the field city. I need to dynamically determine the field's name as it is typed in the class declaration, i.e. I need to get the string "city" from an instance of the object city. I have tried to do this by examining its Type in DoSomething() but can't find it when examining the contents of the Type in the debugger. Is it possible? public class Person { public string city = "New York"; public Person() { } public void DoSomething() { Type t = city.GetType(); string field_name = t.SomeUnknownFunction(); //would return the string "city" if it existed! } } Some people in their answers below have asked me why I want to do this. Here's why. In my real world situation, there is a custom attribute above city. [MyCustomAttribute("param1", "param2", etc)] public string city = "New York"; I need this attribute in other code. To get the attribute, I use reflection. And in the reflection code I need to type the string "city": MyCustomAttribute attr; Type t = typeof(Person); foreach (FieldInfo field in t.GetFields()) { if (field.Name == "city") { //do stuff when we find the field that has the attribute we need } } Now this isn't type safe. If I changed the variable "city" to "workCity" in my field declaration in Person this line would fail unless I knew to update the string: if (field.Name == "workCity") //I have to make this change in another file for this to still work, yuk! { } So I am trying to find some way to pass the string to this code without physically typing it. Yes, I could declare it as a string constant in Person (or something like that) but that would still be typing it twice. Phew! That was tough to explain! Thanks, and thanks a lot to all who answered this. It sent me on a new path to better understand lambda expressions. And it created a new question.
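    A minimal sketch of the lambda-expression approach the poster refers to (the helper name and usage below are illustrative, not from the thread); in C# 6 and later, nameof(person.city) gives the same string at compile time:

    using System;
    using System.Linq.Expressions;

    public static class FieldNames
    {
        // Extracts the member name from an expression such as () => person.city,
        // so renaming the field becomes a compiler error instead of a stale magic string.
        public static string Of<T>(Expression<Func<T>> expression)
        {
            var member = expression.Body as MemberExpression;
            if (member == null)
                throw new ArgumentException("Expression must access a field or property.");
            return member.Member.Name;
        }
    }

    // Usage (hypothetical):
    //   var person = new Person();
    //   string fieldName = FieldNames.Of(() => person.city);   // "city"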

    Read the article

  • What's the best way to paginate a dataset with Zend_Framework and Doctrine?

    - by joedevon
    Before I start to build this myself I thought I'd ask others to share their experience. What's the best / your favorite way to paginate a dataset with an application built upon Zend_Framework and Doctrine as your ORM? I'm new to Doctrine. I'm calling the model directly from a View Helper, bypassing the Controller, although I'm still interested if your solution uses controllers. I did see one article on this topic: http://ciaranmcnulty.com/blog/2009/06/Simplify-pagination-logic-using-a-custom-zend-paginator-adapter Devzone has an article using Doctrine, Zend Framework OR Pear, but none of those options mention a #ZF app that uses Doctrine.

    Read the article

  • Best way to store large dataset in SQL Server?

    - by gary
    I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTS) but mostly queries for one or more keywords. I have read "Tagsystems: performance tests" which is MySQL based and it seems 2NF appears to be a good method for implementing this, however I was wondering if anyone had experience with doing this with SQL Server 2008 and very large datasets. I am likely to initially have 1 million key fields which could have up to 50 keywords each. Would a structure of keyfield, keyword1, keyword2, ... , keyword50 be the best solution or two tables keyid keyfield | 1 | | M keyid keyword Be a better idea if my queries are mostly going to be looking for results that have one or more keywords?

    Read the article

  • How to create a SQL select statement that would produce a "merged" dataset from two tables (Oracle DBMS)?

    - by Roman Kagan
    Here's my original question: merging two data sets. Unfortunately I omitted some intricacies that I'd like to elaborate on here. So I have two tables, events_source_1 and events_source_2. I have to produce a resultant dataset from those tables (that I'd be able to insert into a third table, but that's irrelevant). events_source_1 contains historic event data and I have to get the most recent event (for that I'm doing the following): select event_type,b,c,max(event_date),null next_event_date from events_source_1 group by event_type,b,c,event_date,null events_source_2 contains the future event data and I have to do the following: select event_type,b,c,null event_date, next_event_date from events_source_2 where b>sysdate; How do I put an outer join statement to fill the void (i.e. when the same event_type,b,c is found in events_source_2, then next_event_date will be filled with the first date found)? Greatly appreciate your help in advance.

    Read the article

  • MySQL: Is it possible to return a "mixed" dataset?

    - by Tom
    Hi, I'm wondering if there's some clever way in MySQL to return a "mixed/balanced" dataset according to a specific criterion? To illustrate, let's say that there are potential results in a table that can be of Type 1 or Type 2 (i.e. a column has a value 1 or 2 for each record). Is there a clever query that would be able to directly return results alternating between 1 and 2 in sequence: 1st record is of type 1, 2nd record is of type 2, 3rd record is of type 1, 4th record is of type 2, etc... Apologies if the question is silly, just looking for some options. Of course, I could return any data and do this in PHP, but it does add some code. Thanks.

    Read the article

  • Can someone tell me why my dataset won't save correctly to the database in a simple winforms app?

    - by Mike
    I have been struggling with this all day and I know it is probably something stupid. My code is below. If I call save then exit my program and start again I can save images to my events but if I just call save when I try to add an image and call save again I get a foreign key error. From what I know I thought my save method was updating the database from my dataset so the event associated with the image should exist. Anyway here is my save method... Private Sub Save() Me.Validate() EventsBindingSource.EndEdit() ImagesBindingSource.EndEdit() TableAdapterManager.UpdateAll(EventDataSet) EventDataSet.AcceptChanges() End Sub Am I doing this wrong? Is this enough detail?
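    The question's code is VB.NET; below is a C# sketch of one commonly suggested fix, assuming the designer-generated TableAdapterManager (the namespace and member names come from the typed-dataset designer and may differ in the actual project). The idea is to make the manager insert parent (Events) rows before child (Images) rows, so the foreign key already exists when the image rows are written:

    private void Save()
    {
        this.Validate();
        eventsBindingSource.EndEdit();
        imagesBindingSource.EndEdit();

        // Assumption: InsertUpdateDelete ordering writes new parent rows before child rows.
        tableAdapterManager.UpdateOrder =
            EventDataSetTableAdapters.TableAdapterManager.UpdateOrderOption.InsertUpdateDelete;
        tableAdapterManager.UpdateAll(eventDataSet);
    }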

    Read the article

  • Can Automapper populate a strongly typed object from the data in a NameValueCollection?

    - by Seth Petry-Johnson
    Can Automapper map values from a NameValueCollection onto an object, where target.Foo receives the value stored in the collection under the "Foo" key? I have a business object that stores some data in named properties and other data in a property bag. Different views make different assumptions about the data in the property bag, which I capture in page-specific view models. I want to use AutoMapper to map both the "inherent" attributes (the ones that always exist) as well as the "dynamic" attributes (the ones that vary per view and may or may not exist).
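    Whether AutoMapper can do this directly is the question being asked; as a point of comparison, a hand-rolled reflection sketch (all names illustrative) that copies matching keys onto settable properties looks like this:

    using System;
    using System.Collections.Specialized;

    public static class NameValueCollectionMapper
    {
        // Copies each key in the collection onto a property of the same name,
        // converting the string value to the property's type.
        public static T Populate<T>(NameValueCollection source, T target)
        {
            foreach (string key in source.AllKeys)
            {
                var property = typeof(T).GetProperty(key);
                if (property == null || !property.CanWrite) continue;
                object value = Convert.ChangeType(source[key], property.PropertyType);
                property.SetValue(target, value, null);
            }
            return target;
        }
    }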

    Read the article

  • if all my views are passed a strongly typed viewdata, if they have a baseviewdata class, can I set a

    - by Blankman
    I want all my views to inherit from a baseview data so I can set some shared properties that all my views will need. Can I set some properties in OnExecuting so I don't have to do it for all Actions? I want to then display the string value of the property in all my view pages. If yes, how can I do this? I need to hook into the base view data somehow? so i'll have: public MyViewData : ViewData { } And I need one for generics also?
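    A minimal sketch of one way to do this in ASP.NET MVC (names are illustrative): a base view data class plus a base controller that fills the shared properties after each action has produced its model, so individual actions don't repeat the work.

    using System.Web.Mvc;

    public class BaseViewData
    {
        public string CurrentUserName { get; set; }
    }

    public abstract class BaseController : Controller
    {
        // Runs after every action; if the action returned a model deriving from
        // BaseViewData, the shared properties are filled in one place.
        protected override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            var model = ViewData.Model as BaseViewData;
            if (model != null && User != null)
                model.CurrentUserName = User.Identity.Name;
            base.OnActionExecuted(filterContext);
        }
    }

    // A generic variant such as BaseViewData<T> can layer page-specific data on top,
    // as long as it still derives from BaseViewData.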

    Read the article

  • How to parse JSON string that can be one of two different strongly typed objects?

    - by user852194
    Background: I'm invoking a web service method which returns a JSON string. This string can be of type ASConInfo or ASErrorResponse. Question: Using the DataContractJsonSerializer, how can I convert the returned JSON string to one of those objects? Thanks in advance I have tried the following technique, but it does not work: public static object test(string inputString) { object obj = null; using (MemoryStream ms = new MemoryStream(Encoding.Unicode.GetBytes(inputString))) { DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(object)); obj = ser.ReadObject(ms) as object; } return obj; } [WebMethod] public string TypeChecker() { string str = "{\"Error\":191,\"ID\":\"112345678921212\",\"Length\":15}"; //string strErro = ""; object a = test(str); if (a is ASResponse) { return "ASResponse"; } if (a is ASErrorResponse) { return "ASErrorResponse"; } return "Nothing"; }
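    One hedged approach, since DataContractJsonSerializer needs a concrete type rather than object: deserialize into the error contract first and inspect a discriminating member, falling back to the other type. The contract below is reconstructed from the sample JSON in the question; ASConInfo is whatever the service actually returns.

    using System.IO;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Json;
    using System.Text;

    [DataContract]
    public class ASErrorResponse
    {
        [DataMember] public int Error { get; set; }
        [DataMember] public string ID { get; set; }
        [DataMember] public int Length { get; set; }
    }

    public static class JsonProbe
    {
        // Deserializes the string as a specific contract type chosen by the caller.
        public static T Read<T>(string json)
        {
            using (var ms = new MemoryStream(Encoding.Unicode.GetBytes(json)))
            {
                var serializer = new DataContractJsonSerializer(typeof(T));
                return (T)serializer.ReadObject(ms);
            }
        }
    }

    // Usage sketch:
    //   var error = JsonProbe.Read<ASErrorResponse>(json);
    //   if (error != null && error.Error != 0) { /* it's an error response */ }
    //   else { var info = JsonProbe.Read<ASConInfo>(json); /* ... */ }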

    Read the article

  • ASP.NET MVC: Redundant (strongly typed) views in CRUD areas.

    - by UpTheCreek
    In the CRUD areas of my MVC app I have lots of seemingly pointless view files, such as: <%@ Page Title="" Language="C#" MasterPageFile="Some.Master" Inherits="System.Web.Mvc.ViewPage<SomeModel>" %> <asp:Content ID="ContentID" ContentPlaceHolderID="SomePlaceHolder" runat="server"> <%= Html.DisplayForModel() %> </asp:Content> This is of course pretty unDRY. Is it possible to use a shared view for this while at the same time preserving the Strong Typing? (e.g. by specifying the generic type in the controller?)

    Read the article

  • C# 4.0: Dynamic Programming

    - by Paulo Morgado
    The major feature of C# 4.0 is dynamic programming. Not just dynamic typing, but dynamic in a broader sense, which means talking to anything that is not statically typed to be a .NET object. Dynamic Language Runtime The Dynamic Language Runtime (DLR) is a piece of technology that unifies dynamic programming on the .NET platform, the same way the Common Language Runtime (CLR) has been a common platform for statically typed languages. The CLR always had dynamic capabilities. You could always use reflection, but its main goal was never to be a dynamic programming environment and there were some features missing. The DLR is built on top of the CLR and adds those missing features to the .NET platform. The Dynamic Language Runtime is the core infrastructure that consists of: Expression Trees The same expression trees used in LINQ, now improved to support statements. Dynamic Dispatch Dispatches invocations to the appropriate binder. Call Site Caching For improved efficiency. Dynamic languages and languages with dynamic capabilities are built on top of the DLR. IronPython and IronRuby were already built on top of the DLR, and now, the support for using the DLR is being added to C# and Visual Basic. Other languages built on top of the CLR are expected to also use the DLR in the future. Underneath the DLR there are binders that talk to a variety of different technologies: .NET Binder Allows talking to .NET objects. JavaScript Binder Allows talking to JavaScript in Silverlight. IronPython Binder Allows talking to IronPython. IronRuby Binder Allows talking to IronRuby. COM Binder Allows talking to COM. With all these binders it is possible to have a single programming experience to talk to all these environments that are not statically typed .NET objects. The dynamic Static Type Let's take this traditional statically typed code: Calculator calculator = GetCalculator(); int sum = calculator.Add(10, 20); Because the variable that receives the return value of the GetCalculator method is statically typed to be of type Calculator and, because the Calculator type has an Add method that receives two integers and returns an integer, it is possible to call that Add method and assign its return value to a variable statically typed as integer. Now let's suppose the calculator was not a statically typed .NET class, but, instead, a COM object or some .NET code we don't know the type of. All of a sudden it gets very painful to call the Add method: object calculator = GetCalculator(); Type calculatorType = calculator.GetType(); object res = calculatorType.InvokeMember("Add", BindingFlags.InvokeMethod, null, calculator, new object[] { 10, 20 }); int sum = Convert.ToInt32(res); And what if the calculator was a JavaScript object? ScriptObject calculator = GetCalculator(); object res = calculator.Invoke("Add", 10, 20); int sum = Convert.ToInt32(res); For each dynamic domain we have a different programming experience and that makes it very hard to unify the code. With C# 4.0 it becomes possible to write code this way: dynamic calculator = GetCalculator(); int sum = calculator.Add(10, 20); You simply declare a variable whose static type is dynamic. dynamic is a pseudo-keyword (like var) that indicates to the compiler that operations on the calculator object will be done dynamically. The way you should look at dynamic is that it's just like object (System.Object) with dynamic semantics associated. Anything can be assigned to a dynamic. 
dynamic x = 1; dynamic y = "Hello"; dynamic z = new List<int> { 1, 2, 3 }; At run-time, all objects will have a type. In the above example x is of type System.Int32. When one or more operands in an operation are typed dynamic, member selection is deferred to run-time instead of compile-time. Then the run-time type is substituted in all variables and normal overload resolution is done, just like it would happen at compile-time. The result of any dynamic operation is always dynamic and, when a dynamic object is assigned to something else, a dynamic conversion will occur. Code Resolution Method double x = 1.75; double y = Math.Abs(x); compile-time double Abs(double x) dynamic x = 1.75; dynamic y = Math.Abs(x); run-time double Abs(double x) dynamic x = 2; dynamic y = Math.Abs(x); run-time int Abs(int x) The above code will always be strongly typed. The difference is that, in the first case the method resolution is done at compile-time, and in the others it's done at run-time. IDynamicMetaObjectProvider The DLR is pre-wired to know .NET objects, COM objects and so forth, but any dynamic language can implement its own objects, or you can implement your own objects in C# through the implementation of the IDynamicMetaObjectProvider interface. When an object implements IDynamicMetaObjectProvider, it can participate in the resolution of how method calls and property access is done. The .NET Framework already provides two implementations of IDynamicMetaObjectProvider: DynamicObject : IDynamicMetaObjectProvider The DynamicObject class enables you to define which operations can be performed on dynamic objects and how to perform those operations. For example, you can define what happens when you try to get or set an object property, call a method, or perform standard mathematical operations such as addition and multiplication. ExpandoObject : IDynamicMetaObjectProvider The ExpandoObject class enables you to add and delete members of its instances at run time and also to set and get values of these members. This class supports dynamic binding, which enables you to use standard syntax like sampleObject.sampleMember, instead of more complex syntax like sampleObject.GetAttribute("sampleMember").
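    A small sketch of the ExpandoObject behaviour described above (member names are illustrative):

    using System;
    using System.Collections.Generic;
    using System.Dynamic;

    class ExpandoDemo
    {
        static void Main()
        {
            dynamic sample = new ExpandoObject();
            sample.sampleMember = 42;                     // member added at run time
            Console.WriteLine(sample.sampleMember);

            // ExpandoObject also implements IDictionary<string, object>,
            // which is how members can be enumerated or removed again.
            var dictionary = (IDictionary<string, object>)sample;
            dictionary.Remove("sampleMember");
            Console.WriteLine(dictionary.ContainsKey("sampleMember"));   // False
        }
    }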

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass from DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of: public class DynamicFoo : DynamicObject { Dictionary<string, object> properties = new Dictionary<string, object>(); public string Bar { get; set; } public DateTime Entered { get; set; } public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; if (!properties.ContainsKey(binder.Name)) return false; result = properties[binder.Name]; return true; } public override bool TrySetMember(SetMemberBinder binder, object value) { properties[binder.Name] = value; return true; } } This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember() which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type. Strong Typing and Dynamic Casting I now can instantiate and use DynamicFoo in a couple of different ways: Strong Typing DynamicFoo fooExplicit = new DynamicFoo(); var fooVar = new DynamicFoo(); These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, and so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above as it's never used. Using either of these I can access the native properties: DynamicFoo fooExplicit = new DynamicFoo(); // static typing assignments fooVar.Bar = "Barred!"; fooExplicit.Entered = DateTime.Now; // echo back static values Console.WriteLine(fooVar.Bar); Console.WriteLine(fooExplicit.Entered); but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic. Dynamic dynamic fooDynamic = new DynamicFoo(); fooDynamic on the other hand is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this: DynamicFoo fooExplicit = new DynamicFoo(); dynamic fooDynamic = fooExplicit; Note that dynamic typically doesn't require an explicit cast as the compiler automatically performs the cast, so there's no need to use as dynamic. Dynamic functionality works at runtime and allows for the dynamic wrapper to look up and call members dynamically. 
A dynamic type will look for members to access or call in two places: Using the strongly typed members of the object Using the IDynamicMetaObjectProvider interface methods to access members So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows runtime determination of what action to take when a member is accessed at runtime *if* the member you are accessing does not exist on the object. Class members are checked first before IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type: dynamic fooDynamic = new DynamicFoo(); // dynamic typing assignments fooDynamic.NewProperty = "Something new!"; fooDynamic.LastAccess = DateTime.Now; // dynamic assigning static properties fooDynamic.Bar = "dynamic barred"; fooDynamic.Entered = DateTime.Now; // echo back dynamic values Console.WriteLine(fooDynamic.NewProperty); Console.WriteLine(fooDynamic.LastAccess); Console.WriteLine(fooDynamic.Bar); Console.WriteLine(fooDynamic.Entered); The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess) all using a single type instance, which is pretty cool. As you can see it's pretty easy to create an extensible type this way that can add members dynamically at runtime. The Alter Ego of IDynamicObject The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong typing, compile time features. You can see this easily in the Visual Studio code editor. As soon as you assign a value to a dynamic you lose IntelliSense, which means there's no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior which accesses native members plus it uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type: dynamic fooDynamic = new DynamicFoo(); fooDynamic.NewProperty = "New Property Value"; DynamicFoo foo = fooDynamic; foo.Bar = "Barred"; Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type so I don't need to specify as dynamic or as DynamicFoo. Moral of the Story This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores and expose them as an object interface. 
You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance to access the well-known strongly typed properties and to dynamic for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores to an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within a SSRS report would be simple, but unfortunately I was wrong. Customer Transaction Summary Example Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions End Date – end of the date range for which the report will summarize customer transactions Customer Ids – One or more customer Ids representing the customers that will be included in the report The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops-up: Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report  to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted. The Text Query Approach Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure. 
In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work. Manually Define Parameters First, we’ll need to manually define the parameters for the report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define the ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a multi-valued Integer parameter: In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area: 1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts + 2: ' EXEC rpt_CustomerTransactionSummary 3: @startDate=''' + @startDate + ''', 4: @endDate='''+ @endDate + ''', 5: @customerIds=@customerIdList')   By using the ‘Text’ query type we can enter any arbitrary SQL that we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters: @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit. @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3. @endDate – This is another simple date parameter that will get passed through into the @endDate parameter of the stored procedure on line 4. At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for the query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog: Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET. 
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want): 1: Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String 2: Dim insertStatements As New System.Text.StringBuilder() 3: For Each paramValue As Object In paramValues 4: insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue)) 5: Next 6: Return insertStatements.ToString() 7: End Function   This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run-time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to: 1: =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)   In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report. Limitations And Final Thoughts When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using mutli-valued parameters in a stored procedure.
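    For comparison, the "special steps" the post alludes to when calling the same table-valued-parameter stored procedure straight from ADO.NET look roughly like this (a sketch: the procedure, parameter, and type names come from the post; the column name and connection handling are assumptions):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class TvpCall
    {
        static DataTable BuildCustomerIds(params int[] ids)
        {
            var table = new DataTable();
            table.Columns.Add("Value", typeof(int));   // assumed to match the table type's column name
            foreach (int id in ids) table.Rows.Add(id);
            return table;
        }

        static void Run(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("rpt_CustomerTransactionSummary", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@startDate", new DateTime(2012, 1, 1));
                command.Parameters.AddWithValue("@endDate", new DateTime(2012, 12, 31));

                var customerIds = command.Parameters.AddWithValue("@customerIds", BuildCustomerIds(1, 2, 3));
                customerIds.SqlDbType = SqlDbType.Structured;
                customerIds.TypeName = "dbo.IntegerListTableType";

                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read()) { /* consume summary rows */ }
                }
            }
        }
    }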

    Read the article

  • Edit in desktop application with DataGridView

    - by SAMIR BHOGAYTA
    private void DataGridView_CellContentClick(object sender, DataGridViewCellEventArgs e) { if (e.ColumnIndex == 0) { string s = DataGridView.Rows[e.RowIndex].Cells[1].FormattedValue.ToString(); srno = Convert.ToInt16(s); FormName objFrm = new FormName(s); objFrm.MdiParent = this.MdiParent; objFrm.Show(); } } //Into the New Form public FormName(string id) { uid = id; i = Convert.ToInt16(id); InitializeComponent(); } //Get Detail As per id public void GetDetail() { string detail = "SELECT fieldname1,fieldname2 FROM TableName where PrimaryKeyField = "+id+""; DataSet ds = new DataSet(); ds = (DataSet)prm.RetriveData(detail); } //RetriveData Function public object RetriveData(string query) { // If you have sql connection use SqlConnection OleDbConnection con = new OleDbConnection(constr); OleDbDataAdapter drap = new OleDbDataAdapter(query, con); con.Open(); DataSet ds = new DataSet(); drap.Fill(ds); con.Close(); return ds; }

    Read the article

  • Working with Reporting Services Filters–Part 1

    - by smisner
    There are two ways that you can filter data in Reporting Services. The first way, which usually provides a faster performance, is to use query parameters to apply a filter using the WHERE clause in a SQL statement. In that case, the structure of the filter depends upon the syntax recognized by the source database. Another way to filter data in Reporting Services is to apply a filter to a dataset, data region, or a group. Using this latter method, you can even apply multiple filters. However, the use of filter operators or the setup of multiple filters is not always obvious, so in this series of posts, I'll provide some more information about the configuration of filters. First, why not use query parameters exclusively for filtering? Here are a few reasons: You might want to apply a filter to part of the report, but not all of the report. Your dataset might retrieve data from a stored procedure, and doesn't allow you to pass a query parameter for filtering purposes. Your report might be set up as a snapshot on the report server and, in that case, cannot be dynamically filtered based on a query parameter. Next, let's look at how to set up a report filter in general. The process is the same whether you are applying the filter to a dataset, data region, or a group. When you go to the Filters page in the Properties dialog box for whichever of these items you selected (dataset, data region, group), you click the Add button to create a new filter. The interface looks like this: The Expression field is usually a field in the dataset, so to make it easier for you to make a selection,the drop-down list displays all of the current dataset fields. But notice the expression button to the right, which means that you can set up any type of expression-not just a dataset field. To the right of the expression button, you'll find a data type drop-down list. It's important to specify the correct data type for the field or expression you're using. Now for the operators. Here's a list of the options that you have: This Operator Performs This Action =, <>, >, >=, <, <=, Like Compares expression to value Top N, Bottom N Compares expression to Top (Bottom) set of N values (N = integer) Top %, Bottom % Compares expression to Top (Bottom) N percent of values (N = integer or float) Between Determines whether expression is between two values, inclusive In Determines whether expression is found in list of values Last, the Value is what you're comparing to the expression using the operator. The construction of a filter using some operators (=, <>, >, etc.) is fairly simple. If my dataset (for AdventureWorks data) has a Category field, and I have a parameter that prompts the user for a single category, I can set up a filter like this: Expression Data Type Operator Value [Category] Text = [@Category] But if I set the parameter to accept multiple values, I need to change the operator from = to In, just as I would have to do if I were using a query parameter. The parameter expression, [@Category], which translates to =Parameters!Category.Value, doesn’t need to change because it represents an array as soon as I change the parameter to allow multiple values. The “In” operator requires an array. With that in mind, let’s consider a variation on Value. Let’s say that I have a parameter that prompts the user for a particular year – and for simplicity’s sake, this parameter only allows a single value, and I have an expression that evaluates the previous year based on the user’s selection. 
Then I want to use these two values in two separate filters with an OR condition. That is, I want to filter either by the year selected OR by the year that was computed. If I create two filters, one for each year (as shown below), then the report will only display results if BOTH filter conditions are met – which would never be true. Expression Data Type Operator Value [CalendarYear] Integer = [@Year] [CalendarYear] Integer = =Parameters!Year.Value-1 To handle this scenario, we need to create a single filter that uses the “In” operator, and then set up the Value expression as an array. To create an array, we use the Split function after creating a string that concatenates the two values (highlighted in yellow) as shown below. Expression Data Type Operator Value =Cstr(Fields!CalendarYear.Value) Text In =Split( CStr(Parameters!Year.Value) + ”,” + CStr(Parameters!Year.Value-1) , “,”) Note that in this case, I had to apply a string conversion on the year integer so that I could concatenate the parameter selection with the calculated year. Pay attention to the second argument of the Split function—you must use a comma delimiter for the result to work correctly with the In operator. I also had to change the Expression value from [CalendarYear] (or =Fields!CalendarYear.Value) so that the expression would return a string that I could compare with the values in the string array. More fun with filter expressions in future posts!

    Read the article

  • Moving DataSets through BizTalk

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Yuck. But sometimes you have to, so here are a couple of things to bear in mind: Schemas Point a codegen tool at a WCF endpoint which exposes a DataSet and it will generate an XSD which describes the DataSet like this: <xs:element minOccurs="0" name="GetDataSetResult" nillable="true"> <xs:complexType> <xs:annotation> <xs:appinfo> <ActualType Name="DataSet" Namespace="http://schemas.datacontract.org/2004/07/System.Data" xmlns="http://schemas.microsoft.com/2003/10/Serialization/" /> </xs:appinfo> </xs:annotation> <xs:sequence> <xs:element ref="xs:schema" /> <xs:any /> </xs:sequence> </xs:complexType> </xs:element> In a serialized instance, the element of type xs:schema contains a full schema which describes the structure of the DataSet – tables, columns etc. The second element, of type xs:any, contains the actual content of the DataSet, expressed as DiffGrams: <GetDataSetResult> <xs:schema id="NewDataSet" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"> <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true"> <xs:complexType> <xs:choice minOccurs="0" maxOccurs="unbounded"> <xs:element name="Table1"> <xs:complexType> <xs:sequence> <xs:element name="Id" type="xs:string" minOccurs="0" /> <xs:element name="Name" type="xs:string" minOccurs="0" /> <xs:element name="Date" type="xs:string" minOccurs="0" /> </xs:sequence> </xs:complexType> </xs:element> </xs:choice> </xs:complexType> </xs:element> </xs:schema> <diffgr:diffgram xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"> <NewDataSet xmlns=""> <Table1 diffgr:id="Table11" msdata:rowOrder="0" diffgr:hasChanges="inserted"> <Id>377fdf8d-cfd1-4975-a167-2ddb41265def</Id> <Name>157bc287-f09b-435f-a81f-2a3b23aff8c4</Name> <Date>a5d78d83-6c9a-46ca-8277-f2be8d4658bf</Date> </Table1> </NewDataSet> </diffgr:diffgram> </GetDataSetResult> Put the XSD into a BizTalk schema and it will fail to compile, giving you the error: The 'http://www.w3.org/2001/XMLSchema:schema' element is not declared. You should be able to work around that, but I've had no luck in BizTalk Server 2006 R2 – instead you can safely change that xs:schema element to be another xs:any type: <xs:element minOccurs="0" name="GetDataSetResult" nillable="true"> <xs:complexType> <xs:sequence> <xs:any /> <xs:any /> </xs:sequence> </xs:complexType> </xs:element> (This snippet omits the annotation, but you can leave it in the schema). For an XML instance to pass validation through the schema, you'll also need to flag the any elements so they can contain any namespace and skip validation: <xs:element minOccurs="0" name="GetDataSetResult" nillable="true"> <xs:complexType> <xs:sequence> <xs:any namespace="##any" processContents="skip" /> <xs:any namespace="##any" processContents="skip" /> </xs:sequence> </xs:complexType> </xs:element> You should now have a compiling schema which can be successfully tested against a serialised DataSet. 
Transforms If you're mapping a DataSet element between schemas, you'll need to use the Mass Copy Functoid to populate the target node from the contents of both the xs:any type elements on the source node: This should give you a compiled map which you can test against a serialized instance. And if you have a .NET consumer on the other side of the mapped BizTalk output, it will correctly deserialize the response into a DataSet.
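    A quick way to see the schema-plus-diffgram shape described above is to write a DataSet out by hand; this sketch (table and column names borrowed from the post) prints the inline <xs:schema> part and then the <diffgr:diffgram> part:

    using System;
    using System.Data;

    class DiffGramDemo
    {
        static void Main()
        {
            var dataSet = new DataSet("NewDataSet");
            var table = dataSet.Tables.Add("Table1");
            table.Columns.Add("Id", typeof(string));
            table.Columns.Add("Name", typeof(string));
            table.Columns.Add("Date", typeof(string));
            table.Rows.Add(Guid.NewGuid().ToString(), "example", DateTime.UtcNow.ToString("o"));

            dataSet.WriteXmlSchema(Console.Out);                    // the <xs:schema> element
            dataSet.WriteXml(Console.Out, XmlWriteMode.DiffGram);   // the <diffgr:diffgram> element
        }
    }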

    Read the article

  • How to compile a schema that uses a DataSet (xs:schema)?

    - by Yaron Naveh
    I have created the simplest web service in C#: public void AddData(DataSet ds) The generated schema (WSDL) looks like this: <s:schema xmlns:s="http://www.w3.org/2001/XMLSchema"> ... <s:element ref="s:schema" /> ... </s:schema> Note the schema does not contain any import/include elements. I am trying to load this schema into a C# System.Xml.XmlSchema and add it to a System.Xml.XmlSchemaSet: var set = new XmlSchemaSet(); var fs = new FileStream(@"c:\temp\schema.xsd", FileMode.Open); var s = XmlSchema.Read(fs, null); set.Add(s); set.Compile(); The last line throws this exception: The 'http://www.w3.org/2001/XMLSchema:schema' element is not declared. It kind of makes sense: the schema generated by .NET uses the "s:schema" type, which is declared in a schema which is not imported. Why does .NET create an invalid schema? How can I compile the schema anyway? What I did was download the schema at http://www.w3.org/2001/XMLSchema and add it to the XmlSchemaSet also. This did not work since that online schema contains a DTD definition. I had to manually remove it and now it all works. Does this make sense or am I missing something?
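    A sketch of the workaround described in the question, with both schemas added to the set (paths are illustrative; the W3C schema is a local copy with its DTD removed, as noted above):

    using System.IO;
    using System.Xml.Schema;

    class SchemaCompile
    {
        static void Main()
        {
            var set = new XmlSchemaSet();

            // Local copy of http://www.w3.org/2001/XMLSchema with the DTD stripped out,
            // so the s:schema reference can be resolved.
            set.Add("http://www.w3.org/2001/XMLSchema", @"c:\temp\XMLSchema.xsd");

            using (var fs = File.OpenRead(@"c:\temp\schema.xsd"))
            {
                set.Add(XmlSchema.Read(fs, null));
            }

            set.Compile();
        }
    }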

    Read the article

  • How to structure an index for type ahead for extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200million+ records and am looking to build a dedicated backend to power a type ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open source suggestions as well. I am looking for advice, tales from the trenches, or even better direct instruction on what I will need as far as amount of hardware and structure of software. Requirements: Must have: The ability to do starts with substring matching (I type in 'st' and it should match 'Stephen') The ability to return results very quickly, I'd say 500ms is an upper bound. Nice to have: The ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetical, aka Google style. In-word substring matching, so for example ('st' would match 'bestseller') Note: This index will purely be used for type ahead, and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up votes for any useful information that allows me to get closer to an enterprise level type ahead solution

    Read the article

  • What's the best way to transfer a large dataset over a .NET web service?

    - by Malvineous
    I've inherited a C# .NET application which talks to a web service, and the web service talks to an Oracle database. I need to add an export function to the UI, to produce an Excel spreadsheet of some of the data. I have created a web service function to run a database query, load the data into a DataTable and then return it, which works fine for a small number of rows. However there is enough data in the full run that the client application locks up for a few minutes and then returns a timeout error. Obviously this isn't the best way to retrieve such a large dataset. Before I go ahead and come up with some dodgy way of splitting the call, I'm wondering if there is already something in place that can handle this. At the moment I'm thinking of a startExport function then repeatedly calling a next50Rows function until there is no data left, but because web services are stateless this means I'm going to have to keep some sort of ID number around and deal with the associated permissions. It would mean that I don't have to load the entire data set into the web server's memory though, which is one good thing. So if anyone knows a better way to retrieve a large amount of data (in a table format) over a .NET web service, please let me know!
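    A skeleton of one stateless variant of the start/next paging idea floated in the question (all names are illustrative; the windowing query itself is left as a placeholder), so each call returns one page instead of the whole result set:

    using System.Data;
    using System.Web.Services;

    public class ExportService : WebService
    {
        [WebMethod]
        public DataTable GetExportPage(int pageIndex, int pageSize)
        {
            DataTable page = QueryPage(pageIndex * pageSize, pageSize);
            page.TableName = "ExportPage";   // a DataTable needs a name to serialize over a web service
            return page;
        }

        private DataTable QueryPage(int skip, int take)
        {
            // The real implementation would window the Oracle query (e.g. with ROW_NUMBER)
            // using skip/take rather than loading every row into memory.
            var page = new DataTable();
            // ... fill from the database here ...
            return page;
        }
    }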

    Read the article

  • Convert raw IMAP server data into local folders, then upload partial dataset to new IMAP server?

    - by Manca Weeks
    I am transitioning a company with about 30 IMAP accounts, loaded with data (about 77GB total), to a new email host. The majority of the data will be converted into a local archive and distributed to the company computers as a static reference data set. The server side folders the users absolutely cannot do without being on the server will be uploaded back to the new server. I used Mac OS X Mail (Snow Leopard 10.6.6) to download the content. I notice some messages have the name [xxx].partial.emlx, which leads me to believe they have not been downloaded all the way. I have root access to the mail server data and could download the IMAP server data via FTP. I am not sure what utility to use to convert that data to local Mail.app mailboxes. Furthermore, I would appreciate any input on the best way to upload a portion of the data to the new server (GoDaddy), preserving the original dates of the messages. edit OK - forget the raw server data. I found a script that apparently does pretty good archiving IMAP folders to local mbx files. My main quest now is to batch upload a mailbox hierarchy to the new IMAP server without having to start-stop and deal with similar issues. Anyone know of a utility (hopefully for OS X, but if not, I'll fire up my XP virtual system...) that would be capable of this? Thanks, M

    Read the article

  • Static method, abstract method, interface method comparison?

    - by programmerist
    When should I choose each of these approaches? I cannot decide which one to prefer or when to use each of them. Which one gives the best performance? First Type Usage public abstract class _AccessorForSQL { public abstract bool Save(string sp, ListDictionary ld, CommandType cmdType); public abstract bool Update(); public abstract bool Delete(); public abstract DataSet Select(); } class GenAccessor : _AccessorForSQL { DataSet ds; DataTable dt; public override bool Save(string sp, ListDictionary ld, CommandType cmdType) { return true; } public override bool Update() { return true; } public override bool Delete() { return true; } public override DataSet Select() { DataSet dst = new DataSet(); return dst; } } Second Type Usage I can also write it like this: public class GenAccessor { public static bool Save() { return true; } public static bool Update() { return true; } public static bool Delete() { return true; } } Third Type Usage Or like this: public interface IAccessorForSQL { bool Delete(); bool Save(string sp, ListDictionary ld, CommandType cmdType); DataSet Select(); bool Update(); } public class _AccessorForSQL : IAccessorForSQL { private DataSet ds; private DataTable dt; public virtual bool Save(string sp, ListDictionary ld, CommandType cmdType) { return true; } // ... other interface members implemented the same way } I can use the first one like this: GenAccessor gen = new GenAccessor(); gen.Save(); And the second one like this: GenAccessor.Save(); Which one do you prefer? When would I use each of them? When do I need an override method, and when do I need a static method?

    Read the article

  • The cost of passing by shared_ptr

    - by Artem
    I use std::tr1::shared_ptr extensively throughout my application. This includes passing objects in as function arguments. Consider the following: class Dataset {...} void f( shared_ptr< Dataset const > pds ) {...} void g( shared_ptr< Dataset const > pds ) {...} ... While passing a dataset object around via shared_ptr guarantees its existence inside f and g, the functions may be called millions of times, which causes a lot of shared_ptr objects being created and destroyed. Here's a snippet of the flat gprof profile from a recent run: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls s/call s/call name 9.74 295.39 35.12 2451177304 0.00 0.00 std::tr1::__shared_count::__shared_count(std::tr1::__shared_count const&) 8.03 324.34 28.95 2451252116 0.00 0.00 std::tr1::__shared_count::~__shared_count() So, ~17% of the runtime was spent on reference counting with shared_ptr objects. Is this normal? A large portion of my application is single-threaded and I was thinking about re-writing some of the functions as void f( const Dataset& ds ) {...} and replacing the calls shared_ptr< Dataset > pds( new Dataset(...) ); f( pds ); with f( *pds ); in places where I know for sure the object will not get destroyed while the flow of the program is inside f(). But before I run off to change a bunch of function signatures / calls, I wanted to know what the typical performance hit of passing by shared_ptr was. Seems like shared_ptr should not be used for functions that get called very often. Any input would be appreciated. Thanks for reading. -Artem

    Read the article

  • SSRS How to access the current value within a list control?

    - by Dale Burrell
    In SQL Server Reporting Services I have a report which has a list control which groups on currency. Within the list control I display the detailed rows of all records filtered to those with a value = £500. i.e. the top earners. However for each row I need to calculate the percentage of its amount over the total of the entire dataset. Because I am filtering it I can't use Sum(Fields!Amount.Value) as that only sums the data after filtering, so I am trying a conditional sum over the entire dataset, but am struggling with the correct condition e.g =100.00*Fields!Amount.Value/Sum((IIf(Fields!Currency.Value = "£", Fields!Amount.Value, CDec(0))),"DataSet") So where the hardcoded currency symbol is I need to access the current value of currency for the list control, but because my sum is scoped at dataset level any field access is dataset level. Ideally I'd like something like the following, otherwise any other ideas on how to solve this problem. =100.00*Fields!Amount.Value/Sum((IIf(Fields!Currency.Value = myListControl.Value, Fields!Amount.Value, CDec(0))),"DataSet") In fact, thinking about it, it would work if I just could access the row level data at that point, but how to do that when its at dataset scope within the sum statement? Hope that makes sense, any help appreciated.

    Read the article

  • How to mount a LOFS in Solaris that doesn’t cross mountpoints

    - by jcea
    I need to access my "root" ZFS dataset to delete a file under "/var". But "/var" is overlaid by another ZFS dataset. Since these are system datasets I can't "umount" them while the machine is running. And I want to avoid rebooting the system in "failsafe" mode, since this is a production machine. Theoretically ZFS would refuse to mount the "/var" dataset over the underlying "/var", because it is not empty. But it works, possibly because they are system datasets mounted early in the boot process. But having the underlying "/var" not empty is preventing me from creating an ABE (Alternate Boot Environment), so patching is risky, and I can't upgrade my system using Live Upgrade. The machine is remote. I have an IP KVM, but I'd rather avoid booting this machine in "failsafe" mode if I can. I know there is a file in "/var/" because I can snapshot the "root" dataset and check it. But snapshots are read-only, so I can't get rid of the file. I tried "mkdir /tmp/zzz; mount -F lofs / /tmp/zzz", but when I go to "/tmp/zzz/var", I see the "/var" dataset, not the underlying "root" dataset. That is, the LOFS is crossing mountpoints. I would usually like that, but not this time! Any suggestions, besides rebooting the machine in "failsafe" mode and messing with it through the IP KVM?

    Read the article

  • Error in data view when connecting to an Oracle DB

    - by Mike Polen
    When using SharePoint Designer I found this link that stepped me through how to get it working: http://spsolution.blogspot.com/2008/12/how-to-insert-data-source-in-sharepoint.html That allowed SharePoint Designer to talk to Oracle, but when I placed a data view on a page it gave me the following error: Error while executing web part: System.Data.OracleClient.OracleException: ORA-00923: FROM keyword not found where expected at System.Data.OracleClient.OracleConnection.CheckError(OciErrorHandle errorHandle, Int32 rc) at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, Boolean needRowid, OciRowidDescriptor& rowidDescriptor, ArrayList& resultParameterOrdinals) at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, ArrayList& resultParameterOrdinals) at System.Data.OracleClient.OracleCommand.ExecuteReader(CommandBehavior behavior) at System.Data.OracleClient.OracleCommand.ExecuteDbDataReader(CommandBehavior behavior) at System.Data.Common.DbCommand.Syst... 09/14/2009 14:40:23.52* w3wp.exe (0x0FA0) 0x1A88 Windows SharePoint Services Web Parts 89a1 Monitorable ... em.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable) at System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback) at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigatorInternal() ... 09/14/2009 14:40:23.52* w3wp.exe (0x0FA0) 0x1A88 Windows SharePoint Services Web Parts 89a1 Monitorable ... at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigator() at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigator(IDataSource datasource, Boolean originalData) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.GetXPathNavigator(String viewPath) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform() I am mystified.

    Read the article
