Search Results

Search found 25852 results on 1035 pages for 'linq query syntax'.


  • using indexer to retrieve Linq to SQL object from datastore

    - by fearofawhackplanet
    class UserDatastore : IUserDatastore { ... public IUser this[Guid userId] { get { User user = (from u in _dataContext.Users where u.Id == userId select u).FirstOrDefault(); return user; } } ... } One of the developers in our team is arguing that an indexer in the above situation is not appropriate and that a GetUser(Guid id) method should be preferred. The arguments being that: 1) We aren't indexing into an in-memory collection, the indexer is basically performing a hidden SQL query 2) Using a Guid in an indexer is bad (FxCop flagged this also) 3) Returning null from an indexer isn't normal behaviour 4) An API user generally wouldn't expect any of this behaviour I agree to an extent with (most of) these points. But I'm also inclined to argue that one of the characteristics of Linq is to abstract the database access to make it appear that you're simply working with a bunch of collections, even though the lazy evaluation paradigm means those collections aren't evaluated until you run a query over them. It doesn't seem inconsistent to me to access the datastore in the same manner as if it was a concrete in-memory collection here. Also, bearing in mind this is an inherited codebase which uses this pattern extensively and consistently, is it worth the refactoring? I accept that it might have been better to use a Get method from the start, but I'm not yet convinced that it's completely incorrect to be using an indexer. I'd be interested to hear all opinions, thanks.

    Read the article

  • Linq Getting Customers group by date and then by their type

    - by Nitin varpe
    I am working on generating a report showing customers using LINQ in C#. I want to show the no. of customers of each type. There are 3 types of customer: registered, guest and manager. I want to group customers by registered date and then by type of customer, i.e. if today 3 guest, 4 registered and 2 manager customers are inserted, and tomorrow 4, 5 and 6 are registered resp., then the report should show the number of customers registered on each day, with a separate row for each type. DATE TYPE OF CUSTOMER COUNT 31-10-2013 GUEST 3 31-10-2013 REGISTERED 4 31-10-2013 MANAGER 2 30-10-2013 GUEST 5 30-10-2013 REGISTERED 10 30-10-2013 MANAGER 3 LIKE THIS. var subquery = from eat in _customerRepo.Table group eat by new { yy = eat.CreatedOnUTC.Value.Year, mm = eat.CreatedOnUTC.Value.Month, dd = eat.CreatedOnUTC.Value.Day } into g select new { Id = g.Min(x => x.Id) }; var query = from c in _customerRepo.Table join cin in subquery.Distinct() on c.Id equals cin.Id select c; With the above query I only get the customer with the minimum Id registered on each day. Thanks in advance
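
    For what it's worth, here is a sketch of how the date-plus-type grouping could be written. CustomerType is a placeholder for whatever property actually distinguishes guest/registered/manager customers; the real column name isn't given in the question:

        // Group once by calendar date and customer type, then count each group.
        var report = from c in _customerRepo.Table
                     where c.CreatedOnUTC != null
                     group c by new
                     {
                         yy = c.CreatedOnUTC.Value.Year,
                         mm = c.CreatedOnUTC.Value.Month,
                         dd = c.CreatedOnUTC.Value.Day,
                         type = c.CustomerType          // hypothetical property
                     } into g
                     orderby g.Key.yy descending, g.Key.mm descending, g.Key.dd descending
                     select new
                     {
                         g.Key.yy, g.Key.mm, g.Key.dd,
                         TypeOfCustomer = g.Key.type,
                         Count = g.Count()
                     };

    Each row of the result is one (date, type) pair with its count, which matches the report layout above.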

    Read the article

  • Running an existing LINQ query against a dynamic object (DataTable like)

    - by TomTom
    Hello, I am working on a generic OData provider to go against a custom data provider that we have here. This is fully dynamic in that I query the data provider for the tables it knows. I have a basic storage structure in place so far based on the OData sample code. My problem is: OData supports queries and expects me to hand in an IQueryable implementation. On the lower side, I don't have any query support. Not a joke - the provider returns tables and the WHERE clause is not supported. Performance is not an issue here - the tables are small. It is ok to sort them in the OData provider. My main problem is this: I submit a SQL statement to get out the data of a table. The result is some sort of ADO.NET data reader here. I need to expose an IQueryable implementation for this data to potentially allow later filtering. Any idea how to best approach that? .NET 3.5 only (no 4.0 planned for some time). I was seriously thinking of creating dynamic DTO classes for every table (emitting bytecode) so I can use standard LINQ. Right now I am using a dictionary per entry (not too efficient) but I see no real way to filter / sort based on them.
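
    Since performance is explicitly not a concern and the tables are small, one low-ceremony option (a sketch only; the row shape below is an assumption) is to keep a dictionary per row and hand OData an in-memory IQueryable via AsQueryable(), so later Where/OrderBy calls run as LINQ to Objects rather than being pushed down to the provider:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical row shape: one dictionary per record, keyed by column name.
        public static IQueryable<IDictionary<string, object>> ToQueryable(
            IEnumerable<IDictionary<string, object>> rows)
        {
            // AsQueryable wraps the sequence in an EnumerableQuery, so filtering
            // and sorting happen in memory; nothing is translated to the source.
            return rows.AsQueryable();
        }

        // Usage sketch:
        // var q = ToQueryable(rows).Where(r => (string)r["Name"] == "x")
        //                          .OrderBy(r => (string)r["Name"]);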

    Read the article

  • Building an external list while filtering in LINQ

    - by Khnle
    I have an array of input strings that contains either email addresses or account names in the form of domain\account. I would like to build a List of string that contains only email addresses. If an element in the input array is of the form domain\account, I will perform a lookup in the dictionary. If the key is found in the dictionary, that value is the email address. If not found, that won't get added to the result list. The code below makes the above description clear: private bool where(string input, Dictionary<string, string> dict) { if (input.Contains("@")) { return true; } else { try { string value = dict[input]; return true; } catch (KeyNotFoundException) { return false; } } } private string select(string input, Dictionary<string, string> dict) { if (input.Contains("@")) { return input; } else { try { string value = dict[input]; return value; } catch (KeyNotFoundException) { return null; } } } public void run() { Dictionary<string, string> dict = new Dictionary<string, string>() { { "gmail\\nameless", "[email protected]"} }; string[] s = { "[email protected]", "gmail\\nameless", "gmail\\unknown" }; var q = s.Where(p => where(p, dict)).Select(p => select(p, dict)); List<string> resultList = q.ToList<string>(); } While the above code works (hope I don't have any typo here), there are 2 problems that I do not like with the above: The code in where() and select() seems to be redundant/repeating. It takes 2 passes. The second pass converts from the query expression to List. So I would like to add to the List resultList directly in the where() method. It seems like I should be able to do so. Here's the code: private bool where(string input, Dictionary<string, string> dict, List<string> resultList) { if (input.Contains("@")) { resultList.Add(input); //note the difference from above return true; } else { try { string value = dict[input]; resultList.Add(value); //note the difference from above return true; } catch (KeyNotFoundException) { return false; } } } Then my LINQ expression can be written nicely in a single statement: List<string> resultList = new List<string>(); s.Where(p => where(p, dict, resultList)); Or var q = s.Where(p => where(p, dict, resultList)); //do nothing with q afterward This seems like perfectly legal C# LINQ. The result: sometimes it works and sometimes it doesn't. So why doesn't my code work reliably and how can I make it do so?
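
    For what it's worth, the duplication and the second pass can both be removed without relying on side effects inside Where, which is exactly why the last version is unreliable: Where is lazily evaluated, so the lambda (and its Add calls) only runs if something actually enumerates the query. A single-pass sketch using TryGetValue over the same inputs:

        using System.Collections.Generic;
        using System.Linq;

        public static List<string> ResolveEmails(IEnumerable<string> inputs, IDictionary<string, string> dict)
        {
            return inputs
                .Select(input =>
                {
                    if (input.Contains("@"))
                        return input;                 // already an email address
                    string email;
                    return dict.TryGetValue(input, out email)
                        ? email                       // domain\account found in the lookup
                        : null;                       // unknown account, dropped below
                })
                .Where(email => email != null)
                .ToList();
        }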

    Read the article

  • Parsing google feed with linq to xml

    - by the_V
    Hi I'm trying to parse XML feed that I get via Google Contacts API with LINQ 2 XML. That's what feed looks like: <feed xmlns="http://www.w3.org/2005/Atom" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:gContact="http://schemas.google.com/contact/2008" xmlns:batch="http://schemas.google.com/gdata/batch" xmlns:gd="http://schemas.google.com/g/2005"> <id>[email protected]</id> <updated>2010-06-11T17:37:06.561Z</updated> <category scheme="http://schemas.google.com/g/2005#kind" term="http://schemas.google.com/contact/2008#contact" /> <title type="text">My Contacts</title> <link rel="alternate" type="text/html" href="http://www.google.com/" /> <link rel="http://schemas.google.com/g/2005#feed" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/myemailaddress%40gmail.com/full" /> <link rel="http://schemas.google.com/g/2005#post" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/myemailaddress%40gmail.com/full" /> <link rel="http://schemas.google.com/g/2005#batch" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/myemailaddress%40gmail.com/full/batch" /> <link rel="self" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/myemailaddress%40gmail.com/full?max-results=25" /> <author> <name>My NAme</name> <email>[email protected]</email> </author> <generator version="1.0" uri="http://www.google.com/m8/feeds">Contacts</generator> <openSearch:totalResults>19</openSearch:totalResults> <openSearch:startIndex>1</openSearch:startIndex> <openSearch:itemsPerPage>25</openSearch:itemsPerPage> <entry> <id>http://www.google.com/m8/feeds/contacts/myemailaddress%40gmail.com/base/0</id> <updated>2010-01-26T20:34:03.802Z</updated> <category scheme="http://schemas.google.com/g/2005#kind" term="http://schemas.google.com/contact/2008#contact" /> <title type="text">Contact name</title> <link rel="http://schemas.google.com/contacts/2008/rel#edit-photo" type="image/*" href="http://www.google.com/m8/feeds/photos/media/myemailaddress%40gmail.com/0/O-ydnzWMJcfZWqT-6gGetw" /> <link rel="http://schemas.google.com/contacts/2008/rel#photo" type="image/*" href="http://www.google.com/m8/feeds/photos/media/myemailaddress%40gmail.com/0" /> <link rel="self" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/myemailaddress%40gmail.com/full/0" /> <link rel="edit" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/myemailaddress%40gmail.com/full/0/1264538043802000" /> <gd:email rel="http://schemas.google.com/g/2005#other" address="[email protected]" primary="true" /> </entry> </feed> I've tried a number of things with linq 2 sql, but they didn't work. Even this simple code snipped doesn't work: using (FileStream stream = File.OpenRead("response.xml")) { XmlReader reader = XmlReader.Create(stream); XDocument doc = XDocument.Load(reader); XElement feed = doc.Element("feed"); if (feed == null) { Console.WriteLine("feed not found"); } XElement id = doc.Element("id"); if (id == null) { Console.WriteLine("id is null"); } } Problem is that both id and feed are null here. What wrong am I doing?
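
    The elements in that feed live in the Atom namespace (xmlns="http://www.w3.org/2005/Atom"), so Element("feed") looks for a name with no namespace and returns null. A sketch of the same snippet with the namespaces applied:

        using System.Linq;
        using System.Xml.Linq;

        XNamespace atom = "http://www.w3.org/2005/Atom";
        XNamespace gd = "http://schemas.google.com/g/2005";

        XDocument doc = XDocument.Load("response.xml");

        XElement feed = doc.Element(atom + "feed");     // root <feed> element
        XElement id = feed.Element(atom + "id");

        // Title and primary email address of each contact entry.
        var contacts = from entry in feed.Elements(atom + "entry")
                       select new
                       {
                           Title = (string)entry.Element(atom + "title"),
                           Email = entry.Elements(gd + "email")
                                        .Select(x => (string)x.Attribute("address"))
                                        .FirstOrDefault()
                       };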

    Read the article

  • Linq In Clause & Predicate building

    - by Michael G
    I have two tables. Report and ReportData. ReportData has a constraint ReportID. How can I write my linq query to return all Report objects where the predicate conditions are met for ReportData? Something like this in SQL: SELECT * FROM Report as r Where r.ServiceID = 3 and r.ReportID IN (Select ReportID FROM ReportData WHERE JobID LIKE 'Something%') This is how I'm building my predicate: Expression<Func<ReportData, bool>> predicate = PredicateBuilder.True<ReportData>(); predicate = predicate.And(x => x.JobID.StartsWith(QueryConfig.Instance.DataStreamName)); var q = engine.GetReports(predicate, reportsDataContext); reports = q.ToList(); This is my query construction at the moment: public override IQueryable<Report> GetReports(Expression<Func<ReportData, bool>> predicate, LLReportsDataContext reportDC) { if (reportDC == null) throw new ArgumentNullException("reportDC"); var q = reportDC.ReportDatas.Where(predicate).Where(r => r.ServiceID.Equals(1)).Select(r => r.Report); return q; }
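
    One sketch that mirrors the IN (...) shape of the SQL above, assuming the context also exposes a Reports table, is to apply the ReportData predicate to an inner ID query and then use Contains, which LINQ to SQL translates to an IN/EXISTS subquery:

        public override IQueryable<Report> GetReports(
            Expression<Func<ReportData, bool>> predicate, LLReportsDataContext reportDC)
        {
            if (reportDC == null) throw new ArgumentNullException("reportDC");

            // Inner query: ReportIDs whose data rows satisfy the caller's predicate.
            var matchingIds = reportDC.ReportDatas
                                      .Where(predicate)
                                      .Select(d => d.ReportID);

            // Outer query: reports for the wanted service whose ID is in the inner set
            // (ServiceID == 3 mirrors the SQL example; adjust as needed).
            return reportDC.Reports
                           .Where(r => r.ServiceID == 3 && matchingIds.Contains(r.ReportID));
        }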

    Read the article

  • LINQ - Querying a list filtered via a Many-to-Many relationship

    - by user118190
    Please excuse the context of my question for I did not know how to exactly word it. To not complicate things further, here's my business requirement: "bring me back all the Employees where they belong in Department "X". So when I view this, it will display all of the Employees that belong to this Department. Here's my environment: Silverlight 3 with Entity Framework 1.0 and WCF Data Services 1.0. I am able to load and bind all kinds of lists (simple), no problem. I don't feel that my environment matters and that's why I feel it is a LINQ question more than the technologies. My question is for scenarios where I have 3 tables linked, i.e. entities (collections). For example, I have this in my EDM: Employee--EmployeeProject--Project. Here's the table design from the Database: Employee (table1) ------------- EmployeeID (PK) FirstName other Attributes ... EmployeeProject (table2) ------------- EmployeeProjectID (PK) EmployeeID (FK) ProjectID (FK) AssignedDate other Attributes ... Project (table3) ------------- ProjectID (PK) Name other Attributes ... Here's the EDM design from Entity Framework: ------------------------ Employee (entity1) ------------------------ (Scalar Properties) ------------------- EmployeeID (PK) FirstName other Attributes ... ------------------- (Navigation Properties) ------------------- EmployeeProjects ------------------------ EmployeeProject (entity2) ------------------------ (Scalar Properties) ------------------- EmployeeProjectID (PK) AssignedDate other Attributes ... ------------------- (Navigation Properties) ------------------- Employee Project ------------------------ Project (entity3) ------------------------ (Scalar Properties) ------------------- ProjectID (PK) Name other Attributes ... ------------------- (Navigation Properties) ------------------- EmployeeProjects So far, I have only been able to do: var filteredList = Context.Employees .Where(e => e.EmployeeProjects.Where(ep => ep.Project.Name == "ProjectX")) NOTE: I have updated the syntax of the query after John's post. As you can see, I can only query, the related entity (EmployeeProjects). All I want is being able to filter to Project from the Employee entity. Thanks for any advice.
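
    The inner Where above yields a sequence where a bool is expected, which is why the compiler rejects it. Two hedged sketches follow: the first is the natural LINQ shape (fine against LINQ to Objects or LINQ to Entities directly); the second avoids Any(), which the WCF Data Services 1.0 client cannot translate into a request URL, by starting from the link table (assumed to be exposed as EmployeeProjects) and filtering across the single-valued Project navigation:

        // 1) Natural LINQ shape (server-side LINQ to Entities / LINQ to Objects):
        var employeesOnX = Context.Employees
                                  .Where(e => e.EmployeeProjects
                                               .Any(ep => ep.Project.Name == "ProjectX"));

        // 2) Data-service-friendly shape: filter the link entities by Project/Name
        //    and expand the Employee end ($filter=Project/Name eq 'ProjectX'&$expand=Employee).
        var employeeProjectsOnX = Context.EmployeeProjects
                                         .Expand("Employee")
                                         .Where(ep => ep.Project.Name == "ProjectX");
        // Select ep.Employee from the results once they arrive; in Silverlight the
        // query itself has to be executed asynchronously.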

    Read the article

  • Using LINQ to create a simple login

    - by JDWebs
    I'm an C# ASP.NET beginner, so please excuse anything that's... not quite right! In short, I want to create a really basic login system: one which runs through a database and uses sessions so that only logged in users can access certain pages. I know how to do most of that, but I'm stuck with querying data with LINQ on the login page. On the login page, I have a DropDownList to select a username, a Textbox to type in a password and a button to login (I also have a literal for errors). The DropDownList is databound to a datatable called DT_Test. DT_Test contains three columns: UsernameID (int), Username (nchar(30)) and Password (nchar(30)). UsernameID is the primary key. I want to make the button's click event query data from the database with the DropDownList and Textbox, in order to check if the username and password match. But I don't know how to do this... Current Code (not a lot!): Front End: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Login_Test.aspx.cs" Inherits="Login_Login_Test" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Login Test</title> </head> <body> <form id="LoginTest" runat="server"> <div> <asp:DropDownList ID="DDL_Username" runat="server" Height="20px" DataTextField="txt"> </asp:DropDownList> <br /> <asp:TextBox ID="TB_Password" runat="server" TextMode="Password"></asp:TextBox> <br /> <asp:Button ID="B_Login" runat="server" onclick="B_Login_Click" Text="Login" /> <br /> <asp:Literal ID="LI_Result" runat="server"></asp:Literal> </div> </form> </body> </html> Back End: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; public partial class Login_Login_Test : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { Binder(); } } private void Binder() { using (DataClassesDataContext db = new DataClassesDataContext()) { DDL_Username.DataSource = from x in db.DT_Honeys select new { x.UsernameID, txt = x.Username }; DDL_Username.DataBind(); } } protected void B_Login_Click(object sender, EventArgs e) { if (TB_Password.Text != "") { using (DataClassesDataContext db = new DataClassesDataContext()) { } } } } I have spent hours searching and trying different code, but none of it seems to fit in for this context. Anyway, help and tips appreciated, thank you very much! I am aware of security risks etc. but this is not a live website or anything, it is simply for testing purposes as a beginner. *
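
    For the click handler itself, here is a hedged sketch: look the user up by the selected UsernameID and compare the password. It assumes the DropDownList's DataValueField is set to UsernameID and that the table is exposed on the DataContext as DT_Tests; adjust both names to the real model. Because the columns are nchar(30), the stored value is space-padded, so it is trimmed before comparing:

        protected void B_Login_Click(object sender, EventArgs e)
        {
            if (TB_Password.Text == "")
                return;

            using (DataClassesDataContext db = new DataClassesDataContext())
            {
                int userId = int.Parse(DDL_Username.SelectedValue);

                var user = db.DT_Tests.FirstOrDefault(u => u.UsernameID == userId);

                if (user != null && user.Password.Trim() == TB_Password.Text)
                {
                    Session["UsernameID"] = user.UsernameID;   // simple "logged in" marker
                    Response.Redirect("Default.aspx");         // hypothetical landing page
                }
                else
                {
                    LI_Result.Text = "Invalid username or password.";
                }
            }
        }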

    Read the article

  • Passing a WHERE clause for a Linq-to-Sql query as a parameter

    - by Mantorok
    Hi all This is probably pushing the boundaries of Linq-to-Sql a bit but given how versatile it has been so far I thought I'd ask. I have 3 queries that are selecting identical information and only differ in the where clause, now I know I can pass a delegate in but this only allows me to filter the results already returned, but I want to build up the query via parameter to ensure efficiency. Here is the query: from row in DataContext.PublishedEvents join link in DataContext.PublishedEvent_EventDateTimes on row.guid equals link.container join time in DataContext.EventDateTimes on link.item equals time.guid where row.ApprovalStatus == "Approved" && row.EventType == "Event" && time.StartDate <= DateTime.Now.Date.AddDays(1) && (!time.EndDate.HasValue || time.EndDate.Value >= DateTime.Now.Date.AddDays(1)) orderby time.StartDate select new EventDetails { Title = row.EventName, Description = TrimDescription(row.Description) }; The code I want to apply via a parameter would be: time.StartDate <= DateTime.Now.Date.AddDays(1) && (!time.EndDate.HasValue || time.EndDate.Value >= DateTime.Now.Date.AddDays(1)) Is this possible? I don't think it is but thought I'd check out first. Thanks
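
    It is possible, as long as the filter comes in as an expression tree (Expression<Func<...>>) rather than a compiled delegate; LINQ to SQL can then compose it into the generated SQL instead of filtering rows after they come back. A sketch, with the method name and the EventDateTime entity name being assumptions based on the query above:

        public IQueryable<EventDetails> GetEvents(Expression<Func<EventDateTime, bool>> dateFilter)
        {
            // Apply the caller's filter to the date-time table first; because it is an
            // expression tree, LINQ to SQL folds it into the final SQL statement.
            var times = DataContext.EventDateTimes.Where(dateFilter);

            return from row in DataContext.PublishedEvents
                   join link in DataContext.PublishedEvent_EventDateTimes on row.guid equals link.container
                   join time in times on link.item equals time.guid
                   where row.ApprovalStatus == "Approved" && row.EventType == "Event"
                   orderby time.StartDate
                   select new EventDetails
                   {
                       Title = row.EventName,
                       Description = TrimDescription(row.Description)
                   };
        }

        // Example call, reproducing the original date clause:
        // var tomorrow = DateTime.Now.Date.AddDays(1);
        // var q = GetEvents(t => t.StartDate <= tomorrow
        //                     && (!t.EndDate.HasValue || t.EndDate.Value >= tomorrow));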

    Read the article

  • Reconciling a new BindingList into a master BindingList using LINQ

    - by Neo
    I have a seemingly simple problem whereby I wish to reconcile two lists so that an 'old' master list is updated by a 'new' list containing updated elements. Elements are denoted by a key property. These are my requirements: All elements in either list that have the same key results in an assignment of that element from the 'new' list over the original element in the 'old' list only if any properties have changed. Any elements in the 'new' list that have keys not in the 'old' list will be added to the 'old' list. Any elements in the 'old' list that have keys not in the 'new' list will be removed from the 'old' list. I found an equivalent problem here - http://stackoverflow.com/questions/161432/ - but it hasn't really been answered properly. So, I came up with an algorithm to iterate through the old and new lists and perform the reconciliation as per the above. Before anyone asks why I'm not just replacing the old list object with the new list object in its entirety, it's for presentation purposes - this is a BindingList bound to a grid on a GUI and I need to prevent refresh artifacts such as blinking, scrollbars moving, etc. So the list object must remain the same, only its updated elements changed. Another thing to note is that the objects in the 'new' list, even if the key is the same and all the properties are the same, are completely different instances to the equivalent objects in the 'old' list, so copying references is not an option. Below is what I've come up with so far - it's a generic extension method for a BindingList. I've put comments in to demonstrate what I'm trying to do. public static class BindingListExtension { public static void Reconcile<T>(this BindingList<T> left, BindingList<T> right, string key) { PropertyInfo piKey = typeof(T).GetProperty(key); // Go through each item in the new list in order to find all updated and new elements foreach (T newObj in right) { // First, find an object in the new list that shares its key with an object in the old list T oldObj = left.First(call => piKey.GetValue(call, null).Equals(piKey.GetValue(newObj, null))); if (oldObj != null) { // An object in each list was found with the same key, so now check to see if any properties have changed and // if any have, then assign the object from the new list over the top of the equivalent element in the old list foreach (PropertyInfo pi in typeof(T).GetProperties()) { if (!pi.GetValue(oldObj, null).Equals(pi.GetValue(newObj, null))) { left[left.IndexOf(oldObj)] = newObj; break; } } } else { // The object in the new list is brand new (has a new key), so add it to the old list left.Add(newObj); } } // Now, go through each item in the old list to find all elements with keys no longer in the new list foreach (T oldObj in left) { // Look for an element in the new list with a key matching an element in the old list if (right.First(call => piKey.GetValue(call, null).Equals(piKey.GetValue(oldObj, null))) == null) { // A matching element cannot be found in the new list, so remove the item from the old list left.Remove(oldObj); } } } } It can be called like this: _oldBindingList.Reconcile(newBindingList, "MyKey") However, I'm looking for perhaps a method of doing the same using LINQ type methods such as GroupJoin<, Join<, Select<, SelectMany<, Intersect<, etc. So far, the problem I've had is that each of these LINQ type methods result in brand new intermediary lists (as a return value) and really, I only want to modify the existing list for all the above reasons. 
If anyone can help with this, would be most appreciated. If not, no worries, the above method (as it were) will suffice for now. Thanks, Jason

    Read the article

  • LINQ-to-SQL - Join, Count

    - by ile
    I have following query: var result = ( from role in db.Roles join user in db.Users on role.RoleID equals user.RoleID where user.CreatedByUserID == userID orderby user.FirstName ascending select new UserViewModel { UserID = user.UserID, PhotoID = user.PhotoID.ToString(), FirstName = user.FirstName, LastName = user.LastName, FullName = user.FirstName + " " + user.LastName, Email = user.Email, PhoneNumber = user.Phone, AccessLevel = role.Name }); Now, I need to modify this query... Other table I have is table Deals. I would like to count how many deals user created last month and last year. I tried something like this: var result = ( from role in db.Roles join user in db.Users on role.RoleID equals user.RoleID //join dealsYear in db.Deals on date.Year equals dealsYear.DateCreated.Year join dealsYear in ( from deal in db.Deals group deal by deal.DateCreated into d select new { DateCreated = d.Key, dealsCount = d.Count() } ) on date.Year equals dealsYear.DateCreated.Year into dYear join dealsMonth in ( from deal in db.Deals group deal by deal.DateCreated into d select new { DateCreated = d.Key, dealsCount = d.Count() } ) on date.Month equals dealsMonth.DateCreated.Month into dMonth where user.CreatedByUserID == userID orderby user.FirstName ascending select new UserViewModel { UserID = user.UserID, PhotoID = user.PhotoID.ToString(), FirstName = user.FirstName, LastName = user.LastName, FullName = user.FirstName + " " + user.LastName, Email = user.Email, PhoneNumber = user.Phone, AccessLevel = role.Name, DealsThisYear = dYear, DealsThisMonth = dMonth }); but here is even syntax not correct. Any idea? Btw, is there any good book of LINQ to SQL with examples?
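
    One shape that usually reads (and translates) more simply than joining pre-grouped subqueries is a correlated Count inside the projection; LINQ to SQL turns each one into a scalar COUNT subselect. This is only a sketch: d.CreatedByUserID is a guessed name for the deal-to-user link, the variable date is assumed to be the reference date already in scope, and DealsThisYear/DealsThisMonth are assumed to be int properties on UserViewModel:

        var result = from role in db.Roles
                     join user in db.Users on role.RoleID equals user.RoleID
                     where user.CreatedByUserID == userID
                     orderby user.FirstName ascending
                     select new UserViewModel
                     {
                         UserID = user.UserID,
                         // ...other properties as in the original query...
                         AccessLevel = role.Name,
                         DealsThisYear = db.Deals.Count(d =>
                             d.CreatedByUserID == user.UserID &&
                             d.DateCreated.Year == date.Year),
                         DealsThisMonth = db.Deals.Count(d =>
                             d.CreatedByUserID == user.UserID &&
                             d.DateCreated.Year == date.Year &&
                             d.DateCreated.Month == date.Month)
                     };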

    Read the article

  • Use of where in multiple joins to remove rows - linq

    - by bergin
    hi, I have a table of orders. the status is on the soilorders which is joined to the orders. I only want to return orders where the joined soilorder does not have status "Removed". I had thought that join sso in db.SoilSamplingOrders on ord.order_id equals sso.order_id where sso.status.Equals("Removed")!=true but then no records are returned! thanks for any help (query below) var query = from ord in db.Orders join sso in db.SoilSamplingOrders on ord.order_id equals sso.order_id where sso.status.Equals("Removed")!=true join cust in db.Customers on ord.customer_id equals cust.customer_id select new Listing { assigned_to = sso.assigned_to, company = cust.company, order_id = ord.order_id, order_created = ord.order_created, customer_id = ord.customer_id, order_created_by_employ_id = ord.order_created_by_employ_id, first_farm_on_order = (from f in db.SoilSamplingSubJobs where f.order_id == ord.order_id select new ListingSubJob { first_farm_on_order = f.farm }). AsEnumerable().First().first_farm_on_order, total_fields = (from f in db.SoilSamplingSubJobs where f.order_id == ord.order_id select new { f.sssj_id }).AsEnumerable().Count(), total_area = (float?) (from f in db.SoilSamplingSubJobs where f.order_id == ord.order_id && f.area_ha != null select f.area_ha ).Sum() ?? 0 , total_area_ph_density = (float?)(from f in db.SoilSamplingSubJobs where f.order_id == ord.order_id && f.ph != null select f.ph).Sum() ?? 0, };
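
    A likely culprit is that status allows NULL: in SQL, NULL <> 'Removed' does not evaluate to true, so rows with no status disappear along with the removed ones. A sketch of the join with a null-tolerant condition (everything else unchanged):

        var query = from ord in db.Orders
                    join sso in db.SoilSamplingOrders on ord.order_id equals sso.order_id
                    join cust in db.Customers on ord.customer_id equals cust.customer_id
                    // keep rows whose status is NULL or anything other than "Removed"
                    where sso.status == null || sso.status != "Removed"
                    select new Listing
                    {
                        assigned_to = sso.assigned_to,
                        company = cust.company,
                        order_id = ord.order_id
                        // ...remaining projection as in the original query...
                    };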

    Read the article

  • Retrieve nested list from XDocument with LINQ

    - by twreid
    Ok I want my link query to return a list of users. Below is the XML <section type="Users"> <User type="WorkerProcessUser"> <default property="UserName" value="Main"/> <default property="Password" value=""/> <default property="Description" value=""/> <default property="Group" value=""/> </User> <User type="AnonymousUser"> <default property="UserName" value="Second"/> <default property="Password" value=""/> <default property="Description" value=""/> <default property="Group" value=""/> </User> </section> And my current LINQ Query that doesn't work. doc is an XDocument var users = (from iis in doc.Descendants("section") where iis.Attribute("type").Value == "Users" from user in iis.Elements("User") from prop in user.Descendants("default") select new { Type = user.Attribute("type").Value, UserName = prop.Attribute("UserName").Value }); This does not work can anyone tell me what I need to fix? Here is my second attempt after fixing for the wrong property name. However this one does not seem to enumerate the UserName value for me when I try to use it or at least when I try to write it to the console. Also this returns 8 total results I should only have 2 results as I only have 2 users. (from iis in doc.Descendants("section") where iis.Attribute("type").Value == "Users" from user in iis.Elements("User") from prop in user.Descendants("default") select new { Type = user.Attribute("type").Value, UserName = (from name in prop.Attributes("property") where name.Value == "UserName" select name.NextAttribute.Value).ToString() });
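
    The second attempt produces one result per <default> element (2 users x 4 defaults = 8 rows), and calling ToString() on the inner query returns the query type's name rather than the selected value, which is why nothing useful prints. A sketch that keeps one result per <User> and pulls the UserName value out of the matching <default> child:

        var users = from iis in doc.Descendants("section")
                    where (string)iis.Attribute("type") == "Users"
                    from user in iis.Elements("User")
                    select new
                    {
                        Type = (string)user.Attribute("type"),
                        UserName = user.Elements("default")
                                       .Where(d => (string)d.Attribute("property") == "UserName")
                                       .Select(d => (string)d.Attribute("value"))
                                       .FirstOrDefault()
                    };

        // foreach (var u in users)
        //     Console.WriteLine("{0}: {1}", u.Type, u.UserName);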

    Read the article

  • ASP.NET MVC Query Help

    - by Cameron
    I have the following Index method on my app that shows a bunch of articles: public ActionResult Index(String query) { var ArticleQuery = from m in _db.ArticleSet select m; if (!string.IsNullOrEmpty(query)) { ArticleQuery = ArticleQuery.Where(m => m.headline.Contains(query)); } //ArticleQuery = ArticleQuery.OrderBy(m.posted descending); return View(ArticleQuery.ToList()); } It also doubles as a search mechanism by grabbing a query string if it exists. Problem 1.) The OrderBy does not work; what do I need to change to get it to show the results by posted date in descending order? Problem 2.) I am going to be adding VERY SIMPLE pagination, and therefore only want to show 4 results per page. How would I best go about this? Thanks EDIT: In addition to problem 2, I'm looking for a simple Helper class solution to implement said pagination in my current code. This one looks very good (http://weblogs.asp.net/andrewrea/archive/2008/07/01/asp-net-mvc-quot-pager-quot-html-helper.aspx), but how would I implement it in my application? Thanks.
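
    Here is a sketch of both pieces: a descending sort plus paging with Skip/Take (the nullable page parameter and the fixed page size of 4 are additions for illustration):

        public ActionResult Index(string query, int? page)
        {
            const int pageSize = 4;
            int pageNumber = page ?? 1;

            var articles = from m in _db.ArticleSet
                           select m;

            if (!string.IsNullOrEmpty(query))
                articles = articles.Where(m => m.headline.Contains(query));

            var pageOfArticles = articles
                .OrderByDescending(m => m.posted)        // newest first
                .Skip((pageNumber - 1) * pageSize)       // skip the earlier pages
                .Take(pageSize);                         // 4 results per page

            return View(pageOfArticles.ToList());
        }

    A pager helper such as the one linked above then only needs the current page number and the total count (articles.Count() taken before the Skip/Take) to render its links.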

    Read the article

  • Translating Where() to sql

    - by MBoros
    Hi. I saw DamienG's article (http://damieng.com/blog/2009/06/24/client-side-properties-and-any-remote-linq-provider) on how to map client properties to SQL. I ran through this article, and I saw great potential in it. Definitely mapping client properties to SQL is an awesome idea. But I wanted to use this for something a bit more complicated than just concatenating strings. At the moment we are trying to introduce multilinguality to our business objects, and I hoped we could leave all the existing linq2sql queries intact, and just change the code of the multilingual properties, so they would actually return the given property in the CurrentUICulture. The first idea was to change these fields to XMLs, and then try the Object.Property.Elements().Where(...), but it got stuck on the Elements(), as it couldn't translate it to SQL. I read somewhere that XML fields are actually regarded as strings, and only on the app server do they become XElements, so this way the filtering would be on the app server anyway, not the DB. Fair point, it won't work like this. Let's try something else... So the second idea was to create a PolyGlots table (name taken from http://weblogic.sys-con.com/node/102698?page=0,1), a PolyGlotTranslations table and a Culture table, where the PolyGlots would be referenced from each internationalized property. This way I wanted to say for example: private static readonly CompiledExpression<Announcement, string> nameExpression = DefaultTranslationOf<Announcement> .Property(e => e.Name) .Is(e=> e.NamePolyGlot.PolyGlotTranslations .Where(t=> t.Culture.Code == Thread.CurrentThread.CurrentUICulture.Name) .Single().Value ); Now, unfortunately, here I get an error that the Where() function cannot be translated to SQL, which is a bit disappointing, as I was sure it would go through. I guess it is failing because the IEntitySet is basically an IEnumerable, not IQueryable, am I right? Is there another way to use the CompiledExpression class to achieve this goal? Any help appreciated.

    Read the article

  • Grouping a list of of entities which has a sublist in c#

    - by mikei
    I have a set of Entities which basically has this structure. {Stats Name="<Product name> (en)" TotalResources="10" ..} {DayStats Date="2009-12-10" TotalResources="5"} {DayStats Date="2009-12-11" TotalResources="5"} {Stats} {Stats Name="<Product name> (us)" TotalResources="10" ..} {DayStats Date="2009-12-10" TotalResources="5"} {DayStats Date="2009-12-11" TotalResources="5"} {Stats} ... What I want to extract from this set is a new set of entities (or entity in the example above) where the first level has been grouped by the name (ignoring the country label) and where all/some of the properties have been summed together, including the sublist of {DayStats} on a per-day basis. So the result set of the example would look something like this: {Stats Name="<Product name>" TotalResources="20" ..} {DayStats Date="2009-12-10" TotalResources="10"} {DayStats Date="2009-12-11" TotalResources="10"} {Stats} ... So my question is: Is it possible to do this in a more elegant way in LINQ rather than the vanilla "loop-through-each-entity-and-compare" way? And I want the set to contain the same type of entities as the original set ({Stats}, {DayStats}); your answer doesn't need to include code for that, I can probably work that out by myself. Just letting you know in case you need to take that into account. (I'm sorry if this question has been discussed before, I tried searching to no avail. I guess my searching skills are fuubar ;)) Merry Christmas =)
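
    Here is a hedged sketch of the grouping, assuming Stats exposes Name, TotalResources and a DayStats list with settable properties, and that the country label is a trailing "(en)"/"(us)" suffix on the name (the Regex strip below is purely an assumption about that format):

        using System.Linq;
        using System.Text.RegularExpressions;

        var merged = stats
            .GroupBy(s => Regex.Replace(s.Name, @"\s*\(\w+\)$", ""))   // "Product (en)" -> "Product"
            .Select(g => new Stats
            {
                Name = g.Key,
                TotalResources = g.Sum(s => s.TotalResources),
                DayStats = g.SelectMany(s => s.DayStats)
                            .GroupBy(d => d.Date)
                            .Select(dg => new DayStats
                            {
                                Date = dg.Key,
                                TotalResources = dg.Sum(d => d.TotalResources)
                            })
                            .ToList()
            })
            .ToList();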

    Read the article

  • LinqToSql Select to a class then do more queries

    - by fyjham
    I have a LINQ query running with multiple joins and I want to pass it around as an IQueryable<T> and apply additional filters in other methods. The problem is that I can't work out how to pass around a var data type and keep it strongly typed, and if I try to put it in my own class (EG: .Select((a,b) => new MyClass(a,b))) I get errors when I try to add later Where clauses because my class has no translations into SQL. Is there any way I can do one of the following: Make my class map to SQL? Make the var data type implement an interface (so I can pass it around as though it's that)? Something I haven't thought of that'll solve my issue? Example: public void Main() { using (DBDataContext context = new DBDataContext()) { var result = context.TableAs.Join( context.TableBs, a => a.BID, b => b.ID, (a,b) => new {A = a, B = b} ); result = addNeedValue(result, 4); } } private ???? addNeedValue(???? result, int value) { return result.Where(r => r.A.Value == value); } PS: I know in my example I can flatten out the function easily, but in the real thing it'd be an absolute mess if I tried.
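
    One hedged workaround (the class and method names below are made up for illustration): project into a small named class using object-initializer syntax, which LINQ to SQL can translate just like the anonymous type, and keep passing IQueryable<JoinedRow> around so later Where clauses still compose into SQL:

        public class JoinedRow
        {
            public TableA A { get; set; }
            public TableB B { get; set; }
        }

        private IQueryable<JoinedRow> GetJoined(DBDataContext context)
        {
            // Member-initializer projection (no constructor arguments), so the
            // provider can still reason about the A/B bindings.
            return context.TableAs.Join(
                context.TableBs,
                a => a.BID,
                b => b.ID,
                (a, b) => new JoinedRow { A = a, B = b });
        }

        private IQueryable<JoinedRow> AddNeedValue(IQueryable<JoinedRow> result, int value)
        {
            return result.Where(r => r.A.Value == value);
        }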

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrated on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a wide-spread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbs. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a of number network packets (according to the packet size settings) – and will travel over the network. 
On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, allegorizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have converted to packages (depending on the network size package settings) and will have to reach the application server. There will also be waits, however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is captures the physical flow characteristics of the query between the client(Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips–a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by ‘GO’ command, then there will be three different roundtrips. TDS Packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server –is the TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of the data set to our SQL Server, measured in bytes; i.e. how big of a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a name of procedure + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client. 
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time) Here is an illustration of the Client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully of the resource utilization in the client-server environment. From experience I can say that so far I have seen only cases when the application developers and the Database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow-developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example J ) and get into a conversation about the network settings. Have them explain to you how the network cards are setup – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
As a further reading I would still highly recommend the Wait Stats series on this blog, also I would recommend you have the coffee break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the post in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database: SELECT t.TABLE_NAME, ISNULL(pk_ccu.COLUMN_NAME,'') PK, ISNULL(fk_ccu.COLUMN_NAME,'') FK FROM INFORMATION_SCHEMA.TABLES t LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc ON pk_tc.TABLE_NAME = t.TABLE_NAME AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY' LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc ON fk_tc.TABLE_NAME = t.TABLE_NAME AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY' LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME Usually this runs in a couple seconds, but on one server running SQL Server 2000, it is taking over four minutes to run. I ran it with the execution plan enabled, and the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/ I then updated the statistics on all of the tables that were mentioned in the execution plan: update statistics sysobjects update statistics syscolumns update statistics systypes update statistics master..spt_values update statistics sysreferences But that didn't help. The index tuning wizard doesn't help either, because it doesn't let me select system tables. There is nothing else running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?

    Read the article

  • Split List into Sublists with LINQ

    - by Felipe Lima
    Hi all, I believe this is another easy one for you LINQ masters out there. Is there any way I can separate a List into several separate lists of SomeObject, using the item index as the delimiter of each split? Let me exemplify: I have a List<SomeObject> and I need a List<List<SomeObject>> or List<SomeObject>[], so that each of these resulting lists will contain a group of 3 items of the original list (sequentially). eg.: Original List: [a, g, e, w, p, s, q, f, x, y, i, m, c] Resulting lists: [a, g, e], [w, p, s], [q, f, x], [y, i, m], [c] I'd also need the resulting lists' size to be a parameter of this function. Is it possible?? Thanks!
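
    A common LINQ-only approach (a sketch; the extension-method name is made up) is to tag each element with its index and group by index divided by the chunk size:

        using System.Collections.Generic;
        using System.Linq;

        public static class SplitExtensions
        {
            // Splits a sequence into consecutive chunks of at most 'size' elements.
            public static List<List<T>> Split<T>(this IEnumerable<T> source, int size)
            {
                return source
                    .Select((item, index) => new { item, index })
                    .GroupBy(x => x.index / size)               // 0,1,2 -> group 0; 3,4,5 -> group 1; ...
                    .Select(g => g.Select(x => x.item).ToList())
                    .ToList();
            }
        }

        // var parts = originalList.Split(3);   // [a,g,e] [w,p,s] [q,f,x] [y,i,m] [c]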

    Read the article

  • Linq to sql Invalid column name

    - by Ivo
    I am using VS 2010 and Linq-to-sql. I have deleted a couple of columns from my table, but whatever I try I keep getting an "Invalid column name" error with the old/deleted column when I try to insert a record. What I tried: removed the table, saved, and dragged it back on; created a new dbml with another name; created a new project and created a new dbml there. Also, the name of the deleted column does not exist in the generated code of the dbml anymore, but still I get an exception saying that column does not exist. Anyone got a suggestion how to solve this problem?

    Read the article

  • Linq: Converting flat structure to hierarchical

    - by Stefan Steinegger
    What is the easiest and somewhat efficient way to convert a flat structure: object[][] rawData = new object[][] { { "A1", "B1", "C1" }, { "A1", "B1", "C2" }, { "A2", "B2", "C3" }, { "A2", "B2", "C4" } // .. more }; into a hierarchical structure: class X { public X () { Cs = new List<string>(); } public string A { get; set; } public string B { get; set; } public List<string> Cs { get; private set; } } the result should look like this // pseudo code which describes structure: result = { new X() { A = "A1", B = "B1", Cs = { "C1", "C2" } }, new X() { A = "A2", B = "B2", Cs = { "C3", "C4" } } } Preferably using Linq extension methods. Target class X could be changed (eg. a public setter for the List), only if not possible / useful as it is now.
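
    Here is a sketch using GroupBy over the first two columns; because Cs only has a private setter, the grouped C values are added after construction (the string casts assume every cell really is a string, as in the sample data):

        using System.Collections.Generic;
        using System.Linq;

        List<X> result = rawData
            .GroupBy(row => new { A = (string)row[0], B = (string)row[1] })
            .Select(g =>
            {
                var x = new X { A = g.Key.A, B = g.Key.B };
                x.Cs.AddRange(g.Select(row => (string)row[2]));   // collect the C column per group
                return x;
            })
            .ToList();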

    Read the article

  • Use LINQ to query nested OData collection

    - by Kyle Russell
    I'm playing around with the new Netflix OData feed (http://odata.netflix.com/Catalog/) and having some issues. I'm trying to learn LINQ at the same time but having difficulty doing what I thought was going to be quite simple. I'd like to return a list of Titles that match a given Genre. The Titles object contains a collection of Genres. I'm not sure how to write this query. My attempt below does not appear to work using LINQPad. from t in Titles where t.Genres.Name.Contains("ABC") select t

    Read the article

  • creating Linq to sqlite dbml from DbLinq source code

    - by Veer
    Hi All, I tried to create a linq-to-sqlite dbml using DbLinq, but in vain. Each time I get different types of errors. Maybe I'm doing something wrong. Can anyone tell me the step-by-step procedure to create the dbml file from the DbLinq source code? Edit: Steps I followed: I downloaded the source file from this link. Edited the "run_sqliteMetal.bat" file in the \\DbLinq-0.19\src\DbMetal folder as 'DbMetal.exe -database:myDb.db3 -namespace:myNS -code:myCode.cs -dbml:myDbml.dbml'. Tried to run the DbMetal project file to produce the executable, but there was a runtime error since the Options object in Parameters.cs was null. Hence I downloaded the readymade exe file from this location, copied it into the \DbLinq-0.19\src\DbMetal folder and executed the "run_sqliteMetal.bat" file. I got a blank screen and a Windows error message "DbLinq has stopped working". Any help? Thanks in advance, Veer

    Read the article
