Search Results

Search found 6205 results on 249 pages for 'linq to nhibernate'.


  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete control like the one you find in Visual Studio. It's all fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement to do the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is also virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the RichTextBox, and I execute the filtering + binding on every key up, there's this semi-race condition, or out-of-sync filtering: the first keystroke's filtering or binding work could still be running while the fourth keystroke is doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of strings could be organised in some kind of index that could speed up searching. Any suggestions or code samples are much welcomed. Thanks.
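
    A minimal sketch (not from the original post) of one way to keep fast typing from racing the filter: cancel the previous keystroke's work and only bind the newest result. Names such as listCollection and listBox follow the snippet above and are assumptions; this uses System.Threading.Tasks, available from .NET 4.

        private CancellationTokenSource _filterCts;

        private void OnKeyUp(string currentWord)
        {
            if (_filterCts != null) _filterCts.Cancel();   // abandon the previous keystroke's work
            _filterCts = new CancellationTokenSource();
            var token = _filterCts.Token;

            Task.Factory.StartNew(() =>
                listCollection
                    .Where(s => s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0)
                    .OrderBy(s => s.FilterValue)
                    .ToList(), token)
            .ContinueWith(t =>
            {
                if (!token.IsCancellationRequested)
                    listBox.ItemsSource = t.Result;        // bind only the latest result on the UI thread
            }, TaskScheduler.FromCurrentSynchronizationContext());
        }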

    Read the article

  • How to deal with a flaw in System.Data.DataTableExtensions.CopyToDataTable()

    - by andy
    Hey guys, I've come across something which is perhaps a flaw in the extension method .CopyToDataTable. This method becomes available by importing (in VB.NET) System.Data.DataTableExtensions and is then called on an IEnumerable(Of DataRow). You would do this if you want to filter a DataTable using LINQ and then get a DataTable back at the end, i.e.:

        Imports System.Data.DataRowExtensions
        Imports System.Data.DataTableExtensions

        Public Class SomeClass
            Private Shared Function GetData() As DataTable
                Dim Data As DataTable
                Data = LegacyADO.NETDBCall
                Data = Data.AsEnumerable.Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5).CopyToDataTable()
                Return Data
            End Function
        End Class

    In the example above, the "Where" filtering might return no results. If this happens, CopyToDataTable throws an exception because there are no DataRows. Why? The correct behaviour should be to return a DataTable with Rows.Count = 0. Can anyone think of a clean workaround to this, in such a way that whoever calls CopyToDataTable doesn't have to be aware of this issue? System.Data.DataTableExtensions is a static class, so I can't override the behaviour... any ideas? Have I missed something? Cheers.

    UPDATE: I have submitted this as an issue to Connect. I would still like some suggestions, but if you agree with me, you could vote up the issue at Connect via the link above. Cheers.
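
    One possible workaround (a sketch, shown in C# for brevity, not from the original post): a wrapper extension that falls back to an empty clone of the source table's schema when the filtered sequence is empty, so callers never see the exception.

        public static class SafeDataTableExtensions
        {
            // Returns schemaSource.Clone() (same columns, zero rows) instead of throwing
            // when the filtered sequence contains no rows. Note the sequence is
            // enumerated twice (Any + CopyToDataTable), which is fine for in-memory rows.
            public static DataTable CopyToDataTableSafe(this IEnumerable<DataRow> rows, DataTable schemaSource)
            {
                return rows.Any() ? rows.CopyToDataTable() : schemaSource.Clone();
            }
        }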

    Read the article

  • Entity Framework + MySQL - Why is the performance so terrible?

    - by Cyril Gupta
    When I decided to use an OR/M (Entity Framework for MySQL this time) for my new project I was hoping it would save me time, but I seem to have failed at that (for the second time now). Take this simple SQL query:

        SELECT * FROM POST ORDER BY addedOn DESC LIMIT 0, 50

    It executes and gives me results in less than a second, as it should (the table has about 60,000 rows). Here's the equivalent LINQ to Entities query that I wrote for it:

        var q = (from p in db.post
                 orderby p.addedOn descending
                 select p).Take(50);
        var q1 = q.ToList(); // this is where the query is fetched, and where it times out

    But this query never even completes; it ALWAYS times out (without the orderby it takes 5 seconds to run)! My timeout is set to 12 seconds, so you can imagine it is taking much more than that. Why is this happening? Is there a way I can see the actual SQL query that Entity Framework is sending to the db? Should I give up on EF+MySQL and move back to plain SQL before I lose all eternity trying to make it work? I've recalibrated my indexes and tried eager loading (which actually makes it fail even without the orderby clause). Please help; I am about to give up on OR/M for MySQL as a lost cause.
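
    On the "see the actual SQL" part: with the ObjectQuery-based Entity Framework (EF 1/EF 4, which this appears to be), the generated statement can usually be inspected by casting the LINQ query, as in this sketch (q is the query from the snippet above):

        var objectQuery = q as System.Data.Objects.ObjectQuery;
        if (objectQuery != null)
        {
            string generatedSql = objectQuery.ToTraceString(); // the SQL the provider will send
            Console.WriteLine(generatedSql);
        }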

    Read the article

  • Using linq2xml to query a single item deep within

    - by BrettRobi
    I'm wondering if there is a robust and graceful way, using linq2xml, to query an item deep within an XML hierarchy. For example:

        <?xml version="1.0" encoding="utf-8" ?>
        <data>
          <core id="01234">
            <field1>some data</field1>
            <field2>more data</field2>
            <metadata>
              <response>
                <status code="0">Success</status>
                <content>
                  <job>f1b5c3f8-e6b1-4ae4-905a-a7c5de3f13c6</job>
                  <id>id of the content</id>
                </content>
              </response>
            </metadata>
          </core>
        </data>

    With this xml, how do I query the value of <id> without using something like this:

        var doc = XElement.Load("file.xml");
        string id = (string)doc.Element("core")
                               .Element("metadata")
                               .Element("response")
                               .Element("content")
                               .Element("id");

    I don't like the above approach because it is error prone (it throws an exception if any tag is missing in the hierarchy) and, honestly, ugly. Is there a more robust and graceful approach, perhaps using the SQL-like syntax of LINQ?
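
    A sketch of one null-tolerant alternative (not from the original post): Descendants and Elements return empty sequences rather than null when a tag is missing, and casting a null XElement to string yields null instead of throwing.

        var doc = XElement.Load("file.xml");
        string id = (string)doc.Descendants("content").Elements("id").FirstOrDefault();

        // or, query-syntax flavour:
        string id2 = (from content in doc.Descendants("content")
                      from idElement in content.Elements("id")
                      select (string)idElement).FirstOrDefault();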

    Read the article

  • WPF DataGrid populating blank rows on TreeView's SelectedItemChanged

    - by jes9582
    When my DataGrid populates on the TreeView's SelectedItemChanged event, it finds the objects and creates the rows accordingly, but the rows populate with no text, i.e. they are just blank. So I know it is finding my objects, but it is not displaying them properly. Does anyone see where I made an error, or can you suggest any changes or fixes? Thank you in advance! Here is the C# code that sets the DataGrid's ItemsSource (I am using .dbml and LINQ with lambda expressions):

        dgSystemSettings.ItemsSource = (tvSystemConfiguration.SelectedItem as SYSTEM_SETTINGS_GROUP)
            .SYSTEM_SETTINGS_NAMEs
            .Join(ssdc.SYSTEM_SETTINGS_VALUEs,
                  x => x.SSN_ID,
                  y => y.SSV_SSN_ID,
                  (x, y) => new { SYSTEM_SETTINGS_NAME = x, SYSTEM_SETTINGS_VALUE = y });

    And here is the .xaml:

        <DataGrid Name="dgSystemSettings" AutoGenerateColumns="False" Height="447" Width="513"
                  DockPanel.Dock="Right" ItemsSource="{Binding}" VerticalAlignment="Top" Margin="10,10,0,0">
            <DataGrid.Columns>
                <DataGridTextColumn x:Name="colDisplayName" Header="Name" Binding="{Binding SSN_DISPLAY_NAME}"></DataGridTextColumn>
                <DataGridTextColumn x:Name="colValue" Header="Value" Binding="{Binding SSV_VALUE}"></DataGridTextColumn>
            </DataGrid.Columns>
        </DataGrid>

    Read the article

  • ASP.Net MVC - how can I easily serialize query results to a database?

    - by Mortanis
    I've been working on a little property search engine while I learn ASP.Net MVC. I've gotten the results from various property database tables and sorted them into a master generic property response. The search form is passed via Model Binding and works great. Now, I'd like to add pagination. I'm returning the chunk of properties for the current page with .Skip() and .Take(), and that's working great. I have a SearchResults model that has the paged result set and various other data like nextPage and prevPage. Except, I no longer have the original form of course to pass to /Results/2. Previously I'd have just hidden a copy of the form and done a POST each time, but it seems inelegant. I'd like to serialize the results to my MS SQL database and return a unique key for that results set - this also helps with a "Send this query to a friend!" link. Killing two birds with one stone. Is there an easy way to take an IQueryable result set that I have, serialize it, stick it into the DB, return a unique key and then reverse the process with said key? I'm using Linq to SQL currently on a MS SQL Express install, though in production it'll be on MS SQL 2008.
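
    One caveat worth noting: an IQueryable is a query definition rather than data, so what usually gets persisted is either the materialized results or, more commonly, the search criteria themselves. A sketch of the criteria approach (not from the original post; SearchCriteria stands in for the bound form model, SavedSearch for a hypothetical LINQ to SQL table with Id and CriteriaXml columns, and db for the data context):

        public Guid SaveSearch(SearchCriteria criteria)
        {
            var serializer = new XmlSerializer(typeof(SearchCriteria));
            var sb = new StringBuilder();
            using (var writer = new StringWriter(sb))
                serializer.Serialize(writer, criteria);   // snapshot of the form values

            var saved = new SavedSearch { Id = Guid.NewGuid(), CriteriaXml = sb.ToString() };
            db.SavedSearches.InsertOnSubmit(saved);
            db.SubmitChanges();
            return saved.Id;   // key usable for /Results/2 or a "send this query to a friend" link
        }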

    Read the article

  • Which isolation level should I use for the following insert-if-not-present transaction?

    - by Steve Guidi
    I've written a linq-to-sql program that essentially performs an ETL task, and I've noticed many places where parallelization would improve its performance. However, I'm concerned about preventing uniqueness constraint violations when two threads perform the following task (pseudo code):

        Record CreateRecord(string recordText)
        {
            using (MyDataContext database = GetDatabase())
            {
                Record existingRecord = database.MyTable.FirstOrDefault(record.KeyPredicate());
                if (existingRecord == null)
                {
                    existingRecord = CreateRecord(recordText);
                    database.MyTable.InsertOnSubmit(existingRecord);
                }
                database.SubmitChanges();
                return existingRecord;
            }
        }

    In general, this code executes a SELECT statement to test for record existence, followed by an INSERT statement if the record doesn't exist. It is encapsulated by an implicit transaction. When two threads run this code for the same instance of recordText, I want to prevent them from simultaneously determining that the record doesn't exist, and thereby both attempting to create the same record. An isolation level and an explicit transaction will work well, except I'm not certain which isolation level I should use -- Serializable should work, but seems too strict. Is there a better choice?
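
    A sketch of the optimistic alternative (not from the original post): insert first and let the unique constraint arbitrate, catching SQL Server's duplicate-key error numbers (2627/2601) and re-reading the row the other thread won with. BuildRecord and the Key property are hypothetical stand-ins for the pseudo code's record construction and key predicate; SqlException comes from System.Data.SqlClient.

        Record CreateOrGetRecord(string recordText)
        {
            using (MyDataContext database = GetDatabase())
            {
                Record record = BuildRecord(recordText);   // hypothetical factory
                database.MyTable.InsertOnSubmit(record);
                try
                {
                    database.SubmitChanges();
                    return record;
                }
                catch (SqlException ex)
                {
                    if (ex.Number != 2627 && ex.Number != 2601)
                        throw;                              // not a duplicate-key violation
                    // another thread inserted the row first; read it back
                    return database.MyTable.First(r => r.Key == record.Key);
                }
            }
        }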

    Read the article

  • Select rows from table1 and all the children from table2 into an object

    - by Patrick
    I want to pull data from table "Province_Notifiers" and also fetch all corresponding items from table "Province_Notifier_Datas". The table "Province_Notifier" has a guid to identify it (PK); table "Province_Notifier_Datas" has a column called BelongsToProvinceID which is a foreign key to the "Province_Notifier" table's guid. I tried something like this:

        var records = from data in ctx.Province_Notifiers
                      where DateTime.Now >= data.SendTime && data.Sent == false
                      join data2 in ctx.Province_Notifier_Datas on data.Province_ID equals data2.BelongsToProvince_ID
                      select new Province_Notifier
                      {
                          Email = data.Email,
                          Province_ID = data.Province_ID,
                          ProvinceName = data.ProvinceName,
                          Sent = data.Sent,
                          UserName = data.UserName,
                          User_ID = data.User_ID,
                          Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))
                      };

    This line is not working, and I am trying to figure out how to pull the data from table2 into that Province_Notifier_Datas variable:

        Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))

    I can add a record easily by adding the second table row into Province_Notifier_Datas, but I can't fetch it back:

        Province_Notifier dbNotifier = new Province_Notifier();
        // set some values for dbNotifier
        dbNotifier.Province_Notifier_Datas.Add(
            new Province_Notifier_Data
            {
                BelongsToProvince_ID = userInput.Value.ProvinceId,
                EventText = GenerateNotificationDetail(notifierDetail)
            });

    This works and inserts the data correctly into both tables.

    Edit: These error messages are thrown:

        Cannot convert from 'Province_Notifier_Data' to 'System.Collections.Generic.IEnumerable'

        The best overloaded method match for 'System.Collections.Generic.List.AddRange(System.Collections.Generic.IEnumerable)' has some invalid arguments

    If I look in Visual Studio, the variable "Province_Notifier_Datas" is of type System.Data.Linq.EntitySet.

    Edit:

        var records = from data in ctx.Province_Notifiers
                      where DateTime.Now >= data.SendTime && data.Sent == false
                      join data2 in ctx.Province_Notifier_Datas on data.Province_ID equals data2.BelongsToProvince_ID into data2list
                      select new Province_Notifier
                      {
                          Email = data.Email,
                          Province_ID = data.Province_ID,
                          ProvinceName = data.ProvinceName,
                          Sent = data.Sent,
                          UserName = data.UserName,
                          User_ID = data.User_ID,
                          Province_Notifier_Datas = new EntitySet<Province_Notifier_Data>().AddRange(data2List)
                      };

        Error 3  The name 'data2List' does not exist in the current context.

    Read the article

  • How to check if an entityset is populated

    - by TheQ
    How can I check whether an entityset of a linq-object is populated or not? Example code below. My model has two methods; one joins in the child data, and the other does not:

        public static Member GetMemberWithSettings(Guid memberId)
        {
            using (DataContext db = new DataContext())
            {
                DataLoadOptions dataLoadOptions = new DataLoadOptions();
                dataLoadOptions.LoadWith<Member>(x => x.Settings);
                db.LoadOptions = dataLoadOptions;

                var query = from x in db.Members
                            where x.MemberId == memberId
                            select x;
                return query.FirstOrDefault();
            }
        }

        public static Member GetMember(Guid memberId)
        {
            using (DataContext db = new DataContext())
            {
                var query = from x in db.Members
                            where x.MemberId == memberId
                            select x;
                return query.FirstOrDefault();
            }
        }

    Then my control has the following code:

        Member member1 = Member.GetMemberWithSettings(memberId);
        Member member2 = Member.GetMember(memberId);
        Debug.WriteLine(member1.Settings.Count);
        Debug.WriteLine(member2.Settings.Count);

    The last line will generate a "Cannot access a disposed object" exception. I know that I can get rid of that exception just by not disposing the datacontext, but then the last line will generate a new query to the database, and I don't want that. What I would like is something like:

        Debug.WriteLine((member1.Settings.IsPopulated()) ? member1.Settings.Count : -1);
        Debug.WriteLine((member2.Settings.IsPopulated()) ? member2.Settings.Count : -1);

    Is it possible?
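
    For LINQ to SQL specifically, EntitySet<T> exposes a HasLoadedOrAssignedValues property that reports whether the collection has been loaded or assigned without triggering a new query, so the desired lines could likely be written along these lines (a sketch, not from the original post):

        Debug.WriteLine(member1.Settings.HasLoadedOrAssignedValues ? member1.Settings.Count : -1);
        Debug.WriteLine(member2.Settings.HasLoadedOrAssignedValues ? member2.Settings.Count : -1);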

    Read the article

  • How do you bind SQL Data to a .NET DataGridView?

    - by Jordan S
    I am trying to bind a table in a SQL database to a DataGridView control. I would like to make it so that when the user enters a new line of data in the DataGridView, a record is automatically added to the database. Is there a way to do this using LINQ to SQL? I have tried using the code below, but after I add a new entry I don't think the data gets added to the DB. Please help!

        BOMClassesDataContext DB = new BOMClassesDataContext();
        var mfrs = from m in DB.Manufacturers
                   select m;
        BindingSource bs = new BindingSource();
        bs.DataSource = mfrs;
        dataGridView1.DataSource = bs;

    I tried adding DB.SubmitChanges() to the CellValueChanged event handler, and that partially works. If I click the bottom empty row, it automatically fills in the ID (identity) column of the table with a "0" instead of the next unused value. If I change that value manually to the next available one, then it adds the new record fine, but if I leave it at 0 it does nothing. How can I fix this?

    Read the article

  • Filtering two arrays to avoid Inf/NaN values

    - by Gacek
    I have two arrays of doubles of the same size, containing X and Y values for some plots. I need to create some kind of protection against Inf/NaN values: I need to find all pairs of values (X, Y) for which both X and Y are neither Inf nor NaN. If I have one array, I can do it using lambdas:

        var filteredValues = someValues.Where(d => !(double.IsNaN(d) || double.IsInfinity(d))).ToList();

    For two arrays, I currently use the following loop:

        List<double> filteredX = new List<double>();
        List<double> filteredY = new List<double>();
        for (int i = 0; i < XValues.Count; i++)
        {
            if (!double.IsNaN(XValues[i]) && !double.IsInfinity(XValues[i]) &&
                !double.IsNaN(YValues[i]) && !double.IsInfinity(YValues[i]))
            {
                filteredX.Add(XValues[i]);
                filteredY.Add(YValues[i]);
            }
        }

    Is there any way of filtering two arrays at the same time using LINQ/lambdas, as was done for the single array?
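
    A sketch of one LINQ option (assuming .NET 4, where Enumerable.Zip is available): pair the two sequences first, filter the pairs, then split them back out.

        var pairs = XValues.Zip(YValues, (x, y) => new { X = x, Y = y })
                           .Where(p => !double.IsNaN(p.X) && !double.IsInfinity(p.X) &&
                                       !double.IsNaN(p.Y) && !double.IsInfinity(p.Y))
                           .ToList();

        List<double> filteredX = pairs.Select(p => p.X).ToList();
        List<double> filteredY = pairs.Select(p => p.Y).ToList();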

    Read the article

  • Why do I get this exception? {An item with the same key has already been added."})

    - by Alan
    Aknittel, NewSellerID is the result of a lookup on tblSellers. These tables (tblSellerListings and tblSellers) are not "officially" joined with a foreign key relationship, either in the model or in the database, but I want some referential integrity maintained for the future. So my issue remains: why do I get the exception ({"An item with the same key has already been added."}) with this code, unless I begin each iteration of the foreach loop with a new ObjectContext and end it with SaveChanges, which I think will hurt performance? Also, could you tell me why ORCSolutionsDataService.tblSellerListings (an ADO.NET Data Services/WCF object) is not IDisposable, like LINQ to Entities?

        // Add listings to previous seller
        int NewSellerID = 0;

        // Look up existing Seller key using SellerUniqueEBAYID
        var qryCurrentSeller = from s in service.tblSellers
                               where s.SellerEBAYUserID == SellerUserID
                               select s;
        foreach (var s in qryCurrentSeller)
            NewSellerID = s.SellerID;

        // Save the selected listings for this seller
        foreach (DataGridViewRow dgr in dgvRows)
        {
            ORCSolutionsDataService.tblSellerListings NewSellerListing = new ORCSolutionsDataService.tblSellerListings();
            NewSellerListing.ItemID = dgr.Cells["txtSellerItemID"].Value.ToString();
            NewSellerListing.Title = dgr.Cells["txtSellerItemTitle"].Value.ToString();
            NewSellerListing.CurrentPrice = Convert.ToDecimal(dgr.Cells["txtSellerItemPrice"].Value);
            NewSellerListing.QuantitySold = Convert.ToInt32(dgr.Cells["txtSellerItemSold"].Value);
            NewSellerListing.EndTime = Convert.ToDateTime(dgr.Cells["txtSellerItemEnds"].Value);
            NewSellerListing.CategoryName = dgr.Cells["txtSellerItemCategory"].Value.ToString();
            NewSellerListing.ExtendedPrice = Convert.ToDecimal(dgr.Cells["txtExtendedReceipts"].Value);
            NewSellerListing.RetrievedDtime = Convert.ToDateTime(dtSellerDataRetrieved.ToString());
            NewSellerListing.SellerID = NewSellerID;
            service.AddTotblSellerListings(NewSellerListing);
        }
        service.SaveChanges();
        }
        catch (Exception ex)
        {
            MessageBox.Show("Unable to add a new case. Exception: " + ex.Message);
        }

    Read the article

  • Database/Object Mapping

    - by Eric
    Hello everyone. This is a beginner question, but it's been frustrating me... I am using C#, by the way. I'd like to make a few classes, each with their own properties and methods. I would also like to have a database to store certain instances of these classes in case I ever need to look at them again. So, for example:

        class Polygon
        {
            String name;
            Double perimeter;
            int numSides;

            public Double GetArea()
            {
                // ...
            }
        }

        class Circle
        {
            String name;
            Double radius;

            public void PrintName()
            {
                // ...
            }
        }

    Say I've got these classes. I also want a database that has the TABLES "Polygon" and "Circle" with the COLUMNS "name", "perimeter", "radius", etc. And I want an easy way to save a class instance into the database, or pull a class instance out of the database. I have previously been using MS Access for my database work, which I don't mind using, but I would prefer that nothing other than .NET need be installed. I've been researching online a bit, but I wanted to get some opinions on here. I have looked at LINQ to SQL, but it seems you need SQL Server. Is this true? If so, I'd really rather not use it, because I don't want to have to have it installed everywhere. Anyway, I'm just fishing for some ideas/insights/suggestions/etc., so please help me out if you can. Thanks.

    Read the article

  • C# casting question: from IEnumerable to custom type

    - by Sarah Vessels
    I have a custom class called Rows that implements IEnumerable<Row>. I often use LINQ queries on Rows instances:

        Rows rows = new Rows { row1, row2, row3 };
        IEnumerable<Row> particularRows = rows.Where<Row>(row => condition);

    What I would like is to be able to do the following:

        Rows rows = new Rows { row1, row2, row3 };
        Rows particularRows = (Rows)rows.Where<Row>(row => condition);

    However, I get a "System.InvalidCastException: Unable to cast object of type 'WhereEnumerableIterator`1[NS.Row]' to type 'NS.Rows'". I do have a Rows constructor taking IEnumerable<Row>, so I could do:

        Rows rows = new Rows { row1, row2, row3 };
        Rows particularRows = new Rows(rows.Where<Row>(row => condition));

    This seems bulky, however, and I would love to be able to cast an IEnumerable<Row> to a Rows, since Rows implements IEnumerable<Row>. Any ideas?
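
    The cast fails because the Where iterator is its own type that merely implements IEnumerable<Row>; it is not a Rows instance. A common workaround (a sketch, not from the original post) is to hide the constructor call behind an extension method so call sites stay terse:

        public static class RowsExtensions
        {
            // Wraps any sequence of Row in the custom Rows collection.
            public static Rows ToRows(this IEnumerable<Row> source)
            {
                return new Rows(source);
            }
        }

        // usage:
        Rows particularRows = rows.Where(row => condition).ToRows();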

    Read the article

  • ASP.NET DynamicData: Whats happening during an update?

    - by Jens A.
    I am using ASP.NET DynamicData (based on LINQ to SQL) on my site for basic scaffolding. On one table I have added additional properties that are not stored in the table but are retrieved from somewhere else (profile information for a user account, in this case). They are displayed just fine, but when editing these values and pressing "Update", they are not changed. Here's what the properties look like; the table is the standard aspnet_Users table:

        public String Address
        {
            get
            {
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                return profile.Address;
            }
            set
            {
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                profile.Address = value;
                profile.Save();
            }
        }

    When I fired up the debugger, I noticed that for each update the set accessor is called three times: once with the new value, but on a newly created instance of the user; then once with the old value, again on a new instance; and finally with the old value on the existing instance. Wondering a bit, I checked the properties created by the designer, and they, too, are called three times in (almost) the same fashion. The only difference is that the last call contains the new value for the property. I am a bit stumped here. Why three times, and why are my new properties behaving differently? I'd be grateful for any help on that matter! =)

    Read the article

  • How to check if a child-object is populated

    - by TheQ
    How can I check whether a child-object of a linq-object is populated or not? Example code below. My model has two methods; one joins in the child data, and the other does not:

        public static Member GetMemberWithPhoto(Guid memberId)
        {
            using (DataContext db = new DataContext())
            {
                DataLoadOptions dataLoadOptions = new DataLoadOptions();
                dataLoadOptions.LoadWith<Member>(x => x.UserPhoto);
                db.LoadOptions = dataLoadOptions;

                var query = from x in db.Members
                            where x.MemberId == memberId
                            select x;
                return query.FirstOrDefault();
            }
        }

        public static Member GetMember(Guid memberId)
        {
            using (DataContext db = new DataContext())
            {
                var query = from x in db.Members
                            where x.MemberId == memberId
                            select x;
                return query.FirstOrDefault();
            }
        }

    Then my control has the following code:

        Member member1 = Member.GetMemberWithPhoto(memberId);
        Member member2 = Member.GetMember(memberId);
        Debug.WriteLine(member1.UserPhoto.ToString());
        Debug.WriteLine(member2.UserPhoto.ToString());

    The last line will generate a "Cannot access a disposed object" exception. I know that I can get rid of that exception just by not disposing the datacontext, but then the last line will generate a new query to the database, and I don't want that. What I would like is something like:

        Debug.WriteLine((member1.UserPhoto.IsPopulated()) ? member1.UserPhoto.ToString() : "none");
        Debug.WriteLine((member2.UserPhoto.IsPopulated()) ? member2.UserPhoto.ToString() : "none");

    Is it possible?

    Read the article

  • Array Indexing Properties of A Class

    - by Chris
    I have a class that has several properties:

        class Person
        {
            string Name;
            int Age;
            DateTime BirthDate;
        }

    Then I have a sort of wrapper class with a List<Person>. Within this wrapper class I want to be able to do something like Wrapper["Name"] and get back a new List<string> using .Select(x => x.Name). How do I create a wrapper class around an IEnumerable that supports mapping a string to the property name? Something like the pseudo code below, but obviously it doesn't work. I'm 99.9% sure the solution will have to use reflection, and that's fine.

        class Wrapper
        {
            List<Person> PersonList;

            List<dynamic> this[string Column]
            {
                return PersonList.Select(x => x.[Column]).ToList();
            }
        }

    This may not seem like a good design, but it's a fix to eventually enable the correct design from .NET 2.0 days. As I have it right now, the data is stored in columns, so there is actually a list of lists within my class. Using the above example, there would be three ILists (with string titles): Name, Age, and BirthDate. Everything is currently predicated on addressing the columns by their "string" name. I'm trying to convert the data structure to row-based with an IEnumerable interface to allow LINQ eventually, while still maintaining the functionality of my current code. Is converting the code to a row-based IEnumerable a good idea?
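
    A reflection-based sketch of what such an indexer might look like (not from the original post; it assumes the members on Person are fields, as in the snippet above, and tolerates them being non-public; BindingFlags comes from System.Reflection):

        class Wrapper
        {
            public List<Person> PersonList = new List<Person>();

            public List<object> this[string column]
            {
                get
                {
                    var field = typeof(Person).GetField(column,
                        BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);
                    if (field == null)
                        throw new ArgumentException("No such member: " + column, "column");
                    // project the named field's value out of every Person
                    return PersonList.Select(p => field.GetValue(p)).ToList();
                }
            }
        }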

    Read the article

  • Centering Divisions Around Zero

    - by Mark
    I'm trying to create something that sort of resembles a histogram: I'm trying to create buckets from an array. Suppose I have a random array of doubles between -10 and 10; this is very simplified. I then want to specify a center point, in this case 0, and the number of buckets. If I want 4 buckets, the divisions would be -10 to -5, -5 to 0, 0 to 5 and 5 to 10. Not that complicated, right? Now if I change the min and max to -12 and -9 and ask for 4 divisions, it's more complicated. I either want a division at -3 and 3 (so it is centered around 0), or ones at -6 to 0 and 0 to 6. It's not that hard to find the division size:

        size = Math.Ceiling((Abs(Max) + Abs(Min)) / Divisions)

    Then you would basically have an if statement to determine whether you want it centered on 0 or on an edge, and you iterate out from either 0 or DivisionSize/2 depending on the situation. You may not ALWAYS end up with the specified number of divisions, but it will be close. Then you iterate through the array and increment the bin counts. Does this seem like a good way to go about this? This method would surely work, but it does not seem to be the most elegant. I'm curious whether the creation of the bins and the counting from the list could be done more elegantly in a clever class with LINQ? Something like creating the bins and then having each bin expose a property {get;} that returns list.Count(x => x >= Lower && x < Upper).
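
    A sketch of the LINQ counting step the post hints at (not from the original post; names such as values, lower and binWidth are assumed inputs computed from the centering rules above):

        int binCount = 4;
        var bins = Enumerable.Range(0, binCount)
            .Select(i => new { Lower = lower + i * binWidth, Upper = lower + (i + 1) * binWidth })
            .Select(b => new { b.Lower, b.Upper, Count = values.Count(v => v >= b.Lower && v < b.Upper) })
            .ToList();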

    Read the article

  • Is it possible to create the following XML using Xdocument(C#3.0)

    - by Newbie
        <?xml version='1.0' encoding='UTF-8'?>
        <StockMarket>
          <StockDate Day="02" Month="06" Year="2010">
            <Stock>
              <Symbol>ABC</Symbol>
              <Amount>110.45</Amount>
            </Stock>
            <Stock>
              <Symbol>XYZ</Symbol>
              <Amount>366.25</Amount>
            </Stock>
          </StockDate>
          <StockDate Day="03" Month="06" Year="2010">
            <Stock>
              <Symbol>ABC</Symbol>
              <Amount>110.35</Amount>
            </Stock>
            <Stock>
              <Symbol>XYZ</Symbol>
              <Amount>369.70</Amount>
            </Stock>
          </StockDate>
        </StockMarket>

    My approach so far is:

        XDocument doc = new XDocument(
            new XElement("StockMarket",
                new XElement("StockDate",
                    new XAttribute("Day", "02"), new XAttribute("Month", "06"), new XAttribute("Year", "2010")),
                new XElement("Stock")
            )
        );

    Since I am new to LINQ to XML, I am presently struggling a lot and hence seeking help. Using C# 3.0. Thanks.
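
    A sketch of the full construction (values taken verbatim from the XML above); nesting the Stock elements inside each StockDate is the part the snippet above is missing:

        XDocument doc = new XDocument(
            new XDeclaration("1.0", "UTF-8", null),
            new XElement("StockMarket",
                new XElement("StockDate",
                    new XAttribute("Day", "02"), new XAttribute("Month", "06"), new XAttribute("Year", "2010"),
                    new XElement("Stock",
                        new XElement("Symbol", "ABC"),
                        new XElement("Amount", "110.45")),
                    new XElement("Stock",
                        new XElement("Symbol", "XYZ"),
                        new XElement("Amount", "366.25"))),
                new XElement("StockDate",
                    new XAttribute("Day", "03"), new XAttribute("Month", "06"), new XAttribute("Year", "2010"),
                    new XElement("Stock",
                        new XElement("Symbol", "ABC"),
                        new XElement("Amount", "110.35")),
                    new XElement("Stock",
                        new XElement("Symbol", "XYZ"),
                        new XElement("Amount", "369.70")))));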

    Read the article

  • Parse particular text from an XML string

    - by Dan Sewell
    Hi all, I'm writing an app which reads an RSS feed and places items on a map. I need to read the lat and long numbers only from this string, which is contained in the link tags:

        http://www.xxxxxxxxxxxxxx.co.uk/map.aspx?isTrafficAlert=true&lat=53.647351&lon=-1.933506

    I'm a bit of a programming noob, but I'm writing this in C#/Silverlight using LINQ to XML. Should this text be extracted while parsing, or after parsing and sent to a class to do it? Many thanks for your assistance.

    EDIT: I'm going to try and do a regex on this. This is where I need to integrate the regex somewhere in this code. I need to take the lat and long from the Link element and separate them into two variables I can use (the results are part of a foreach loop that creates a list):

        var events = from ev in document.Descendants("item")
                     select new
                     {
                         Title = (ev.Element("title").Value),
                         Description = (ev.Element("description").Value),
                         Link = (ev.Element("link").Value),
                     };

    The question is, I'm not quite sure where to put the regex (once I work out how to use the regex properly! :-) )
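
    A sketch of one place to put it (not from the original post): keep the query as it is and run the regex on each item's Link while looping over the results. Regex lives in System.Text.RegularExpressions, and InvariantCulture guards against locale-specific decimal separators.

        var coordRegex = new Regex(@"lat=(?<lat>-?\d+(\.\d+)?)&lon=(?<lon>-?\d+(\.\d+)?)");

        foreach (var ev in events)
        {
            Match m = coordRegex.Match(ev.Link);
            if (!m.Success) continue;                      // link without coordinates

            double lat = double.Parse(m.Groups["lat"].Value, CultureInfo.InvariantCulture);
            double lon = double.Parse(m.Groups["lon"].Value, CultureInfo.InvariantCulture);
            // add lat/lon to the list or place the map pin here
        }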

    Read the article

  • Composable FLinq expressions

    - by Daniel
    When doing linq-to-sql in C#, you could do something like this:

        var data = context.MyTable.Where(x => x.Parameter > 10);
        var q1 = data.Take(10);
        var q2 = data.Take(3);
        q1.ToArray();
        q2.ToArray();

    This would generate 2 separate SQL queries, one with TOP 10 and the other with TOP 3. In playing around with FLinq, I see that:

        let data = query <@ seq { for i in context.MyTable do if x.Parameter > 10 then yield i } @>
        data |> Seq.take 10 |> Seq.toList
        data |> Seq.take 3 |> Seq.toList

    is not doing the same thing. Here it seems to do one full query, and then do the "take" calls on the client side. An alternative that I see used is:

        let q1 = query <@ for i in context.MyTable do if x.Param > 10 then yield i } |> Seq.take 10 @>
        let q2 = query <@ for i in context.MyTable do if x.Param > 10 then yield i } |> Seq.take 3 @>

    These two generate the SQL with the appropriate TOP N filter. My problem with this is that it doesn't seem composable. I'm basically having to duplicate the "where" clause, and potentially would have to duplicate other subqueries that I might want to run on a base query. Is there a way to have F# give me something more composable? (I originally posted this question to hubfs, where I have gotten a few answers dealing with the fact that C# performs the query transformation "at the end", i.e. when the data is needed, whereas F# is doing that transformation eagerly.)

    Read the article

  • Refactoring two methods down to one

    - by bflemi3
    I have two methods that almost do the same thing. They get a List<XmlNode> based on state OR state and schoolType, and then return a distinct, ordered IEnumerable<KeyValuePair<string, string>>. I know they can be refactored, but I'm struggling to determine what type the parameter should be for the LINQ statement in the return of the method (the last line of each method). I thank you for your help in advance.

        private IEnumerable<KeyValuePair<string, string>> getAreaDropDownDataSource()
        {
            StateInfoXmlDocument stateInfoXmlDocument = new StateInfoXmlDocument();
            string schoolTypeXmlPath = string.Format(STATE_AND_SCHOOL_TYPE_XML_PATH, StateOfInterest, ConnectionsLearningSchoolType);
            var schoolNodes = new List<XmlNode>(stateInfoXmlDocument.SelectNodes(schoolTypeXmlPath).Cast<XmlNode>());
            return schoolNodes.Select(x => new KeyValuePair<string, string>(x.Attributes["idLocation"].Value, x.Value))
                              .OrderBy(x => x.Key)
                              .Distinct();
        }

        private IEnumerable<KeyValuePair<string, string>> getStateOfInterestDropDownDataSource()
        {
            StateInfoXmlDocument stateInfoXmlDocument = new StateInfoXmlDocument();
            string schoolTypeXmlPath = string.Format(SCHOOL_TYPE_XML_PATH, ConnectionsLearningSchoolType);
            var schoolNodes = new List<XmlNode>(stateInfoXmlDocument.SelectNodes(schoolTypeXmlPath).Cast<XmlNode>());
            return schoolNodes.Select(x => new KeyValuePair<string, string>(x.Attributes["stateCode"].Value, x.Attributes["stateName"].Value))
                              .OrderBy(x => x.Key)
                              .Distinct();
        }
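
    A sketch of one way to fold the two together (not from the original post): the only parts that differ are the XPath string and the node-to-pair projection, so both can become parameters, with the projection typed as Func<XmlNode, KeyValuePair<string, string>>.

        private IEnumerable<KeyValuePair<string, string>> getDropDownDataSource(
            string xmlPath, Func<XmlNode, KeyValuePair<string, string>> selector)
        {
            StateInfoXmlDocument stateInfoXmlDocument = new StateInfoXmlDocument();
            var schoolNodes = stateInfoXmlDocument.SelectNodes(xmlPath).Cast<XmlNode>();
            return schoolNodes.Select(selector).OrderBy(x => x.Key).Distinct();
        }

        // usage, mirroring the second original:
        // getDropDownDataSource(
        //     string.Format(SCHOOL_TYPE_XML_PATH, ConnectionsLearningSchoolType),
        //     x => new KeyValuePair<string, string>(x.Attributes["stateCode"].Value,
        //                                           x.Attributes["stateName"].Value));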

    Read the article

  • Efficient way to call .Sum() on multiple properties

    - by SherCoder
    I have a function that uses LINQ to get data from the database, and then I call that function from another function and sum all the individual properties, calling .Sum() on each one. I was wondering if there is a more efficient way to sum all the properties at once, rather than calling .Sum() on each individual property. I think the way I am doing it right now is very slow (although untested).

        public OminitureStats GetAvgOmnitureData(int? fnsId, int dateRange)
        {
            IQueryable<OminitureStats> query = GetOmnitureDataAsQueryable(fnsId, dateRange);
            int pageViews = query.Sum(q => q.PageViews);
            int monthlyUniqueVisitors = query.Sum(q => q.MonthlyUniqueVisitors);
            int visits = query.Sum(q => q.Visits);
            double pagesPerVisit = (double)query.Sum(q => q.PagesPerVisit);
            double bounceRate = (double)query.Sum(q => q.BounceRate);
            return new OminitureStats(pageViews, monthlyUniqueVisitors, visits, bounceRate, pagesPerVisit);
        }
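
    A sketch of one common pattern (not from the original post): group the whole set under a constant key and project all the sums in a single Select, which most LINQ providers translate into one SQL statement with several SUM() columns rather than five round trips. Whether it collapses to a single statement depends on the provider, and Single() assumes the query returns at least one row.

        var totals = query
            .GroupBy(q => 1)
            .Select(g => new
            {
                PageViews = g.Sum(q => q.PageViews),
                MonthlyUniqueVisitors = g.Sum(q => q.MonthlyUniqueVisitors),
                Visits = g.Sum(q => q.Visits),
                PagesPerVisit = g.Sum(q => (double)q.PagesPerVisit),
                BounceRate = g.Sum(q => (double)q.BounceRate)
            })
            .Single();

        return new OminitureStats(totals.PageViews, totals.MonthlyUniqueVisitors, totals.Visits,
                                  totals.BounceRate, totals.PagesPerVisit);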

    Read the article

  • EF 4 Query - Issue with Multiple Parameters

    - by Brian
    Hello. A trick for avoiding filtering by nullable parameters in SQL was something like the following:

        select * from customers
        where (@CustomerName is null or CustomerName = @CustomerName)

    This worked well for me in LINQ to SQL:

        string customerName = "XYZ";
        var results = (from c in ctx.Customers
                       where (customerName == null || (customerName != null && c.CustomerName == customerName))
                       select c);

    But that same query, in ADO.NET EF, doesn't work for me; it should filter by customer name because the value exists, but it doesn't. Instead, it queries all the customer records. Now, this is a simplified example, because I have many fields that I'm applying this kind of logic to. But it never actually filters; it queries all the records and causes a timeout exception. The weird thing is that another query does something similar with no issues. Any ideas why? Seems like a bug to me, or is there a workaround for this? I've since switched to extension methods, which work. Thanks.
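
    For reference, the extension-method workaround usually looks something like this (a sketch, not the poster's code): only append the Where clause when the parameter actually has a value, so the provider never has to translate the null-check branch at all.

        IQueryable<Customer> results = ctx.Customers;
        if (!string.IsNullOrEmpty(customerName))
            results = results.Where(c => c.CustomerName == customerName);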

    Read the article

  • How to initialize list with parent child relation

    - by user2917702
    Let's say I have the following classes:

        public class Parent
        {
            public string name;
            public IList<Child> children;
        }

        public class Child
        {
            public string parentName;
            public int age;
        }

    As you can see, each parent can have multiple children, and we can have multiple parents. What is the best way to initialize these classes? Is it better to get all of the parents and all of the children from the database and then use LINQ?

        IList<Parent> parents = GetParents();    // assume this gets parents from db
        IList<Child> children = GetChildren();   // assume this gets children from db
        foreach (Parent parent in parents)
        {
            parent.children = children.Where(x => x.parentName == parent.name).ToList();
        }

    Or get all of the parents and iterate through each parent, querying the database by parentName to get that parent's children? Due to a requirement that I have, I cannot use DataTable or DataSet; I can only use a DataReader.

        IList<Parent> parents = GetParents();    // assume this gets parents from db
        foreach (Parent parent in parents)
        {
            parent.children = GetChildrenByParentName();   // assume this gets children from db by parentName
        }

    Thank you
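
    If both lists are pulled up front, a lookup keyed by parent name avoids re-scanning the child list for every parent (a sketch, not from the original post; it assumes the fields above are accessible as shown):

        IList<Parent> parents = GetParents();
        ILookup<string, Child> childrenByParent = GetChildren().ToLookup(c => c.parentName);

        foreach (Parent parent in parents)
        {
            // an absent key yields an empty sequence, so parents without children get an empty list
            parent.children = childrenByParent[parent.name].ToList();
        }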

    Read the article
