Search Results

Search found 220 results on 9 pages for 'enumerable'.

Page 8/9 | < Previous Page | 4 5 6 7 8 9  | Next Page >

  • Binding DataTemplates (or another approach)

    - by Bataglião
    Hi all, I'm having some trouble trying to dynamically generate content in WPF and then bind data to it. I have the following scenario: TabControl - dynamically generated TabItems through a DataTemplate - and inside the TabItems, dynamic content generated by a DataTemplate that I wish to bind (a ListBox). The code follows: ::TabControl <TabControl Height="252" HorizontalAlignment="Left" Name="tabControl1" VerticalAlignment="Top" Width="458" Margin="12,12,12,12" ContentTemplate="{StaticResource tabItemContent}"></TabControl> ::The template for the TabControl to generate TabItems <DataTemplate x:Key="tabItemContent"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="*" /> </Grid.RowDefinitions> <ListBox ItemTemplate="{StaticResource listBoxContent}" ItemsSource="{Binding}"> </ListBox> </Grid> </DataTemplate> ::The template for the ListBox inside each TabItem <DataTemplate x:Key="listBoxContent"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="22"/> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Image Grid.Column="0" Source="{Binding Path=PluginIcon}" /> <TextBlock Grid.Column="1" Text="{Binding Path=Text}" /> </Grid> </DataTemplate> So, when I try to do this in code inside a loop to create the tab items: TabItem tabitem = tabControl1.Items[catIndex] as TabItem; tabitem.DataContext = plugins.ToList(); where 'plugins' is an IEnumerable, the ListBox is not bound. I also tried to find the ListBox inside the TabItem to set its ItemsSource property, but with no success at all. Does someone have an idea of how to do that? Thanks in advance.
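    A possible fix (a sketch, untested against the original markup): a ContentTemplate is applied to a TabItem's Content, not to its DataContext, so handing the list to Content should make the "{Binding}" inside tabItemContent resolve:

        // hedged sketch: the DataTemplate binds against Content, not DataContext
        TabItem tabitem = (TabItem)tabControl1.Items[catIndex];
        tabitem.Content = plugins.ToList(); // ItemsSource="{Binding}" now sees the list

    Alternatively, assigning tabControl1.ItemsSource a collection of plugin lists lets the TabControl generate one TabItem per list and apply the same template, with no per-item code at all.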

    Read the article

  • BindingList<T> and reflection!

    - by Aren B
    Background: working in .NET 2.0 here, reflecting lists in general. I was originally using t.IsAssignableFrom(typeof(IEnumerable)) to detect whether a property I was traversing supported the IEnumerable interface (and thus I could cast the object to it safely). However this code was not evaluating to true when the object is a BindingList<T>. Next I tried t.IsSubclassOf(typeof(IEnumerable)) and didn't have any luck either. Code: /// <summary> /// Reflects an enumerable (not a list, bad name, should be fixed later maybe?) /// </summary> /// <param name="o">The object the property resides on.</param> /// <param name="p">The property we're reflecting on.</param> /// <param name="rla">The attribute tagged to this property.</param> public void ReflectList(object o, PropertyInfo p, ReflectedListAttribute rla) { Type t = p.PropertyType; //if (t.IsAssignableFrom(typeof(IEnumerable))) if (t.IsSubclassOf(typeof(IEnumerable))) { IEnumerable e = p.GetValue(o, null) as IEnumerable; int count = 0; if (e != null) { foreach (object lo in e) { if (count >= rla.MaxRows) break; ReflectObject(lo, count); count++; } } } } The intent: I basically want to tag lists I want to reflect through with the ReflectedListAttribute and call this function on the properties that have it (already working). Once inside this function, given the object the property resides on and the related PropertyInfo, get the value of the property, cast it to an IEnumerable (assuming that's possible) and then iterate through each child and call ReflectObject(...) on the child with the count variable.
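    Worth noting (a sketch of the likely fix): IsAssignableFrom reads in the opposite direction from what the code assumes - t.IsAssignableFrom(typeof(IEnumerable)) asks whether an IEnumerable can be assigned to t. IsSubclassOf cannot work either, because IEnumerable is an interface, not a base class. Reversing the check should match BindingList<T>:

        // true when t implements IEnumerable, including BindingList<T>
        if (typeof(IEnumerable).IsAssignableFrom(t))
        {
            IEnumerable e = p.GetValue(o, null) as IEnumerable;
            // ... iterate as in the original method
        }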

    Read the article

  • Concise C# code for gathering several properties with a non-null value into a collection?

    - by stakx
    A fairly basic problem for a change. Given a class such as this: public class X { public T A; public T B; public T C; ... // (other fields, properties, and methods are not of interest here) } I am looking for a concise way to code a method that will return all A, B, C, ... that are not null in an enumerable collection. (Assume that declaring these fields as an array is not an option.) public IEnumerable<T> GetAllNonNullAs(this X x) { // ? } The obvious implementation of this method would be: public IEnumerable<T> GetAllNonNullAs(this X x) { var resultSet = new List<T>(); if (x.A != null) resultSet.Add(x.A); if (x.B != null) resultSet.Add(x.B); if (x.C != null) resultSet.Add(x.C); ... return resultSet; } What's bothering me here in particular is that the code looks verbose and repetitive, and that I don't know the initial List capacity in advance. It's my hope that there is a more clever way, probably something involving the ?? operator? Any ideas?
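    One concise shape (a sketch that assumes the fields share a type T on a generic X<T> with a class constraint, since the snippet elides the declarations): build a small array and filter it, which also avoids guessing a List capacity:

        public static IEnumerable<T> GetAllNonNullAs<T>(this X<T> x) where T : class
        {
            // the array is tiny and short-lived; Where streams the non-null values
            return new[] { x.A, x.B, x.C /*, ... */ }.Where(v => v != null);
        }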

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name": public class Foo : BusinessObjectBase { ... public virtual string Name { get; set; } } For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): The following code piece generates n+1 SQL statements, whereof the first only fetches the ids, and the remaining n fetch the Name for each record: ISession session = ...; IQuery query = session.CreateQuery(queryString); ITransaction tx = session.BeginTransaction(); List<Foo> result = new List<Foo>(); foreach (Foo foo in query.Enumerable()) { result.Add(foo); } tx.Commit(); session.Close(); produces: NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_ NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81 NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470 NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473 Similarly, the following code leads to a LazyInitializationException after the session is closed: ISession session = ...; ITransaction tx = session.BeginTransaction(); Foo result = session.Load<Foo>(id); tx.Commit(); session.Close(); Console.WriteLine(result.Name); Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the exception by doing a NHibernateUtil.Initialize(foo), but the even worse part are the n+1 SQL statements which bring my application to its knees. This is what the mapping looks like: <class name="Foo" table="V1_FOO"> ... <property name="Name" column="NAME"/> </class> BTW: The abstract "BusinessObjectBase" base class encapsulates the ID property which serves as the internal identifier.
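    One thing worth checking (an observation, not a confirmed diagnosis): NHibernate's IQuery.Enumerable() hydrates entities on demand, which by itself produces exactly this ids-first, one-SELECT-per-row pattern; IQuery.List() fetches the whole result set in a single statement. A minimal sketch:

        // List() materializes everything in one query, avoiding the n+1 SELECTs
        // that Enumerable() can produce. (sessionFactory is a stand-in name.)
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            IList<Foo> result = session.CreateQuery(queryString).List<Foo>();
            tx.Commit();
            return result;
        }

    The second snippet fails for a different reason: Load<Foo>(id) deliberately returns an uninitialized proxy, so reading Name after the session is closed throws; Get<Foo>(id), or reading the property inside the session, avoids that.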

    Read the article

  • Tool to detect use/abuse of String.Concat (where StringBuilder should be used)

    - by Mark Rushakoff
    It's common knowledge that you shouldn't use a StringBuilder in place of a small number of concatenations: string s = "Hello"; if (greetingWorld) { s += " World"; } s += "!"; However, in loops of a significant size, StringBuilder is the obvious choice: string s = ""; foreach (var i in Enumerable.Range(1,5000)) { s += i.ToString(); } Console.WriteLine(s); Is there a tool that I can run on either raw C# source or a compiled assembly to identify where in the source code that String.Concat is being called? (If you're not familiar, s += "foo" is mapped to String.Concat in the IL output.) Obviously, I can't realistically search through an entire project and evaluate every += to identify whether the lvalue is a string. Ideally, it would only point out calls inside a for/foreach loop, but I would even put up with all the false positives of noting every String.Concat. Also, I'm aware that there are some refactoring tools that will automatically refactor my code to use StringBuilder, but I am only interested in identifying the Concat usage at this point. I routinely run Gendarme and FxCop on my code, and neither of those tools identify what I've described.
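    In the absence of a ready-made rule, a small scanner is not hard to write with Mono.Cecil (a hypothetical sketch, assuming a recent Cecil, not a polished tool): it walks the IL of every method and reports call sites whose target is String.Concat:

        using System;
        using System.Linq;
        using Mono.Cecil;

        class ConcatFinder
        {
            static void Main(string[] args)
            {
                AssemblyDefinition asm = AssemblyDefinition.ReadAssembly(args[0]);
                foreach (TypeDefinition type in asm.MainModule.Types)
                foreach (MethodDefinition method in type.Methods.Where(m => m.HasBody))
                foreach (var instr in method.Body.Instructions)
                {
                    // s += "foo" compiles to a call whose operand is a
                    // MethodReference to System.String::Concat
                    MethodReference callee = instr.Operand as MethodReference;
                    if (callee != null
                        && callee.DeclaringType.FullName == "System.String"
                        && callee.Name == "Concat")
                    {
                        Console.WriteLine("{0}.{1} at IL_{2:X4}",
                            type.FullName, method.Name, instr.Offset);
                    }
                }
            }
        }

    Cross-referencing the reported offsets with the method names (or debug sequence points) then narrows the list to concatenations inside loops.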

    Read the article

  • Odd NullReferenceException at foreach when rendering view

    - by giddy
    This error is so weird I just can't figure out what is actually wrong! In UserController I have public virtual ActionResult Index() { var usersmdl = from u in RepositoryFactory.GetUserRepo().GetAll() select new UserViewModel { ID = u.ID, UserName = u.Username, UserGroupName = u.UserGroupMain.GroupName, BranchName = u.Branch.BranchName, Password = u.Password, Ace = u.ACE, CIF = u.CIF, PF = u.PF }; if (usersmdl != null) { return View(usersmdl.AsEnumerable()); } return View(); } My view is of type @model IEnumerable<UserViewModel> at the top. This is what happens: where and what exactly IS null!? I create the users from a fake repository with Moq. I also wrote unit tests, which pass, to ensure the right number of mocked users are returned. Maybe someone can point me in the right direction here? The top of the stack trace is: at lambda_method(Closure , User ) at System.Linq.Enumerable.WhereSelectArrayIterator`2.MoveNext() at ASP.Index_cshtml.Execute() Is it something to do with LINQ here? Tell me if I should include the full stack trace.
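    Hard to tell from the outside which reference is null, but the stack trace shows the select projection executing lazily while the view enumerates the model. A defensive sketch (the null navigation properties are an assumption about the fake repository): materialize in the controller so failures surface there, and guard the navigations:

        var usersmdl = (from u in RepositoryFactory.GetUserRepo().GetAll()
                        select new UserViewModel
                        {
                            ID = u.ID,
                            UserName = u.Username,
                            // guard navigation properties the mocked data may leave null
                            UserGroupName = u.UserGroupMain == null ? null : u.UserGroupMain.GroupName,
                            BranchName = u.Branch == null ? null : u.Branch.BranchName,
                            Password = u.Password,
                            Ace = u.ACE,
                            CIF = u.CIF,
                            PF = u.PF
                        }).ToList(); // execute here, not during view rendering
        return View(usersmdl);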

    Read the article

  • Why is PLINQ slower than LINQ for this code?

    - by Rob Packwood
    First off, I am running this on a dual core 2.66Ghz processor machine. I am not sure if I have the .AsParallel() call in the correct spot. I tried it directly on the range variable too and that was still slower. I don't understand why... Here are my results: Process non-parallel 1000 took 146 milliseconds Process parallel 1000 took 156 milliseconds Process non-parallel 5000 took 5187 milliseconds Process parallel 5000 took 5300 milliseconds using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; namespace DemoConsoleApp { internal class Program { private static void Main() { ReportOnTimedProcess( () => GetIntegerCombinations(), "non-parallel 1000"); ReportOnTimedProcess( () => GetIntegerCombinations(runAsParallel: true), "parallel 1000"); ReportOnTimedProcess( () => GetIntegerCombinations(5000), "non-parallel 5000"); ReportOnTimedProcess( () => GetIntegerCombinations(5000, true), "parallel 5000"); Console.Read(); } private static List<Tuple<int, int>> GetIntegerCombinations( int iterationCount = 1000, bool runAsParallel = false) { IEnumerable<int> range = Enumerable.Range(1, iterationCount); IEnumerable<Tuple<int, int>> integerCombinations = from x in range from y in range select new Tuple<int, int>(x, y); return runAsParallel ? integerCombinations.AsParallel().ToList() : integerCombinations.ToList(); } private static void ReportOnTimedProcess( Action process, string processName) { var stopwatch = new Stopwatch(); stopwatch.Start(); process(); stopwatch.Stop(); Console.WriteLine("Process {0} took {1} milliseconds", processName, stopwatch.ElapsedMilliseconds); } } }
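    A likely explanation (hedged; only profiling can confirm): allocating one Tuple per element leaves almost no CPU work to spread across cores - the run time is dominated by allocation and by ToList's sequential list building - so partitioning overhead eats any gain. Note also that AsParallel() applied after the query merely buffers an already-sequential SelectMany in parallel. PLINQ pays off when each element costs real CPU time, e.g.:

        // hedged illustration: with per-element work this heavy,
        // AsParallel typically wins on a multi-core machine
        var results = range.AsParallel()
                           .Select(x => Enumerable.Range(1, 20000).Sum(y => (long)x * y))
                           .ToList();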

    Read the article

  • getting a "default" concrete class that implements an interface

    - by Roger Joys
    I am implementing a custom (and generic) Json.NET serializer and hit a bump in the road that I could use some help on. When the deserializer is mapping to a property that is an interface, how can I best determine what sort of object to construct to deserialize into and place in the interface property? I have the following: [JsonConverter(typeof(MyCustomSerializer<foo>))] class foo { int Int1 { get; set; } IList<string> StringList { get; set; } } My serializer properly serializes this object, but when it comes back in and I try to map the JSON parts to the object, I have a JArray and an interface. I am currently instantiating anything enumerable like List as theList = Activator.CreateInstance(property.PropertyType); This works great in the deserialization process, but when the property is IList, I get runtime complaints (obviously) about not being able to instantiate an interface. So how would I know what type of concrete class to create in a case like this? Thank you
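    A common approach (a sketch; CollectionFactory is a hypothetical helper, not Json.NET API) is to map well-known interfaces to default concrete types before falling back to Activator:

        using System;
        using System.Collections.Generic;

        static class CollectionFactory
        {
            // pick a concrete type for an interface-typed property
            public static object CreateFor(Type declaredType)
            {
                if (declaredType.IsInterface && declaredType.IsGenericType
                    && declaredType.GetGenericTypeDefinition() == typeof(IList<>))
                {
                    // default IList<T> to List<T>
                    Type itemType = declaredType.GetGenericArguments()[0];
                    return Activator.CreateInstance(typeof(List<>).MakeGenericType(itemType));
                }
                return Activator.CreateInstance(declaredType); // concrete types as before
            }
        }

    The same table can grow entries for IDictionary<,>, ISet<T> and so on as the serializer meets them.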

    Read the article

  • Namespace scoped aliases for generic types in C#

    - by TN
    Let's take the following example: public class X { } public class Y { } public class Z { } public delegate IDictionary<Y, IList<Z>> Bar(IList<X> x, int i); public interface IFoo { // ... Bar Bar { get; } } public class Foo : IFoo { // ... public Bar Bar { get { return null; //... } } } void Main() { IFoo foo; //= ... IEnumerable<IList<X>> source; //= ... var results = source.Select(foo.Bar); } The compiler says: The type arguments for method 'System.Linq.Enumerable.Select<TSource,TResult>(System.Collections.Generic.IEnumerable<TSource>, System.Func<TSource,TResult>)' cannot be inferred from the usage. Try specifying the type arguments explicitly. That's because it cannot convert Bar to Func<IList<X>, int, IDictionary<Y, IList<Z>>>. It would be great if I could create namespace-scoped aliases for generic types in C#. Then I would define Bar not as a delegate, but as a namespace-scoped alias for Func<IList<X>, int, IDictionary<Y, IList<Z>>>: public alias Bar = Func<IList<X>, int, IDictionary<Y, IList<Z>>>; I could then also define a namespace-scoped alias for e.g. IDictionary<Y, IList<Z>>. And if used appropriately :), it would make the code more readable. Right now I have to inline the generic types, and the real code is not very readable :( Have you run into the same trouble :)? Is there any good reason why it is not in C# 3.0? Or is there no good reason, and it's just a matter of money and/or time? EDIT: I know that I can use using, but it is not namespace-based - not so convenient for my case.
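    For reference, the per-file alias mentioned in the edit looks like this (MyNamespace is a stand-in for wherever X, Y and Z live); the limitation is precisely that it must be repeated in every file:

        using Bar = System.Func<
            System.Collections.Generic.IList<MyNamespace.X>, int,
            System.Collections.Generic.IDictionary<MyNamespace.Y,
                System.Collections.Generic.IList<MyNamespace.Z>>>;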

    Read the article

  • Difference between MVC FilterAttribute and Filter

    - by zaaaaphod
    I'm trying to write my own custom AuthorizationAttribute that uses DI. I'm using the MUNQ IoC provider for its speed and have decided to use constructor injection on all my classes, as opposed to post-instantiation property binding (because I prefer it). I'm trying to write a custom IFilterProvider that will use my IoC container to serve requests for filters (so that I can map concrete classes using the container). I've come up with the following: public class FilterProvider : IFilterProvider { private readonly IocContainer _container; public FilterProvider(IocContainer container) { _container = container; } public IEnumerable<Filter> GetFilters(ControllerContext controllerContext, ActionDescriptor actionDescriptor) { var x = Enumerable.Union<Object>(_container.ResolveAll<IActionFilter>(), _container.ResolveAll<IAuthorizationFilter>()); foreach (Filter actionFilter in x) yield return new Filter(actionFilter, FilterScope.First, null); } } The above code will fail during the foreach because my objects that implement IAuthorizationFilter are based on FilterAttribute and not Filter. My question is: what is the difference between Filter and FilterAttribute? I would have thought that there would be a common link between them, unless I'm missing something. Another, deeper question is: how come there is no IFilterAttributeProvider that would support IEnumerable GetFilters(...)? Is there some other way that I should be using to resolve IAuthorizationFilter via my IoC container? Thank you very much for your help. Z
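    As far as I can tell (a sketch, with MUNQ's ResolveAll assumed as used above): Filter, new in MVC 3, is only a metadata wrapper - instance, scope, order - around an arbitrary filter object, while FilterAttribute is the base class for the attributes themselves; nothing relates the two by inheritance. So the foreach needs to enumerate the raw instances and wrap each one rather than cast:

        public IEnumerable<Filter> GetFilters(ControllerContext controllerContext,
                                              ActionDescriptor actionDescriptor)
        {
            var instances = _container.ResolveAll<IActionFilter>().Cast<object>()
                .Union(_container.ResolveAll<IAuthorizationFilter>().Cast<object>());

            foreach (object instance in instances)
                yield return new Filter(instance, FilterScope.First, null);
        }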

    Read the article

  • How to get last Friday of month(s) using .NET

    - by Newbie
    I have a function that returns me only the fridays from a range of dates public static List<DateTime> GetDates(DateTime startDate, int weeks) { int days = weeks * 7; //Get the whole date range List<DateTime> dtFulldateRange = Enumerable.Range(-days, days).Select(i => startDate.AddDays(i)).ToList(); //Get only the fridays from the date range List<DateTime> dtOnlyFridays = (from dtFridays in dtFulldateRange where dtFridays.DayOfWeek == DayOfWeek.Friday select dtFridays).ToList(); return dtOnlyFridays; } Purpose of the function: "List of dates from the Week number specified till the StartDate i.e. If startdate is 23rd April, 2010 and the week number is 1,then the program should return the dates from 16th April, 2010 till the startddate". I am calling the function as: DateTime StartDate1 = DateTime.ParseExact("20100430", "yyyyMMdd", System.Globalization.CultureInfo.InvariantCulture); List<DateTime> dtList = Utility.GetDates(StartDate1, 4).ToList(); Now the requirement has changed a bit. I need to find out only the last Fridays of every month. The input to the function will remain same.
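    One way to adapt (a minimal sketch): compute the last Friday of a month directly by walking back from the month's final day, then keep only the range dates that match it:

        public static DateTime GetLastFriday(int year, int month)
        {
            // start at the last day of the month and step back to a Friday
            DateTime d = new DateTime(year, month, DateTime.DaysInMonth(year, month));
            while (d.DayOfWeek != DayOfWeek.Friday)
                d = d.AddDays(-1);
            return d;
        }

        // inside GetDates, replace the Friday filter with:
        List<DateTime> lastFridays = dtFulldateRange
            .Where(d => d.Date == GetLastFriday(d.Year, d.Month).Date)
            .Distinct()
            .ToList();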

    Read the article

  • Multithreading for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU-intensive batch process I'm running. Essentially I'm trying to condense multiple single-page TIFFs into single PDF documents. This works fine with a foreach loop or standard iteration, but can be very slow for documents of several hundred pages. I tried the following, based on some examples I found, to use multithreading, and it has significant performance improvements; however, it obliterates the page order: instead of 1,2,3,4 it will be 1,3,4,2,6,5, depending on which thread completes first. My question is how I would utilize this technique while maintaining the page order, and if I can, whether it will negate the performance benefit of the multithreading. Thank you in advance. PdfDocument doc = new PdfDocument(); string mail = textBox1.Text; string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None); int counter = split.Count(); // Source must be array or IList. var source = Enumerable.Range(0, 100000).ToArray(); // Partition the entire source array. var rangePartitioner = Partitioner.Create(0, counter); double[] results = new double[counter]; // Loop over the partitions in parallel. Parallel.ForEach(rangePartitioner, (range, loopState) => { // Loop over each range element without a delegate invocation. for (int i = range.Item1; i < range.Item2; i++) { f_prime = split[i].Replace(" " , ""); PdfPage page = doc.AddPage(); XGraphics gfx = XGraphics.FromPdfPage(page); XImage image = XImage.FromFile(f_prime); double x = 0; gfx.DrawImage(image, x, 0); } });
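    One pattern (a sketch; it assumes XImage.FromFile can be called concurrently, which is worth verifying for PDFsharp before relying on it): do the heavy decoding in parallel into an index-keyed array, then assemble the document sequentially, so page order follows the index instead of thread completion:

        // 1) parallel and order-free: load images keyed by index
        XImage[] images = new XImage[counter];
        Parallel.For(0, counter, i =>
        {
            string path = split[i].Replace(" ", "");
            images[i] = XImage.FromFile(path);
        });

        // 2) sequential and order-preserving: PdfDocument is not thread-safe anyway
        PdfDocument doc = new PdfDocument();
        for (int i = 0; i < counter; i++)
        {
            PdfPage page = doc.AddPage();
            using (XGraphics gfx = XGraphics.FromPdfPage(page))
                gfx.DrawImage(images[i], 0, 0);
        }

    The speed-up then shrinks to whatever fraction of the time decoding takes, but the ordering is deterministic.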

    Read the article

  • C# Linq Select Problem in Method Chain.

    - by Gnought
    Note that _src implements IQueryable<U> and V has a new() constraint. I wrote the following statement, and there is no syntax error: IQueryable<V> a = from s in _src where (s.Right - 1 == s.Left) select new V(); But if I rewrite it as follows, the Visual Studio editor complains about an error in the "Select": IQueryable<V> d = _src.Where(s => s.Right - 1 == s.Left).Select(s => new V()); The error is: The type arguments cannot be inferred from the usage. Try specifying the type arguments explicitly. Candidates are: System.Collections.Generic.IEnumerable<V> Select<U,V>(this System.Collections.Generic.IEnumerable<U>, System.Func<U,V>) (in class Enumerable) System.Linq.IQueryable<V> Select<U,V>(this System.Linq.IQueryable<U>, System.Linq.Expressions.Expression<System.Func<U,V>>) (in class Queryable) Could anyone explain this phenomenon, and what the solution is to fix the error?
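    As the message itself suggests, explicitly supplying the type arguments on the chained call typically compiles (hedged, since the surrounding declarations are not shown):

        IQueryable<V> d = _src.Where(s => s.Right - 1 == s.Left)
                              .Select<U, V>(s => new V()); // explicit <TSource, TResult>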

    Read the article

  • Why is it possible to enumerate a LinqToSql query after calling Dispose() on the DataContext?

    - by DanM
    I'm using the Repository Pattern with some LINQ to SQL objects. My repository objects all implement IDisposable, and the Dispose() method does only one thing--calls Dispose() on the DataContext. Whenever I use a repository, I wrap it in a using block, like this: public IEnumerable<Person> SelectPersons() { using (var repository = _repositorySource.GetNew<Person>(dc => dc.Person)) { return repository.GetAll(); } } This method returns an IEnumerable<Person>, so if my understanding is correct, no querying of the database actually takes place until the IEnumerable<Person> is traversed (e.g., by converting it to a list or array or by using it in a foreach loop), as in this example: var persons = gateway.SelectPersons(); // Dispose() is fired here var personViewModels = ( from b in persons select new PersonViewModel { Id = b.Id, Name = b.Name, Age = b.Age, OrdersCount = b.Order.Count() }).ToList(); // executes queries In this example, Dispose() gets called immediately after setting persons, which is an IEnumerable<Person>, and that's the only time it gets called. So, a couple of questions: How does this work? How can a disposed DataContext still query the database for results when I walk the IEnumerable<Person>? What does Dispose() actually do? I've heard that it is not necessary (e.g., see this question) to dispose of a DataContext, but my impression was that it's not a bad idea. Is there any reason not to dispose of it?
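    Whatever the reason it happens to work, the usual way to keep the pattern safe is to force execution before the context is disposed (a sketch on the question's own method):

        public IEnumerable<Person> SelectPersons()
        {
            using (var repository = _repositorySource.GetNew<Person>(dc => dc.Person))
            {
                // ToList() runs the query now, so callers never touch a disposed context
                return repository.GetAll().ToList();
            }
        }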

    Read the article

  • Jquery + Prototype Question

    - by mikeyhill
    I recently inherited a site which is botched in all sorts of ways. I'm more of a php guy and initially the js was working just fine. I made no changes to the javascript or the any of the include files but after making a few content edits I'm getting errors from firebug. a.dispatchEvent is not a function emptyFunction()protot...ects.js (line 2) emptyFunction()protot...ects.js (line 2) fireContentLoadedEvent()protot...ects.js (line 2) [Break on this error] var Prototype={Version:'1.6.0.2',Brows...pe,Enumerable);Element.addMethods(); protot...ects.js (line 2) this.m_eTarget.setStyle is not a function [Break on this error] this.m_eTarget.setStyle( { position: 'relative', overflow:'hidden'} ); protot...ects.js (line 43) uncaught exception: [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE)" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: js/prototype_effects.js :: anonymous :: line 2" data: no] Googling around I found several posts that sometimes jquery+prototype don't play well and rearranging the scripts could fix this issue, however being that I didn't touch these sections I'm not sure where I even need to begin to debug. The previous developer incorporated a head.inc file which loads up prototype, scriptaculous and then many of the pages are in a sub-template loading up jquery for functions like lightbox. The site is temp housed at http://dawn.mikeyhill.com Any help is appreciated.

    Read the article

  • A better way of handling the below program (maybe with Take/Skip/TakeWhile or anything better)

    - by Newbie
    I have a data table which has only one row, but 44 columns. My task is to get the values of every column after the first four. I have written the program below, which suits my requirement (kindly note that dt is the DataTable): List<decimal> lstDr = new List<decimal>(); Enumerable.Range(0, dt.Columns.Count).ToList().ForEach(i => { if (i > 3) lstDr.Add(Convert.ToDecimal(dt.Rows[0][i])); } ); There is nothing wrong with the program; it works fine. But I feel that there may be a better way of handling this, maybe with Skip or Take or TakeWhile or anything else. I am looking for a better solution than the one I implemented. Is it possible? I am using C# 3.0. Thanks.
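    With Skip the intent reads directly (a minimal sketch producing the same result as the ForEach version):

        List<decimal> lstDr = dt.Rows[0].ItemArray
            .Skip(4)                        // drop the first four columns
            .Select(Convert.ToDecimal)      // binds to Convert.ToDecimal(object)
            .ToList();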

    Read the article

  • Some non-generic collections

    - by Simon Cooper
    Although the collections classes introduced in .NET 2, 3.5 and 4 cover most scenarios, there are still some .NET 1 collections that don't have generic counterparts. In this post, I'll be examining what they do, why you might use them, and some things you'll need to bear in mind when doing so.

    BitArray

    System.Collections.BitArray is conceptually the same as a List<bool>, but whereas List<bool> stores each boolean in a single byte (as that's what the backing bool[] does), BitArray uses a single bit to store each value, and uses various bitmasks to access each bit individually. This means that BitArray is eight times smaller than a List<bool>. Furthermore, BitArray has some useful functions for bitmasks, like And, Xor and Not, and it's not limited to 32 or 64 bits; a BitArray can hold as many bits as you need. However, it's not all roses and kittens. There are some fundamental limitations you have to bear in mind when using BitArray:

    It's a non-generic collection. The enumerator returns object (a boxed boolean), rather than an unboxed bool. This means that if you do this:

        foreach (bool b in bitArray) { ... }

    every single boolean value will be boxed, then unboxed. And if you do this:

        foreach (var b in bitArray) { ... }

    you'll have to manually unbox b on every iteration, as it'll come out of the enumerator an object. Instead, you should manually iterate over the collection using a for loop:

        for (int i = 0; i < bitArray.Length; i++) { bool b = bitArray[i]; ... }

    Following on from that, if you want to use BitArray in the context of an IEnumerable<bool>, ICollection<bool> or IList<bool>, you'll need to write a wrapper class (a sketch appears at the end of this post), or use the Enumerable.Cast<bool> extension method (although Cast would box and unbox every value you get out of it).

    There is no Add or Remove method. You specify the number of bits you need in the constructor, and that's what you get. You can change the length yourself using the Length property setter though.

    It doesn't implement IList. Although not really important if you're writing a generic wrapper around it, it is something to bear in mind if you're using it with pre-generic code.

    However, if you use BitArray carefully, it can provide significant gains over a List<bool> for functionality and efficiency of space.

    OrderedDictionary

    System.Collections.Specialized.OrderedDictionary does exactly what you would expect - it's an IDictionary that maintains items in the order they are added. It does this by storing key/value pairs in a Hashtable (to get O(1) key lookup) and an ArrayList (to maintain the order). You can access values by key or index, and insert or remove items at a particular index. The enumerator returns items in index order. However, the Keys and Values properties return ICollection, not IList, as you might expect; CopyTo doesn't maintain the same ordering, as it copies from the backing Hashtable, not the ArrayList; and any operations that insert or remove items from the middle of the collection are O(n), just like a normal list. In short: don't use this class. If you need some sort of ordered dictionary, it would be better to write your own generic dictionary combining a Dictionary<TKey, TValue> and a List<KeyValuePair<TKey, TValue>> or List<TKey> for your specific situation.

    ListDictionary and HybridDictionary

    To look at why you might want to use ListDictionary or HybridDictionary, we need to examine the performance of these dictionaries compared to Hashtable and Dictionary<object, object>. For this test, I added n items to each collection, then randomly accessed n/2 items. (The original post shows a graph of the timings at this point.)

    So, what's going on here? Well, ListDictionary is implemented as a linked list of key/value pairs; all operations on the dictionary require an O(n) search through the list. However, for small n, the constant factor that big-O notation doesn't measure is much lower than the hashing overhead of Hashtable or Dictionary. HybridDictionary combines a Hashtable and a ListDictionary; for small n, it uses a backing ListDictionary, but switches to a Hashtable when it gets to 9 items (you can see the point it switches from a ListDictionary to a Hashtable in the graph). Apart from that, it's got very similar performance to Hashtable.

    So why would you want to use either of these? In short, you wouldn't. Any gain in performance by using ListDictionary over Dictionary<TKey, TValue> would be offset by the generic dictionary not having to cast or box the items you store, something the graphs above don't measure. Only if the performance of the dictionary is vital, the dictionary will hold fewer than 30 items, and you don't need type safety, would you use ListDictionary over the generic Dictionary. And even then, there are probably more useful performance gains you can make elsewhere.
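    Coming back to the wrapper class suggested in the BitArray section, a minimal sketch (my own, not from the original post) of an IEnumerable<bool> wrapper that iterates via the indexer, so nothing is boxed:

        using System.Collections;
        using System.Collections.Generic;

        public sealed class BitArrayEnumerable : IEnumerable<bool>
        {
            private readonly BitArray bits;

            public BitArrayEnumerable(BitArray bits) { this.bits = bits; }

            public IEnumerator<bool> GetEnumerator()
            {
                for (int i = 0; i < bits.Length; i++)
                    yield return bits[i]; // the indexer returns an unboxed bool
            }

            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
        }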

    Read the article

  • Most efficient algorithm for merging sorted IEnumerable<T>

    - by franck
    Hello, I have several huge sorted enumerable sequences that I want to merge. These lists are manipulated as IEnumerable but are already sorted. Since the input lists are sorted, it should be possible to merge them in one pass, without re-sorting anything. I would like to keep the deferred execution behavior. I tried to write a naive algorithm which does that (see below). However, it looks pretty ugly and I'm sure it can be optimized. There may be a more academic algorithm... IEnumerable<T> MergeOrderedLists<T, TOrder>(IEnumerable<IEnumerable<T>> orderedlists, Func<T, TOrder> orderBy) { var enumerators = orderedlists.ToDictionary(l => l.GetEnumerator(), l => default(T)); IEnumerator<T> tag = null; var firstRun = true; while (true) { var toRemove = new List<IEnumerator<T>>(); var toAdd = new List<KeyValuePair<IEnumerator<T>, T>>(); foreach (var pair in enumerators.Where(pair => firstRun || tag == pair.Key)) { if (pair.Key.MoveNext()) toAdd.Add(pair); else toRemove.Add(pair.Key); } foreach (var enumerator in toRemove) enumerators.Remove(enumerator); foreach (var pair in toAdd) enumerators[pair.Key] = pair.Key.Current; if (enumerators.Count == 0) yield break; var min = enumerators.OrderBy(t => orderBy(t.Value)).FirstOrDefault(); tag = min.Key; yield return min.Value; firstRun = false; } } The method can be used like this: // Person lists are already sorted by age MergeOrderedLists(orderedList, p => p.Age); assuming the following Person class exists somewhere: public class Person { public int Age { get; set; } } Duplicates should be conserved; we don't care about their order in the new sequence. Do you see any obvious optimization I could use?
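    For comparison, here is a leaner deferred merge (a sketch; it trades the original's Func<T, TOrder> for an IComparable key, so it needs no OrderBy per step). Each yield scans the live enumerators once, so it is O(k) per element for k inputs; a priority queue would give O(log k), but .NET 3.5 has no built-in one:

        static IEnumerable<T> MergeOrderedLists<T, TKey>(
            IEnumerable<IEnumerable<T>> orderedLists, Func<T, TKey> orderBy)
            where TKey : IComparable<TKey>
        {
            var enumerators = orderedLists
                .Select(l => l.GetEnumerator())
                .Where(e => e.MoveNext()) // drop empty inputs up front
                .ToList();

            while (enumerators.Count > 0)
            {
                // find the enumerator whose current element has the smallest key
                int min = 0;
                for (int i = 1; i < enumerators.Count; i++)
                    if (orderBy(enumerators[i].Current)
                            .CompareTo(orderBy(enumerators[min].Current)) < 0)
                        min = i;

                yield return enumerators[min].Current; // duplicates survive

                if (!enumerators[min].MoveNext())
                    enumerators.RemoveAt(min);
            }
        }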

    Read the article

  • jPlayer widget created with static error as result

    - by goldengel
    I've created a widget with Orchard. Unfortunately I've used the same "Title" for a jPlayer widget twice. Now I receive an error: Server Error in '/wgk' Application. Sequence contains more than one element. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: Sequence contains more than one element Source Error: Line 2: <fieldset> Line 3: <div>@Html.LabelFor(o => o.MediaGalleryName, @T("Media gallery"))</div> Line 4: @if(!Model.HasAvailableGalleries) { Line 5: <div>@T("You need first to create an media gallery on Media Gallery menu")</div> Line 6: } Source File: x:\Intepub\wgk\Modules\Orchard.jPlayer\Views\EditorTemplates\Parts\MediaGallery.cshtml Line: 4 Stack Trace: [InvalidOperationException: Sequence contains more than one element] System.Linq.Enumerable.SingleOrDefault(IEnumerable`1 source) +4206966 NHibernate.Linq.Visitors.ImmediateResultsVisitor`1.HandleSingleOrDefaultCall(MethodCallExpression call) +51 NHibernate.Linq.Visitors.ImmediateResultsVisitor`1.VisitMethodCall(MethodCallExpression call) +411 NHibernate.Linq.Visitors.ExpressionVisitor.Visit(Expression exp) +371 In MediaGallery.cshtml (referenced in the error above) is written: @model Orchard.jPlayer.Models.MediaGalleryPart <fieldset> <div>@Html.LabelFor(o => o.MediaGalleryName, @T("Media gallery"))</div> @if(!Model.HasAvailableGalleries) { <div>@T("You need first to create an media gallery on Media Gallery menu")</div> } else { <div>@Html.DropDownListFor(o => o.SelectedGallery, Model.AvailableGalleries)</div> <div>@Html.LabelFor(o => o.SelectedType, @T("Media gallery type"))</div> <div>@Html.DropDownListFor(o => o.SelectedType, Model.AvailableTypes)</div> <div>@Html.LabelFor(o => o.AutoPlay, @T("Auto play"))</div> <div>@Html.CheckBoxFor(o => o.AutoPlay)</div> } </fieldset> My problem is now that I cannot find or edit the widget with the duplicated name. I would love to rename it, but I do not know where to do this. Please advise.

    Read the article

  • How can I make the C# compiler infer these type parameters automatically?

    - by John Feminella
    I have some code that looks like the following. First I have some domain classes and some special comparators for them. public class Fruit { public int Calories { get; set; } public string Name { get; set; } } public class FruitEqualityComparer : IEqualityComparer<Fruit> { // ... } public class BasketEqualityComparer : IEqualityComparer<IEnumerable<Fruit>> { // ... } Next, I have a helper class called ConstraintChecker. It has a simple BaseEquals method that makes sure some simple base cases are considered: public static class ConstraintChecker { public static bool BaseEquals<T>(T lhs, T rhs) { bool sameObject = ReferenceEquals(lhs, rhs); bool leftNull = lhs == null; bool rightNull = rhs == null; return sameObject && !leftNull && !rightNull; } There's also a SemanticEquals method which is just a BaseEquals check plus a comparator function that you specify: public static bool SemanticEquals<T>(T lhs, T rhs, Func<T, T, bool> f) { return BaseEquals(lhs, rhs) && f(lhs, rhs); } And finally there's a SemanticSequenceEquals method which accepts two IEnumerable<T> instances to compare, and an IEqualityComparer instance that will get called on each pair of elements in the list via Enumerable.SequenceEqual: public static bool SemanticSequenceEquals<T, U, V>(U lhs, U rhs, V comparator) where U : IEnumerable<T> where V : IEqualityComparer<T> { return SemanticEquals(lhs, rhs, (l, r) => lhs.SequenceEqual(rhs, comparator)); } } // end of ConstraintChecker The point of SemanticSequenceEquals is that you don't have to define two comparators whenever you want to compare both IEnumerable<T> and T instances; now you can just specify an IEqualityComparer<T> and it will also handle lists when you invoke SemanticSequenceEquals. So I could get rid of the BasketEqualityComparer class, which would be nice. But there's a problem. The C# compiler can't figure out the types involved when you invoke SemanticSequenceEquals: return ConstraintChecker.SemanticSequenceEquals(lhs, rhs, new FruitEqualityComparer()); If I specify them explicitly, it works: return ConstraintChecker.SemanticSequenceEquals< Fruit, IEnumerable<Fruit>, IEqualityComparer<Fruit> > (lhs, rhs, new FruitEqualityComparer()); What can I change here so that I don't have to write the type parameters explicitly?
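    The root cause is that C# type inference only looks at arguments, never at constraints, so U and V cannot be deduced. Replacing the constrained type parameters with the interface types lets the arguments carry the information (a sketch):

        public static bool SemanticSequenceEquals<T>(
            IEnumerable<T> lhs, IEnumerable<T> rhs, IEqualityComparer<T> comparator)
        {
            // T = Fruit is inferred from FruitEqualityComparer : IEqualityComparer<Fruit>
            return SemanticEquals(lhs, rhs, (l, r) => l.SequenceEqual(r, comparator));
        }

    With that signature, ConstraintChecker.SemanticSequenceEquals(lhs, rhs, new FruitEqualityComparer()) compiles without explicit type arguments.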

    Read the article

  • Help With LINQ: Mixed Joins and Specifying Default Values

    - by Corey O.
    I am trying to figure out how to do a mixed join in LINQ with specific access to two LINQ objects. Here is an example of how the actual T-SQL query might look: SELECT * FROM [User] AS [a] INNER JOIN [GroupUser] AS [b] ON [a].[UserID] = [b].[UserID] INNER JOIN [Group] AS [c] ON [b].[GroupID] = [c].[GroupID] LEFT JOIN [GroupEntries] AS [d] ON [a].[GroupID] = [d].[GroupID] WHERE [a].[UserID] = @UserID In the end, basically what I would like is an enumerable object full of GroupEntry objects. What I am interested in are the last two tables/objects in this query. I will be displaying Groups as a group header, and all of the Entries underneath their group heading. If there are no entries for a group, I still want to see that group as a header without any entries. Here's what I have so far, as a function: public void DisplayEntriesByUser(int user_id) { MyDataContext db = new MyDataContext(); IEnumerable<GroupEntries> entries = ( from user in db.Users where user.UserID == user_id join group_user in db.GroupUsers on user.UserID equals group_user.UserID into a from join1 in a join group in db.Groups on join1.GroupID equals group.GroupID into b from join2 in b join entry in db.Entries.DefaultIfEmpty() on join2.GroupID equals entry.GroupID select entry ); Group last_group_id = 0; foreach(GroupEntry entry in entries) { if (last_group_id == 0 || entry.GroupID != last_group_id) { last_group_id = entry.GroupID; System.Console.WriteLine("---{0}---", entry.Group.GroupName.ToString().ToUpper()); } if (entry.EntryID) { System.Console.WriteLine(" {0}: {1}", entry.Title, entry.Text); } } } The example above does not work quite as expected. There are 2 problems that I have not been able to solve: I still seem to be getting an INNER JOIN instead of a LEFT JOIN on the last join, and I am not getting any empty results, so groups without entries do not appear. I need to figure out a way so that I can fill in default values for blank sets of entries. That is, if there is a group without an entry, I would like to have a mostly blank entry returned, except that I'd want the EntryID to be null or 0, the GroupID to be that of the empty group it represents, and I'd need a handle on the entry.Group object (i.e. its parent, the empty Group object). Any help on this would be greatly appreciated. Note: table names and real-world representation were devised purely for this example, but their relations simplify what I'm trying to do.
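    For the shape described - every group as a header, entries optional - it is usually easier to drive the query from Groups and left-join the entries with a group join plus DefaultIfEmpty (a sketch using the question's names; db.GroupEntries is assumed to be the entries table):

        var rows = from g in db.Groups
                   join gu in db.GroupUsers on g.GroupID equals gu.GroupID
                   where gu.UserID == user_id
                   join e in db.GroupEntries on g.GroupID equals e.GroupID into ge
                   from e in ge.DefaultIfEmpty() // LEFT JOIN: e is null for empty groups
                   select new { Group = g, Entry = e };

        foreach (var row in rows)
        {
            // row.Group is always populated; row.Entry == null stands in for the
            // "blank entry with a null/0 EntryID" described above
        }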

    Read the article

  • Helping linqtosql datacontext use implicit conversion between varchar column in the database and tab

    - by user213256
    I am creating an MS SQL database table, "Orders", that will contain a varchar(50) field, "Value", containing a string that represents a slightly complex data type, "OrderValue". I am using a LINQ to SQL DataContext class, which automatically types the "Value" column as a string. I gave the "OrderValue" class implicit conversion operators to and from a string, so I can easily use implicit conversion with the LINQ to SQL classes like this: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // use implicit conversion to turn the string representation of the order // value into the complex data type. OrderValue value = order.Value; // adjust one of the fields in the complex data type value.Shipping += 10; // use implicit conversion to store the string representation of the complex // data type back in the LINQ to SQL order object order.Value = value; // save changes db.SubmitChanges(); However, I would really like to be able to tell the LINQ to SQL class to type this field as "OrderValue" rather than as "string". Then I would be able to avoid complex code and re-write the above as: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // The Value field is already typed as the "OrderValue" type rather than as string. // When a string value was read from the database table, it was implicitly converted // to "OrderValue" type. order.Value.Shipping += 10; // save changes db.SubmitChanges(); In order to achieve this desired goal, I looked at the DataContext designer and selected the "Value" field of the "Order" table. Then, in properties, I changed "Type" to "global::MyApplication.OrderValue". The "Server Data Type" property was left as "VarChar(50) NOT NULL". The project built without errors. However, when reading from the database table, I was presented with the following error message: Could not convert from type 'System.String' to type 'MyApplication.OrderValue'. at System.Data.Linq.DBConvert.ChangeType(Object value, Type type) at Read_Order(ObjectMaterializer`1 ) at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext() at System.Linq.Buffer`1..ctor(IEnumerable`1 source) at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source) at Example.OrdersProvider.GetOrders() at ... etc From the stack trace, I believe this error is happening while reading the data from the table. When presented with converting a string to my custom data type, even though the implicit conversion operators are present, the DBConvert class gets confused and throws an error. Is there anything I can do to help it not get confused and do the implicit conversion? Thanks in advance, and apologies if I have posted in the wrong forum. cheers / Ben
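    If DBConvert will not pick up the operators, one low-risk workaround (a sketch, not the only option) is to leave the mapped column as a plain string and expose the typed view from a partial class, where the implicit conversions do apply:

        // LINQ to SQL entity classes are generated as partial
        public partial class Order
        {
            public OrderValue TypedValue
            {
                get { return this.Value; }  // implicit string -> OrderValue
                set { this.Value = value; } // implicit OrderValue -> string
            }
        }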

    Read the article

  • Sending Messages to SignalR Hubs from the Outside

    - by Ricardo Peres
    Introduction

    You are by now probably familiar with SignalR, Microsoft's API for real-time web functionality. This is, in my opinion, one of the greatest products Microsoft has released in recent times. Usually, people log in to a site and enter some page which is connected to a SignalR hub. Then they can send and receive messages - not just text messages, mind you - to other users in the same hub. The server can also take the initiative to send messages to all or a specified subset of users on its own; this is known as server push. The normal flow is pretty straightforward; Microsoft has done a great job with the API, it's clean and quite simple to use. And the latter - the server taking the initiative - is also quite simple, it just involves a little more work.

    The Problem

    Sending messages is normally done from inside a hub - an instance of the Hub class - which is something we don't have if we are the server and we want to send a message to some user or group of users: the Hub instance is only instantiated in response to a client message.

    The Solution

    It is possible to acquire a hub's context from outside of an actual Hub instance, by calling GlobalHost.ConnectionManager.GetHubContext<T>(). This API allows us to broadcast messages to all connected clients (possibly excluding some), send messages to a specific client, and send messages to a group of clients.

    So, we have groups and clients, each identified by a string. Client strings are called connection ids, and group names are free-form, given by us. The problem with connection ids is that we do not know how they map to actual users. One way to achieve this mapping is by overriding the Hub's OnConnected and OnDisconnected methods and managing the association there. Here's an example:

        public class MyHub : Hub
        {
            private static readonly IDictionary<String, ISet<String>> users = new ConcurrentDictionary<String, ISet<String>>();

            public static IEnumerable<String> GetUserConnections(String username)
            {
                ISet<String> connections;

                users.TryGetValue(username, out connections);

                return (connections ?? Enumerable.Empty<String>());
            }

            private static void AddUser(String username, String connectionId)
            {
                ISet<String> connections;

                if (users.TryGetValue(username, out connections) == false)
                {
                    connections = users[username] = new HashSet<String>();
                }

                connections.Add(connectionId);
            }

            private static void RemoveUser(String username, String connectionId)
            {
                users[username].Remove(connectionId);
            }

            public override Task OnConnected()
            {
                AddUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId);
                return (base.OnConnected());
            }

            public override Task OnDisconnected()
            {
                RemoveUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId);
                return (base.OnDisconnected());
            }
        }

    As you can see, I am using a static field to store the mapping between a user and its possibly many connections - for example, multiple open browser tabs or even multiple browsers accessing the same page with the same login credentials. The user identity, as is normal in .NET, is obtained from the IPrincipal, which in SignalR hubs is stored in Context.Request.User. Of course, this property will only have a meaningful value if we enforce authentication.

    Another way to go is by creating a group for each user that connects:

        public class MyHub : Hub
        {
            public override Task OnConnected()
            {
                this.Groups.Add(this.Context.ConnectionId, this.Context.Request.User.Identity.Name);
                return (base.OnConnected());
            }

            public override Task OnDisconnected()
            {
                this.Groups.Remove(this.Context.ConnectionId, this.Context.Request.User.Identity.Name);
                return (base.OnDisconnected());
            }
        }

    In this case, we will have a one-to-one equivalence between users and groups. All connections belonging to the same user will fall in the same group. So, if we want to send messages to a user from outside an instance of the Hub class, we can do something like this, for the first option - user mappings stored in a static field:

        public void SendUserMessage(String username, String message)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();

            foreach (String connectionId in MyHub.GetUserConnections(username))
            {
                context.Clients.Client(connectionId).sendUserMessage(message);
            }
        }

    And for using groups, it's even simpler:

        public void SendUserMessage(String username, String message)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();

            context.Clients.Group(username).sendUserMessage(message);
        }

    Using groups has the advantage that the IHubContext interface returned from GetHubContext has direct support for groups; there is no need to send messages to individual connections. Of course, you can wrap both mapping options in a common API, perhaps exposed through IoC. One example of its interface might be:

        public interface IUserToConnectionMappingService
        {
            // associate and dissociate connections to users

            void AddUserConnection(String username, String connectionId);

            void RemoveUserConnection(String username, String connectionId);
        }

    SignalR has built-in dependency resolution, by means of the static GlobalHost.DependencyResolver property:

        // for using groups (in the Global class)
        GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new GroupsMappingService());

        // for using a static field (in the Global class)
        GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new StaticMappingService());

        // retrieving the current service (in the Hub class)
        var mapping = GlobalHost.DependencyResolver.Resolve<IUserToConnectionMappingService>();

    Now all you have to do is implement GroupsMappingService and StaticMappingService with the code I have shown here and change the SendUserMessage method to rely on the dependency resolver for the actual implementation. Stay tuned for more SignalR posts!
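    As a rough sketch of the groups-based implementation the post leaves as an exercise (hedged; the hub would delegate to this from OnConnected/OnDisconnected):

        public class GroupsMappingService : IUserToConnectionMappingService
        {
            public void AddUserConnection(String username, String connectionId)
            {
                var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
                context.Groups.Add(connectionId, username);
            }

            public void RemoveUserConnection(String username, String connectionId)
            {
                var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
                context.Groups.Remove(connectionId, username);
            }
        }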

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1 to 1 mapping between entity and table. You can do more complex mapping, but it becomes very messy and won't perform. What other options do you have?

    The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of Biztalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but an absolute killer for large ones. You therefore need a streaming approach.

    For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice: you can write nice LINQ queries instead of the XmlReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with - no streaming. Your code would look like this: we create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement.

        XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");

        foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value
                                          , OrderDate = order.Attribute("OrderDate").Value }
                               ))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty (http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx), which show you how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS.

    What I found odd was that when I ran it I got different numbers between the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that causes the pointer in the XmlReader to progress past the start of the element and thus skip elements:

        foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml", "Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value
                                          , OrderDate = order.Attribute("OrderDate").Value }
                               ))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    These look very similar, and they are; the key element is the method we are calling, StreamReader. This method is what gives us streaming: it returns an enumerable sequence of elements, and because of the way that LINQ works this results in the data being streamed in.

        static IEnumerable<XElement> StreamReader(String filename, string elementName)
        {
            using (XmlReader xr = XmlReader.Create(filename))
            {
                xr.MoveToContent();
                while (xr.Read()) // reads the first element
                {
                    while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)
                    {
                        XElement node = (XElement)XElement.ReadFrom(xr);

                        yield return node;
                    }
                }
                xr.Close();
            }
        }

    This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element, and then the inner while loop checks to see if the current element is the type we want. If not, we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub-elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially, once one element has been read we need to check whether we are still on the same element type and name (the inner loop). This was Mike's mistake: if we called .Read again we would advance the XmlReader beyond the start of the element and so the ReadFrom method wouldn't work.

    So with the code above you can use whatever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.

    Read the article

< Previous Page | 4 5 6 7 8 9  | Next Page >