Search Results

Search found 459 results on 19 pages for 'bounded contexts'.

Page 17/19

  • xerces-c: Xml parsing multiple files

    - by user459811
    I'm attempting to learn xerces-c and was following this tutorial online. http://www.yolinux.com/TUTORIALS/XML-Xerces-C.html I was able to get the tutorial to compile and run through a memory checker (valgrind) with no problems; however, when I altered the program slightly, the memory checker reported some potentially leaked bytes. I only added a few extra lines to main to allow the program to read two files instead of one. int main() { string configFile="sample.xml"; // stat file. Get ambiguous segfault otherwise. GetConfig appConfig; appConfig.readConfigFile(configFile); cout << "Application option A=" << appConfig.getOptionA() << endl; cout << "Application option B=" << appConfig.getOptionB() << endl; // Added code configFile = "sample1.xml"; appConfig.readConfigFile(configFile); cout << "Application option A=" << appConfig.getOptionA() << endl; cout << "Application option B=" << appConfig.getOptionB() << endl; return 0; } I was wondering why adding the extra lines of code to read another XML file results in the following output. ==776== Using Valgrind-3.6.0 and LibVEX; rerun with -h for copyright info ==776== Command: ./a.out ==776== Application option A=10 Application option B=24 Application option A=30 Application option B=40 ==776== ==776== HEAP SUMMARY: ==776== in use at exit: 6 bytes in 2 blocks ==776== total heap usage: 4,031 allocs, 4,029 frees, 1,092,045 bytes allocated ==776== ==776== 3 bytes in 1 blocks are definitely lost in loss record 1 of 2 ==776== at 0x4C28B8C: operator new(unsigned long) (vg_replace_malloc.c:261) ==776== by 0x5225E9B: xercesc_3_1::MemoryManagerImpl::allocate(unsigned long) (MemoryManagerImpl.cpp:40) ==776== by 0x53006C8: xercesc_3_1::IconvGNULCPTranscoder::transcode(unsigned short const*, xercesc_3_1::MemoryManager*) (IconvGNUTransService.cpp:751) ==776== by 0x4038E7: GetConfig::readConfigFile(std::string&) (in /home/bonniehan/workspace/test/a.out) ==776== by 0x403B13: main (in /home/bonniehan/workspace/test/a.out) ==776== ==776== 3 bytes in 1 blocks are definitely lost in loss record 2 of 2 ==776== at 0x4C28B8C: operator new(unsigned long) (vg_replace_malloc.c:261) ==776== by 0x5225E9B: xercesc_3_1::MemoryManagerImpl::allocate(unsigned long) (MemoryManagerImpl.cpp:40) ==776== by 0x53006C8: xercesc_3_1::IconvGNULCPTranscoder::transcode(unsigned short const*, xercesc_3_1::MemoryManager*) (IconvGNUTransService.cpp:751) ==776== by 0x40393F: GetConfig::readConfigFile(std::string&) (in /home/bonniehan/workspace/test/a.out) ==776== by 0x403B13: main (in /home/bonniehan/workspace/test/a.out) ==776== ==776== LEAK SUMMARY: ==776== definitely lost: 6 bytes in 2 blocks ==776== indirectly lost: 0 bytes in 0 blocks ==776== possibly lost: 0 bytes in 0 blocks ==776== still reachable: 0 bytes in 0 blocks ==776== suppressed: 0 bytes in 0 blocks ==776== ==776== For counts of detected and suppressed errors, rerun with: -v ==776== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 2 from 2)
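
    The two 3-byte blocks reported lost inside IconvGNULCPTranscoder::transcode are consistent with readConfigFile() transcoding the option values via XMLString::transcode and overwriting the stored char* pointers on the second call without releasing the first ones. A minimal sketch of a fix, assuming the tutorial's GetConfig keeps the transcoded strings in char* members; the member names m_OptionA/m_OptionB and the xmlchOptionA/xmlchOptionB attribute values are assumed here, not taken from the question:

        #include <xercesc/util/XMLString.hpp>
        using namespace xercesc;

        void GetConfig::readConfigFile(std::string& configFile)
        {
            // ... parse configFile and obtain the attribute values as XMLCh* ...

            // Release the strings transcoded by a previous call before overwriting
            // the pointers; otherwise those heap blocks are leaked.
            if (m_OptionA) { XMLString::release(&m_OptionA); }
            if (m_OptionB) { XMLString::release(&m_OptionB); }

            m_OptionA = XMLString::transcode(xmlchOptionA); // caller owns the returned char*
            m_OptionB = XMLString::transcode(xmlchOptionB);
        }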

    Read the article

  • How to configure Jetty to reload a WebAppContext when classes are changed

    - by Guss
    I'm developing a web application and I run Jetty as the development and testing environment when I develop under Eclipse. When I make changes to Java classes, Eclipse automatically compiles them to the build directory, but Jetty won't see the changes until I stop and start the server. I know that Jetty supports "hot deployment" using ContextDeployer that will refresh updated application contexts, but it relies on a context file in a context directory being updated - which is not very useful in my case. Is there a way to set up Jetty so that it will reload the web app when any of the classes it uses is updated? My current jetty.xml looks something like this: <?xml version="1.0"?> <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd"> <Configure id="Server" class="org.eclipse.jetty.server.Server"> <Set name="ThreadPool"><!-- bla bla --></Set> <Call name="addConnector"><!-- bla bla --></Call> <Set name="handler"> <New id="Handlers" class="org.eclipse.jetty.server.handler.HandlerCollection"> <Set name="handlers"> <Array type="org.eclipse.jetty.server.Handler"> <Item> <New id="webapp" class="org.eclipse.jetty.webapp.WebAppContext"> <Set name="displayName">My Web App</Set> <Set name="resourceBase">src/main/webapp</Set> <Set name="descriptor">src/main/webapp/WEB-INF/web.xml</Set> <Set name="contextPath">/mywebapp</Set> </New> </Item> <Item> <New id="DefaultHandler" class="org.eclipse.jetty.server.handler.DefaultHandler"/> </Item> </Array> </Set> </New> </Set> </Configure>
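
    If a plain jetty.xml setup won't do it, one workaround is to run Jetty embedded for development and bounce the WebAppContext whenever the compiled classes change; restarting the context recreates its classloader in most Jetty versions. A rough sketch using the JDK's WatchService; the port, paths and single-directory (non-recursive) watch are assumptions for illustration, not taken from the question:

        import java.nio.file.*;
        import org.eclipse.jetty.server.Server;
        import org.eclipse.jetty.webapp.WebAppContext;

        public class DevServer {
            public static void main(String[] args) throws Exception {
                Server server = new Server(8080);
                WebAppContext webapp = new WebAppContext("src/main/webapp", "/mywebapp");
                server.setHandler(webapp);
                server.start();

                // Watch the directory Eclipse compiles into (path assumed) and
                // restart the context so it picks up the rewritten .class files.
                WatchService watcher = FileSystems.getDefault().newWatchService();
                Paths.get("target/classes").register(watcher,
                        StandardWatchEventKinds.ENTRY_CREATE,
                        StandardWatchEventKinds.ENTRY_MODIFY);
                while (true) {
                    WatchKey key = watcher.take(); // blocks until something changes
                    key.pollEvents();              // drain the events for this key
                    key.reset();
                    webapp.stop();                 // restart recreates the webapp classloader
                    webapp.start();
                }
            }
        }

    This is heavier than true hot-swapping, but it avoids a full server restart and needs no context XML file.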

    Read the article

  • IOC/Autofac problem

    - by Krazzy
    I am currently using Autofac, but am open to commentary regarding other IOC containers as well. I would prefer a solution using Autofac if possible. I am also somewhat new to IOC, so I may be grossly misunderstanding what I should be using an IOC container for. Basically, the situation is as follows: I have a topmost IOC container for my app. I have a tree of child containers/scopes where I would like the same "service" (IWhatever) to resolve differently depending on which level in the tree it is resolved at. Furthermore, if a service is not registered at some level in the tree, I would like the tree to be traversed upward until a suitable implementation is found. Furthermore, when constructing a given component, it is quite possible that I will need access to the parent container/scope. In many cases the component that I'm registering will have a dependency on the same or a different service in a parent scope. Is there any way to express this dependency with Autofac? Something like: builder.Register(c=> { var parentComponent = ?.Resolve<ISomeService>(); var childComponent = new ConcreteService(parentComponent, args...); return childComponent; }).As<ISomeService>(); I can't get anything similar to the above pseudocode to work for several reasons: A) It seems that all levels in the scope tree share a common set of registrations. I can't seem to find a way to make a given registration confined to a certain "scope". B) I can't seem to find a way to get a hold of the parent scope of a given scope. I CAN resolve ILifetimeScope in the container and then cast it to a concrete LifetimeScope instance which provides its parent scope, but I'm guessing it is probably not meant to be used this way. Is this safe? C) I'm not sure how to tell Autofac which container owns the resolved object. For many components I would like the component to be "owned" by the scope in which it is constructed. Could tagged contexts help me here? Would I have to tag every level of the tree with a unique tag? This would be difficult because the tree depth is determined at runtime. Sorry for the extremely lengthy question. In summary: 1) Is there any way to do what I want to do using Autofac? 2) Is there another container more suited to this kind of dependency structure? 3) Is IOC the wrong tool for this altogether?
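
    For what it's worth, Autofac's ILifetimeScope.BeginLifetimeScope overload that takes a ContainerBuilder lambda lets each child scope carry registrations that exist only in that scope, and the lambda can close over the parent scope to pull in the parent's implementation. A sketch of the idea, not a definitive answer; RootService is a placeholder, while ISomeService and ConcreteService are the names from the question:

        using Autofac;

        var builder = new ContainerBuilder();
        builder.RegisterType<RootService>().As<ISomeService>();
        IContainer container = builder.Build();

        ILifetimeScope parent = container;
        using (ILifetimeScope child = parent.BeginLifetimeScope(b =>
        {
            // This registration is visible only in 'child' (and scopes created from it).
            b.Register(c =>
            {
                var parentComponent = parent.Resolve<ISomeService>(); // captured parent scope
                return new ConcreteService(parentComponent /*, args... */);
            }).As<ISomeService>();
        }))
        {
            var childService = child.Resolve<ISomeService>();  // ConcreteService wrapping the parent's
            var rootService = parent.Resolve<ISomeService>();  // still RootService at the root
        }

    Repeating the pattern at each level gives a tree of scopes where anything not overridden falls back to the ancestors' registrations, and each scope owns and disposes the instances it created.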

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly, the main screen has a table view, upon selecting a row it segues to another table view that displays information relevant for the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view. The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information, usually it's a SIGSEGV deep in the Core Data stack. The table below shows the actual order of events that happen when the detail table view controller is loaded: Main Thread Background Thread viewDidLoad Get JSON data (using AFNetworking) Create child NSManagedObjectContext (MOC) Parse JSON data Insert managed objects in child MOC Save child MOC Post import completion notification Receive import completion notification Save parent MOC Perform fetch and reload table view Delete old managed objects in child MOC Save child MOC Post deletion completion notification Receive deletion completion notification Save parent MOC Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5: NSManagedObjectContext *child = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType]; [child setParentContext:self.managedObjectContext]; [child performBlock:^{ // Create importer instance, passing it the child MOC... }]; The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification which is observed by the detail table view controller. When this notification is posted the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded. One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly so would appreciate any advice.
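
    It's hard to be sure without the full project, but one pattern that usually removes this kind of race: make the parent context a main-queue context (NSMainQueueConcurrencyType) and funnel every save on it through performBlock:, so the import's save can never interleave with a fetch the table view is doing on the main thread. A rough sketch of that shape; the coordinator variable and the import details are assumed:

        // Parent context bound to the main queue (created once).
        NSManagedObjectContext *parentMOC =
            [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
        parentMOC.persistentStoreCoordinator = coordinator; // assumed to exist

        // Private-queue child used by the importer.
        NSManagedObjectContext *child =
            [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        child.parentContext = parentMOC;

        [child performBlock:^{
            // ... parse the JSON and insert managed objects into 'child' ...
            NSError *childError = nil;
            if ([child save:&childError]) {
                // Push changes up on the parent's own queue rather than saving it
                // from whichever thread delivered the completion notification.
                [parentMOC performBlock:^{
                    NSError *parentError = nil;
                    [parentMOC save:&parentError];
                    // Fetches / NSFetchedResultsController reloads are safe from here (main queue).
                }];
            }
        }];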

    Read the article

  • Sound Complete Not Firing (AS3)

    - by JasonMc92
    I have a bit of a quandary. I need to call a function inside a MovieClip once a particular sound has finished playing. The sound is played via a sound channel in an external class which I have imported. Playback is working perfectly. Here is the relevant code from my external class, Sonus. public var SFXPRChannel:SoundChannel = new SoundChannel; var SFXPRfishbeg:Sound = new sfxpr_fishbeg(); var SFXPRfishmid:Sound = new sfxpr_fishmid(); var SFXPRfishend3:Sound = new sfxpr_fishend3(); var SFXPRfishend4:Sound = new sfxpr_fishend4(); public function PlayPrompt(promptname:String):void { var sound:String = "SFXPR" + promptname; SFXPRChannel = this[sound].play(); } This is called via an import in the document class "osr", thus I access it in my project via "osr.Sonus.---". In my project, I have the following line of code. osr.Sonus.SFXPRChannel.addEventListener(Event.SOUND_COMPLETE, promptIsFinished); function prompt():void { var level = osr.Gradua.Fetch("fish", "arr_con_level"); Wait(true); switch(level) { case 1: osr.Sonus.PlayPrompt("fishbeg"); break; case 2: osr.Sonus.PlayPrompt("fishmid"); break; case 3: osr.Sonus.PlayPrompt("fishend3"); break; case 4: osr.Sonus.PlayPrompt("fishend4"); break; } } function Wait(yesno):void { gui.Wait(yesno); } function promptIsFinished(evt:Event):void { Wait(false); } osr.Sonus.PlayPrompt(...) and gui.Wait(...) both work perfectly, as I use them in other contexts in this part of the project without error. Basically, after the sound finishes playing, I need Wait(false); to be called, but the event listener does not appear to be "hearing" the SOUND_COMPLETE event. Did I make a mistake somewhere? For the record, due to my project structure, I cannot call the appropriate Wait(...) function from within Sonus. Help?
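
    One thing to check: PlayPrompt assigns a brand-new SoundChannel every time play() is called, so a listener that was added to the old osr.Sonus.SFXPRChannel object never hears the new channel's SOUND_COMPLETE. A possible fix, assuming Sonus can be edited, is to return the channel from PlayPrompt and attach the listener to that returned channel after each call:

        // In Sonus: hand back the channel created by this particular play() call.
        public function PlayPrompt(promptname:String):SoundChannel {
            var sound:String = "SFXPR" + promptname;
            SFXPRChannel = this[sound].play();
            return SFXPRChannel;
        }

        // In the MovieClip: listen on the freshly returned channel, not a stale reference.
        import flash.events.Event;
        import flash.media.SoundChannel;

        var channel:SoundChannel = osr.Sonus.PlayPrompt("fishbeg");
        channel.addEventListener(Event.SOUND_COMPLETE, promptIsFinished);

        function promptIsFinished(evt:Event):void {
            Wait(false);
        }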

    Read the article

  • h:selectOneMenu not populating a 'selected' item

    - by dann.dev
    I'm having trouble with an h:selectOneMenu not having a selected item when there is already something set on the backing bean. I am using Seam and have specified a custom converter. When working on my 'creation' page, everything works fine, something from the menu can be selected, and when the page is submitted, the correct value is assigned and persisted to the database as well. However, when I work on my 'edit' page, the menu's default selection is not the current selection. I have gone through and confirmed that something is definitely set, etc. My selectOneMenu looks like this: <h:selectOneMenu id="selVariable" value="#{customer.variableLookup}" converter="#{variableLookupConverter}"> <s:selectItems var="source" value="#{customerReferenceHelper.variableLookups()}" label="#{source.name}" /> </h:selectOneMenu> And the converter is below. It is very simple and just turns the id from string to int and back, etc.: @Name( "variableLookupConverter" ) public class VariableLookupConverter implements Serializable, Converter { @In private CustomerReferenceHelper customerReferenceHelper; @Override public Object getAsObject( FacesContext arg0, UIComponent arg1, String arg2 ) { VariableLookup variable = null; try { if ( "org.jboss.seam.ui.NoSelectionConverter.noSelectionValue".equals( arg2 ) ) { return null; } CustomerReferenceHelper customerReferenceHelper = ( CustomerReferenceHelper ) Contexts.getApplicationContext().get( "customerReferenceHelper" ); Integer id = Integer.parseInt( arg2 ); variable = customerReferenceHelper.getVariable( id ); } catch ( NumberFormatException e ) { log.error( e, e ); } return variable; } @Override public String getAsString( FacesContext arg0, UIComponent arg1, Object arg2 ) { String result = null; VariableLookup variable = ( VariableLookup ) arg2; Integer id = variable.getId(); result = String.valueOf( id ); return result; } } I've seen a few things about it possibly being the equals() method on the class (that doesn't add up with everything else working, but I overrode it anyway as below, where the hashcode is just the id, and the id is a unique identifier for each item). Equals method: @Override public boolean equals( Object other ) { if ( other == null ) { return false; } if ( this == other ) { return true; } if ( !( other instanceof VariableLookup ) ) { return false; } VariableLookup otherVariable = ( VariableLookup ) other; if ( this.hashCode() == otherVariable.hashCode() ) { return true; } return false; } I'm at my wits' end with this; I can't find what I could have missed! Any help would be much appreciated.
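
    One more thing worth checking: the menu can only pre-select the current value if the object returned by the converter is equals()-equal to one of the items in the s:selectItems list, and the equals() above only works if hashCode() really is derived from the id. A sketch of a matched pair, not the actual project code:

        @Override
        public int hashCode() {
            // Must agree with equals(): same id => same hash code, otherwise the
            // menu never matches the converted value against the list items.
            return getId() == null ? 0 : getId().hashCode();
        }

        @Override
        public boolean equals(Object other) {
            if (this == other) return true;
            if (!(other instanceof VariableLookup)) return false;
            VariableLookup that = (VariableLookup) other;
            // Compare ids directly rather than going through hashCode().
            return getId() != null && getId().equals(that.getId());
        }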

    Read the article

  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework’s String.Equals and String.Compare methods, do you use an overload that takes a StringComparison enumeration value? If not, you should be, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three different major categories: Culture-sensitive comparison using a specific culture, defaulted to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase) Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase) Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase) There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you’re at all interested in the topic, these two MSDN articles are worth a read: Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx Those articles cover the technical details of string comparison well enough that I’m not going to reiterate them here other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you’re comparing strings that were entered by or will be displayed to users and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The “Best Practices For Using Strings in the .NET Framework” article has the following to say: “On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting.” I don’t know about you, but I feel like that paragraph is a bit lacking. Are there really any “real world” reasons to use the invariant culture comparison? I think the answer to this question is, “yes”, but in order to understand why we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed. Example: Prototype Names Let’s say that you work for a large multi-national toy company with branch offices in 10 different countries.
Each year the company would work on 15-25 new toy prototypes, each of which is assigned a “code name” while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke. Each new prototype will get a code name that begins with a letter following the previously created name using the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names would start back at “A”. The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese). To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location’s company intranet site. Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year’s list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed? Sorting the list with a culture-sensitive comparison using the default configured culture on each country’s intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there’s no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn’t always be consistent between different users. Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let’s say that the current year’s prototype name list looks like this: Antílope (Spanish) Babouin (French) Čahoun (Czech) Diamond (English) Flosse (German) If you were to sort this list using ordinal rules you’d end up with: Antílope Babouin Diamond Flosse Čahoun This sort is no good because the entry for “C” appears at the bottom of the list after “F”. This is because the Czech entry for the letter “C” makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the “C” with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited for the invariant culture comparison.
The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from the “Best Practices For Using Strings in the .NET Framework” makes a lot more sense: One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
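
    A quick way to see the difference is to sort the article's prototype list with the different comparers; this is a throwaway sketch, not code from the original post:

        using System;
        using System.Collections.Generic;

        var names = new List<string> { "Antílope", "Babouin", "Čahoun", "Diamond", "Flosse" };

        names.Sort(StringComparer.Ordinal);
        Console.WriteLine(string.Join(", ", names));
        // Ordinal: Čahoun falls to the end because its first code point is outside ASCII.

        names.Sort(StringComparer.InvariantCulture);
        Console.WriteLine(string.Join(", ", names));
        // Invariant culture: Čahoun sorts in with the other "C" entries, and the
        // order is the same on every machine regardless of the current culture.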

    Read the article

  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx In this post I’m going to outline a few common methods that can be used to increase the coverage of your test suite. This won’t be yet another post on why you should be doing testing; there are plenty of those types of posts already out there. Assuming you know you should be testing, then comes the problem of how do I actually fit that into my day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it? There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows to writing a program. When writing a program you can do it from a top-down approach where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach where the low level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid in certain contexts. Each approach you are skilled at applying is another tool in your tool belt. The more vectors of attack you have on a problem, the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and level of quality in general. Think of each workflow as an opportunity that is available for you to take. Test workflows basically fall into 2 categories: test first or test after. Test first is the best approach. However, this post isn’t about the one and only best approach. I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests. In this post I’ll enumerate some test-after workflows. In my next post I’ll cover test-first. Bug Reporting When someone calls you up or forwards you an email with a vague description of a bug, it’s usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Often reproduction plans, when written down, might skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug. The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system (a small made-up example appears at the end of this post). This workflow can easily be extended for Enhancement Requests as well as Bug Reporting. Exploratory Testing Exploratory testing comes in when you aren’t sure how the system will behave in a new scenario. The scenario wasn’t planned for in the initial system requirements and there isn’t an existing test for it. By definition the system behaviour is “undefined”. So write a new unit test to define that behaviour. Add assertions to the tests to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite. Examples This workflow is especially good when developing APIs. When you are finally done with your production API, then comes the job of writing documentation on how to consume the API. Good documentation will also include code examples. Don’t let these code examples merely exist in some accompanying manual; implement them in a test suite.
Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code. Smoke Tests Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade, the end users will be hosed and they will be scratching their heads as to how it could be possible that an update got released with this core functionality broken. The tests for this core functionality are referred to as “smoke tests”. It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers. Coverage Analysis Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item. The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage Analysis shouldn’t be used strictly for numbers reporting. Companies shouldn’t set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless. The analysis tool will break down the coverage by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low hanging fruit to be picked. There may be some classes or methods that aren’t being tested, which could easily be. The other reaction type is “OMG”. This is where you find a critical piece of code that isn’t under test. In both cases, go and add the missing tests. Test Refactoring The general theme of this post up to this point has been how to add more and more tests to a test suite. I’ll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn’t interfere with this primary goal. Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read, then change them. Look to remove duplication. Duplicate setup code between two or more test methods can be moved to a shared function. Entire test methods can be removed if it is found that the scenario it tests is covered by other tests. It’s OK to delete a test that isn’t pulling its own weight anymore. Remember to only start refactoring when all the tests are green. Don’t refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system. The unchanging, passing production code serves as the tests for the test suite while refactoring the tests.
As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit this into the standard red-green-refactor cycle. The refactor step not only applies to production code but also to the tests, but not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy). That about covers most of the test-after workflows I can think of. In my next post I’ll get into test-first workflows.
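
    To make the Bug Reporting workflow above concrete, the "reproduction plan" can literally be a failing test committed alongside the report. A made-up .NET example; the issue number, DiscountCalculator type and expected values are purely illustrative:

        using Xunit;

        public class Issue1234_VolumeDiscount
        {
            // Repro for the bug report: an order totalling exactly 100.00 should
            // receive the volume discount but currently comes back at full price.
            [Fact]
            public void Order_of_exactly_100_gets_the_volume_discount()
            {
                var calculator = new DiscountCalculator();   // hypothetical type under test

                decimal discounted = calculator.Apply(100.00m);

                Assert.Equal(90.00m, discounted);             // fails until the bug is fixed
            }
        }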

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties: Streams with ‘open’ inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName)) Streams with ‘partially bound’ inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …)) Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC) The stream may be embedded (where Server.Create is used) The stream may be remote (where Server.Connect is used) When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: Add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are: IStreamable<>, which logically represents a temporal stream. IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type. The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types! SelectMany StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL’s “CROSS APPLY” operator): static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector) It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn’t supported, its type in StreamInsight 1.0 – 2.0 wasn’t carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers: static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector) Using Unit as the type for the parameter accurately reflects StreamInsight’s capabilities: static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector) For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads: from x in xs from y in ys select f(x, y) Top-K The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows: (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension.
The following compiles but fails at runtime: (from win in xs.SnapshotWindow() from x in win orderby x.A select win).Take(k) The Take operator in 2.1 is applied to the window rather than the stream: Before After (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) from win in xs.SnapshotWindow() from b in     (from x in win     orderby x.A     select x.B).Take(k) select b Multicast We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways: Implicit Explicit var ys = from x in xs          where x.A > 1          select x; var zs = from y1 in ys          from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))          select y1 + y2; var ys = from x in xs          where x.A > 1          select x; var zs = ys.Multicast(ys1 =>     from y1 in ys1     from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))     select y1 + y2; Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation. Default window policies Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime. Before After xs.SnapshotWindow(     WindowInputPolicy.ClipToWindow,     SnapshotWindowInputPolicy.Clip) xs.SnapshotWindow() xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.PointAlignToWindowEnd) xs.TumblingWindow(     TimeSpan.FromSeconds(1)) xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.ClipToWindowEnd) Not supported … LeftAntiJoin Representation of LASJ as a correlated sub-query in the LINQ surface is problematic as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported ‘IsEmpty()’ operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead: Before After from x in xs where     (from y in ys     where x.A > y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, (x, y) => x.A > y.B) from x in xs where     (from y in ys     where x.A == y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, x => x.A, y => y.B) ApplyWithUnion The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads: Before After xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count()) xs.GroupBy(x => x.A).SelectMany(     gs =>     from win in gs.SnapshotWindow()     select win.Count()) xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload }) from x in xs group x by x.A into gs from win in gs.SnapshotWindow() select new { gs.Key, Count = win.Count() } Alternate UDO syntax The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. 
Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form: from win in xs.SnapshotWindow() from y in MyUdo(win) select y Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream: from win in xs.SnapshotWindow() select MyUdo(win) The “many-or-one” confusion is exemplified by the following example that compiles but fails at runtime: from win in xs.SnapshotWindow() select MyUdo(win) + win.Count() The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one. Original syntax New alternate syntax from win in xs.SnapshotWindow() select win.UdoProxy(1) from win in xs.SnapshotWindow() from y in win.UserDefinedOperator(() => new Udo(1)) select y -or- from win in xs.SnapshotWindow() from y in win.UdoMacro(1) select y Notice that this formulation also sidesteps the dynamic type pitfalls of the existing “proxy method” approach to UDOs, in which the type of the UDO implementation (TInput, TOuput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method. UDSO syntax UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface. Before After xs.Scan(new Udso()) xs.Scan(() => new Udso()) Name changes ShiftEventTime => AlterEventStartTime: The alter event lifetime overload taking a new start time value has been renamed. CountByStartTimeWindow => CountWindow

    Read the article

  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss by locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonably well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node. But when thinking about accesses to heavily-written communication variables we normally consider what caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses and cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed, so they are likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought. Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox, and having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case and that NUMA placement of the communication variables was actually quite important. For background, an X4800 system consists of 8 Intel X7560 Xeons. Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's no central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant.
Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip-time and throughput rate between the client and server. In this benchmark the server simply acts as a simple transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads. The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies. Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needed to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over the QPI link, as the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path. Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and are not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies. 
There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it also may require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping. While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually there are multiple contra-rotating rings). And the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly of the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
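
    For readers who want to experiment with the "home the written-to variable with its writer" placement on Linux, libnuma makes the homing explicit. This is a bare-bones sketch, not the benchmark harness used above; the node numbers are arbitrary and the program must be linked with -lnuma:

        #include <numa.h>
        #include <stdio.h>

        /* Home each mailbox with its writer: the request word on the client's node,
         * the response word on the server's node. */
        int main(void)
        {
            if (numa_available() < 0) {
                fprintf(stderr, "NUMA not supported on this system\n");
                return 1;
            }
            int client_node = 0, server_node = 1;  /* arbitrary choices for the sketch */

            volatile long *request  = numa_alloc_onnode(sizeof(long), client_node);
            volatile long *response = numa_alloc_onnode(sizeof(long), server_node);

            /* ... bind the client and server threads to those nodes (e.g. with
             * numa_run_on_node()) and busy-wait on the mailboxes as described above ... */

            numa_free((void *)request, sizeof(long));
            numa_free((void *)response, sizeof(long));
            return 0;
        }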

    Read the article

  • Experiences with learning Chinese

    - by Greg Low
    I've had a few friends asking me about learning Chinese and what I've found works and doesn't work. I was answering a question on a mailing list today and I thought I should post this info where it might be useful to many. The question that was initially asked was whether Rosetta Stone was useful but I've provided much more info on learning the language here. I’ve used Rosetta Stone with Chinese but it’s really hard to know whether to recommend it or not. Rosetta Stone works the same way in all languages. They show you photos and then let you both see and hear the target language and get you to work out what they’re talking about. The thinking is that that’s how children learn. However, at first, I found it very frustrating. I’d be staring at photos trying to work out what they were really trying to get at. Sometimes it’s far from obvious. I could not have survived without Google Translate open at the same time. The other weird thing is that the photos are from a mixture of countries. While that’s good in a way, it also means that they are endlessly showing pictures of something that would never happen in the target language and culture. For any language, constant interaction with a speaker of the target language is needed. Rosetta Stone has a “Studio” option. That’s the best part of the program. In my case, it lets me connect around twice a week to a live online class from Beijing. Classes usually have the teacher plus two to four students. You get some Studio access with the initial packages but need to purchase it for ongoing use. I find it very inexpensive. It seems to work out to about $70 (AUD/USD) for six months. That’s a real bargain. The other downside to Rosetta Stone is that they tend to teach very formal language, but as with other languages, that’s not how the locals speak. It might have been correct at one point but no-one actually says that. As an example, Rosetta Stone teach Gonggòng qìche (pronounced roughly like “gong gong chee chure”) for bus. Most of my friends from areas like Taiwan would just say Gongche. Google Translate says Zongxiàn (pronounced somewhat like “dzong sheean”) instead. Mind you, the Rosetta Stone option isn't really as bad as "omnibus"; it's more like saying "public bus". If you say the option they provide, people would understand you. I also listen to ChinesePod in the car. They also have SpanishPod. Each podcast is about five minutes of spoken conversation. It is very good for providing current language. Another resource I use is local Meetup groups. Most cities have these and for a variety of languages. It’s way less structured (just standard conversation) but good for getting interaction. The obvious challenge for Asian languages is reading/writing. The input editors for Chinese that are part of Windows are excellent. Many of my Chinese friends speak fluently but cannot read or write. I was determined to learn to do both. For writing, I’m talking about on a computer, not with a pen. (Mind you, I can barely write English with a pen nowadays). When using Rosetta Stone, you can choose to have the Chinese words displayed in pinyin (Wo xihuan xuéxí zhongguó) or in Chinese characters (我喜欢学习中国) or both. This year, I’ve been forcing myself to just use the Chinese characters. I use a pinyin input editor in Windows though, as it’s very fast. (The character recognition input in the iPad is also amazing). Notice from the example that I provided above that the pronunciation of the pinyin isn’t that obvious to us at first either.
Since changing to only using characters, I find I can now read many more Chinese characters fluently. It’s a major challenge though. I can read about 300 now and yet you need around 2,500 to be able to read a newspaper fairly well. Tones are a major issue for some Asian languages. Mandarin has four tones (plus a neutral tone) and there is a major difference in meaning between two words that are spelled the same in pinyin but with different tones. For example, Ma (3rd tone, 马) is a horse, Ma (1st tone, 妈) is like “mom”, and ma (neutral tone, 吗) is a question mark and so on. Clearly you don’t want to mix these up. As in English, they also have words that do sound the same but mean different things in different contexts. What’s interesting is that even though we see two words that differ only by tone as very similar, to a native speaker, if you say the right words with the wrong tone, you might as well have said a completely different word. My wife’s dialect of Chinese has eight tones. It’s much worse. The reason I’m so keen to learn to read/write Chinese is that even though the different dialects are pronounced so differently that speakers of one dialect often cannot understand another dialect, the writing is generally the same. The only difference is that many years ago, the Chinese government created a simplified set of characters for some of the most commonly used ones. Older Chinese and most Cantonese speakers often struggle with the simplified characters. This is the simplified form of “three apples”: 三个苹果   This is the traditional form of the same words: 三個蘋果  Note that two of the characters are the same but the middle two are quite different. For most languages, the best thing is to watch current movies in the target language but to watch them with the target language as subtitles, not your native language. You want to know what they actually said, not what it roughly means (which is what the English subtitle would give you). The difficulty with Asian languages like Chinese is that you have the added challenge of understanding the subtitles when they are written in the target language. I wish there were Mandarin Chinese movies with pinyin subtitles. For learning to read characters, I also recommend HanCard on the iPad. It is targeted at the HSK language proficiency levels. (I’m intending to take the first HSK exam as soon as I’m ready). Hope that info helps someone get started.

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties: Streams with ‘open’ inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName) Streams with ‘partially bound’ inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …)) Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC) The stream may be embedded (where Server.Create is used) The stream may be remote (where Server.Connect is used) When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: Add a fourth variety (use CepStream<> to represent streams that are bound the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are: IStreamable<>, which logically represents a temporal stream. IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type. The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do no impact existing StreamInsight applications using the existing types! SelectMany StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL’s “CROSS APPLY” operator): static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector) It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn’t supported, its type in StreamInsight 1.0 – 2.0 wasn’t carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers: static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector) Using Unit as the type for the parameter accurately reflects the StreamInsight’s capabilities: static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector) For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads: from x in xs from y in ys select f(x, y) Top-K The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows: (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. 
The following compiles but fails at runtime: (from win in xs.SnapshotWindow() from x in win orderby x.A select win).Take(k) The Take operator in 2.1 is applied to the window rather than the stream: Before After (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) from win in xs.SnapshotWindow() from b in     (from x in win     orderby x.A     select x.B).Take(k) select b Multicast We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways: Implicit Explicit var ys = from x in xs          where x.A > 1          select x; var zs = from y1 in ys          from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))          select y1 + y2; var ys = from x in xs          where x.A > 1          select x; var zs = ys.Multicast(ys1 =>     from y1 in ys1     from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))     select y1 + y2; Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation. Default window policies Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime. Before After xs.SnapshotWindow(     WindowInputPolicy.ClipToWindow,     SnapshotWindowInputPolicy.Clip) xs.SnapshotWindow() xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.PointAlignToWindowEnd) xs.TumblingWindow(     TimeSpan.FromSeconds(1)) xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.ClipToWindowEnd) Not supported … LeftAntiJoin Representation of LASJ as a correlated sub-query in the LINQ surface is problematic as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported ‘IsEmpty()’ operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead: Before After from x in xs where     (from y in ys     where x.A > y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, (x, y) => x.A > y.B) from x in xs where     (from y in ys     where x.A == y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, x => x.A, y => y.B) ApplyWithUnion The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads: Before After xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count()) xs.GroupBy(x => x.A).SelectMany(     gs =>     from win in gs.SnapshotWindow()     select win.Count()) xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload }) from x in xs group x by x.A into gs from win in gs.SnapshotWindow() select new { gs.Key, Count = win.Count() } Alternate UDO syntax The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. 
Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form: from win in xs.SnapshotWindow() from y in MyUdo(win) select y Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream: from win in xs.SnapshotWindow() select MyUdo(win) The “many-or-one” confusion is exemplified by the following example that compiles but fails at runtime: from win in xs.SnapshotWindow() select MyUdo(win) + win.Count() The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one. Original syntax New alternate syntax from win in xs.SnapshotWindow() select win.UdoProxy(1) from win in xs.SnapshotWindow() from y in win.UserDefinedOperator(() => new Udo(1)) select y -or- from win in xs.SnapshotWindow() from y in win.UdoMacro(1) select y Notice that this formulation also sidesteps the dynamic type pitfalls of the existing “proxy method” approach to UDOs, in which the type of the UDO implementation (TInput, TOuput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method. UDSO syntax UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface. Before After xs.Scan(new Udso()) xs.Scan(() => new Udso()) Name changes ShiftEventTime => AlterEventStartTime: The alter event lifetime overload taking a new start time value has been renamed. CountByStartTimeWindow => CountWindow
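    To make the new surface a little more concrete, here are two rough sketches written against IQStreamable<>. They assume the StreamInsight 2.1 assemblies are referenced, and the payload types, method names and namespaces noted in the comments are illustrative assumptions rather than code from the product documentation. The first method shows SelectMany in its inner-join reading (the hidden stream-selector parameter introduced by the second from clause is never referenced, which is exactly what the Unit-typed overload expresses); the second fleshes out the "After" column of the Top-K section, applying Take to the bounded window so the query reads as "the three largest values in each snapshot window".

        using System.Linq;
        using Microsoft.ComplexEventProcessing.Linq; // assumed namespace for IQStreamable<T> in StreamInsight 2.1

        // Hypothetical payload types, for illustration only.
        public class SensorReading { public string SensorId { get; set; } public double Value { get; set; } }
        public class Threshold { public string SensorId { get; set; } public double Limit { get; set; } }

        public static class StreamSketches
        {
            // SelectMany as an inner join over two temporal streams.
            public static IQStreamable<SensorReading> Alerts(
                IQStreamable<SensorReading> xs, IQStreamable<Threshold> ys)
            {
                return from x in xs
                       from y in ys
                       where x.SensorId == y.SensorId && x.Value > y.Limit
                       select x;
            }

            // Top-k per window: Take is applied to the (bounded) window, not the stream.
            public static IQStreamable<double> TopThree(IQStreamable<SensorReading> xs)
            {
                return from win in xs.SnapshotWindow()
                       from v in
                           (from x in win
                            orderby x.Value descending
                            select x.Value).Take(3)
                       select v;
            }
        }

    The query shapes are the same ones shown in the tables above; only the payload types and the surrounding method plumbing are invented here.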

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6.   The alpha 2 package is available for download from NuGet. Since this is a pre-release package make sure to select “Include Prereleases” in the NuGet package manager, or execute the following from the package manager console to install it: PM> Install-Package EntityFramework -Pre This week’s alpha release includes a bunch of great improvements in the following areas: Async language support is now available for queries and updates when running on .NET 4.5. Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database. Multi-tenant migrations allow the same database to be used by multiple contexts with full Code First Migrations support for independently evolving the model backing each context. Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider resulting greatly improved performance. All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5. Start-up time for many large models has been dramatically improved thanks to improved view generation performance. Below are some additional details about a few of the improvements above: Async Support .NET 4.5 introduced the Task-Based Asynchronous Pattern that uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications as database calls made through EF can now be processed asynchronously – avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:     public class HomeController : Controller     {         LocationContext db = new LocationContext();           public async Task<ActionResult> Index()         {             var locations = await db.Locations.ToListAsync();               return View(locations);         }     } Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application. Custom Conventions When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with “ID” and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in “Key” instead of “Id”. 
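    As a rough sketch of that scenario, a custom convention along the following lines could treat properties whose names end in “Key” as primary keys. The Product entity and context are hypothetical, and the exact lightweight-convention method names in the alpha 2 build may differ slightly from later EF6 builds:

        using System.Data.Entity;

        public class Product
        {
            public int ProductKey { get; set; }   // picked up as the primary key by the custom convention below
            public string Name { get; set; }
        }

        public class CatalogContext : DbContext
        {
            public DbSet<Product> Products { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Treat any property whose name ends in "Key" as a key,
                // instead of the default "Id" convention.
                modelBuilder.Properties()
                    .Where(p => p.Name.EndsWith("Key"))
                    .Configure(p => p.IsKey());
            }
        }
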
Custom conventions allow the default conventions to be overridden or new conventions to be added so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method which is overridden on your derived DbContext class:         protected override void OnModelCreating(DbModelBuilder modelBuilder)         {             modelBuilder.Properties<decimal>()                 .Configure(x => x.HasPrecision(5));           } But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom code first conventions is available here. Community Involvement I blogged a while ago about EF being released under an open source license.  Since then a number of community members have made contributions and these are included in EF6 alpha 2. Two examples of community contributions are: AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use of pre-generated views. UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following: protected override void OnModelCreating(DbModelBuilder modelBuilder) {        modelBuilder.Configurations            .AddFromAssembly(typeof(LocationContext).Assembly); } This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and Code First configuration classes, and is also a very convenient shortcut for large models. Other upcoming features coming in EF 6 Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of of the nice upcoming features is connection resiliency, which will automate the process of retying database operations on transient failures common in cloud environments and with databases such as the Windows Azure SQL Database. Another often requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First. Summary EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers - especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion or file a bug on the CodePlex site. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
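    Going back to the custom conventions example above, the per-property override it mentions might look roughly like the following with the fluent API; the Order entity and its Total property are hypothetical, and a data annotation on the property would achieve the same result:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Convention: all decimal properties get precision 5 ...
            modelBuilder.Properties<decimal>()
                .Configure(c => c.HasPrecision(5));

            // ... except this one, which is configured explicitly and
            // therefore wins over the convention.
            modelBuilder.Entity<Order>()
                .Property(o => o.Total)
                .HasPrecision(18, 2);
        }
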

    Read the article

  • Converting OCaml to F#: F# equivelent of Pervasives at_exit

    - by Guy Coder
    I am converting the OCaml Format module to F# and tracked a problem back to a use of the OCaml Pervasives at_exit. val at_exit : (unit -> unit) -> unit Register the given function to be called at program termination time. The functions registered with at_exit will be called when the program executes exit, or terminates, either normally or because of an uncaught exception. The functions are called in "last in, first out" order: the function most recently added with at_exit is called first. In the process of conversion I commented out the line, as the compiler did not flag it as being needed and I was not expecting an event in the code. I checked the FSharp.PowerPack.Compatibility.PervasivesModule for at_exit using the VS Object Browser and found none. I did find how to run code "at_exit"? and How do I write an exit handler for an F# application? The OCaml line is at_exit print_flush with print_flush signature: val print_flush : (unit -> unit) Also, in looking at the use of it during a debug session of the OCaml code, it looks like at_exit is called both at the end of initialization and at the end of each use of a call to the module. Any suggestions or hints on how to do this would be appreciated. This will be my first event in F#. EDIT Here is some of what I have learned about the Format module that should shed some light on the problem. The Format module is a library of functions for basic pretty-printer commands over simple OCaml values such as int, bool and string. The Format module has commands like print_string, but also some commands to, say, put the next line in a bounded box (think a new set of left and right margins). So one could write: print_string "Hello" or open_box 0; print_string "<<"; open_box 0; print_string "p \/ q ==> r"; close_box(); print_string ">>"; close_box() The commands such as open_box and print_string are handled by a loop that interprets the commands and then decides whether to print on the current line or advance to the next line. The commands are held in a queue, and there is a state record to hold mutable values such as the left and right margins. The queue and state need to be primed, which from debugging the test cases against working OCaml code appears to be done at the end of initialization of the module but before the first call is made to any function in the Format module. The queue and state are cleaned up and primed again for the next set of commands by the at_exit mechanism, which recognizes that the last matching frame for the initial call to the Format module has been removed, triggering the call to at_exit, which pushes out any remaining commands in the queue and re-initializes the queue and state. So the sequencing of the calls to print_flush is critical and appears to involve more than what the OCaml documentation states.

    Read the article

  • How do I make solr/jetty find the installed slf4j jars in Ubuntu 12.04?

    - by J. Pablo Fernández
    I'm running Ubuntu 12.04's packaged Jetty in which I installed solr 4.3.1 (by copying the war file to /var/lib/jetty/webapps. When I start Jetty, I get this error: failed SolrRequestFilter: org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. The package libslf4j-java is installed, and the jars are in /usr/share/java: /usr/share/java/log4j-over-slf4j.jar /usr/share/java/slf4j-api.jar /usr/share/java/slf4j-jcl.jar /usr/share/java/slf4j-jdk14.jar /usr/share/java/slf4j-log4j12.jar /usr/share/java/slf4j-migrator.jar /usr/share/java/slf4j-nop.jar /usr/share/java/slf4j-simple.jar but somehow, Jetty and/or Solr are not finding them. How do I make them find them? or how do I install some other jars where jetty/solr would find them? The full error is: 88 [main] INFO org.mortbay.log - jetty-6.1.24 443 [main] INFO org.mortbay.log - Deploy /etc/jetty/contexts/javadoc.xml -> org.mortbay.jetty.handler.ContextHandler@cec0c5{/javadoc,file:/usr/share/jetty/javadoc} 522 [main] INFO org.mortbay.log - Extract file:/var/lib/jetty/webapps/solr.war to /var/cache/jetty/data/Jetty__8080_solr.war__solr__zdafkg/webapp 1501 [main] WARN org.mortbay.log - failed SolrRequestFilter: org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. For other containers, the corresponding directory should be used. For more information, see: http://wiki.apache.org/solr/SolrLogging 1501 [main] ERROR org.mortbay.log - Failed startup of context org.mortbay.jetty.webapp.WebAppContext@5329c5{/solr,file:/var/lib/jetty/webapps/solr.war} org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. For other containers, the corresponding directory should be used. 
For more information, see: http://wiki.apache.org/solr/SolrLogging at org.apache.solr.servlet.SolrDispatchFilter.<init>(SolrDispatchFilter.java:105) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:532) at java.lang.Class.newInstance0(Class.java:374) at java.lang.Class.newInstance(Class.java:327) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:92) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:662) at org.mortbay.jetty.servlet.Context.startContext(Context.java:140) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130) at org.mortbay.jetty.Server.doStart(Server.java:224) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) Caused by: java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory at org.apache.solr.servlet.SolrDispatchFilter.<init>(SolrDispatchFilter.java:103) ... 36 more Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:392) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363) ... 
37 more 1505 [main] WARN org.mortbay.log - failed org.mortbay.jetty.webapp.WebAppContext@5329c5{/solr,file:/var/lib/jetty/webapps/solr.war}: java.lang.NoClassDefFoundError: org/slf4j/Logger 1579 [main] WARN org.mortbay.log - failed ContextHandlerCollection@19d0a1: java.lang.NoClassDefFoundError: org/slf4j/Logger 1582 [main] INFO org.mortbay.log - Opened /var/log/jetty/2013_06_27.request.log 1582 [main] WARN org.mortbay.log - failed HandlerCollection@cbf30e: java.lang.NoClassDefFoundError: org/slf4j/Logger 1582 [main] ERROR org.mortbay.log - Error starting handlers java.lang.NoClassDefFoundError: org/slf4j/Logger at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2454) at java.lang.Class.getMethod0(Class.java:2697) at java.lang.Class.getMethod(Class.java:1622) at org.mortbay.log.Log.unwind(Log.java:228) at org.mortbay.log.Log.warn(Log.java:197) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:475) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130) at org.mortbay.jetty.Server.doStart(Server.java:224) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:392) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363) ... 29 more

    Read the article

  • Freezes (not crashes) with GCD, blocks and Core Data

    - by Lukasz
    I have recently rewritten my Core Data driven database controller to use Grand Central Dispatch to manage fetching and importing in the background. Controller can operate on 2 NSManagedContext's: NSManagedObjectContext *mainMoc instance variable for main thread. this contexts is used only by quick access for UI by main thread or by dipatch_get_main_queue() global queue. NSManagedObjectContext *bgMoc for background tasks (importing and fetching data for NSFetchedresultsController for tables). This background tasks are fired ONLY by user defined queue: dispatch_queue_t bgQueue (instance variable in database controller object). Fetching data for tables is done in background to not block user UI when bigger or more complicated predicates are performed. Example fetching code for NSFetchedResultsController in my table view controllers: -(void)fetchData{ dispatch_async([CDdb db].bgQueue, ^{ NSError *error = nil; [[self.fetchedResultsController fetchRequest] setPredicate:self.predicate]; if (self.fetchedResultsController && ![self.fetchedResultsController performFetch:&error]) { NSSLog(@"Unresolved error in fetchData %@", error); } if (!initial_fetch_attampted)initial_fetch_attampted = YES; fetching = NO; dispatch_async(dispatch_get_main_queue(), ^{ [self.table reloadData]; [self.table scrollRectToVisible:CGRectMake(0, 0, 100, 20) animated:YES]; }); }); } // end of fetchData function bgMoc merges with mainMoc on save using NSManagedObjectContextDidSaveNotification: - (void)bgMocDidSave:(NSNotification *)saveNotification { // CDdb - bgMoc didsave - merging changes with main mainMoc dispatch_async(dispatch_get_main_queue(), ^{ [self.mainMoc mergeChangesFromContextDidSaveNotification:saveNotification]; // Extra notification for some other, potentially interested clients [[NSNotificationCenter defaultCenter] postNotificationName:DATABASE_SAVED_WITH_CHANGES object:saveNotification]; }); } - (void)mainMocDidSave:(NSNotification *)saveNotification { // CDdb - main mainMoc didSave - merging changes with bgMoc dispatch_async(self.bgQueue, ^{ [self.bgMoc mergeChangesFromContextDidSaveNotification:saveNotification]; }); } NSfetchedResultsController delegate has only one method implemented (for simplicity): - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { dispatch_async(dispatch_get_main_queue(), ^{ [self fetchData]; }); } This way I am trying to follow Apple recommendation for Core Data: 1 NSManagedObjectContext per thread. I know this pattern is not completely clean for at last 2 reasons: bgQueue not necessarily fires the same thread after suspension but since it is serial, it should not matter much (there is never 2 threads trying access bgMoc NSManagedObjectContext dedicated to it). Sometimes table view data source methods will ask NSFetchedResultsController for info from bgMoc (since fetch is done on bgQueue) like sections count, fetched objects in section count, etc.... Event with this flaws this approach works pretty well of the 95% of application running time until ... AND HERE GOES MY QUESTION: Sometimes, very randomly application freezes but not crashes. It does not response on any touch and the only way to get it back to live is to restart it completely (switching back to and from background does not help). No exception is thrown and nothing is printed to the console (I have Breakpoints set for all exception in Xcode). 
    I have tried to debug it using Instruments (the Time Profiler in particular) to see if there is something heavy going on on the main thread, but nothing shows up. I am aware that GCD and Core Data are the main suspects here, but I have no idea how to track down or debug this. Let me point out that this also happens when I dispatch all the tasks to the queues asynchronously (using dispatch_async everywhere), which makes me think it is not just a standard deadlock. Is there any way I could get more information about what is going on: extra debug flags, Instruments tricks, build settings, etc.? Any suggestions on what could be the cause are very much appreciated, as well as pointers on how to implement background fetching for an NSFetchedResultsController and background importing in a better way.

    Read the article

  • Ruby. Mongoid. Relations

    - by Scepion1d
    I've encountered some problems with MongoID. I have three models: require 'mongoid' class Configuration include Mongoid::Document belongs_to :user field :links, :type => Array field :root, :type => String field :objects, :type => Array field :categories, :type => Array has_many :entries end class TimeDim include Mongoid::Document field :day, :type => Integer field :month, :type => Integer field :year, :type => Integer field :day_of_week, :type => Integer field :minute, :type => Integer field :hour, :type => Integer has_many :entries end class Entry include Mongoid::Document belongs_to :configuration belongs_to :time_dim field :category, :type => String # any other dynamic fields end Creating documents for Configurations and TimeDims is successful. But when i've trying to execute following code: params = Hash.new params[:configuration] = config # an instance of Configuration from DB entry.each do |key, value| params[key.to_sym] = value # String end unless Entry.exists?(conditions: params) params[:time_dim] = self.generate_time_dim # an instance of TimeDim from DB params[:category] = self.detect_category(descr) # String Entry.new(params).save end ... i saw following output: /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/bson-1.6.1/lib/bson/bson_c.rb:24:in `serialize': Cannot serialize an object of class Configuration into BSON. (BSON::InvalidDocument) from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/bson-1.6.1/lib/bson/bson_c.rb:24:in `serialize' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:604:in `construct_query_message' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:465:in `send_initial_query' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:458:in `refresh' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:128:in `next' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/db.rb:509:in `command' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:191:in `count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/cursor.rb:42:in `block in count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/collections/retry.rb:29:in `retry_on_connection_failure' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/cursor.rb:41:in `count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/contexts/mongo.rb:93:in `count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/criteria.rb:45:in `count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/finders.rb:60:in `exists?' 
    from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:110:in `block (2 levels) in push_entries_to_db' from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:103:in `each' from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:103:in `block in push_entries_to_db' from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:102:in `each' from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:102:in `push_entries_to_db' from main_starter.rb:15:in `<main>' Can anyone tell me what I am doing wrong?

    Read the article

  • Analyzing bitmaps produced by NSAffineTransform and CILineOverlay filters

    - by Adam
    I am trying to manipulate an image using a chain of CIFilters, and then examine each byte of the resulting image (bitmap). Long term, I do not need to display the resulting image (bitmap) -- I just need to "analyze" it in memory. But near-term I am displaying it on screen, to help with debugging. I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, then outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them. I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look "correct." Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransfer contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. Seems very strange, to me??? My understanding (from reading the CI programming guide) is that the CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until the image is "drawn." Given that, it would make sense that the CIimage bitmap doesn't have data -- but I don't understand why it has data after I run the NSAffineTransform but doesn't have data after running the CILineOverlay transform? Basically, I am trying to determine if creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words if that should force the bitmap to be populated? If someone knows the answer to this please let me know -- it will save me a few hours of trial and error experimenting! Finally, if the answer is "you must draw to a graphics context" ... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, listing 2-7 and 2-8: drawing to a bitmap graphics context? That is the path down which I am about to head ... but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way please let me know. I just want to take the data (that should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, then I would prefer to "convert" myResult into an NSBitmapImageRep. CIImage * myResult = [transform valueForKey:@"outputImage"]; NSImage *outputImage; NSCIImageRep *ir = [NSCIImageRep alloc]; ir = [NSCIImageRep imageRepWithCIImage:myResult]; outputImage = [[[NSImage alloc] initWithSize: NSMakeSize(inputImage.size.width, inputImage.size.height)] autorelease]; [outputImage addRepresentation:ir]; [outputImageView setImage: outputImage]; NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage: myResult]; Thanks, Adam

    Read the article

  • Servlet response wrapper has encoding problem

    - by John O
    A servlet response wrapper is being used in a Servlet Filter. The idea is that the response is manipulated, with a 'nonce' value being injected into forms, as part of defence against CSRF attacks. The web app is using UTF-8 everywhere. When the Servlet Filter is absent, no problems. When the filter is added, encoding issues occur. (It seems as if the response is reverting to 8859-1.) The guts of the code : final class CsrfResponseWrapper extends AbstractResponseWrapper { ... byte[] modifyResponse(byte[] aInputResponse){ ... String originalInput = new String(aInputResponse, encoding); String modifiedResult = addHiddenParamToPostedForms(originalInput); result = modifiedResult.getBytes(encoding); ... } ... } As I understand it, the transition between byte-land and String-land should specify an encoding. That is done here, as you can see, in two places. The value of the 'encoding' variable is 'UTF-8'; the alteration of the String itself is standard string manipulation (with a regex), and never specifies an encoding (addHiddenParamToPostedForms). Where am I in error about the encoding? EDIT: Here is the base class (sorry it's rather long): package hirondelle.web4j.security; import javax.servlet.ServletOutputStream; import javax.servlet.ServletResponse; import javax.servlet.http.HttpServletResponse; import javax.servlet.http.HttpServletResponseWrapper; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.PrintWriter; /** Abstract Base Class for altering response content. (May be useful in future contexts as well. For now, keep package-private.) */ abstract class AbstractResponseWrapper extends HttpServletResponseWrapper { AbstractResponseWrapper(ServletResponse aServletResponse) throws IOException { super((HttpServletResponse)aServletResponse); fOutputStream = new ModifiedOutputStream(aServletResponse.getOutputStream()); fWriter = new PrintWriter(fOutputStream); } /** Return the modified response. */ abstract byte[] modifyResponse(byte[] aInputResponse); /** Standard servlet method. */ public final ServletOutputStream getOutputStream() { //fLogger.fine("Modified Response : Getting output stream."); if ( fWriterReturned ) { throw new IllegalStateException(); } fOutputStreamReturned = true; return fOutputStream; } /** Standard servlet method. */ public final PrintWriter getWriter() { //fLogger.fine("Modified Response : Getting writer."); if ( fOutputStreamReturned ) { throw new IllegalStateException(); } fWriterReturned = true; return fWriter; } // PRIVATE /* Well-behaved servlets return either an OutputStream or a PrintWriter, but not both. */ private PrintWriter fWriter; private ModifiedOutputStream fOutputStream; /* These items are used to implement conformance to the javadoc for ServletResponse, regarding exceptions being thrown. */ private boolean fWriterReturned; private boolean fOutputStreamReturned; /** Modified low level output stream. */ private class ModifiedOutputStream extends ServletOutputStream { public ModifiedOutputStream(ServletOutputStream aOutputStream) { fServletOutputStream = aOutputStream; fBuffer = new ByteArrayOutputStream(); } /** Must be implemented to make this class concrete. 
*/ public void write(int aByte) { fBuffer.write(aByte); } public void close() throws IOException { if ( !fIsClosed ){ processStream(); fServletOutputStream.close(); fIsClosed = true; } } public void flush() throws IOException { if ( fBuffer.size() != 0 ){ if ( !fIsClosed ) { processStream(); fBuffer = new ByteArrayOutputStream(); } } } /** Perform the core processing, by calling the abstract method. */ public void processStream() throws IOException { fServletOutputStream.write(modifyResponse(fBuffer.toByteArray())); fServletOutputStream.flush(); } // PRIVATE // private ServletOutputStream fServletOutputStream; private ByteArrayOutputStream fBuffer; /** Tracks if this stream has been closed. */ private boolean fIsClosed = false; } }

    Read the article

  • CodePlex Daily Summary for Wednesday, February 24, 2010

    CodePlex Daily Summary for Wednesday, February 24, 2010New ProjectsADO.Net DataSets to ExtJs.data.Store: A JavaScript (and C#) based project to reduce the amount of client-side code necessary to consume ADO.Net / ASP.Net web services when using ExtJS.AMP.Net Wrapper: AMP is a platform to build on-line marketplaces (http://www.poweredbyamp.com). AMP.Net provided Object-Like interaction with AMP's restful service...ArkSwitch: ArkSwitch is an easy to use, finger-friendly task manager for Windows Mobile 6.5.3 (with a WM6.5 compatibility mode). It is developed mainly in C#,...Biffen: Cinema-booking project in Computer Science at University College Nordjylland, Denmark.Braintree Client Library: Client library for integrating with the Braintree Gateway.Business Framework: A framework which helps building business applications. It provides business rules, validation rules and a text-based language for writing rules. I...Camp Araminta: This project will be used to coordinate development efforts on the Camp Araminta website.ChoServiceHost: Simple and easy way to create and host Windows Service Applications in .NET 3.5/Visual Studio 2008Delta College Game Development Project: Project site for cs 16 game development classDotNetNuke® Labs: DotNetNuke Labs is a collection of "research & development" type projects for the DotNetNuke platform.Generic web part for hosting Silverlight content on SharePoint sites (WSS,MOSS): This is a generic web part for hosting Silverlight content on WSS 30 and MOSS 2007 sites. The objective of this web part was to make it easy for us...GpTiming: GpTiming is a simple "lab" application related to race events, based on a Domain Model.HTML Forms in Windows Forms: As the names suggests this code library is designed to introduce HTML code (primarily form code) into Windows Forms. It was created because standar...imgur uploader - .net open source uploader for image sharing site imgur: Imgur uploader strives to be an easy to use uploader for images you would like to share with friends and family. It is written in c#.kuuy static system: kuuy static system is a full static publish website system!LaTeX Grapher: The goal of this project is to make a tool that facilitates making high quality two dimensional vector graphic function plots with a minimal amount...LightREST: A .NET library to consume REST-based HTTP services.Machiavelli: Machiavelli is Stackoverflow inspired project that I am working on following Andrew Siemer's article on DotNetSlackers. Mover: Mover makes it easier for developers to create programmatic animations in Silverlight. It provides an expressive API to the platform's underlying S...MVC Presenter: ASP.NET MVC 2で作るプレゼンビューアーnHibernate Attribute mapping: How to use Attibute mapping with a ManyToMany Relationship with nHibernateNIPO Data Processing Component Framework: NIPO is a general purpose component framework for data processing applications (that follow the IPO-principle). Its plugin-based architecture makes...PowerShell Remote File Explorer: This project intends to develop a Windows forms based file explorer to browse/transfer files over PowerShell 2.0 remoting channel. 
The file transfe...Process Flow Tracking of Biomass Distribution Project (University of Mumbai): At Larsen & Toubro Infotech India Ltd., my team worked on a SCM (Supply Chain Management) based project titled 'Process Flow Tracking of Biomass Di...VS2010 Rc1 Fix: Illustrates a fix for working with the ASAP.NET Wizard control with VS2010 RC1Yicker: a microblog program devolep by c#.New ReleasesADO.Net DataSets to ExtJs.data.Store: Ext.net: This is the first version of Ext.net. This version contains a single class, Ext.net.Store which extends the Ext.data.Store class to consume ADO.Ne...AMP.Net Wrapper: AMP.Net v1.0: Provides abstraction for all the product search functionality offered by AMP.ArkSwitch: ArkSwitch legacy versions: Old versions - no need to download themArkSwitch: ArkSwitch v1.1.0: ArkSwitch v1.1.0Braintree Client Library: Braintree 1.0.0: Braintree .NET client library 1.0.0Business Framework: BusinessFramework preview: Early preview bits. See Rules for a sample.Business Framework: Samples: SamplesCC.Votd: CC.Votd 1.0.10.224: This is the initial release of CC.Votd. Marking as beta since I'm the only one who has used it up to this point.ChoServiceHost: ChoServiceHost.msi: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Installer)ChoServiceHost: ChoServiceHost-Src.zip: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Source Files)CHS Extranet: Beta 2.4: Beta 2.4 Release: Change Log: Added HTML preview options for XLS, XLSX, DOCX File Changes: ~/MyComputer.aspx ~/mycomputer.css ~/basestyle.css...Composure: AvalonDock-55751-VS2010.NET4: This is a "convenience build" of AvalonDock (drop 55751) for VIsual Studio 2010 and .NET 4.0. Nothing has been altered in the source code (which ...Data Access Component: Version 2.6: Add LINQ support.Desktop Google Reader: 1.3 Beta 1: New features: Read it Later included (see http://readitlaterlist.com/) Liking added (working: see number of liking users, see if liking yourself,...Explorer Plus: Explorer Plus v0.3: Amazon Locales AddedFree Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.3 Released: Hi, Today we have released the final version of Visifire v3.0.3 which contains the following major features: * DataBinding. * IndicatorEn...Generic web part for hosting Silverlight content on SharePoint sites (WSS,MOSS): CTP: The objective of this release was to gather feedback from the wider community. I intend to pursue further development and make fixes wherever appro...HTML Forms in Windows Forms: HTMLForms 1.0: First Release.imgur uploader - .net open source uploader for image sharing site imgur: Release 2010-02-23-01: This is the first codeplex release! Let mayhem commence...Jeremi Stadler: Stick Tops 2.5: Sticktops is a very light program that makes it easy to paste stuff on small notes on the screen. All notes you have is saved on a server so you ca...kuuy static system: kss_v1.0beta sql: kss_v1.0beta sql scripts sourceMDownloader: MDownloader-0.15.2.55998: Fixed detecting uploading.com dead links; Added hiding rss entries without files;Mover: MoverLib for Silverlight 3: A first version of MoverLib for Silverlight 3.nHibernate Attribute mapping: 1.0: Source CodenHibernate Attribute mapping: Download 1: Zip fileNodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.113: The NodeXL class libraries can be used to display network graphs in .NET applications. 
To include a NodeXL network graph in a WPF desktop or Windo...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.113: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's NewThis version inclu...OAuthLib: OAuthLib (1.6.0.0): Difference between previous version is as next. 7079 Make it possible to pass factory method of request in ObtainUnauthorizedRequestToken and Reque...patterns & practices SharePoint Guidance: SPG2010 Drop 5: SharePoint Guidance Drop Notes Microsoft patterns and practices ****************************************** ***************************************...PowerShell Remote File Explorer: PSRemoteExplorer 0.1: This release is the initial release of PowerShell remote file explorer. This enables the basic functionality of a remote file explorer. This also p...Reusable Library: v1.0.3: A collection of reusable abstractions for enterprise application developer.SharePoint Outlook Connector: Version 1.0.2.4: Version 1.0.2.4 Minor bugs have been fixed.Silverlight Server File Manager: First production release: This release is in production. Release on change set 37268.SIMD Detector: 2nd Release: Released C/CLI assembly project for use in CSharp and VB. Tested in CSharp console application. A Windows Form application coming soon. Projects ma...Source Analysis Policy: Source Analysis Policy v1.1 SP1: This release contains the compiled, and signed binaries in an installation package. This package also registers the policy with Microsoft Visual St...SpecExpress : A Fluent Validation Framework: SpecExpress 1.1: UpdatesAdded Validation Contexts feature Fixed bug with handling for Bool Types and Required MessageStore now allows for overriding individual ...VCC: Latest build, v2.1.30223.0: Automatic drop of latest buildVS2010 Rc1 Fix: RC1Fix01: This is a very simple project implementing a Microsoft Walkthrough at http://msdn.microsoft.com/en-us/library/wdb4eb30%28VS.100%29.aspx and the man...WPF AutoComplete TextBox Control: version 1.0: Initial releaseMost Popular ProjectsASP.NET Ajax LibraryManaged Extensibility FrameworkAccelerators for Microsoft Dynamics CRMWindows 7 USB/DVD Download ToolDotNetZip LibraryMDownloaderVirtual Router - Wifi Hot Spot for Windows 7 / 2008 R2MFCMAPIDroid ExplorerUseful Sharepoint Designer Custom Workflow ActivitiesMost Active ProjectsDinnerNow.netRawrBlogEngine.NETInfoServiceNB_Store - Free DotNetNuke Ecommerce Catalog ModuleRapid Entity Framework. (ORM). CTP 2SharpMap - Geospatial Application Framework for the CLRjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryXcoordination Application Space

    Read the article

  • Introducing… SharePress!

    - by Bil Simser
    For those that follow me I’ve been away from blogging and twittering for a couple of months. This is the reason. For the last few months I’ve been working with a cross-functional team putting together a new product from the people that run WordPress, the free premiere blogging platform. The result is a new product we call SharePress, a highly extensible blogging and content management platform with the usability of WordPress and the power of SharePoint combined into a single product. SharePress gives you SharePoint sites that are SEO-friendly delivered with a Web 2.0 ease of use, leveraging all of the existing abilities of SharePoint and WordPress that we know today. The Reason Back in December I was approached by the WordPress team about building a new platform that took advantage of the power of SharePoint but the ease of WordPress. I’m no stranger to WordPress and it’s 5 minute no-holds-barred install (I’ve always wanted SharePoint to do this!) and I run my personal blog on WordPress as does my better half, Princess Jenn. There’s always been a pitch by so-called Web 2.0 applications to deliver the power of SharePoint but the ease of [insert product here] over the past year or so. I checked each and every one of them out, but they fell woefully short when it came to SharePoint’s document management, versioning, and customization. They try, but it’s never been up to par in my books. On the flipside, SharePoint has always been tops in collaboration in the Enterprise but it’s painful to develop web parts, UI customization can be tricky, and there’s just no user community for something as simple as themes and designs. The Product Enter SharePress. Is it SharePoint? Is it WordPress? It’s both, and neither. Everything you like about both products are there but this is a bold new product that is positioned to bring SharePoint to the masses while maintaining the fidelity of an Enterprise 2.0 collaboration platform. SharePress delivers on all fronts including: The ability to leverage any WordPress/Joomla/Drupal/DotNetNuke themes and skins inside of SharePoint Run any WordPress/Drupal/Joomla/DotNetNuke/SharePoint plug-in/module/web part/feature works out of the box with SharePress SEO-friendly URLs and pages Permalinks for all content All the features of SharePoint Server 2010 (including InfoPath, Excel, and Access services) included in the price Small deployment footprint. You decide how much to deploy and where. Independent Database Abstraction Layer (iDal) that allows you to deploy to SQL Server 2005/2008, MySQL, and PostgreSQL Portable Rendering Engine Layer (PREL) so you host .NET or PHP on Apache or IIS (version 7 or higher). The install feature is built around WordPress and it’s famous 5-minute install (actually, it’s never taken me more than 1 minute). SharePress installs with two screens after the files are uploaded to your server (which can be done entirely using FTP): After you enter two fields of information click “Install SharePress” and you’ll be done: No mess, no fuss, no complicated dependencies, and no server access required! How simpler could this be? The Technology WordPress plug-ins and themes working with SharePoint? Of course! The answer is IronPython which has now reached a maturity level capable of doing on the fly code language conversions. SharePress is a brand new product not built on top of any previous platform but leverages all the power of each of those applications through a patent pending technique called SharePress Multi-plAtfoRm Technology (SMART). 
SMART will convert PHP code on the fly into Python (using SWIG as an intermediate processor) which is then compiled to MSIL and then delivered back as an ASP.NET MVC application (output is C# or VB.NET, but you can build your own SMART converter to output a different language). Sound complicated? It is, but it’s all behind the scenes and you don’t have to worry about a thing. This image illustrates the technology stack and process: So users can load up out of the box PHP themes and plug-ins from the WordPress/Joomla/Drupal community into the SMART converter and output MSIL that is used by the SharePress engine and rendered on the fly to the end user. Supported PHP versions are 4.xx and 5.xx with version 6 support to come when it’s released. Similarly you can take any .NET application, DotNetNuke Module, SharePoint Web Part or event handler and feed it into the converter to output the same. Everything is reverse compiled into MSIL so it becomes technology agnostic. No source code access is needed and the SMART converter can handle obfuscated .NET assemblies that were built with .NET 1.0, 1.1, 2.0, 3.5, and 4.0. With this technology you can also with the flip of a switch have the output create PHP pages for you. This allows you to run SharePress on Unix based systems running PHP and MySQL, allowing you to deliver your SharePoint like experience to your users with a $0 infrastructure footprint. Here’s SharePress with the default WordPress post imported then a stock SharePoint collaboration site was imported. The site was then applied with the default Kubrick theme from WordPress. The Features Deploy any of the freely available 100,000 WordPress/Joomla/Drupal themes instantly to your runtime SharePress environment and preview or activate them right from your browser. Built-in Web 2.0 jQuery Enabled End User and Administrator Web Interface. Never have to remote into a server again! Run any SharePoint Web Part or Event Handler directly without modification or access to source code in SharePress. Use any WordPress/Joomla/Drupal plug-in directly in SharePress, no local admin or access to server. Just upload and activate. Upload and Activate any SharePoint Solution Package to any site remotely. No rebuilding. Changes made to sites require no compiling or rebuilding and are published immediately. Password Protected Content. You can give passwords to individual posts, articles, pages, documents, forms, and list items. A powerful polymorphic Captcha system backs the security interface and vendors can easily tie into smart card readers, fingerprint readers, and retina scanners for authorization and identification. OpenID, Windows Live, and Windows Authentication are supported out of the box. Infinitely customizable and extensible. You can leverage plug-ins from the open source community to do practically anything, all configured and uploaded via the browser. Additionally the developer API (available soon) allows you to build extensions in .NET, PHP, and Python with little effort. Easy Importing. We have importers for Blogger, WordPress, Drupal, Joomla, DotNetNuke, and SharePoint so you can populate your site quickly and easily with full metadata modeling and creation. Banner Management. It’s easy to setup banners for your web site complete with impression numbers, special URLs, and more. Menu Manager. 
The Menu Manager allows you to create as many menus as you want, each one can be associated to specific audiences or roles and then be styled across multiple contexts including the same menu delivered as a fly out, rollover, drop down, and just about any navigation you can think of. Collaborative ShareBook. Our exclusive book feature allows you to setup a “book” and then authorize individuals to contribute content. Permalinks. All content in SharePress has a permanent or “perma link” associated with it so people can link to it freely without fear of broken links. Apache or IIS, Unix / Linux / BSD / Solaris / Windows / Mac OS X support. Deliver SharePress the way *you* want from the platform *you* decide. Database Independence. We know people wanted to run on any database platform so SharePress is built on top of a database abstraction layer that allows you to run on SQL Server, MySQL, PostgreSQL. Other databases can be supported by writing a supporting database script consisting of fourteen function calls. The script can be written in Perl, Python, AWK, PowerShell, Unix Shell scripts, VBA, or simple DOS batch files. The Team SharePress is the work of a lot of people in both the WordPress and SharePoint community. I worked with a lot of SharePoint MVPs to create this new product as we really wanted to deliver the most compatible and feature rich system in a product that we would be proud of. Many thanks go out to Eli Bleeker, Todd Robillard, Scot Larson, Daniel Hillier, Shane Fox, Box Peran, Amanda English, and Bill Murray for doing the heavy lifting and all of their expertise and innovative thinking to get this product out. Licensing and Pricing SharePress is still in the final stages for pricing but we’re looking at a price point somewhere between $99-$100 to make it affordable for everyone. We plan to announce final pricing sometime in the next few weeks. There are no additional charges for Enterprise versions or additional features. Everything you see is what’s available and it’s just a matter of lighting up your site with whatever feature you want to enable. The product will not be open source but source code licenses will be available to ISVs who are interested in interfacing with the API at a low level. Cost will be $25,000 USD per developer and gives you complete access to the source code to the SharePress Foundation System and the .NET 4.0 Framework source code. Conclusion We hope you enjoy the launch of SharePress as the new premium blogging and content management platform for both Intranets and the Internet. We think we’ve build the best of breed solutions here and made it easy for anyone to get started with a minimal of infrastructure but allow the scalability of SharePress to shine through in the Enterprise 2.0 world. We encourage your feedback so please leave comments as to what you’re looking for in this system as we’re always evolving it to make it a better product for everyone.

    Read the article

  • Entity Framework v1 &ndash; tips and Tricks Part 3

    - by Rohit Gupta
    General Tips on Entity Framework v1 & Linq to Entities: ToTraceString() If you need to know the underlying SQL that the EF generates for a Linq To Entities query, then use the ToTraceString() method of the ObjectQuery class. (or use LINQPAD) Note that you need to cast the LINQToEntities query to ObjectQuery before calling TotraceString() as follows: 1: string efSQL = ((ObjectQuery)from c in ctx.Contact 2: where c.Address.Any(a => a.CountryRegion == "US") 3: select c.ContactID).ToTraceString(); ================================================================================ MARS or MultipleActiveResultSet When you create a EDM Model (EDMX file) from the database using Visual Studio, it generates a connection string with the same name as the name of the EntityContainer in CSDL. In the ConnectionString so generated it sets the MultipleActiveResultSet attribute to true by default. So if you are running the following query then it streams multiple readers over the same connection: 1: using (BAEntities context = new BAEntities()) 2: { 3: var cons = 4: from con in context.Contacts 5: where con.FirstName == "Jose" 6: select con; 7: foreach (var c in cons) 8: { 9: if (c.AddDate < new System.DateTime(2007, 1, 1)) 10: { 11: c.Addresses.Load(); 12: } 13: } 14: } ================================================================================= Explicitly opening and closing EntityConnection When you call ToList() or foreach on a LINQToEntities query the EF automatically closes the connection after all the records from the query have been consumed. Thus if you need to run many LINQToEntities queries over the same connection then explicitly open and close the connection as follows: 1: using (BAEntities context = new BAEntities()) 2: { 3: context.Connection.Open(); 4: var cons = from con in context.Contacts where con.FirstName == "Jose" 5: select con; 6: var conList = cons.ToList(); 7: var allCustomers = from con in context.Contacts.OfType<Customer>() 8: select con; 9: var allcustList = allCustomers.ToList(); 10: context.Connection.Close(); 11: } ====================================================================== Dispose ObjectContext only if required After you retrieve entities using the ObjectContext and you are not explicitly disposing the ObjectContext then insure that your code does consume all the records from the LinqToEntities query by calling .ToList() or foreach statement, otherwise the the database connection will remain open and will be closed by the garbage collector when it gets to dispose the ObjectContext. Secondly if you are making updates to the entities retrieved using LinqToEntities then insure that you dont inadverdently dispose of the ObjectContext after the entities are retrieved and before calling .SaveChanges() since you need the SAME ObjectContext to keep track of changes made to the Entities (by using ObjectStateEntry objects). So if you do need to explicitly dispose of the ObjectContext do so only after calling SaveChanges() and only if you dont need to change track the entities retrieved any further. ======================================================================= SQL InjectionAttacks under control with EFv1 LinqToEntities and LinqToSQL queries are parameterized before they are sent to the DB hence they are not vulnerable to SQL Injection attacks. EntitySQL may be slightly vulnerable to attacks since it does not use parameterized queries. However since the EntitySQL demands that the query be valid Entity SQL syntax and valid native SQL syntax at the same time. 
So the only way to mount a SQL injection attack is by knowing the SSDL of the EDM model and being able to write correct Entity SQL to append to a parameter (note that one cannot append regular SQL, because the query would then no longer be valid Entity SQL syntax).

======================================================================
Improving performance
You can convert the EntitySets and AssociationSets in an EDM model into precompiled views using the edmgen utility. For example, the Customer entity can be converted into a precompiled view using edmgen, and every LINQ to Entities query against the context.Customer EntitySet will then use the precompiled view instead of the EntitySet itself (the same being true for relationships, i.e. the EntityReference and EntityCollection properties of an entity). The advantage is that performance is much better when using precompiled views. The syntax for generating precompiled views for an existing EF project is:

edmgen /mode:ViewGeneration /inssdl:BAModel.ssdl /incsdl:BAModel.csdl /inmsl:BAModel.msl /p:Chap14.csproj

Note that this only generates precompiled views for EntitySets and Associations, not for existing LINQ to Entities queries in the project (for those, use CompiledQuery.Compile<>). Secondly, if you have a LINQ to Entities query that you need to run multiple times, you should precompile it using the CompiledQuery.Compile method. CompiledQuery.Compile<> accepts a lambda expression as a parameter, which denotes the LINQ to Entities query that you need to precompile. The following is an example of a lambda that we can pass into the CompiledQuery.Compile() method:

Expression<Func<BAEntities, string, IQueryable<Customer>>> expr = (BAEntities ctx1, string loc) =>
    from c in ctx1.Contacts.OfType<Customer>()
    where c.Reservations.Any(r => r.Trip.Destination.DestinationName == loc)
    select c;

Then we call the compiled query as follows:

var query = CompiledQuery.Compile<BAEntities, string, IQueryable<Customer>>(expr);

using (BAEntities ctx = new BAEntities())
{
    var loc = "Malta";
    IQueryable<Customer> custs = query.Invoke(ctx, loc);
    var custlist = custs.ToList();
    foreach (var item in custlist)
    {
        Console.WriteLine(item.FullName);
    }
}

Note that if you create an ObjectQuery or an Entity SQL query instead of the LINQ to Entities query, you don't need precompilation. For example:

An example of an Entity SQL query:
string esql = "SELECT VALUE c from Contacts AS c where c is of(BAGA.Customer) and c.LastName = 'Gupta'";
ObjectQuery<Customer> custs = CreateQuery<Customer>(esql);

An example of an ObjectQuery built using query builder methods:
from c in Contacts.OfType<Customer>().Where("it.LastName == 'Gupta'")
select c

This is because the query plan is cached, so performance improves a bit; however, since the ObjectQuery or Entity SQL query still needs to materialize the results into entities, it takes the same performance hit as LINQ to Entities. Note, however, that not ALL Entity SQL based or query-builder based ObjectQuery plans are cached, so if you are in doubt, always create a compiled LINQ to Entities query and use that instead.

============================================================
GetObjectStateEntry versus GetObjectByKey
We can get to the entity referenced by an ObjectStateEntry via its Entity property, and there are helper methods on the ObjectStateManager (osm.TryGetObjectStateEntry) to get the ObjectStateEntry for an entity whose EntityKey we know.
Similarly, the ObjectContext has a helper method to get an entity, i.e. TryGetObjectByKey(). TryGetObjectByKey() uses the GetObjectStateEntry method under the covers to find the object; however, one important difference between these two methods is that TryGetObjectByKey queries the database if it is unable to find the object in the context, whereas TryGetObjectStateEntry only looks in the context for existing entries and will not make a trip to the database.

=============================================================
POCO objects with EFv1
To create POCO objects that can be used with EFv1, we need to implement three key interfaces: IEntityWithKey, IEntityWithRelationships, and IEntityWithChangeTracker. Implementing IEntityWithKey is not mandatory, but if you don't, you need to explicitly provide EntityKey values in various places (e.g. in the members needed to implement IEntityWithChangeTracker and IEntityWithRelationships). Implementing IEntityWithKey involves exposing a property named EntityKey which returns an EntityKey object. Implementing IEntityWithChangeTracker involves implementing a method named SetChangeTracker, since there can be multiple change trackers (ObjectContexts) in memory at the same time:

public void SetChangeTracker(IEntityChangeTracker changeTracker)
{
    _changeTracker = changeTracker;
}

Additionally, each property in the POCO object needs to notify the change tracker that it is updating itself, by calling the EntityMemberChanging and EntityMemberChanged methods on the change tracker. For example:

public EntityKey EntityKey
{
    get { return _entityKey; }
    set
    {
        if (_changeTracker != null)
        {
            _changeTracker.EntityMemberChanging("EntityKey");
            _entityKey = value;
            _changeTracker.EntityMemberChanged("EntityKey");
        }
        else
            _entityKey = value;
    }
}

===================== Custom Property ====================================

[EdmScalarPropertyAttribute(IsNullable = false)]
public System.DateTime OrderDate
{
    get { return _orderDate; }
    set
    {
        if (_changeTracker != null)
        {
            _changeTracker.EntityMemberChanging("OrderDate");
            _orderDate = value;
            _changeTracker.EntityMemberChanged("OrderDate");
        }
        else
            _orderDate = value;
    }
}

Finally, you also need to create the EntityState property as follows:

public EntityState EntityState
{
    get { return _changeTracker.EntityState; }
}

IEntityWithRelationships involves creating a property that returns a RelationshipManager object:

public RelationshipManager RelationshipManager
{
    get
    {
        if (_relManager == null)
            _relManager = RelationshipManager.Create(this);
        return _relManager;
    }
}

============================================================
Tip: ProviderManifestToken – change the EDMX file to use SQL 2008 instead of SQL 2005
To target SQL Server 2008, edit the EDMX file (the raw XML), changing the ProviderManifestToken in the SSDL attributes from "2005" to "2008".

=============================================================
With EFv1 we cannot use structs in place of an anonymous type when doing projections in a LINQ to Entities query. While this is supported with LINQ to SQL, it is not with LINQ to Entities. For example, the following is not supported with LINQ to Entities, since only parameterless constructors and initializers are supported in LINQ to Entities
(the same works with LINQToSQL):

public struct CompanyInfo
{
    public int ID { get; set; }
    public string Name { get; set; }
}

var companies = (from c in dc.Companies
                 where c.CompanyIcon == null
                 select new CompanyInfo { Name = c.CompanyName, ID = c.CompanyId }).ToList();
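To make the "dispose only if required" advice above a little more concrete, here is a minimal sketch of the intended ordering. It reuses the hypothetical BAEntities context and Contact properties from the examples above, so treat the entity and property names as assumptions rather than a definitive implementation; the point is that the same ObjectContext stays alive from the query through SaveChanges(), and is disposed only afterwards.

using System;
using System.Linq;

class DisposeAfterSaveChangesSketch
{
    static void Main()
    {
        // Keep a single ObjectContext alive for the whole unit of work so it
        // can change-track the entities it materialized.
        BAEntities ctx = new BAEntities();
        try
        {
            // ToList() consumes all records, so the connection is not left open.
            var contacts = (from c in ctx.Contacts
                            where c.FirstName == "Jose"
                            select c).ToList();

            // Modify the tracked entities; the context records the changes
            // through its ObjectStateEntry objects.
            foreach (var contact in contacts)
            {
                contact.FirstName = contact.FirstName.Trim();
            }

            // Persist the changes before the context goes away.
            ctx.SaveChanges();
        }
        finally
        {
            // Only now is it safe to dispose of the context.
            ctx.Dispose();
        }
    }
}

A using statement would work just as well here; what matters is that Dispose() does not run between materializing the entities and calling SaveChanges().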

    Read the article

  • CodePlex Daily Summary for Thursday, May 27, 2010

    CodePlex Daily Summary for Thursday, May 27, 2010New ProjectsBinding Navigator: Clone of WinForms BindingNavigator that is able to work with any type of DataSource. For full functionality it requires the DataSource to implement...DEWD: DEWD is an Umbraco 4.0 extension used to edit sequential data such as rows in a SQL server database table. It's meant to allow developers to quickl...Eletronic Invoice Extensions: A simple DLL to use against XSD and validate a XML.EssionCalendar: EssionCal EssionCalExpression Encoder Batch Processor: Importing your videotapes into Windows is easy with the built-in utilities, but if your importer does not encode to your desired format, you need t...Feedback Form: Feedback application makes it easier for attendees who attend an seminar/event and event organizers. Organizers of the event will no longer need to...Find in Start Menu: Find in Start MenuIE8 Web Slices Pack: IE8 Web Slices Pack is a package of 5 types of web slices ready to be customized via a mini-CMS for your web site or for your custom IE8 installer....KafTK: Iskakov AzamatMicrosoft Dynamics NAV text export splitter: Utility for splitting Microsoft Dynamics NAV object exports (in text format) into files that each contain a single object. MultiPoint Vote: Voting is more innovative and catchy through this MultiPoint application. Using MultiPoint SDK 1.5 and Visual C#, this prototype emphasizes the cap...NebDotNet: NebDotNetnsim: Some simulation issues.OpenLight Group Common Lib: This project is a set of classes commonly used across OpenLight Group projects.pstsdk.net: .NET port of PST File Format SDK: pstsdk.net makes it easier for .NET developers to access the PST file format. This is a direct C# port of the PST File Format SDK project which is ...RavenMVC: A NoSQL Demo App using RavenDB and ASP.NET MVC 2.Remics: Remics is a toolkit for reverse engineering tools. Open source (MIT license). The goal of Remics is enabling developers (or researchers) quickly...RIA Services to Legacy DAL Integration Library: RIA Services to Legacy DAL Integration LibrarySharePoint List Adapters for SSIS: SSIS source and destination components to access SharePoint lists using basic authenticationSql Query Modelling Language: This project library creates simple Sql queries.Thumb nail creator and image resizer: a user control to create thubnails and resize images for displayTK: study projectViperWorks Ignition: Ignition is application framework for WinForms and WPF business applications. Built in webservice generation, reporting and rapid application devel...XNA Image Reflector: XNA Image Reflector allows you to add Web 2.0-like reflections to images in a few clicksXNArkanoid: XNArkanoid is a Windows Phone 7 remake of the classic Taito´s Arkanoid. It´s developed in C#, using XNA Framework v4.0.New ReleasesAcies: Acies - Alpha Build 0.0.5: First alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249...Ajax Toolkit for ASP.NET MVC: Ajax Toolkit for ASP.NET MVC v20100527: change array datasource to json datasource allow lowercaseAttribute Builder: Attribute Builder 1.0: Attribute Builder 1.0Support for parameter-less constructors, constructors with parameters and object initialization with field and property assign...Binding Navigator: TTBindingNavigator preview: First binary release. 
Only control without samples.Bojinx: Bojinx Core V4.5.15: Bugs fixed: Clean up in the event handler Added more disposal features to better clean up contexts on unload. Optimized command processing fur...CBM-Command: Version 1.0 - 2010-05-26: Release Notes - 2010-05-26 - Version 1.0New Features None Changes Fixed bug where you could move to an unopened panel Fixed bug where you could ...Community Forums NNTP bridge: Community Forums NNTP Bridge V03: This is the second release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open sourcen NNTP Bridge. ...Community Forums NNTP bridge: Community Forums NNTP Bridge V04: This is the second release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open sourcen NNTP Bridge. ...Community Forums NNTP bridge: Community Forums NNTP Bridge V05: This is the second release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open sourcen NNTP Bridge. ...Community Forums NNTP bridge: Community Forums NNTP Bridge V06: This is the second release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open sourcen NNTP Bridge. ...Community Forums NNTP bridge: Community Forums NNTP Bridge V07: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open sourcen NNTP Bridge. This release solves...Community Forums NNTP bridge: Community Forums NNTP Bridge V08: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release solves ...Dot Game: Dot Game Network version: Now, you can play over network.EpsiRisk: EPSI RISK V1: First Stable Version of EpsiRISK ! Enjoy !Event Scavenger: Viewer 3.3.0: Added grouping functionality to Viewer. Group expanding/collapsing only supported under Windows Vista/7 and onwards. Viewer version set to 3.3.0.FAST for Sharepoint MOSS 2010 Query Tool: Version 1.0: Full Release Added sorting Added custom trim duplicates Added UI improvements Added ignore certificate errorsFoursquare for Windows Phone 7: Foursquare 2010.05.26.01: Foursquare 2010.05.26.01 Updates: Corrected issue with isPrivate, sendToTwitter, and sendToFacebookHobbyBrew Mobile: Beta 3: ATTENZIONE notifica nuove versioni via email disponibile (leggi in fondo)! Supporto alla rotazione dello schermo: Malti e Luppoli si affiancano or...IE8 Web Slices Pack: IE8 Web Slices Pack v2.0: This release of the Web Slices Pack supports 5 different Web Slices and a mini CMS to administer the info in the Web Slices.manx: manx data 1.0: Initial dump of data. 
This is a raw SQL dump script that deletes all existing data and inserts the initial dataset as received from Paul Williams....Microsoft Web Protection Library: WPL May CTP: This preview of the Web Protection Library includes AntiXSS - an updated Anti-Cross Site Scripting Library removes some bugs and is now usable in ...MultiPoint Vote: MultiPoint Vote v.1: MultiPoint Vote v1 Features This accepts user inputs: number of participants, poll/survey title and the list of options A text file containing th...NLog - Advanced .NET Logging: Nightly Build 2010.05.26.001: Changes since the last build:2010-05-25 19:21:49 Jarek Kowalski Reordered parameters of AsynchronousAction<T> 2010-05-25 19:16:02 Jarek Kowalski N...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.124: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.124: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's NewThis is a minor re...OpenExpressApp let business people build application: OpenExpressApp for .Net4: 从.Net3.5 SP1 升级到.Net4 ,升级主要内容: 1. 解决了一些内存泄露问题 2. 修改了一些bug 3. 进行了部分代码重构 4. 使用MEF替代了PrismRavenMVC: RavenMVC 0.1: A NoSQL demo app in ASP.NET MVC using RavenDB.SharePoint List Adapters for SSIS: Initial Release: Contains the raw assemblies necessary to use the SharePoint list adapters with basic authentication. See the read me file for details on using in ...Sql Query Modelling Language: Sql Qml V1: This is first versionThumb nail creator and image resizer: ThumbnailCreator 1.0: This is the very first release I built it for my self so it a bit rough but i thought others might fint it usefulluptime.exe: uptime.exe v1.1: Changed the calculation of the uptime. It's now based on the LastbootUpTime value obtained from WMI.VCC: Latest build, v2.1.30526.0: Automatic drop of latest buildViperWorks Ignition: Test: TestXmlCodeEditor: Release 0.91 Alpha: Release 0.91 AlphaXNA Image Reflector: XNA Image Reflector v 1.1: This release has been compiled with XNA Game Studio 3.1, so you will need to download the XNA Runtime 3.1 in order to run it.Xna.Extend: Xna Extend (Ver 0.0.1 beta): This is the betas, betas, beta, test version (Version 0.0.1 beta). It includes only the input and audio components. If you experience any errors, o...XNArkanoid: XNArkanoid v 0.2b: XNArkanoid v 0.2. Initial beta releaseMost Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active ProjectsAStar.netpatterns & practices – Enterprise Librarypatterns & practices: Windows Azure Security GuidanceSqlServerExtensionsMono.AddinsBlogEngine.NETCustomer Portal Accelerator for Microsoft Dynamics CRMRawrCodeReviewGMap.NET - Great Maps for Windows Forms & Presentation

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.

The Generic Collections: System.Collections.Generic namespace
The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons, because depending on how you are using the collections it's completely possible that synchronization may not be required or may be needed at a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at the beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.

The Associative Collection Classes
Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order. The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should implement GetHashCode() and Equals() appropriately, or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order. The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted.
The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order, thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key. The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare.

The Non-Associative Containers
The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored or some other means (such as index) to manipulate the collection. The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However, inserting and removing at the beginning or middle of the List<T> is very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed. The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense. The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle.
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction. The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>. Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T> on the other hand is a first-in-first-out container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines. So that's the basic collections. Let's summarize what we've learned in a quick reference table.

Collection | Ordered? | Contiguous Storage? | Direct Access? | Lookup Efficiency | Manipulate Efficiency | Notes
Dictionary | No | Yes | Via Key | Key: O(1) | O(1) | Best for high performance lookups.
SortedDictionary | Yes | No | Via Key | Key: O(log n) | O(log n) | Compromise of Dictionary speed and ordering, uses binary search tree.
SortedList | Yes | Yes | Via Key | Key: O(log n) | O(n) | Very similar to SortedDictionary, except tree is implemented in an array, so has faster lookup on preloaded data, but slower loads.
List | No | Yes | Via Index | Index: O(1), Value: O(n) | O(n) | Best for smaller lists where direct access required and no ordering.
LinkedList | No | No | No | Value: O(n) | O(1) | Best for lists where inserting/deleting in middle is common and no direct access required.
HashSet | No | Yes | Via Key | Key: O(1) | O(1) | Unique unordered collection, like a Dictionary except key and value are same object.
SortedSet | Yes | No | Via Key | Key: O(log n) | O(log n) | Unique ordered collection, like SortedDictionary except key and value are same object.
Stack | No | Yes | Only Top | Top: O(1) | O(1)* | Essentially same as List<T> except only process as LIFO
Queue | No | Yes | Only Front | Front: O(1) | O(1) | Essentially same as List<T> except only process as FIFO

The Original Collections: System.Collections namespace
The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention the original collections and their generic equivalents:

ArrayList: A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
Hashtable: Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
Queue: First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
SortedList: Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
Stack: Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

In general, the older collections are not type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries.

The Concurrent Collections: System.Collections.Concurrent namespace
The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each, with a link to a blog post I did on each of them.

ConcurrentQueue: Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentStack: Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentBag: Thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
ConcurrentDictionary: Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
BlockingCollection: Wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read. Writers can block until space is available to write (if bounded). For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection

Summary
The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower or even harder to maintain if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.

Technorati Tags: C#,.NET,Collections,Generic,Concurrent,Dictionary,List,Stack,Queue,SortedList,SortedDictionary,HashSet,SortedSet
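As a quick illustration of the guidance above, here is a small, self-contained sketch (the vendor codes and order data are made-up sample values) showing the membership-check scenario with HashSet<T>, a keyed lookup with Dictionary<TKey,TValue>, and ordered enumeration with SortedDictionary<TKey,TValue>:

using System;
using System.Collections.Generic;

class CollectionChoiceDemo
{
    static void Main()
    {
        // HashSet<T>: unique, unordered items with O(1) membership checks,
        // e.g. "do we handle this vendor code?"
        var handledVendors = new HashSet<string> { "ACME", "GLOBEX", "INITECH" };
        Console.WriteLine(handledVendors.Contains("ACME"));   // True
        Console.WriteLine(handledVendors.Contains("TYRELL")); // False

        // Dictionary<TKey,TValue>: O(1) lookup of an order by its order id.
        var ordersById = new Dictionary<int, string>
        {
            { 1002, "GLOBEX - 3 sprockets" },
            { 1001, "ACME - 12 widgets" }
        };
        string order;
        if (ordersById.TryGetValue(1001, out order))
        {
            Console.WriteLine(order);
        }

        // SortedDictionary<TKey,TValue>: O(log n) operations, but enumeration
        // returns the entries in key order.
        var sortedOrders = new SortedDictionary<int, string>(ordersById);
        foreach (KeyValuePair<int, string> entry in sortedOrders)
        {
            Console.WriteLine("{0}: {1}", entry.Key, entry.Value);
        }
    }
}

Swapping the Dictionary for the SortedDictionary in a hot path trades the O(1) hash lookup for an O(log n) tree lookup, which is exactly the trade-off described in the table above.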

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
BEGIN
    DROP TABLE dbo.Example;
END
;

-- Test table is a heap
-- Non-clustered primary key on 'key_col'
CREATE TABLE dbo.Example
(
    key_col INTEGER NOT NULL,
    data    INTEGER NOT NULL,
    CONSTRAINT [PK dbo.Example key_col]
        PRIMARY KEY NONCLUSTERED (key_col)
)
;

-- Non-unique non-clustered index on the 'data' column
CREATE NONCLUSTERED INDEX [IX dbo.Example data]
ON dbo.Example (data)
;

-- Add 100 rows
INSERT dbo.Example WITH (TABLOCKX)
(
    key_col,
    data
)
SELECT
    key_col = V.number,
    data    = V.number
FROM master.dbo.spt_values AS V
WHERE V.[type] = N'P'
AND V.number BETWEEN 1 AND 100
;

-- ================
-- Singleton lookup
-- ================
;

-- Single value equality seek in a unique index
-- Scan count = 0 when STATISTICS IO is ON
-- Check the XML SHOWPLAN
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 32
;

-- ===========
-- Range Scans
-- ===========
;

-- Query 1
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col <= 5
ORDER BY E.key_col ASC
;

-- Query 2
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col > 95
ORDER BY E.key_col DESC
;

-- Query 3
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 10
AND E.key_col < 15
ORDER BY E.key_col ASC
;

-- Query 4
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 20
AND E.key_col < 25
ORDER BY E.key_col DESC
;

-- Final query (singleton + range = 2 range scans)
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 10
OR E.key_col BETWEEN 15 AND 20
ORDER BY E.key_col ASC
;

-- === TIDY UP ===
DROP TABLE dbo.Example;

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi

    Read the article

< Previous Page | 13 14 15 16 17 18 19  | Next Page >