Search Results

Search found 26297 results on 1052 pages for 'unit test'.

Page 467/1052 | < Previous Page | 463 464 465 466 467 468 469 470 471 472 473 474  | Next Page >

  • Is there a clean separation of my layers with this attempt at Domain Driven Design in XAML and C#?

    - by Buddy James
    I'm working on an application. I'm using a mixture of TDD and DDD. I'm working hard to separate the layers of my application and that is where my question comes in. My solution is laid out as follows Solution MyApp.Domain (WinRT class library) Entity (Folder) Interfaces(Folder) IPost.cs (Interface) BlogPosts.cs(Implementation of IPost) Service (Folder) Interfaces(Folder) IDataService.cs (Interface) BlogDataService.cs (Implementation of IDataService) MyApp.Presentation(Windows 8 XAML + C# application) ViewModels(Folder) BlogViewModel.cs App.xaml MainPage.xaml (Contains a property of BlogViewModel MyApp.Tests (WinRT Unit testing project used for my TDD) So I'm planning to use my ViewModel with the XAML UI I'm writing a test and define my interfaces in my system and I have the following code thus far. [TestMethod] public void Get_Zero_Blog_Posts_From_Presentation_Layer_Returns_Empty_Collection() { IBlogViewModel viewModel = _container.Resolve<IBlogViewModel>(); viewModel.LoadBlogPosts(0); Assert.AreEqual(0, viewModel.BlogPosts.Count, "There should be 0 blog posts."); } viewModel.BlogPosts is an ObservableCollection<IPost> Now.. my first thought is that I'd like the LoadBlogPosts method on the ViewModel to call a static method on the BlogPost entity. My problem is I feel like I need to inject the IDataService into the Entity object so that it promotes loose coupling. Here are the two options that I'm struggling with: Not use a static method and use a member method on the BlogPost entity. Have the BlogPost take an IDataService in the constructor and use dependency injection to resolve the BlogPost instance and the IDataService implementation. Don't use the entity to call the IDataService. Put the IDataService in the constructor of the ViewModel and use my container to resolve the IDataService when the viewmodel is instantiated. So with option one the layers will look like this ViewModel(Presentation layer) - Entity (Domain layer) - IDataService (Service Layer) or ViewModel(Presentation layer) - IDataService (Service Layer)
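
    As a rough sketch of the second option – and only a sketch, since the question does not show the IDataService members, so the GetPosts(int count) method below is a hypothetical name – the view model could take the service in its constructor and let the container wire it up:

    using System.Collections.ObjectModel;

    // IBlogViewModel, IDataService and IPost are the interfaces from the question's layers.
    public class BlogViewModel : IBlogViewModel
    {
        private readonly IDataService _dataService;

        // The container injects the IDataService implementation (e.g. BlogDataService).
        public BlogViewModel(IDataService dataService)
        {
            _dataService = dataService;
            BlogPosts = new ObservableCollection<IPost>();
        }

        public ObservableCollection<IPost> BlogPosts { get; private set; }

        public void LoadBlogPosts(int count)
        {
            BlogPosts.Clear();
            // GetPosts(count) is a hypothetical IDataService member used here for illustration.
            foreach (var post in _dataService.GetPosts(count))
            {
                BlogPosts.Add(post);
            }
        }
    }

    With this shape the test in the question passes or fails purely on what the resolved IDataService returns, and the entity stays free of service dependencies.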

    Read the article

  • Content Catalog Live!

    - by marius.ciortea
    The Oracle OpenWorld, JavaOne and Oracle Develop 2010 content catalog is live. You can peruse most of the almost 2,000 sessions available this year at OpenWorld, JavaOne and Oracle Develop, including session titles, abstracts, track info, and confirmed speakers. You can find the latest on JDK 7, deep dives on the JVM, REST, JavaFX, JSF, Enterprise Java, Seam, OSGi, HTTP, Swing, GWT, Groovy, JRuby, Unit Testing, Metro, Lift, Comet, jclouds, Hudson, Scala, [insert technology here], etc. To access the Content Catalog, just look under Tools on the right side of this page. You can tag content in the catalog so you--and others who do what you do, or think the way you think--can easily find this year's don't-miss sessions. Take a few minutes to look around, and start planning your most productive/informative/valuable JavaOne ever! Schedule Builder, where you can sign up for sessions, will be up in July.

    Read the article

  • Using the Script Component as a Conditional Split

    This is a quick walk through on how you can use the Script Component to perform Conditional Split like behaviour, splitting your data across multiple outputs. We will use C# code to decide what data flows to which output, rather than the expression syntax of the Conditional Split transformation. Start by setting up the source. For my example the source is a list of SQL objects from sys.objects, just a quick way to get some data: SELECT type, name FROM sys.objects type name S syssoftobjrefs F FK_Message_Page U Conference IT queue_messages_23007163 Shown above is a small sample of the data you could expect to see. Once you have set up your source, add the Script Component, selecting Transformation when prompted for the type, and connect it up to the source. Now open the component, but don’t dive into the script just yet. First we need to select some columns. Select the Input Columns page and then select the columns we want to use as part of our filter logic. You don’t need to choose columns that you may want later; these are just the columns used in the script itself. Next we need to add our outputs. Select the Inputs and Outputs page. You get one by default, but we need to add some more, it wouldn’t be much of a split otherwise. For this example we’ll add just one more. Click the Add Output button, and you’ll see a new output is added. Now we need to set some properties, so make sure our new Output 1 is selected. In the properties grid change the SynchronousInputID property to be our input Input 0, and change the ExclusionGroup property to 1. Now select Output 0 and change the ExclusionGroup property to 2. This value itself isn’t important, provided each output has a different value other than zero. By setting this property on both outputs it allows us to split the data down one or the other, making each exclusive. If we left it at 0, that output would get all the rows. It can be a useful feature allowing you to copy selected rows to one output whilst retaining the full set of data in the other. Now we can go back to the Script page and start writing some code. For the example we will do a very simple test: if the value of the type column is U, for user table, then it goes down the first output, otherwise it ends up in the other. This mimics the exclusive behaviour of the conditional split transformation. public override void Input0_ProcessInputRow(Input0Buffer Row) { // Filter all user tables to the first output, // the remaining objects down the other if (Row.type.Trim() == "U") { Row.DirectRowToOutput0(); } else { Row.DirectRowToOutput1(); } } The code itself is very simple, a basic if clause that determines which of the DirectRowToOutput methods we call; there is one for each output. Of course you could write a lot more code to implement some very complex logic, but the final direction is still just a method call. If we now close the script component, we can hook up the outputs and test the package. Your numbers will vary depending on the sample database, but as you can see we have clearly split our input data into two outputs. As a final tip, when adding the outputs I would normally rename them, changing the Name in the Properties grid. This means the generated methods follow the pattern, as do the path labels shown on the design surface, making everything that much easier to recognise.
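
    To illustrate that last tip, suppose the two outputs were renamed to UserTables and OtherObjects (names invented here for the example). The script component then generates direction methods carrying those names, so the same logic reads more naturally:

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Same split logic as above, but the generated DirectRowTo* methods
        // now carry the output names instead of Output0/Output1.
        if (Row.type.Trim() == "U")
        {
            Row.DirectRowToUserTables();
        }
        else
        {
            Row.DirectRowToOtherObjects();
        }
    }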

    Read the article

  • Copying files to zfs mountpoint doesn't work - the files aren't actually copied to the other filesystem

    - by user113904
    I have 3 x 4 TB disks in a NAS that I want to group together and access as if they were one whole 'unit' of some kind. I also have a 250GB disk containing the OS - this is full of films and tv shows currently. I thought zfs sounded good so I created a raidz zpool, after installing the ppa sudo zpool create store raidz /dev/sdb /dev/sdc /dev/sdd and set the mountpoint to /mnt/store sudo zfs set mountpoint=/mnt/store /store checked it was successful - I think it was sudo zfs list NAME USED AVAIL REFER MOUNTPOINT store 266K 7.16T 170K /mnt/store Then I wanted to move over a whole load of files from my home directory. I went to where the to-be-copied folder was (called media) and entered sudo cp -R * /mnt/store cp: cannot create directory `/mnt/store/media': No space left on device It seems like it's not copying over to the new filesystem I made (or thought I did). I've never really done this type of thing until a few days ago so may be running before I can walk... is this not the right way to copy files across? I've only used windows before so the idea of mountpoints is a bit mind boggling. I'm using XBMCbuntu 12 beta 2.0 which is based on 12.04. Will retry with normal Ubuntu 12.04 desktop to see if that's the problem. thanks for the help!

    Read the article

  • Code Contracts and Pex at MSDN Live 2010

    - by terje
    One of the six sessions Mikael Nitell and I are running on MSDN Live 2010 here in Norway is about Code Quality, and part of that session goes through the use of Code Contracts and Pex.  Both are fantastic tools! They can be used together, but are also completely independent of each other, and each can be used on its own. Code Contracts has to be downloaded separately from VS 2010 (it also works on VS 2008).   Start looking at http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx . This is a free download.   Code Contracts originates from the ideas of Bertrand Meyer – Design by Contract; take a look here. Pex is found on the MSDN Subscription download, so it requires an active MSDN Subscription. Get it from http://research.microsoft.com/en-us/projects/pex/downloads.aspx .  The current version as of 14.4.10 is 0.9, which works with the 2010 RC.  A new version is due this week.  Pex is a tool to generate unit tests, and does this very intelligently.  It is perfect for making tests for legacy code, but also for making sure you get all paths tested.  See the Reference information and project startup information.
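
    As a flavour of what Code Contracts looks like in code (a minimal illustration, not taken from the session material), preconditions and postconditions are expressed with the System.Diagnostics.Contracts API and checked by the static and runtime checkers:

    using System.Diagnostics.Contracts;

    public class Account
    {
        private int _balance;

        public void Deposit(int amount)
        {
            // Precondition: callers must supply a positive amount.
            Contract.Requires(amount > 0);
            // Postcondition: a deposit never lowers the balance.
            Contract.Ensures(_balance >= Contract.OldValue(_balance));

            _balance += amount;
        }
    }

    Pex can then explore methods like this and generate unit tests for the paths it finds, including inputs that violate the contracts.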

    Read the article

  • Launcher icon size and window behavior broken

    - by philipp
    I have installed the Nvidia driver for my graphics card, just following some tutorials, and it worked fine. After this I could set the icon size of the launcher, windows had a nice little shadow, the resolution was better and windows showed a nice effect when popping up or when being brought to full-screen... But today this was just gone after a reboot. What could this be? Nvidia xserver-settings are available. I installed and reinstalled wine1.5 via the apt-get commands, so this might have broken something. What can I do to fix this again? Greetings philipp EDIT: I went on searching and all I found was that this problem might be connected to the mode of Unity, so there is 2D and 3D, but it could also be something else, because setting the mode brings no change. EDIT 2: the version of Ubuntu is 12.04 and it is a 64-bit environment; the graphics card is a GeForce GT 330M. EDIT 3: Using Google Maps in WebGL mode does not work anymore either; it was working yesterday. EDIT 4: the screenshot. By the way, I think that Blender is not working anymore too... EDIT 5: I think that the problem is closely connected to this output

    Read the article

  • Implementing game rules in a tactical battle board game

    - by Setzer22
    I'm trying to create a game similar to what one would find in a typical D&D board game combat. For more examples you could think of games like Advance Wars, Fire Emblem or Disgaea. I should say that I'm using design by component so far, but I can't find a nice way to fit components into the part I want to ask about. I'm struggling right now with the "game rules" logic. That is, the code that displays the menu, allows the player to select units and command them, then tells the unit game objects what to do given the player input. The best way I could think of handling this was using a big state machine, so everything that could be done in a "turn" is handled by this state machine, and the update code of this state machine does different things depending on the state. This approach, though, leads to a large amount of code (anything not model-related) going into one big class. Of course I can subdivide this big class into more classes, but it doesn't feel modular and upgradable enough. I'd like to know of better systems to handle this in order to be able to upgrade the game with new rules without having a monstrous if/else chain (or switch / case, for that matter). So, any ideas? I'd also like to ask that if you recommend a specific design pattern, you also provide some kind of example or further explanation and not stick to "Yeah you should use MVC and it'll work".
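
    One direction the asker could consider (a minimal, self-contained sketch; all names here are illustrative, not from the question) is to make each turn phase its own class behind a common interface, so adding a rule means adding a state rather than growing a switch:

    public interface ITurnState
    {
        // Handles one piece of player input and returns the state to run next.
        ITurnState Handle(string command);
    }

    public class SelectUnitState : ITurnState
    {
        public ITurnState Handle(string command)
        {
            // Unit-selection logic lives here instead of in one big state machine class.
            return command == "select" ? (ITurnState)new CommandUnitState() : this;
        }
    }

    public class CommandUnitState : ITurnState
    {
        public ITurnState Handle(string command)
        {
            // Issue the order, then hand control back to selection for the next unit.
            return command == "end" ? (ITurnState)new SelectUnitState() : this;
        }
    }

    The game loop just keeps the current ITurnState and replaces it with whatever Handle returns, so new phases or rules slot in without touching the existing ones.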

    Read the article

  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post or Adam’s or Kalen’s posts, there will be two SQL Rally events in Northern Europe. In the Stockholm SQL Rally, with my friend Thomas Kejser, I’ll be delivering a pre-con on Data Warehouse Modeling: Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they have learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time. You'll learn:
    Data warehouse architecture and justification
    Agile methodology
    Dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, Junk and Degenerate Dimensions, and Huge Dimensions
    Best practices, naming conventions, and lessons learned
    Loading the data warehouse, including loading Dimensions, loading Facts (Full Load, Incremental Load, Partitioned Load)
    Data warehouses and Big Data (Hadoop)
    Unit testing
    Tracking historical changes and managing large sizes
    With all the Self-Service BI hype, the data warehouse is becoming more and more central every day, since if everyone is able to analyze data using self-service tools, it's better for them to rely on correct, uniform and coherent data. Already 50 people have registered for the workshop and seats are limited, so don’t miss this opportunity to attend a workshop that is really a unique combination of years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx See you there!

    Read the article

  • SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables

    - by pinaldave
    I love interesting conversations related to SQL Server. One of my friends, Madhivanan, always comes up with an interesting point of conversation. Here is one of the conversations between us. I am very confident this blog post will for sure enable you with some new knowledge. Madhi: How do I know if any table has a uniqueidentifier column used in it? Pinal: I am sure you know that you can do it through some DMV or catalogue views. Madhi: I know that, but how can we do that without using DMVs or catalogue views? Pinal: Hm… what can I use? Madhi: You can use the table name. Pinal: Easy, just say SELECT YourUniqueIdentCol FROM Table. Madhi: Hold on, the question seems to be not clear to you – you do not know the name of the column. As a matter of fact, you do not know if the table has a uniqueidentifier column at all. The only information you have is the table name. Pinal: Madhi, this seems like you are changing the question when I am close to the answer. Madhi: Well, are you clear now? Let me say it again – How do I know if any table has a uniqueidentifier column and what is its value without using any DMV or System Catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Pinal: Do you know the answer? Madhi: Yes. I just wanted to test your knowledge about SQL. Pinal: I will have to think. Let me accept I do not know it right away. Can you share the answer please? Madhi: I won! Here it goes! Pinal: When I have friends like you – who needs enemies? Madhi: (laughter which did not stop for a minute). CREATE TABLE t ( GuidCol UNIQUEIDENTIFIER DEFAULT newsequentialid() ROWGUIDCOL, data VARCHAR(60) ) INSERT INTO t (data) SELECT 'test' INSERT INTO t (data) SELECT 'test1' SELECT $rowguid FROM t DROP TABLE t This is indeed very interesting to me. Please note that this is not the optimal way and there will be many other ways to retrieve the uniqueidentifier column name and value. What I learned from this was that if I am in a rush to check whether a table has a uniqueidentifier column and I do not know its name, I can use SELECT TOP (1) $rowguid and quickly find out the name of the column. I can later use the same column name in my query. Madhi did teach me this new trick. Did you know this? What are other ways to check for the existence of a uniqueidentifier column in a database? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-21

    - by Bob Rhubart
    Webcast: Simplify Oracle RAC Deployment with Oracle VM event.on24.com Tuesday March 20, 2012 - 9am PT / Noon ET Learn how you can: Deploy an Oracle (RAC) Database environment in minutes with Oracle VM templates Create, deploy or convert existing systems into highly available cluster environments Instantly respond to changing demand by relocating resources between servers Speakers: Ronen Kofman – Product Management Director, Oracle Markus Michalewicz – Senior Principal Product Manager, Oracle Webcast: Oracle Business Intelligence Mobile event.on24.com Event Date: Wednesday, March 28, 2012 Time: 10 a.m. PT / 1 p.m. ET Speakers: Pete Manhardt – Director Enterprise Information at Smiths Group, plc Shailesh Shedge – Director BI & Analytics Practice at Ascentt Manan Goel – Director BI Product Marketing at Oracle Seth's Blog: The extraordinary software development manager sethgodin.typepad.com "Being good at programming is insufficient qualification for becoming a world class software project manager/leader," says marketing guru Seth Godin. Mismatch: Developer skills and customer demands | Floyd Teter orclville.blogspot.com "Those of us in the developer community may need to reconsider the law of supply and demand," says Oracle ACE Director Floyd Teter, "and get on with the process of matching our skills to the demands of our customers." SOA gets mobilized; mobile gets SOA-ized: survey | Joe McKendrick www.zdnet.com "Maybe mobile is the killer app for SOA that actually will convince people to adopt the architectural style." Integrating with Oracle Fusion Applications: Discovering Integration Artifacts | Rajesh Raheja rraheja.wordpress.com Rajesh Raheja briefly discusses "the ease with which integrations are now possible using standards-based technologies with enterprise applications." Chargeback and showback...both a 'throw back' | Tom Laszewski blogs.oracle.com Tom Laszewski discusses strategies for tracking and applying the costs of "IT services, hardware or software to the business unit in which they are used." GlassFish 4.0 Virtualization Progress - VirtualBox | The Aquarium blogs.oracle.com Want to spawn GlassFish instances as VirtualBox virtual machines? The Aquarium shares resources that will help you get it done. Thought for the Day "Spring is the time of plans and projects." — Leo Tolstoy

    Read the article

  • Entry / Jr PHP Programmer - What do I learn next?

    - by dtj
    I got very interested in programming toward the end of college. Took a few classes, but learned most everything on my own via books and such. It's mostly been PHP and MySQL. Right out of school, I got a job working at a company for 2 years (web media) and ended up learning a lot of stuff and programming some things for them. I am no longer at that company but I am looking for my next steps as a programmer. I really enjoy web development, and PHP and MySQL seem to be my thing. Basically, I know how to do CRUD operations, I am mediocre at OOP and still have more to learn, I know HTML and CSS quite well, I know my way around a Unix terminal and can access MySQL through it and set up cron jobs and such. I know some basic JavaScript. What's a good next step? I don't know anything about 3rd party services, PDO, APIs (Twitter, Facebook, etc.), Drupal / Joomla, Unit Testing, E-Commerce, PECL, PEAR... in other words, A LOT. I get easily overwhelmed by the amount of stuff there is to learn, so I'm sort of trying to find a path. Right now, I'm digging into OOP more, as that seems like a good conceptual first step. Any suggestions?

    Read the article

  • CodePlex Daily Summary for Thursday, July 05, 2012

    CodePlex Daily Summary for Thursday, July 05, 2012Popular ReleasesTaskScheduler ASP.NET: Release 2 - 1.1.0.0: Release 2 - Version 1.1.0.0 In this version the following features were added to the library: Event fired on all tasks end The ASP.NET project takes a example of management of scheduled tasks.Umbraco CMS: Umbraco 4.8.0 Beta: Whats newuComponents in the core Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists Easier Lucene searching built in IFile providers for easier file handling Updated 3rd party libraries Applications / Trees moved out of the database SQL Azure support added Various bug fixes Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...CODE Framework: 4.0.20704.0: See CODE Framework (.NET) Change Log for changes in this version.?????????? - ????????: All-In-One Code Framework ??? 2012-07-04: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ???OneCode??????,??????????10????Microsoft OneCode Sample,????4?Windows Base Sample,2?XML Sample?4?ASP.NET Sample。???????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Windows Base Sample CSCheckOSBitness VBCheckOSBitness CSCheckOSVersion VBCheckOSVersion XML Sample CSXPath VBXPath ASP.NET Sample CSASPNETDataPager VBASPNET...sheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.0: The first release of sheetengine. See sheetengine.codeplex.com for a list of features and examples.AssaultCube Reloaded: 2.5.1 Intrepid Fixed: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) If you use the default maprot or any maprot, you need to fix it Well, 2.5 was...xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1: xUnit.net release 1.9.1Build #1600 Important note for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: The VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...NETDeob0: NETDeob 0.2.0: - Big structural changes - Safer signature identification - More accurate signatures - Minor bugs fixed - Minor compability issues fixedMVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. 
The move operation can be undone like the insert, update and delete operatio...BlackJumboDog: Ver5.6.6: 2012.07.03 Ver5.6.6 (1) ???????????ftp://?????????、????LIST?????Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator (1.0.1802.65): Add support for OSDP authenticationCommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentscriptCommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final ReleaseApplication: FluentScript Versio...SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint Artifact files. Please view the discussions tab for ongoing FAQsnopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.60: Highlight features & improvements: • Significant performance optimization. • Use AJAX for adding products to the cart. • New flyout mini-shopping cart. • Auto complete suggestions for product searching. • Full-Text support. • EU cookie law support. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).THE NVL Maker: The NVL Maker Ver 3.51: http://download.codeplex.com/Download?ProjectName=nvlmaker&DownloadId=371510 ????:http://115.com/file/beoef05k#THE-NVL-Maker-ver3.51-sim.7z ????:http://www.mediafire.com/file/6tqdwj9jr6eb9qj/THENVLMakerver3.51tra.7z ======================================== ???? ======================================== 3.51 beta ???: ·?????????????????????? ·?????????,?????????0,?????????????????????? ·??????????????????????????? ·?????????????TJS????(EXP??) ·??4:3???,???????????????,??????????? 
·?????????...????: ????2.0.3: 1、???????????。 2、????????。 3、????????????。 4、bug??,????。Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.0: User Right Licensing ContentType version 2.0.267.1New Projects$ME: a new kind of javascript library.NET Micro Framework Driver Library: .NET Micro Framework Driver Library This is just a library of classes that I have created that can be used by Micro Framework Devices. aishe: ?????????BoogieTools: The goal of this project is to provide editors and additional tools to improve working with the Microsoft Boogie language and tools.BWAPI-CLI: .NET wrapper for the Broodwar API (BWAPI) and Broodwar Terrain Analyzer (BWTA) written in C++/CLICoffee Survey Framework: Coffee Survey Framework is an extensible, XML-based ASP.NET 4.0 framework for building and maintaining tabular survey pages. Dauphine SmartControls for K2 blackpearl: SmartControls for K2 blackpearl simplifies the integration of K2 blackpearl processes with ASP.NET web forms. It is a collection of ASP.NET web controls with both design-time and run-time capabilities for code-less integration to K2 blackpearl processes. The underlying framework of SmartControls for K2 blackpearl can also be used to extend existing ASP.NET web controls with the same K2 blackpearl integration capabilities that it offers.Easy Full-Text Search Queries: Very lightweight class to convert user-friendly search queries into Microsoft SQL Server Full-Text Search queries. Gracefully handles errors in input query.EnvironmentCheck: A Windows Forms application which will read from an XML config a number of checks which need to be performed to verify that a server meets pre-reqs.Lindeberg edge detector: Just a simple program I wrote for 'Image processing fundamentals' course MesanAnsatte: Prosjekt for utforskning av .net. Multi-Touch Scrum tool: This is a application use to support Scrum software development, based on Microsoft Surface SDK 2.0.MyWCFService: This project is a simple explanation for WCF service, that can help you understand how the WCF works!!race4fun engine: Open source driving simulator. Modable engine for easy use. SchoolManagerMVC: School Management System, written in MVC3 with unity framework, DI unity and unit testsSharePoint Managed Metadata Claims Provider: Custom Claims Provider implementation for SharePoint. Claims Playground.SQL data access: .NET library for accessing a Microsoft SQL Server database.sqlscriptmover: Exports stored procedures, function, views, tables and triggers to individual files or imports same and attempts to create in designated database.

    Read the article

  • Drawing lines in 3D space

    - by DeadMG
    When attempting to draw a line in 3D space with D3DPT_LINELIST, then Direct3D gives me an error about an invalid vertex declaration, saying that it cannot be converted to an FVF. I am using the same vertex declaration and shader/stream setup as for my D3DPT_TRIANGLELIST rendering which works absolutely correctly. How can I use D3DPT_LINELIST to render some lines in 3D space? Edit: Oopsie, forgot my codeses. Here's my raw Draw call. D3DCALL(device->SetStreamSource(1, PerBoneBuffer.get(), 0, sizeof(PerInstanceData))); D3DCALL(device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1)); D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3))); D3DCALL(device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | lines.size())); D3DCALL(device->SetIndices(LineIndices)); PerInstanceData* data; std::vector<Wide::Render::Line*> lines_vec(lines.begin(), lines.end()); D3DCALL(PerBoneBuffer->Lock(0, lines.size() * sizeof(PerInstanceData), reinterpret_cast<void**>(&data), D3DLOCK_DISCARD)); std::for_each(lines.begin(), lines.end(), [&](Wide::Render::Line* ptr) { data->Color = D3DXColor(ptr->Colour); D3DXMATRIXA16 Translate, Scale, Rotate; D3DXMatrixTranslation(&Translate, ptr->Start.x, ptr->Start.y, ptr->Start.z); D3DXMatrixScaling(&Scale, ptr->Scale, 1, 1); D3DXMatrixRotationQuaternion(&Rotate, &D3DQuaternion(ptr->Rotation)); data->World = Scale * Rotate * Translate; }); D3DCALL(PerBoneBuffer->Unlock()); D3DCALL(device->DrawIndexedPrimitive(D3DPRIMITIVETYPE::D3DPT_LINELIST, 0, 0, 2, 0, 1)); Here's my vertex declaration: D3DVERTEXELEMENT9 BasicMeshVertices[] = { {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0}, {1, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0}, {1, 16, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1}, {1, 32, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2}, {1, 48, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3}, {1, 64, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0}, D3DDECL_END() }; The LineIndices are just 0, 1 and the LineVerts are just {0,0,0} and {1,0,0}, to represent a unit 3D line along the X axis.

    Read the article

  • Ubuntu 12.10 unmet dependencies

    - by John
    I have Ubuntu 12.10 with 3.2.0.24-generic #39-Ubuntu running on a Dell Inspiron 700m laptop. I have unmet dependencies as follows: root@John-700m:/home/John# sudo apt-get remove --purge linux-image-3.5.0-{18,27,31,34}-generic Reading package lists... Done Building dependency tree Reading state information... Done Package 'linux-image-3.5.0-18-generic' is not installed, so not removed Package 'linux-image-3.5.0-27-generic' is not installed, so not removed Package 'linux-image-3.5.0-31-generic' is not installed, so not removed Package 'linux-image-3.5.0-34-generic' is not installed, so not removed You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: linux-generic : Depends: linux-image-generic but it is not going to be installed linux-image-extra-3.5.0-18-generic : Depends: linux-image-3.5.0-18-generic but it is not going to be installed linux-image-extra-3.5.0-27-generic : Depends: linux-image-3.5.0-27-generic but it is not going to be installed linux-image-extra-3.5.0-31-generic : Depends: linux-image-3.5.0-31-generic but it is not going to be installed linux-image-extra-3.5.0-34-generic : Depends: linux-image-3.5.0-34-generic but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). I tried to remove the packages, but I still get the same error message. Any help would certainly be appreciated. @Eric, well noted. Here is the reults: model name : Intel(R) Pentium(R) M processor 1.60GHz flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2 Thanks again. @Eric, If I need to downgrade to Ubuntu 12.04 from 12.10, can I do it without having to mess my current partition or files? I also have Windows XP running on the machine. Best. @Eric, if you are online, I could use your help (or anybody's else). I installed the package for non-pae unit and I still get the same error. Any suggestions? Thanks

    Read the article

  • Memory concerns while plotting escape from DLL Hell in Delphi

    - by Peter Turner
    I work on a program with about 50 DLLs that are loaded from one executable, it's an old organically grown program where the only rationale for creating a new DLL is that one previously didn't exist to fill a given need. (and namespaces didn't exist in Delphi so it never crossed our mind to make dll1.main.pas, dll2.main.pas or something even more unique) What we want to do is consolidate all these DLLs into one executable, since none of them are used out of the program, there shouldn't be much of a problem. The concern my boss has is that if we did this, the memory overhead for terminal server clients would go through the roof. So, I've stepped through enough initialization code to know that lots of stuff is done every time a DLL is loaded in to memory, but say I've got a project with about 4000 files, and 50 dlls, 10 of which are probably utilized by any one user in any one session of the program. The 50 dlls are about 2/3rds form files, if not more, but beyond that there's not a lot of other resources being loaded (only a few embedded pictures, icons, cursors, etc..). If I loaded all these files in to memory, how much memory is used per unit? how much is used per class? How do I keep the overhead down? and what is the biggest project one can reasonably expect to build with Delphi? This tidbit won't help answering, but I think it might clarify what my boss is worried about, we currently start our program at about 18megs, normal working conditions are usually less than 40 megs, he thinks it could climb as high as 120 megs.

    Read the article

  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics other than just code coverage. FxCop comes to mind as an example. Though I'm sure there are others. Anything that can generate useful reportable data and metrics would be good. Whether it's class dependency charts (looking for Law of Demeter violations, for example), analyses of the uses of classes/functions (looking for a function that isn't used in the system other than just the tests, for example), and so on. I'm not sure the right way to formulate the question, since polling questions or "What's your favorite code analysis tool" aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them. The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all relatively be at an equilibrium, and if one starts to stray toward the negative then it's an early indication of problems with the code. In the age old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.

    Read the article

  • What Poor Project Management Might Be Costing You

    - by Sylvie MacKenzie, PMP
    For project-intensive organizations, capital investment decisions define both success and failure. Getting them wrong—the risk of delays and schedule and cost overruns are ever present—introduces the potential for huge financial losses. The resulting consequences can be significant, and directly impact both a company’s profit outlook and its share price performance—which in turn is the fundamental measure of executive performance. This intrinsic link between long-term investment planning and short-term market performance is investigated in the independent report Stock Shock, written by a consultant from Clarity Economics and commissioned by the EPPM Board. A new international steering group organized by Oracle, the EPPM Board brings together senior executives from leading public and private sector organizations to explore the critical role played by enterprise project and portfolio management (EPPM). Stock Shock reviews several high-profile recent project failures, and combined with other research reviews the lessons to be learned. It analyzes how portfolio management is an exercise in balancing risk and reward, a process that places the emphasis firmly on executives to correctly determine which potential investments will deliver the greatest value and contribute most to the bottom line. Conversely, it also details how poor evaluation decisions can quickly impact the overall value of an organization’s project portfolio and compromise long-range capital planning goals. Failure to Deliver—In Search of ROI The report also cites figures from the Economist Intelligence Unit survey that found that more organizations (12 percent) expected to deliver planned ROI less than half the time, than those (11 percent) who claim to deliver it 90 percent or more of the time. This fact is linked to a recent report from Booz & Co. that shows how the average tenure of a global chief executive has fallen from 8.1 years to 6.3 years. “Senior executives need to begin looking at effective project delivery not as a bonus, but as an essential facet of business success,” according to Stock Shock author Phil Thornton. “Consolidated and integrated visibility into individual projects is the most practical solution to overcoming these challenges, which explains the increasing popularity of PPM technologies as an effective oversight and delivery platform.” Stock Shock is available for download on the EPPM microsite at http://www.oracle.com/oms/eppm/us/stock-shock-report-1691569.html

    Read the article

  • Uganda .NET Usergroup April meeting

    - by Malisa L. Ncube
    Our April meeting was presented by Wilson Kutegeka on the topic of building the data access layer. In his presentation he showed a tool which he has developed to generate the entities and stored procedures, used to reduce having to retype the same boilerplate code for each entity. He used Visual Basic samples to demonstrate access to the data from the database, inheriting his classes from an abstract class which contains common properties including connection strings and save and delete methods. A number of questions emerged from the group, mostly from those that use business-model-based approaches. Some of the questions were on unit testing and mocking the models without using the database, and the use of IoC containers and loosely coupled patterns. Some of the questions were on caching, LINQ support and data-annotations-based validation. The presentation details can be found here. Intellisense LTD agreed to sponsor our website and we are glad to have that, as we really need to have a website running. We would like to thank the following companies for supporting our community activities: Apress, Telerik, Manning, DevExpress (CodeRush), NCover, and Intellisense.   Technorati Tags: Uganda .NET Usergroup
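
    The pattern described translates roughly to the following C# shape (the talk used Visual Basic; the names below are illustrative, not taken from the presentation):

    public abstract class DataAccessBase
    {
        // Common plumbing shared by every generated entity class.
        protected string ConnectionString { get; private set; }

        protected DataAccessBase(string connectionString)
        {
            ConnectionString = connectionString;
        }

        // Each generated entity supplies its own persistence logic.
        public abstract void Save();
        public abstract void Delete();
    }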

    Read the article

  • How do I choose the scaling factor of a 3D game world?

    - by concept3d
    I am making a 3D tank game prototype with some physics simulation; I am using C++. One of the decisions I need to make is the scale of the game world in relation to reality. For example, I could consider 1 in-game unit of measurement to correspond to 1 meter in reality. This feels intuitive, but I feel like I might be missing something. I can think of the following as potential problems: 3D modelling program compatibility. (?) Numerical accuracy. (Does this matter?) Especially at large scales, how games like Battlefield have huge maps: how do they avoid losing numerical accuracy if they use a 1:1 mapping with real-world scale, since floating point representations tend to lose more precision with larger numbers (e.g. with ray casting, physics simulation)? Gameplay. I don't want the movement of units to feel slow or fast while using almost real-world values like -9.8 m/s^2 for gravity. (This might be subjective.) Is it OK to scale imported assets up/down, or is it best to fit them into a world at their original scale? Rendering performance. Are large meshes with the same vertex count slower to render? I'm wondering if I should split this into multiple questions...
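
    The numerical accuracy point is easy to demonstrate in a few lines (a standalone illustration, not from the question): a 32-bit float carries roughly seven significant decimal digits, so with 1 unit = 1 meter, millimetre-sized steps simply vanish once coordinates reach the tens of kilometres:

    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            float nearOrigin = 1.0f + 0.001f;          // a 1 mm step near the origin is kept
            float farAway    = 100000.0f + 0.001f;     // the same step 100 km out is rounded away

            Console.WriteLine(nearOrigin);             // 1.001
            Console.WriteLine(farAway == 100000.0f);   // True - the addition had no effect
        }
    }

    This is one reason large-world games keep a modest world scale, use doubles for simulation, or recentre the coordinate origin around the camera.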

    Read the article

  • Swapping from NHibernate to Entity Framework – Sanity Check

    - by DesigningCode
    Now I’m not an expert in either of these techs.  I have a nice framework for unit of work / repository built with NHibernate.  Works pretty well.  I use Fluent NHibernate to do the mappings.  Works well.  Takes very little code to get going with a DB-backed OO model. So why swap? LINQ.  In Entity Framework you get much better LINQ support.  Visibility. I have no idea what's really happening with NHibernate… it's a cloud of mystery most of the time.  You have to read all the blogs, mailing lists, etc. to know what's going on. So, EF 4.0 looks pretty good…  it has reasonably good support for mapping POCOs.  Wrapping UnitOfWork and Repository around it seems OK. The only thing I haven’t liked too much is having to explicitly load lazy-loaded entities. So… am I sane?  Is EF the way to go?  Or is NHibernate going to suddenly release the next generation of coolness?  Are there any other major gotchas of using EF over NHibernate?
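
    For what it's worth, the wrapper can stay very thin. Here is a minimal sketch of the shape it might take (shown against the DbContext API that arrived with EF 4.1 rather than the plain EF 4.0 ObjectContext; BlogContext and Post are illustrative names, not from the post):

    using System;
    using System.Data.Entity;
    using System.Linq;

    public class Post
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }

    public class BlogContext : DbContext
    {
        public DbSet<Post> Posts { get; set; }
    }

    public class UnitOfWork : IDisposable
    {
        private readonly BlogContext _context = new BlogContext();

        // Expose IQueryable so callers get the richer LINQ support mentioned above.
        public IQueryable<Post> Posts
        {
            get { return _context.Posts; }
        }

        public void Commit()
        {
            _context.SaveChanges();
        }

        public void Dispose()
        {
            _context.Dispose();
        }
    }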

    Read the article

  • Tyrus 1.8

    - by Pavel Bucek
    Another version of Tyrus, the reference implementation of JSR 356 – Java API for WebSocket, is out! The complete list of fixes and features is below, but let me describe some of the new features in more detail. All information presented here is also available in the Tyrus documentation. What’s new? The first thing to mention is that the JSR 356 Maintenance Review Ballot is over and the change proposed for the 1.1 release was accepted. More details about the changes in the API can be found in this article. The important part is that Tyrus 1.8 implements this API, meaning you can use Lambda expressions and some features of Nashorn without the need for any workarounds. Almost all other features are related to client-side support, which was significantly improved in this release. Firstly – I have to admit that the Tyrus client contained a security issue – SSL hostname verification was not performed when connecting to “wss” endpoints. This was fixed as part of TYRUS-339 and resulted in some changes in the client configuration API. Now you can control whether hostname verification should be performed (SslEngineConfigurator#setHostnameVerificationEnabled(boolean)) or even set your own HostnameVerifier (please use carefully): #setHostnameVerifier(…). A detailed description can be found in the Host Verification chapter. Another related enhancement is support for the HTTP Basic and Digest authentication schemes. The Tyrus client now enables users to provide credentials, and the underlying implementation will take care of everything else. Our implementation is strictly non pre-emptive, so the login information is always sent as a response to a 401 HTTP status code. If Basic and Digest are not good enough and there is a need to use some custom scheme or something which is not yet supported in Tyrus, a custom Authenticator can be registered and the authentication part of the handshake process will be handled by it. Please see the Client HTTP Authentication chapter in the user guide for more details. There are other features, like fine-grained thread pool configuration for the JDK client container, built-in HTTP redirect support and some reshuffling related to unifying the location of client configuration classes and properties definitions – every property should now be part of the ClientProperties class. All new features are described in the user guide – in the chapter Tyrus proprietary configuration. Update – Tyrus 1.8.1 There was another, slightly late-reported issue related to running in environments with a SecurityManager enabled, so this version fixes that. Other noteworthy fixes are TYRUS-355 and TYRUS-361; the first one is about an incorrect thread factory used for the shared container timeout, which resulted in the JVM waiting for that thread and not exiting as it should. The other fix enables relative URIs in the Location header when using the redirect feature.
Links Tyrus homepage mailing list JIRA Complete list of changes: Bug [TYRUS-333] – Multiple endpoints on one client [TYRUS-334] – When connection is closed by a peer, periodic heartbeat pong is not stopped [TYRUS-336] – ReaderBuffer.getNextChars() keeps blocking a server thread after client has closed the session [TYRUS-338] – JDK client SSL filter needs better synchronization during handshake phase [TYRUS-339] – SSL hostname verification is missing [TYRUS-340] – Test PathParamTest are not stable with JDK client [TYRUS-341] – A control frame inside a stream of continuation frames is treated as the part of the stream [TYRUS-343] – ControlFrameInDataStreamTest does not pass on GF [TYRUS-345] – NPE is thrown, when shared container timeout property in JDK client is not set [TYRUS-346] – IllegalStateException is thrown, when using proxy in JDK client [TYRUS-347] – Introduce better synchronization in JDK client thread pool [TYRUS-348] – When a client and server close connection simultaneously, JDK client throws NPE [TYRUS-356] – Tyrus cannot determine the connection port for a wss URL [TYRUS-357] – Exception thrown in MessageHandler#OnMessage is not caught in @OnError method [TYRUS-359] – Client based on Java 7 Asynchronous IO makes application unexitable Improvement [TYRUS-328] – JDK 1.7 AIO Client container – threads – (setting threadpool, limits, …) [TYRUS-332] – Consolidate shared client properties into one file. [TYRUS-337] – Create an SSL version of Basic Servlet test New Feature [TYRUS-228] – Add client support for HTTP Basic/Digest Task [TYRUS-330] – create/run tests/servlet/basic via wss [TYRUS-335] – [clustering] – introduce RemoteSession and expose them via separate method (not include remote sessions in the getOpenSessions()) [TYRUS-344] – Introduce Client support for HTTP Redirect

    Read the article

  • Oracle Brings Analytics to Project Management

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss  Nonprofit and for-profit organizations have many differences, but there is one way they are alike—managers struggle with huge amounts of data generated every day. Project data by itself has limited use—but any organization that can gain insight to make accurate predictions or to use resources more effectively can gain an operational advantage. Oracle’s Primavera P6 Analytics 2.0 business intelligence solution enables organizations using Oracle’s Primavera P6 Professional Project Management to do just that: identify critical issues and uncover trends in stores of project data. Primavera P6 Analytics provides management with the ability to look at not only how a single effort is progressing, but also how the entire organization is doing from a project perspective. The latest release includes new features that make it even easier to gather and analyze critical information. For example, the addition of geocoding gives Primavera P6 Analytics users the ability to track resources geographically on longitude and latitude and use a map to get an overall view of how projects, programs, and activities are deployed. “A nonprofit with relief projects in Vietnam, for example, can drill down to the project and get a world view and a regional view,” says Yasser Mahmud, vice president of product strategy and industry marketing in Oracle’s Primavera Global Business Unit. “Then they can drill down further to show statistics; key performance indicators; and how that program, portfolio, or project work is actually getting done.” The addition of new mobile capabilities to Primavera P6 Analytics puts deep-dive analysis into project managers’ hands with compatibility with major tablet operating systems. Now, nonprofits or for-profits working in remote locations can provide real-time visibility into projects to alert management if issues are occurring that need to be addressed immediately. “Primavera P6 Analytics generates information that can help organizations improve their utilization and trim down overall operating costs,” says Mahmud. “But more importantly, it gives organizations improved visibility.”

    Read the article

  • Is DQS-in-the-cloud on its way?

    - by jamiet
    LinkedIn profiles are always a useful place to find out what's really going on in Microsoft. Today I stumbled upon this little nugget from former SSIS product team member Matt Carroll: March 2012 – December 2012 (10 months)Redmond, WA Took ownership of the SQL 2012 Data Quality Services box product and re-architected and extended it to become a cloud service. Led team and managed product to add dynamic scale, security, multi-tenancy, deployment, logging, monitoring, and telemetry as well as creating new Excel add-in and new ecosystem experience around easily sharing and finding cleansing agents. Personally designed, coded, and unit tested in-memory trigram matching algorithm core to better performance, scale and maintainability. Delivered and supported successful private preview of the new service prior to SQL wide reorganization.  http://www.linkedin.com/profile/view?id=9657184  Sounds as though a Data-Quality-Services-in-the-cloud (which I spoke of as being a useful addition to Microsoft's BI portfolio in my previous blog post Thoughts on Power BI for Office 365 ) might be on its way some time in the future. And what's this SQL wide reorganization? Interesting stuff. @Jamiet  

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties: Streams with ‘open’ inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName) Streams with ‘partially bound’ inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …)) Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC) The stream may be embedded (where Server.Create is used) The stream may be remote (where Server.Connect is used) When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: Add a fourth variety (use CepStream<> to represent streams that are bound the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are: IStreamable<>, which logically represents a temporal stream. IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type. The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do no impact existing StreamInsight applications using the existing types! SelectMany StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL’s “CROSS APPLY” operator): static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector) It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn’t supported, its type in StreamInsight 1.0 – 2.0 wasn’t carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers: static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector) Using Unit as the type for the parameter accurately reflects the StreamInsight’s capabilities: static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector) For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads: from x in xs from y in ys select f(x, y) Top-K The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows: (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. 
The following compiles but fails at runtime: (from win in xs.SnapshotWindow() from x in win orderby x.A select win).Take(k) The Take operator in 2.1 is applied to the window rather than the stream: Before After (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) from win in xs.SnapshotWindow() from b in     (from x in win     orderby x.A     select x.B).Take(k) select b Multicast We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways: Implicit Explicit var ys = from x in xs          where x.A > 1          select x; var zs = from y1 in ys          from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))          select y1 + y2; var ys = from x in xs          where x.A > 1          select x; var zs = ys.Multicast(ys1 =>     from y1 in ys1     from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))     select y1 + y2; Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation. Default window policies Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime. Before After xs.SnapshotWindow(     WindowInputPolicy.ClipToWindow,     SnapshotWindowInputPolicy.Clip) xs.SnapshotWindow() xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.PointAlignToWindowEnd) xs.TumblingWindow(     TimeSpan.FromSeconds(1)) xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.ClipToWindowEnd) Not supported … LeftAntiJoin Representation of LASJ as a correlated sub-query in the LINQ surface is problematic as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported ‘IsEmpty()’ operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead: Before After from x in xs where     (from y in ys     where x.A > y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, (x, y) => x.A > y.B) from x in xs where     (from y in ys     where x.A == y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, x => x.A, y => y.B) ApplyWithUnion The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads: Before After xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count()) xs.GroupBy(x => x.A).SelectMany(     gs =>     from win in gs.SnapshotWindow()     select win.Count()) xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload }) from x in xs group x by x.A into gs from win in gs.SnapshotWindow() select new { gs.Key, Count = win.Count() } Alternate UDO syntax The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. 
Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form: from win in xs.SnapshotWindow() from y in MyUdo(win) select y Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream: from win in xs.SnapshotWindow() select MyUdo(win) The “many-or-one” confusion is exemplified by the following example that compiles but fails at runtime: from win in xs.SnapshotWindow() select MyUdo(win) + win.Count() The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one. Original syntax New alternate syntax from win in xs.SnapshotWindow() select win.UdoProxy(1) from win in xs.SnapshotWindow() from y in win.UserDefinedOperator(() => new Udo(1)) select y -or- from win in xs.SnapshotWindow() from y in win.UdoMacro(1) select y Notice that this formulation also sidesteps the dynamic type pitfalls of the existing “proxy method” approach to UDOs, in which the type of the UDO implementation (TInput, TOuput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method. UDSO syntax UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface. Before After xs.Scan(new Udso()) xs.Scan(() => new Udso()) Name changes ShiftEventTime => AlterEventStartTime: The alter event lifetime overload taking a new start time value has been renamed. CountByStartTimeWindow => CountWindow

    Read the article

  • We've Been Busy: World Tour 2010

    - by [email protected]
    By Brian Dayton on March 11, 2010 11:17 PM Right after Oracle OpenWorld 2009 we went right into planning for our 2010 World Tour. An ambitious 90+ city tour visiting cities on every continent. The Oracle Applications Strategy Update Tour started January 19th and is in full swing right now. We've put some heavy hitters on the road. If you didn't get a chance to see Steve Miranda, Senior Vice President of Oracle Application Development in Tokyo, Anthony Lye, Senior Vice President of Oracle CRM Development in New Delhi or Sonny Singh, Senior Vice President of Oracle Industries Business Unit in Stockholm don't worry...we're not done yet. The theme, Smart Strategies: Your Roadmap to the Future is a nod to the fact that everyone needs to be smart about what's going on in their business and industry right now. But just as important---how to make sure that you're on the course to where you need to be down the road. Get the big picture and key trends in "The New Normal" of today's business climate and drill down and find out about the latest and greatest innovations in Oracle Applications. Check out http://www.oracle.com/events/applicationstour/index.html for an upcoming tour date near you. Pictures, feedback, summaries and learnings from the tour to come soon.

    Read the article
