Search Results

Search found 1100 results on 44 pages for 'bitwise operators'.


  • How to find next (by a single parameter) element in c++? (stl) [closed]

    - by user2136963
    I have n humans of class THuman. Each human has scored some points in each of two rounds (score1 and score2). Each human has a unique id; the score1 and score2 values are also unique. Besides, each human has score_t = score1 + score2, which can be the same for two of them. I need to add six fields to THuman, each holding the id of the human with the: next bigger score1, next smaller score1, next bigger score2, next smaller score2, next bigger score_t, next smaller score_t (if many humans satisfy one of these conditions, the one with the smallest difference in the corresponding parameter should be chosen, like score1 for the first two). In other words, this is some kind of storing three sortings of the humans. The two other functions I need should take an argument x, set score1 or score2 to x, and then refresh whichever of the six fields above are affected. If I needed sorting by only one variable, I would simply create a std::set and define the < operator for my class. But what is the solution for three parameters? Is it possible to use the STL here, or should I create my own lists/treaps?
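    One way to attack this (a minimal sketch, not taken from the original thread; THuman's members and the helper names are illustrative) is to keep one std::set of pointers per ordering, each with its own comparator, and find neighbours with std::set::find plus iterator stepping:

        #include <set>

        struct THuman {
            int id;
            int score1;
            int score2;
            int score_t() const { return score1 + score2; }
        };

        // One comparator per ordering. score1/score2 are unique; for the
        // possibly-equal score_t we break ties by id to keep the set strict.
        struct ByScore1 {
            bool operator()(const THuman* a, const THuman* b) const {
                return a->score1 < b->score1;
            }
        };
        struct ByScoreT {
            bool operator()(const THuman* a, const THuman* b) const {
                if (a->score_t() != b->score_t())
                    return a->score_t() < b->score_t();
                return a->id < b->id;
            }
        };

        std::set<THuman*, ByScore1> byScore1;  // plus an analogous set for score2
        std::set<THuman*, ByScoreT> byScoreT;

        // Id of the human with the next bigger score1, or -1 if h is the top.
        // Assumes h has already been inserted into byScore1.
        int nextBiggerScore1(THuman* h) {
            std::set<THuman*, ByScore1>::iterator it = byScore1.find(h);
            ++it;
            return it == byScore1.end() ? -1 : (*it)->id;
        }

        // To change a score, erase from the affected sets first, mutate,
        // then re-insert, so every ordering stays consistent.
        void setScore1(THuman* h, int x) {
            byScore1.erase(h);
            byScoreT.erase(h);
            h->score1 = x;
            byScore1.insert(h);
            byScoreT.insert(h);
        }

    Boost.MultiIndex packages the same idea in a single container, but three plain std::sets keep the mechanics visible.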

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said “generally” and not “always”; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I’ll demonstrate such a scenario, one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}. Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic will be: look up a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate]. The conventional approach to this would be to use a full cached lookup, but that isn’t an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators.
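    A sketch of such a statement, reconstructed from the column names above (not the exact query from the original screenshots; the ? markers stand for the lookup's input parameters):

        SELECT [CustomerId]
        FROM   [Customer]
        WHERE  [CustomerName] = ?
          AND  [SCDStartDate] <= ?
          AND  ? <= [SCDEndDate]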
    Let’s take a look at the dataflow: notice these are all synchronous components. This approach works just fine, however it does have the limitation that it has to issue a SQL statement against your lookup set for every row, thus we can expect the execution time of our dataflow to increase linearly with the number of rows in our dataflow; that’s not good.

    OK, that’s the obvious method. Let’s now look at a different way of achieving this, using an asynchronous Merge Join transform coupled with a Conditional Split. I’ve shown it post-execution so that I can include the row counts, which help to illustrate what is going on here. Notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

    I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo. For those that can’t be bothered watching the video, I’ll tell you the results here: the dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary.

    The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors. It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup is performing a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow.

    I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet

    Read the article

  • OWB 11gR2 – OLAP and Simba

    - by David Allan
    Oracle Warehouse Builder was the first ETL product to provide a single integrated and complete environment for managing enterprise data warehouse solutions that also incorporate multi-dimensional schemas. The OWB 11gR2 release provides Oracle OLAP 11g deployment for multi-dimensional models (in addition to support for prior releases of OLAP). This means users can easily utilize Simba's MDX Provider for Oracle OLAP (see here for details and cost), which allows you to use the powerful and popular ad hoc query and analysis capabilities of Microsoft Excel PivotTables® and PivotCharts® with your Oracle OLAP business intelligence data.

    The extensions to the dimensional modeling capabilities have been built on established relational concepts, with the option to seamlessly move from a relational deployment model to a multi-dimensional model at the click of a button. This means that ETL designers can logically model a complete data warehouse solution using one single tool and control the physical implementation of a logical model at deployment time. As a result, data warehouse projects that need to provide a multi-dimensional model as part of the overall solution can be designed and implemented faster and more efficiently.

    Wizards for dimensions and cubes let you quickly build dimensional models and realize them either relationally or as an Oracle Database OLAP implementation; both 10g and 11g formats are supported, based on a configuration option. The wizard provides a good first-cut definition, and the objects can be further refined in the editor. Both wizards let you choose the implementation; to deploy to OLAP in the database, select MOLAP: multidimensional storage. You will then be asked which levels and attributes are to be defined. By default the wizard creates a level-based hierarchy; parent-child hierarchies can be defined in the editor. Once the dimension or cube has been designed, there are special mapping operators that make it easy to load data into the objects; below we load a constant value for the total level and the other levels from a source table.

    Again, when the cube is defined using the wizard, we can edit the cube and define a number of analytic calculations by using the 'generate calculated measures' option on the measures panel. This lets you very easily add a lot of rich analytic measures to your cube. For example, one of the measures is the percentage difference from a year ago, which we can see in detail below. You can also add your own custom calculations to leverage the capabilities of the Oracle OLAP option, from selecting existing template types such as moving averages to defining true custom expressions.

    The 11g OLAP option now supports percentage-based summarization (the amount of data to precompute and store); this is available from the 'cost based aggregation' option in the cube's configuration. Ensure all measure-dimension level-based aggregation is switched off (on the cube-dimension panel) - previously, level-based aggregation was the only option. The 11g generated code now uses the new unified API; as you see below, to generate the code OWB needs a valid connection to a real schema. This was not needed before 11gR2 and is a new requirement, since the OLAP API which OWB uses is not an offline one.

    Once all of the objects are deployed and the maps executed, then we get to the fun stuff! How can we analyze the data? One option which is powerful and at many users' fingertips is using Microsoft Excel PivotTables® and PivotCharts®, which can be used with your Oracle OLAP business intelligence data by utilizing Simba's MDX Provider for Oracle OLAP (see the Simba site for details of cost). I'll leave the exotic reporting illustrations to the experts (see Bud's demonstration here), but with Simba's MDX Provider for Oracle OLAP it's very simple to access the analytics stored in the database (all built and loaded via the OWB 11gR2 release) and get the regular features of Excel at your fingertips, such as the conditional formatting features for example.

    That's a very quick run through of OWB 11gR2 with respect to Oracle 11g OLAP integration and reporting using Simba's MDX Provider for Oracle OLAP. Not a deep dive in any way, but a quick overview to illustrate the design capabilities and integrations possible.

    Read the article

  • Silverlight Cream for March 09, 2011 -- #1057

    - by Dave Campbell
    In this Issue: Dennis Doomen, Peter Kuhn, Michael Crump, Joe McBride, Martin Krüger, Jeremy Likness, Manas Patnaik, Jesse Liberty(-2-), WindowsPhoneGeek(-2-).

    Above the Fold:
    Silverlight: "A highlighting AutoCompleteBox in Silverlight" Peter Kuhn
    WP7: "WP7 WatermarkedTextBox custom control" WindowsPhoneGeek

    Shoutouts:
    Karl Shifflett announced that he and Josh Smith have heard the developers and released a demo: Mole 2010 Demo Released
    This is a somewhat older post, but the material is good and I was reminded of it while talking to Josh Smith at the MVP summit last week: Advanced MVVM ... money well-spent

    From SilverlightCream.com:

    Introducing the Silverlight Cookbook - Dennis Doomen unveils a CodePlex site "containing a Silverlight 4 app that includes most of the complexities you might run into" ... I'm tagging this in my WynApse outlookbar... great stuff, Dennis!

    A highlighting AutoCompleteBox in Silverlight - Peter Kuhn took on a task in response to a forum query and created a highlighting AutoCompleteBox, and is giving it to us... this really looks cool, Peter, and great explanation.

    Taking a look at the Mindscape Phone Elements for WP7 - Michael Crump takes a good look at the Mindscape Phone Elements for WP7... and if you read closely you might still be able to get a free license!

    Windows Phone – "Can't connect to your phone. Disconnect it, restart it, then try connecting again." - Joe McBride explains a way out of an issue that many should be seeing as we repave or replace machines... how to get our device recognized on the updated machine... without it giving cryptic messages.

    How to: only with the full visibility of an application in the browser window start an action - Martin Krüger continues his journey in starting storyboards and tackles the condition that the application is completely in the browser window prior to the storyboard starting.

    A Numeric Input Control for Windows Phone 7 - Jeremy Likness came up with a great idea for numeric input for WP7 ... you'll smile when you see it, but what a great idea... and a NumericTextBox to go along with it.

    Performing CRUD on Relational Data (Multiple table) using RIA in SL4 - Manas Patnaik has a post up that breaks the normal blog post or demo mold by having two tables with a relational constraint and doing CRUD operations on them. Plenty of diagrams and good information.

    Select Many: Reactive Extensions' Mother Of All Operators [Chaining] - Jesse Liberty has part 9 in his Rx series up, and is looking at SelectMany this time, and chaining calls. He's using WPF for the sample, but the goodness is all there for us Silverlight guys too.

    The Full Stack 8 - Adding Search to the Phone Client - Jesse Liberty and Jon Galloway have part 8 of their Full Stack series up ... this is the MVC3, ASP.NET, Silverlight, and WP7 app development series... this time out they're putting Search in the Phone client.

    All about ResourceDictionary in WP7 - WindowsPhoneGeek is discussing ResourceDictionaries in this post... beginning with what a ResourceDictionary is and continuing out through creating and using one, plus a good comment on merging.

    WP7 WatermarkedTextBox custom control - In his next post, WindowsPhoneGeek walks us through the creation of a WatermarkedTextBox for WP7 right from the derivation from TextBox... very nice tutorial and lots of code/examples.

    Stay in the 'Light!
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • SQLUG Events - London/Edinburgh/Cardiff/Reading - Masterclass, NoSQL, TSQL Gotcha's, Replication, BI

    - by tonyrogerson
    We have acquired two additional tickets to attend the SQL Server Master Class with Paul Randal and Kimberly Tripp next Thurs (17th June). For a chance to win these coveted tickets, email us ([email protected]) before 9pm this Sunday with the subject "MasterClass" - people previously entered need not worry, you're still in with a chance. The winners will be announced Monday morning.

    As ever, plenty going on physically. We've got dates for a stack of events in Manchester and Leeds, and I'm looking at Birmingham if anybody has ideas. We are growing our online community with the Cuppa Corner section; to participate online, remember to use the #sqlfaq twitter tag. For those wanting to get more involved in presenting and fancy trying it out, we are always after people to do 1 - 5 minute SQL nuggets or Cuppa Corners (short presentations) at any of these User Group events - just email us at [email protected]. Want removing from this email list? Then just reply with "remove please" on the subject line.

    Kimberly Tripp and Paul Randal Master Class - Thurs, 17th June - London
    REGISTER NOW AND GET A SECOND REGISTRATION FREE*
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers!
    This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance.
    The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:
    >> Debunk many of the ingrained misconceptions around SQL Server's behaviour
    >> Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    >> Explain how a common application design pattern can wreak havoc in the database
    >> Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!
    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010
    *REGISTER TODAY AT www.regonline.co.uk/kimtrippsql - on the registration form simply quote discount code BOGOF for both yourself and your colleague and you will save 50% off each registration - that's a 249 GBP saving! This offer is limited, book early to avoid disappointment.

    Wed, 23 Jun - READING - Evening Meeting, More info and register
    Introduction to NoSQL (Not Only SQL) - Gavin Payne; T-SQL Gotcha's and how to avoid them - Ashwani Roy; Introduction to Recency Frequency - Tony Rogerson; Reporting Services - Tim Leung

    Thu, 24 Jun - CARDIFF - Evening Meeting, More info and register
    Alex Whittles of Purple Frog Systems talks about data warehouse design case studies; other BI-related session TBC

    Mon, 28 Jun - EDINBURGH - Evening Meeting, More info and register
    Replication (Components, Administration, Performance and Troubleshooting) - Neil Hambly; Server Upgrades (Notes and best practice from the field) - Satya Jayanty

    Wed, 14 Jul - LONDON - Evening Meeting, More info and register
    Meeting is being sponsored by DBSophic (http://www.dbsophic.com/download), database optimisation software.
    Physical Join Operators in SQL Server - Ami Levin; Workload Tuning - Ami Levin; SQL Server and Disk IO (File Groups/Files, SSDs, Fusion-IO, in-RAM DBs, Fragmentation) - Tony Rogerson; Complex Event Processing - Allan Mitchell

    Many thanks,
    Tony Rogerson, SQL Server MVP
    UK SQL Server User Group
    http://sqlserverfaq.com

    Read the article

  • Big Data – Data Mining with Hive – What is Hive? – What is HiveQL (HQL)? – Day 15 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the operational database in the Big Data story. In this article we will understand what Hive and HQL are in the Big Data story. Yahoo started working on PIG (we will understand that in the next blog post) for their application deployment on Hadoop; Yahoo's goal was to manage their unstructured data. Similarly, Facebook started deploying their warehouse solutions on Hadoop, which resulted in HIVE. The reason for going with HIVE is that traditional warehousing solutions are getting very expensive.

    What is HIVE? Hive is a data warehousing infrastructure for Hadoop. Its primary responsibility is to provide data summarization, query and analysis. It supports analysis of large datasets stored in Hadoop's HDFS as well as on the Amazon S3 filesystem. The best part of HIVE is that it supports SQL-like access to structured data, known as HiveQL (or HQL), as well as big data analysis with the help of MapReduce. Hive is not built to get a quick response to queries; it is built for data mining applications, which can take from several minutes to several hours to analyze the data, and that is where HIVE is primarily used.

    HIVE Organization. The data is organized in three different formats in HIVE (a sketch of a table definition using all three appears at the end of this excerpt):
    Tables: They are very similar to RDBMS tables and contain rows and columns. Hive is just layered over the Hadoop File System (HDFS), hence tables are directly mapped to directories of the filesystem. It also supports tables stored in other native file systems.
    Partitions: Hive tables can have more than one partition. Partitions are mapped to subdirectories in the file system.
    Buckets: In Hive, data may be divided into buckets. Buckets are stored as files within a partition in the underlying file system.
    Hive also has a metastore, which stores all the metadata. It is a relational database containing various information related to Hive schemas (column types, owners, key-value data, statistics etc.). We can use a MySQL database for it.

    What is HiveQL (HQL)? The Hive query language provides the basic SQL-like operations. Here are a few of the tasks which HQL can do easily:
    Create and manage tables and partitions
    Support various relational, arithmetic and logical operators
    Evaluate functions
    Download the contents of a table to a local directory, or the result of queries to an HDFS directory
    Here are examples of HQL queries:
    SELECT upper(name), salesprice FROM sales;
    SELECT category, count(1) FROM products GROUP BY category;
    When you look at the above queries, you can see they are very similar to SQL queries.

    Tomorrow: In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem - Pig.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
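    As promised above, here is a sketch of a HiveQL table definition that exercises all three storage levels (the table and column names are hypothetical, not from the original post):

        -- Rows live in the table's HDFS directory; each distinct sale_date
        -- value becomes a subdirectory (partition); within each partition
        -- the rows are hashed on name into 32 files (buckets).
        CREATE TABLE sales (
          name       STRING,
          salesprice DOUBLE
        )
        PARTITIONED BY (sale_date STRING)
        CLUSTERED BY (name) INTO 32 BUCKETS;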

    Read the article

  • JDK 7u25: Solutions to Issues caused by changes to Runtime.exec

    - by Devika Gollapudi
    The following examples were prepared by Java engineering for the benefit of Java developers who may have faced issues with Runtime.exec on the Windows platform.

    Background: In JDK 7u21, the decoding of command strings specified to the Runtime.exec(String), Runtime.exec(String,String[]) and Runtime.exec(String,String[],File) methods was made more strict. See the JDK 7u21 Release Notes for more information. This caused several issues for applications. The following section describes some of the problems faced by developers and their solutions.

    Note: In JDK 7u25, the system property jdk.lang.Process.allowAmbiguousCommands can be used to relax the checking process, and it serves as a workaround for some applications that cannot be changed. The workaround is only effective for applications that are run without a SecurityManager. See the JDK 7u25 Release Notes for more information.

    Note: To understand the details of the Windows API CreateProcess call, see: http://msdn.microsoft.com/en-us/library/windows/desktop/ms682425%28v=vs.85%29.aspx

    There are two forms of Runtime.exec calls:
    with the command as a string: "Runtime.exec(String command[, ...])"
    with the command as a string array: "Runtime.exec(String[] cmdarray [, ...] )"
    The issues described in this section relate to the first form of call. With the first call form, developers expect the command to be passed "as is" to Windows, where the command would be split into its executable name and arguments parts first. But, in accordance with the Java API, the command argument is split into executable name and arguments by spaces.

    Problem 1: "The file path for the command includes spaces"
    In the call:
    Runtime.getRuntime().exec("c:\\Program Files\\do.exe")
    the argument is split by spaces into an array of strings as: c:\\Program, Files\\do.exe
    The first element of the parsed array is interpreted as the executable name, verified by the SecurityManager (if present) and surrounded by quotations to avoid ambiguity in the executable path. This results in the wrong command:
    "c:\\Program" "Files\\do.exe"
    which will fail.
    Solution: Use the ProcessBuilder class, or the Runtime.exec(String[] cmdarray [, ...] ) call, or quote the executable path. Where it is not possible to change the application code and where a SecurityManager is not used, the Java property jdk.lang.Process.allowAmbiguousCommands could be used by setting its value to "true" from the command line:
    -Djdk.lang.Process.allowAmbiguousCommands=true
    This will relax the checking process to allow ambiguous input.
    Examples:
    new ProcessBuilder("c:\\Program Files\\do.exe").start()
    Runtime.getRuntime().exec(new String[]{"c:\\Program Files\\do.exe"})
    Runtime.getRuntime().exec("\"c:\\Program Files\\do.exe\"")

    Problem 2: "Shell command/.bat/.cmd IO redirection"
    The following implicit cmd.exe calls:
    Runtime.getRuntime().exec("dir > temp.txt")
    new ProcessBuilder("foo.bat", ">", "temp.txt").start()
    Runtime.getRuntime().exec(new String[]{"foo.cmd", ">", "temp.txt"})
    lead to the wrong command: "XXXX" ">" temp.txt
    Solution: To specify the command correctly, use the following options:
    Runtime.getRuntime().exec("cmd /C \"dir > temp.txt\"")
    new ProcessBuilder("cmd", "/C", "foo.bat > temp.txt").start()
    Runtime.getRuntime().exec(new String[]{"cmd", "/C", "foo.cmd > temp.txt"})
    or
    Process p = new ProcessBuilder("cmd", "/C", "XXX").redirectOutput(new File("temp.txt")).start();

    Problem 3: "Group execution of shell commands and/or .bat/.cmd files"
    Due to the enforced verification procedure, arguments in the following calls create the wrong commands:
    Runtime.getRuntime().exec("first.bat && second.bat")
    new ProcessBuilder("dir", "&&", "second.bat").start()
    Runtime.getRuntime().exec(new String[]{"dir", "|", "more"})
    Solution: To specify the command correctly, use the following options:
    Runtime.getRuntime().exec("cmd /C \"first.bat && second.bat\"")
    new ProcessBuilder("cmd", "/C", "dir && second.bat").start()
    Runtime.getRuntime().exec(new String[]{"cmd", "/C", "dir | more"})
    The same scenario also works for the "&", "||", "^" operators of the cmd.exe shell.

    Problem 4: ".bat/.cmd with special DOS chars in quoted params"
    Due to the enforced verification, arguments in the following calls will cause exceptions to be thrown:
    Runtime.getRuntime().exec("log.bat \"error&\"")
    new ProcessBuilder("log.bat", "error&").start()
    Runtime.getRuntime().exec(new String[]{"log.bat", "error&"})
    Solution: To specify the command correctly, use the following options:
    Runtime.getRuntime().exec("cmd /C log.bat \"error&\"")
    new ProcessBuilder("cmd", "/C", "log.bat", "error&").start()
    Runtime.getRuntime().exec(new String[]{"cmd", "/C", "log.bat", "error&"})

    Examples: Complicated redirection for shell construction:
    cmd /c dir /b C:\ > "my lovely spaces.txt"
    becomes
    Runtime.getRuntime().exec(new String[]{"cmd", "/C", "dir /b C:\\ > \"my lovely spaces.txt\""});

    The Golden Rule: In most cases, cmd.exe has two arguments: "/C" and the command for interpretation.
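    Putting the golden rule together with output redirection, a minimal self-contained sketch (the .bat and output file names are hypothetical) looks like this:

        import java.io.File;
        import java.io.IOException;

        public class GoldenRuleDemo {
            public static void main(String[] args)
                    throws IOException, InterruptedException {
                // Hand the whole command line to cmd.exe for interpretation,
                // so exec()/ProcessBuilder never see "&&" or ">" as separate,
                // ambiguous arguments.
                Process p = new ProcessBuilder("cmd", "/C", "first.bat && second.bat")
                        .redirectOutput(new File("out.txt"))   // capture stdout
                        .start();
                System.out.println("exit code: " + p.waitFor());
            }
        }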

    Read the article

  • Is inconsistent formatting a sign of a sloppy programmer?

    - by dreza
    I understand that everyone has their own style of programming and that you should be able to read other people's styles and accept them for what they are. However, would one be considered a sloppy programmer if one's style of coding was inconsistent across whatever standard they were working against? Some examples of inconsistencies might be:
    Sometimes naming private variables with _ and sometimes not
    Sometimes having varying indentations within code blocks
    Not lining braces up (i.e. in the same column), if using the braces-on-new-line style
    Spacing not always consistent around operators, i.e. p=p+1, p+=1 vs other times p =p+1 or p = p + 1, etc.
    Is this even something that as a programmer I should be concerned with addressing? Or is it such a minor nit-picking thing that at the end of the day I should just not worry about it, and worry about what the end user sees and whether the code works, rather than how it looks while working? Is it sloppy programming or just over-obsessive nit-picking?

    EDIT: After some excellent comments I realized I may have left out some information in my question. This question came about after reviewing another colleague's code check-in and noticing some of these things, and then realizing that I've seen these kinds of inconsistencies in previous check-ins. It then got me thinking about my own code and whether I do the same things, and I noticed that I typically don't. I'm not suggesting in this question that his technique is either bad or good, or whether his way of doing things is right or wrong.

    EDIT: To answer some queries with some more good feedback: the specific instance this review occurred in was using Visual Studio 2010 and programming in C#, so I don't think the editor would cause any issues. In fact it should only help, I would hope. Sorry if I left that piece of info out and it affects any current answers. I was trying to be a bit more generic in understanding whether this would be considered sloppy. And to add an even more specific example of a code piece I saw during reading of the check-in:

    foreach(var block in Blocks)
    {
        // .. some other code in here
        foreach(var movement in movements)
        {
            movement.Moved.Zero();
        }
    // the un-formatted brace
      }

    Such a minor thing I know, but many small things add up(?), and I did have to double glance at the code at the time to see where everything lined up. Please note this code was formatted appropriately before this check-in.

    EDIT: After reading some great answers and varying thoughts, the summary I've taken from this was: it's not necessarily a sign of a sloppy programmer; however, as programmers we have a duty (to ourselves and other programmers) to make the code as readable as possible to assist in further ongoing development. However, it can hint at inadequacies, which is something that is only possible to review on a case-by-case (person-by-person) basis. There are many reasons why this might occur. They should be taken in context and worked through with the person/people involved if reasonable. We have a duty to try and help all programmers become better programmers!

    In the good old days, when development was done using good old Notepad (or another simple text editing tool), this occurred much more frequently. However, we have the assistance of modern IDEs now, so although we shouldn't necessarily become OTT about this, it should still probably be addressed to some degree. We as programmers vary in our standards, styles and approaches to solutions. However, it seems that in general we all take PRIDE in our work, and as a trait it is something that can set programmers apart. Making something to the best of our abilities, both internal (code) and external (end-user result), goes a long way to giving us that big fat pat on the back that we may not go looking for but that swells our heart with pride. And finally, to quote CrazyEddie from his post below: don't sweat the small stuff.

    Read the article

  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact.

    1. Location. In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores.

    2. Computer Vision. In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read, around 2008. That's when Occipital came on the scene with their RedLaser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price-compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture.

    3. Augmented Reality. Once the mobile phone had GPS, a video camera, and compass functionality, it was suddenly possible to overlay digital information on the screen in real time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog.

    4. Geo-Fencing. So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal.

    5. Digital Wallet. Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS.

    6. Voice Response. Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened we could see lots more innovation in this area.

    Read the article

  • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform

    - by Michael Snow
    Openmatics s.r.o. was founded in 2010 as a subsidiary of ZF Friedrichshafen AG, a global player in driveline and chassis technology.

    Oracle Customer: Openmatics s.r.o.
    Location: Pilsen, Czech Republic
    Industry: Automotive
    Employees: 70

    Its goal was to develop and operate a flexible, open telematics platform for automotive applications, which is independent from vehicle and component suppliers - recognizing that the fragmented telematics market was not meeting today's fleet management needs. Openmatics provides a rich product portfolio, and customers can extend the platform, as required, to meet their needs. Partners and third parties can develop their own applications using the Openmatics software development kit and can sell them via the Openmatics app shop.

    ZF Friedrichshafen AG is a global player in driveline and chassis technology. With 121 production companies and 650 service partners in 26 countries, ZF is among the top 10 largest automotive suppliers worldwide. Founded in 1915 to develop and produce transmissions for airships and vehicles, the group's product offerings now include transmissions and steering systems as well as chassis components and complete axle systems and modules.

    A word from Openmatics s.r.o.: "Oracle WebCenter Portal, together with the underlying Oracle Application Development Framework, provided the fundamental infrastructure for the Openmatics platform. Fleet managers can now reduce fuel consumption and operating costs, and more efficiently manage vehicle usage, maintenance, and safety. The standards-based platform allows third-party suppliers to deploy their own vehicle telematics services as Openmatics apps and creates a de facto standard for the automotive industry, independent from a single manufacturer or service provider." - Gero Strobel, Head of Development, Openmatics s.r.o.

    Challenges:
    Create an industry standard for vehicle telematics by establishing a customizable platform that enables access to telematics information, such as current and past fuel consumption, through a web browser to better meet automotive market and customer needs
    Reduce fleet-management costs by eliminating the need to invest in isolated telematics hardware and software solutions per vehicle brand and vehicle component manufacturer
    Establish an open platform where third-party providers - such as original equipment manufacturers (OEMs), insurers, fleet operators, and individual developers - can deploy their own vehicle telematics services
    Allow users to purchase targeted telematics services as single apps to reduce costs and ensure rapid growth of telematics services available on the platform
    Enable users to configure their telematics apps with ease, to make sure the platform meets individual fleet management requirements, such as analyzing past and current fuel consumption of a truck fleet

    Solutions:
    Deployed Oracle WebCenter Portal as a foundation for Openmatics, a standards-based automotive telematics platform that provides next-generation fleet management with unified digital communication from and to vehicles on the move
    Used Oracle Application Development Framework as the development framework for Oracle WebCenter Portal's components and services, providing developers with ready-to-use software development kits with application programming interfaces, design templates, and visual tools that accelerated time to market
    Used Oracle Enterprise Pack for Eclipse to simplify telematics application development in Java
    Enabled fleet monitoring by recording vehicle data - such as fuel consumption information - through onboard units, delivering the information to Oracle Database, and making it accessible through a customizable app portfolio on any web browser
    Stored vehicle telematics data - sent as encrypted information - in Oracle Database, ensuring data integrity and immediate availability for the platform's telematics applications
    Enabled a wide range of telematics services suppliers, from vehicle component manufacturers to fleet application developers, to offer vehicle telematics services on the Openmatics platform, ensuring platform independence from OEMs
    Provided Openmatics customers with the means to individually select the automotive telematics services that are relevant to their business requirements, eliminating the need to pay for superfluous information and reducing fleet management costs

    Oracle Products & Services:
    Oracle Application Development Framework
    Oracle WebCenter Portal
    Oracle SOA Suite
    Oracle Enterprise Pack for Eclipse
    Oracle Database
    Oracle Consulting

    Read the article

  • My shiny new gadget

    - by TechTwaddle
    About 3 months ago, when I had tweeted (or twit?) that the HD7 could be my next phone, I wasn't a hundred percent sure, and when the HTC Mozart came out it was switch-at-first-sight. I wanted to buy the Mozart mainly for three reasons: its unibody construction, smaller screen and the SLCD display. But now, holding a HD7 in my hand, I reminisce and think about how fate had its own plan. Too dramatic for a piece of gadget? Well, sort of, but seriously, this has been most exciting. So in short, I bought myself a HTC HD7 and am really loving it so far. Here are some pics (taken from my HD2, which now lies in a corner, crying).

    Most of my day was spent setting up the device: email accounts, Facebook, Marketplace etc. Since Marketplace isn't officially launched in India yet, my primary Live ID did not work. Whenever I tried launching Marketplace it would say 'marketplace is not currently supported in your country'. Searching the forums, I found an easy workaround: just create a dummy Live ID with the country set to UK or US and log in to the device using this ID. I was worried that the contacts and feeds from my primary Live account would not be updated, but that was not a problem. Adding another Live account to the device does import your contacts, calendar and feeds from it. And that's it, Marketplace now works perfectly. I installed a few trial and free applications; haven't checked if I can purchase apps though, will check that later and update this post.

    There is one issue I am still facing with the device: I can't access the internet over GPRS. Windows Phone 7 only gives you the option to add an 'APN' and nothing else. Checking the connection settings on my HD2, I found out that there is also a proxy server I need to add to access GPRS, but so far I haven't found a way to do that on WP7. Ideally HTC should have taken care of this - detect the operator and apply that operator's settings on the device - but looks like that's not happening. I also tried the 'Connection Settings' application that HTC bundled with the device, but it did nothing magical. If you're reading this and know how to fix this problem, please leave a comment.

    The next thing I did was install apps, a lot of apps. Read Engadget's guide to essential apps for WP7. The apps and games I installed so far include Beezz (twitter app with push notifications), Twitter (the official twitter app), Facebook, YouTube, NFS Undercover, Rocket Riot, Krashlander, Unite, and the list goes on. All the apps run super smooth. The display looks fine indoors, but I know it's going to suck in bright sunlight. Anyhow, I am really impressed with what I've seen so far. I leave you with a few more photos. Have a great year ahead. Ciao!

    Read the article

  • Strategies for invoking subclass methods on generic objects

    - by Brad Patton
    I've run into this issue in a number of places and have solved it a bunch of different ways, but I'm looking for other solutions or opinions on how to address it. The scenario is when you have a collection of objects all based off of the same superclass, but you want to perform certain actions based only on instances of some of the subclasses.

    One contrived example of this might be an HTML document made up of elements. You could have a superclass named HTMLElement and subclasses for Headings, Paragraphs, Images, Comments, etc. To invoke a common action across all of the objects, you declare a virtual method in the superclass and specific implementations in all of the subclasses. So to render the document you could loop over all of the different objects in the document and call a common Render() method on each instance. The case I'm interested in is when, again iterating over the same generic collection, I want to perform different actions for instances of specific subclasses (or sets of subclasses). For example (and remember this is just an example), when iterating over the collection, elements with external links need to be downloaded (e.g. JS, CSS, images) and some might require additional parsing (JS, CSS). What's the best way to handle those special cases? Some of the strategies I've used or seen used include:

    Virtual methods in the base class. In the base class you have a virtual LoadExternalContent() method that does nothing, and then you override it in the specific subclasses that need to implement it. The benefit is that in the calling code there is no object testing; you send the same message to each object and let most of them ignore it. There are two downsides that I can think of. First, it can make the base class very cluttered with methods that have nothing to do with most of the hierarchy. Second, it assumes all of the work can be done in the called method and doesn't handle the case where there might be additional context-specific actions in the calling code (i.e. you want to do something in the UI and not the model).

    Methods on the class to uniquely identify the objects. This could include methods like ClassName(), which returns a string with the class name, or other return values like enums or booleans (IsImage()). The benefit is that the calling code can use if or switch statements to filter objects to perform class-specific actions. The downside is that for every new class you need to implement these methods, and the code can look cluttered. Also, performance could be worse than some of the other options.

    Language features to identify the objects. This includes reflection and language operators to identify the objects. For example, in C# there is the is operator, which returns true if the instance matches the specified class. The benefit is no additional code to implement in your object hierarchy. The only downsides seem to be the lack of using something like a switch statement and the fact that your calling code is a little more cluttered.

    Are there other strategies I am missing? Thoughts on best approaches?
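    To make the trade-offs concrete, here is a short C# sketch (the class names follow the HTML example above; the members are hypothetical) contrasting the virtual-method approach with the language-feature approach:

        using System.Collections.Generic;

        abstract class HTMLElement
        {
            // Strategy 1: a common action plus a no-op hook that only
            // resource-bearing subclasses override.
            public abstract void Render();
            public virtual void LoadExternalContent() { }
        }

        class Image : HTMLElement
        {
            public override void Render() { /* draw the image */ }
            public override void LoadExternalContent() { /* download the file */ }
        }

        class Paragraph : HTMLElement
        {
            public override void Render() { /* draw the text */ }
        }

        static class DocumentProcessor
        {
            public static void Process(IEnumerable<HTMLElement> elements)
            {
                foreach (HTMLElement element in elements)
                {
                    element.Render();              // same message to every object
                    element.LoadExternalContent(); // most instances ignore it

                    // Strategy 3: let the language identify the subclass, keeping
                    // context-specific work (e.g. UI updates) in the calling code.
                    Image image = element as Image;
                    if (image != null)
                    {
                        // do something the base class shouldn't know about
                    }
                }
            }
        }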

    Read the article

  • Why is x=x++ undefined?

    - by ugoren
    It's undefined because it modifies x twice between sequence points. The standard says it's undefined, therefore it's undefined. That much I know. But why?

    My understanding is that forbidding this allows compilers to optimize better. This could have made sense when C was invented, but now seems like a weak argument. If we were to reinvent C today, would we do it this way, or can it be done better? Or maybe there's a deeper problem that makes it hard to define consistent rules for such expressions, so it's best to forbid them?

    So suppose we were to reinvent C today. I'd like to suggest simple rules for expressions such as x=x++, which seem to me to work better than the existing rules. I'd like to get your opinion on the suggested rules compared to the existing ones, or other suggestions.

    Suggested Rules:
    Between sequence points, order of evaluation is unspecified.
    Side effects take place immediately.
    There's no undefined behavior involved. Expressions evaluate to this value or that, but surely won't format your hard disk (strangely, I've never seen an implementation where x=x++ formats the hard disk).

    Example Expressions:
    x=x++ - Well defined, doesn't change x. First, x is incremented (immediately when x++ is evaluated), then its old value is stored in x.
    x++ + ++x - Increments x twice, evaluates to 2*x+2. Though either side may be evaluated first, the result is either x + (x+2) (left side first) or (x+1) + (x+1) (right side first).
    x = x + (x=3) - Unspecified, x set to either x+3 or 6. If the right side is evaluated first, it's x+3. It's also possible that x=3 is evaluated first, so it's 3+3. In either case, the x=3 assignment happens immediately when x=3 is evaluated, so the value stored is overwritten by the other assignment.
    x += (x=3) - Well defined, sets x to 6. You could argue that this is just shorthand for the expression above, but I'd say that += must be executed after x=3, and not in two parts (read x, evaluate x=3, add and store new value).

    What's the Advantage? Some comments raised this good point. It's not that I'm after the pleasure of using x=x++ in my code. It's a strange and misleading expression. What I want is to be able to understand complicated expressions. Normally, a complicated expression is no more than the sum of its parts. If you understand the parts and the operators combining them, you can understand the whole. C's current behavior seems to deviate from this principle. One assignment plus another assignment suddenly doesn't make two assignments. Today, when I look at x=x++, I can't say what it does. With my suggested rules, I can, by simply examining its components and their relations.
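    For contrast, a minimal C sketch (illustrative only) of how the expression behaves under today's rules, next to a well-defined equivalent:

        #include <stdio.h>

        int main(void)
        {
            int x = 5;

            /* Undefined behavior in C: x is modified twice with no
               intervening sequence point, so the standard permits any
               result (in practice, typically 5 or 6 depending on the
               compiler):
            x = x++;
            */

            /* A well-defined way to say what was probably meant: */
            x = x + 1;      /* or simply x++;  x is now 6 */
            printf("%d\n", x);

            /* Under the rules suggested above, x = x++ would instead be
               defined to leave x unchanged. */
            return 0;
        }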

    Read the article

  • Pure Server-Side Filtering with RadGridView and WCF RIA Services

    Those of you who are familiar with WCF RIA Services know that the DomainDataSource control provides a FilterDescriptors collection that enables you to filter data returned by the query on the server. We have been using this DomainDataSource feature in our RIA Services with DomainDataSource online example for almost a year now. In the example, we are listening for RadGridView's Filtering event in order to intercept any filtering that is performed on the client and translate it to something that the DomainDataSource will understand, in this case a System.Windows.Data.FilterDescriptor being added or removed from its FilterDescriptors collection. Think of RadGridView.FilterDescriptors as client-side filtering and of DomainDataSource.FilterDescriptors as server-side filtering. We no longer need the client-side one.

    With the introduction of the Custom Filtering Controls feature, many new possibilities have opened up. With these custom controls we no longer need to do any filtering on the client. I have prepared a very small project that demonstrates how to filter solely on the server by using a custom filtering control.

    As I have already mentioned, filtering on the server is done through the FilterDescriptors collection of the DomainDataSource control. This collection holds instances of type System.Windows.Data.FilterDescriptor. The FilterDescriptor has three important properties (a minimal usage sketch follows below):
    PropertyPath: Specifies the name of the property that we want to filter on (the left operand).
    Operator: Specifies the type of comparison to use when filtering. An instance of the FilterOperator enumeration.
    Value: The value to compare with (the right operand). An instance of the Parameter class.
    By adding filters, you can specify that only entities which meet the condition in the filter are loaded from the domain context. In case you are not familiar with these concepts, you might find Brad Abrams' blog interesting.

    Now, our requirement is to create some kind of UI that will manipulate the DomainDataSource.FilterDescriptors collection. When it comes to collections, my first choice of course would be RadGridView. If you are not familiar with the Custom Filtering Controls concept, I would strongly recommend getting acquainted with my step-by-step tutorial Custom Filtering with RadGridView for Silverlight and checking the online example out.

    I have created a simple custom filtering control that contains a RadGridView and several buttons. This control is aware of the DomainDataSource instance, since it is operating on its FilterDescriptors collection. In fact, the RadGridView that is inside it is bound to this collection. In order to display filters that are relevant for the current column only, I have applied a filter to the grid. This filter is a Telerik.Windows.Data.FilterDescriptor and is used to filter the little grid inside the custom control. It should not be confused with the DomainDataSource.FilterDescriptors collection that RadGridView is actually bound to. Those are the RIA filters.

    Additionally, I have added several other features. For example, if you have specified a DataFormatString on your original column, the Value column inside the custom control will pick it up and format the filter values accordingly. Also, I have transferred the data type of the column that you are filtering on to the Value column of the custom control. This will help the little RadGridView determine what kind of editor to show when you begin editing, for example a date picker for DateTime columns.
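    As referenced above, here is the minimal server-side filtering sketch: creating a FilterDescriptor by hand and letting the DomainDataSource re-query. The property name and value are hypothetical; the Value.Value usage mirrors the control code below:

        // Load only customers named "Terry" - the filter is applied on the server.
        var filter = new System.Windows.Data.FilterDescriptor();
        filter.PropertyPath = "CustomerName";                            // left operand
        filter.Operator = System.Windows.Data.FilterOperator.IsEqualTo;  // comparison
        filter.Value.Value = "Terry";                                    // right operand
        this.domainDataSource.FilterDescriptors.Add(filter);
        this.domainDataSource.Load();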
Finally, I have added four buttons two of them can be used to add or remove filters and the other two will communicate the changes you have made to the server. Here is the full source code of the DomainDataSourceFilteringControl. The XAML: <UserControl x:Class="PureServerSideFiltering.DomainDataSourceFilteringControl"    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:telerikGrid="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.GridView"     xmlns:telerik="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"     Width="300">     <Border x:Name="LayoutRoot"             BorderThickness="1"             BorderBrush="#FF8A929E"             Padding="5"             Background="#FFDFE2E5">           <Grid>             <Grid.RowDefinitions>                 <RowDefinition Height="Auto"/>                 <RowDefinition Height="150"/>                 <RowDefinition Height="Auto"/>             </Grid.RowDefinitions>               <StackPanel Grid.Row="0"                         Margin="2"                         Orientation="Horizontal"                         HorizontalAlignment="Center">                 <telerik:RadButton Name="addFilterButton"                                   Click="OnAddFilterButtonClick"                                   Content="Add Filter"                                   Margin="2"                                   Width="96"/>                 <telerik:RadButton Name="removeFilterButton"                                   Click="OnRemoveFilterButtonClick"                                   Content="Remove Filter"                                   Margin="2"                                   Width="96"/>             </StackPanel>               <telerikGrid:RadGridView Name="filtersGrid"                                     Grid.Row="1"                                     Margin="2"                                     ItemsSource="{Binding FilterDescriptors}"                                     AddingNewDataItem="OnFilterGridAddingNewDataItem"                                     ColumnWidth="*"                                     ShowGroupPanel="False"                                     AutoGenerateColumns="False"                                     CanUserResizeColumns="False"                                     CanUserReorderColumns="False"                                     CanUserFreezeColumns="False"                                     RowIndicatorVisibility="Collapsed"                                     IsFilteringAllowed="False"                                     CanUserSortColumns="False">                 <telerikGrid:RadGridView.Columns>                     <telerikGrid:GridViewComboBoxColumn DataMemberBinding="{Binding Operator}"                                                         UniqueName="Operator"/>                     <telerikGrid:GridViewDataColumn Header="Value"                                                     DataMemberBinding="{Binding Value.Value}"                                                     UniqueName="Value"/>                 </telerikGrid:RadGridView.Columns>             </telerikGrid:RadGridView>               <StackPanel Grid.Row="2"                         Margin="2"                         Orientation="Horizontal"                         HorizontalAlignment="Center">                 <telerik:RadButton Name="filterButton"                                   Click="OnApplyFiltersButtonClick"                   
                Content="Apply Filters"                                   Margin="2"                                   Width="96"/>                 <telerik:RadButton Name="clearButton"                                   Click="OnClearFiltersButtonClick"                                   Content="Clear Filters"                                   Margin="2"                                   Width="96"/>             </StackPanel>           </Grid>       </Border> </UserControl>   And the code-behind: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using Telerik.Windows.Controls.GridView; using System.Windows.Data; using Telerik.Windows.Controls; using Telerik.Windows.Data;   namespace PureServerSideFiltering {     /// <summary>     /// A custom filtering control capable of filtering purely server-side.     /// </summary>     public partial class DomainDataSourceFilteringControl : UserControl, IFilteringControl     {         // The main player here.         DomainDataSource domainDataSource;           // This is the name of the property that this column displays.         private string dataMemberName;           // This is the type of the property that this column displays.         private Type dataMemberType;           /// <summary>         /// Identifies the <see cref="IsActive"/> dependency property.         /// </summary>         /// <remarks>         /// The state of the filtering funnel (i.e. full or empty) is bound to this property.         /// </remarks>         public static readonly DependencyProperty IsActiveProperty =             DependencyProperty.Register(                 "IsActive",                 typeof(bool),                 typeof(DomainDataSourceFilteringControl),                 new PropertyMetadata(false));           /// <summary>         /// Gets or sets a value indicating whether the filtering is active.         /// </summary>         /// <remarks>         /// Set this to true if you want to lit-up the filtering funnel.         /// </remarks>         public bool IsActive         {             get { return (bool)GetValue(IsActiveProperty); }             set { SetValue(IsActiveProperty, value); }         }           /// <summary>         /// Gets or sets the domain data source.         /// We need this in order to work on its FilterDescriptors collection.         /// </summary>         /// <value>The domain data source.</value>         public DomainDataSource DomainDataSource         {             get { return this.domainDataSource; }             set { this.domainDataSource = value; }         }           public System.Windows.Data.FilterDescriptorCollection FilterDescriptors         {             get { return this.DomainDataSource.FilterDescriptors; }         }           public DomainDataSourceFilteringControl()         {             InitializeComponent();         }           public void Prepare(GridViewBoundColumnBase column)         {             this.LayoutRoot.DataContext = this;               if (this.DomainDataSource == null)             {                 // Sorry, but we need a DomainDataSource. Can't do anything without it.                 return;             }               // This is the name of the property that this column displays.             
            this.dataMemberName = column.GetDataMemberName();

            // This is the type of the property that this column displays.
            // We need this in order to see which FilterOperators to feed to the combo-box column.
            this.dataMemberType = column.DataType;

            // We will use our magic Type extension method to see which operators are applicable for
            // this data type. You can go to the extension method body and see what it does
            // (a sketch is also included at the end of this post).
            ((GridViewComboBoxColumn)this.filtersGrid.Columns["Operator"]).ItemsSource
                = this.dataMemberType.ApplicableFilterOperators();

            // This is very nice as well. We will tell the Value column its data type. In this way
            // RadGridView will pick up the best editor according to the data type. For example,
            // if the data type of the value is DateTime, you will be editing it with a DatePicker.
            // Nice!
            ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataType = this.dataMemberType;

            // Yet another nice feature. We will transfer the original DataFormatString (if any) to
            // the Value column. In this way if you have specified a DataFormatString for the original
            // column, you will see all filter values formatted accordingly.
            ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataFormatString = column.DataFormatString;

            // This is important. Since our little filtersGrid will be bound to the entire collection
            // of this.domainDataSource.FilterDescriptors, we need to set a Telerik filter on the
            // grid so that it will display FilterDescriptors which are relevant to this column ONLY!
            Telerik.Windows.Data.FilterDescriptor columnFilter = new Telerik.Windows.Data.FilterDescriptor("PropertyPath"
                , Telerik.Windows.Data.FilterOperator.IsEqualTo
                , this.dataMemberName);
            this.filtersGrid.FilterDescriptors.Add(columnFilter);

            // We want to listen for this in order to activate and de-activate the UI funnel.
            this.filtersGrid.Items.CollectionChanged += this.OnFilterGridItemsCollectionChanged;
        }

        /// <summary>
        /// Since the DomainDataSource is a little bit picky about adding uninitialized FilterDescriptors
        /// to its collection, we will prepare each new instance with some default values and then
        /// the user can change them later. Go to the event handler to see how we do this.
        /// </summary>
        void OnFilterGridAddingNewDataItem(object sender, GridViewAddingNewEventArgs e)
        {
            // We need to initialize the new instance with some values and let the user go on from here.
            System.Windows.Data.FilterDescriptor newFilter = new System.Windows.Data.FilterDescriptor();

            // This is a must. It should know what member it is filtering on.
            newFilter.PropertyPath = this.dataMemberName;

            // Initialize it with one of the allowed operators. See the
            // TypeExtensions.ApplicableFilterOperators method for more info.
            newFilter.Operator = this.dataMemberType.ApplicableFilterOperators().First();

            if (this.dataMemberType == typeof(DateTime))
            {
                newFilter.Value.Value = DateTime.Now;
            }
            else if (this.dataMemberType == typeof(string))
            {
                newFilter.Value.Value = "<enter text>";
            }
            else if (this.dataMemberType.IsValueType)
            {
                // We need something non-null for all value types.
                newFilter.Value.Value = Activator.CreateInstance(this.dataMemberType);
            }

            // Let the user edit the new filter any way he/she likes.
            e.NewObject = newFilter;
        }

        void OnFilterGridItemsCollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
        {
            // We are active only if we have any filters defined. In this case the filtering funnel will light up.
            this.IsActive = this.filtersGrid.Items.Count > 0;
        }

        private void OnApplyFiltersButtonClick(object sender, RoutedEventArgs e)
        {
            if (this.DomainDataSource.IsLoadingData)
            {
                return;
            }

            // Comment this if you want the popup to stay open after the button is clicked.
            this.ClosePopup();

            // Since this.domainDataSource.AutoLoad is false, this will take into
            // account all filtering changes that the user has made since the last
            // Load() and pull the new data to the client.
            this.DomainDataSource.Load();
        }

        private void OnClearFiltersButtonClick(object sender, RoutedEventArgs e)
        {
            if (this.DomainDataSource.IsLoadingData)
            {
                return;
            }

            // We want to remove ONLY those filters from the DomainDataSource
            // that this control is responsible for.
            this.DomainDataSource.FilterDescriptors
                .Where(fd => fd.PropertyPath == this.dataMemberName) // Only "our" filters.
                .ToList()
                .ForEach(fd => this.DomainDataSource.FilterDescriptors.Remove(fd)); // Bye-bye!

            // Comment this if you want the popup to stay open after the button is clicked.
            this.ClosePopup();

            // After we did our housekeeping, get the new data to the client.
            this.DomainDataSource.Load();
        }

        private void OnAddFilterButtonClick(object sender, RoutedEventArgs e)
        {
            if (this.DomainDataSource.IsLoadingData)
            {
                return;
            }

            // Let the user enter his/her requirements for a new filter.
            this.filtersGrid.BeginInsert();
            this.filtersGrid.UpdateLayout();
        }

        private void OnRemoveFilterButtonClick(object sender, RoutedEventArgs e)
        {
            if (this.DomainDataSource.IsLoadingData)
            {
                return;
            }

            // Find the currently selected filter and destroy it.
            System.Windows.Data.FilterDescriptor filterToRemove = this.filtersGrid.SelectedItem as System.Windows.Data.FilterDescriptor;
            if (filterToRemove != null
                && this.DomainDataSource.FilterDescriptors.Contains(filterToRemove))
            {
                this.DomainDataSource.FilterDescriptors.Remove(filterToRemove);
            }
        }

        private void ClosePopup()
        {
            System.Windows.Controls.Primitives.Popup popup = this.ParentOfType<System.Windows.Controls.Primitives.Popup>();
            if (popup != null)
            {
                popup.IsOpen = false;
            }
        }
    }
}

Finally, we need to tell RadGridView's columns to use this custom control instead of the default one. Here is how to do it:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using System.Windows.Data;
using Telerik.Windows.Data;
using Telerik.Windows.Controls;
using Telerik.Windows.Controls.GridView;

namespace PureServerSideFiltering
{
    public partial class MainPage : UserControl
    {
        public MainPage()
        {
            InitializeComponent();
            this.grid.AutoGeneratingColumn += this.OnGridAutoGeneratingColumn;

            // Uncomment this if you want the DomainDataSource to start pre-filtered.
            // You will notice how our custom filtering controls will correctly read this information,
            // populate their UI with the respective filters and light up the funnel to indicate that
            // filtering is active. Go ahead and try it.
            this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("Title", System.Windows.Data.FilterOperator.Contains, "Assistant"));
            this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsGreaterThan, new DateTime(1998, 12, 31)));
            this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsLessThanOrEqualTo, new DateTime(1999, 12, 31)));

            this.employeesDataSource.Load();
        }

        /// <summary>
        /// First of all, we will need to replace the default filtering control
        /// of each column with our custom filtering control DomainDataSourceFilteringControl.
        /// </summary>
        private void OnGridAutoGeneratingColumn(object sender, GridViewAutoGeneratingColumnEventArgs e)
        {
            GridViewBoundColumnBase dataColumn = e.Column as GridViewBoundColumnBase;
            if (dataColumn != null)
            {
                // We do not like ugly dates.
                if (dataColumn.DataType == typeof(DateTime))
                {
                    dataColumn.DataFormatString = "{0:d}"; // Short date pattern.

                    // Notice how this format will be later transferred to the Value column
                    // of the grid that we have inside the DomainDataSourceFilteringControl.
                }

                // Replace the default filtering control with ours.
                dataColumn.FilteringControl = new DomainDataSourceFilteringControl()
                {
                    // Let the control know about the DDS; after all, it will work directly on it.
                    DomainDataSource = this.employeesDataSource
                };

                // Finally, light up the filtering funnel through the IsActive dependency property
                // in case there are some filters on the DDS that match our column member.
                string dataMemberName = dataColumn.GetDataMemberName();
                dataColumn.FilteringControl.IsActive =
                    this.employeesDataSource.FilterDescriptors
                    .Where(fd => fd.PropertyPath == dataMemberName)
                    .Count() > 0;
            }
        }
    }
}

The best part is that we are not only writing filters to the DomainDataSource; we can also read and load them. If the DomainDataSource has some pre-existing filters (like the ones I have created in the code above), our control will read them and populate its UI accordingly. Even the filtering funnel will light up! Remember, the funnel is controlled by the IsActive property of our control. While this is just a basic implementation, the source code is absolutely yours and you can take it from here and extend it to match your specific business requirements. Below the main grid there is another debug grid. With its help you can monitor which filter descriptors are added to and removed from the domain data source. Download Source Code. (You will have to have the AdventureWorks sample database installed on the default SQLExpress instance in order to run it.) Enjoy!
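A quick footnote: the Prepare method above calls a TypeExtensions.ApplicableFilterOperators extension method that ships with the source code download, so the listing below is only a sketch of the idea. The exact operator lists, and which members System.Windows.Data.FilterOperator exposes in your RIA Services version, are assumptions you should adapt:

using System;
using System.Collections.Generic;
using System.Windows.Data;

public static class TypeExtensions
{
    // Sketch: map the column's CLR type to the filter operators that make sense for it.
    public static IEnumerable<FilterOperator> ApplicableFilterOperators(this Type type)
    {
        if (type == typeof(string))
        {
            // Strings get the text-matching operators.
            return new[]
            {
                FilterOperator.IsEqualTo,
                FilterOperator.StartsWith,
                FilterOperator.EndsWith,
                FilterOperator.Contains
            };
        }

        if (type == typeof(DateTime) || type.IsValueType)
        {
            // Numbers and dates get the relational operators.
            return new[]
            {
                FilterOperator.IsEqualTo,
                FilterOperator.IsGreaterThan,
                FilterOperator.IsGreaterThanOrEqualTo,
                FilterOperator.IsLessThan,
                FilterOperator.IsLessThanOrEqualTo
            };
        }

        // Everything else: plain equality only.
        return new[] { FilterOperator.IsEqualTo };
    }
}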

    Read the article

  • Biztalk 2009 install help

    - by _pemex99_
Hi, I am trying to install BizTalk 2009, with SQL 2008R2CTPNov, on Win Server 2008. I'm blocked at the configuration step "groups":
    [19:22:18 Info Configuration Framework]Configuring feature: WMI
    [19:22:18 Info BtsCfg] Entering function: CBtsCfg::ConfigureFeature
    [19:22:18 Info BtsCfg] Configuring feature: WMI
    [19:22:18 Info BtsCfg] Entering function: CBtsCfg::IsSelectedAnswer
    [19:22:18 Info BtsCfg] Leaving function: CBtsCfg::IsSelectedAnswer
    [19:22:18 Info BtsCfg] Entering function: CWMI::Connect
    [19:22:18 Info BtsCfg] WMI is already connected
    [19:22:18 Info BtsCfg] Leaving function: CWMI::Connect
    [19:22:18 Info ConfigHelper] NT group BizTalk Server Operators was not created because it already exists
    [19:22:18 Info ConfigHelper NetAPI Info: ] Le groupe local spécifié existe déjà. [The specified local group already exists.]
    [19:22:18 Info ConfigHelper] NT group BizTalk Server Administrators was not created because it already exists
    [19:22:18 Info ConfigHelper NetAPI Info: ] Le groupe local spécifié existe déjà. [The specified local group already exists.]
    [19:22:18 Info BtsCfg] Entering function: CWMI::CreateGroup
    2010-01-14 19:22:18:0527 [INFO] WMI CWMIInstProv::PutInstance() try to acquire lock
    2010-01-14 19:22:18:0539 [INFO] WMI CWMIInstProv::PutInstance() lock acquired successfully
    2010-01-14 19:22:18:0546 [INFO] WMI CWMIInstProv::VerifyMgmtDbCompatibility(CInstance) started
    2010-01-14 19:22:18:0553 [INFO] WMI CWMIInstProv::VerifyMgmtDbCompatibility(CInstance) finished successfully
    2010-01-14 19:22:18:0564 [INFO] WMI CWMIInstProv::PutInstance(MSBTS_GroupSetting.MgmtDbName="BizTalkMgmtDb",MgmtDbServerName="ECTXEVLBZTK") started
    2010-01-14 19:22:18:0572 [INFO] WMI CAdapter::ConvertWMI2Admin() started
    2010-01-14 19:22:18:0581 [INFO] WMI CDataContainer::SetWCHAR() - Possible problem: item value is overwritten
    2010-01-14 19:22:18:0591 [INFO] WMI CAdapter::ConvertWMI2Admin() finished with HR=0
    2010-01-14 19:22:18:0611 [INFO] WMI QueryStringValue query regkey 'MgmtDBServer'
    2010-01-14 19:22:18:0620 [INFO] WMI CAdmCoreGroupInst::TryCreateNewGroup() started
    2010-01-14 19:22:18:0632 [INFO] WMI Creating Mgmt database...
    2010-01-14 19:22:18:0641 [INFO] WMI Calling CDataSource.Open() against ECTXEVLBZTK\master
    2010-01-14 19:22:18:0792 [INFO] WMI CDataSource.Open() returned
    2010-01-14 19:22:18:0810 [WARN] AdminLib GetBTSMessage: hrErr=80040e1d; Msg=Error "0x80040E1D" occurred.;
    2010-01-14 19:22:18:0824 [WARN] AdminLib GetBTSMessage: hrErr=c0c02524; Msg=Failed to create Management database "BizTalkMgmtDb" on server "ECTXEVLBZTK". Error "0x80040E1D" occurred.;
    2010-01-14 19:22:18:0835 [ERR] WMI Failed in pAdmInst->Create() in CWMIInstProv::PutInstance(). HR=c0c02524
    2010-01-14 19:22:18:0846 [ERR] WMI WMI error description is generated: Failed to create Management database "BizTalkMgmtDb" on server "ECTXEVLBZTK". Error "0x80040E1D" occurred.
    2010-01-14 19:22:18:0860 [INFO] WMI CWMIInstProv::PutInstance() finished. HR=c0c02524
    [19:22:18 Error BtsCfg] f:\bt\890\private\source\setup\prod\btssetup\btscfg\btswmi.cpp(358): FAILED hr = c0c02524
    [19:22:18 Error BtsCfg] Failed to create Management database "BizTalkMgmtDb" on server "ECTXEVLBZTK". Error "0x80040E1D" occurred.
It seems that the install can't create the Management database, but the SSO database is created OK... Does anyone have a clue?

    Read the article

  • Where can these be posted besides the Python Cookbook?

    - by Noctis Skytower
    Whitespace Assembler #! /usr/bin/env python """Assembler.py Compiles a program from "Assembly" folder into "Program" folder. Can be executed directly by double-click or on the command line. Give name of *.WSA file without extension (example: stack_calc).""" ################################################################################ __author__ = 'Stephen "Zero" Chappell <[email protected]>' __date__ = '14 March 2010' __version__ = '$Revision: 3 $' ################################################################################ import string from Interpreter import INS, MNEMONIC ################################################################################ def parse(code): program = [] process_virtual(program, code) process_control(program) return tuple(program) def process_virtual(program, code): for line, text in enumerate(code.split('\n')): if not text or text[0] == '#': continue if text.startswith('part '): parse_part(program, line, text[5:]) elif text.startswith(' '): parse_code(program, line, text[5:]) else: syntax_error(line) def syntax_error(line): raise SyntaxError('Line ' + str(line + 1)) ################################################################################ def process_control(program): parts = get_parts(program) names = dict(pair for pair in zip(parts, generate_index())) correct_control(program, names) def get_parts(program): parts = [] for ins in program: if isinstance(ins, tuple): ins, arg = ins if ins == INS.PART: if arg in parts: raise NameError('Part definition was found twice: ' + arg) parts.append(arg) return parts def generate_index(): index = 1 while True: yield index index *= -1 if index > 0: index += 1 def correct_control(program, names): for index, ins in enumerate(program): if isinstance(ins, tuple): ins, arg = ins if ins in HAS_LABEL: if arg not in names: raise NameError('Part definition was never found: ' + arg) program[index] = (ins, names[arg]) ################################################################################ def parse_part(program, line, text): if not valid_label(text): syntax_error(line) program.append((INS.PART, text)) def valid_label(text): if not between_quotes(text): return False label = text[1:-1] if not valid_name(label): return False return True def between_quotes(text): if len(text) < 3: return False if text.count('"') != 2: return False if text[0] != '"' or text[-1] != '"': return False return True def valid_name(label): valid_characters = string.ascii_letters + string.digits + '_' valid_set = frozenset(valid_characters) label_set = frozenset(label) if len(label_set - valid_set) != 0: return False return True ################################################################################ from Interpreter import HAS_LABEL, Program NO_ARGS = Program.NO_ARGS HAS_ARG = Program.HAS_ARG TWO_WAY = tuple(set(NO_ARGS) & set(HAS_ARG)) ################################################################################ def parse_code(program, line, text): for ins, word in enumerate(MNEMONIC): if text.startswith(word): check_code(program, line, text[len(word):], ins) break else: syntax_error(line) def check_code(program, line, text, ins): if ins in TWO_WAY: if text: number = parse_number(line, text) program.append((ins, number)) else: program.append(ins) elif ins in HAS_LABEL: text = parse_label(line, text) program.append((ins, text)) elif ins in HAS_ARG: number = parse_number(line, text) program.append((ins, number)) elif ins in NO_ARGS: if text: syntax_error(line) program.append(ins) else: syntax_error(line) def parse_label(line, 
text): if not text or text[0] != ' ': syntax_error(line) text = text[1:] if not valid_label(text): syntax_error(line) return text ################################################################################ def parse_number(line, text): if not valid_number(text): syntax_error(line) return int(text) def valid_number(text): if len(text) < 2: return False if text[0] != ' ': return False text = text[1:] if '+' in text and '-' in text: return False if '+' in text: if text.count('+') != 1: return False if text[0] != '+': return False text = text[1:] if not text: return False if '-' in text: if text.count('-') != 1: return False if text[0] != '-': return False text = text[1:] if not text: return False valid_set = frozenset(string.digits) value_set = frozenset(text) if len(value_set - valid_set) != 0: return False return True ################################################################################ ################################################################################ from Interpreter import partition_number VMC_2_TRI = { (INS.PUSH, True): (0, 0), (INS.COPY, False): (0, 2, 0), (INS.COPY, True): (0, 1, 0), (INS.SWAP, False): (0, 2, 1), (INS.AWAY, False): (0, 2, 2), (INS.AWAY, True): (0, 1, 2), (INS.ADD, False): (1, 0, 0, 0), (INS.SUB, False): (1, 0, 0, 1), (INS.MUL, False): (1, 0, 0, 2), (INS.DIV, False): (1, 0, 1, 0), (INS.MOD, False): (1, 0, 1, 1), (INS.SET, False): (1, 1, 0), (INS.GET, False): (1, 1, 1), (INS.PART, True): (2, 0, 0), (INS.CALL, True): (2, 0, 1), (INS.GOTO, True): (2, 0, 2), (INS.ZERO, True): (2, 1, 0), (INS.LESS, True): (2, 1, 1), (INS.BACK, False): (2, 1, 2), (INS.EXIT, False): (2, 2, 2), (INS.OCHR, False): (1, 2, 0, 0), (INS.OINT, False): (1, 2, 0, 1), (INS.ICHR, False): (1, 2, 1, 0), (INS.IINT, False): (1, 2, 1, 1) } ################################################################################ def to_trinary(program): trinary_code = [] for ins in program: if isinstance(ins, tuple): ins, arg = ins trinary_code.extend(VMC_2_TRI[(ins, True)]) trinary_code.extend(from_number(arg)) else: trinary_code.extend(VMC_2_TRI[(ins, False)]) return tuple(trinary_code) def from_number(arg): code = [int(arg < 0)] if arg: for bit in reversed(list(partition_number(abs(arg), 2))): code.append(bit) return code + [2] return code + [0, 2] to_ws = lambda trinary: ''.join(' \t\n'[index] for index in trinary) def compile_wsa(source): program = parse(source) trinary = to_trinary(program) ws_code = to_ws(trinary) return ws_code ################################################################################ ################################################################################ import os import sys import time import traceback def main(): name, source, command_line, error = get_source() if not error: start = time.clock() try: ws_code = compile_wsa(source) except: print('ERROR: File could not be compiled.\n') traceback.print_exc() error = True else: path = os.path.join('Programs', name + '.ws') try: open(path, 'w').write(ws_code) except IOError as err: print(err) error = True else: div, mod = divmod((time.clock() - start) * 1000, 1) args = int(div), '{:.3}'.format(mod)[1:] print('DONE: Comipled in {}{} ms'.format(*args)) handle_close(error, command_line) def get_source(): if len(sys.argv) > 1: command_line = True name = sys.argv[1] else: command_line = False try: name = input('Source File: ') except: return None, None, False, True print() path = os.path.join('Assembly', name + '.wsa') try: return name, open(path).read(), command_line, False except IOError as err: 
print(err) return None, None, command_line, True def handle_close(error, command_line): if error: usage = 'Usage: {} <assembly>'.format(os.path.basename(sys.argv[0])) print('\n{}\n{}'.format('-' * len(usage), usage)) if not command_line: time.sleep(10) ################################################################################ if __name__ == '__main__': main() Whitespace Helpers #! /usr/bin/env python """Helpers.py Includes a function to encode Python strings into my WSA format. Has a "PRINT_LINE" function that can be copied to a WSA program. Contains a "PRINT" function and documentation as an explanation.""" ################################################################################ __author__ = 'Stephen "Zero" Chappell <[email protected]>' __date__ = '14 March 2010' __version__ = '$Revision: 1 $' ################################################################################ def encode_string(string, addr): print(' push', addr) print(' push', len(string)) print(' set') addr += 1 for offset, character in enumerate(string): print(' push', addr + offset) print(' push', ord(character)) print(' set') ################################################################################ # Prints a string with newline. # push addr # call "PRINT_LINE" """ part "PRINT_LINE" call "PRINT" push 10 ochr back """ ################################################################################ # def print(array): # if len(array) <= 0: # return # offset = 1 # while len(array) - offset >= 0: # ptr = array.ptr + offset # putch(array[ptr]) # offset += 1 """ part "PRINT" # Line 1-2 copy get less "__PRINT_RET_1" copy get zero "__PRINT_RET_1" # Line 3 push 1 # Line 4 part "__PRINT_LOOP" copy copy 2 get swap sub less "__PRINT_RET_2" # Line 5 copy 1 copy 1 add # Line 6 get ochr # Line 7 push 1 add goto "__PRINT_LOOP" part "__PRINT_RET_2" away part "__PRINT_RET_1" away back """ Whitespace Interpreter #! /usr/bin/env python """Interpreter.py Runs programs in "Programs" and creates *.WSO files when needed. Can be executed directly by double-click or on the command line. 
If run on command line, add "ASM" flag to dump program assembly.""" ################################################################################ __author__ = 'Stephen "Zero" Chappell <[email protected]>' __date__ = '14 March 2010' __version__ = '$Revision: 4 $' ################################################################################ def test_file(path): disassemble(parse(trinary(load(path))), True) ################################################################################ load = lambda ws: ''.join(c for r in open(ws) for c in r if c in ' \t\n') trinary = lambda ws: tuple(' \t\n'.index(c) for c in ws) ################################################################################ def enum(names): names = names.replace(',', ' ').split() space = dict((reversed(pair) for pair in enumerate(names)), __slots__=()) return type('enum', (object,), space)() INS = enum('''\ PUSH, COPY, SWAP, AWAY, \ ADD, SUB, MUL, DIV, MOD, \ SET, GET, \ PART, CALL, GOTO, ZERO, LESS, BACK, EXIT, \ OCHR, OINT, ICHR, IINT''') ################################################################################ def parse(code): ins = iter(code).__next__ program = [] while True: try: imp = ins() except StopIteration: return tuple(program) if imp == 0: # [Space] parse_stack(ins, program) elif imp == 1: # [Tab] imp = ins() if imp == 0: # [Tab][Space] parse_math(ins, program) elif imp == 1: # [Tab][Tab] parse_heap(ins, program) else: # [Tab][Line] parse_io(ins, program) else: # [Line] parse_flow(ins, program) def parse_number(ins): sign = ins() if sign == 2: raise StopIteration() buffer = '' code = ins() if code == 2: raise StopIteration() while code != 2: buffer += str(code) code = ins() if sign == 1: return int(buffer, 2) * -1 return int(buffer, 2) ################################################################################ def parse_stack(ins, program): code = ins() if code == 0: # [Space] number = parse_number(ins) program.append((INS.PUSH, number)) elif code == 1: # [Tab] code = ins() number = parse_number(ins) if code == 0: # [Tab][Space] program.append((INS.COPY, number)) elif code == 1: # [Tab][Tab] raise StopIteration() else: # [Tab][Line] program.append((INS.AWAY, number)) else: # [Line] code = ins() if code == 0: # [Line][Space] program.append(INS.COPY) elif code == 1: # [Line][Tab] program.append(INS.SWAP) else: # [Line][Line] program.append(INS.AWAY) def parse_math(ins, program): code = ins() if code == 0: # [Space] code = ins() if code == 0: # [Space][Space] program.append(INS.ADD) elif code == 1: # [Space][Tab] program.append(INS.SUB) else: # [Space][Line] program.append(INS.MUL) elif code == 1: # [Tab] code = ins() if code == 0: # [Tab][Space] program.append(INS.DIV) elif code == 1: # [Tab][Tab] program.append(INS.MOD) else: # [Tab][Line] raise StopIteration() else: # [Line] raise StopIteration() def parse_heap(ins, program): code = ins() if code == 0: # [Space] program.append(INS.SET) elif code == 1: # [Tab] program.append(INS.GET) else: # [Line] raise StopIteration() def parse_io(ins, program): code = ins() if code == 0: # [Space] code = ins() if code == 0: # [Space][Space] program.append(INS.OCHR) elif code == 1: # [Space][Tab] program.append(INS.OINT) else: # [Space][Line] raise StopIteration() elif code == 1: # [Tab] code = ins() if code == 0: # [Tab][Space] program.append(INS.ICHR) elif code == 1: # [Tab][Tab] program.append(INS.IINT) else: # [Tab][Line] raise StopIteration() else: # [Line] raise StopIteration() def parse_flow(ins, program): code = ins() if code == 0: # [Space] code = 
ins() label = parse_number(ins) if code == 0: # [Space][Space] program.append((INS.PART, label)) elif code == 1: # [Space][Tab] program.append((INS.CALL, label)) else: # [Space][Line] program.append((INS.GOTO, label)) elif code == 1: # [Tab] code = ins() if code == 0: # [Tab][Space] label = parse_number(ins) program.append((INS.ZERO, label)) elif code == 1: # [Tab][Tab] label = parse_number(ins) program.append((INS.LESS, label)) else: # [Tab][Line] program.append(INS.BACK) else: # [Line] code = ins() if code == 2: # [Line][Line] program.append(INS.EXIT) else: # [Line][Space] or [Line][Tab] raise StopIteration() ################################################################################ MNEMONIC = '\ push copy swap away add sub mul div mod set get part \ call goto zero less back exit ochr oint ichr iint'.split() HAS_ARG = [getattr(INS, name) for name in 'PUSH COPY AWAY PART CALL GOTO ZERO LESS'.split()] HAS_LABEL = [getattr(INS, name) for name in 'PART CALL GOTO ZERO LESS'.split()] def disassemble(program, names=False): if names: names = create_names(program) for ins in program: if isinstance(ins, tuple): ins, arg = ins assert ins in HAS_ARG has_arg = True else: assert INS.PUSH <= ins <= INS.IINT has_arg = False if ins == INS.PART: if names: print(MNEMONIC[ins], '"' + names[arg] + '"') else: print(MNEMONIC[ins], arg) elif has_arg and ins in HAS_ARG: if ins in HAS_LABEL and names: assert arg in names print(' ' + MNEMONIC[ins], '"' + names[arg] + '"') else: print(' ' + MNEMONIC[ins], arg) else: print(' ' + MNEMONIC[ins]) ################################################################################ def create_names(program): names = {} number = 1 for ins in program: if isinstance(ins, tuple) and ins[0] == INS.PART: label = ins[1] assert label not in names names[label] = number_to_name(number) number += 1 return names def number_to_name(number): name = '' for offset in reversed(list(partition_number(number, 27))): if offset: name += chr(ord('A') + offset - 1) else: name += '_' return name def partition_number(number, base): div, mod = divmod(number, base) yield mod while div: div, mod = divmod(div, base) yield mod ################################################################################ CODE = (' \t\n', ' \n ', ' \t \t\n', ' \n\t', ' \n\n', ' \t\n \t\n', '\t ', '\t \t', '\t \n', '\t \t ', '\t \t\t', '\t\t ', '\t\t\t', '\n \t\n', '\n \t \t\n', '\n \n \t\n', '\n\t \t\n', '\n\t\t \t\n', '\n\t\n', '\n\n\n', '\t\n ', '\t\n \t', '\t\n\t ', '\t\n\t\t') EXAMPLE = ''.join(CODE) ################################################################################ NOTES = '''\ STACK ===== push number copy copy number swap away away number MATH ==== add sub mul div mod HEAP ==== set get FLOW ==== part label call label goto label zero label less label back exit I/O === ochr oint ichr iint''' ################################################################################ ################################################################################ class Stack: def __init__(self): self.__data = [] # Stack Operators def push(self, number): self.__data.append(number) def copy(self, number=None): if number is None: self.__data.append(self.__data[-1]) else: size = len(self.__data) index = size - number - 1 assert 0 <= index < size self.__data.append(self.__data[index]) def swap(self): self.__data[-2], self.__data[-1] = self.__data[-1], self.__data[-2] def away(self, number=None): if number is None: self.__data.pop() else: size = len(self.__data) index = size - number - 1 assert 0 <= index < size del 
self.__data[index:-1] # Math Operators def add(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix + suffix) def sub(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix - suffix) def mul(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix * suffix) def div(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix // suffix) def mod(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix % suffix) # Program Operator def pop(self): return self.__data.pop() ################################################################################ class Heap: def __init__(self): self.__data = {} def set_(self, addr, item): if item: self.__data[addr] = item elif addr in self.__data: del self.__data[addr] def get_(self, addr): return self.__data.get(addr, 0) ################################################################################ import os import zlib import msvcrt import pickle import string class CleanExit(Exception): pass NOP = lambda arg: None DEBUG_WHITESPACE = False ################################################################################ class Program: NO_ARGS = INS.COPY, INS.SWAP, INS.AWAY, INS.ADD, \ INS.SUB, INS.MUL, INS.DIV, INS.MOD, \ INS.SET, INS.GET, INS.BACK, INS.EXIT, \ INS.OCHR, INS.OINT, INS.ICHR, INS.IINT HAS_ARG = INS.PUSH, INS.COPY, INS.AWAY, INS.PART, \ INS.CALL, INS.GOTO, INS.ZERO, INS.LESS def __init__(self, code): self.__data = code self.__validate() self.__build_jump() self.__check_jump() self.__setup_exec() def __setup_exec(self): self.__iptr = 0 self.__stck = stack = Stack() self.__heap = Heap() self.__cast = [] self.__meth = (stack.push, stack.copy, stack.swap, stack.away, stack.add, stack.sub, stack.mul, stack.div, stack.mod, self.__set, self.__get, NOP, self.__call, self.__goto, self.__zero, self.__less, self.__back, self.__exit, self.__ochr, self.__oint, self.__ichr, self.__iint) def step(self): ins = self.__data[self.__iptr] self.__iptr += 1 if isinstance(ins, tuple): self.__meth[ins[0]](ins[1]) else: self.__meth[ins]() def run(self): while True: ins = self.__data[self.__iptr] self.__iptr += 1 if isinstance(ins, tuple): self.__meth[ins[0]](ins[1]) else: self.__meth[ins]() def __oint(self): for digit in str(self.__stck.pop()): msvcrt.putwch(digit) def __ichr(self): addr = self.__stck.pop() # Input Routine while msvcrt.kbhit(): msvcrt.getwch() while True: char = msvcrt.getwch() if char in '\x00\xE0': msvcrt.getwch() elif char in string.printable: char = char.replace('\r', '\n') msvcrt.putwch(char) break item = ord(char) # Storing Number self.__heap.set_(addr, item) def __iint(self): addr = self.__stck.pop() # Input Routine while msvcrt.kbhit(): msvcrt.getwch() buff = '' char = msvcrt.getwch() while char != '\r' or not buff: if char in '\x00\xE0': msvcrt.getwch() elif char in '+-' and not buff: msvcrt.putwch(char) buff += char elif '0' <= char <= '9': msvcrt.putwch(char) buff += char elif char == '\b': if buff: buff = buff[:-1] msvcrt.putwch(char) msvcrt.putwch(' ') msvcrt.putwch(char) char = msvcrt.getwch() msvcrt.putwch(char) msvcrt.putwch('\n') item = int(buff) # Storing Number self.__heap.set_(addr, item) def __goto(self, label): self.__iptr = self.__jump[label] def __zero(self, label): if self.__stck.pop() == 0: self.__iptr = self.__jump[label] def __less(self, label): if self.__stck.pop() < 0: self.__iptr = self.__jump[label] def __exit(self): self.__setup_exec() raise CleanExit() def 
__set(self): item = self.__stck.pop() addr = self.__stck.po

    Read the article

  • Using LogParser - part 2

    - by fatherjack
PersonAddress.csv  SalesOrderDetail.tsv

In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate.

If we take the query from part 1 and add a value for the output parameter of -o:datagrid, so that the query becomes

    logparser "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid

and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *) it has added two columns to the recordset - filename and rownumber. This behaviour can be very useful, as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you are finished reviewing the data.

You may have noticed that the files I am working with are different file types - one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to the other, LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv':

    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv

Those familiar with SQL will not have to make a very big leap of faith to make adjustments to the above query to filter records in or out of the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25:

    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv

Or we could find all those records where the Order Quantity is equal to 25 and output them to an xml file:

    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml

All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND (a combined example follows at the end of this article).

Input and Output file formats

LogParser has a pretty impressive list of file formats that it can parse and a good selection of output formats that will let you generate output in a format that is usable for whatever process or application you may be using.

From any of these input formats:

- IISW3C: parses IIS log files in the W3C Extended Log File Format.
- IIS: parses IIS log files in the Microsoft IIS Log File Format.
- BIN: parses IIS log files in the Centralized Binary Log File Format.
- IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format.
- HTTPERR: parses HTTP error log files generated by Http.sys.
- URLSCAN: parses log files generated by the URLScan IIS filter.
- CSV: parses comma-separated values text files.
- TSV: parses tab-separated and space-separated values text files.
- XML: parses XML text files.
- W3C: parses text files in the W3C Extended Log File Format.
- NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats.
- TEXTLINE: returns lines from generic text files.
- TEXTWORD: returns words from generic text files.
- EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files).
- FS: returns information on files and directories.
- REG: returns information on registry values.
- ADS: returns information on Active Directory objects.
- NETMON: parses network capture files created by NetMon.
- ETW: parses Enterprise Tracing for Windows trace log files and live sessions.
- COM: provides an interface to Custom Input Format COM Plugins.

To any of these output formats:

- NAT: formats output records as readable tabulated columns.
- CSV: formats output records as comma-separated values text.
- TSV: formats output records as tab-separated or space-separated values text.
- XML: formats output records as XML documents.
- W3C: formats output records in the W3C Extended Log File Format.
- TPL: formats output records following user-defined templates.
- IIS: formats output records in the Microsoft IIS Log File Format.
- SQL: uploads output records to a table in a SQL database.
- SYSLOG: sends output records to a Syslog server.
- DATAGRID: displays output records in a graphical user interface.
- CHART: creates image files containing charts.

So, you can query data from any of the input types and really easily get it into a format where it is ready for analysis by other tools. To a DBA or network administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!
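As promised, here is one combined sketch before part 3 arrives: a BETWEEN filter, first viewed in the datagrid and then written out to XML. The file and column names are the ones from the samples above; adjust the paths to wherever your copies live.

    logparser "SELECT SalesOrderID, OrderQty, LineTotal FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE OrderQty BETWEEN 10 AND 20" -i:tsv -o:datagrid

    logparser "SELECT SalesOrderID, OrderQty, LineTotal into C:\LP\Sales_SalesOrderDetail10to20.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE OrderQty BETWEEN 10 AND 20" -i:tsv -o:xml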

    Read the article

  • Sun Fire X4800 M2 Delivers World Record TPC-C for x86 Systems

    - by Brian
Oracle's Sun Fire X4800 M2 server equipped with eight 2.4 GHz Intel Xeon Processor E7-8870 chips obtained a result of 5,055,888 tpmC on the TPC-C benchmark. This result is a world record for x86 servers. Oracle demonstrated this world record database performance running Oracle Database 11g Release 2 Enterprise Edition with Partitioning.

- The Sun Fire X4800 M2 server delivered a new x86 TPC-C world record of 5,055,888 tpmC with a price/performance of $0.89/tpmC using Oracle Database 11g Release 2. This configuration is available 06/26/12.
- The Sun Fire X4800 M2 server delivers 3.0x better performance than the next 8-processor result, an IBM System p 570 equipped with POWER6 processors.
- The Sun Fire X4800 M2 server has 3.1x better price/performance than the 8-processor 4.7 GHz POWER6 IBM System p 570.
- The Sun Fire X4800 M2 server has 1.6x better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors.
- This is the first TPC-C result on any system using eight Intel Xeon Processor E7-8800 Series chips.
- The Sun Fire X4800 M2 server is the first x86 system to exceed 5 million tpmC.
- The Oracle solution utilized the Oracle Linux operating system and Oracle Database 11g Enterprise Edition Release 2 with Partitioning to produce the x86 world record TPC-C benchmark performance.

Performance Landscape

Select TPC-C results (sorted by tpmC, bigger is better):

    System             p/c/t     tpmC       Price/tpmC  Avail       Database       Memory Size
    Sun Fire X4800 M2  8/80/160  5,055,888  0.89 USD    6/26/2012   Oracle 11g R2  4 TB
    IBM x3850 X5       4/40/80   3,014,684  0.59 USD    7/11/2011   DB2 ESE 9.7    3 TB
    IBM x3850 X5       4/32/64   2,308,099  0.60 USD    5/20/2011   DB2 ESE 9.7    1.5 TB
    IBM System p 570   8/16/32   1,616,162  3.54 USD    11/21/2007  DB2 9.0        2 TB

    p/c/t - processors, cores, threads; Avail - availability date

Oracle and IBM TPC-C response times:

    System                  tpmC       New Order 90th% (sec)  New Order Average (sec)
    Sun Fire X4800 M2       5,055,888  0.210                  0.166
    IBM x3850 X5            3,014,684  0.500                  0.272
    Ratios - Oracle Better  1.6x       1.4x                   1.3x

Oracle uses average new order response time for comparison between Oracle and IBM. Graphs of Oracle's and IBM's response times for New-Order can be found in the full disclosure reports on TPC's website, on the TPC-C Official Result Page.

Configuration Summary and Results

Hardware Configuration:

    Server: Sun Fire X4800 M2
        8 x 2.4 GHz Intel Xeon Processor E7-8870
        4 TB memory
        8 x 300 GB 10K RPM SAS internal disks
        8 x dual-port 8 Gb/s FC HBA

    Data Storage:
        10 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with
            1 x 3.06 GHz Intel Xeon X5675 processor
            8 GB memory
            10 x 2 TB 7.2K RPM 3.5" SAS disks
        2 x Sun Storage F5100 Flash Array storage (1.92 TB each)
        1 x Brocade 5300 switch

    Redo Storage:
        2 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with
            1 x 3.06 GHz Intel Xeon X5675 processor
            8 GB memory
            11 x 2 TB 7.2K RPM 3.5" SAS disks

    Clients:
        8 x Sun Fire X4170 M2 servers, each with
            2 x 3.06 GHz Intel Xeon X5675 processors
            48 GB memory
            2 x 300 GB 10K RPM SAS disks

Software Configuration:

    Oracle Linux (Sun Fire X4800 M2)
    Oracle Solaris 11 Express (COMSTAR for Sun Fire X4270 M2)
    Oracle Solaris 10 9/10 (Sun Fire X4170 M2)
    Oracle Database 11g Release 2 Enterprise Edition with Partitioning
    Oracle iPlanet Web Server 7.0 U5
    Tuxedo CFS-R Tier 1

Results:

    System: Sun Fire X4800 M2
    tpmC: 5,055,888
    Price/tpmC: 0.89 USD
    Available: 6/26/2012
    Database: Oracle Database 11g
    Cluster: no
    New Order Average Response: 0.166 seconds

Benchmark Description

TPC-C is an OLTP system benchmark.
It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.

Key Points and Best Practices

Oracle Database 11g Release 2 Enterprise Edition with Partitioning scales easily to this high level of performance.

COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI target platform. COMSTAR uses a modular approach to break the huge task of handling all the different pieces in a SCSI target subsystem into independent functional modules which are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at the SCSI level (disk, tape, medium changer, etc.) are not required to know about the underlying transport. And the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation, providing execution context and cleanup of SCSI commands and associated resources, and simplifies the task of writing the SCSI or transport modules.

Oracle iPlanet Web Server middleware is used for the client tier of the benchmark. Each web server instance supports more than a quarter-million users while satisfying the response time requirement of the TPC-C benchmark.

See Also

- Oracle Press Release -- Sun Fire X4800 M2 TPC-C Executive Summary (tpc.org)
- Complete Sun Fire X4800 M2 TPC-C Full Disclosure Report (tpc.org)
- Transaction Processing Performance Council (TPC) Home Page
- Ideas International Benchmark Page
- Sun Fire X4800 M2 Server (oracle.com OTN)
- Oracle Linux (oracle.com OTN)
- Oracle Solaris (oracle.com OTN)
- Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN)
- Sun Storage F5100 Flash Array (oracle.com OTN)

Disclosure Statement

TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). Sun Fire X4800 M2 (8/80/160) with Oracle Database 11g Release 2 Enterprise Edition with Partitioning, 5,055,888 tpmC, $0.89 USD/tpmC, available 6/26/2012. IBM x3850 X5 (4/40/80) with DB2 ESE 9.7, 3,014,684 tpmC, $0.59 USD/tpmC, available 7/11/2011. IBM x3850 X5 (4/32/64) with DB2 ESE 9.7, 2,308,099 tpmC, $0.60 USD/tpmC, available 5/20/2011. IBM System p 570 (8/16/32) with DB2 9.0, 1,616,162 tpmC, $3.54 USD/tpmC, available 11/21/2007. Source: http://www.tpc.org/tpcc, results as of 7/15/2011.

    Read the article

  • Integrating Code Metrics in TFS 2010 Build

    - by Jakob Ehn
The build process template and custom activity described in this post are available here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/CodeMetricsSample.zip

Running code metrics has been available since VS 2008, but only from inside the IDE. Yesterday Microsoft finally released the Visual Studio Code Metrics Power Tool 10.0, a command line tool that lets you run code metrics on your applications. This means that it is now possible to perform code metrics analysis on the build server as part of your nightly/QA builds (for example). In this post I will show how you can run the metrics command line tool, and also a custom activity that reads the output, appends the results to the build log, and fails the build if the metric values exceed certain (configurable) threshold values.

The code metrics tool analyzes all the methods in the assemblies, measuring cyclomatic complexity, class coupling, depth of inheritance and lines of code. Then it calculates a Maintainability Index from these values that is a measure of how maintainable the method is, between 0 (worst) and 100 (best). For information on how this value is calculated, see http://blogs.msdn.com/b/codeanalysis/archive/2007/11/20/maintainability-index-range-and-meaning.aspx. After this it aggregates the information and presents it at the class, namespace and module level as well.

Running Metrics.exe in a build definition

Running the actual tool is easy: just use an InvokeProcess activity last in the Compile the Project sequence, reference the metrics.exe file and pass the correct arguments, and you will end up with a result XML file in the drop directory. Here is how it is done in the attached build process template: In the above sequence I first assign the path to the code metrics result file ([BinariesDirectory]\result.xml) to a variable called MetricsResultFile, which is then sent to the InvokeProcess activity in the Arguments property. Here are the arguments for the InvokeProcess activity: Note that we tell metrics.exe to analyze all assemblies located in the Binaries folder. You might want to do some more intelligent filtering here; you probably don't want to analyze all 3rd party assemblies, for example. Note also the path to metrics.exe; this is the default location when you install the Code Metrics power tool. You must of course install the power tool on all build servers. Using the standard output logging (in the Handle Standard Output/Handle Error Output sections), we get the following output when running the build:

Integrating Code Metrics into the build

Having the results available next to the build result is nice, but we want to have the results integrated in the build result itself, and also to affect the outcome of the build. The point of having QA builds that measure, for example, code metrics is to make it very clear how the code being built measures up to the standards of the project/company. Just having an XML file available in the drop location will not cause the developers to improve their code, but a (partially) failing build will!

To do this, we need to write a custom activity that parses the metrics result file, logs it to the build log and fails the build if the values from the metrics are below/above some predefined threshold values. The custom activity performs the following steps:

1. Parses the XML. I'm using Linq 2 XSD for this; since the XML schema for the result file is available, it is very easy to generate code that lets you query the structure using standard Linq operators.
2. Runs through the metric result hierarchy, logs the metrics for each level, and verifies the maintainability index and the cyclomatic complexity against the threshold values. The threshold values are defined in the build process template and are sent in as arguments to the custom activity.
3. If the threshold values are exceeded, the activity either fails or partially fails the current build.

For more information about the structure of the code metrics result file, read Cameron Skinner's post about it. It is very simple and easy to understand. I won't go through the code of the custom activity here, since there is nothing special about it and it is available for download so you can look at it and play with it yourself.

The threshold values for Maintainability Index and Cyclomatic Complexity are defined in the build process template, and can be modified per build definition: I have taken the default values for these settings from my colleague Terje Sandström's post on Code Metrics - suggestions for appropriate limits. You'll notice that this is quite an improvement compared to using code metrics inside the IDE, where the Red/Yellow/Green limits are fixed (and the default values are somewhat strange; see Terje's post for a discussion on this).

This is the first version of the code metrics integration with TFS 2010 Build; I will probably enhance the functionality and the logging (the "tree view" structure in the log becomes quite hard to read) soon. I will also consider adding it to the Community TFS Build Extensions site when it becomes a bit more mature. Another obvious improvement is to extend the data warehouse of TFS and push the metric results back to the warehouse and make them visible in the reports.
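To make steps 1-3 concrete without the Linq 2 XSD plumbing, here is a minimal stand-alone sketch of the threshold check using plain LINQ to XML. Treat the element and attribute names ("Metric", "Name", "Value") as assumptions to verify against your own result.xml (see Cameron Skinner's post for the real schema), and the threshold constants stand in for the workflow arguments described above:

using System;
using System.Linq;
using System.Xml.Linq;

class MetricsThresholdChecker
{
    // Assumed defaults; in the build they arrive as workflow arguments.
    const int MinMaintainabilityIndex = 40;
    const int MaxCyclomaticComplexity = 15;

    static int Main(string[] args)
    {
        string path = args.Length > 0 ? args[0] : "result.xml";
        XDocument doc = XDocument.Load(path);
        bool failed = false;

        // The result file carries <Metric Name="..." Value="..."/> elements at
        // module, namespace, type and member level; we simply scan them all.
        foreach (XElement metric in doc.Descendants("Metric"))
        {
            string name = (string)metric.Attribute("Name");
            int value;
            if (!int.TryParse((string)metric.Attribute("Value"), out value))
            {
                continue; // some metric values are formatted; skip what we can't parse
            }

            string owner = DescribeOwner(metric);
            if (name == "MaintainabilityIndex" && value < MinMaintainabilityIndex)
            {
                Console.WriteLine("FAIL: MaintainabilityIndex {0} at {1}", value, owner);
                failed = true;
            }
            else if (name == "CyclomaticComplexity" && value > MaxCyclomaticComplexity)
            {
                Console.WriteLine("FAIL: CyclomaticComplexity {0} at {1}", value, owner);
                failed = true;
            }
        }

        // A non-zero exit code would translate to a (partially) failed build.
        return failed ? 1 : 0;
    }

    // Builds something like "MyApp.dll/MyApp.Web/HomeController/Index()" from the ancestors.
    static string DescribeOwner(XElement metric)
    {
        string[] names = metric.Ancestors()
            .Select(a => (string)a.Attribute("Name"))
            .Where(n => n != null)
            .Reverse()
            .ToArray();
        return string.Join("/", names);
    }
}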

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
Joes 2 Pros SQL Server Learning series is indeed fun. The Joes 2 Pros series is written for beginners and for those who want to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining the simplest concepts, such as how to open SQL Server Management Studio. Honestly, the books start that basic, but as they progress Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be in the exam, the series focuses on learning the important concepts thoroughly. The books in no way take shortcuts to explain any concept and, at times, will go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in just a few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

Introduction to XML Data Type Methods – SQL in Sixty Seconds #015

The XML data type was first introduced with SQL Server 2005. This data type continues with SQL Server 2008, where expanded XML features are available, most notable of which is the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008:

- query() – Used to extract XML fragments from an XML data type.
- value() – Used to extract a single value from an XML document.
- exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no.
- modify() – Updates XML data in an XML data type.
- nodes() – Shreds XML data into multiple rows (not covered in this blog post).

[Detailed Blog Post] | [Quiz with Answer]

Introduction to SQL Error Actions – SQL in Sixty Seconds #014

Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL error actions can react to error severity levels 11-16:

- Statement Termination – The statement within the procedure fails but the code keeps on running to the next statement. Transactions are not affected.
- Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value.
- Batch Termination – The entire client call is terminated.
- XACT_ABORT – (ON = The entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.)

[Detailed Blog Post] | [Quiz with Answer]

Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013

Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause.
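To make the syntax concrete, here is a minimal illustration against the AdventureWorks sample database; the hint combination is purely illustrative, not a recommendation:

SELECT soh.CustomerID, COUNT(*) AS OrderCount
FROM Sales.SalesOrderHeader AS soh
GROUP BY soh.CustomerID
OPTION (MAXDOP 1, RECOMPILE); -- hints in the OPTION clause apply to the whole statement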
Cautionary Note: Because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used only as a last resort, and then by experienced developers and database administrators, to achieve the desired results.

[Detailed Blog Post] | [Quiz with Answer]

Introduction to Hierarchical Query – SQL in Sixty Seconds #012

A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a temp table while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly:

- Anchor query (runs once and the results 'seed' the recursive query)
- Recursive query (runs multiple times and is the criteria for the remaining results)
- UNION ALL statement to bind the anchor and recursive queries together.
- INNER JOIN statement to bind the recursive query to the results of the CTE.

[Detailed Blog Post] | [Quiz with Answer]

Introduction to SQL Server Security – SQL in Sixty Seconds #011

Let's get some basic definitions down first. Take the workplace example where "Tom" needs "Read" access to the "Financial Folder". What are the Securable, Principal, and Permission in that last sentence?

- A Securable is a resource that someone might want to access (like the Financial Folder).
- A Principal is anything that might want to gain access to the securable (like Tom).
- A Permission is the level of access a principal has to a securable (like Read).

[Detailed Blog Post] | [Quiz with Answer]

Please leave a comment explaining which one was your favorite video, as that will help me understand what works and what needs improvement.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Union, Except and Intersect operators in Linq

    - by Jalpesh P. Vadgama
While developing a Windows service using Linq-To-SQL, I was in need of something that would intersect two lists and return a list with the result. After searching the net I found three greatly useful operators in Linq: Union, Except and Intersect. Here is an explanation of each operator.
Union Operator: The Union operator combines the elements of both collections and returns the distinct elements as a third new collection.
Except Operator: The Except operator removes from the first collection those elements that are also present in the second collection, and returns the remainder as a third new collection.
Intersect Operator: As the name suggests, it returns the elements common to both collections as a new collection.
Let's take a simple console application as an example, where I have used two string arrays, applied the three operators one by one, and printed the results using Console.WriteLine. Here is the code for that.

using System;
using System.Linq;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            string[] a = { "a", "b", "c", "d" };
            string[] b = { "d", "e", "f", "g" };

            var UnResult = a.Union(b);
            Console.WriteLine("Union Result");
            foreach (string s in UnResult)
            {
                Console.WriteLine(s);
            }

            var ExResult = a.Except(b);
            Console.WriteLine("Except Result");
            foreach (string s in ExResult)
            {
                Console.WriteLine(s);
            }

            var InResult = a.Intersect(b);
            Console.WriteLine("Intersect Result");
            foreach (string s in InResult)
            {
                Console.WriteLine(s);
            }
            Console.ReadLine();
        }
    }
}

Here is the output of the console application, as expected. Hope this helps. Technorati Tags: Linq,Except,InterSect,Union,C#

    Read the article

  • Some mail details about Orange Mauritius

Being an internet service provider is not easy after all, for a lot of companies. Luckily, there are quite a few good international operators in this world. For example Orange Mauritius aka Mauritius Telecom aka Wanadoo(?) aka MyT here in Mauritius. The local circumstances give them a quasi-monopoly position on fixed lines for telephony and therefore on cable-based DSL internet connectivity. So far, not bad, but as usual... the details. Just for the record, I am only using the services of Orange for mobile, but friends and customers are bound, eh, stuck, with other services of Orange Mauritius. And usually, being the IT guy, they get in touch with me to complain about problems or to ask questions about their ADSL / MyT connection, mail services or whatever. Most of those issues are user-related and easy to solve by tweaking the configuration of their computer a little bit, but sometimes it gets weird.

Using Orange ADSL... somewhere else

Now, let's imagine we have been an Orange ADSL customer for ages and we are using their mail services with our very own mail address like "[email protected]". We configured our mail client - Thunderbird, Outlook Express, Outlook or Windows Mail - as publicly described, and we are able to receive and send emails like a champion. No problems at all, the world is green. Did I mention that we have a laptop? Ok, let's take our movable piece of information technology and visit a friend here on the island. Not surprisingly, he is also a customer of Orange, so we can read and answer emails. But one day we happen to hang out with someone who uses Emtel via WiMAX or UMTS, because Orange is not the only internet service provider... And the fun starts... We can still receive and read emails from our Orange mail account and the IT world is still bright, but try to send mail to someone outside the domain "@intnet.mu" or "@orange.mu". Your mail client will refuse to send it with SMTP message 5.1.0 "blah not allowed". First guess: there is a problem with the mail client; maybe the configuration magically changed over-night. But no, it still works at home... So, there is for sure a problem with the guy's internet connection. At least, it is his fault not to have Orange internet services, so it can not work properly...

The Orange Mail FAQ

After some more frustration we finally check out the Orange Mail FAQ to see whether this (obviously?) common problem has been described already. Sorry, but those FAQ entries are even more confusing, as it is not really clear how to handle this scenario. Best of all, most of the entries still refer to servers in the domain "intnet.mu". I mean, Orange will disable those systems in favour of the domain "orange.mu" in the near future and does not amend their FAQs. Come on, guys! Ok, settings for POP3 are there. Hm, what about the secure version, POP3S? No signs at all... Even changing your mail client to use password encryption with STARTTLS is not allowed at all. Use "bow.intnet.mu" for incoming mail... Ahhh, pretty obvious host name. I mean, at least something like pop.intnet.mu or pop3.intnet.mu would have been more accurate. Funniest of all, the hostname "pop.orange.mu" is also accessible for receiving mail from your account. Alright, checking the FAQ for SMTP options: neither authentication nor POP-before-SMTP nor any other well-known and established mechanism for sending emails is described. I guess that spotting a whale or shark in Mauritian waters would be easier.
Trial and error on SMTP settings reveals that neither STARTTLS nor any other connection or password encryption is available. Using SSL/TLS on SMTP only reveals that there is no service answering your request.

Calling customer service

So, we have to bite into the bitter apple and get in touch with Orange customer service to complain about and explain our case and ask for advice. After some hiccups, we finally manage to get hold of someone competent in mail services, and we receive the golden spoon of mail configuration made by Orange Mauritius: SMTP hostname: smtpauth.intnet.mu. And the world of IT is surprisingly green again.

Customer satisfaction?

Dear Orange Mauritius, what's the problem with this information? Are you scared of mail spammers? Why isn't there any such case in your FAQs? Ok, talking about your FAQs - simply said: they are badly outdated! You tell customers to configure their mail clients with server names in the domain intnet.mu but to specify the account username with orange.mu as the domain part - although there are servers available in the domain orange.mu after all. So, why don't you provide current information like this:

POP3 server name: pop.orange.mu
SMTP server name: smtp.orange.mu
SMTP authenticated: smtpauth.orange.mu

It's not difficult, is it? In my humble opinion, not really, and you would provide clean, consistent and up-to-date information for your customers. This would produce less frustration and so less traffic on your customer service lines, which, after all, would improve the total user experience and satisfaction level on both sides. Now, imagine taking your laptop abroad without knowing these facts, and having to use other internet service providers to get online... Calling your customer service line would be unnecessarily expensive!
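For reference, here is the complete set of client settings that worked in this case; the host names are the ones confirmed above, while the port numbers are the usual plain-text defaults and an assumption on my part:

    Incoming (POP3): pop.orange.mu (or the legacy bow.intnet.mu), port 110, no SSL/TLS
    Outgoing (SMTP): smtpauth.intnet.mu, port 25, authentication enabled, no STARTTLS
    Username:        your full mail address, with orange.mu as the domain part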

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database. Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.

The latest ROracle enhancements include:
Support for Oracle TimesTen In-Memory Database
Support for Date-Time using R's POSIXct/POSIXlt data types
RAW, BLOB and BFILE data type support
Option to specify number of rows per fetch operation
Option to prefetch LOB data
Break support using Ctrl-C
Statement caching support

TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:
Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
Analytic clauses: OVER PARTITION BY and OVER ORDER BY
Multidimensional grouping operators:
Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS
Grouping functions: GROUP, GROUPING_ID, GROUP_ID
WITH clause, which allows repeated references to a named subquery block
Aggregate expressions over DISTINCT expressions
General expressions that return a character string in the source or a pattern within the LIKE predicate
Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)
Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

> install.packages("ROracle")
> library(ROracle)
Loading required package: DBI
> drv <- dbDriver("Oracle")

Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

> conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

You have the option to report the server type - Oracle or TimesTen?

> print (paste ("Server type =", dbGetInfo (conn)$serverType))
[1] "Server type = TimesTen IMDB"

To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

> dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
[1] TRUE

Verify that the newly created IRIS table is available in the database.
To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

> dbListTables (conn)
[1] "IRIS"
> dbListFields (conn, "IRIS")
[1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

> iris.summary <- dbGetQuery(conn, 'SELECT SPECIES,
    AVG ("SEPAL.LENGTH") AS AVG_SLENGTH,
    AVG ("SEPAL.WIDTH") AS AVG_SWIDTH,
    AVG ("PETAL.LENGTH") AS AVG_PLENGTH,
    AVG ("PETAL.WIDTH") AS AVG_PWIDTH
    FROM IRIS GROUP BY ROLLUP (SPECIES)')
> iris.summary
     SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
1     setosa    5.006000   3.428000       1.462   0.246000
2 versicolor    5.936000   2.770000       4.260   1.326000
3  virginica    6.588000   2.974000       5.552   2.026000
4       <NA>    5.843333   3.057333       3.758   1.199333

Finally, disconnect from the TimesTen Database.

> dbCommit (conn)
[1] TRUE
> dbDisconnect (conn)
[1] TRUE

We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • When is my View too smart?

    - by Kyle Burns
In this posting, I will discuss the motivation behind keeping View code as thin as possible when using patterns such as MVC, MVVM, and MVP.  Once the motivation is identified, I will examine some ways to determine whether a View contains logic that belongs in another part of the application.  While the concepts that I will discuss are applicable to most any pattern which favors a thin View, any concrete examples that I present will center on ASP.NET MVC.

Design patterns that include a Model, a View, and other components such as a Controller, ViewModel, or Presenter are not new to application development.  These patterns have, in fact, been around since the early days of building applications with graphical interfaces.  The reason that these patterns emerged is simple: the code running closest to the user tends to be littered with logic and library calls that center around implementation details of showing and manipulating user interface widgets, and when this type of code is interspersed with application domain logic it becomes difficult to understand and much more difficult to adequately test.  By removing domain logic from the View, we ensure that the View has a single responsibility of drawing the screen which, in turn, makes our application easier to understand and maintain.

I was recently asked to take a look at an ASP.NET MVC View because the developer reviewing it thought that it possibly had too much going on in the view.  I looked at the .CSHTML file and the first thing that occurred to me was that it began with 40 lines of code declaring member variables and performing the necessary calculations to populate these variables, which were later either output directly to the page or used to control some conditional rendering action (such as adding a class name to an HTML element or not rendering another element at all).  This exhibited both of what I consider the primary heuristics (or code smells) indicating that the View is too smart:

Member variables – in general, variables in View code are an indication that the Model to which the View is being bound is not sufficient for the needs of the View and that the View has had to augment that Model.  Notable exceptions to this guideline include variables used to hold information specifically related to rendering (such as a dynamically determined CSS class name or the depth within a recursive structure for indentation purposes) and variables which are used to facilitate looping through collections while binding.

Arithmetic – as with member variables, the presence of arithmetic operators within View code is an indication that the Model servicing the View is insufficient for its needs.  For example, if the Model represents a line item in a sales order, it might seem perfectly natural to “normalize” the Model by storing the quantity and unit price in the Model and multiply these within the View to show the line total.  While this does seem natural, it introduces a business rule to the View code and makes it impossible to test that the rounding of the result meets the requirement of the business without executing the View.  Within View code, arithmetic should only be used for activities such as incrementing loop counters and calculating element widths.

In addition to the two characteristics of a “Smart View” that I’ve discussed already, this View also exhibited another heuristic that commonly indicates to me the need to refactor a View and make it a bit less smart.
That characteristic is the existence of Boolean logic that either does not work directly with properties of the Model or works with too many properties of the Model.  Consider the following code, and consider how logic that does not work directly with properties of the Model is just another form of the “member variable” heuristic covered earlier:

@if(DateTime.Now.Hour < 12)
{
    <div>Good Morning!</div>
}
else
{
    <div>Greetings</div>
}

This code performs business logic to determine whether it is morning.  A possible refactoring would be to add an IsMorning property to the Model, but in this particular case there is enough similarity between the branches that the entire branching structure could be collapsed by adding a Greeting property to the Model and using it similarly to the following:

<div>@Model.Greeting</div>

Now let’s look at some complex logic around multiple Model properties:

@if (Model.PageNumber + Model.NumbersToDisplay == Model.PageCount
        || (Model.PageCount != Model.CurrentPage
            && !Model.DisplayValues.Contains(Model.PageCount)))
{
    <div>There's more to see!</div>
}

In this scenario, not only is the View code difficult to read (you shouldn’t have to play “human compiler” to determine the purpose of the code), but it is also complex enough to be at risk for logical errors that cannot be detected without executing the View.  Conditional logic that requires more than a single logical operator should be looked at more closely to determine whether the condition should be evaluated elsewhere and exposed as a single property of the Model.  Moving the logic above outside of the View and exposing a new Model property would simplify the View code to:

@if(Model.HasMoreToSee)
{
    <div>There’s more to see!</div>
}

In this posting I have briefly discussed some of the more prominent heuristics that indicate a need to push code from the View into other pieces of the application.  You should now be able to recognize these symptoms when building or maintaining Views (or the Models that support them) in your applications.

    Read the article


< Previous Page | 35 36 37 38 39 40 41 42 43 44  | Next Page >