Search Results

Search found 20040 results on 802 pages for 'part'.


  • Request for Taking Part in Survey

    Maciej Piosik: Therefore I would like to ask the FOSS community members, both developers and users, to help me with my research. Your motivations for using this kind of software are the key to my study.

    Read the article

  • OData Mix10 – Part Dos

    - by Jon Dalberg
    The other day I had a snazzy post on fetching all the video (WMV) files from Mix ‘10: a simple console application that grabbed the URLs from the OData feed and downloaded the videos. I wanted to change that app to fire the OData query asynchronously, so here’s what resulted:

         1: static void Main(string[] args)
         2: {
         3:     var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc"));
         4: 
         5:     var temp = mix.Files.Where(f => f.TypeName == "WMV");
         6:     var query = temp as DataServiceQuery<Mix.File>;
         7: 
         8:     query.BeginExecute(OnFileQueryComplete, query);
         9: 
        10:     // waiting...
        11:     Console.ReadLine();
        12: }
        13: 
        14: static void OnFileQueryComplete(IAsyncResult result)
        15: {
        16:     var query = result.AsyncState as DataServiceQuery<Mix.File>;
        17:     var response = query.EndExecute(result);
        18: 
        19:     var web = new WebClient();
        20: 
        21:     var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10");
        22: 
        23:     Directory.CreateDirectory(myVideos);
        24: 
        25:     foreach (Mix.File f in response)
        26:     {
        27:         var fileName = new Uri(f.Url).Segments.Last();
        28:         Console.WriteLine(f.Url);
        29:         web.DownloadFile(f.Url, Path.Combine(myVideos, fileName));
        30:     }
        31: }

    There are two important things here that are not explained well in the MSDN docs. See lines 5 and 6? That’s where I query for the WMV files, and it returns an IQueryable<T>. You *have* to cast that to a DataServiceQuery<T> and then call BeginExecute. The documented example does not filter, so it didn’t show that step. Line 16 shows the correct way to get the previously executed DataServiceQuery<T> from the async result. If you look at the MSDN example docs, they show (incorrectly) just casting the result, like this:

        // wrong
        var query = result as DataServiceQuery<Mix.File>;

    Other than those items it is relatively straightforward and we’re all async-ified. Enjoy!
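    As a side note not covered in the post: on .NET 4 the same Begin/End pair can be wrapped in a Task via Task.Factory.FromAsync. A minimal sketch, assuming the same Mix.* service reference plus a using directive for System.Threading.Tasks:

        // Sketch only (not from the original post): wrap BeginExecute/EndExecute
        // in a Task<IEnumerable<Mix.File>> instead of using an explicit callback.
        var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc"));
        var query = mix.Files.Where(f => f.TypeName == "WMV") as DataServiceQuery<Mix.File>;

        var task = Task.Factory.FromAsync<IEnumerable<Mix.File>>(
            query.BeginExecute, query.EndExecute, null);

        foreach (Mix.File f in task.Result)   // Result blocks until the query completes
            Console.WriteLine(f.Url);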

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 1)

    - by Hugo Kornelis
    So you thought that encapsulating code in user-defined functions for easy reuse is a good idea? Think again! SQL Server supports three types of user-defined functions. Only one of them qualifies as good. The other two – well, the title says it all, doesn’t it?

    The bad: scalar functions
    A scalar user-defined function (UDF) is very much like a stored procedure, except that it always returns a single value of a predefined data type – and because of that property, it isn’t invoked with an EXECUTE statement,...(read more)

    Read the article

  • Moving from Silverlight 4 Beta to RC - Part 2

    In my previous post I talked about updating my development environment from Silverlight 4 Beta to Silverlight 4 RC (release candidate). After updating, I opened the solution for my Task-It project and found that several things were broken. I would've been surprised if it had just worked as-is! What disappointed me is that after spending a decent amount of time searching the web, I could not find information telling me what I needed to update or change... and wouldn't it be nice if there were a wizard to update it for you?

    What changed? I wish I had made notes along the way for each of the things I found, but here are a few that I can recall.

    In the Web project, the following DLLs no longer exist:
    - System.Web.DomainServices.dll
    - System.Web.DomainServices.LinqToSql.dll (if you are using the Entity Framework I believe that one is System.Web.DomainServices.EntityFramework.dll)
    - System.Web.Ria.dll

    In the Silverlight project:
    - System.Windows.Ria.dll

    I'm not positive which new assemblies need to be referenced for your project, but I'm going to list the ones I think you need. One way to verify is to create a new Silverlight application with support for WCF RIA Services and see which DLLs are included.

    In the Web project:
    - System.Data.Entity.dll
    - System.ServiceModel.DomainServices.EntityFramework.dll (I've moved from LINQ to SQL to Entity Framework, so I'm not sure which one the LINQ to SQL stuff comes from)
    - System.ServiceModel.DomainServices.Hosting.dll
    - System.ServiceModel.DomainServices.Server.dll

    In the Silverlight project:
    - System.ServiceModel.DomainServices.Client.dll
    - System.ServiceModel.DomainServices.Client.Web.dll
    - System.ServiceModel.Web.Extensions.dll

    Where are these DLLs? They all live in either the Silverlight or RIA Services subdirectories under:
        C:\Program Files\Microsoft SDKs
    Or, if you are on a 64-bit machine like me:
        C:\Program Files (x86)\Microsoft SDKs

    Wrap up: Good luck, and I hope this helps to get you back in business! If anyone finds anything that I've missed, please enter a comment and I'll update the post accordingly.

    Read the article

  • PASS Summit 2011 – Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011. I had really wanted to attend Gail Shaw’s (blog|twitter) and Grant Fritchey’s (blog|twitter) pre-conference seminar “All About Execution Plans” on Monday, but that would have meant flying out on Sunday, which I couldn’t do. On Tuesday, I attended Allan Hirt’s (blog|twitter) pre-conference seminar entitled “A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups”. Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012. Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD. Hmpf!

    On Wednesday, I attended Gail Shaw’s “Bad Plan! Sit!”, Andrew Kelly’s (blog|twitter) “SQL 2008 Query Statistics”, Dan Jones’ (blog|twitter) “Improving your PowerShell Productivity”, and Brent Ozar’s (blog|twitter) “BLITZ! The SQL – More One Hour SQL Server Takeovers”. In Gail’s session, she went over how to fix bad plans and bad query patterns. Update your stale statistics!

    How to fix bad plans:
    - Use local variables – the optimizer can’t sniff them, so it’ll optimize for an “average” value
    - Use RECOMPILE (at the query or stored procedure level) – CPU hit
    - OPTIMIZE FOR hint – the most common value you’ll pass

    How to fix bad query patterns:
    - Don’t use them – ha!
    - Catch-all queries: use dynamic SQL or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into “last resort” and “very last resort” options, but those are risky unless you know what you are doing. For the average Joe, she wouldn’t recommend these. Examples are query hints and plan guides.

    While I enjoyed Andrew’s session, I didn’t take any notes as it was familiar material. Andrew is a great speaker though, and I’d highly recommend attending his sessions in the future.

    Next up was Dan’s PowerShell session. I need to look into profiles, manifests, function modules, and function import scripts more, as I just didn’t quite grasp these concepts. I am attending a PowerShell training class at the end of November, so maybe that’ll help clear it up. I really enjoyed the Excel integration demo. It was very cool watching PowerShell build the spreadsheet in real time. I must look into this more! On a side note, I am jealous of Dan’s hair. Fabulous hair!

    Brent’s session showed us how to quickly gather information about a server that you will be taking over database administration duties for. He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz. I can’t wait to use this at my work, even on systems where I’ve been the primary DBA for years; maybe there’s something I’ve overlooked. We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out. He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don’t have to constantly update it when he makes a change. I think I’ll utilize his update code for some other challenges that we face at my work.

    Read the article

  • NRF Big Show 2011 -- Part 2

    - by David Dorf
    One of the things I love about attending NRF is visiting the smaller booths to see what new innovative ideas have sprung up. After all, by watching emerging technologies we can get a sense of how the retail experience might change. After NRF I'm hoping to write a post on what I found, if anything, so be sure to check back.

    At the Oracle Retail booth we'll be demonstrating some of the aspects of the changing retail experience. These demos use a mix of GA and experimental components. Here are some highlights:

    1. Checkin
    We wrote a consumer iPhone app we call Store Gateway that lets consumers access information from the store. They'll start by doing a checkin when they arrive, which will alert the store manager via another iPhone app we wrote called Mobile Manager. Additionally, we display a welcome message using Starmount's digital sign.

    2. Receive Offers
    There are three interaction points where a store can easily make an offer to a consumer: checkin, product scans, and checkout. For this demo we're calling our Universal Offer Engine at checkin to determine the best offer for this particular consumer. This offer is then displayed on the consumer's phone as well as on the digital sign.

    3. Scan Products
    To save consumers from having to scan product barcodes, we used Store Inventory Management to print QR codes on shelf labels, then provided access to a scanner in the Store Gateway iPhone app. When the consumer scans the shelf label they are shown product information provided by the retailer.

    4. Checkout
    While we don't have an NFC-enabled mobile phone, we have an NFC chip that can attach to a phone. We're using this to check out using a reader provided by ViVOTech. Tap the phone on the reader, and the POS accesses the customer number, coupons, and payment information. This really speeds up the checkout process.

    5. Digital Receipt
    After the transaction is complete, a digital copy of the receipt is sent to Intuit's QuickReceipts, where consumers can store all their digital receipts. There's even an iPhone app that provides easy access to the receipts.

    This covers about half of what we'll be showing, so be sure to stop by. I'll also be talking about how mobile is impacting the retail experience at the Wednesday morning session NRF Mobile Retail Initiative: a Blueprint for Action. See you at the Big Show!

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 1)

    Over the past couple of months I've been working on a couple of projects that have used the free Google Maps API (http://code.google.com/apis/maps/) to add interactive maps and geocoding (http://en.wikipedia.org/wiki/Geocoding) capabilities to ASP.NET websites. In a nutshell, the Google Maps API allows you to display maps on your website, to add markers onto the map, and to compute the latitude and longitude of an address, among many other tasks. With some Google Maps API experience under my belt, I decided it would be fun to implement a store locator feature and share it here on 4Guys. A store locator lets a visitor enter an address or postal code and then shows the nearby stores. Typically, store locators display the
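    The excerpt stops short of the implementation; purely as an illustration of the geocoding piece it describes, here is a small C# sketch. The endpoint URL and XML element names are assumptions based on the public Google Geocoding web service of that era, so verify them against the current documentation:

        using System;
        using System.Linq;
        using System.Net;
        using System.Xml.Linq;

        class GeocodeSketch
        {
            // Hypothetical helper: prints the latitude/longitude for an address.
            static void Main()
            {
                var address = Uri.EscapeDataString("1600 Amphitheatre Parkway, Mountain View, CA");
                var url = "http://maps.googleapis.com/maps/api/geocode/xml?address=" + address + "&sensor=false";

                // Download the XML response and pull out the first <location> element.
                var xml = XDocument.Parse(new WebClient().DownloadString(url));
                var location = xml.Descendants("location").First();

                Console.WriteLine("Lat: {0}, Lng: {1}",
                    (string)location.Element("lat"),
                    (string)location.Element("lng"));
            }
        }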

    Read the article

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with a situation where your work task asks you to implement functionality in the WebCenter Portal, and you browse through the Resource Catalog (Business Dictionary) and find the functionality you need? However, when you get started there are small shortcomings, and you ask yourself:
    - How can I re-use what is out of the box?
    - I wonder what code I need to use to produce similar functions and include my new requirements?
    - Must I write a new taskflow?

    The answer to the above questions is in many cases simply that you can do a taskflow customization to the out-of-the-box taskflows. In this post I will help you understand how to do such a customization. It is best described as a 4-step process; see the image flow below for illustration.

    Just to clarify a few naming confusions that might occur when going through the above process:
    - Customization Role is a function within JDeveloper that will allow you to implement view and flow customizations to existing taskflows.
    - WebCenter Portal – Spaces Taskflow Customization Framework: this technology scope does not only refer to WebCenter Spaces; it also includes WebCenter Portal/Framework.
    - A taskflow customization does not overwrite or replace any code; it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces).

    To sum up this simple procedure, I would also like to help you find your way around the main topic for this post series. This series focuses primarily on Content integration with WebCenter Portal, so where can I find content-related taskflows in the WebCenter libraries? The list below mentions some useful locations for taskflows and each taskflow's page fragments.

    Library Reference - WebCenter Document Library Service View

    Content Presenter
    Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
    Taskflow: contentPresenter.xml - The Content Presenter taskflow
    Taskflow: contentPresenterWizard.xml - The publishing wizard to select content, select a template and preview, including contribution

    Document Manager
    Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager
    Taskflow: documentManager.xml - The Document Manager taskflow, which includes references to document management features including browsing, download, uploading and viewing

    For more information on taskflow customizations please see the following documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

    Read the article

  • Oracle to SQL Server: Crossing the Great Divide, Part 1

    When a SQL expert moves from Oracle to SQL Server, he can spot obvious strengths and weaknesses in the product that aren't obvious to the SQL Server DBA. Jonathan Lewis is that man, as he records his train of thought whilst he investigates the mechanics of the database engine. The result makes interesting reading.

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 4)

    - by Hugo Kornelis
    Scalar user-defined functions are bad for performance. I already showed that for T-SQL scalar user-defined functions without and with data access, and for most CLR scalar user-defined functions without data access, and in this blog post I will show that CLR scalar user-defined functions with data access fit into that picture.

    First attempt
    Sticking to my simplistic example of finding the triple of an integer value by reading it from a pre-populated lookup table and following the standard recommendations...(read more)
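    The excerpt cuts off before the code; purely as an illustration of the shape of a CLR scalar UDF with data access, here is a sketch (the dbo.Triples table and its column names are invented stand-ins for the post's lookup table):

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public static class UserDefinedFunctions
        {
            // Hypothetical CLR scalar UDF: looks up the "triple" of @value in a
            // pre-populated table (table and column names are assumptions).
            [SqlFunction(DataAccess = DataAccessKind.Read)]
            public static int TripleFromLookup(int value)
            {
                // "context connection=true" reuses the calling session's connection.
                using (var conn = new SqlConnection("context connection=true"))
                using (var cmd = new SqlCommand(
                    "SELECT Triple FROM dbo.Triples WHERE Value = @value", conn))
                {
                    cmd.Parameters.AddWithValue("@value", value);
                    conn.Open();
                    return (int)cmd.ExecuteScalar();
                }
            }
        }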

    Read the article

  • Part 7: EBS Modifications and Flagged Files in R12

    - by volker.eckardt(at)oracle.com
    Let me, following up on my previous blog, explain the procedure for flagged files a bit better and illustrate it with screenshots. Flagged files is a concept within Oracle E-Business Suite (EBS) release 12, where you flag a standard deployment file, let’s say a Forms file, a package or a Java class file. When you run the patch analysis, the list of flagged files is checked, and in case one of these files gets patched, the analysis report will tell you. Note: This functionality is also available in release 11, where it is implemented and known as “applcust.txt”.

    You can flag as many files as you want, whatever their relationship to your customizations. In addition to the flag itself you can add a comment. You should use this comment to point to your customization reference (here XXAR_RPT_066 or XXAP_CUST_030).

    Consider the following two cases:
    - You have created your own report, based on a standard report. In this case you will flag the report file itself, and the key views used. When a patch updates one of these files, you will be informed and can initiate a proper review and testing. (ex.: first line for ARXCTA.rdf)
    - You have created an extensive personalization and, because it is business critical, you would like to be informed if the page definition gets updated. In this case you register the PG.xml file as a flagged file. (ex.: second line below for CreateExtBankAcctPG.xml)

    The menu path to register flagged files is the following:
    (R) System Administrator > (M) Oracle Applications Manager > Site Map > Maintenance > Register Flagged Files

    Your DBA should now run the patch analysis every time he is going to apply a new patch:
    (R) System Administrator > (M) Oracle Applications Manager > Patch Wizard > Task “Recommend/Analyze Patches”

    The screenshot above shows the impact summary. For this blog entry the number “2”, titled “Flagged Files Changed”, is our focus. When you click the “2” you will get a screen similar to the first one in this blog, showing you exactly the files which will get patched if you continue and apply this patch in this environment right now. Note: It is also shown that just 20% of all patch files will get applied. This situation might be different in case your environments are on a different patch level; the customization impact might then be different as well.

    The flagging step can be done directly in the Oracle Applications Manager; our developers are responsible for it. To transport such a flag and comment we use a FNDLOAD script. It is suggested to put the flagged files data file directly into your CEMLI patch. This way the flagged files registration is executed at the same time the patch gets applied.

    Process Steps:

    Developer:
    - Builds CEMLI
    - Reviews code and identifies key standard objects referenced
    - Determines standard object files and flags them
    - Creates FNDLOAD file and adds the same to the CEMLI patch

    DBA:
    - Executes, for every new Oracle standard patch, the patch analysis in a representative environment
    - Checks and retrieves the flagged files and comments
    - Sends the flagged file list back to the development team for analysis / retest

    Developer:
    - Analyses / updates / retests affected CEMLIs

    Prerequisite: The patch analysis has to be executed in an environment where flagged files have been registered. (If you run the patch analysis in a vanilla or outdated environment (compared to your PROD), the analysis will not be so helpful!)

    When to start with flagged files? Start right now utilizing this feature. It is an investment that improves production stability and helps you fulfil your SLAs!

    Summary
    Flagged Files is a very helpful EBS R12 technique when analysing patches. Implement a procedure within your development process to maintain such flags. Let the DBA run the patch analysis in an environment with a patch and customization level similar to your current production.

    Related Links: EBS Patching Procedures - Chapter 2-13 - Registered Flagged Files

    Read the article

  • WIF, ADFS 2 and WCF – Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
    See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility, since you can use your own token caching scheme instead of being bound to the channel object lifetime. The API is straightforward: you first request a token from the STS and then use that token to create a channel to the relying party service. I’d recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2:

        private static SecurityToken GetToken()
        {
            // Windows authentication over transport security
            var factory = new WSTrustChannelFactory(
                new WindowsWSTrustBinding(SecurityMode.Transport),
                stsEndpoint);
            factory.TrustVersion = TrustVersion.WSTrust13;

            var rst = new RequestSecurityToken
            {
                RequestType = RequestTypes.Issue,
                AppliesTo = new EndpointAddress(svcEndpoint),
                KeyType = KeyTypes.Symmetric
            };

            var channel = factory.CreateChannel();
            return channel.Issue(rst);
        }

    Afterwards, the returned token can be used to create a channel to the service. Again WIF has some helper methods here that make this very easy:

        private static void CallService(SecurityToken token)
        {
            // create binding and turn off sessions
            var binding = new WS2007FederationHttpBinding(
                WSFederationHttpSecurityMode.TransportWithMessageCredential);
            binding.Security.Message.EstablishSecurityContext = false;

            // create factory and enable WIF plumbing
            var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));
            factory.ConfigureChannelFactory<IService>();

            // turn off CardSpace - we already have the token
            factory.Credentials.SupportInteractive = false;

            var channel = factory.CreateChannelWithIssuedToken<IService>(token);

            channel.GetClaims().ForEach(c =>
                Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",
                    c.ClaimType,
                    c.Value,
                    c.Issuer,
                    c.OriginalIssuer));
        }

    Why is this approach more flexible? Well – some don’t like the configuration voodoo. That’s a valid reason for using the manual approach. You also get more control over the token request itself, since you have full control over the RST message that gets sent to the STS. One common parameter that you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token. This in turn means that when you have an application that consists of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced, you have some degree of freedom here. In the next post we will look at the necessary code to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and have to send that on to a federation gateway or Resource STS. Stay tuned.
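    As an aside on the token-caching point above (this helper is not part of the original post), a minimal sketch of caching the issued token until shortly before it expires:

        // Illustrative only: re-request the token only when it is missing
        // or about to expire, instead of calling the STS on every service call.
        private static SecurityToken _cachedToken;

        private static SecurityToken GetCachedToken()
        {
            if (_cachedToken == null ||
                _cachedToken.ValidTo < DateTime.UtcNow.AddMinutes(5))
            {
                _cachedToken = GetToken();
            }
            return _cachedToken;
        }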

    Read the article

  • Customizing UPK outputs (Part 2 - Player)

    - by [email protected]
    There are a few things that can be done to give the Player output a personalized look to match your corporate branding. In my previous post, I talked about changing the logo. In addition to the logo, you can change the graphic in the heading, button colors, border colors and many other items. Prior to making any customizations, I strongly recommend making a copy of the existing Player style. This will give you a backup in case things go wrong. I'd also recommend that you create your own brand. This way, when you install the newest updates from us, your brand will remain intact.

    Creating your own brand is pretty easy. Make sure you have modify permissions on the publishing styles directory, if you are using a multi-user installation. Under the Publishing/Styles folder, create a new folder with your company name. Copy all the publishing styles from the UPK folder to your newly created folder. Now, when you go through the Publishing wizard, you will have two categories to choose from: the UPK category or your custom category.

    Now, for updating the Player output. First, the graphic that appears on the right hand side of the Player. If you're using a multi-user installation, check out the player style from your custom brand. Open the player style. Open the img folder. The file named "banner_image.png" represents the graphic that appears on the right hand side of the player. It is currently sized at 425 x 54. Try to keep your graphic about the same size. Rename your graphic file to be "banner_image.png", and drag it into the img folder. Save the package. Check in the package if you are in a multi-user installation. You've just updated the banner heading!

    Next, let's work on updating some of the other colors in the player. All the customizable areas are located in the skin.css file, which is in the root of the Player style. Many of our customers update the colors to match their own theme. You don't have to be a programmer to make these changes, honest. :)

    To change the colors in the player:
    1. Make a copy of the original skin.css file. (This is to make sure you have a working version to revert to, in case something goes wrong.)
    2. Open the skin.css file from the Player package. You can edit it using Notepad.
    3. Make the desired changes.
    4. Save the file.
    5. Save the package.
    6. Publish to view your new changes.

    When you open the skin.css, you will see groupings like this:

        .headerDivbar {
            height: 21px;
            background-color: #CDE2FD;
        }

    Change the value of the background-color to the color of your choice. Note that you cannot use "red" as a color, but rather you should enter the hexadecimal color code. If you don't know the color code, search the web for "hexadecimal colors" and you'll find many sites to provide the information.

    Here are a few of the variables that you can update:
    - Heading: .headerDivbar - this changes the color of the banner that appears under the graphic
    - Button colors: .navCellOn - changes the color of the mode buttons when your mouse is hovering on them; .navCellOff - changes the color of the mode buttons when the mouse is not over them
    - Lines: .thorizontal - the color of the horizontal lines surrounding the outline; .tvertical - the color of the vertical lines on the left and right margin in the outline; .tsep - the color of the line that separates the outline from the content area
    - Search frame: .tocSearchColor - the color of the search area; .tocFrameText - the background color of the TOC tree.
Hint: If you want to try out the changes prior to updating the style, you can update the skin.css in some content you've already published for the player (it's located in the css folder of the player package). This way, you can immediately see the changes without going through publishing. Once you're happy with the changes, update the skin.css in player style. Want to customize more? Refer to the "Customizing the Player" section of the Content Development manual for more details on all the options in the skin.css that can be changed, and pictures of what each variable controls. I'd love to see how you've customized the player for your corporate needs. Also, if there are other areas of the player you'd like to modify but have not been able to, let us know. Feel free to share your thoughts in the comments. --Maria Cozzolino, Manager of Requirements & UI Design for UPK

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 2 – Table per Type (TPT)

    - by mortezam
    In the previous blog post you saw that there are three different approaches to representing an inheritance hierarchy, and I explained Table per Hierarchy (TPH) as the default mapping strategy in EF Code First. We argued that the disadvantages of TPH may be too serious for our design since it results in denormalized schemas that can become a major burden in the long run. In today’s blog post we are going to learn about Table per Type (TPT) as another inheritance mapping strategy, and we'll see that TPT doesn’t expose us to this problem.

    Table per Type (TPT)
    Table per Type is about representing inheritance relationships as relational foreign key associations. Every class/subclass that declares persistent properties—including abstract classes—has its own table. The table for subclasses contains columns only for each noninherited property (each property declared by the subclass itself) along with a primary key that is also a foreign key of the base class table. This approach is shown in the following figure: for example, if an instance of the CreditCard subclass is made persistent, the values of properties declared by the BillingDetail base class are persisted to a new row of the BillingDetails table. Only the values of properties declared by the subclass (i.e. CreditCard) are persisted to a new row of the CreditCards table. The two rows are linked together by their shared primary key value. Later, the subclass instance may be retrieved from the database by joining the subclass table with the base class table.

    TPT Advantages
    The primary advantage of this strategy is that the SQL schema is normalized. In addition, schema evolution is straightforward (modifying the base class or adding a new subclass is just a matter of modifying or adding one table). Integrity constraint definitions are also straightforward (note how CardType in the CreditCards table is now a non-nullable column). Another, much more important, advantage is the ability to handle polymorphic associations (a polymorphic association is an association to a base class, hence to all classes in the hierarchy, with dynamic resolution of the concrete class at runtime). A polymorphic association to a particular subclass may be represented as a foreign key referencing the table of that particular subclass.
    Implement TPT in EF Code First
    We can create a TPT mapping simply by placing the Table attribute on the subclasses to specify the mapped table name (the Table attribute is a new data annotation and has been added to the System.ComponentModel.DataAnnotations namespace in CTP5):

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        [Table("BankAccounts")]
        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        [Table("CreditCards")]
        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }
        }

    If you prefer the fluent API, then you can create a TPT mapping by using the ToTable() method:

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BankAccount>().ToTable("BankAccounts");
            modelBuilder.Entity<CreditCard>().ToTable("CreditCards");
        }

    Generated SQL For Queries
    Let’s take an example of a simple non-polymorphic query that returns a list of all the BankAccounts:

        var query = from b in context.BillingDetails.OfType<BankAccount>() select b;

    Executing this query (by invoking the ToList() method) results in the following SQL statements being sent to the database (on the bottom, you can also see the result of executing the generated query in SQL Server Management Studio).

    Now, let’s take an example of a very simple polymorphic query that requests all the BillingDetails, which includes both BankAccount and CreditCard types:

        var query = from b in context.BillingDetails select b;

    This LINQ query seems even more simple than the previous one, but the resulting SQL query is not as simple as you might expect: as you can see, EF Code First relies on an INNER JOIN to detect the existence (or absence) of rows in the subclass tables CreditCards and BankAccounts so it can determine the concrete subclass for a particular row of the BillingDetails table. Also, the SQL CASE statements that you see in the beginning of the query are just to ensure that columns that are irrelevant for a particular row have NULL values in the returning flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPT Considerations
    Even though this mapping strategy is deceptively simple, experience shows that performance can be unacceptable for complex class hierarchies because queries always require a join across many tables. In addition, this mapping strategy is more difficult to implement by hand—even ad-hoc reporting is more complex. This is an important consideration if you plan to use handwritten SQL in your application. (For ad hoc reporting, database views provide a way to offset the complexity of the TPT strategy. A view may be used to transform the table-per-type model into the much simpler table-per-hierarchy model.)

    Summary
    In this post we learned about Table per Type as the second inheritance mapping in our series. So far, the strategies we’ve discussed require extra consideration with regard to the SQL schema (e.g. in TPT, foreign keys are needed).
    This situation changes with the Table per Concrete Type (TPC) strategy that we will discuss in the next post.

    References:
    - ADO.NET team blog
    - Java Persistence with Hibernate book
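    As a small aside on the polymorphic-association advantage mentioned above (not part of the original post; the Order type and its members are invented for illustration), an association to the BillingDetail base class might look like this:

        // Hypothetical entity with a polymorphic association to the base class.
        public class Order
        {
            public int OrderId { get; set; }

            // May reference either a BankAccount or a CreditCard row; EF resolves
            // the concrete type when the entity is materialized.
            public virtual BillingDetail BillingDetail { get; set; }
        }

        // Usage sketch, assuming a DbSet<Order> Orders is added to InheritanceMappingContext:
        // var order = context.Orders.Include("BillingDetail").First();
        // if (order.BillingDetail is CreditCard) { /* card-specific handling */ }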

    Read the article

  • Rawr Code Clone Analysis – Part 0

    - by Dylan Smith
    Code Clone Analysis is a cool new feature in Visual Studio 11 (vNext). It analyzes all the code in your solution and attempts to identify blocks of code that are similar, and thus candidates for refactoring to eliminate the duplication. The power lies in the fact that the blocks of code don't need to be identical for Code Clone to identify them; it will report Exact, Strong, Medium and Weak matches indicating how similar the blocks of code in question are.

    People that know me know that I'm enthusiastic about both writing clean code and taking old crappy code and making it suck less. So the possibilities for this feature have me pretty excited if it works well, and that's a big if that I'm hoping to explore over the next few blog posts. I'm going to grab the Rawr source code from CodePlex (a World Of Warcraft gear calculator program), run Code Clone Analysis against it, then go through the results one by one and refactor where appropriate, blogging along the way. My goals with this blog series are twofold:
    - Evaluate and demonstrate Code Clone Analysis
    - Provide some concrete examples of refactoring code to eliminate duplication and improve the code base

    Here are the initial results. Code Clone Analysis has found:
    - 129 Exact Matches
    - 201 Strong Matches
    - 300 Medium Matches
    - 193 Weak Matches

    Also indicated is that there was a total of 45,181 potentially duplicated lines of code that could be eliminated through refactoring. Considering the entire solution only has 109,763 lines of code, if true, the number of duplicated lines is pretty significant. In the next post we’ll start examining some of the individual results and determine if they really do indicate a potential refactoring.
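    To make the refactoring goal concrete, here is a tiny illustration (not taken from the Rawr code base; Item is a stand-in type) of the kind of near-duplicate pair such a "strong match" typically flags, and how the duplication can be factored out:

        // Illustrative only; requires System, System.Collections.Generic, System.Linq.
        public class Item
        {
            public float Crit { get; set; }
            public float Haste { get; set; }
        }

        // Two near-duplicate methods: identical except for the stat they sum.
        static float TotalCrit(IEnumerable<Item> items)
        {
            float total = 0;
            foreach (var item in items) total += item.Crit;
            return total;
        }

        static float TotalHaste(IEnumerable<Item> items)
        {
            float total = 0;
            foreach (var item in items) total += item.Haste;
            return total;
        }

        // After refactoring: the varying part becomes a parameter.
        static float Total(IEnumerable<Item> items, Func<Item, float> stat)
        {
            return items.Sum(stat);
        }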

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high-level overview. Now it is time for the details on what was done to the existing CMMI project based on CMMI v4.2. The first step was to go into Visual Studio, then to the Team Project Collection Settings and then to the Process Template Manager. Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will make reference to the path as <toolpath>; however, the actual path to reference in a 64-bit environment is "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE".

    As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs. If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse.

    Now, the work needed with the witadmin tool. That includes the uploading of the structures that differ from v4.2 to v5.0.

    There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of "Area ID", whereas TFS 2008 would have had it as "AreaID". So, this will need to be corrected. Some posts will have you go through this after the errors pop up; I would recommend doing this process prior to executing the importwitd process.

        witadmin listfields /collection:<path to collection> > c:\ListFields.txt

    Review the following fields: for AreaID, review the Name property and validate whether it states "AreaID"; if so, you will need to rename the Name field to reflect "Area ID". ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID would be the other fields to check. To correct the issue, execute the following:

        witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"

    Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID. Once this is done, proceed with the commands below.

        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"
        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"
        witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

    Modifications to the Bug definition: the first step is to export the existing definition.

        witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Make modifications to the recently exported MyBug.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Repeat the process for the Scenario or Requirement type definition:

        witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Make modifications to the recently exported MyRequirement.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping

        tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • The Information Driven Value Chain - Part 1

    - by Paul Homchick
    One hundred years ago, there were places on Earth that no man had ever seen.  Today, a man standing in one of those places can instantaneously communicate with someone who may be strolling down the street on his way to lunch half way around the globe.  Our world is shrinking and becoming virtual. It is a world of incredible bounty and speed where we can get a product delivered to us anywhere on earth within a day or two. However, this world is also one of challenge where volatility, uncertainty, risk and chaos are our daily companions. To prosper amid the realities of this new world, the enterprise needs a business model. Globalization and instant communications demand greater operational flexibility than ever before. Extended supply chains have elevated the management of risk to a central concern, and regulatory demands from multiple governments place an increasing burden of compliance on companies. Finally, the speed of today's business requires continuous innovation to keep from falling behind the global competition.

    Read the article

  • OBIA on Teradata - Part 2 Teradata DB Utilization for ETL

    - by Mohan Ramanuja
    Techniques to Monitor Queries and ETL Load

    CPU and Disk I/O
        select username, processor, sum(cputime), sum(diskio)
        from dbc.ampusage
        where processor = '1-0'
        group by 1,2
        order by 2,3 desc;

        UserName    Vproc    Sum(CpuTime)    Sum(DiskIO)
        AC00916     10       6.71            24975

    List Hardware Errors
    There is a possibility that the system might have adequate disk space but be out of free cylinders. In order to monitor hardware errors, the following query was used:
        select * from dbc.Software_Event_Log where Text like '%restart%' order by thedate, thetime;

    For active users, usage of CPU and analysis of bad CPU-to-I/O ratios:
        select * from dbc.ampusage where username = 'CRMSTGC_DEV_ID' and SUBSTR(AccountName,6,3) = '006';

    Usage By I/O
        select AccountName, UserName, sum(CpuTime), sum(DiskIO)
        from dbc.ampusage
        group by AccountName, UserName
        order by sum(DiskIO) desc;

        AccountName                       UserName          Sum(CpuTime)  Sum(DiskIO)
        $M1$10062209                      AB89487           374628.612    7821847
        $M1$10062210                      AB89487           186692.244    2799412
        $M1$10062213                      COC_ETL_ID        119531.068    331100426
        $M1$10062200                      AB63472           118973.316    109881984
        $M1$10062204                      AB63472           110825.356    94666986
        $M1$10062201                      AB63472           110797.976    75016994
        $M1$10062202                      AC06936           100924.448    407839702
        $M1$10062204                      AB67963           0             4
        $M1$10062207                      AB91990           0             2
        $M1$10062208                      AB63461           0             24
        $M1$10062211                      AB84332           0             6
        $M1$10062214                      AB65484           0             8
        $M1$10062205                      AB77529           0             58
        $M1$10062210                      AC04768           0             36
        $M1$10062206                      AB54940           0             22

    Usage By CPU
        select AccountName, UserName, sum(CpuTime), sum(DiskIO)
        from dbc.ampusage
        group by AccountName, UserName
        order by sum(CpuTime) desc;

        AccountName                       UserName          Sum(CpuTime)  Sum(DiskIO)
        $M1$10062209                      AB89487           374628.612    7821847
        $M1$10062210                      AB89487           186692.244    2799412
        $M1$10062213                      COC_ETL_ID        119531.068    331100426
        $M1$10062200                      AB63472           118973.316    109881984
        $M1$10062204                      AB63472           110825.356    94666986
        $M1$10062201                      AB63472           110797.976    75016994
        $M2$100622105813004760047LOAD     T23_ETLPROC_ENT   0             6
        $M1$10062215                      AA37720           0             180
        $M1$10062209                      AB81670           0             6

        select count(distinct vproc) from dbc.ampusage;
        432

        select * from dbc.dbcinfo;

        AccountName     UserName          CpuTime  DiskIO  CpuTimeNorm         Vproc  VprocType  Model
        $M1$10062205    CRM_STGC_DEV_ID   0.32     1764    12.7423999023438    0      AMP        2580
        $M1$10062205    CRM_STGC_DEV_ID   0.28     1730    11.1495999145508    3      AMP        2580
        $M1$10062205    CRM_STGC_DEV_ID   0.304    1736    12.1052799072266    4      AMP        2580
        $M1$10062205    CRM_STGC_DEV_ID   0.248    1731    9.87535992431641    7      AMP        2580
        $M1$10062205    CRM_STGC_DEV_ID   0.332    1731    13.2202398986816    8      AMP        2580
        $M1$10062205    CRM_STGC_DEV_ID   0.284    1712    11.3088799133301    11     AMP        2580
        $M1$10062205    CRM_STGC_DEV_ID   0.24     1757    9.55679992675781    12     AMP        2580
        $M1$10062205    CRM_STGC_DEV_ID   0.292    1737    11.6274399108887    15     AMP        2580
        $M1$10062205    CRM_STGC_DEV_ID   0.268    1753    10.6717599182129    16     AMP        2580
        $M1$10062205    CRM_STGC_DEV_ID   0.276    1732    10.9903199157715    19     AMP        2580

        select * from dbc.dbcinfo;

        InfoKey                  InfoData
        LANGUAGE SUPPORT MODE    Standard
        RELEASE                  12.00.03.03
        VERSION                  12.00.03.01a

    Read the article

  • Gauging Maturity of your BPM Strategy - part 2 / 2

    - by Sanjeev Sharma
    In my earlier post I had discussed the essence of maturity assessment and the business imperative for doing the same in the context of BPM. In this post I will discuss Oracle’s BPM maturity assessment methodology.

    Oracle’s BPM Maturity model comprises the following components:
    - Maturity – represents stages of evolution of your BPM capability, with 0 being the lowest level and 5 being the highest level
    - Domain – represents multiple perspectives, both technical and business oriented, against which your BPM capability can be assessed
    - Adoption – represents the scale of BPM rollout, starting at the project level and going up to the enterprise level

    Note: Your BPM capability can be at different levels of maturity for the different domains. Oracle’s BPM assessment methodology measures the maturity of your BPM capability at the individual domain level as well as at the aggregate level.

    The output of Oracle’s BPM assessment benefits you in two ways:
    - Gap analysis, by comparing the “As-Is” BPM capability with the desired “To-Be” BPM capability along the various domains (see Figure 1)
    - Systematic adoption, by aligning the evolution of BPM capability with its rollout in multiple phases (see Figure 2)

    Read the article

  • Is verification and validation part of the testing process?

    - by user970696
    Based on many sources, I do not believe the simple definition that the aim of testing is to find as many bugs as possible - we test to ensure that it works or that it does not. E.g. the following are goals of testing from the ISTQB:
    - Determine that (software products) satisfy specified requirements (I think this is verification)
    - Demonstrate that (software products) are fit for purpose (I think that is validation)
    - Detect defects

    I would agree that testing is verification, validation and defect detection. Is that correct?

    Read the article
