Search Results

Search found 7146 results on 286 pages for 'john price'.


  • NetBeans at JavaOne Latin America 2012

    - by TinuA
    The place to be in early December is São Paulo, Brazil, for JavaOne 2012 Latin America (pt_BR site)--and the NetBeans team will be making the trip! Drop in on technical sessions and hands-on labs that show the latest features of the NetBeans IDE in action. Watch demos of HTML5, CSS3 and JavaScript support in NetBeans IDE 7.3 (Release: Winter 2013) and find out how developers can easily and quickly create rich Web and mobile applications. Discover how the IDE provides the best and latest support for building Java EE and JavaFX 2.0 applications, and join the conversation about what's up ahead for NetBeans development. With over 50 technical sessions, tons of demos and labs, JavaOne Latin America is the conference to attend to enhance your coding skills and mingle with experts and developers from the Oracle and Java communities. Mark your calendars and check out NetBeans IDE in the following sessions!
    Tuesday, December 4
    12:15 - 13:15 | Designing Java EE Applications in the Age of CDI | Speakers: Michel Graciano, Consultant, Summa Technologies do Brasil; Michael Santos, TecSinapse | Mezanino: Sala 14
    Wednesday, December 5
    10:00 - 11:00 | Make Your Clients Richer: JavaFX and the NetBeans Platform | Speakers: Gail Anderson, Director of Research; Paul Anderson, Director of Training, Anderson Software Group, Inc. | Mezanino: Sala 12
    Thursday, December 6
    13:45 - 14:45 | Unlocking the Java Platform with NetBeans | Speaker: John Jullion-Ceccarelli, Software Development Director, Oracle | Keynote Hall
    15:00 - 16:00 | Project EASEL: Developing and Managing HTML5 in a Java World | Speaker: John Jullion-Ceccarelli, Software Development Director, Oracle | Mezanino: Sala 14
    See the full conference schedule for a detailed agenda. Get more JavaOne news.

    Read the article

  • Today's Links (6/17/2011)

    - by Bob Rhubart
    Call for Nominations: Oracle Eco-Enterprise Innovation Awards -- Is your organization using Oracle products to reduce your environmental footprint while reducing costs? If so, submit your nomination for Oracle's Eco-Enterprise Innovation award. These awards will be presented to select customers and their partners who are using any of Oracle's products to not only take an environmental lead, but also to reduce their costs and improve their business efficiencies by using green business practices.
    Beyond The Data Grid: Coherence, Normalization, Joins, and Linear Scalability | Ben Stopford -- Ben Stopford presents ODC, a highly distributed in-memory normalized NoSQL datastore designed for scalability, based on normalized data, a Snowflake Schema, and the Connected Replication pattern.
    Upgrading ALSB services to OSB | John Chin-a-Woeng -- John Chin-a-Woeng walks you through the upgrade from AquaLogic Service Bus (ALSB 3.0) to Oracle Service Bus (OSB 10.3).
    SOA & Middleware: Pinning tasks to a user in BPM 11g | Niall Commiskey -- Commiskey illustrates a scenario.
    JDeveloper 11gR2: New option Test WebService in WSDL editor | Lucas Jellema -- The "Test WebService" button in the WSDL Editor in JDeveloper 11gR2 is "just a little feature addition," says Oracle ACE Director Lucas Jellema. "But it can be quite useful all the same."
    Enterprise Business Intelligence 11g Seminar with Mark Rittman -- Oracle ACE Director Mark Rittman conducts a two-day course for Oracle University, in Dublin, IE, July 4-5, 2011.
    Data Integration Webcast Series -- Join Oracle experts for a series covering our data integration solutions. You'll get invaluable information to help boost your data infrastructure so that you can accelerate your business.

    Read the article

  • 50% off ASP.NET hosting

    - by Fabrice Marguerie
    I haven't blogged for a long time because I'm busy working on an exciting new project. It's too early to tell you more; I'll provide details in a few months. Meanwhile, I wanted to write a quick post to share an excellent offer with you. It's that time of the year when you can get deals on many things, including web hosting. I'd like to remind you about Arvixe, a great web hosting provider for Windows (for ASP.NET) and Linux. For 48 hours, between Thursday November 24th at 00:00 PST (08:00 GMT/UTC) and Friday November 25th, Arvixe will be offering 50% off all of their shared hosting products. This applies to all new accounts, for life (as long as you continue to renew the account)! I've been using Arvixe for my websites for more than a year and a half now, and I highly recommend them. Here is an overview of what I get for a very good price:
    - Unlimited disk space
    - Unlimited data transfer
    - Unlimited domains
    - Unlimited POP3 and IMAP mailboxes
    - Unlimited SQL Server 2008 databases
    - Unlimited MySQL databases
    - .NET 1.1, 2, 3.5 and 4
    - Dedicated application pools
    - Full trust
    - IIS 7
    - Daily backups
    - and more...
    And now, you can get all of that for half the price. Just go to Arvixe.com and secure your own hosting account now by using the coupon code "Black Friday" during checkout.
    Disclaimer: the links to Arvixe are affiliate links that may bring me some money if you sign up. Still, I recommend Arvixe because I use them and I'm very happy with what they offer.

    Read the article

  • Why is NDA so hard to understand?

    - by Dave Campbell
    Maybe this concept is simpler for me because of all the jobs I've been on over the years requiring security clearances. I've signed quite a few NDA forms. Some for big companies, some for small, but the meaning of "NDA" remains constant: Non-Disclosure Agreement. To me, that takes no further explanation, but apparently it's confusing to some people, and I don't understand how you can be confused. The papers I signed with the U.S. Army in 1970 read "10 years and $10,000" for a violation... can't imagine what it's up to now, but THAT is a strict NDA :) So those things I've been told, I cannot talk about, period. Even if the entire world knows about them, I cannot speak about them until the information goes off NDA. An example was a Silverlight release a while back. It might have been Silverlight 3, I don't remember. Everyone was anxiously awaiting the release so they could post their material. Of course the entire world knew it was coming out and imminently so. Some enterprising folks had even found the bits on a server before the official announcement. So then the situation became: everyone knew about it, some were even coding with it and blogging about it and yet we couldn't talk about it. Scott Guthrie's posting about it opened the flood gates and then it went off NDA, but up until that moment, we were locked. Sitting out on the edge you're uninstalling and re-installing all the time and you get frustrated when things that used to work don't, but hey... those bits were still warm when you got 'em, and that's the fun. But that fun comes at a price, and the price is the NDA. Awkward yes, confusing no... See you at MIX10, and Stay in the 'Light! MIX10

    Read the article

  • SQL Saturday #274 Slovenia

    - by Dejan Sarka
    Yes, here it is: SQL Saturday #274 is coming to Slovenia (#sqlsatSlovenia). The event will take place on Saturday, December 21st, at:
    pixi* labs, Informacijske tehnologije, d.o.o.
    Poslovna cona A 2
    SI-4208 Šencur
    This company generously offered to host the event, and we, the whole Slovenian SQL Server community, are very grateful for this. A call for speakers has gone out, and we are already getting the first proposals. We are especially happy that we will get the chance to show foreign speakers how beautiful Slovenia, and especially the capital Ljubljana, is in December. Expect a lot of partying right on the streets, no matter the weather. Be prepared, we have slightly weird customs when it comes to drinks. For example, our regular special discount offer is not three drinks for the price of two; it is six drinks for the price of five. If you are a speaker or want to become one, consider sending a proposal. Most of the sessions will be held in English, so even if you don't want to speak, consider coming as a visitor as well. Or maybe you would be interested in becoming a sponsor. Although we are targeting a low-budget event, any kind of sponsorship is very welcome. Please feel free to contact the organizers if you are interested in becoming a sponsor: Matija Lah – [email protected], Mladen Prajdic - [email protected], or Dejan Sarka - [email protected]. Looking forward to seeing you all!

    Read the article

  • Confused about implementing Single Responsibility Principle

    - by HichemSeeSharp
    Please bear with me if the question is not well structured. To put you in the context of my issue: I am building an application that invoices the time vehicles spend in a parking facility. In addition to the stay service there are some other services, and each service has its own calculation logic. Here is an illustration (please correct me if the design is wrong):
    public abstract class Service
    {
        public int Id { get; set; }
        public bool IsActivated { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }
    public class VehicleService : Service
    {
        // MTM: many to many
        public virtual ICollection<MTMVehicleService> Vehicles { get; set; }
    }
    public class StayService : VehicleService
    {
    }
    public class Vehicle
    {
        public int Id { get; set; }
        public string ChassisNumber { get; set; }
        public DateTime? EntryDate { get; set; }
        public DateTime? DeliveryDate { get; set; }
        //...
        public virtual ICollection<MTMVehicleService> Services { get; set; }
    }
    Now, I am focusing on the stay service as an example: I would like to know, at invoicing time, which class(es) should be responsible for generating the invoice item for the service and for each vehicle. This should calculate the duration cost, knowing that the duration can be invoiced partially, so the calculation is: not-yet-invoiced stay days * stay price per day. At the moment I have InvoiceItemsGenerator do everything, but I am aware that there is a better design.
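    One common way to apply the Single Responsibility Principle here is to give each service type its own small generator, so the "do everything" class shrinks to a coordinator. The following is only a sketch under that assumption; the names (InvoiceItem, StayInvoiceItemGenerator) are hypothetical and not taken from the question:
    using System;

    // Hypothetical result type; a real application would map this to its own invoice entities.
    public class InvoiceItem
    {
        public string Description { get; set; }
        public decimal Amount { get; set; }
    }

    // One generator per service type keeps each calculation rule in a single class.
    public class StayInvoiceItemGenerator
    {
        // daysAlreadyInvoiced allows the stay to be billed partially, as described above.
        public InvoiceItem Generate(DateTime entryDate, DateTime invoiceDate,
                                    int daysAlreadyInvoiced, decimal pricePerDay)
        {
            int totalDays = (int)(invoiceDate.Date - entryDate.Date).TotalDays;
            int daysToInvoice = Math.Max(0, totalDays - daysAlreadyInvoiced);

            return new InvoiceItem
            {
                Description = string.Format("Parking stay: {0} day(s)", daysToInvoice),
                Amount = daysToInvoice * pricePerDay
            };
        }
    }
    Each additional service type would get its own generator with its own logic, and the existing InvoiceItemsGenerator could simply ask each service's generator for its items.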

    Read the article

  • Raspberry Pi Now Shipping with 512MB RAM; Still Only $35

    - by Jason Fitzpatrick
    Fans of the tiny Raspberry Pi will be pleased to hear the new version of their Model B board now ships with 512MB of RAM (up from the previous 256MB). The best part about the upgrade? The price point stays at $35 a board. From the official Raspberry Pi blog: One of the most common suggestions we’ve heard since launch is that we should produce a more expensive “Model C” version of Raspberry Pi with extra RAM. This would be useful for people who want to use the Pi as a general-purpose computer, with multiple large applications running concurrently, and would enable some interesting embedded use cases (particularly using Java) which are slightly too heavyweight to fit comfortably in 256MB. The downside of this suggestion for us is that we’re very attached to $35 as our highest price point. With this in mind, we’re pleased to announce that from today all Model B Raspberry Pis will ship with 512MB of RAM as standard. If you have an outstanding order with either distributor, you will receive the upgraded device in place of the 256MB version you ordered. Units should start arriving in customers’ hands today, and we will be making a firmware upgrade available in the next couple of days to enable access to the additional memory. We’re excited to get our hands on a new board and try out Raspbmc with that extra RAM.

    Read the article

  • What is a good way to share internal helpers?

    - by toplel32
    All my projects share the same base library that I have built up over quite some time. It contains utilities and static helper classes to assist them where .NET doesn't exactly offer what I want. Originally all the helpers were written mainly to serve an internal purpose, and it has to stay that way, but sometimes they prove very useful to other assemblies. Now, making them public in a reliable way is more complicated than most would think; for example, all methods that assume nullable types must now contain argument checking, while internal utilities should not be charged with the price of doing so. The price might be negligible, but it is far from right. While refactoring, I have revisited this case multiple times, and I've come up with the following solutions so far (a sketch of the first one follows this list):
    1. Have an internal and a public class for each helper. The internal class contains the actual code while the public class serves as an access point which does argument checking.
    Cons:
    - The internal class requires a prefix to avoid ambiguity (the best presentation should be reserved for public types).
    - It isn't possible to discriminate methods that don't need argument checking.
    2. Have one class that contains both internal and public members (as conventionally implemented in the .NET Framework). At first, this might sound like the best possible solution, but it has the same first unpleasant con as solution 1.
    Cons:
    - Internal methods require a prefix to avoid ambiguity.
    3. Have an internal class which is implemented by a public class that overrides any members that require argument checking.
    Cons:
    - It is non-static; at least one instantiation is required. This doesn't really fit the helper class idea: since a helper generally consists of independent fragments of code, it should not require instantiation. Non-static methods are also slower by a negligible degree, which doesn't really justify this option either.
    There is one general and unavoidable consequence: a lot of maintenance is necessary, because every internal member will require a public counterpart. A note on solution 1: the first consequence can be avoided by putting both classes in different namespaces; for example, you can have the real helper in the root namespace and the public helper in a namespace called "Helpers".
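    As an illustration of solution 1 combined with the namespace note above, here is a minimal sketch; the helper name, method, and namespaces are hypothetical, not from the question:
    using System;

    namespace MyLibrary
    {
        // Internal helper: no argument checks, callers inside the assembly are trusted.
        internal static class StringHelper
        {
            internal static string Truncate(string value, int maxLength)
            {
                return value.Length <= maxLength ? value : value.Substring(0, maxLength);
            }
        }
    }

    namespace MyLibrary.Helpers
    {
        // Public access point: validates arguments, then delegates to the internal helper.
        public static class StringHelper
        {
            public static string Truncate(string value, int maxLength)
            {
                if (value == null) throw new ArgumentNullException("value");
                if (maxLength < 0) throw new ArgumentOutOfRangeException("maxLength");
                return MyLibrary.StringHelper.Truncate(value, maxLength);
            }
        }
    }
    Because the two classes live in different namespaces, both can keep the same clean name, which addresses the first con of solution 1.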

    Read the article

  • The Mac Tax

    - by Robert May
    One of our users was having difficulties with their Mac and using some web software. I decided to go peruse the landscape and see how much of a premium people were paying for their Macs. I priced out a Dell and a Mac from their websites. I tried to get them as close to the same configuration, from a hardware standpoint, as I could: an Apple MacBook Pro and a Dell XPS 17. There are several important differences in the hardware:
    - The Mac doesn't have a Blu-ray player, but the Dell does.
    - The Mac has a slightly slower processor.
    - The Mac claims to have a better battery, but doesn't list the specifics, so there's no way to tell.
    - The Mac doesn't list the video card stats, so there's no way to tell how comparable they are, but they're probably close.
    - The Mac doesn't come with any additional software. No iWork, iPhoto, etc. They were left at their default of None, so arguably, the Dell is more functional out of the box.
    Other than changing the hardware specs to be close, all other configuration options were left at their default. So riddle me this, Batman: why do people buy Macs? I have several dev buddies that own them, but I can't justify the cost. First, most of them load Boot Camp and/or Parallels at extra cost to run Windows 7 and Windows apps. The hardware isn't as good. The price is almost twice as high. How do you justify the premium price?

    Read the article

  • Integrated Reporting Is Getting Closer

    - by Evelyn Neumayr
    By John O’Rourke, Vice President, Product Marketing, Oracle Oracle recently sponsored a webcast on CFO.com titled:  The CFO Playbook on Integrated Reporting: Integrating Sustainability into Financial Disclosures which focused on why top companies in the U.S. and overseas are incorporating sustainability content into their annual reports and other financial disclosures.  The webcast speakers, James Margolis, partner with Environmental Resources Management (ERM), a global provider of environmental, health, safety, risk and sustainability consulting services (EHSS) and Mike Wallace, Director of the Global Reporting Initiative's Focal Point USA, discussed the benefits of integrating sustainability reporting with traditional financial reporting. They noted how investors, corporate directors, lenders and most recently, the Securities and Exchange Commission, use this information to better understand, benchmark and value companies. They also talked about the November 2012 release of an Integrated Reporting Framework by the International Integrated Reporting Council (IIRC).  Read the press release and link to the framework here.  The shift towards integrated financial and sustainability reporting is gaining momentum with a number of global stock exchanges endorsing this approach in 2012.  Visit these links to listen to the webcast and download the slides. You can also view a demonstration of Oracle's solution for integrated financial and sustainability reporting. If you’re interested in learning more about this and Oracle’s other sustainability reporting solutions, click here. If you have any questions or need additional information, please feel free to contact me at john[email protected].

    Read the article

  • Extreme Portability: OpenJDK 7 and GlassFish 3.1.1 on Power Mac G5!

    - by MarkH
    Occasionally you hear someone grumble about platform support for some portion or combination of the Java product "stack". As you're about to see, this really is not as much of a problem as you might think. Our friend John Yeary was able to pull off a pretty slick feat with his vintage Power Mac G5. In his words: Using a build script sent to me by Kurt Miller, build recommendations from Kelly O'Hair, and the great work of the BSD Port team... I created a new build of OpenJDK 7 for my PPC based system using the Zero VM. The results are fantastic. I can run GlassFish 3.1.1 along with all my enterprise applications. I recently had the opportunity to pick up an old G5 for little money and passed on it. What would I do with it? At the time, I didn't think it would be more than a space-consuming novelty. Turns out...I could have had some fun and a useful piece of hardware at the same time. Maybe it's time to go bargain-hunting again. For more information about repurposing classic Apple hardware and learning a few JDK-related tricks in the process, visit John's site for the full article, available here. All the best,Mark

    Read the article

  • How to handle monetary values in PHP and MySql?

    - by Songo
    I've inherited a huge pile of legacy code written in PHP on top of a MySQL database. The thing I noticed is that the application uses doubles for storage and manipulation of data. Now I have come across numerous posts mentioning how doubles are not suited for monetary operations because of rounding errors. However, I have yet to come across a complete solution for how monetary values should be handled in PHP code and stored in a MySQL database. Is there a best practice when it comes to handling money specifically in PHP? Things I'm looking for are:
    - How should the data be stored in the database? Column type? Size?
    - How should the data be handled in normal addition, subtraction, multiplication or division?
    - When should I round the values? How much rounding is acceptable, if any?
    - Is there a difference between handling large monetary values and small ones?
    Note: a VERY simplified sample of how I might encounter money values in everyday life:
    $a = $_POST['price_in_dollars'];    // -->(ex: 25.06) will be read as a string; should it be cast to double?
    $b = $_POST['discount_rate'];       // -->(ex: 0.35) value will always be less than 1
    $valueToBeStored = $a * $b;         // --> any hint here is welcomed
    $valueFromDatabase = $row['price']; // --> price column in database could be double, decimal, ...etc.
    $priceToPrint = $valueFromDatabase * 0.25; // again, cast needed or not?
    I hope you use this sample code as a means to bring out more use cases and not take it literally, of course.
    Bonus question: if I'm to use an ORM such as Doctrine or Propel, how different will it be to use money in my code?
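    The question is about PHP, but the usual advice (keep money in an exact representation such as a DECIMAL column or integer minor units, and round only once at the boundary) is language-independent. Here is a minimal C# sketch of the integer-cents idea, given for illustration only; the Money helper and its methods are hypothetical:
    using System;
    using System.Globalization;

    // Amounts are carried as whole cents, so addition and multiplication stay exact;
    // rounding happens exactly once, when a rate (discount, tax, ...) is applied.
    public static class Money
    {
        public static long ToCents(string dollars)           // "25.06" -> 2506
        {
            return (long)Math.Round(decimal.Parse(dollars, CultureInfo.InvariantCulture) * 100m);
        }

        public static long ApplyRate(long cents, decimal rate)
        {
            return (long)Math.Round(cents * rate, MidpointRounding.AwayFromZero);
        }

        public static string ToDisplay(long cents)
        {
            return (cents / 100m).ToString("0.00", CultureInfo.InvariantCulture);
        }
    }

    public static class Example
    {
        public static void Main()
        {
            long price = Money.ToCents("25.06");              // from user input
            long discounted = Money.ApplyRate(price, 0.35m);  // 35% of the price
            Console.WriteLine(Money.ToDisplay(discounted));   // prints "8.77"
        }
    }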

    Read the article

  • See the exciting new features available for iProcurement and Sourcing with 12.1.3 Rollup Patch 14254641:R12.PRC_PF.B!

    - by user793044
    Functional area: Sourcing (Note 1499944.1, "Sourcing New Features From Procurement RUP Family R12.1.3 September Update 2012: Accept Terms and Conditions to Comply With NDA"):
    - Suppliers can now accept Terms and Conditions to comply with the buyer's Non-Disclosure Agreements (NDA).
    - The PDF generation process has been enhanced to provide faster generation of negotiation PDFs containing large amounts of data.
    Functional area: iProcurement (Note 1499911.1, "iProcurement New Features From RUP Family R12.1.3 September Update 2012: GL/Accounting Date, PO_CUSTOM_FUNDS_PKG.plb, Price and Supplier Update"):
    - Requesters can specify the GL date (encumbrance date) for each distribution against a line at the time of creating requisitions.
    - Enter an accounting date on a procurement requisition, if Dual Budgetary Control is enabled for Purchasing.
    - Choose a favorite charge account to override your default charge account, using the Preferences page.
    - Buyers can update the unit price, suggested supplier, and site details while requesting a catalog item (inventory item) that is not linked to a blanket purchase agreement.
    For new features across all the Procurement product groups and information about applying Patch 14254641, see Note 1468883.1.

    Read the article

  • GAPI output doesn't match Google Analytics website

    - by Yekver
    I have to get the main info about my Google Analytics Goals. I'm using the GAPI lib, with this code:
    <?php
    require_once 'conf.inc';
    require_once 'gapi.class.php';
    $ga = new gapi(ga_email, ga_password);
    $dimensions = array('pagePath', 'hostname');
    $metrics = array('goalCompletionsAll', 'goalConversionRateAll', 'goalValueAll');
    $ga->requestReportData(ga_profile_id, $dimensions, $metrics, '-goalCompletionsAll', '', '2012-09-07', '2012-10-07', 1, 500);
    $gaResults = $ga->getResults();
    foreach($gaResults as $result) {
        var_dump($result);
    }
    but this code's output is:
    object(gapiReportEntry)[7]
      private 'metrics' => array (size=3)
        'goalCompletionsAll' => int 12031
        'goalConversionRateAll' => float 206.93154454764
        'goalValueAll' => float 0
      private 'dimensions' => array (size=2)
        'pagePath' => string '/catalogs.php' (length=13)
        'hostname' => string 'www.example.com' (length=13)
    object(gapiReportEntry)[6]
      private 'metrics' => array (size=3)
        'goalCompletionsAll' => int 9744
        'goalConversionRateAll' => float 661.05834464043
        'goalValueAll' => float 0
      private 'dimensions' => array (size=2)
        'pagePath' => string '/price.php' (length=10)
        'hostname' => string 'www.example.com' (length=13)
    What I see on the Google Analytics website, on the Goal URLs page, for the same date range is:
    Goal Completion Location | Goal Completions | Goal Value
    1. /price.php            | 9,396            | $0.00
    2. /saloni.php           | 3,739            | $0.00
    As you can see, the outputs don't match. Why? What's wrong?

    Read the article

  • Is it worth moving from Microsoft tech to Linux, NodeJS & other open source frameworks to save money for a start-up?

    - by dormisher
    I am currently getting involved in a startup, I am the only developer involved at the moment, and the other guys are leaving all the tech decisions up to me at the moment. For my day job I work at a software house that uses Microsoft tech on a day to day basis, we utilise .NET, SqlServer, Windows Server etc. However, I realise that as a startup we need to keep costs down, and after having a brief look at the cost of hosting for Windows I was shocked to see some of the prices for a dedicated server. The cheapest I found was £100 a month. Also if the business needs to scale in the future and we end up needing multiple servers, we could end up shelling out £10's of £000's a year in SQL Server / Windows Server licenses etc. I then had a quick look at the price of Linux hosting for a dedicated server and saw the price was waaaaaay lower than windows hosting. One place was offering a machine with 2 cores for less than £20 a month. This got me thinking maybe the way to go is open source on Linux. As I write a lot of Javascript at work (I'm working on a single page backbone app at the moment), I thought maybe NodeJS and a web framework like Express would be cool to use. I then thought that instead of using SQL why not use an open source NoSQL database like MongoDB, which has great support on NodeJS? My only concern is that some of the work the application is going to do is going to be dynamically building images and various other image related stuff, i.e. stuff that is quite CPU heavy - so I'm thinking of maybe writing anything CPU heavy in C++ and consuming it as a module in Node. That's the background - but basically is Linux a good match for: Hosting a NodeJS/Express site? Compiling C++ node modules? Using a NoSQL DB like MongoDB? And is it a good idea to move to these unfamiliar technologies to save money?

    Read the article

  • Web hosting company basically forces me to use their domain name [closed]

    - by Jinx
    I've recently stumbled upon an unusual problem with one of the hosting companies, called giga-international.com. Anyway, I ordered a com.hr domain from a Croatian domain name registration company, and my client insisted on using this hosting provider, as a couple of his friends are already hosted with them. I thought something was fishy when the first result on Google for Giga International was this little forum rant instead of their webpage. When I was checking their services they listed many features etc... space available, bandwidth etc. I just wanted to check how much RAM I get for my PHP scripts, so I emailed them, and they told me that was a company secret. Seriously? Anyway, since my client still insisted on hosting with them, I bought their Webspace package. During registration I had to choose a free domain name, because I couldn't advance the registration without it. Nowhere was it said, not even in the general terms and conditions, that I wouldn't be able to change that domain name. At least not for double the price of a domain name per year. They said I can either move my domain name over to them (and pay them domain registration), or pay them 1 Euro per month for managing a DNS entry. On any previous hosting solution I was able to manage my domain names just by pointing my domain to their name servers, and this is something completely new and absurd for me. They also said that the usual approach is not possible because of security and hardware limitations. I'd like to know what you guys think about this case, and whether I should report it, and where I should report it. In short: they forced me to register a free domain name which doesn't suit my needs in order to register for their webspace package, and they refuse to change the domain name for my account until I either transfer the domain to them or pay them for DNS management, which costs double the price of the domain name per year.

    Read the article

  • UBUNTU's Network Connection Manger can't detect Huawei ETS2051 Modem device!

    - by Doctoa
    I have a modem device called Huawei ETS2051 and the Network Connection Manager can't detect it, but when I use Gnome-PPP it works fine. The problem is that when I use Gnome-PPP, apps like Ubuntu Software Center can't recognize that I am connected to the Internet, so they just act as if they are offline, while other apps like web browsers and IM clients work fine under Gnome-PPP. Anyway, what I want is to have the full Ubuntu experience by making the Network Connection Manager detect my ETS2051. I have another 3G USB modem and the Network Connection Manager detects it and it works just fine, but the Internet price for that one is high and I can't afford it, so I am counting on the ETS2051 modem for its low price and stable Internet speed that satisfies my needs. More information: Gnome-PPP is a GUI for wvdial; the ETS2051 modem uses a serial USB port; I have a Windows driver CD for the device. I have also found this question about the Software Center acting like it's offline when using wvdial, and there's this Launchpad bug. I really insist on using Ubuntu Software Center, so please no recommendations for other software managers... I also have this Genius ColorPage HR6X Slim scanner that Ubuntu can't detect, so if you are interested you can check and answer that question here...

    Read the article

  • Fusion Applications: Get Trained to Implement

    - by mseika
    Becoming an Oracle Fusion Applications Implementation Expert gives you the opportunity to gain differentiation and a competitive advantage. Oracle University can help you to get trained and then specialized. View the current course schedule for Romania and Turkey. You can choose a delivery format to suit your learning style and situation: Classroom Training in Bucharest, Romania or Istanbul, Turkey, or the travel-free options: Live Virtual Class or Training On Demand. SPECIAL OFFER: Standard prices for all Oracle Fusion Applications Training On Demand courses currently have a reduced price. This offer is valid until 31 August 2013. Your OPN discount will then be applied to this reduced price to make this learning format even more attractive.

    Read the article

  • Fusion Applications: Get Trained to Implement

    - by mseika
    Becoming an Oracle Fusion Applications Implementation Expert gives you the opportunity to gain differentiation and a competitive advantage. Oracle University can help you to get trained and then specialized. View the current course schedule for France. You can choose a delivery format to suit your learning style and situation: Classroom Training in Paris Colombes, or the travel-free options: Live Virtual Class or Training On Demand. SPECIAL OFFER: Standard prices for all Oracle Fusion Applications Training On Demand courses currently have a reduced price. This offer is valid until 31 August 2013. Your OPN discount will then be applied to this reduced price to make this learning format even more attractive.

    Read the article

  • Oracle regains the #1 UNIX Shipments Marketshare

    - by EricReid-Oracle
    Oracle has regained the #1 UNIX Server Shipments spot! According to IDC, Oracle’s share was 33.6%, up from 32.7% in the year ago period, and 32.2% in C4Q13:
    IDC: WW Unix Unit Shipments, Share, Growth
    Vendor       | 2013Q1 | Share  | 2013Q4 | Share  | 2014Q1 | Share  | Sequential Growth | Y/Y Growth
    Oracle       | 10,141 | 32.7%  | 10,294 | 32.2%  |  8,355 | 33.6%  | -18.8%            | -17.6%
    IBM          | 10,203 | 32.9%  | 11,533 | 36.0%  |  6,919 | 27.8%  | -40.0%            | -32.2%
    HP           |  7,046 | 22.7%  |  6,786 | 21.2%  |  6,549 | 26.4%  |  -3.5%            |  -7.1%
    Fujitsu      |  1,174 |  3.8%  |  1,141 |  3.6%  |  1,069 |  4.3%  |  -6.3%            |  -8.9%
    Dell         |    565 |  1.8%  |    499 |  1.6%  |    519 |  2.1%  |   4.0%            |  -8.2%
    NEC          |     69 |  0.2%  |     81 |  0.3%  |     63 |  0.3%  | -22.5%            |  -9.4%
    Others       |  1,804 |  5.8%  |  1,684 |  5.3%  |  1,380 |  5.6%  | -18.1%            | -23.5%
    Total Market | 31,002 | 100.0% | 32,018 | 100.0% | 24,854 | 100.0% | -22.4%            | -19.8%
    While the UNIX server space is currently undergoing some contraction (on a pure numbers basis), this can be traced in part to an overall consolidation trend, due to the greatly-increased price-performance of our systems. Consider this: one SPARC T5-4 system has 1/16th the number of sockets and 1/192nd the number of cores of the previous high-end M9000-64 system -- all at 5X the price-performance. SPARC. Solaris. Nuff' said.

    Read the article

  • Feedback on "market manipulation", a peripheral game mechanic for a satirical MMO

    - by BerndBrot
    This question asks for feedback on a specific game-mechanic. Since there is not one right feedback on a game mechanic, I tried to provide enough context and guidelines to still make it possible for users to rate answers and to accept an answer as the best answer (following these criteria from Writer.SE's meta website). Please comment if you have any suggestions on how I could improve the question in that regard. So, let's begin with the game itself and some of its elements which are relevant for this question. Context I'm working on a satirical, text-based multiplayer adventure and role-playing game set in modern-day London. The game resolves around the concept of sin and features a myriad of (venomous) allusions to all the things that go wrong in this world. Players can choose between character classes like bullshit artist (consultant), bankster, lawyer, mobster, celebrity, politician, etc. In order to complete the game, the player has to live so sinfully with regard to any of the seven deadly sins that a demon is willing to offer them a contract of sponsorship. On their quest to live a sinful live, characters explore more and more locations of modern-day London (on a GoogleMap), fight "monsters" like insurance sales agents or Jehovah's Witnesses, and complete quests, like building a PowerPoint presentation out of marketing buzz words or keeping up a number of substance abuse effects in order to progress on the gluttony path. Battles are turn based with both combatants having a deck of cards, with which they try to make their enemy give in to temptations of all sorts. Tempted enemies sometimes become contacts (an item drop mechanic), which can be exploited for various benefits, depending on their area of influence (finance, underworld, bureaucracy, etc.), level of influence, and kind of sway that the player has over them (bribed, seduced, threatened, etc.) Once a contract has been exploited, the player loses that contact. Most actions require turns. Turns are limited, but refill each day. Criteria A number of peripheral game mechanics are supposed to represent real world abuses and mischief in a humorous way integrate real world data and events to strengthen the feeling of relevance of the game's humor with regard to real world problems add fun ways of interacting with other players add ways for players to express themselves through game-play Market manipulation is one such peripheral game mechanic and should fulfill all of these goals. Market manipulation This is my initial design of the mechanic: Players can enter the London Stock Exchange (LSE) (without paying a turn) LSE displays the stock prices of a number of companies in industries like weapons or tobacco as well as some derivatives based on wheat and corn. The stock prices are calculated based on the actual stock prices of these companies and derivatives (in real time) any market manipulations that were conducted by the players any market corrections of the system Players can buy and sell shares with cash, a resource in the game, at current in-game market value (without paying a turn). Players can manipulate the market, i.e. let the price of a share either rise or fall, by some amount, over a certain period of time. Manipulating the market requires 1 turn A contact in the financial sector (see above). The higher the level of influence of the contact, the stronger the effect of the manipulation on the stock price, and/or the shorter it takes for the manipulation to manifest itself. 
Market manipulation also adds a crime to the player's record. (There are a multitude of ways to take care of that, but it is still another "cost" of market manipulations.) The system continuously corrects market manipulations by letting the in-game prices converge towards their real world counterparts at a rate of 2% of the difference between the two per hour. Because of this market correction mechanism, pushing up prices (and screwing down prices) becomes increasingly difficult the higher (lower) the price already is. Whenever food prices reach a certain level, in-game stories are posted about hunger catastrophes happening somewhere far, far away (maybe with links to real world news stories). Whenever a player sells a certain number of shares with a sufficiently high margin, they are mentioned in that day's in-game financial news. Since the number of stock options is very limited, players will inevitably collide in their efforts to manipulate the market in their favor. Hopefully, it will also be a fun side-arena for guilds and covenants to fight each other. Question(s) What do you think of this mechanism given the criteria for peripheral game mechanics that I specified for my game? Do you have any ideas how the mechanic could be improved with regard to these criteria (or otherwise)? Could it be improved to allow for more expressive game-play, or involve an allusion to some other real world madness (like short selling, leveraging, or some other banking magic)? Are there any game-theoretic problems with this mechanic, like maybe certain dominant individual strategies that, collectively, lead to every player profiting and thus eliminating the idea of market manipulation PVP? Also, if you like (or dislike) this question, feel free to participate in the discussion on GDSE meta: "Should we be more lax with regard to SE's question/answer format to make game design questions possible?"
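    The market-correction rule described above (in-game prices drift toward their real-world counterparts by 2% of the gap per hour) is simple to express in code. The following is only a sketch of that one rule; the type and method names are hypothetical and not part of the proposed design:
    using System;

    public class StockQuote
    {
        public decimal InGamePrice { get; set; }
        public decimal RealWorldPrice { get; set; }
    }

    public static class MarketCorrection
    {
        // Moves the in-game price toward the real-world price by 2% of the remaining gap
        // per elapsed hour, so manipulated prices decay back toward reality over time.
        public static void Apply(StockQuote quote, double hoursElapsed)
        {
            decimal gap = quote.RealWorldPrice - quote.InGamePrice;
            decimal factor = (decimal)(1 - Math.Pow(0.98, hoursElapsed));
            quote.InGamePrice += gap * factor;
        }
    }
    Calling Apply with hoursElapsed = 1 gives exactly the 2% step; fractional or larger intervals compound the same rule smoothly, which also makes the "pushing prices further gets harder" property easy to see.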

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect Cloud Computing, along with John Duimovich IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder.Their focus was on the challenges to make Java more efficient and productive given the hardware and software environments of 2012. “One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it’s a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is to then automate that deployment so that the complexity of infrastructure can be by-passed and developers can live in a simpler world where the cloud can automatically configure the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”John Duimovich on Improvements in JavaMcGee then brought onstage IBM’s Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently.Duimovich explained that, “When you run lots of copies of Java in the cloud or any hypervisor virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code, read only artefacts in a cache that is sharable across hypervisors.” By putting JIT code in ahead of time, he explained that the application server will use 20% less memory and operate 30% faster.  He described another example of how the JVM allows for the maximum amount of sharing that manages the tenants and file sockets and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM’s Liberty Profile and lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud.Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that that they have 8 and then 4 and then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. 
You may have 10 instances of an operation system and you may need to reallocate memory, " explained Duimovich.  He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and better use resources based on information from the hypervisors.Java Challenges in Hardware and SoftwareMcGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage – in developments such as the speed of SSD, the exploitation of high-speed, low-latency networking, and recent developments such as storage-class memory, and non-volatile main memory. “So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware,” said McGee.McGee discussed transactional messaging applications where developers send messages transactionally persist a message to storage, something traditionally done by backing messages on spinning disks, something mostly outdated. “Now,” he pointed out, “we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI express-based flash memory device, it is still Flash but put on a PCI express bus on a card closer to the CPU. This way I get 300,000 messages a second and 25% improvement in latency.” McGee’s central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. “We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology,” said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized, and pre-integrated together. McGee closed IBM’s engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.

    Read the article

  • Highlights from the Oracle Customer Experience Summit @ OpenWorld

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development The Oracle Customer Experience Summit was the first-ever event covering the full breadth of Oracle's CX portfolio -- Marketing, Sales, Commerce, and Service. The purpose of the Summit was to articulate the customer experience imperative and to showcase the suite of Oracle products that can help our customers create the best possible customer experience. This topic has always been a very important one, but now that there are so many alternative companies to do business with and because people have such public ways to voice their displeasure, it's necessary for vendors to have multiple listening posts in place to gauge consumer sentiment. They need to know what is going on in real time and be able to react quickly to turn negative situations into positive ones. Those can then be shared in a social manner to enhance the brand and turn the customer into a repeat customer. The Summit was focused on Oracle's portfolio of products and entirely dedicated to customers who are committed to building great customer experiences within their businesses. Rather than DBAs, the attendees were business people looking to collaborate with other like-minded experts and find out how Oracle can help in terms of technology, best practices, and expertise. The event was at the Westin St. Francis Hotel in San Francisco as part of Oracle OpenWorld. We had eight hundred people attend, which was great for the first year. Next year, there's no doubt in my mind, we can raise that number to 5,000. Alignment and Logic Oracle's Customer Experience portfolio is made up of a combination of acquired and organic products owned by many people who are new to Oracle. We include homegrown Fusion CRM, as well as RightNow, Inquira, OPA, Vitrue, ATG, Endeca, and many others. The attendees knew of the acquisitions, so naturally they wanted to see how the products all fit together and hear the logic behind the portfolio. To tell them about our alignment, we needed to be aligned. To accomplish that, a cross functional team at Oracle agreed on the messaging so that every single Oracle presenter could cover the big picture before going deep into a product or topic. Talking about the full suite of products in one session produced overflow value for other products. And even though this internal coordination was a huge effort, everyone saw the value for our customers and for our long-term cooperation and success. Keynotes, Workshops, and Tents of Innovation We scored by having Seth Godin as our keynote speaker ? always provocative and popular. The opening keynote was a session orchestrated by Mark Hurd, Anthony Lye, and me. Mark set the stage by giving real-world examples of bad customer experiences, Anthony clearly articulated the business imperative for addressing these experiences, and I brought it all to life by taking the audience around the Customer Lifecycle and showing demos and videos, with partners included at each of the stops around the lifecycle. Brian Curran, a VP for RightNow Product Strategy, presented a session that was in high demand called The Economics of Customer Experience. People loved hearing how to build a business case and justify the cost of building a better customer experience. John Kembel, another VP for RightNow Product Strategy, held a workshop that customers raved about. 
It was based on the journey mapping methodology he created, which is a way to talk to customers about where they want to make improvements to their customers' experiences. He divided the audience into groups led by facilitators. Each person had the opportunity to engage with experts and peers and construct some real takeaways. From left to right: Brian Curran, John Kembel, Seth Godin, and George Kembel The conference hotel was across from Union Square so we used that space to set up Innovation Tents. During the day we served lunch in the tents and partners showed their different innovative ideas. It was very interesting to see all the technologies and advancements. It also gave people a place to mix and mingle and to think about the fringe of where we could all take these ideas. Product Portfolio Plus Thought Leadership Of course there is always room for improvement, but the feedback on the format of the conference was positive. Ninety percent of the sessions had either a partner or a customer teamed with an Oracle presenter. The presentations weren't dry, one-way information dumps, but more interactive. I just followed up with a CEO who attended the conference with his Head of Marketing. He told me that they are using John Kembel's journey mapping methodology across the organization to pull people together. This sort of thought leadership in these highly competitive areas gives Oracle permission to engage around the technology. We have to differentiate ourselves and it's harder to do on the product side because everyone looks the same on paper. But on thought leadership ? we can, and did, take some really big steps. David VapGroup Vice PresidentOracle Applications Product Development

    Read the article

  • Silverlight Tree View with Multiple Levels

    - by psheriff
    There are many examples of the Silverlight Tree View that you will find on the web, however, most of them only show you how to go to two levels. What if you have more than two levels? This is where understanding exactly how the Hierarchical Data Templates works is vital. In this blog post, I am going to break down how these templates work so you can really understand what is going on underneath the hood. To start, let’s look at the typical two-level Silverlight Tree View that has been hard coded with the values shown below: <sdk:TreeView>  <sdk:TreeViewItem Header="Managers">    <TextBlock Text="Michael" />    <TextBlock Text="Paul" />  </sdk:TreeViewItem>  <sdk:TreeViewItem Header="Supervisors">    <TextBlock Text="John" />    <TextBlock Text="Tim" />    <TextBlock Text="David" />  </sdk:TreeViewItem></sdk:TreeView> Figure 1 shows you how this tree view looks when you run the Silverlight application. Figure 1: A hard-coded, two level Tree View. Next, let’s create three classes to mimic the hard-coded Tree View shown above. First, you need an Employee class and an EmployeeType class. The Employee class simply has one property called Name. The constructor is created to accept a “name” argument that you can use to set the Name property when you create an Employee object. public class Employee{  public Employee(string name)  {    Name = name;  }   public string Name { get; set; }} Finally you create an EmployeeType class. This class has one property called EmpType and contains a generic List<> collection of Employee objects. The property that holds the collection is called Employees. public class EmployeeType{  public EmployeeType(string empType)  {    EmpType = empType;    Employees = new List<Employee>();  }   public string EmpType { get; set; }  public List<Employee> Employees { get; set; }} Finally we have a collection class called EmployeeTypes created using the generic List<> class. It is in the constructor for this class where you will build the collection of EmployeeTypes and fill it with Employee objects: public class EmployeeTypes : List<EmployeeType>{  public EmployeeTypes()  {    EmployeeType type;            type = new EmployeeType("Manager");    type.Employees.Add(new Employee("Michael"));    type.Employees.Add(new Employee("Paul"));    this.Add(type);     type = new EmployeeType("Project Managers");    type.Employees.Add(new Employee("Tim"));    type.Employees.Add(new Employee("John"));    type.Employees.Add(new Employee("David"));    this.Add(type);  }} You now have a data hierarchy in memory (Figure 2) which is what the Tree View control expects to receive as its data source. Figure 2: A hierachial data structure of Employee Types containing a collection of Employee objects. To connect up this hierarchy of data to your Tree View you create an instance of the EmployeeTypes class in XAML as shown in line 13 of Figure 3. The key assigned to this object is “empTypes”. This key is used as the source of data to the entire Tree View by setting the ItemsSource property as shown in Figure 3, Callout #1. Figure 3: You need to start from the bottom up when laying out your templates for a Tree View. The ItemsSource property of the Tree View control is used as the data source in the Hierarchical Data Template with the key of employeeTypeTemplate. In this case there is only one Hierarchical Data Template, so any data you wish to display within that template comes from the collection of Employee Types. The TextBlock control in line 20 uses the EmpType property of the EmployeeType class. 
You specify the name of the Hierarchical Data Template to use in the ItemTemplate property of the Tree View (Callout #2). For the second (and last) level of the Tree View control you use a normal <DataTemplate> with the name of employeeTemplate (line 14). The Hierarchical Data Template in lines 17-21 sets its ItemTemplate property to the key name of employeeTemplate (Line 19 connects to Line 14). The source of the data for the <DataTemplate> needs to be a property of the EmployeeTypes collection used in the Hierarchical Data Template. In this case that is the Employees property. In the Employees property there is a “Name” property of the Employee class that is used to display the employee name in the second level of the Tree View (Line 15). What is important here is that your lowest level in your Tree View is expressed in a <DataTemplate> and should be listed first in your Resources section. The next level up in your Tree View should be a <HierarchicalDataTemplate> which has its ItemTemplate property set to the key name of the <DataTemplate> and the ItemsSource property set to the data you wish to display in the <DataTemplate>. The Tree View control should have its ItemsSource property set to the data you wish to display in the <HierarchicalDataTemplate> and its ItemTemplate property set to the key name of the <HierarchicalDataTemplate> object. It is in this way that you get the Tree View to display all levels of your hierarchical data structure. Three Levels in a Tree View Now let’s expand upon this concept and use three levels in our Tree View (Figure 4). This Tree View shows that you now have EmployeeTypes at the top of the tree, followed by a small set of employees that themselves manage employees. This means that the EmployeeType class has a collection of Employee objects. Each Employee class has a collection of Employee objects as well. Figure 4: When using 3 levels in your TreeView you will have 2 Hierarchical Data Templates and 1 Data Template. The EmployeeType class has not changed at all from our previous example. However, the Employee class now has one additional property as shown below: public class Employee{  public Employee(string name)  {    Name = name;    ManagedEmployees = new List<Employee>();  }   public string Name { get; set; }  public List<Employee> ManagedEmployees { get; set; }} The next thing that changes in our code is the EmployeeTypes class. The constructor now needs additional code to create a list of managed employees. Below is the new code. public class EmployeeTypes : List<EmployeeType>{  public EmployeeTypes()  {    EmployeeType type;    Employee emp;    Employee managed;     type = new EmployeeType("Manager");    emp = new Employee("Michael");    managed = new Employee("John");    emp.ManagedEmployees.Add(managed);    managed = new Employee("Tim");    emp.ManagedEmployees.Add(managed);    type.Employees.Add(emp);     emp = new Employee("Paul");    managed = new Employee("Michael");    emp.ManagedEmployees.Add(managed);    managed = new Employee("Sara");    emp.ManagedEmployees.Add(managed);    type.Employees.Add(emp);    this.Add(type);     type = new EmployeeType("Project Managers");    type.Employees.Add(new Employee("Tim"));    type.Employees.Add(new Employee("John"));    type.Employees.Add(new Employee("David"));    this.Add(type);  }} Now that you have all of the data built in your classes, you are now ready to hook up this three-level structure to your Tree View. Figure 5 shows the complete XAML needed to hook up your three-level Tree View. 
You can see in the XAML that there are now two Hierarchical Data Templates and one Data Template. Again you list the Data Template first since that is the lowest level in your Tree View. The next Hierarchical Data Template listed is the next level up from the lowest level, and finally you have a Hierarchical Data Template for the first level in your tree. You need to work your way from the bottom up when creating your Tree View hierarchy. XAML is processed from the top down, so if you attempt to reference a XAML key name that is below where you are referencing it from, you will get a runtime error. Figure 5: For three levels in a Tree View you will need two Hierarchical Data Templates and one Data Template. Each Hierarchical Data Template uses the previous template as its ItemTemplate. The ItemsSource of each Hierarchical Data Template is used to feed the data to the previous template. This is probably the most confusing part about working with the Tree View control. You are expecting the content of the current Hierarchical Data Template to use the properties set in the ItemsSource property of that template. But you need to look to the template lower down in the XAML to see the source of the data as shown in Figure 6. Figure 6: The properties you use within the Content of a template come from the ItemsSource of the next template in the resources section. Summary Understanding how to put together your hierarchy in a Tree View is simple once you understand that you need to work from the bottom up. Start with the bottom node in your Tree View and determine what that will look like and where the data will come from. You then build the next Hierarchical Data Template to feed the data to the previous template you created. You keep doing this for each level in your Tree View until you get to the last level. The data for that last Hierarchical Data Template comes from the ItemsSource in the Tree View itself. NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “Silverlight TreeView with Multiple Levels” from the drop down list.

    Read the article

  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

In C# 3.0 (.NET Framework 3.5), Microsoft introduced the concept of anonymous types, which provide a way to create quick, compiler-generated types at the point of instantiation. These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope.

Creating an Anonymous Type

In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties based on their names, number, types, and order given at initialization. In addition to just holding these properties, it is also given appropriate overridden implementations for Equals() and GetHashCode() that take into account all of the properties to correctly perform property comparisons and hashing. Also overridden is an implementation of ToString() which makes it easy to display the contents of an anonymous type instance in a fairly concise manner.

To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type. So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this:

var point = new { X = 13, Y = 7 };

Note the similarity between anonymous type initialization and regular initialization. The main difference is that the compiler generates the type name and the properties (as readonly) based on the names and order provided, inferring their types from the expressions they are assigned to.

It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type. This is important, because while these two instances share the same anonymous type:

// same names, types, and order
var point1 = new { X = 13, Y = 7 };
var point2 = new { X = 5, Y = 0 };

These similar ones do not:

var point3 = new { Y = 3, X = 5 };        // different order
var point4 = new { X = 3, Y = 5.0 };      // different type for Y
var point5 = new { MyX = 3, MyY = 5 };    // different names
var point6 = new { X = 1, Y = 2, Z = 3 }; // different count

Limitations on Property Initialization Expressions

The expression for a property in an anonymous type initialization cannot be null (though it can evaluate to null) or an anonymous function. For example, the following are illegal:

// Null can't be used directly. Null reference of what type?
var cantUseNull = new { Value = null };

// Anonymous methods cannot be used.
var cantUseAnonymousFxn = new { Value = () => Console.WriteLine("Can't.") };

Note that the restriction on null is just that you can't use it directly as the expression, because otherwise how would it be able to determine the type? You can, however, use it indirectly, such as by assigning a typed variable whose value is null, or by casting null to a specific type:

string str = null;
var fineIndirectly = new { Value = str };
var fineCast = new { Value = (string)null };

All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable.
In these cases, when a field, property, or variable is used alone and you don't specify a property name to assign to it, the new property will have the same name. For example:

int variable = 42;

// creates two properties named variable and Now
var implicitProperties = new { variable, DateTime.Now };

Is the same type as:

var explicitProperties = new { variable = variable, Now = DateTime.Now };

But this only works if you are using an existing field, variable, or property directly as the expression. If you use a more complex expression then the name cannot be inferred:

// can't infer the name variable from variable * 2, must name explicitly
var wontWork = new { variable * 2, DateTime.Now };

In the example above, since we typed variable * 2, it is no longer just a variable and thus we would have to assign the property a name explicitly.

ToString() on Anonymous Types

One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except with actual values instead of expressions, as appropriate, of course). For example, if you had:

var point = new { X = 13, Y = 42 };

And then print it out:

Console.WriteLine(point.ToString());

You will get:

{ X = 13, Y = 42 }

While this isn't necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy-to-read format.

Comparing Anonymous Type Instances

Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes. For example, if we had the following 3 points:

var point1 = new { X = 1, Y = 2 };
var point2 = new { X = 1, Y = 2 };
var point3 = new { Y = 2, X = 1 };

If we compare point1 and point2 we'll see that Equals() returns true because the overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same. In addition, because all equal objects should have the same hash code, we'll see that the hash codes evaluate to the same as well:

// true, same type, same values
Console.WriteLine(point1.Equals(point2));

// true, equal anonymous type instances always have same hash code
Console.WriteLine(point1.GetHashCode() == point2.GetHashCode());

However, if we compare point2 and point3 we get false. Even though the names, types, and values of the properties are the same, the order is not, thus they are two different types and cannot be compared (and thus return false). And, since they are not equal objects (even though they have the same values) there is a good chance their hash codes are different as well (though not guaranteed):

// false, different types
Console.WriteLine(point2.Equals(point3));

// quite possibly false (was false on my machine)
Console.WriteLine(point2.GetHashCode() == point3.GetHashCode());

Using Anonymous Types

Now that we've created instances of anonymous types, let's actually use them. The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type.
The main thing, once again, to keep in mind is that the properties are readonly, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable – for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance:

var point = new { X = 13, Y = 42 };

We can get the properties as you'd expect:

Console.WriteLine("The point is: ({0},{1})", point.X, point.Y);

But we cannot alter the property values:

// compiler error, properties are readonly
point.X = 99;

Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope. The only real choices are to pass them as object or dynamic. But really that is not the intention of using anonymous types. If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Object, i.e. a class that contains just properties to hold data with little or no business logic) instead.

Given that, why use them at all? Couldn't you always just create a POCO to represent every anonymous type you needed? Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to when to use anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type – but when that result is more long-lived and used outside of the current scope, consider a POCO instead.

So what do we mean by intermediate results in a local context? Well, a classic example would be filtering down results from a LINQ expression. For example, let's say we had a List<Transaction>, where Transaction is defined something like:

public class Transaction
{
    public string UserId { get; set; }
    public DateTime At { get; set; }
    public decimal Amount { get; set; }
    // ...
}

And let's say we had this data in our List<Transaction>:

var transactions = new List<Transaction>
{
    new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m },
    new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m },
    new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m },
    new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m },
    new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m },
    new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m },
    new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m },
    new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m },
    new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m },
};

So let's say we wanted to get the transactions for each day for each user. That is, for each day we'd want to see the transactions each user performed. We could do this very simply with a nice LINQ expression, without the need to create any POCOs:

// group the transactions based on an anonymous type with properties UserId and Date:
var byUserAndDay = transactions
    .GroupBy(tx => new { tx.UserId, tx.At.Date })
    .OrderBy(grp => grp.Key.Date)
    .ThenBy(grp => grp.Key.UserId);

Now, those of you who have attempted to use custom classes as a grouping type before (such as with GroupBy(), Distinct(), etc.)
may have discovered the hard way that LINQ gets a lot of its speed by utilizing not only Equals(), but also GetHashCode() on the type you are grouping by. Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations or you won't get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference-equality and reference-identity based, respectively).

As we said before, it turns out that anonymous types already do these critical overrides for you. This makes them even more convenient to use! Instead of creating a small POCO to handle this grouping, and then having to implement a custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties.

Now, we can look at our results:

foreach (var group in byUserAndDay)
{
    // the group's Key is an instance of our anonymous type
    Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date);

    // each grouping contains a sequence of the items.
    foreach (var tx in group)
    {
        Console.WriteLine("\t{0}", tx.Amount);
    }
}

And see:

Jaime on 06/18/2012 did:
    -100.00
    300.00

John on 06/19/2012 did:
    300.00

Jim on 06/20/2012 did:
    900.00

Jane on 06/21/2012 did:
    200.00
    -50.00

Jim on 06/21/2012 did:
    2200.00
    -1100.00

John on 06/21/2012 did:
    -10.00

Again, sure we could have just built a POCO to do this, given it an appropriate Equals() and GetHashCode() method, but that would have bloated our code with so many extra lines and been more difficult to maintain if the properties change.

Summary

Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result. While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and use intermediate values for further expressions and analysis.

Anonymous types are defined by the compiler based on the number, type, names, and order of properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()), which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc.

Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ
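As an extra illustration (not part of the original post) of why the anonymous grouping key above is so convenient, here is a hypothetical POCO equivalent; the UserDayKey name and members are invented for this sketch, and the point is how much ceremony the hand-written Equals() and GetHashCode() add:

// Hypothetical POCO alternative to the anonymous grouping key used above.
// Unlike the anonymous type, it must implement Equals() and GetHashCode()
// itself for GroupBy() to treat equal keys as the same group.
public class UserDayKey
{
    public string UserId { get; set; }
    public DateTime Date { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as UserDayKey;
        return other != null && UserId == other.UserId && Date == other.Date;
    }

    public override int GetHashCode()
    {
        // simple combined hash; adequate for a sketch
        return (UserId ?? string.Empty).GetHashCode() ^ Date.GetHashCode();
    }
}

With this class the grouping becomes transactions.GroupBy(tx => new UserDayKey { UserId = tx.UserId, Date = tx.At.Date }), noticeably more code than the one-line anonymous key, which is exactly the trade-off the article describes.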

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >